At last, the final part. I ported my trained model to the development board. The compiled MSSD example already provides a way to load other models; for convenience, I simply pointed the default model files inside the original program at my own, which saves typing a long command line (a sketch of that change follows the logs below).
Let's look at the results first; they are the more interesting part.
```
/home/openailab/tengine/examples/build/mobilenet_ssd/MSSD
proto file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.prototxt by default
model file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.caffemodel by default
--------------------------------------
repeat 1 times, avg time per run is 214.362 ms
detect result num: 2
banana :98%
BOX:( 143.936 , 145.889 ),( 299.282 , 299.525 )
pine :85%
BOX:( 36.6145 , -1.68488 ),( 172.289 , 299.669 )
======================================
[openailab@localhost mobilenet_ssd]$ ./MSSD -i ../../../tests/images/pine.jpg
/home/openailab/tengine/examples/build/mobilenet_ssd/MSSD
proto file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.prototxt by default
model file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.caffemodel by default
--------------------------------------
repeat 1 times, avg time per run is 212.762 ms
detect result num: 1
pine :100%
BOX:( 16.2116 , 70.2431 ),( 473.994 , 249.246 )
======================================
proto file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.prototxt by default
model file not specified,using /home/openailab/tengine/models/MobileNetSSD_bn_upgraded.caffemodel by default
--------------------------------------
repeat 1 times, avg time per run is 227.734 ms
detect result num: 5
peach :86%
BOX:( 134.725 , 204.476 ),( 480.3 , 554.351 )
fuji_apple :80%
BOX:( 463.516 , 225.621 ),( 821.036 , 561.508 )
======================================
```
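For reference, here is a minimal sketch of the edit described at the top: when no `-p`/`-m` options are passed, the mobilenet_ssd example falls back to hard-coded default paths (that is where the "not specified … by default" lines above come from), so it is enough to repoint those defaults at the retrained model and rebuild. The macro names and the helper function below are illustrative; check them against the mssd.cpp in your own Tengine checkout.

```cpp
// Sketch of the change in examples/mobilenet_ssd/mssd.cpp: repoint the
// fallback paths at the retrained fruit-detection model so a plain ./MSSD
// run loads it automatically. Names follow my copy of the example and may
// differ between Tengine versions.
#include <iostream>
#include <string>

#define DEF_PROTO "models/MobileNetSSD_bn_upgraded.prototxt"
#define DEF_MODEL "models/MobileNetSSD_bn_upgraded.caffemodel"

// Fallback used when no -p/-m command-line options are supplied; wrapped in a
// hypothetical helper here for illustration. This is what prints the
// "not specified ... by default" lines in the logs above.
static void use_default_models(const std::string& root_path,
                               std::string& proto_file, std::string& model_file)
{
    if (proto_file.empty())
    {
        proto_file = root_path + DEF_PROTO;
        std::cout << "proto file not specified,using " << proto_file << " by default\n";
    }
    if (model_file.empty())
    {
        model_file = root_path + DEF_MODEL;
        std::cout << "model file not specified,using " << model_file << " by default\n";
    }
}
```

After rebuilding the example, a plain `./MSSD` (or `./MSSD -i <image>`) run loads the retrained model, which is exactly what the logs show.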
The command line is shown above as well. Setting aside the accuracy, which comes down to how I trained the model, the overall experience was quite good. Technical support said that performance would be even better with the Tengine that ships with the EAIDK.
I am not very familiar with C++ on Linux, so there is plenty of room for improvement; that takes sustained exploration, thought, and above all debugging. It would be entirely feasible to build a complete application on top of this Tengine.
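To give a concrete idea of what such an application could look like, here is a minimal sketch of a standalone inference program against the Tengine C API (1.x-era function names from tengine_c_api.h). The model paths, the 300x300 input, and the [class, score, x0, y0, x1, y1] output layout are assumptions carried over from the MobileNet-SSD example; image loading, preprocessing, and most error handling are omitted.

```cpp
// Minimal sketch of a standalone Tengine application: load the Caffe model,
// run one inference, and dump the raw SSD detections. Image loading and
// preprocessing are intentionally left out (the input buffer is zero-filled).
#include <cstdio>
#include <vector>
#include "tengine_c_api.h"

int main()
{
    // Assumed paths: the same defaults as the MSSD runs above.
    const char* proto_file = "/home/openailab/tengine/models/MobileNetSSD_bn_upgraded.prototxt";
    const char* model_file = "/home/openailab/tengine/models/MobileNetSSD_bn_upgraded.caffemodel";

    if (init_tengine() < 0)
        return 1;

    graph_t graph = create_graph(nullptr, "caffe", proto_file, model_file);
    if (graph == nullptr)
    {
        std::printf("create_graph failed\n");
        return 1;
    }

    // MobileNet-SSD expects a 300x300, 3-channel input in NCHW layout.
    const int img_h = 300, img_w = 300;
    std::vector<float> input(3 * img_h * img_w, 0.f);   // fill with preprocessed pixels
    int dims[4] = {1, 3, img_h, img_w};

    tensor_t input_tensor = get_graph_input_tensor(graph, 0, 0);
    set_tensor_shape(input_tensor, dims, 4);
    prerun_graph(graph);                                 // allocate resources

    set_tensor_buffer(input_tensor, input.data(), (int)(input.size() * sizeof(float)));
    run_graph(graph, 1);                                 // blocking run

    // The detection output is assumed to be rows of [class, score, x0, y0, x1, y1].
    tensor_t output_tensor = get_graph_output_tensor(graph, 0, 0);
    int out_dims[4] = {0};
    get_tensor_shape(output_tensor, out_dims, 4);
    const float* out = (const float*)get_tensor_buffer(output_tensor);
    for (int i = 0; i < out_dims[1]; ++i, out += 6)
        std::printf("class %d : %.0f%%\n", (int)out[0], out[1] * 100.f);

    release_graph_tensor(output_tensor);
    release_graph_tensor(input_tensor);
    postrun_graph(graph);
    destroy_graph(graph);
    release_tengine();
    return 0;
}
```

Linked against libtengine and run on the board, this is roughly the skeleton that the MSSD example wraps with OpenCV-based image loading and box drawing.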