江流儿_DRCNM · July 29, 2021

[Zhouyi AIPU Simulation] R329 Development Board Simulation

Background: the R329 development board can be applied for free of charge, but you have to complete a software simulation first, so this post records the simulation process~

I originally wanted to use a MobileNetV3-SSD model I had trained before, but some of its ops turned out to be unsupported. The official explanation is that certain ops are not yet supported and the post-processing can be split out and handled outside the model. Since I have plenty of models on hand, I instead adapted a MobileNetV2 model from one of my earlier private blog posts and ported it to the R329.

(1) Following the official instructions: download the file, extract it, and load it into Docker

Option 2: download the image file from Baidu Cloud (the compressed archive is about 2.9 GB; about 5.3 GB after extraction)
Link: https://pan.baidu.com/s/1wNrGuc6k-cfCSalp_ohdtQ
Extraction code: r8ge
gunzip zhouyi_docker.tar.gz
sudo docker load --input zhouyi_docker.tar

First make sure Docker is installed on your PC; the loading process then looks like this~

ubuntu@ubuntu:~/TESTINT8YOL5$ sudo docker load --input zhouyi_docker.tar
[sudo] password for ps:
6effd95c47f2: Loading layer [==================================================>]  65.61MB/65.61MB
4ad5df11bd98: Loading layer [==================================================>]  15.87kB/15.87kB
a2d674c9d7ed: Loading layer [==================================================>]  3.072kB/3.072kB
ed91bf4bfffa: Loading layer [==================================================>]  419.3MB/419.3MB
96fe5b649559: Loading layer [==================================================>]  13.02MB/13.02MB
07e36a7cbd23: Loading layer [==================================================>]  3.072kB/3.072kB
8c1d550eee8f: Loading layer [==================================================>]  687.9MB/687.9MB
96e2b5e250d9: Loading layer [==================================================>]  4.096kB/4.096kB
c2fd32564963: Loading layer [==================================================>]  4.096kB/4.096kB
81dd3a3cea3f: Loading layer [==================================================>]  172.1MB/172.1MB
a93455f66176: Loading layer [==================================================>]  176.1kB/176.1kB
ff4b8b6451e5: Loading layer [==================================================>]  3.584kB/3.584kB
0df6a038f567: Loading layer [==================================================>]   2.56kB/2.56kB
03b3e209faf1: Loading layer [==================================================>]  2.048kB/2.048kB
6ee41f09e5cc: Loading layer [==================================================>]  2.024MB/2.024MB
e31f35b85ac8: Loading layer [==================================================>]  35.33kB/35.33kB
f786f622da31: Loading layer [==================================================>]   29.7kB/29.7kB
b46c203abbf5: Loading layer [==================================================>]  3.072kB/3.072kB
8d1d9238c141: Loading layer [==================================================>]  1.347MB/1.347MB
5d96bab78113: Loading layer [==================================================>]  12.29kB/12.29kB
d448d9dff93c: Loading layer [==================================================>]  4.403GB/4.403GB
Loaded image: zepan/zhouyi:latest

(2) Inspect the Docker environment and the demo examples

ubuntu@ubuntu:~/TESTINT8YOL5$ sudo docker run -i -t zepan/zhouyi  /bin/bash

________                               _______________
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/


WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.

To avoid this, run the container by specifying your user's userid:

$ docker run -u $(id -u):$(id -g) args...

root@efd0499c2a6e:/tf# ls
tensorflow-tutorials
root@efd0499c2a6e:/tf# cd tensorflow-tutorials/
root@efd0499c2a6e:/tf/tensorflow-tutorials# ls
README.md  basic_classification.ipynb  basic_text_classification.ipynb
root@efd0499c2a6e:/tf/tensorflow-tutorials# cd ..
root@efd0499c2a6e:/tf# python3
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.15.3'
>>> exit()
root@efd0499c2a6e:/tf# ls
tensorflow-tutorials
root@efd0499c2a6e:/tf# cd
root@efd0499c2a6e:~# cd demos/
root@efd0499c2a6e:~/demos# ls
pb  tflite
root@efd0499c2a6e:~/demos# cd tflite/
root@efd0499c2a6e:~/demos/tflite# ./run_sim.sh
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

[I] Parsing model....
[I] [Parser]: Begin to parse tflite model mobilenet_v2...
[W] [Parser]: Meets too high Onnx version! Please downgrade to 1.07!
2021-07-21 02:37:11.310169: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-07-21 02:37:11.343378: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600010000 Hz
2021-07-21 02:37:11.346610: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xac80e80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-21 02:37:11.346649: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
[I] [Parser]: Parser done!
[I] Parse model complete
[I] Quantizing model....
[I] AQT start: model_name:mobilenet_v2, calibration_method:MEAN, batch_size:1
[I] ==== read ir ================
[I]     float32 ir txt: /tmp/AIPUBuilder_1626835030.7969813/mobilenet_v2.txt
[I]     float32 ir bin2: /tmp/AIPUBuilder_1626835030.7969813/mobilenet_v2.bin
[I] ==== read ir DONE.===========
WARNING:tensorflow:From /usr/local/bin/aipubuild:8: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /usr/local/bin/aipubuild:8: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

[I] ==== auto-quantization ======
[I]     step1: get max/min statistic value DONE
[I]     step2: quantization each op DONE
[I]     step3: build quantization forward DONE
[I]     step4: show output scale of end node:
[I]             layer_id: 66, layer_top:MobilenetV2/Predictions/Softmax, output_scale:[255.99994]
[I] ==== auto-quantization DONE =
[I] Quantize model complete
[I] Building ...
[I] [common_options.h: 276] BuildTool version: 4.0.175. Build for target Z1_0701 at frequency 800MHz
[I] [common_options.h: 297] using default profile events to profile AIFF

[I] [IRChecker] Start to check IR: /tmp/AIPUBuilder_1626835030.7969813/mobilenet_v2_int8.txt
[I] [IRChecker] model_name: mobilenet_v2
[I] [IRChecker] IRChecker: All IR pass
[I] [graph.cpp : 846] loading graph weight: /tmp/AIPUBuilder_1626835030.7969813/mobilenet_v2_int8.bin size: 0x364ee8
[I] [builder.cpp:1059] Total memory for this graph: 0xa7e930 Bytes
[I] [builder.cpp:1060] Text   section:  0x00029130 Bytes
[I] [builder.cpp:1061] RO     section:  0x00003900 Bytes
[I] [builder.cpp:1062] Desc   section:  0x00022a00 Bytes
[I] [builder.cpp:1063] Data   section:  0x0044a600 Bytes
[I] [builder.cpp:1064] BSS    section:  0x005a4b00 Bytes
[I] [builder.cpp:1065] Stack         :  0x00040400 Bytes
[I] [builder.cpp:1066] Workspace(BSS):  0x0000f500 Bytes
[I] [main.cpp  : 467] # autogenrated by aipurun, do NOT modify!
LOG_FILE=log_default
FAST_FWD_INST=0
INPUT_INST_CNT=1
INPUT_DATA_CNT=2
CONFIG=Z1-0701
LOG_LEVEL=0
INPUT_INST_FILE0=/tmp/temp_381cb655395ebbc75c7990c73c832.text
INPUT_INST_BASE0=0x0
INPUT_INST_STARTPC0=0x0
INPUT_DATA_FILE0=/tmp/temp_381cb655395ebbc75c7990c73c832.ro
INPUT_DATA_BASE0=0x10000000
INPUT_DATA_FILE1=/tmp/temp_381cb655395ebbc75c7990c73c832.data
INPUT_DATA_BASE1=0x20000000
OUTPUT_DATA_CNT=2
OUTPUT_DATA_FILE0=output_mobilenet_v2.bin
OUTPUT_DATA_BASE0=0x20c6e400
OUTPUT_DATA_SIZE0=0x3e9
OUTPUT_DATA_FILE1=profile_data.bin
OUTPUT_DATA_BASE1=0x20800000
OUTPUT_DATA_SIZE1=0x700
RUN_DESCRIPTOR=BIN[0]

[I] [main.cpp  : 118] run simulator:
aipu_simulator_z1 /tmp/temp_381cb655395ebbc75c7990c73c832.cfg
[INFO]:SIMULATOR START!
[INFO]:========================================================================
[INFO]:                             STATIC CHECK
[INFO]:========================================================================
[INFO]:  INST START ADDR : 0x0(0)
[INFO]:  INST END ADDR   : 0x2912f(168239)
[INFO]:  INST SIZE       : 0x29130(168240)
[INFO]:  PACKET CNT      : 0x2913(10515)
[INFO]:  INST CNT        : 0xa44c(42060)
[INFO]:------------------------------------------------------------------------
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0x44e: 0x472021b(POP R27,Rc7) vs 0x5f00000(MVI R0,0x0,Rc7), PACKET:0x44e(1102) SLOT:0 vs 3
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0x45b: 0x472021b(POP R27,Rc7) vs 0x5f00000(MVI R0,0x0,Rc7), PACKET:0x45b(1115) SLOT:0 vs 3
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0x5c0: 0x472021b(POP R27,Rc7) vs 0x9f80020(ADD.S R0,R0,0x1,Rc7), PACKET:0x5c0(1472) SLOT:0 vs 3
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0x7fd: 0x4520180(BRL R0) vs 0x47a03e4(ADD R4,R0,R31,Rc7), PACKET:0x7fd(2045) SLOT:0 vs 3
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0x996: 0x4720204(POP R4,Rc7) vs 0x9f80020(ADD.S R0,R0,0x1,Rc7), PACKET:0x996(2454) SLOT:0 vs 3
[WARN]:[0803] INST WR/RD REG CONFLICT! PACKET 0xe40: 0x4720204(POP R4,Rc7) vs 0x47a1be0(ADD R0,R6,R31,Rc7), PACKET:0xe40(3648) SLOT:0 vs 3
[INFO]:========================================================================
[INFO]:                             STATIC CHECK END
[INFO]:========================================================================

[INFO]:AIPU START RUNNING: BIN[0]
[INFO]:TOTAL TIME: 2.385680s.
[INFO]:SIMULATOR EXIT!
[I] [main.cpp  : 135] Simulator finished.
Total errors: 0,  warnings: 0
root@efd0499c2a6e:~/demos/tflite# python3 quant_predict.py
predict first 5 label:
    index  231, prob 180, name: Shetland sheepdog, Shetland sheep dog, Shetland
    index  232, prob  67, name: collie
    index 1000, prob   0, name: toilet tissue, toilet paper, bathroom tissue
    index  342, prob   0, name: hog, pig, grunter, squealer, Sus scrofa
    index  340, prob   0, name: sorrel
true first 5 label:
    index  232, prob  83, name: collie
    index  231, prob  83, name: Shetland sheepdog, Shetland sheep dog, Shetland
    index  158, prob  41, name: papillon
    index  170, prob  40, name: borzoi, Russian wolfhound
    index  161, prob  39, name: Afghan hound, Afghan
Detect picture save to result.jpeg
root@efd0499c2a6e:~/demos/tflite#
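For reference, decoding the simulator's quantized output is straightforward: with the printed output_scale of about 256, a byte value of 180 above corresponds to roughly 180/256 ≈ 0.70 probability. A minimal sketch of the top-k step (the one-uint8-per-class layout is my assumption based on the printed `prob` values, not taken from the actual quant_predict.py source):

```python
def top_k_from_bytes(raw, k=5):
    """Return (class_index, quantized_prob) pairs for the k largest byte scores."""
    scores = list(raw)  # each byte is one class score in [0, 255]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order[:k]]

# e.g. with open("output_mobilenet_v2.bin", "rb") as f:
#          print(top_k_from_bytes(f.read()))
print(top_k_from_bytes(bytes([0, 180, 67, 3]), k=2))  # [(1, 180), (2, 67)]
```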

(3) The .pb tested here is a model I trained earlier; the training procedure and environment setup are not repeated in detail here. See https://blog.csdn.net/sxj731533730/article/details/113865061

Map my files into the Docker container, then modify the preprocessing script to build the calibration dataset that the official flow requires.

  --- First, exit the Docker environment

root@efd0499c2a6e:~# exit
exit

For the calibration dataset, I reuse the test set of the model I previously trained in https://blog.csdn.net/sxj731533730/article/details/113865061;

(4) Re-launch the container with the training directory mounted; an extra models folder now appears in the container:

ubuntu@ubuntu:~$ sudo docker run -it -v /home/ubuntu/models:/models zepan/zhouyi  /bin/bash

________                               _______________
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/


WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.

To avoid this, run the container by specifying your user's userid:

$ docker run -u $(id -u):$(id -g) args...

root@615a1c5ee580:/tf# cd /
root@615a1c5ee580:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  models  opt  proc  root  run  sbin  srv  sys  tf  tmp  usr  var
root@615a1c5ee580:/#

(5) Next, edit the preprocess_dataset.py file, following the training parameters and config file settings of the TF 1.15 model

root@615a1c5ee580:~# vim  ~/demos/pb/dataset/preprocess_dataset.py

(5-1) Mind the input size: the training config file ssdlite_mobilenet_v3_small_320x320_coco.config in my previous post used an input of 300; here the image_resizer is set as follows

image_resizer {
      fixed_shape_resizer {
        height: 224
        width: 224
      }
    }

(5-2) Generate the calibration dataset's label file 20210722label.txt with a simple generate.py~

root@615a1c5ee580:~/demos/pb/dataset# cat generate.py
import os

path = r"/models/research/object_detection/samples/configs/20210223/TestSet"

def file_name(file_dir):
    """Recursively collect every .jpg path under file_dir."""
    jpg_list = []
    for root, dirs, files in os.walk(file_dir):
        for file in files:
            if os.path.splitext(file)[1] == '.jpg':
                jpg_list.append(os.path.join(root, file))
    return jpg_list

listName = file_name(path)
# "w" rather than "a": re-running the script must not append duplicate lines
with open("20210722label.txt", "w") as f:
    for idx, item in enumerate(listName):
        f.write('{:} {:}\n'.format(item, idx))
root@615a1c5ee580:~/demos/pb/dataset# python3 generate.py

(5-3) First, update the preprocess_dataset.py code to point at the image dataset directory and the label file

img_dir='/models/research/object_detection/samples/configs/20210223/TestSet'
label_file='/root/demos/pb/dataset/20210722label.txt'

Because the preprocessing map used during training (https://github.com/tensorflow...) registers
'mobilenet_v3_small': inception_preprocessing,

Update the corresponding parameters; the mean values here must stay consistent with the parameters of your training command (or with the defaults)

#ssdlite_mobilenet_v3_small_320x320_coco.config PARAM
input_height=224
input_width=224
input_channel = 3
mean = [123.68, 116.78, 103.94]
var = 1
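
The normalization these parameters drive is the usual (pixel - mean) / var per channel; a tiny illustration with the values above (`normalize_pixel` is my own helper name, not part of the demo script):

```python
# Per-channel normalization as configured above: out = (pixel - mean) / var.
# With var = 1 this just subtracts the channel means (the classic ImageNet
# RGB means). normalize_pixel is an illustrative helper, not demo code.
MEAN = [123.68, 116.78, 103.94]  # R, G, B
VAR = 1

def normalize_pixel(rgb):
    return [(c - m) / VAR for c, m in zip(rgb, MEAN)]

print(normalize_pixel([123.68, 116.78, 103.94]))  # [0.0, 0.0, 0.0]
```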

Run the script to generate the calibration dataset

root@615a1c5ee580:~/demos/pb/dataset# python3 preprocess_dataset.py
WARNING:tensorflow:From preprocess_dataset.py:19: The name tf.enable_eager_execution is deprecated. Please use tf.compat.v1.enable_eager_execution instead.

WARNING:tensorflow:From preprocess_dataset.py:137: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

2021-07-22 06:19:50.947170: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-07-22 06:19:50.975297: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600010000 Hz
2021-07-22 06:19:50.979323: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4a620c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-22 06:19:50.979363: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From preprocess_dataset.py:67: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

root@6efb7df65458:~/demos/pb/dataset# ls
20210722dataset.npy  20210722label.npy  20210722label.txt  dataset.npy  generate.py  img  label.npy  label.txt  preprocess_dataset.py
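Before handing these .npy files to aipubuild, it is worth a quick sanity check that the dataset and label arrays line up. A small helper for that (the expected (N, 224, 224, 3) shape follows from the input_height/width settings above; the authoritative format is defined by the AIPU quantization tool, not by this check):

```python
import numpy as np

def check_calibration(dataset_path, label_path, hw=(224, 224)):
    """Sanity-check the .npy pair produced by preprocess_dataset.py."""
    data = np.load(dataset_path)
    labels = np.load(label_path)
    assert data.shape[0] == labels.shape[0], "dataset/label count mismatch"
    assert tuple(data.shape[1:3]) == hw, "unexpected image size"
    return data.shape, labels.shape

# e.g. check_calibration("20210722dataset.npy", "20210722label.npy")
```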

(6) Modify the /root/demos/pb/model/gen_inputbin.py file

input_height=224
input_width=224
input_channel = 3
mean = 0  # set to 0 here because training used --input_mean=0; see the MobileNetV2 training command
var = 1

img_name = "/models/research/object_detection/samples/configs/20210727/car/jeep/00c47631aae05108859a5a3e5dc9661c.jpg"

The image path here points to a picture from the jeep folder.

The input bin is generated successfully:

root@6efb7df65458:~/demos/pb/model# python3 gen_inputbin.py
save to input.bin OK
root@6efb7df65458:~/demos/pb/model# ls -l
total 100796
-rw-r--r-- 1 root root    307200 Jul 22 08:48 20210722input.bin
-rw-r--r-- 1 root root 102591623 Dec 15  2020 frozen.pb
-rw-r--r-- 1 root root      1211 Jul 22 08:48 gen_inputbin.py
-rw-r--r-- 1 root root    307200 Jul 22 08:36 input.bin
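Conceptually, gen_inputbin.py applies the same preprocessing to one image and dumps the raw tensor to input.bin. A rough reconstruction of that step (the real script also decodes and resizes the jpg; `to_input_bin`, the int8 cast, and the layout are my assumptions, not the demo's actual code):

```python
import numpy as np

def to_input_bin(img, mean=0, var=1, out_path="input.bin"):
    """Dump a decoded HxWx3 image array as a raw int8 tensor for the simulator.

    mean=0 / var=1 mirror the settings above (training used --input_mean=0).
    The int8 cast matches the int8-quantized model, but the real
    gen_inputbin.py may differ in dtype and layout; treat this as a sketch.
    """
    arr = (img.astype(np.float32) - mean) / var
    arr.astype(np.int8).tofile(out_path)
    return arr.shape

# e.g. to_input_bin(decoded_jpeg)   # decoded_jpeg: a 224x224x3 uint8 array
```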

(7) Modify the config file, then run the build

root@6efb7df65458:~/demos/pb/config# cp resnet_50_run.cfg  ssdlite_mobilenet_v3_small_320x320_coco.cfg
root@6efb7df65458:~/demos/pb/config# cat ssdlite_mobilenet_v3_small_320x320_coco.cfg
[Common]
mode=run

[Parser]
model_name = ssdlite_mobilenet_v3_small_320x320_coco
detection_postprocess =
model_domain = image_classification
output = detection_boxes,detection_scores,detection_classes
input_model =  /models/research/object_detection/samples/configs/20210223/saved/frozen_inference_graph.pb
input = image_tensor
input_shape = [1,224,224,3]
output_dir = ./

[AutoQuantizationTool]
model_name = ssdlite_mobilenet_v3_small_320x320_coco
quantize_method = SYMMETRIC
ops_per_channel = DepthwiseConv
calibration_data = /root/demos/pb/dataset/20210722dataset.npy
calibration_label = /root/demos/pb/dataset/20210722label.npy
preprocess_mode = normalize
quant_precision=int8
reverse_rgb = False
label_id_offset = 0

[GBuilder]
inputs=/root/demos/pb/model/20210722input.bin
simulator=aipu_simulator_z1
outputs=output_ssdlite_mobilenet_v3_small_320x320_coco_50.bin
profile= True
target=Z1_0701
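Since the build fails later if any path in the cfg is wrong, a quick pre-flight check of this INI-style file can save a simulation run. A sketch using Python's configparser (the list of path-valued keys is taken from the cfg above; this helper is my own convenience, not part of the AIPU toolchain):

```python
import configparser
import os

def preflight(cfg_path):
    """Parse an aipubuild cfg and report referenced file paths that don't exist."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    missing = []
    # Path-valued keys as used in the cfg above; extend as needed.
    for section, key in [("Parser", "input_model"),
                         ("AutoQuantizationTool", "calibration_data"),
                         ("AutoQuantizationTool", "calibration_label"),
                         ("GBuilder", "inputs")]:
        path = cfg.get(section, key, fallback="").strip()
        if path and not os.path.exists(path):
            missing.append((key, path))
    return missing

# e.g. preflight("config/ssdlite_mobilenet_v3_small_320x320_coco.cfg")
```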

Quantize the model; it turns out some ops are unsupported~

root@b9bc72aa6d16:~/demos/pb# aipubuild config/ssdlite_mobilenet_v3_small_320x320_coco.cfg
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

[I] Parsing model....
[I] [Parser]: Begin to parse tensorflow model ssdlite_mobilenet_v3_small_320x320_coco...
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/AIPUBuilder/Parser/passes/tf/__init__.py:73: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
[Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArraySizeV3(TensorArraySizeV3)] : generate IR failed for node Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArraySizeV3 due to <class 'numpy.void'>
[W] [Parser] : [Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3(TensorArrayGatherV3)] : generate IR failed for node Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3 due to <class 'numpy.void'>
[I] [Parser]: Parser done!
[I] Parse model complete
[I] Quantizing model....
[I] AQT start: model_name:ssdlite_mobilenet_v3_small_320x320_coco, calibration_method:MEAN, batch_size:1
[I] ==== read ir ================
[I]     float32 ir txt: /tmp/AIPUBuilder_1627019106.3694997/ssdlite_mobilenet_v3_small_320x320_coco.txt
[I]     float32 ir bin2: /tmp/AIPUBuilder_1627019106.3694997/ssdlite_mobilenet_v3_small_320x320_coco.bin
[E] unsupported op: Merge
[E] unsupported op: Exit
[E] unsupported op: TensorArraySizeV3
[E] unsupported op: Range
[E] unsupported op: TensorArrayGatherV3
[I] ==== read ir DONE.===========
[E] Quantize model failed! 'NoneType' object has no attribute 'type_'
[I] Building ...
[I] [common_options.h: 276] BuildTool version: 4.0.175. Build for target Z1_0701 at frequency 800MHz
[I] [common_options.h: 297] using default profile events to profile AIFF

[E] [main.cpp  : 170] Cannot Open file:/tmp/AIPUBuilder_1627019106.3694997/ssdlite_mobilenet_v3_small_320x320_coco_int8.txt
Total errors: 1,  warnings: 0

Cutting off the back-end post-processing would probably work (officially, the post-processing can be handled outside the model directly), but I'll give up on this model for now; it's beyond me~

sudo docker commit b9bc72aa6d16 zepan/zhouyi2

Save the Docker state first, then switch to another model and reload Docker with the local folder mounted.

(8) Port over my earlier private blog post with a few changes and train a mobilenet_v2, following https://github.com/tensorflow/models/tree/master/research/slim

First, sort the dataset into one folder per class; I wrote a small script for this~ (1,312 images in total)

import os
import xml.etree.ElementTree as ET
import shutil
import io
import json
dir="C:\\Users\\PHILIPS\\Desktop\\TEST\\dataset"

for file in os.listdir(dir):
    if file.endswith(".json"):
        file_json = io.open(os.path.join(dir,file), 'r', encoding='utf-8')
        json_data = file_json.read()
        data = json.loads(json_data)
        m_filename = data['shapes'][0]['label']
        newDir = os.path.join(dir, m_filename)
        if not os.path.isdir(newDir):
            os.mkdir(newDir)
        (filename, extension) = os.path.splitext(file)
        if not os.path.isfile(os.path.join(newDir, ".".join([filename, "jpg"]))):
            shutil.copy(os.path.join(dir, ".".join([filename, "jpg"])), newDir)
    elif file.endswith(".xml"):
        tree = ET.parse(os.path.join(dir, file))
        root = tree.getroot()
        for obj in root.iter('object'):  # an annotation may contain several objects
            cls = obj.find('name').text
            newDir=os.path.join(dir,cls)
            print(newDir)
            if not os.path.isdir(newDir):
                os.mkdir(newDir)
            (filename, extension) = os.path.splitext(file)
            if not os.path.isfile(os.path.join(newDir,".".join([filename,"jpg"]))):
                shutil.copy(os.path.join(dir,".".join([filename,"jpg"])) , newDir)

    elif file.endswith(".jpg"):
        print(os.path.join(dir,file))
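The XML branch above keys on the `<name>` tag inside each `<object>` element; the same ElementTree lookup on a minimal VOC-style snippet (the inline XML here is fabricated for illustration):

```python
import xml.etree.ElementTree as ET

# Minimal VOC-style annotation using the same find('name') lookup as the
# sorting script. The XML content here is made up for illustration.
SAMPLE = """
<annotation>
  <object><name>jeep</name></object>
  <object><name>suv</name></object>
</annotation>
"""

root = ET.fromstring(SAMPLE)
classes = [obj.find('name').text for obj in root.iter('object')]
print(classes)  # ['jeep', 'suv']
```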

Modify the code a bit; the modified file is pasted directly below. (The whole training process mirrors the flowers example; if anything is unclear, run through the flowers training flow once first.)

/home/ubuntu/models/research/slim/datasets/download_and_convert_flowers.py

After the edits, just run this file

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math
import os
import random
import sys

from six.moves import range
from six.moves import zip
import tensorflow.compat.v1 as tf

from datasets import dataset_utils


# The number of images in the validation set.
_NUM_VALIDATION = 350

# Seed for repeatability.
_RANDOM_SEED = 0

# The number of shards per dataset split.
_NUM_SHARDS = 5


class ImageReader(object):
  """Helper class that provides TensorFlow image coding utilities."""

  def __init__(self):
    # Initializes function that decodes RGB JPEG data.
    self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
    self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)

  def read_image_dims(self, sess, image_data):
    image = self.decode_jpeg(sess, image_data)
    return image.shape[0], image.shape[1]

  def decode_jpeg(self, sess, image_data):
    image = sess.run(self._decode_jpeg,
                     feed_dict={self._decode_jpeg_data: image_data})
    assert len(image.shape) == 3
    assert image.shape[2] == 3
    return image


def _get_filenames_and_classes(dataset_dir):
  """Returns a list of filenames and inferred class names.

  Args:
    dataset_dir: A directory containing a set of subdirectories representing
      class names. Each subdirectory should contain PNG or JPG encoded images.

  Returns:
    A list of image file paths, relative to `dataset_dir` and the list of
    subdirectories, representing class names.
  """
  flower_root = os.path.join(dataset_dir)
  directories = []
  class_names = []
  for filename in os.listdir(flower_root):
    path = os.path.join(flower_root, filename)
    if os.path.isdir(path):
      directories.append(path)
      class_names.append(filename)

  photo_filenames = []
  for directory in directories:
    for filename in os.listdir(directory):
      path = os.path.join(directory, filename)
      photo_filenames.append(path)

  return photo_filenames, sorted(class_names)


def _get_dataset_filename(dataset_dir, split_name, shard_id):
  output_filename = 'cars_%s_%05d-of-%05d.tfrecord' % (
      split_name, shard_id, _NUM_SHARDS)
  return os.path.join(dataset_dir, output_filename)


def _convert_dataset(split_name, filenames, class_names_to_ids, dataset_dir):
  """Converts the given filenames to a TFRecord dataset.

  Args:
    split_name: The name of the dataset, either 'train' or 'validation'.
    filenames: A list of absolute paths to png or jpg images.
    class_names_to_ids: A dictionary from class names (strings) to ids
      (integers).
    dataset_dir: The directory where the converted datasets are stored.
  """
  assert split_name in ['train', 'validation']

  num_per_shard = int(math.ceil(len(filenames) / float(_NUM_SHARDS)))

  with tf.Graph().as_default():
    image_reader = ImageReader()

    with tf.Session('') as sess:

      for shard_id in range(_NUM_SHARDS):
        output_filename = _get_dataset_filename(
            dataset_dir, split_name, shard_id)

        with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
          start_ndx = shard_id * num_per_shard
          end_ndx = min((shard_id+1) * num_per_shard, len(filenames))
          for i in range(start_ndx, end_ndx):
            sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
                i+1, len(filenames), shard_id))
            sys.stdout.flush()

            # Read the filename:
            image_data = tf.gfile.GFile(filenames[i], 'rb').read()
            height, width = image_reader.read_image_dims(sess, image_data)

            class_name = os.path.basename(os.path.dirname(filenames[i]))
            class_id = class_names_to_ids[class_name]

            example = dataset_utils.image_to_tfexample(
                image_data, b'jpg', height, width, class_id)
            tfrecord_writer.write(example.SerializeToString())

  sys.stdout.write('\n')
  sys.stdout.flush()




def _dataset_exists(dataset_dir):
  for split_name in ['train', 'validation']:
    for shard_id in range(_NUM_SHARDS):
      output_filename = _get_dataset_filename(
          dataset_dir, split_name, shard_id)
      if not tf.gfile.Exists(output_filename):
        return False
  return True


def run(dataset_dir):
  """Runs the download and conversion operation.

  Args:
    dataset_dir: The dataset directory where the dataset is stored.
  """
  if not tf.gfile.Exists(dataset_dir):
    tf.gfile.MakeDirs(dataset_dir)

  if _dataset_exists(dataset_dir):
    print('Dataset files already exist. Exiting without re-creating them.')
    return


  photo_filenames, class_names = _get_filenames_and_classes(dataset_dir)
  class_names_to_ids = dict(
      list(zip(class_names, list(range(len(class_names))))))

  # Divide into train and test:
  random.seed(_RANDOM_SEED)
  random.shuffle(photo_filenames)
  training_filenames = photo_filenames[_NUM_VALIDATION:]
  validation_filenames = photo_filenames[:_NUM_VALIDATION]

  # First, convert the training and validation sets.
  _convert_dataset('train', training_filenames, class_names_to_ids,
                   dataset_dir)
  _convert_dataset('validation', validation_filenames, class_names_to_ids,
                   dataset_dir)

  # Finally, write the labels file:
  labels_to_class_names = dict(
      list(zip(list(range(len(class_names))), class_names)))
  dataset_utils.write_label_file(labels_to_class_names, dataset_dir)


  print('\nFinished converting the car dataset!')

dir="/home/ubuntu/models/research/object_detection/samples/configs/20210727/car"
run(dir)

The dataset is generated successfully. Note this differs somewhat from how I generated data for the v3 model~

(tensorflow) ubuntu@ubuntu:~/models/research/slim/datasets$ python3 download_and_convert.py
2021-07-27 16:52:59.629715: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
.....
2021-07-27 16:53:28.837691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10320 MB memory) -> physical GPU (device: 2, name: GeForce RTX 2080 Ti, pci bus id: 0000:82:00.0, compute capability: 7.5)
>> Converting image 350/350 shard 4

Finished converting the car dataset!

The generated files:

(tensorflow) ps@ps-Super-Server:~/NewDisk2/sxj731533730/models/research/object_detection/samples/configs/20210727$ tree -L 2
.
├── car
│   ├── bike
│   ├── bus
│   ├── fire_engine
│   ├── heavy_truck
│   ├── jeep
│   ├── mini_bus
│   ├── motorcycle
│   ├── racing
│   ├── sedan
│   ├── suv
│   ├── taxi
│   └── truck
├── cars_train_00000-of-00005.tfrecord
├── cars_train_00001-of-00005.tfrecord
├── cars_train_00002-of-00005.tfrecord
├── cars_train_00003-of-00005.tfrecord
├── cars_train_00004-of-00005.tfrecord
├── cars_validation_00000-of-00005.tfrecord
├── cars_validation_00001-of-00005.tfrecord
├── cars_validation_00002-of-00005.tfrecord
├── cars_validation_00003-of-00005.tfrecord
├── cars_validation_00004-of-00005.tfrecord
├── frobzen_mobilenet_v2.pb
├── labels.txt
├── mobilenet_v2.pb
├── save  # files produced by later training
│   ├── checkpoint
│   ├── events.out.tfevents.1627380721.ps-Super-Server
│   ├── events.out.tfevents.1627380918.ps-Super-Server
│   ├── events.out.tfevents.1627381057.ps-Super-Server
│   ├── events.out.tfevents.1627459933.ps-Super-Server
│   ├── events.out.tfevents.1627460822.ps-Super-Server
│   ├── graph.pbtxt
│   ├── model.ckpt-100000.data-00000-of-00001
│   ├── model.ckpt-100000.index
│   ├── model.ckpt-100000.meta
│   ├── model.ckpt-98812.data-00000-of-00001
│   ├── model.ckpt-98812.index
│   ├── model.ckpt-98812.meta
│   ├── model.ckpt-99175.data-00000-of-00001
│   ├── model.ckpt-99175.index
│   ├── model.ckpt-99175.meta
│   ├── model.ckpt-99535.data-00000-of-00001
│   ├── model.ckpt-99535.index
│   ├── model.ckpt-99535.meta
│   ├── model.ckpt-99895.data-00000-of-00001
│   ├── model.ckpt-99895.index
│   └── model.ckpt-99895.meta
└── small.log

14 directories, 36 files

Modify the dataset file /home/ubuntu/models/research/slim/datasets/flowers.py (registered below as car) and start training. The modified file, pasted directly:

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import tensorflow.compat.v1 as tf
import tf_slim as slim

from datasets import dataset_utils

_FILE_PATTERN = 'cars_%s_*.tfrecord'

SPLITS_TO_SIZES = {'train': int(1312*0.8), 'validation': int(1312*0.2)}

_NUM_CLASSES = 12

_ITEMS_TO_DESCRIPTIONS = {
    'image': 'A color image of varying size.',
    'label': 'A single integer between 0 and 11',
}


def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
  """Gets a dataset tuple with instructions for reading flowers.

  Args:
    split_name: A train/validation split name.
    dataset_dir: The base directory of the dataset sources.
    file_pattern: The file pattern to use when matching the dataset sources.
      It is assumed that the pattern contains a '%s' string so that the split
      name can be inserted.
    reader: The TensorFlow reader type.

  Returns:
    A `Dataset` namedtuple.

  Raises:
    ValueError: if `split_name` is not a valid train/validation split.
  """
  if split_name not in SPLITS_TO_SIZES:
    raise ValueError('split name %s was not recognized.' % split_name)

  if not file_pattern:
    file_pattern = _FILE_PATTERN
  file_pattern = os.path.join(dataset_dir, file_pattern % split_name)

  # Allowing None in the signature so that dataset_factory can use the default.
  if reader is None:
    reader = tf.TFRecordReader

  keys_to_features = {
      'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
      'image/format': tf.FixedLenFeature((), tf.string, default_value='png'),
      'image/class/label': tf.FixedLenFeature(
          [], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),
  }

  items_to_handlers = {
      'image': slim.tfexample_decoder.Image(),
      'label': slim.tfexample_decoder.Tensor('image/class/label'),
  }

  decoder = slim.tfexample_decoder.TFExampleDecoder(
      keys_to_features, items_to_handlers)

  labels_to_names = None
  if dataset_utils.has_labels(dataset_dir):
    labels_to_names = dataset_utils.read_label_file(dataset_dir)

  return slim.dataset.Dataset(
      data_sources=file_pattern,
      reader=reader,
      decoder=decoder,
      num_samples=SPLITS_TO_SIZES[split_name],
      items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
      num_classes=_NUM_CLASSES,
      labels_to_names=labels_to_names)

Modify /home/ubuntu/models/research/slim/datasets/dataset_factory.py

from datasets import car

datasets_map = {
    'cifar10': cifar10,
    'flowers': flowers,
    'imagenet': imagenet,
    'mnist': mnist,
    'visualwakewords': visualwakewords,
    'car': car,
}

Train the model

CUDA_VISIBLE_DEVICES=0,1,2 nohup python3 research/slim/train_image_classifier.py \
  --train_dir=/home/ps/NewDisk2/sxj731533730/models/research/object_detection/samples/configs/20210727/save \
  --dataset_dir=/home/ps/NewDisk2/sxj731533730/models/research/object_detection/samples/configs/20210727/car \
  --dataset_name=car \
  --dataset_split_name=train \
  --model_name=mobilenet_v2 \
  --max_number_of_steps=100000 \
  --batch_size=32 \
  --learning_rate=0.0001 \
  --learning_rate_decay_type=fixed \
  --save_interval_secs=60 \
  --save_summaries_secs=60 \
  --log_every_n_steps=10 \
  --optimizer=rmsprop \
  --weight_decay=0.00004 \
  --alsologtostderr > /home/ps/NewDisk2/sxj731533730/models/research/object_detection/samples/configs/20210727/small.log 2>&1 &

Training finished:

INFO:tensorflow:global step 9990: loss = 2.2234 (0.351 sec/step)
I0727 18:45:43.082987 139726695827200 learning.py:512] global step 9990: loss = 2.2234 (0.351 sec/step)
INFO:tensorflow:global step 10000: loss = 1.8100 (0.257 sec/step)
I0727 18:45:45.167634 139726695827200 learning.py:512] global step 10000: loss = 1.8100 (0.257 sec/step)
INFO:tensorflow:Stopping Training.
I0727 18:45:45.168541 139726695827200 learning.py:769] Stopping Training.
INFO:tensorflow:Finished training! Saving model to disk.
I0727 18:45:45.168777 139726695827200 learning.py:777] Finished training! Saving model to disk.
/home/ps/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/summary/writer/writer.py:386: UserWarning: Attempting to use a closed FileWriter. The operation will be a noop unless the FileWriter is explicitly reopened.
  warnings.warn("Attempting to use a closed FileWriter. "

First, export the inference graph:

ubuntu@ubuntu:~/models$ python3 research/slim/export_inference_graph.py --alsologtostderr --dataset_dir=/home/ubuntu/models/research/object_detection/samples/configs/20210727/car --dataset_name=car --model_name=mobilenet_v2 --output_file=/home/ubuntu/models/research/object_detection/samples/configs/20210727/mobilenet_v2.pb

Inspect the generated inference graph. These steps differ somewhat from how I generated models later; to quote the referenced blogger: "freeze the forward-pass graph generated above, that is, import the parameters from the ckpt file into the graph to obtain the final model."

References:
https://blog.csdn.net/chenyuping333/article/details/81537551#2、模型训练
https://blog.csdn.net/chenyuping333/article/details/82106863

Use the exported inference graph to identify the output node name, then freeze the graph:

python3 ../tensorflow/tensorflow/python/tools/freeze_graph.py --input_graph=/home/ubuntu/models/research/object_detection/samples/configs/20210727/mobilenet_v2.pb --input_checkpoint=/home/ubuntu/models/research/object_detection/samples/configs/20210727/save/model.ckpt-100000 --output_graph=/home/ubuntu/models/research/object_detection/samples/configs/20210727/frobzen_mobilenet_v2.pb --input_binary=True  --output_node_names=MobilenetV2/Predictions/Softmax

This produces the frozen model. First test it with Python to make sure the model actually works:

ubuntu@ubuntu:~/models$ python3 ../tensorflow/tensorflow/examples/label_image/label_image.py --image=/home/ubuntu/models/research/object_detection/samples/configs/20210727/car/jeep/00c47631aae05108859a5a3e5dc9661c.jpg --input_layer=input --output_layer=MobilenetV2/Predictions/Reshape --graph=/home/ubuntu/models/research/object_detection/samples/configs/20210727/frobzen_mobilenet_v2.pb --labels=/home/ubuntu/models/research/object_detection/samples/configs/20210727/labels.txt  --input_mean=0  --input_std=255 --input_height=224 --input_width=224

The test result is indeed a jeep. Testing with an image from the training set is not entirely fair, but it confirms the pipeline works. ^^

alhost/replica:0/task:0/device:GPU:1 with 10320 MB memory) -> physical GPU (device: 1, name: GeForce RTX 2080 Ti, pci bus id: 0000:03:00.0, compute capability: 7.5)
2021-07-28 20:49:34.669721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-07-28 20:49:35.975967: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
[0 6 7 2 8]
WARNING:tensorflow:From ../tensorflow/tensorflow/examples/label_image/label_image.py:67: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

4:jeep 2.5490875
5:mini_bus 1.9137394
0:bike 1.4003849
6:motorcycle 0.38158944
3:heavy_truck -0.16444218
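The scores printed above are raw logits rather than probabilities (note the negative value), since the output node used here is `MobilenetV2/Predictions/Reshape` rather than the Softmax node. A numerically stable softmax shows the relative confidence; the sketch below uses only the five printed values, so the resulting probabilities are merely illustrative (the full output has 12 entries):

```python
import numpy as np

# The five logits printed above, in ranked order (jeep first).
logits = np.array([2.5490875, 1.9137394, 1.4003849, 0.38158944, -0.16444218])

# Numerically stable softmax: subtract the max before exponentiating.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
# 'jeep' takes roughly half the probability mass among these five.
```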

The test code is attached below. While it runs, it also writes the reference output file for the R329 board (output_ref.bin in the official flow), renamed here to 20210727output_ref.bin.

ubuntu@ubuntu:~/models$ cat ../tensorflow/tensorflow/examples/label_image/label_image.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse

import numpy as np
import tensorflow as tf


def load_graph(model_file):
  graph = tf.Graph()
  graph_def = tf.GraphDef()

  with open(model_file, "rb") as f:
    graph_def.ParseFromString(f.read())
  with graph.as_default():
    tf.import_graph_def(graph_def)

  return graph


def read_tensor_from_image_file(file_name,
                                input_height=299,
                                input_width=299,
                                input_mean=0,
                                input_std=255):
  input_name = "file_reader"
  output_name = "normalized"
  file_reader = tf.read_file(file_name, input_name)
  if file_name.endswith(".png"):
    image_reader = tf.io.decode_png(file_reader, channels=3, name="png_reader")
  elif file_name.endswith(".gif"):
    image_reader = tf.squeeze(tf.io.decode_gif(file_reader, name="gif_reader"))
  elif file_name.endswith(".bmp"):
    image_reader = tf.io.decode_bmp(file_reader, name="bmp_reader")
  else:
    image_reader = tf.io.decode_jpeg(
        file_reader, channels=3, name="jpeg_reader")
  float_caster = tf.cast(image_reader, tf.float32)
  dims_expander = tf.expand_dims(float_caster, 0)
  resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
  normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
  sess = tf.compat.v1.Session()
  result = sess.run(normalized)

  return result


def load_labels(label_file):
  label = []
  proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
  for l in proto_as_ascii_lines:
    label.append(l.rstrip())
  return label


if __name__ == "__main__":
  file_name = "tensorflow/examples/label_image/data/grace_hopper.jpg"
  model_file = \
    "tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb"
  label_file = "tensorflow/examples/label_image/data/imagenet_slim_labels.txt"
  input_height = 299
  input_width = 299
  input_mean = 0
  input_std = 255
  input_layer = "input"
  output_layer = "InceptionV3/Predictions/Reshape_1"

  parser = argparse.ArgumentParser()
  parser.add_argument("--image", help="image to be processed")
  parser.add_argument("--graph", help="graph/model to be executed")
  parser.add_argument("--labels", help="name of file containing labels")
  parser.add_argument("--input_height", type=int, help="input height")
  parser.add_argument("--input_width", type=int, help="input width")
  parser.add_argument("--input_mean", type=int, help="input mean")
  parser.add_argument("--input_std", type=int, help="input std")
  parser.add_argument("--input_layer", help="name of input layer")
  parser.add_argument("--output_layer", help="name of output layer")
  args = parser.parse_args()

  if args.graph:
    model_file = args.graph
  if args.image:
    file_name = args.image
  if args.labels:
    label_file = args.labels
  if args.input_height:
    input_height = args.input_height
  if args.input_width:
    input_width = args.input_width
  if args.input_mean:
    input_mean = args.input_mean
  if args.input_std:
    input_std = args.input_std
  if args.input_layer:
    input_layer = args.input_layer
  if args.output_layer:
    output_layer = args.output_layer

  graph = load_graph(model_file)
  t = read_tensor_from_image_file(
      file_name,
      input_height=input_height,
      input_width=input_width,
      input_mean=input_mean,
      input_std=input_std)

  input_name = "import/" + input_layer
  output_name = "import/" + output_layer
  input_operation = graph.get_operation_by_name(input_name)
  output_operation = graph.get_operation_by_name(output_name)

  with tf.compat.v1.Session(graph=graph) as sess:
    results = sess.run(output_operation.outputs[0], {
        input_operation.outputs[0]: t
    })
  results = np.squeeze(results)
  fw=open('20210727output_ref.bin', 'wb')  # following the official example, save the real float results
  fw.write(results)
  fw.close()
  labelfile = '20210727output_ref.bin'
  npylabel = np.fromfile(labelfile, dtype=np.float32)
  labelclass = npylabel.argmax()
  head5t = npylabel.argsort()[-5:][::-1]
  print(head5t)
  top_k = results.argsort()[-5:][::-1]
  labels = load_labels(label_file)
  for i in top_k:
    print(labels[i], results[i])
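The reference-file round trip in the script above comes down to writing the float32 result vector as raw bytes and reading it back with `np.fromfile`. A minimal sketch (the file name here is illustrative):

```python
import numpy as np

# Write a float32 vector as raw bytes -- effectively what fw.write(results)
# does for a float32 ndarray -- then read it back the same way
# quant_predict.py reads 20210727output_ref.bin.
scores = np.array([2.549, 1.914, 1.400, 0.382, -0.164], dtype=np.float32)
with open('ref_demo.bin', 'wb') as fw:
    fw.write(scores.tobytes())

loaded = np.fromfile('ref_demo.bin', dtype=np.float32)
top5 = loaded.argsort()[-5:][::-1]  # indices of the five largest scores
```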

Start the saved docker image again:

ubuntu@ubuntu:~$ sudo docker run -it -v /home/ubuntu/models:/models zepan/zhouyi2  /bin/bash

Copy the generated 20210727output_ref.bin into the container:

ubuntu@ubuntu:~/models$ docker cp 20210727output_ref.bin 332d7ad26d9a:/root/demos/pb

Create a mobilenet_v2.cfg file:

root@3b34326375f7:~/demos/pb/config# cat mobilenet_v2.cfg
[Common]
mode=run

[Parser]
model_name = mobilenet_v2
detection_postprocess =
model_domain = image_classification
output = MobilenetV2/Predictions/Reshape
input_model =  /models/research/object_detection/samples/configs/20210727/frobzen_mobilenet_v2.pb
input = input
input_shape = [1,224,224,3]
output_dir = ./

[AutoQuantizationTool]
model_name = mobilenet_v2
quantize_method = SYMMETRIC
ops_per_channel = DepthwiseConv
calibration_data = /root/demos/pb/dataset/20210722dataset.npy
calibration_label = /root/demos/pb/dataset/20210722label.npy
preprocess_mode = normalize
quant_precision=int8
reverse_rgb = False
label_id_offset = 0

[GBuilder]
inputs=/root/demos/pb/model/20210722input.bin
simulator=aipu_simulator_z1
outputs=output_output_mobilenet_v2.bin
profile= True
target=Z1_0701
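The [GBuilder] section points at `20210722input.bin`, which must be prepared beforehand. Judging from how quant_predict.py reconstructs the picture with `round(x) + 128`, the file is assumed to hold int8 values of `pixel - 128`. A numpy sketch of that packing, using a synthetic image in place of a real photo (the file name below is illustrative):

```python
import numpy as np

# Pack a 224x224 RGB image as int8 (pixel - 128), matching how
# quant_predict.py reconstructs it with round(x) + 128. A synthetic
# image stands in for a real photo here.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(224, 224, 3), dtype=np.int16)
(pixels - 128).astype(np.int8).tofile('input_demo.bin')

# Round-trip check mirroring quant_predict.py's reconstruction.
npyinput = np.fromfile('input_demo.bin', dtype=np.int8)
restored = np.clip(np.round(npyinput).astype(np.int16) + 128, 0, 255).astype(np.uint8)
restored = restored.reshape(224, 224, 3)
```

Note that the quantization tool's own preprocessing (mean/std, RGB order) is governed by the [AutoQuantizationTool] options; this sketch only covers the int8 packing of the simulator input.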

Run the build:

root@aa607455f9b4:~/demos/pb#  aipubuild config/mobilenet_v2.cfg

The final test script:

root@332d7ad26d9a:~/demos/pb# cat quant_predict.py
from PIL import Image
import cv2
from matplotlib import pyplot as plt
import matplotlib.patches as patches
import numpy as np
import os
#import imagenet_classes as class_name

current_dir = os.getcwd()
label_offset = 0
outputfile = current_dir + '/output_mobilenet_v2.bin'
npyoutput = np.fromfile(outputfile, dtype=np.uint8)
outputclass = npyoutput.argmax()
head5p = npyoutput.argsort()[-5:][::-1]

labelfile = current_dir + '/20210727output_ref.bin'
npylabel = np.fromfile(labelfile, dtype=np.float32)
labelclass = npylabel.argmax()
head5t = npylabel.argsort()[-5:][::-1]
class_names=["bike","bus","fire_engine","heavy_truck","jeep","mini_bus","motorcycle","racing","sedan","suv","taxi","truck"]
print("predict first 5 label:")
print(head5p)
print(head5t)
for i in head5p:
    #print("    index %4d, prob %3d, name: %s"%(i, npyoutput[i], class_name.class_names[i-label_offset]))
    print("    index %4d, prob %3d, name: %s"%(i, npyoutput[i], class_names[i-label_offset]))
print("true first 5 label:")
for i in head5t:
    #print("    index %4d, prob %3d, name: %s"%(i, npylabel[i], class_name.class_names[i-label_offset]))
    print("    index %4d, prob %3d, name: %s"%(i, npylabel[i], class_names[i-label_offset]))
# Show input picture
print('Detect picture save to result.jpeg')

input_path = './model/20210722input.bin'
npyinput = np.fromfile(input_path, dtype=np.int8)
image = np.clip(np.round(npyinput)+128, 0, 255).astype(np.uint8)
image = np.reshape(image, (224, 224, 3))
im = Image.fromarray(image)
im.save('result.jpeg')
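Beyond eyeballing the top-5 lists, one way to quantify how closely the quantized output tracks the float reference is cosine similarity between the two score vectors. The sketch below uses synthetic 12-class vectors; in practice you would load `output_mobilenet_v2.bin` as uint8 and `20210727output_ref.bin` as float32, as the script above does:

```python
import numpy as np

def cosine_similarity(a, b):
    # Compare two score vectors regardless of their absolute scale.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for the quantized and reference outputs.
quant = np.array([234, 231, 202, 201, 141, 120, 110, 100, 90, 80, 70, 60],
                 dtype=np.uint8)
ref = quant.astype(np.float32) + np.linspace(-5, 5, 12, dtype=np.float32)

sim = cosine_similarity(quant, ref)  # close to 1.0 for well-matched outputs
```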

Run the simulator build command again:

root@3b34326375f7:~/demos/pb# aipubuild config/mobilenet_v2.cfg

The label order in `class_names` follows the order of the labels used during training.

Run the simulation test:

root@3b34326375f7:~/demos/pb# python3 quant_predict.py
predict first 5 label:
[4 7 2 3 6]
[4 5 0 6 3]
    index    4, prob 234, name: jeep
    index    7, prob 231, name: racing
    index    2, prob 202, name: fire_engine
    index    3, prob 201, name: heavy_truck
    index    6, prob 141, name: motorcycle
true first 5 label:
    index    4, prob 650, name: jeep
    index    5, prob 488, name: mini_bus
    index    0, prob 357, name: bike
    index    6, prob  97, name: motorcycle
    index    3, prob -41, name: heavy_truck
Detect picture save to result.jpeg

Note: the test file generated from the quantized model must be produced with the same test parameters (input image and preprocessing) as the output_ref.bin reference file.
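A quick scripted version of this consistency check: both files should decode to the same number of class scores, keeping in mind that the dtypes differ (uint8 for the quantized output, float32 for the reference). Synthetic files stand in here for the real ones named in quant_predict.py above:

```python
import numpy as np

# Synthetic stand-ins for output_mobilenet_v2.bin (uint8 scores) and
# 20210727output_ref.bin (float32 logits); both should hold 12 classes.
np.arange(12, dtype=np.uint8).tofile('out_demo.bin')
np.linspace(-1.0, 3.0, 12, dtype=np.float32).tofile('ref_demo2.bin')

quant = np.fromfile('out_demo.bin', dtype=np.uint8)
ref = np.fromfile('ref_demo2.bin', dtype=np.float32)

# Same class count, and ideally the same argmax.
assert quant.size == ref.size == 12
assert quant.argmax() == ref.argmax()
```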

Copy the data out of the container (already uploaded to Baidu Cloud):

ubuntu@ubuntu:~$ sudo docker cp d5326b8d465c:/root/demos/pb .

Baidu Cloud link to the trained model:

链接:https://pan.baidu.com/s/15ltGQbtHViDaIzM7VjXADg
提取码:woo1
