Refer to https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD

### Step 1 ###
### Download the TensorFlow SSD-Inception_v2_coco model
$ cd ~/Downloads
$ wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
$ tar -zxvf ssd_inception_v2_coco_2017_11_17.tar.gz

### Step 2-1 ###
### Copy the TensorFlow protobuf file (frozen_inference_graph.pb) from the directory extracted in the previous step
### to the working directory (for example /usr/src/tensorrt/samples/sampleUffSSD/).
### [sudo] password for adlink: adlink
$ cd ~/Downloads
$ echo adlink | sudo -S cp ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb /usr/src/tensorrt/samples/sampleUffSSD/frozen_inference_graph.pb
$ cd /usr/src/tensorrt/samples/sampleUffSSD

### Step 2-2 ###
### Preprocess the TensorFlow model with the UFF converter.
$ echo adlink | sudo -S python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py

### Step 3 ###
### Run the following command to convert the UFF file into a TensorRT engine.
### The conversion takes about 3 minutes.
$ /usr/src/tensorrt/bin/trtexec --uff=frozen_inference_graph.uff --output=NMS --uffInput=Input,3,300,300 --workspace=3000 --saveEngine=/home/adlink/Downloads/model/ssdv2.engine --verbose --maxBatch=4

### Step 4 ###
### Test with EVA
$ cd ~/Downloads
$ gst-launch-1.0 pylonsrc camera=0 fps=15 ! videoscale ! video/x-raw, width=800, height=600 ! videoconvert ! adrt model=/home/adlink/Downloads/model/ssdv2.engine batch=1 device=0 scale=0.0078 mean="0 0 0" norm=false ! adtrans_ssd label=/home/adlink/Downloads/model/ssd_coco_labels.txt ! admetadrawer ! videoconvert ! fpsdisplaysink video-sink=xvimagesink text-overlay=true
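
### Note on config.py (Step 2-2) ###
The convert_to_uff.py call in Step 2-2 expects a config.py in the working directory that maps the TensorFlow nodes UFF cannot handle onto TensorRT plugins. The sketch below is based on the config.py shipped with NVIDIA's sampleUffSSD sample (linked at the top); the plugin names and parameters come from that sample and may differ between TensorRT versions, so treat it as a reference rather than a guaranteed drop-in file.

# config.py -- graph surgeon preprocessing for SSD-Inception_v2_coco (sketch from sampleUffSSD)
import graphsurgeon as gs
import tensorflow as tf

# Replace the TF preprocessor with a plain 3x300x300 input placeholder.
Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32, shape=[1, 3, 300, 300])

# Map the anchor generator and postprocessor onto TensorRT plugins.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6, minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0, backgroundLabelId=0,
    confidenceThreshold=1e-8, nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91, inputOrder=[0, 2, 1], confSigmoid=1, isNormalized=1)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the mapped namespaces into the plugin nodes above,
    # then drop the original graph outputs so NMS is the only output.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)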
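
### Note on verifying the engine (Step 3) ###
Before wiring the engine into the GStreamer pipeline in Step 4, it can help to confirm that the ssdv2.engine written in Step 3 deserializes and exposes the expected Input/NMS bindings. The following is a minimal check using the binding-oriented TensorRT Python API from the TensorRT 5/6 era this sample targets; the engine path is the one used in Step 3, and these calls are deprecated in newer TensorRT releases.

# check_engine.py -- quick sanity check of the serialized engine
import tensorrt as trt

ENGINE_PATH = "/home/adlink/Downloads/model/ssdv2.engine"

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List every binding with its shape; expect an Input binding of 3x300x300
# plus the outputs produced by the NMS plugin.
for i in range(engine.num_bindings):
    direction = "input " if engine.binding_is_input(i) else "output"
    print(direction, engine.get_binding_name(i), engine.get_binding_shape(i))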