Refer to https://github.com/jkjung-avt/tensorrt_demos and https://jkjung-avt.github.io/jetpack-4.4/
HW: Neon-2000-JNX, JetPack 4.5

1. Run the following commands to do basic set-up of the system.

$ sudo apt update

### Set proper environment variables
$ cd ~/Downloads
$ git clone https://github.com/jkjung-avt/jetson_nano.git
$ cd jetson_nano
$ ./install_basics.sh
$ source ${HOME}/.bashrc

2. Install dependencies for the python3 "cv2" module.

$ sudo apt-get update
$ sudo apt-get install -y build-essential make cmake cmake-curses-gui \
      git g++ pkg-config curl libfreetype6-dev \
      libcanberra-gtk-module libcanberra-gtk3-module
$ sudo apt-get install -y python3-dev python3-testresources python3-pip
$ sudo pip3 install -U pip Cython
$ cd ~/Downloads/jetson_nano
$ ./install_protobuf-3.8.0.sh
$ sudo pip3 install numpy matplotlib==3.2.2

### Install tensorflow-1.15.2
$ sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
$ sudo pip3 install -U pip testresources setuptools
$ sudo pip3 install -U numpy==1.16.1 future mock h5py==2.10.0 keras_preprocessing keras_applications gast==0.2.2 futures pybind11
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==1.15.2

### Install pycuda==2019.1.2
$ cd ${HOME}/Downloads
$ git clone https://github.com/jkjung-avt/tensorrt_demos.git
$ cd ${HOME}/Downloads/tensorrt_demos/ssd
$ ./install_pycuda.sh

### Install onnx==1.4.1
### Install version "1.4.1" (not the latest version) of the python3 "onnx" module.
### Note that the "onnx" module depends on protobuf==3.8.0 (installed above).
$ sudo pip3 install onnx==1.4.1

### Go to the "plugins/" subdirectory and build the "yolo_layer" plugin.
### When done, a "libyolo_layer.so" should be generated.
$ cd ${HOME}/Downloads/tensorrt_demos/plugins
$ make

### Download the YOLO models
$ cd ${HOME}/Downloads/tensorrt_demos/yolo
$ ./download_yolo.sh

### Convert the YOLO (darknet) model to ONNX
$ python3 yolo_to_onnx.py -m yolov3-416

### Optimize the ONNX model with TensorRT (takes about 4 minutes)
### yolov3-416 FP32
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov3-416.onnx --workspace=3000 --maxBatch=4 --verbose --saveEngine=yolov3-416-fp32.engine

### Optional
### Optimize the ONNX model with TensorRT (takes about 14 minutes)
### yolov3-416 FP16
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov3-416.onnx --workspace=3000 --maxBatch=4 --fp16 --verbose --saveEngine=yolov3-416-fp16.engine

### Test with EVA
### yolov3-416 FP16
$ gst-launch-1.0 pylonsrc camera=0 fps=11 ! videoconvert ! adrt model=~/Downloads/tensorrt_demos/yolo/yolov3-416-fp16.engine scale=0.004 mean="0 0 0" device=0 batch=1 ! adtranslator label=~/Desktop/EVA_IDE/model/yolo_RT_labels.txt topology=yolov3 dims=1,255,13,13,1,255,26,26,1,255,52,52 input_width=416 engine-type=2 ! admetadrawer ! videoconvert ! fpsdisplaysink video-sink=xvimagesink text-overlay=true

### yolov3-416 FP32
$ gst-launch-1.0 pylonsrc camera=0 fps=6 ! videoconvert ! adrt model=~/Downloads/tensorrt_demos/yolo/yolov3-416-fp32.engine scale=0.004 mean="0 0 0" device=0 batch=1 ! adtranslator label=~/Desktop/EVA_IDE/model/yolo_RT_labels.txt topology=yolov3 dims=1,255,13,13,1,255,26,26,1,255,52,52 input_width=416 engine-type=2 ! admetadrawer ! videoconvert ! fpsdisplaysink video-sink=xvimagesink text-overlay=true
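A quick way to verify that trtexec produced a loadable engine, independent of the EVA pipeline above, is to deserialize it with the TensorRT Python API and print its bindings; the output shapes should match the dims passed to adtranslator. This is only a minimal sketch and not part of the original guide: it assumes the TensorRT Python bindings bundled with JetPack 4.5, and the file name "check_engine.py" is just an example.

### Optional: sanity-check the serialized engine
### Save the following as check_engine.py in the yolo/ directory, then run:
### $ cd ${HOME}/Downloads/tensorrt_demos/yolo
### $ python3 check_engine.py

import tensorrt as trt

ENGINE_PATH = "yolov3-416-fp32.engine"   # engine produced by the trtexec step above

# Deserialize the engine and list its input/output bindings with their shapes.
logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input " if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))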
### Optional
### If the model was trained with 2 classes, input size = 416x416
### Model name: yolov3-416_test.engine, label file: label.txt
$ python3 yolo_to_onnx.py -m yolov3-416 --category_num 2
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov3-416.onnx --workspace=3000 --maxBatch=4 --verbose --saveEngine=yolov3-416_test.engine
$ gst-launch-1.0 pylonsrc camera=0 fps=6 ! videoconvert ! adrt model=yolov3-416_test.engine scale=0.004 mean="0 0 0" device=0 batch=1 ! adtranslator label=label.txt topology=yolov3 dims=1,21,13,13,1,21,26,26,1,21,52,52 input_width=416 engine-type=2 ! admetadrawer ! videoconvert ! ximagesink sync=false
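For reference, the dims values passed to adtranslator follow from YOLOv3's output layout: each of the three detection heads predicts 3 anchors per grid cell with (num_classes + 5) channels each, so 80 COCO classes give 3 x 85 = 255 channels and 2 classes give 3 x 7 = 21, on 13x13, 26x26 and 52x52 grids for a 416x416 input. The sketch below is not part of the original guide; the helper name is illustrative and simply prints the dims string for a given --category_num.

### Example helper: compute the adtranslator dims string for a standard 3-head YOLOv3
def yolov3_dims(num_classes, input_size=416, batch=1):
    channels = 3 * (num_classes + 5)                 # 3 anchors x (classes + 4 box coords + 1 objectness)
    grids = [input_size // s for s in (32, 16, 8)]   # 13, 26, 52 for a 416x416 input
    return ",".join(f"{batch},{channels},{g},{g}" for g in grids)

print(yolov3_dims(80))  # 1,255,13,13,1,255,26,26,1,255,52,52  (COCO models above)
print(yolov3_dims(2))   # 1,21,13,13,1,21,26,26,1,21,52,52     (--category_num 2 example)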