JetPack includes:
• Full desktop Linux with NVIDIA drivers
• AI and computer vision libraries and APIs
• Developer tools
• Documentation and sample code
Recommended System Requirements
Training GPU:
Maxwell-, Pascal-, Volta-, or Turing-based GPU (ideally with at least 6GB of video memory); alternatively, an AWS P2/P3 instance or a Microsoft Azure N-series VM
Ubuntu 16.04/18.04 x86_64
Deployment:
Jetson Nano Developer Kit with JetPack 4.2 or newer (Ubuntu 18.04 aarch64)
Jetson Xavier Developer Kit with JetPack 4.0 or newer (Ubuntu 18.04 aarch64)
Jetson TX2 Developer Kit with JetPack 3.0 or newer (Ubuntu 16.04 aarch64)
Jetson TX1 Developer Kit with JetPack 2.3 or newer (Ubuntu 16.04 aarch64)
Jetson Nano Developer Kit
Jetson Nano Device
Jetson Nano was introduced in April 2019 for only $99.
[Figure: the Jetson Nano board, with its numbered parts listed below]
1. microSD card slot for main storage
2. 40-pin expansion header
3. Micro-USB port for 5V power input or for data
4. Gigabit Ethernet port
5. USB 3.0 ports (x4)
6. HDMI output port
7. DisplayPort connector
8. DC barrel jack for 5V power input
9. MIPI CSI camera connector
Power input: parts 3 (micro-USB) and 8 (DC barrel jack); camera: part 9 (MIPI CSI connector).
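The flashing command itself did not survive in this capture. A minimal sketch, assuming the card enumerates as /dev/sde (check with lsblk) and the JetPack card image was downloaded as jetson-nano-sd.zip (both names are placeholders): streaming the zip through unzip into dd is what produces the partial-record counts ("0+N records") shown below.

# Identify the card first -- dd overwrites whatever device you point it at
$ lsblk
# Stream the zipped image onto the card (device and file name are placeholders)
$ unzip -p jetson-nano-sd.zip | sudo dd of=/dev/sde bs=1M status=progress
$ sync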
0+167548 records in
0+167548 records out
12884901888 bytes (13 GB, 12 GiB) copied, 511.602 s, 25.2 MB/s
# the writing process generates 12 partitions; verify with fdisk:
$ sudo fdisk -l
GPT PMBR size mismatch (25165823 != 62333951) will be corrected by w(rite).
Disk /dev/sde: 29.7 GiB, 31914983424 bytes, 62333952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E696E264-F2EA-434A-900C-D9ACA2F99E43
Device      Start      End       Sectors   Size  Type
/dev/sde1   24576      25165790  25141215  12G   Linux filesystem
/dev/sde2   2048       2303      256       128K  Linux filesystem
/dev/sde3   4096       4991      896       448K  Linux filesystem
/dev/sde4   6144       7295      1152      576K  Linux filesystem
/dev/sde5   8192       8319      128       64K   Linux filesystem
/dev/sde6   10240      10623     384       192K  Linux filesystem
/dev/sde7   12288      13439     1152      576K  Linux filesystem
/dev/sde8   14336      14463     128       64K   Linux filesystem
/dev/sde9   16384      17663     1280      640K  Linux filesystem
/dev/sde10  18432      19327     896       448K  Linux filesystem
/dev/sde11  20480      20735     256       128K  Linux filesystem
/dev/sde12  22528      22687     160       80K   Linux filesystem
Partition table entries are not in disk order.
# When the dd command finishes, eject the disk device from the command line:
$ sudo eject /dev/sde
# Physically remove microSD card from the computer.
Steps:
Insert the microSD card into its slot (part 1).
Connect the display, USB keyboard/mouse, and Ethernet cable.
Depending on the power supply you want to use, you may have to add or remove the jumper for power selection:
– If using the DC barrel jack (part 8), the jumper must be set.
– If using micro-USB (part 3), the jumper must be off.
Plug in the power supply. The green LED (D53) close to the micro USB port should turn green, and the display should show the NVIDIA logo before booting begins.
# method 2: with `-X`; `-X` enables X11 forwarding (ForwardX11)
$ ssh -X [email protected]
Add CUDA environment variables
Edit ~/.bashrc:
# Add this to your .bashrc file
export CUDA_HOME=/usr/local/cuda
# Adds the CUDA compiler to the PATH
export PATH=$CUDA_HOME/bin:$PATH
# Adds the libraries
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
Check the CUDA version
$ source ~/.bashrc
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Mon_Mar_11_22:13:24_CDT_2019
Cuda compilation tools, release 10.0, V10.0.326
Check system versions
$ uname -a
Linux nano-desktop 4.9.140-tegra #1 SMP PREEMPT Sat Oct 19 15:54:06 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
[jetson-inference] Checking for 'dialog' deb package...installed
[jetson-inference] FOUND_DIALOG=INSTALLED
[jetson-inference] Model selection status: 0
[jetson-inference] No models were selected for download.
[jetson-inference] to run this tool again, use the following commands:
    $ cd <jetson-inference>/tools
    $ ./download-models.sh
[jetson-inference] Checking for 'dialog' deb package...installed
[jetson-inference] FOUND_DIALOG=INSTALLED
head: cannot open '/etc/nv_tegra_release' for reading: No such file or directory
[jetson-inference] reading L4T version from "dpkg-query --show nvidia-l4t-core"
[jetson-inference] Jetson BSP Version: L4T R32.2
[jetson-inference] installation complete, exiting with status code 0
[jetson-inference] to run this tool again, use the following commands:
    $ cd <jetson-inference>/build
    $ ./install-pytorch.sh
[Pre-build] Finished CMakePreBuild script
-- Finished installing dependencies
-- using patched FindCUDA.cmake
Looking for pthread.h
Looking for pthread.h - found
Looking for pthread_create
Looking for pthread_create - not found
Looking for pthread_create in pthreads
Looking for pthread_create in pthreads - not found
Looking for pthread_create in pthread
Looking for pthread_create in pthread - found
Found Threads: TRUE
[  1%] Linking CXX shared library ../aarch64/lib/libjetson-utils.so
[ 31%] Built target jetson-utils
[ 32%] Linking CXX shared library aarch64/lib/libjetson-inference.so
[ 43%] Built target jetson-inference
[ 44%] Linking CXX executable ../../aarch64/bin/imagenet-console
[ 45%] Built target imagenet-console
[ 46%] Linking CXX executable ../../aarch64/bin/imagenet-camera
[ 47%] Built target imagenet-camera
[ 47%] Linking CXX executable ../../aarch64/bin/detectnet-console
[ 48%] Built target detectnet-console
[ 49%] Linking CXX executable ../../aarch64/bin/detectnet-camera
[ 50%] Built target detectnet-camera
[ 50%] Linking CXX executable ../../aarch64/bin/segnet-console
[ 51%] Built target segnet-console
[ 52%] Linking CXX executable ../../aarch64/bin/segnet-camera
[ 53%] Built target segnet-camera
[ 54%] Linking CXX executable ../../aarch64/bin/superres-console
[ 55%] Built target superres-console
[ 56%] Linking CXX executable ../../aarch64/bin/homography-console
[ 57%] Built target homography-console
[ 58%] Linking CXX executable ../../aarch64/bin/homography-camera
[ 59%] Built target homography-camera
[ 60%] Automatic MOC for target camera-capture
[ 60%] Built target camera-capture_autogen
[ 61%] Linking CXX executable ../../aarch64/bin/camera-capture
[ 64%] Built target camera-capture
[ 65%] Linking CXX executable ../../aarch64/bin/trt-bench
[ 66%] Built target trt-bench
[ 67%] Linking CXX executable ../../aarch64/bin/trt-console
[ 68%] Built target trt-console
[ 69%] Linking CXX executable ../../../aarch64/bin/camera-viewer
[ 70%] Built target camera-viewer
[ 71%] Linking CXX executable ../../../aarch64/bin/v4l2-console
[ 72%] Built target v4l2-console
[ 73%] Linking CXX executable ../../../aarch64/bin/v4l2-display
[ 74%] Built target v4l2-display
[ 75%] Linking CXX executable ../../../aarch64/bin/gl-display-test
[ 76%] Built target gl-display-test
[ 76%] Linking CXX shared library ../../../aarch64/lib/python/2.7/jetson_utils_python.so
[ 82%] Built target jetson-utils-python-27
[ 83%] Linking CXX shared library ../../../aarch64/lib/python/3.6/jetson_utils_python.so
[ 89%] Built target jetson-utils-python-36
[ 90%] Linking CXX shared library ../../aarch64/lib/python/2.7/jetson_inference_python.so
[ 95%] Built target jetson-inference-python-27
[ 96%] Linking CXX shared library ../../aarch64/lib/python/3.6/jetson_inference_python.so
[100%] Built target jetson-inference-python-36
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/include/jetson-inference/detectNet.h
-- Installing: /usr/local/include/jetson-inference/homographyNet.h
-- Installing: /usr/local/include/jetson-inference/imageNet.h
-- Installing: /usr/local/include/jetson-inference/segNet.h
-- Installing: /usr/local/include/jetson-inference/superResNet.h
-- Installing: /usr/local/include/jetson-inference/tensorNet.h
-- Installing: /usr/local/include/jetson-inference/imageNet.cuh
-- Installing: /usr/local/include/jetson-inference/randInt8Calibrator.h
-- Installing: /usr/local/lib/libjetson-inference.so
-- Set runtime path of "/usr/local/lib/libjetson-inference.so" to ""
-- Installing: /usr/local/share/jetson-inference/cmake/jetson-inferenceConfig.cmake
-- Installing: /usr/local/share/jetson-inference/cmake/jetson-inferenceConfig-noconfig.cmake
-- Installing: /usr/local/bin/imagenet-console
-- Set runtime path of "/usr/local/bin/imagenet-console" to ""
-- Installing: /usr/local/bin/imagenet-camera
-- Set runtime path of "/usr/local/bin/imagenet-camera" to ""
-- Installing: /usr/local/bin/detectnet-console
-- Set runtime path of "/usr/local/bin/detectnet-console" to ""
-- Installing: /usr/local/bin/detectnet-camera
-- Set runtime path of "/usr/local/bin/detectnet-camera" to ""
-- Installing: /usr/local/bin/segnet-console
-- Set runtime path of "/usr/local/bin/segnet-console" to ""
-- Installing: /usr/local/bin/segnet-camera
-- Set runtime path of "/usr/local/bin/segnet-camera" to ""
-- Installing: /usr/local/bin/superres-console
-- Set runtime path of "/usr/local/bin/superres-console" to ""
-- Installing: /usr/local/bin/homography-console
-- Set runtime path of "/usr/local/bin/homography-console" to ""
-- Installing: /usr/local/bin/homography-camera
-- Set runtime path of "/usr/local/bin/homography-camera" to ""
-- Installing: /usr/local/bin/camera-capture
-- Set runtime path of "/usr/local/bin/camera-capture" to ""
-- Installing: /usr/local/include/jetson-utils/XML.h
-- Installing: /usr/local/include/jetson-utils/commandLine.h
-- Installing: /usr/local/include/jetson-utils/filesystem.h
-- Installing: /usr/local/include/jetson-utils/mat33.h
-- Installing: /usr/local/include/jetson-utils/pi.h
-- Installing: /usr/local/include/jetson-utils/rand.h
-- Installing: /usr/local/include/jetson-utils/timespec.h
-- Installing: /usr/local/include/jetson-utils/gstCamera.h
-- Installing: /usr/local/include/jetson-utils/v4l2Camera.h
-- Installing: /usr/local/include/jetson-utils/gstDecoder.h
-- Installing: /usr/local/include/jetson-utils/gstEncoder.h
-- Installing: /usr/local/include/jetson-utils/gstUtility.h
-- Installing: /usr/local/include/jetson-utils/cudaFont.h
-- Installing: /usr/local/include/jetson-utils/cudaMappedMemory.h
-- Installing: /usr/local/include/jetson-utils/cudaNormalize.h
-- Installing: /usr/local/include/jetson-utils/cudaOverlay.h
-- Installing: /usr/local/include/jetson-utils/cudaRGB.h
-- Installing: /usr/local/include/jetson-utils/cudaResize.h
-- Installing: /usr/local/include/jetson-utils/cudaUtility.h
-- Installing: /usr/local/include/jetson-utils/cudaWarp.h
-- Installing: /usr/local/include/jetson-utils/cudaYUV.h
-- Installing: /usr/local/include/jetson-utils/glDisplay.h
-- Installing: /usr/local/include/jetson-utils/glTexture.h
-- Installing: /usr/local/include/jetson-utils/glUtility.h
-- Installing: /usr/local/include/jetson-utils/imageIO.h
-- Installing: /usr/local/include/jetson-utils/loadImage.h
-- Installing: /usr/local/include/jetson-utils/devInput.h
-- Installing: /usr/local/include/jetson-utils/devJoystick.h
-- Installing: /usr/local/include/jetson-utils/devKeyboard.h
-- Installing: /usr/local/include/jetson-utils/Endian.h
-- Installing: /usr/local/include/jetson-utils/IPv4.h
-- Installing: /usr/local/include/jetson-utils/NetworkAdapter.h
-- Installing: /usr/local/include/jetson-utils/Socket.h
-- Installing: /usr/local/include/jetson-utils/Event.h
-- Installing: /usr/local/include/jetson-utils/Mutex.h
-- Installing: /usr/local/include/jetson-utils/Process.h
-- Installing: /usr/local/include/jetson-utils/Thread.h
-- Installing: /usr/local/lib/libjetson-utils.so
-- Installing: /usr/local/share/jetson-utils/cmake/jetson-utilsConfig.cmake
-- Installing: /usr/local/share/jetson-utils/cmake/jetson-utilsConfig-noconfig.cmake
-- Installing: /usr/local/bin/camera-viewer
-- Set runtime path of "/usr/local/bin/camera-viewer" to ""
-- Installing: /usr/local/bin/gl-display-test
-- Set runtime path of "/usr/local/bin/gl-display-test" to ""
-- Installing: /usr/local/bin/camera-viewer.py
-- Installing: /usr/local/bin/cuda-from-numpy.py
-- Installing: /usr/local/bin/cuda-to-numpy.py
-- Installing: /usr/local/bin/gl-display-test.py
-- Installing: /usr/lib/python2.7/dist-packages/jetson_utils_python.so
-- Set runtime path of "/usr/lib/python2.7/dist-packages/jetson_utils_python.so" to ""
-- Installing: /usr/lib/python2.7/dist-packages/Jetson
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/Utils
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/Utils/__init__.py
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/__init__.py
-- Installing: /usr/lib/python2.7/dist-packages/jetson
-- Installing: /usr/lib/python2.7/dist-packages/jetson/utils
-- Installing: /usr/lib/python2.7/dist-packages/jetson/utils/__init__.py
-- Installing: /usr/lib/python2.7/dist-packages/jetson/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/jetson_utils_python.so
-- Set runtime path of "/usr/lib/python3.6/dist-packages/jetson_utils_python.so" to ""
-- Installing: /usr/lib/python3.6/dist-packages/Jetson
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/Utils
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/Utils/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/jetson
-- Installing: /usr/lib/python3.6/dist-packages/jetson/utils
-- Installing: /usr/lib/python3.6/dist-packages/jetson/utils/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/jetson/__init__.py
-- Installing: /usr/local/bin/detectnet-camera.py
-- Installing: /usr/local/bin/detectnet-console.py
-- Installing: /usr/local/bin/imagenet-camera.py
-- Installing: /usr/local/bin/imagenet-console.py
-- Installing: /usr/local/bin/my-detection.py
-- Installing: /usr/local/bin/my-recognition.py
-- Installing: /usr/local/bin/segnet-batch.py
-- Installing: /usr/local/bin/segnet-camera.py
-- Installing: /usr/local/bin/segnet-console.py
-- Installing: /usr/lib/python2.7/dist-packages/jetson_inference_python.so
-- Set runtime path of "/usr/lib/python2.7/dist-packages/jetson_inference_python.so" to ""
-- Up-to-date: /usr/lib/python2.7/dist-packages/Jetson
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/__init__.py
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/Inference
-- Installing: /usr/lib/python2.7/dist-packages/Jetson/Inference/__init__.py
-- Up-to-date: /usr/lib/python2.7/dist-packages/jetson
-- Installing: /usr/lib/python2.7/dist-packages/jetson/__init__.py
-- Installing: /usr/lib/python2.7/dist-packages/jetson/inference
-- Installing: /usr/lib/python2.7/dist-packages/jetson/inference/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/jetson_inference_python.so
-- Set runtime path of "/usr/lib/python3.6/dist-packages/jetson_inference_python.so" to ""
-- Up-to-date: /usr/lib/python3.6/dist-packages/Jetson
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/Inference
-- Installing: /usr/lib/python3.6/dist-packages/Jetson/Inference/__init__.py
-- Up-to-date: /usr/lib/python3.6/dist-packages/jetson
-- Installing: /usr/lib/python3.6/dist-packages/jetson/__init__.py
-- Installing: /usr/lib/python3.6/dist-packages/jetson/inference
-- Installing: /usr/lib/python3.6/dist-packages/jetson/inference/__init__.py
The project will be built to jetson-inference/build/aarch64, with the following directory structure:
|-build
|  \aarch64
|     \bin       where the sample binaries are built to
|     \networks  where the network models are stored
|     \images    where the test images are stored
|     \include   where the headers reside
|     \lib       where the libraries are built to
These also get installed under /usr/local/. The Python bindings for the jetson.inference and jetson.utils modules also get installed under /usr/lib/python*/dist-packages/.
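A quick way to confirm the bindings are importable (a one-liner sketch; the module names come from the install log above):

$ python3 -c "import jetson.inference, jetson.utils; print('jetson bindings OK')"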
You can see libjetson-utils.so and libjetson-inference.so under the lib directory.
[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/ResNet-18/ResNet-18.caffemodel.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/ResNet-18/deploy.prototxt networks/ResNet-18/ResNet-18.caffemodel
[TRT] retrieved Output tensor "prob": 1000x1x1
[TRT] retrieved Input tensor "data": 3x224x224
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT] device GPU, completed building CUDA engine
[TRT] network profiling complete, writing engine cache to networks/ResNet-18/ResNet-18.caffemodel.1.1.GPU.FP16.engine
[TRT] device GPU, completed writing engine cache to networks/ResNet-18/ResNet-18.caffemodel.1.1.GPU.FP16.engine
[TRT] device GPU, networks/ResNet-18/ResNet-18.caffemodel loaded
[TRT] device GPU, CUDA engine context initialized with 2 bindings
[TRT] binding -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 (CHANNEL) -- dim #1 224 (SPATIAL) -- dim #2 224 (SPATIAL)
[TRT] binding -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 (CHANNEL) -- dim #1 1 (SPATIAL) -- dim #2 1 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 prob binding index: 1
[TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000
device GPU, networks/ResNet-18/ResNet-18.caffemodel initialized.
[TRT] networks/ResNet-18/ResNet-18.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/ResNet-18/ResNet-18.caffemodel initialized.
[image] loaded 'images/orange_0.jpg' (1920 x 1920, 3 channels)
class 0950 - 0.996028 (orange)
imagenet-console: 'images/orange_0.jpg' -> 99.60276% class #950 (orange)
[TRT] ------------------------------------------------
[TRT] Timing Report networks/ResNet-18/ResNet-18.caffemodel
[TRT] ------------------------------------------------
[TRT] Pre-Process   CPU  0.10824ms  CUDA  0.34156ms
[TRT] Network       CPU 12.91854ms  CUDA 12.47026ms
[TRT] Post-Process  CPU  0.80311ms  CUDA  0.82672ms
[TRT] Total         CPU 13.82989ms  CUDA 13.63854ms
[TRT] ------------------------------------------------
[TRT] note -- when processing a single image, run 'sudo jetson_clocks' before to disable DVFS for more accurate profiling/timing measurements
imagenet-console: attempting to save output image to 'output_0.jpg'
imagenet-console: completed saving 'output_0.jpg'
imagenet-console: shutting down...
imagenet-console: shutdown complete
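As the [TRT] note above suggests, DVFS can skew single-image timings. A minimal sketch of locking the clocks before re-running the benchmark:

# Pin CPU/GPU/EMC clocks at their maximums (disables DVFS until reboot)
$ sudo jetson_clocks
# Re-run the console example for steadier numbers
$ ./imagenet-console --network=resnet-18 images/orange_0.jpg output_0.jpg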
Python
$ cd jetson-inference/build/aarch64/bin
$ sudo ./imagenet-console.py --network=resnet-18 images/orange_0.jpg output_0.jpg
[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/ResNet-18/ResNet-18.caffemodel.1.1.GPU.FP16.engine
[TRT] loading network profile from engine cache... networks/ResNet-18/ResNet-18.caffemodel.1.1.GPU.FP16.engine
[TRT] device GPU, networks/ResNet-18/ResNet-18.caffemodel loaded
[TRT] device GPU, CUDA engine context initialized with 2 bindings
[TRT] binding -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 (CHANNEL) -- dim #1 224 (SPATIAL) -- dim #2 224 (SPATIAL)
[TRT] binding -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 (CHANNEL) -- dim #1 1 (SPATIAL) -- dim #2 1 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 prob binding index: 1
[TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000
device GPU, networks/ResNet-18/ResNet-18.caffemodel initialized.
[TRT] networks/ResNet-18/ResNet-18.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/ResNet-18/ResNet-18.caffemodel initialized.
class 0950 - 0.996028 (orange)
image is recognized as 'orange' (class #950) with 99.602759% confidence
[TRT] ------------------------------------------------
[TRT] Timing Report networks/ResNet-18/ResNet-18.caffemodel
[TRT] ------------------------------------------------
[TRT] Pre-Process   CPU  0.06884ms  CUDA  0.32849ms
[TRT] Network       CPU 11.44888ms  CUDA 11.01536ms
[TRT] Post-Process  CPU  0.20783ms  CUDA  0.20708ms
[TRT] Total         CPU 11.72555ms  CUDA 11.55094ms
[TRT] ------------------------------------------------
[TRT] note -- when processing a single image, run 'sudo jetson_clocks' before to disable DVFS for more accurate profiling/timing measurements
Classify a live camera stream using an image recognition DNN.
optional arguments:
  --help             show this help message and exit
  --network NETWORK  pre-trained model to load (see below for options)
  --camera CAMERA    index of the MIPI CSI camera to use (e.g. CSI camera 0), or for V4L2 cameras, the /dev/video device to use; by default, MIPI CSI camera 0 will be used
  --width WIDTH      desired width of camera stream (default is 1280 pixels)
  --height HEIGHT    desired height of camera stream (default is 720 pixels)
imageNet arguments:
  --network NETWORK    pre-trained model to load, one of the following:
                       * alexnet
                       * googlenet (default)
                       * googlenet-12
                       * resnet-18
                       * resnet-50
                       * resnet-101
                       * resnet-152
                       * vgg-16
                       * vgg-19
                       * inception-v4
  --model MODEL        path to custom model to load (caffemodel, uff, or onnx)
  --prototxt PROTOTXT  path to custom prototxt to load (for .caffemodel only)
  --labels LABELS      path to text file containing the labels for each class
  --input_blob INPUT   name of the input layer (default is 'data')
  --output_blob OUTPUT name of the output layer (default is 'prob')
  --batch_size BATCH   maximum batch size (default is 1)
  --profile            enable layer profiling in TensorRT
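As an illustration of the custom-model flags above (a sketch; the my-model/ paths and file names are hypothetical placeholders, not files shipped with the project):

$ ./imagenet-console.py --model=my-model/classifier.onnx \
                        --labels=my-model/labels.txt \
                        --input_blob=data --output_blob=prob \
                        images/orange_0.jpg output_custom.jpg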
Camera type
MIPI CSI cameras are used by specifying the sensor index (0, 1, etc.)
V4L2 USB cameras are used by specifying their /dev/video node (/dev/video0, /dev/video1, etc.)
The default is to use MIPI CSI sensor 0 (--camera=0)
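For example, to run the camera demo on a USB webcam instead of the default CSI sensor (device path and resolution are illustrative):

# V4L2 USB camera on /dev/video0 at 640x480
$ ./imagenet-camera.py --network=resnet-18 --camera=/dev/video0 --width=640 --height=480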
Query the available formats with the following commands:
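(The commands themselves did not survive in this capture; a common way to query formats with the v4l-utils package, offered here as an assumption, is:)

# Install the V4L2 utilities, then list each camera node's supported formats
$ sudo apt-get install v4l-utils
$ v4l2-ctl --device=/dev/video0 --list-formats-ext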
DeepStream SDK 4.0.1 requires JetPack 4.2.2. Download deepstream_sdk_v4.0.1_jetson.tbz2 from here.
DeepStream SDK 4.0.2
DeepStream SDK 4.0.2 requires JetPack 4.3. Download deepstream_sdk_v4.0.2_jetson.tbz2 or deepstream-4.0_4.0.2-1_arm64.deb from here.
# (1) install the DeepStream SDK from the tar file
tar -xpvf deepstream_sdk_v4.0.2_jetson.tbz2
cd deepstream_sdk_v4.0.2_jetson
sudo tar -xvpf binaries.tbz2 -C /
sudo ./install.sh
sudo ldconfig
# (2) or install the DeepStream SDK from the deb
sudo apt-get install ./deepstream-4.0_4.0.2-1_arm64.deb
## NOTE: sources and samples folders will be found in /opt/nvidia/deepstream/deepstream-4.0
# After you have installed the DeepStream SDK,
# run these commands on the Jetson device to boost the clocks:
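(The command list itself is missing from this capture; the pair below follows NVIDIA's DeepStream quickstart for Jetson.)

# select the maximum-performance power mode, then lock the clocks at max
sudo nvpmodel -m 0
sudo jetson_clocks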
Running deepstream-app -h lists the available options:

Help Options:
  -h, --help        Show help options
  --help-all        Show all help options
  --help-gst        Show GStreamer Options
Application Options:
  -v, --version     Print DeepStreamSDK version
  -t, --tiledtext   Display Bounding box labels in tiled mode
  --version-all     Print DeepStreamSDK and dependencies version
  -c, --cfg-file    Set the config file
  -i, --input-file  Set the input file
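To exercise the pipeline end to end, point deepstream-app at one of the shipped sample configs (the filename below is one example from the DeepStream 4.0 samples and may differ in your install):

cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt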