Nvidia DeepStream 101: A beginner’s guide to real-time computer vision
In this blog, we’re going to cover some seriously cool stuff:
- Introduction to DeepStream (because you can’t build something without knowing what it is)
- Setting up DeepStream (because, let’s be real, it’s not going to set itself up)
Buckle up, it’s going to be a wild ride!
Introduction to DeepStream
Computer vision is a field of artificial intelligence that lets computers see and interpret the world around them. It’s used in all sorts of cool applications, like image and video analysis, object detection, and even making robots do our bidding. But the real reason we all love computer vision is the real-time stuff. Who doesn’t love seeing things happen in (near) real-time?
That’s where DeepStream comes in. It’s an SDK for building real-time computer vision applications on Nvidia GPUs and Jetson devices. At its core, it’s a framework for constructing GStreamer pipelines that can process video streams like a boss, and it comes loaded with pre-built plugins for tasks like object detection and tracking. And if you’re feeling adventurous, you can even write your own plugins!
DeepStream can handle all sorts of video streams, from local files to network streams like RTSP. It also has hardware-accelerated video encoding and decoding, so you can process high-res streams easily. Plus, it integrates with TensorRT, Nvidia’s library for optimizing deep learning models for inference on Nvidia hardware. With all that power, you can run advanced computer vision models in real-time without breaking a sweat (or your computer). Long live computer vision!
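Just to give you a taste of what one of these pipelines looks like, here’s a rough sketch spelled out with gst-launch-1.0: decode a video file, batch it with nvstreammux, run a detector with nvinfer, draw boxes with nvdsosd, and render the result. Don’t try this until after the setup below; the file and config paths assume a default DeepStream install under /opt/nvidia/deepstream/deepstream, and if your machine has no display, swap nveglglessink for fakesink.
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
nvvideoconvert ! nvdsosd ! nveglglessink \
filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! \
h264parse ! nvv4l2decoder ! mux.sink_0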
Setting up DeepStream 6.1.1
Setting up DeepStream requires a lot of things, but the most important one is probably CUDA. If you’re brave enough to tackle the installation of CUDA, you deserve a medal (or at least a round of applause). Just be warned: CUDA and its dependent software can be a bit finicky when it comes to version mismatches. But don’t worry, this blog is here to help beginners get started with DeepStream without getting lost in Nvidia’s vast documentation (or tearing their hair out). The goal is to make your journey as smooth as possible, so don’t give up just yet!
The instructions for installing DeepStream are slightly different for dGPU (discrete GPU) devices and Jetson devices. In this blog, the instructions are for setting up DeepStream on dGPU devices, specifically Ubuntu machines. For Jetson or RHEL (Red Hat Enterprise Linux) devices, follow the official documentation for the Nvidia DeepStream SDK.
Prerequisites:
- GStreamer 1.16.2
- NVIDIA driver 515.65.01
- CUDA 11.7 Update 1
- TensorRT 8.4.1.5
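These versions line up with DeepStream 6.1.1, which officially targets Ubuntu 20.04 for dGPU setups. If you’re not sure which release you’re running, a quick check never hurts:
cat /etc/os-release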
Step 1: Install GStreamer and other dependencies.
sudo apt install \
libssl1.1 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstreamer-plugins-base1.0-dev \
libgstrtspserver-1.0-0 libjansson4 \
libyaml-cpp-dev gcc make git python3
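If you want to double-check that GStreamer landed at the expected 1.16.x version, gst-inspect-1.0 (installed as part of gstreamer1.0-tools above) can tell you:
gst-inspect-1.0 --version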
Step 2: Install CUDA Toolkit and NVIDIA driver.
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda-11-7
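After the install finishes (and after a reboot so the new driver actually loads), it’s worth a quick sanity check. Assuming the toolkit landed in the default /usr/local/cuda-11.7 location (nvcc won’t be on your PATH until you add its bin directory):
nvidia-smi
/usr/local/cuda-11.7/bin/nvcc --version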
Step 3: Install TensorRT
sudo apt-get install libnvinfer8=8.4.1-1+cuda11.6 libnvinfer-plugin8=8.4.1-1+cuda11.6 libnvparsers8=8.4.1-1+cuda11.6 \
libnvonnxparsers8=8.4.1-1+cuda11.6 libnvinfer-bin=8.4.1-1+cuda11.6 libnvinfer-dev=8.4.1-1+cuda11.6 \
libnvinfer-plugin-dev=8.4.1-1+cuda11.6 libnvparsers-dev=8.4.1-1+cuda11.6 libnvonnxparsers-dev=8.4.1-1+cuda11.6 \
libnvinfer-samples=8.4.1-1+cuda11.6 libcudnn8=8.4.1.50-1+cuda11.6 libcudnn8-dev=8.4.1.50-1+cuda11.6 \
python3-libnvinfer=8.4.1-1+cuda11.6 python3-libnvinfer-dev=8.4.1-1+cuda11.6
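To confirm the TensorRT packages landed at the pinned 8.4.1 version, you can ask dpkg:
dpkg -l | grep nvinfer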
Step 4: Install DeepStream (using the .deb file)
If the wget link below doesn’t work for you, the same deepstream-6.1_6.1.1-1_amd64.deb can be downloaded from the DeepStream downloads page on the Nvidia developer site (you may need to sign in with an Nvidia developer account).
wget https://developer.nvidia.com/deepstream-6.1_6.1.1-1_amd64.deb
sudo apt-get install ./deepstream-6.1_6.1.1-1_amd64.deb
Step 5: Verify that DeepStream was installed successfully
deepstream-app --version
The output of this command should show the version of DeepStream that was installed. You can ignore the warnings about missing .so files for the Triton Inference Server.
Alternate Setup using Docker
DeepStream has got you covered with Docker containers for both dGPU and Jetson platforms. Deploying DeepStream applications has never been easier — just bundle all your dependencies into a container and you’re good to go. And where can you find these handy containers? Look no further than the NVIDIA container registry at https://ngc.nvidia.com. Just grab the nvidia-docker package and you’ll have access to GPU resources from within your containers. It’s like magic (but with less waving of wands and more typing on a keyboard). Happy containerizing!
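Once Docker and the nvidia-docker package are installed, it’s worth a quick sanity check that containers can actually see your GPU. A minimal sketch, assuming the nvidia/cuda:11.7.1-base-ubuntu20.04 image tag is still available on Docker Hub:
docker run --rm --runtime nvidia nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi
If that prints the same driver and GPU details as nvidia-smi on the host, you’re good to go.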
Pull the samples docker image to run sample applications using Docker
docker pull nvcr.io/nvidia/deepstream:6.1.1-samples
To run the docker container, use the following command
docker run -itd --rm --net=host --name deepstream --runtime nvidia -w /opt/nvidia/deepstream/deepstream nvcr.io/nvidia/deepstream:6.1.1-samples
To run applications inside the docker container, use the following command
docker exec -it deepstream bash
deepstream-app --version
This opens a bash shell inside the docker container, which you can use to run any of the sample DeepStream applications.
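If you want to go one step further inside the container, the samples image also ships the reference deepstream-app configs. A rough sketch, assuming the default layout of the 6.1.1 samples image (note that the container has no display by default, so you’ll want to switch the sink to a file or fakesink in the config first):
deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt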
And that’s a wrap! You’re all set to start building DeepStream applications. In the next tutorial, we’ll show you how to build a simple DeepStream application. No sweat, right? (Well, maybe a little sweat, but we’ll be there to help you every step of the way).