In this section we will install and configure the IoT Edge Runtime on an NVIDIA Jetson Device. This will require that we deploy a collection of Azure Services to support the modules that are defined in the associated IoT Edge Deployment for IoT Hub.
If you take a close look at the deployment, you will notice that it includes the following modules:
Module | Purpose | Backing Azure Service |
---|---|---|
edgeAgent | System Module used by IoT Edge to deploy and ensure uptime of modules defined in device deployment | Azure IoT Hub (Authorization and for obtaining deployment configuration) |
edgeHub | System Module responsible for inter-module communication and messaging back to Azure IoT Hub | Azure IoT Hub (Ingestion of Device to Cloud Telemetry) |
NVIDIADeepStreamSDK | Custom Module which runs the DeepStream workload; output is forwarded to the DeepStreamAnalytics Module for summarization | Telemetry is routed to the DeepStreamAnalytics module (see: IoT Edge - Declare Routes, and the illustrative route sketch after this table), where it is filtered and forwarded to Azure IoT Hub |
CameraTaggingModule | Custom Module for obtaining images from available RTSP sources for use in Training Custom Object Detection Models | CustomVision.AI for exporting of captured images for use in training Custom Object Detection model(s) |
azureblobstorageoniotedge | Custom Module for providing replication of data to a backing Azure Storage Account | Azure Storage Account for replication and long-term storage of captured images |
DeepStreamAnalytics | Custom Module that employs "Stream Analytics on IoT Edge" Module to Summarize Object Detection Results from NVIDIADeepStreamSDK | Azure Stream Analytics on Edge Job defined and served from Azure |
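For context, module-to-module routing of the kind described in the table above is declared in the deployment manifest under the $edgeHub desired properties. The fragment below is an illustrative sketch only; the route names and the output/input endpoint names are assumptions, not the actual routes defined in this repository's deployment template.

```json
{
  "$edgeHub": {
    "properties.desired": {
      "schemaVersion": "1.0",
      "routes": {
        "DeepStreamToAnalytics": "FROM /messages/modules/NVIDIADeepStreamSDK/outputs/* INTO BrokeredEndpoint(\"/modules/DeepStreamAnalytics/inputs/input\")",
        "AnalyticsToIoTHub": "FROM /messages/modules/DeepStreamAnalytics/outputs/* INTO $upstream"
      }
    }
  }
}
```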
In this section, we will only need to deploy an Azure IoT Hub and Azure Storage Account. If you are curious about the pricing involved for these services, they are summarized below:
- IoT Hub Pricing
- Azure Storage Account
- Azure Stream Analytics on Edge Pricing (Technically, even though we are using a job that is not contained in the end-user's subscription, billing does occur per device that runs the DeepStreamAnalytics Module)
The additional services, CustomVision.AI and Azure Stream Analytics on Edge, will be addressed in upcoming sections and will not be needed at this time.
If you wish to follow along with the steps in this module, we have recorded a livestream presentation titled "Configure and Deploy "Intelligent Video Analytics" to IoT Edge Runtime on NVIDIA Jetson" that walks through the steps below in great detail.
Before we install IoT Edge, we need to install a few utilities onto the NVIDIA Jetson device with:
sudo apt-get install -y curl nano
ARM64 builds of IoT Edge that are compatible with NVIDIA Jetson hardware are provided beginning with the 1.0.8 release tag of IoT Edge. To install the latest release of IoT Edge, run the following from a terminal on your NVIDIA Jetson device, or consult the official documentation:
# You can copy the entire text from this code block and
# paste in terminal. The comment lines will be ignored.
# Install the IoT Edge repository configuration
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
rm packages-microsoft-prod.deb
# Perform apt update
sudo apt-get update
# Install IoT Edge along with the Microsoft Defender for IoT micro agent
sudo apt-get install aziot-edge defender-iot-micro-agent-edge
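If you would like to confirm that the packages installed correctly before moving on, the iotedge command-line tool can report its version:

```bash
iotedge --version
```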
In this section, we will manually provision our Jetson hardware as an IoT Edge device. To accomplish this, we will need to deploy an active IoT Hub which we will use to register a new IoT Edge device and, from there, obtain a device connection string that will allow us to securely authenticate to the IoT Hub instance.
You can create a new IoT Hub, register an IoT Edge device, and obtain the device connection string needed to accomplish this by following the documentation for Registering an IoT Edge device in the Azure Portal or by Registering an IoT Edge device with the Azure-CLI.
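If you prefer the Azure CLI route, the sketch below shows the general flow, assuming the azure-iot extension is available and you are signed in with az login; the hub, resource group, and device names are placeholders:

```bash
# Add the IoT extension for the Azure CLI (one-time setup)
az extension add --name azure-iot

# Create an IoT Hub in an existing resource group (F1 free tier shown; F1 requires a partition count of 2)
az iot hub create --name my-iot-hub --resource-group my-resource-group --sku F1 --partition-count 2

# Register a new IoT Edge device identity
az iot hub device-identity create --device-id jetson-device --edge-enabled --hub-name my-iot-hub

# Retrieve the device connection string used in the next step
az iot hub device-identity connection-string show --device-id jetson-device --hub-name my-iot-hub
```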
You can quickly configure your IoT Edge device with symmetric key authentication using the following command:
sudo iotedge config mp --connection-string 'PASTE_DEVICE_CONNECTION_STRING_HERE'
The iotedge config mp command creates a configuration file on the device and enters your connection string in the file.
Apply the configuration changes.
sudo iotedge config apply
If you want to see the configuration file, you can open it:
sudo nano /etc/aziot/config.toml
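For manual provisioning with a symmetric key, the relevant portion of config.toml looks roughly like the following; the connection string shown here is a placeholder:

```toml
[provisioning]
source = "manual"
connection_string = "HostName=my-iot-hub.azure-devices.net;DeviceId=jetson-device;SharedAccessKey=<base64-key>"
```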
Check to see that the IoT Edge system service is running.
sudo iotedge system status
A successful status response is Ok.
If you need to troubleshoot the service, retrieve the service logs.
sudo iotedge system logs
Use the check tool to verify configuration and connection status of the device.
sudo iotedge check
Once configured successfully, the IoT Edge runtime will begin pulling down the edgeAgent and edgeHub system modules. These modules will run by default until we supply a deployment configuration containing additional modules.
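You can confirm that both system modules have started with:

```bash
sudo iotedge list
```

Both edgeAgent and edgeHub should eventually report a status of "running".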
Module 2.3: Prepare the Jetson Device to use the "Intelligent Video Analytics" sample configurations
In this module, we will mirror the sample configurations contained in this repo onto the Jetson device. This will require that we leverage some very specific paths that are referenced in those configurations, so be sure to follow these steps exactly as they are described.
We will begin by creating a directory to store the configuration on the Jetson device with:
sudo mkdir -p /data/misc/storage
Next, we will configure the /data directory and all subdirectories to be accessible from a non-privileged user account with:
sudo chmod -R 777 /data
Next, we will navigate to /data/misc/storage with:
cd /data/misc/storage
Then clone this repository to that directory with:
git clone https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure.git
Next, we need to configure the Jetson OS to allow access to the X11 window server from a container by granting local privileges on the X11 socket to the iotedge user account:
xhost local:iotedge
This will activate the privileges for the current logged-in session, but will not persist on reboot. Make the configuration persistent by opening /etc/profile for editing with:
sudo nano /etc/profile
Then add the following line to the top of that file:
xhost local:iotedge
On subsequent reboots, the iotedge user should now be able to spawn Graphical User Interfaces using the host X11 socket. This will allow us to view the bounding-box detections of the DeepStreamSDK module while running as an IoT Edge module (i.e. while running as a container).
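If you want to verify that the change took effect, running xhost with no arguments prints the current access control list, which should now include a local entry:

```bash
xhost
```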
To make diagnosing potential issues easier, you will also want to enable access to the docker service from your user account. This can be accomplished with:
sudo usermod -aG docker $USER
On subsequent login sessions, you will be able to invoke docker commands without the need to prepend sudo.
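If you would rather not log out and back in, you can apply the new group membership in the current shell and verify that docker works without sudo:

```bash
# Start a subshell with the docker group applied (logging out and back in works as well)
newgrp docker

# Should list the running IoT Edge containers without requiring sudo
docker ps
```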
We have successfully prepared the Jetson Device to use the "Intelligent Video Analytics" sample configurations. Next, we will configure the appropriate prerequisite Azure Storage Account and configuration needed for the Blob Storage Module (azureblobstorageoniotedge).
In this step, we will configure the IoT Edge Blob Storage Module, which is used in conjunction with the CameraTaggingModule to store image captures locally and replicate them to the cloud. Technically, this module is optional and the CameraTaggingModule can upload images directly to the cloud or CustomVision.AI without it, but it provides a more robust solution that allows the end user to capture and store images without the need for outbound internet access. You can learn more about the Camera Tagging Module and its supporting features in this in-depth article.
This module will require the use of Visual Studio Code, preferably running on a development machine that is not the Jetson device. Begin by cloning this repository to your development machine by navigating into the directory of your choosing and running:
git clone https://github.com/toolboc/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure.git
Next, open Visual Studio Code, select "File => Open Folder", then navigate to and select the newly created "Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure" folder.
Within the newly opened project, create a file named .env in the deployment-iothub folder and supply it with the following contents:
CONTAINER_REGISTRY_NAME=
LOCAL_STORAGE_ACCOUNT_KEY=
LOCAL_STORAGE_ACCOUNT_NAME=camerataggingmodulelocal
DESTINATION_STORAGE_NAME=camerataggingmodulecloud
CLOUD_STORAGE_CONNECTION_STRING=
This file will store key/value pairs that are used to replace values in deployment.template.json to produce a working deployment manifest. You will notice that these entries in deployment.template.json are preceded by the '$' symbol. This marks them as tokens for replacement during the generation of the deployment manifest.
For now, we will skip the CONTAINER_REGISTRY_NAME as that is only needed if you are pulling container images from a private repository. Since the modules in our deployment are all publicly available, it is not needed at this time.
Produce a value for LOCAL_STORAGE_ACCOUNT_KEY by visiting GeneratePlus. This will generate a random base64-encoded string that will be used to configure a secure connection to the local blob storage instance. You will want to supply the entire result, which should end with two equal signs (==).
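If you prefer to generate this value locally rather than through the website, openssl can produce a random base64-encoded string of the same shape (16 random bytes yields a result ending in the expected '=='):

```bash
openssl rand -base64 16
```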
LOCAL_STORAGE_ACCOUNT_NAME is best left as-is, but you are welcome to rename it, provided that it follows the naming format: the field can contain only lowercase letters and numbers, and the name must be between 3 and 24 characters.
DESTINATION_STORAGE_NAME is supplied from an assumed-to-exist blob storage container in the Azure Cloud. You can create this container by performing the following steps:
Navigate to the Azure Marketplace and search for 'blob', then select Storage Account - blob, file, table, queue
Create the Storage Account using settings similar to below (note: the Storage account name must be globally unique)
Select "Review + Create" => "Create" to deploy the new Storage Account Resource.
Navigate to your newly deployed Storage Account and select Containers:
Create a new storage container named "camerataggingmodulecloud" as shown below (the name is important as it matches the value in the .env):
CLOUD_STORAGE_CONNECTION_STRING can be obtained by visiting your newly created Storage Account and selecting Settings => Access Keys. Copy the entire contents of the Connection string and supply this as the value.
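If you prefer the Azure CLI over the portal for this step, the sketch below creates the Storage Account and container and retrieves the connection string; the account name, resource group, and region are placeholders, and the account name must be globally unique:

```bash
# Create the Storage Account
az storage account create --name camerataggingmodulestore --resource-group my-resource-group --location eastus --sku Standard_LRS --kind StorageV2

# Create the blob container referenced by DESTINATION_STORAGE_NAME
az storage container create --name camerataggingmodulecloud --account-name camerataggingmodulestore --auth-mode key

# Retrieve the connection string for CLOUD_STORAGE_CONNECTION_STRING
az storage account show-connection-string --name camerataggingmodulestore --resource-group my-resource-group
```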
Your completed .env file should look similar to the following:
CONTAINER_REGISTRY_NAME=
LOCAL_STORAGE_ACCOUNT_KEY=9LkgJa1ApIsISmuUHwonxg==
LOCAL_STORAGE_ACCOUNT_NAME=camerataggingmodulelocal
DESTINATION_STORAGE_NAME=camerataggingmodulecloud
CLOUD_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=camerataggingmodulestore;AccountKey=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000==;EndpointSuffix=core.windows.net
We are now ready to create and apply the sample deployment specified in the deployment-iothub folder.
Now that we have accounted for all of the prerequisite services, setup, and configuration, we are ready to produce a deployment to begin running a sample Intelligent Video Analytics pipeline on our Jetson device. The following steps will take place in Visual Studio Code, again preferably running on a development machine which is not the Jetson Device itself.
In the previous section, we created a .env file to support the configuration parameters needed by the Blob Storage Module. That .env file should be located in the deployment-iothub folder. Ensure that you have supplied the appropriate parameters and that the .env file exists before proceeding.
Next, we will configure the project to target the arm64v8 platform. To accomplish this, bring up the Command Palette with (CTRL+SHIFT+P), then search for the following task:
Azure IoT Edge: Set Default Target Platform for Edge Solution
Select the "Azure IoT Edge: Set Default Target Platform for Edge Solution" task and a drop-down will appear showing all available platforms. Select arm64v8
from the list. This will ensure that any modules added to the project and built-from source are targeted to the Jetson architecture.
Note: If you do not see any results when searching for the task above, ensure that you have installed the Azure IoT Tools Extension.
Next, we will need to set a DISPLAY environment variable for the NVIDIADeepStreamSDK module to enable communication with the X11 server running on the host (i.e. allow us to run a GUI-based application from a container). In most instances, the DISPLAY environment variable will be set to :1, which corresponds to the referenced physical display that may or may not be attached to the host. To obtain the actual value, run the following command in a terminal on the Jetson device:
echo $DISPLAY
Once you have obtained the DISPLAY value, open the deployment-iothub/deployment.template.json file on your development machine and update the value for the DISPLAY variable (illustrated in the fragment below) to match the result obtained by running the previous command on the Jetson Device. If this value is mismatched, you may see errors in the NVIDIADeepStreamSDK logs which mention an inability to create an EGL sink. Make sure to save this file if you make modifications.
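Module environment variables in the deployment template follow the standard IoT Edge manifest shape; the fragment below illustrates only the DISPLAY entry (the NVIDIADeepStreamSDK module definition in the repo's template contains additional settings that should be left intact):

```json
"NVIDIADeepStreamSDK": {
  "env": {
    "DISPLAY": {
      "value": ":1"
    }
  }
}
```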
Next, bring up the Command Palette again with (CTRL+SHIFT+P), this time searching for:
Azure IoT Hub: Select IoT Hub
Select the "Azure IoT Hub: Select IoT Hub" task and follow the prompts to connect to the IoT Hub that was used to register and configure the IoT Edge runtime on your Jetson Device. This may require that you authenticate your Visual Studio Code instance with Microsoft Azure if you have never done so before.
After you have selected the appropriate IoT Hub, expand the deployment-iothub folder and right-click the deployment.template.json file, then select "Generate IoT Edge Deployment Manifest". This will produce a new folder in that directory named "config" and an associated deployment named deployment.arm64v8.json. Right-click the deployment.arm64v8.json file and select "Create Deployment for Single Device".
A drop-down should appear showing all devices registered in your currently selected IoT Hub. Choose the device that represents your Jetson Device and the deployment will begin to activate on your device (provided the IoT Edge runtime is active and that the device is connected to the internet).
It may take a while for the images specified in the deployment to pull down to the device. You can verify that all images are pulled with:
sudo docker images
A completed deployment should eventually show a result similar to the following output:
REPOSITORY TAG IMAGE ID CREATED SIZE
mcr.microsoft.com/azureiotedge-hub 1.0 9b62dd5f824e 7 days ago 237MB
mcr.microsoft.com/azureiotedge-agent 1.0 ae9bfb3081c5 7 days ago 219MB
nvcr.io/nvidia/deepstream-l4t 5.0-dp-20.04-iot 7b4457646f87 5 weeks ago 2.16GB
toolboc/camerataggingmodule latest 704e9e0ce6dc 6 weeks ago 666MB
mcr.microsoft.com/azure-stream-analytics/azureiotedge 1.0.6-linux-arm32v7 bb2d6fbc5a3b 4 months ago 566MB
mcr.microsoft.com/azure-blob-storage latest 76f2e7849a91 11 months ago 203MB
When you are certain that the deployment has completed, you can begin modifying the solution to suit your needs. This will be explained in the next section.
This section is a bit open-ended as it will depend on how you intend to process video input on your Jetson Device.
Before making any modifications, it is highly advised to consult the DeepStream Documentation for Configuration Groups and remember that everything should be tracked using 'git' so recovery is always possible.
The unmodified sample deployment references a DeepStream configuration located on your Jetson Device at /data/misc/storage/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/services/DEEPSTREAM/configs. Within this directory there are some additional example DeepStream configurations:
DeepStream Sample Configuration Name | Description |
---|---|
DSConfig-CustomVisionAI.txt | Employs an example object detection model created with CustomVision.AI that is located in /data/misc/storage/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/services/CUSTOM_VISION_AI |
DSConfig-YoloV3.txt | Employs an example object detection model based on YoloV3 |
DSConfig-YoloV3Tiny.txt | Employs an example object detection model based on YoloV3Tiny |
Each of these examples is configured by default to process a single video input from a publicly available RTSP stream of Big Buck Bunny. We use this stream partly because it is one of the few reliable, publicly accessible RTSP streams on the internet, and partly to make it easy to modify the existing example to point to a custom RTSP endpoint, for example an IP-capable security camera.
To change the active DeepStream configuration in your deployment, you can modify the deployment.template.json to specify a different configuration file within the ENTRYPOINT specification for the NVIDIADeepStreamSDK module (sketched below), then repeat the steps in Module 2.5 to regenerate and apply the modified deployment. Note that using the YoloV3* configurations will require that you bring in some additional dependencies, which will be discussed in Module 3.
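As a rough illustration of the shape of that setting, the createOptions for the NVIDIADeepStreamSDK module contain a Docker Entrypoint array along the lines of the hypothetical fragment below; keep whatever binary and paths the repo's deployment.template.json already specifies and change only the configuration file name:

```json
{
  "Entrypoint": [
    "deepstream-app",
    "-c",
    "<existing-config-path>/DSConfig-YoloV3.txt"
  ]
}
```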
In the default deployment that we applied, the DeepStream configuration, DSConfig-CustomVisionAI.txt, can be modified on your Jetson device with:
nano /data/misc/storage/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/services/DEEPSTREAM/configs/DSConfig-CustomVisionAI.txt
After you have made edits to this configuration, restart the NVIDIADeepStreamSDK module to test it with:
docker restart NVIDIADeepStreamSDK
To monitor the logs, you can use:
iotedge logs NVIDIADeepStreamSDK
OR
docker logs -f NVIDIADeepStreamSDK
You will want to ensure that each of your input sources has its own entry in msgconv_config.txt, which you can modify with:
nano /data/misc/storage/Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure/services/DEEPSTREAM/configs/msgconv_config.txt
This file is used when generating telemetry for the Azure IoT Hub and specifies which video input / camera a given object detection originated from.
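For reference, entries in msgconv_config.txt use DeepStream's message converter format, with one [sensorN] group per video input. The values below are placeholders shown only to illustrate the general shape; consult the existing file in the repo for the exact fields it uses:

```ini
[sensor0]
enable=1
type=Camera
id=FrontDoorCamera
description=Front door RTSP camera
```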
One last note: if you are modifying the DeepStream configuration to use multiple video sources, you will want to set the [streammux] batch-size property equal to the number of video sources you are using for optimal performance. For example, if you have modified the DeepStream configuration to use four input RTSP streams, you will want to set batch-size=4 in the [streammux] group of your modified DeepStream configuration.
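A minimal illustration of the relevant fragment is shown below; leave the other properties in the [streammux] group of the existing configuration unchanged:

```ini
[streammux]
# batch-size should equal the number of configured video sources
batch-size=4
```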
Once you have modified the configuration to obtain video from your desired inputs, we are ready to look into how to create and deploy a Custom Object Detection Model from CustomVision.AI and explore the usage of academic-grade models using the YoloV3* configurations.