
NVIDIA Tests

Ok, here we go. I already installed GStreamer. Was still getting an H.264 encoding plugin missing error, so I needed to add this:

apt-get install gstreamer1.0-libav

Then on my Ubuntu laptop (192.168.0.103), I run:

gst-launch-1.0 -v udpsrc port=1234 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink

And then to get webcam streaming from the Jetson,

video-viewer /dev/video0 rtp://192.168.0.103:1234

and similarly,

detectnet /dev/video0 rtp://192.168.0.103:1234

and

segnet /dev/video0 rtp://192.168.0.103:1234

VNC is impractical because of the tiny resolution, and ssh -X tunnelling because it requires the host to have whatever drivers the Jetson uses. GStreamer is working well, though.

Cool.

I’m trying to run a Python program on the feed. Ended up finding the same issue reported elsewhere: RTP output not working. Bumped the thread.

Someone worked it out. Needed a do/while loop instead of a for loop.
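
Python doesn’t have a do/while, so presumably the fix amounts to a while True whose body runs at least once before the streaming check, rather than gating on the stream state up front. A minimal sketch of that loop shape, using the same jetson.utils calls as the script further down (the IP is the laptop from earlier):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("rtp://192.168.0.103:1234", "--headless")

# Emulated do/while: capture and render first, check IsStreaming() after,
# so the RTP output gets at least one frame before the loop can exit.
while True:
    img = camera.Capture()
    net.Detect(img)
    display.Render(img)
    if not camera.IsStreaming() or not display.IsStreaming():
        break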

It’s been a couple of months since I was here, and it’s now time to load up this GStreamer code again and see if I can stream a webcam feed from the robot, evaluate inference on the trained CNN, and colour in the different classes in pretty colours.

So, TODO:

  1. Get a panoptic segmentation colour mapping by combining the output layers of the CNN. So, something like this: we get a live webcam feed, run the frames through the semantic segmentation CNN, and combine the binary masks into a multiclass mask (see the sketch after this list).
  2. Then gstream these rainbow multiclass masked images to a listening GStreamer video-viewer program.
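
For step 1, a rough sketch of combining the per-class binary masks into one coloured multiclass mask, in plain NumPy (the threshold, class count, and palette here are placeholder assumptions, not values from the trained network):

import numpy as np

def combine_masks(masks, background_threshold=0.5):
    # masks: list of (H, W) binary/probability masks, one per class.
    stacked = np.stack(masks, axis=0)        # (num_classes, H, W)
    labels = np.argmax(stacked, axis=0) + 1  # reserve label 0 for background
    labels[np.max(stacked, axis=0) < background_threshold] = 0
    return labels                            # (H, W) integer label map

# Placeholder palette: one RGB colour per label, index 0 = background.
PALETTE = np.array([
    [0, 0, 0],
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
], dtype=np.uint8)

def colourise(labels):
    # Index the palette with the label map to get an (H, W, 3) uint8 image.
    return PALETTE[labels]

Step 2 is then just handing the colourised frames to the RTP videoOutput.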

So, practically though, I need to set up the Jetson again, and get it compiling the trained h5 file, and using webcam stills as input. Then gstream the result. Ok.

Step 1: fix the power cable for the Jetson. Then try to set up webcam gstreaming.

Ok, it’s plugged in, and nmap found it. Let’s log in… Ok, run GStreamer, and then I’ve got a folder, jetson-inference, and I’m volume mapping it:

./docker/run.sh --volume /home/chicken/jetson-inference:/jetson-inference

And the Python program, my-detection.py:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("rtp://192.168.101.127:1234","--headless") # 'my_video.mp4' for file

while True:
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
    if not camera.IsStreaming() or not display.IsStreaming():
        break

I am in this dusty jetson-inference docker.

pip3 list

appdirs (1.4.4)
boto3 (1.16.58)
botocore (1.19.58)
Cython (0.29.21)
dataclasses (0.8)
decorator (4.4.2)
future (0.18.2)
jmespath (0.10.0)
Mako (1.1.3)
MarkupSafe (1.1.1)
numpy (1.19.4)
pandas (1.1.5)
Pillow (8.0.1)
pip (9.0.1)
pycuda (2020.1)
python-dateutil (2.8.1)
pytools (2020.4.3)
pytz (2020.5)
s3transfer (0.3.4)
setuptools (51.0.0)
six (1.15.0)
torch (1.6.0)
torchaudio (0.6.0a0+f17ae39)
torchvision (0.7.0a0+78ed10c)
urllib3 (1.26.2)
wheel (0.36.1)

and my nice docker, with TF2 and everything already installed, says:

root@jetson:/dmc/jetson-inference/build/aarch64/bin# ./my-detection.py
Traceback (most recent call last):
File "./my-detection.py", line 24, in
import jetson.inference
ModuleNotFoundError: No module named 'jetson'

Ok, let’s try installing jetson-inference from source. First, tried the ‘Quick Reference’ instructions… Errored at

/dmc/jetson-inference/c/depthNet.h(190): error: identifier "COLORMAP_VIRIDIS_INVERTED" is undefined

/dmc/jetson-inference/c/depthNet.h(180): error: identifier "COLORMAP_VIRIDIS_INVERTED" is undefined

Next, ran a command mentioned lower down,

git submodule update --init

Now make -j$(nproc) gets to

/usr/bin/ld: cannot find -lnvcaffe_parser

And this reply suggests using sed to fix this…

sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt

Ok… Built! And…

[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- didn't discover any v4l2 devices
[gstreamer] gstCamera -- device discovery and auto-negotiation failed
[gstreamer] gstCamera -- failed to create device v4l2:///dev/video0
Traceback (most recent call last):
File "my-detection.py", line 31, in
camera = jetson.utils.videoSource("/dev/video0")
Exception: jetson.utils -- failed to create videoSource device

Ok, so docker/run.sh also has some other stuff going on, looking for V4L2 devices and such. Ok, added the device, and it’s working in my nice docker!

sudo docker run -it -p 8888:8888 -p 6006:6006 -p 8265:8265 --rm --runtime nvidia --network host --device /dev/video0 -v /home/chicken/:/dmc nx_setup

So what now… We open ‘/dev/video0’ with V4L2, then plug in my inference code below.

img = camera.Capture()
detections = net.Detect(img)
display.Render(img)
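
Eventually, to swap the detectnet call for the trained CNN, something like this sketch: model.h5, the 224x224 input size, and the palette are placeholder assumptions, and cudaToNumpy/cudaFromNumpy bridge between jetson.utils CUDA images and NumPy arrays.

import numpy as np
import tensorflow as tf
import jetson.utils

model = tf.keras.models.load_model("model.h5")  # hypothetical path
INPUT_SIZE = (224, 224)                         # hypothetical input size
PALETTE = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0]], dtype=np.uint8)

camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("rtp://192.168.0.103:1234", "--headless")

while True:
    img = camera.Capture()                 # CUDA image from the webcam
    frame = jetson.utils.cudaToNumpy(img)  # view it as a numpy array
    x = tf.image.resize(frame[..., :3].astype(np.float32) / 255.0, INPUT_SIZE)
    probs = model.predict(x[np.newaxis, ...])[0]  # (h, w, num_classes)
    labels = np.argmax(probs, axis=-1)            # multiclass label map
    coloured = PALETTE[labels]                    # (h, w, 3) uint8 image
    display.Render(jetson.utils.cudaFromNumpy(coloured))
    if not camera.IsStreaming() or not display.IsStreaming():
        break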


Ok ... zoom ahead and it's all worked out...
---
After going through the wringer another time: zooming ahead, fixing that jetson module issue involves
cd jetson-inference/build
make install
ldconfig


---

Saved the sender program to the GitHub repo.

To get video to the screen

gst-launch-1.0 -v udpsrc port=1234  caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" !  rtph264depay ! decodebin ! videoconvert ! autovideosink

or to an mp4

gst-launch-1.0 -v udpsrc port=1234 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! x264enc ! qtmux ! filesink location=test.mp4 -e