For the MFRU exhibition, we presented a variety of robots. The following is some documentation on the specifications and setup instructions. We are leaving the robots with konS.
All Robots
Li-Po batteries need to be stored at 3.8V per cell. For the exhibition, they can be charged to 4.15V per cell and run with a battery level monitor until they display 3.7V, at which point they should be swapped out. Future iterations of the robotic projects will use splitter cables to allow hot-swapping batteries, for zero downtime.
We leave our ISDT D2 Mark 2 charger, for maintaining and charging Li-Po batteries.
At setup time in a new location, the Raspberry Pi SD cards need to be updated to connect to the new Wi-Fi network. The simplest method is to physically place each SD card in a laptop and copy over a wpa_supplicant.conf file (with the credentials and locale below changed for the new network), plus an empty file called ssh, to enable remote login.
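The file looks like this (the country code, SSID, and password here are placeholders to be replaced with the venue's details):

country=SI
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="NEW_NETWORK_NAME"
    psk="NEW_NETWORK_PASSWORD"
}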
Then, after booting with the updated SD card, the robots' IP addresses need to be determined, typically using `nmap -sP 192.168.xxx.xxx` (or a Windows client like Zenmap).
Usernames and passwords used are:
LiDARbot – pi/raspberry
Vacuumbot – pi/raspberry and chicken/chicken
Pinkbot – pi/raspberry
Gripperbot – pi/raspberry
Birdbot – daniel/daniel
Nipplebot – just Arduino
Lightswitchbot – just Arduino and analog timer
For now, it is advised to shut down the robots by connecting to their IP address, typing sudo shutdown -H now, and waiting for the lights to turn off before unplugging. It's not 100% necessary, but it reduces the chances that the apt cache becomes corrupted, requiring you to reflash the SD card and start from scratch.
Starting from scratch involves reflashing the SD card with Raspberry Pi Imager, cloning the git repository, running pi_boot.sh and pip3 install -r requirements.txt, configuring config.py, and running create_service.sh to automate the startup.
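For reference, such a startup service on Raspberry Pi OS is typically a systemd unit along these general lines (the unit description, user, and paths here are placeholders, not necessarily what create_service.sh generates):

[Unit]
Description=Robot control service
After=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/robot
ExecStart=/usr/bin/python3 /home/pi/robot/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target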
LiDARbot
Raspberry Pi Zero W x 1
PCA9685 PWM controller x 1
RPLidar A1M8 x 1
FT5835M servo x 4
Powered by: Standard 5V Power bank [10Ah and 20Ah]
Startup Instructions:
– Plug in USB cables.
– Wait for service startup and go to URL.
– If Lidar chart is displaying, click ‘Turn on Brain’
Vacuumbot
Raspberry Pi 3b x 1
LM2596 stepdown converter x 1
RDS60 servo x 4
Powered by: 7.4V 4Ah Li-Po battery
NVIDIA Jetson NX x 1
Realsense D455 depth camera x 1
Powered by: 11.1V 4Ah Li-Po battery
Instructions:
– Plug the Jetson assembly connector into the 11.1V battery, and the RPi assembly connector into the 7.4V battery.
– Connect to the Jetson:
cd ~/jetson-server
python3 inference_server.py
– Go to the Jetson URL to view depth and object detection.
– Wait for the RPi service to start up.
– Connect to the RPi URL, and click ‘Turn on Brain’
Pinkbot
Raspberry Pi Zero W x 1
PCA9685 PWM controller x 1
LM2596 stepdown converter x 1
RDS60 servo x 8
Ultrasonic sensors x 3
Powered by: 7.4V 6.8Ah Li-Po battery
Instructions:
– Plug in the Li-Po battery.
– Wait for the RPi service to start up.
– Connect to the RPi URL, and click ‘Turn On Brain’
Gripperbot
Raspberry Pi Zero W x 1
150W stepdown converter (to 7.4V) x 1
LM2596 stepdown converter (to 5V) x 1
RDS60 servo x 4
MGGR996 servo x 1
Powered by: 12V 60W power supply
Instructions:
– Plug in to wall power.
– Wait for the RPi service to start up.
– Connect to the RPi URL, and click ‘Fidget to the Waves’
Birdbot
Raspberry Pi Zero W x 1
FT SM-85CL-C001 servo x 4
FE-URT-1 serial controller x 1
12V input step-down converter (to 5V) x 1
Ultrasonic sensor x 1
RPi camera v2.1 x 1
Powered by: 12V 60W power supply
Instructions:
– Plug in to wall power.
– Wait for the RPi service to start up.
– Connect to the RPi URL, and click ‘Fidget to the Waves’
I got the Feetech Smart Bus servos running on the RPi. I'm using them for the Birdbot.
Some gotchas:
Need to wire TX to TX, RX to RX.
Despite claiming a 1000000 baud rate, 115200 was required, or it says ‘There is no status packet!’
For a while, only one servo was working. Then I found their FAQ #5, installed their debugging software, plugged each servo in individually, and changed their IDs to 1/2/3/4. It was only running the first one because all of their IDs were still 1.
For Python, you need to pip3 install pyserial, and then import serial (see the snippet after this list).
I soldered on some extra wires to the motor + and -, to power the motor separately.
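A minimal check that the serial link is alive, in Python (the device path is an assumption; check ls /dev/ttyUSB* or /dev/ttyACM*):

import serial  # from pyserial

# 115200 worked where the advertised 1000000 did not
ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
print(ser.is_open)
ser.close()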
I wasn't getting any luck, but it turned out to be the MicroUSB cable (the OTG cable was OK). After swapping it out, I was able to run the simple_grabber app and confirm that data was coming out.
I debugged the Adafruit v1.29 issue too. So now I'm able to get the data in Python, which will probably be nicer to work with, as I haven't done proper C++ in about 20 years. But this Slamtec code would be the cleanest example to work with.
So I added in some C socket code and recompiled, so now the demo app takes a TCP connection and starts dumping data.
It was actually A LOT faster than the Python libraries. But I started getting ECONNREFUSED errors. I thought this might be because the Pi Zero W only has a single CPU, and the Python WSGI worker engine was eventlet, which only handles one worker for flask-socketio; running a socket server, a socket client, and socket-io on a single CPU was creating some sort of resource contention. But I couldn't solve it.
I found a C++ project with a Python wrapper, but it was compiled for 64-bit, and the software I'd need to recompile it for 32-bit, SWIG, seemed a bit complicated.
So, back to Python.
Actually, back to javascript, to get some visuals in a browser. The Adafruit example is for pygame, but we’re over a network, so that won’t work. Rendering Matplotlib graphs is going to be too slow. Need to stream data, and render it on the front end.
Detour #1: NPM
Ok… so, I need to install Node.js to install this one, which for the Raspberry Pi Zero W means ARMv6.
This is the most recent ARMv6 Node.js tarball:
wget https://nodejs.org/dist/latest-v11.x/node-v11.15.0-linux-armv6l.tar.gz
tar xzvf node-v11.15.0-linux-armv6l.tar.gz
cd node-v11.15.0-linux-armv6l
sudo cp -R * /usr/local/
sudo ldconfig
npm install --global yarn
sudo npm install --global yarn
npm install rplidar
npm ERR! serialport@4.0.1 install: `node-pre-gyp install --fallback-to-build`
Ok... never mind javascript for now.
Detour #2: Dash/Plotly
Let’s try this python code. https://github.com/Hyun-je/pyrplidar
Ok, well, it looks like it works, maybe, but where are they getting that nice plot from? It's not in the code. I want the plot.
So theta and distance are just polar coordinates, so I need to plot polar coordinates.
PolarToCartesian: convert a polar coordinate (r, θ) to cartesian (x, y): x = r cos(θ), y = r sin(θ).
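A quick sketch of that conversion for one lidar reading (the 6 m figure is just an assumed normalisation range for the plot, not a property of the A1M8):

import math

def polar_to_cartesian(distance_mm, angle_deg, max_range_mm=6000.0):
    # Scale the distance down to roughly 0..1 for the plot,
    # and convert the lidar's degrees to radians.
    r = distance_mm / max_range_mm
    theta = math.radians(angle_deg)
    return r * math.cos(theta), r * math.sin(theta)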
pip3 install pandas
pip3 install dash
k let's try...
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 48 from C header, got 40 from PyObject
ok
pip3 install --upgrade numpy
(if your numpy version is < 1.20.0)
ok now bad marshal data (unknown type code)
sheesh, what garbage.
Posting issue to their github and going back to the plan.
Reply from Plotly devs: pip3 won’t work, will need to try conda install, for ARM6
Ok let’s see if we can install plotly again….
Going to try miniconda – they have an ARMv6 file here…
Damn. 2014. Python 2. Nope. Ok, Plotly is not an option for the RPi Zero W. I could swap to another RPi, but I don't think the 1A output of the power bank can handle it, plus the camera, plus the lidar motor and laser. (I am using the 2.1A output for the servos.)
Solution #1: D3.js
Ok, just noting this link, as it looks useful for the lidar robot later.
“eventlet is the best performant option, with support for long-polling and WebSocket transports.”
apparently needs redis for message queueing…
pip install eventlet
pip install redis
Ok, and we need gunicorn, because eventlet is just for workers...
pip3 install gunicorn
gunicorn --worker-class eventlet -w 1 module:app
k, that throws an error.
I need to downgrade eventlet, or do some complicated thing.
pip install eventlet==0.30.2
gunicorn --bind 0.0.0.0 --worker-class eventlet -w 1 kmp8servo:app
(my service is called kmp8servo.py)
ok so do i need redis?
sudo apt-get install redis
ok it's already running now,
at /usr/bin/redis-server 127.0.0.1:6379
no, i don't really need redis. Could use sqlite, too. But let's use it anyway.
Ok amazing, gunicorn works. It's running on port 8000
Ok, after some work, socket-io is also working.
Received #0: Connected
Received #1: I'm connected!
Received #2: Server generated event
Received #3: Server generated event
Received #4: Server generated event
Received #5: Server generated event
Received #6: Server generated event
Received #7: Server generated event
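For reference, the rough shape of the working setup (a simplified sketch, not the actual kmp8servo.py; the event names here are made up):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# eventlet async mode, with Redis as the message queue so other processes can emit too
socketio = SocketIO(app, async_mode='eventlet',
                    message_queue='redis://127.0.0.1:6379')

@socketio.on('connect')
def on_connect():
    socketio.emit('status', 'Connected')

def publish_scan(points):
    # called by the lidar-reading loop; the front end listens for 'scan' events
    socketio.emit('scan', points)

# Served by: gunicorn --bind 0.0.0.0 --worker-class eventlet -w 1 kmp8servo:app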
So, I'm going to go with d3.js instead of p5.js, just because it's got a zillion more users, and there's plenty of polar coordinate code to look at, too.
Got it drawing the polar background… but I have to change the scale a bit. The code uses a linear scale from 0 to 1, so I need to get my distances down to something between 0 and 1. I also need radians instead of the degrees that the lidar is putting out.
ok finally. what an ordeal.
But we still need to get the Python lidar code working, or switch back to the C socket code I got working.
Ok, well, so I added D3 update code with transitions, and the javascript looks great.
But the C Slamtec SDK, and the Python RP Lidar wrappers are a source of pain.
I had the C sockets working briefly, but they stopped working, seemingly as I added more Python code between each socket read. I got frustrated and gave up.
The Adafruit library, with the fixes I made, seems to work now, but it's in a very precarious state, where looking at it funny causes a bad descriptor field or a checksum error.
But I managed to get the brain turning on, with the lidar. I’m using Redis to track the variables, using the memory.py code from this K9 repo. Thanks.
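The Redis part is just simple get/set of shared state between processes, along these lines (the key names here are illustrative, not the ones in memory.py):

import redis

r = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)

def set_state(key, value):
    r.set(key, value)

def get_state(key, default=None):
    value = r.get(key)
    return value if value is not None else default

# e.g. the web UI sets brain_on, and the control loop polls it
set_state('brain_on', '1')
print(get_state('brain_on'))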
I will come back to trying to fix the remaining python library issues, but for now, the robot is running, so, on to the next.
Unfortunately, the results of using the OpenCV/GStreamer example code to transmit over the network with H264 compression were even worse than the JPEG-over-HTTP attempt I'm trying to improve on. Much worse. That was surprising. It could be this Wi-Fi dongle, though, which is very disappointing on the Jetson Nano. It appears as though the Jetson Nano tries to keep total wattage around 10W, but plugging in the Realsense camera and a Wi-Fi dongle pulls way more than that (all 4A @ 5V supplied by the barrel jack). It may mean that wireless robotics with the Realsense is not practical on the Jetson.
Required apt install gstreamer1.0-rtsp to be installed.
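For the record, the sender was along these lines (a sketch, not the exact example code; the pipeline string, host, and port are assumptions, and OpenCV must be built with GStreamer support):

import cv2

# H.264-encode frames and send them as RTP over UDP to a viewing machine
pipeline = ('appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=2000 '
            'speed-preset=ultrafast ! rtph264pay ! '
            'udpsink host=192.168.0.10 port=5000')
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (640, 480), True)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)

cap.release()
writer.release()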
Back to the drawing board for getting the RealSense colour and depth transmitting to a different viewing machine on the network (while still providing distance data for server-side computation).
I just found this github from ETH Z. Not surprising that they have some of the most relevant datasets I’ve seen, pertaining to making proprioceptive autonomous systems. I came across their Autonomous Systems Labs dataset site.
One of the projects, panoptic mapping, is pretty much the panoptic segmentation from earlier research, combined with volumetric point clouds. “A flexible submap-based framework towards spatio-temporally consistent volumetric mapping and scene understanding.”
I have the Jetson NX, and for the last couple of days I've been trying to install OpenCV, and I'm still fighting it. But we're going to give it a few more rounds.
Let’s see, so I used DustyNV’s Dockerfile with an OpenCV setup for 4.4 or 4.5.
But the build dies, or is still missing libraries. There’s a bunch of them, and as I’m learning, everything is a linker issue. Everything. sudo ldconfig.
Here’s a 2019 quora answer to “What is sudo ldconfig in linux?”
“ldconfig updates the cache for the linker in a UNIX environment with libraries found in the paths specified in “/etc/ld.so.conf”. sudo executes it with superuser rights so that it can write to “/etc/ld.so.cache”.
You usually use this if you get errors about some dynamically linked libraries not being found when starting a program although they are actually present on the system. You might need to add their paths to “/etc/ld.so.conf” first, though.” – Marcel Noe
So taking my own advice, let’s see:
chicken@chicken:/etc/ld.so.conf.d$ find | xargs cat $1
cat: .: Is a directory
/opt/nvidia/vpi1/lib64
/usr/local/cuda-10.2/targets/aarch64-linux/lib
# Multiarch support
/usr/local/lib/aarch64-linux-gnu
/lib/aarch64-linux-gnu
/usr/lib/aarch64-linux-gnu
/usr/lib/aarch64-linux-gnu/libfakeroot
# libc default configuration
/usr/local/lib
/usr/lib/aarch64-linux-gnu/tegra
/usr/lib/aarch64-linux-gnu/fakechroot
/usr/lib/aarch64-linux-gnu/tegra-egl
/usr/lib/aarch64-linux-gnu/tegra
Ok. On our host (Jetson), let’s see if we can install it, or access it. It’s Jetpack 4.6.1 so it should have it installed already.
ImportError: libblas.so.3: cannot open shared object file: No such file or directory
cd /usr/lib/aarch64-linux-gnu/
ls -l libblas.so*
libblas.so -> /etc/alternatives/libblas.so-aarch64-linux-gnu
cd /etc/alternatives
ls -l libblas.so*
libblas.so.3-aarch64-linux-gnu -> /usr/lib/aarch64-linux-gnu/atlas/libblas.so.3
libblas.so-aarch64-linux-gnu -> /usr/lib/aarch64-linux-gnu/atlas/libblas.so
Sounds promising. Ha it worked.
chicken@chicken:/usr/lib/aarch64-linux-gnu/atlas$ python3
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>>
Now let's try to fix DustyNV's Dockerfile. Oops, right, it takes forever to build things, or even to download and install them, so try not to change things early on in the install. Besides, Dusty's setup already has these being installed. So it's not that it's not there; it's some linking issue.
Ok I start up the NV docker and try import cv2, but
admin@chicken:/workspaces/isaac_ros-dev$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/cv2/__init__.py", line 96, in <module>
bootstrap()
File "/usr/local/lib/python3.6/dist-packages/cv2/__init__.py", line 86, in bootstrap
import cv2
ImportError: libtesseract.so.4: cannot open shared object file: No such file or directory
admin@chicken:/$ sudo apt-get install libtesseract-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
libarchive13 librhash0 libuv1
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
libleptonica-dev
The following NEW packages will be installed:
libleptonica-dev libtesseract-dev
0 upgraded, 2 newly installed, 0 to remove and 131 not upgraded.
Need to get 2,666 kB of archives.
After this operation, 14.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libleptonica-dev arm64 1.75.3-3 [1,251 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libtesseract-dev arm64 4.00~git2288-10f4998a-2 [1,415 kB]
Fetched 2,666 kB in 3s (842 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libleptonica-dev.
dpkg: warning: files list file for package 'libcufft-10-2' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'cuda-cudart-10-2' missing; assuming package has no files currently installed
(Reading database ... 97997 files and directories currently installed.)
Preparing to unpack .../libleptonica-dev_1.75.3-3_arm64.deb ...
Unpacking libleptonica-dev (1.75.3-3) ...
Selecting previously unselected package libtesseract-dev.
Preparing to unpack .../libtesseract-dev_4.00~git2288-10f4998a-2_arm64.deb ...
Unpacking libtesseract-dev (4.00~git2288-10f4998a-2) ...
Setting up libleptonica-dev (1.75.3-3) ...
Setting up libtesseract-dev (4.00~git2288-10f4998a-2) ...
ImportError: libtesseract.so.4: cannot open shared object file: No such file or directory
This guy has a smart idea: build them from source. But I tried that already, and tesseract's build failed, of course. Then it complains about undefined references to jpeg, png, TIFF, zlib, etc. Hmm. All of that is installed.
/usr/lib/gcc/aarch64-linux-gnu/8/../../../aarch64-linux-gnu/liblept.a(libversions.o): In function `getImagelibVersions':
(.text+0x98): undefined reference to `jpeg_std_error'
(.text+0x158): undefined reference to `png_get_libpng_ver'
(.text+0x184): undefined reference to `TIFFGetVersion'
(.text+0x1f0): undefined reference to `zlibVersion'
(.text+0x21c): undefined reference to `WebPGetEncoderVersion'
(.text+0x26c): undefined reference to `opj_version'
But so here’s the evidence: cv2 is looking for libtesseract.so.4, which doesn’t exist at all. And even if we symlinked it to point to the libtesseract.so file, that just links to libtesseract.so.4.0.0 which is empty.
Ah. Ok, I had to sudo apt-get install libtesseract-dev on the Jetson host, not inside the docker! Hmm. Right. Because I'm sharing most of the libs from the host anyway. It's got to be on the host.
admin@chicken:/usr/lib/aarch64-linux-gnu$ ls -l *tess*
-rw-r--r-- 1 root root 6892600 Apr 7 2018 libtesseract.a
lrwxrwxrwx 1 root root 21 Apr 7 2018 libtesseract.so -> libtesseract.so.4.0.0
lrwxrwxrwx 1 root root 21 Apr 7 2018 libtesseract.so.4 -> libtesseract.so.4.0.0
-rw-r--r-- 1 root root 3083888 Apr 7 2018 libtesseract.so.4.0.0
admin@chicken:/usr/lib/aarch64-linux-gnu$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> exit()
Success.
So now, back to earlier: we were trying to run jupyter lab, to try running the camera calibration code again. I added the installation to the Dockerfile. This command starts it up at http://chicken:8888/lab (or the name of your computer):
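(Roughly; the exact flags are an assumption, the important part being to bind to all interfaces so it's reachable over the network:)

jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root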
Needed matplotlib, so just did quick macro install:
%pip install matplotlib
ModuleNotFoundError: No module named 'matplotlib'
Note: you may need to restart the kernel to use updated packages.
K... restart kernel. Kernel -> Restart.
Ok now I’m going to try calibrate the stereo cameras, since OpenCV is back.
Seems after some successes, the cameras are not creating capture sessions anymore, even from the host. Let’s reboot.
I applied for the OpenAI Codex beta, and tried it out. It’s very cool.
It’s like GPT trained on Github or something like that, so you can just give it a prompt, and it tries to carry on the sentence with what it expects next.
So the easy way to use it, I’ve found, is to write a comment for the code you want it to write for you. I switched the engine to ‘davinci-codex’ and cranked up the response length to 1600 or so. Sometimes it just makes up more comments, or does something silly. But now and then it hits a home run.
Here’s an example.
"""
Take an example wav file, found at file_name="/home/jovyan/work/307.wav", and plot a mel spectrogram of it.
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)
plt.figure(figsize=(12, 4))
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
plt.title('mel power spectrogram')
plt.colorbar(format='%+02.0f dB')
plt.tight_layout()
plt.show()
I had to install librosa, but then it worked. It did more than I asked for.
Then I fed that whole thing, with another
""" Plot the MFCC of file_name="/home/jovyan/work/307.wav" """
and it carried on like an example-giving champ:
"""
Take an example wav file, found at file_name="/home/jovyan/work/307.wav", and plot a mel spectrogram of it.
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)
plt.figure(figsize=(12, 4))
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
plt.title('mel power spectrogram')
plt.colorbar(format='%+02.0f dB')
plt.tight_layout()
plt.show()
"""
Plot the MFCC of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
plt.figure(figsize=(12, 4))
librosa.display.specshow(mfccs, x_axis='time')
plt.colorbar()
plt.title('MFCC')
plt.tight_layout()
plt.show()
"""
Plot the chromagram of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')
plt.colorbar()
plt.title('Chromagram')
plt.tight_layout()
plt.show()
"""
Plot the chromagram of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')
plt.colorbar()
plt.title('Chromagram')
plt.tight_layout()
plt.show()
"""
Plot the tonnetz of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(tonnetz, y_axis='tonnetz', x_axis='time')
plt.colorbar()
plt.title('Tonal Centroids (Tonnetz)')
plt.tight_layout()
plt.show()
"""
Plot the zero-crossing rate of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
zrate = librosa.feature.zero_crossing_rate(y)
plt.figure(figsize=(12, 4))
librosa.display.specshow(zrate, x_axis='time')
plt.colorbar()
plt.title('Zero Crossing Rate')
plt.tight_layout()
plt.show()
"""
Plot the spectral centroid of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
cent = librosa.feature.spectral_centroid(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(cent, x_axis='time')
plt.colorbar()
plt.title('Spectral Centroids')
plt.tight_layout()
plt.show()
"""
Plot the spectral bandwidth of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(spec_bw, x_axis='time')
plt.colorbar()
plt.title('Spectral Bandwidth')
plt.tight_layout()
plt.show()
"""
Plot the spectral rolloff of file_name="/home/jovyan/work/307.wav"
"""
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()
spec_rolloff = lib
After waiting a while… we get:
import librosa
import librosa.display
import matplotlib.pyplot as plt
file_name="/home/jovyan/work/307.wav"
y, sr = librosa.load(file_name)
plt.figure(figsize=(12, 8))
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
plt.subplot(4, 2, 1)
librosa.display.specshow(D, y_axis='linear')
plt.colorbar(format='%+2.0f dB')
plt.title('Linear-frequency power spectrogram')
plt.subplot(4, 2, 2)
librosa.display.specshow(D, y_axis='log')
plt.colorbar(format='%+2.0f dB')
plt.title('Log-frequency power spectrogram')
plt.subplot(4, 2, 3)
librosa.display.specshow(D, x_axis='time')
plt.colorbar(format='%+2.0f dB')
plt.title('Time-frequency power spectrogram')
plt.subplot(4, 2, 4)
librosa.display.specshow(D, x_axis='time', y_axis='log')
plt.colorbar(format='%+2.0f dB')
plt.title('Log-Time-frequency power spectrogram')
Beautiful. Since we’re currently just using a 256×256 view port in pybullet, this is quite a bit more advanced than required though. Learning game engines can also take a while. It took me about a month to learn Unity3d, with intermediate C# experience. Unreal Engine uses C++, so it’s a bit less accessible to beginners.
Continuing from our early notes on SLAM algorithms (Simultaneous Localisation and Mapping), and the similar but less map-making DSO algorithm, I came across a good project (“From cups to consciousness“) and an article that reminded me that mapping the environment, or at least having some sense of depth, will be pretty crucial.
At the moment I've just got to the point of thinking about training a CNN on simulation data, and so there should also be some positioning of the robot as a model in its own virtual world. So it's probably best to re-examine what's already out there. Visual odometry. Optical flow.
I found a good paper summarizing the 2019 options. The author's github has some interesting scripts that might be useful. It reminds me that I should probably be using ROS and Gazebo, to some extent. The conclusion was roughly that Google Cartographer or GMapping (OpenSLAM) are generally beating some of the others, like Karto and Hector. It seems like SLAM code is all a few years old. Google Cartographer had some support for ‘lifelong mapping‘, which sounded interesting: the robot goes around updating its map, a bit. It reminds me I saw ‘PonderNet‘ today, fresh from DeepMind, which from a quick look is more or less about scaling your workload down to your input size.
Anyway, we are mostly interested in Monocular SLAM. So none of this applies, probably. I’m mostly interested at the moment, in using some prefab scenes like the AI2Thor environment in the Cups-RL example, and making some sort of SLAM in simulation.
Also interesting is RatSLAM and the recent update, LatentSLAM – the authors of this site, The Smart Robot, got my attention because of the CCNs: cortical column networks.
“A common shortcoming of RatSLAM is its sensitivity to perceptual aliasing, in part due to the reliance on an engineered visual processing pipeline. We aim to reduce the effects of perceptual aliasing by replacing the perception module by a learned dynamics model. We create a generative model that is able to encode sensory observations into a latent code that can be used as a replacement to the visual input of the RatSLAM system”
Interesting, “The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.“
I think DSO might be a good option, or the closed-loop version, LDSO; they look like the most straightforward, maybe.
After a weekend away with a computer vision professional, I found out about COLMAP, a structure-from-motion suite.
I saw a few more recent projects too, e.g. NeuralRecon, and
ooh, here's a recent Facebook one that sounds like it might work!
Consistent Depth… eh, their Google Colab is totally broken.
Anyhow, LDSO. Let’s try it.
In file included from /dmc/LDSO/include/internal/OptimizationBackend/AccumulatedTopHessian.h:10:0,
                 from /dmc/LDSO/include/internal/OptimizationBackend/EnergyFunctional.h:9,
                 from /dmc/LDSO/include/frontend/FeatureMatcher.h:10,
                 from /dmc/LDSO/include/frontend/FullSystem.h:18,
                 from /dmc/LDSO/src/Map.cc:4:
/dmc/LDSO/include/internal/OptimizationBackend/MatrixAccumulators.h:8:10: fatal error: SSE2NEON.h: No such file or directory
 #include "SSE2NEON.h"
          ^~~~
compilation terminated.
src/CMakeFiles/ldso.dir/build.make:182: recipe for target 'src/CMakeFiles/ldso.dir/Map.cc.o' failed
make[2]: *** [src/CMakeFiles/ldso.dir/Map.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:85: recipe for target 'src/CMakeFiles/ldso.dir/all' failed
make[1]: *** [src/CMakeFiles/ldso.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Ok maybe not.
There’s a paper here reviewing ORBSLAM3 and LDSO, and they encounter lots of issues. But it’s a good paper for an overview of how the algorithms work. We want a point cloud so we can find the closest points, and not walk into them.
Calibration is an issue, rolling shutter cameras are an issue, IMU data can’t be synced meaningfully, it’s a bit of a mess, really.
Also, after reports that ORB-SLAM2 was only getting 5 fps on a Raspberry Pi, I got smart and looked for something specifically for the Jetson. I found a depth CNN for monocular vision on the forum. Amazing.
Ok, so after much fussing about, I found just what we need. I had an old copy of jetson-containers, and the SLAM code was added just 6 months ago. I might want to try the newer noetic container instead of melodic, good old ROS.
git clone https://github.com/dusty-nv/jetson-containers.git
cd jetson-containers
chicken@jetson:~/jetson-containers$ ./scripts/docker_build_ros.sh --distro melodic --with-slam
Successfully built 2eb4d9c158b0
Successfully tagged ros:melodic-ros-base-l4t-r32.5.0
chicken@jetson:~/jetson-containers$ ./scripts/docker_test_ros.sh melodic
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.5.0
l4t-base image: nvcr.io/nvidia/l4t-base:r32.5.0
testing container ros:melodic-ros-base-l4t-r32.5.0 => ros_version
xhost: unable to open display ""
xauth: file /tmp/.docker.xauth does not exist
sourcing /opt/ros/melodic/setup.bash
ROS_ROOT /opt/ros/melodic/share/ros
ROS_DISTRO melodic
getting ROS version -
melodic
done testing container ros:melodic-ros-base-l4t-r32.5.0 => ros_version
Well other than the X display, looking good.
Maybe I should just plug in a monitor. Ideally I wouldn’t have to, though. I used GStreamer the other time. Maybe we do that again.
This looks good too… https://github.com/dusty-nv/ros_deep_learning but let's stay focused. I'm also thinking maybe we upgrade early, to ROS2. Ugh, it looks like a whole new bunch of build tools and things to relearn. I'm sure it's amazing. Let's do ROS1, for now.
Let’s try build that FCNN one again.
CMake Error at tx2_fcnn_node/Thirdparty/fcrn-inference/CMakeLists.txt:121 (find_package):
By not providing "FindOpenCV.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "OpenCV", but
CMake did not find one.
Could not find a package configuration file provided by "OpenCV" (requested
version 3.0.0) with any of the following names:
OpenCVConfig.cmake
opencv-config.cmake
Add the installation prefix of "OpenCV" to CMAKE_PREFIX_PATH or set
"OpenCV_DIR" to a directory containing one of the above files. If "OpenCV"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
Ok hold on…
“Builds additional container with VSLAM packages, including ORBSLAM2, RTABMAP, ZED, and Realsense. This only applies to foxy and galactic and implies --with-pytorch as these containers use PyTorch.”
Ok that hangs when it starts building the slam bits. Luckily, someone’s raised the bug, and though it’s not fixed, Dusty does have a docker already compiled.
So, after some digging, I think we can solve the X problem (i.e. where are we going to see this alleged SLAMming occur?) with an RTSP server. Previously I used GStreamer to send RTP over UDP. But this makes more sense, to run a server on the Jetson. There’s a plugin for GStreamer, so I’m trying to get the ‘dev’ version, so I can compile the test-launch.c program.
apt-get install libgstrtspserver-1.0-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libgstrtspserver-1.0-dev is already the newest version (1.14.5-0ubuntu1~18.04.1).
ok... git clone https://github.com/GStreamer/gst-rtsp-server.git
root@jetson:/opt/gst-rtsp-server/examples# gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
test-launch.c: In function ‘main’:
test-launch.c:77:3: warning: implicit declaration of function ‘gst_rtsp_media_factory_set_enable_rtcp’; did you mean ‘gst_rtsp_media_factory_set_latency’? [-Wimplicit-function-declaration]
gst_rtsp_media_factory_set_enable_rtcp (factory, !disable_rtcp);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gst_rtsp_media_factory_set_latency
/tmp/ccC1QgPA.o: In function `main':
test-launch.c:(.text+0x154): undefined reference to `gst_rtsp_media_factory_set_enable_rtcp'
collect2: error: ld returned 1 exit status
gst_rtsp_media_factory_set_enable_rtcp
Ok wait let’s reinstall gstreamer.
apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
error...
Unpacking libgstreamer-plugins-bad1.0-dev:arm64 (1.14.5-0ubuntu1~18.04.1) ...
Errors were encountered while processing:
/tmp/apt-dpkg-install-Ec7eDq/62-libopencv-dev_3.2.0+dfsg-4ubuntu0.1_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Ok then leave out that one...
apt --fix-broken install
and that fails on
Errors were encountered while processing:
/var/cache/apt/archives/libopencv-dev_3.2.0+dfsg-4ubuntu0.1_arm64.deb
It’s like a sign of being a good programmer, to solve this stuff. But damn. Every time. Suggestions continue, in the forums of those who came before. Let’s reload the docker.
Ok I took a break and got lucky. The test-launch.c code is different from what the admin had.
Let's diff it and see what changed… The newer test-launch.c adds an RTCP option, which is what references the symbol my installed gst-rtsp-server (1.14) doesn't have:

#define DEFAULT_DISABLE_RTCP FALSE

static gboolean disable_rtcp = DEFAULT_DISABLE_RTCP;

{"disable-rtcp", '\0', 0, G_OPTION_ARG_NONE, &disable_rtcp,
"Whether RTCP should be disabled (default false)", NULL},

gst_rtsp_media_factory_set_enable_rtcp (factory, !disable_rtcp);

With those bits removed (saved as test.c), it now compiles:
gcc test.c -o test $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
So apparently now I can run this in VLC… when I open
rtsp://<jetson-ip>:8554/test
Um is that meant to happen?…. Yes!
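For reference, what test-launch does can also be written as a short Python script using the gst-rtsp-server bindings (a sketch only; it assumes the gir1.2-gst-rtsp-server-1.0 bindings are installed, and uses a videotestsrc rather than the real camera pipeline):

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)
server = GstRtspServer.RTSPServer()          # listens on 8554 by default
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch('( videotestsrc ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )')
factory.set_shared(True)
server.get_mount_points().add_factory('/test', factory)
server.attach(None)
GLib.MainLoop().run()                        # serves rtsp://<jetson-ip>:8554/test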
Ok next, we want to see SLAM stuff happening. So, ideally, a video feed of the desktop, or something like that.
So here are the links I have open. Maybe I'll get back to them later. Need to get back to ORBSLAM2 first, and see where we're at, and what we need. Not quite /dev/video0 to PC client. More like ORBSLAM2 to /dev/video0 to PC client. Or full-screen desktop. One way or another.
libgstrtspserver-1.0-dev is already the newest version (1.14.5-0ubuntu1~18.04.1).
Today we have
E: Unable to locate package libgstrtspserver-1.0-dev
E: Couldn't find any package by glob 'libgstrtspserver-1.0-dev'
E: Couldn't find any package by regex 'libgstrtspserver-1.0-dev'
Did I maybe compile it outside of the docker? Hmm maybe. Why can’t I find it though? Let’s try the obvious… but also why does this take so long? Network is unreachable. Network is unreachable. Where have all the mirrors gone?
apt-get update
Ok, so, long story short, I made another Dockerfile to get GStreamer installed. It mostly required adding a key for the Kitware apt repo.
Since 1.14, the use of libv4l2 has been disabled due to major bugs in the emulation layer. To enable usage of this library, set the environment variable GST_V4L2_USE_LIBV4L2=1
but it doesn’t want to work anyway. Ok RTSP is almost a dead end.
I might attach a CSI camera instead of V4L2 (USB camera) maybe. Seems less troublesome. But yeah let’s take a break. Let’s get back to depthnet and ROS2 and ORB-SLAM2, etc.
depthnet: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8: file too short
Ok, let’s try ROS2.
(Sorry, this was supposed to be about SLAM, right?)
As a follow-up for this post…
I asked on the github issues about mapping two argus (NVIDIA's CSI camera driver) node topics, in order to fool their stereo_proc. No replies, because they probably want to sell expensive stereo cameras, and I am asking how to do it with $15 Chinese cameras.
I looked at DustyNV’s Mono depth. Probably not going to work. It seems like you can get a good depth estimate for things in the scene, but everything around the edges reads as ‘close’. Not sure that’s practical enough for depth.
I looked at the NVIDIA DNN depth. Needs proper stereo cameras.
I looked at the NVIDIA VPI Stereo Disparity pipeline. It is the most promising yet, but the input either needs to come from calibrated cameras, or needs to be rectified on the fly using OpenCV. This seems like it might be possible in Python, but it is not obvious yet how to do it in C++, which the rest of the code is in.
I tried calibration.
I removed the USB cameras.
I attached two RPi 2.1 CSI cameras from older projects, and deep-dived into the ISAAC_ROS suite. I left ROS2 alone for a bit because it was just getting in the way. One camera sensor occasionally had fuzzy horizontal lines going across it, and the calibration results were poor and fuzzy. I decided I needed new cameras.
IMX-219 was used by the github author, and I even printed out half of the holder, to hold the cameras 8cm apart.
I tried calibration using the ROS2 cameracalibrator, which is a wrapper for an OpenCV call, after starting up the camera driver node inside the isaac ros docker.
(Because of a bug, you also sometimes need to remove --ros-args --remap.)
OpenCV was able to calibrate, via the ROS2 application, in both cases. So maybe I should just grab the outputs from that. We’ll do that again, now. But I think I need to print out a chessboard and just see how that goes first.
I couldn’t get more than a couple of matches using pictures of the chessboard on the screen, even with binary thresholding, in the author’s calibration notebooks.
Here’s what the NVIDIA VPI 1.2’s samples drew, for my chess boards:
Camera calibration seems to be a serious problem in the IoT camera world. I want something approximating depth, and it is turning out that there's some math involved.
Learning about epipolar geometry was not something I planned to do for this.
But this is a major showstopper, so either I must rectify in real time, or I must calibrate.
“The reason for the noisy result is that the VPI algorithm expects the rectified image pairs as input. Please do the rectification first and then feed the rectified images into the stereo disparity estimator.”
So can we use this info? The NVIDIA post references the code below as the solution, perhaps. Let's run it on the chessboard?
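Meanwhile, the OpenCV side of “rectify first” would look something like this (a sketch only, not the code from the NVIDIA post; the matrices are placeholders that would come from the cameracalibrator output for the two IMX-219s):

import cv2
import numpy as np

image_size = (1280, 720)                      # assumed capture resolution
K1 = np.eye(3); D1 = np.zeros(5)              # left camera matrix + distortion (placeholders)
K2 = np.eye(3); D2 = np.zeros(5)              # right camera matrix + distortion (placeholders)
R = np.eye(3)                                 # rotation between the cameras (placeholder)
T = np.array([[0.08], [0.0], [0.0]])          # ~8 cm baseline (placeholder)

# Compute the rectification maps once, from the calibration...
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

def rectify_pair(left, right):
    # ...then remap every frame pair before handing it to the VPI disparity estimator
    left_r = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
    right_r = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
    return left_r, right_r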