Categories
AI/ML CNNs deep Locomotion simulation Vision

Simulation Vision 2

I’ve now got a UNet that can provide predictions for where an egg is, in simulation.

So I want to design a reward function related to the egg prediction mask.

I haven’t ‘plugged in’ the trained neural network though, because it will slow things down, and I can just as well make use of the built-in pybullet segmentation to get the simulation egg pixels. At some point though, the robot will have to exist in a world where egg pixels are not labelled as such, and the simulation trained vision will be a useful basis for training.

I think a good reward function might be to not fall over, and to maximize the number of 1s in the egg prediction mask. An intermediate reward might be the centering of the egg pixels.

The numpy way to count mask pixels could be

import numpy as np

arr = np.array([1, 0, 0, 0, 0, 1, 1, 1, 1, 0])
np.count_nonzero(arr == 1)

I ended up using the following to count the pixels:

    # assumes: from PIL import Image; import numpy as np
    seg = Image.fromarray(mask.astype('uint8'))
    self._num_ones = (np.array(seg) == 1).sum()

Hmm for centering, not sure yet.
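For centering, one option might be to reward the egg centroid for being near the image centre. A minimal sketch (function name and scaling are my own, not wired into the env yet):

import numpy as np

def egg_centering_reward(mask):
    # mask: 2D array where egg pixels are 1 (from pybullet's segmentation for now)
    ys, xs = np.nonzero(mask == 1)
    if len(xs) == 0:
        return 0.0  # no egg in view, no centering reward
    h, w = mask.shape
    # normalised distance of the egg centroid from the image centre
    dx = (xs.mean() - w / 2) / (w / 2)
    dy = (ys.mean() - h / 2) / (h / 2)
    # closer to the centre -> closer to 1
    return 1.0 - min(1.0, float(np.hypot(dx, dy)))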

I’m looking into how to run pybullet / gym on the cloud and get some of it rendering.

I’ve found a few leads. VNC is an obvious solution, but probably won’t be available on Chrome OS. Pybullet has a broken link, but I think it’s suggesting something like this colab, more or less, using ‘pyrender’. User matpalm has a minimal example of sending images to Google Dataflow. Those might be good if I can render video. There’s a Jupyter example with capturing images in pybullet. I’ll have to research a bit more. An RDP viewer would probably be easiest, if it’s possible.

Some interesting options on stackoverflow, too.

I set up the Ray Tune training again, on google cloud, and enabled the dashboard by opening some ports (8265, and 6006), and initialising ray with ray.init(dashboard_host="0.0.0.0")

I can see it improving the episode reward mean, but it’s taking a good while on the 4 CPU cloud machine. Cost is about $3.50/day on the CPU machine, and about $16/day on the GPU machine. Google is out of T4 GPUs at the moment.

I have it saving the occasional mp4 video using a Monitor wrapper that records every 10th episode.

import gym

def env_creator(env_config):
    env = RobotableEnv()
    env = gym.wrappers.Monitor(env, "./vid", video_callable=lambda episode_id: episode_id % 10 == 0, force=True)
    return env

After one night of training, it went from about -30 reward to -5 reward. I’m just running it on the CPU machine while I iron out the issues.

I think curriculum training might also be a useful addition.

Categories
Gripper Research institutes links

MCube

Another MIT group: The MCube Lab. Some grasping datasets and such. Looks like a good resource.

Categories
institutes links sexing

Egg-Tech Prize

The Egg-Tech Prize and Phase 1 winners

Taken from https://foundationfar.org/grants-funding/opportunities/egg-tech-prize/

Interesting as a guideline for comparison with international efforts, and for perspective of the sort of money in this problem. “the industry could save between $1.5 billion and $2.5 billion each year.” – News Article.

The Egg-Tech Prize Phase II criteria form the basis for the merit-based review, outlined above.

Day and potential to utilize male eggs (up to 25 points).

Minimum: Functions on or before day 8 of incubation. Preference for solutions with reduced incubation time with pre-incubation most preferred. Protocols involving short periods of incubation during egg storage (SPIDES) will be considered preincubation and given preference. Preference will be given to technologies that enable the use of male eggs in other applications.

Accuracy (up to 20 points).

Minimum: 98 percent accuracy. Preference will be given to technologies that work with all chicken breeds/colors commonly used in commercial production.

Economic Feasibility (up to 20 points).

Score for this criterion will consider economic feasibility based on a cost-benefit analysis and business plan that should include:

Direct costs:

  • Capital costs incurred by technology developer, per hatchery
  • Capital investment for equipment/structure modification by hatchery
  • Predicted annual maintenance costs
  • Predicted annual consumables costs
  • Predicted personnel training and labor requirements (hours)

Indirect costs:

  • Expected utilities requirements of technology
  • Potential revenue models
  • Lease, subscription, sales, other.
  • Other revenue streams for developer

Predicted revenues gained for hatchery in diverting eggs, energy savings, labor, cost-savings from not feeding male chicks (depending on country), etc.

Throughput and physical size (up to 15 points)

Potential for sexing at least 15,000 eggs per hour (more preferred). If multiple units will be used in combination to achieve the desired throughput, only one demonstration unit will be required but all units needed to meet the desired throughput must fit into existing hatchery structures, with reasonable and appropriate modifications.

Hatchability (up to 15 points)

Minimum: Does not reduce hatching rate by more than 1.5 percent from baseline.

Speed of test results (up to 5 points)

Results returned in less than 30 min if eggs are tested during incubation (allowable time for removal, testing and return to incubator).† If eggs are tested prior to incubation, with or without SPIDES, results must be available within 48 hours of testing. Accurate tracking and identification of eggs must be demonstrated.

†Longer times until test result will require placing eggs back into the incubator, in which case they must be removed again for sorting.

Categories
3D 3D Research AI/ML arxiv CNNs control envs Locomotion simulation UI Vision

SLAM part 2 (Overview, RTSP, Calibration)

Continuing from our early notes on SLAM algorithms (Simultaneous Localisation and Mapping), and the similar, though less map-building, DSO algorithm, I came across a good project (“From cups to consciousness”) and article that reminded me that mapping the environment, or at least having some sense of depth, will be pretty crucial.

At the moment I've just got to the point of thinking to train a CNN on simulation data, and so there should also be some positioning of the robot as a model in its own virtual world. So it's probably best to reexamine what's already out there. Visual odometry. Optical flow.

I found a good paper summarizing 2019 options. The author’s github has some interesting scripts that might be useful. It reminds me that I should probably be using ROS and gazebo, to some extent. The conclusion was roughly that Google Cartographer or GMapping (Open SLAM) are generally beating some other ones, Karto, Hector. Seems like SLAM code is all a few years old. Google Cartographer had some support for ‘lifelong mapping‘, which sounded interesting. The robot goes around updating its map, a bit. It reminds me I saw ‘PonderNet‘ today, fresh from DeepMind, which from a quick look is, more or less, about like scaling your workload down to your input size.

Anyway, we are mostly interested in Monocular SLAM. So none of this applies, probably. I’m mostly interested at the moment, in using some prefab scenes like the AI2Thor environment in the Cups-RL example, and making some sort of SLAM in simulation.

Also interesting is RATSLAM and the recent update: LatentSLAM – The authors of this site, The Smart Robot, got my attention because of the CCNs. Cortical column networks.

LatentSLAM: https://arxiv.org/pdf/2105.03265.pdf

“A common shortcoming of RatSLAM is its sensitivity
to perceptual aliasing, in part due to the reliance on
an engineered visual processing pipeline. We aim to reduce
the effects of perceptual aliasing by replacing the perception
module by a learned dynamics model. We create a generative
model that is able to encode sensory observations into a
latent code that can be used as a replacement to the visual
input of the RatSLAM system”

Interesting, “The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.”

I think DSO might be a good option, or the closed-loop version, LDSO, looks like the most straightforward, maybe.

After a weekend away with a computer vision professional, I found out about COLMAP, a structure-from-motion suite.

I saw a few more recent projects too, e.g. NeuralRecon, and

ooh, here’s a recent facebook one that sounds like it might work!

Consistent Depth … eh, their google colab is totally broken.

Anyhow, LDSO. Let’s try it.

In file included from /dmc/LDSO/include/internal/OptimizationBackend/AccumulatedTopHessian.h:10:0,
from /dmc/LDSO/include/internal/OptimizationBackend/EnergyFunctional.h:9,
from /dmc/LDSO/include/frontend/FeatureMatcher.h:10,
from /dmc/LDSO/include/frontend/FullSystem.h:18,
from /dmc/LDSO/src/Map.cc:4:
/dmc/LDSO/include/internal/OptimizationBackend/MatrixAccumulators.h:8:10: fatal error: SSE2NEON.h: No such file or directory
#include "SSE2NEON.h"
^~~~
compilation terminated.
src/CMakeFiles/ldso.dir/build.make:182: recipe for target 'src/CMakeFiles/ldso.dir/Map.cc.o' failed
make[2]: *** [src/CMakeFiles/ldso.dir/Map.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs….
CMakeFiles/Makefile2:85: recipe for target 'src/CMakeFiles/ldso.dir/all' failed
make[1]: *** [src/CMakeFiles/ldso.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Ok maybe not.

There’s a paper here reviewing ORBSLAM3 and LDSO, and they encounter lots of issues. But it’s a good paper for an overview of how the algorithms work. We want a point cloud so we can find the closest points, and not walk into them.

Calibration is an issue, rolling shutter cameras are an issue, IMU data can’t be synced meaningfully, it’s a bit of a mess, really.

Also, after reports that ORB-SLAM2 was only getting 5 fps on a Raspberry Pi, I got smart and looked for something specifically for the Jetson. I found a depth CNN for monocular vision on the forum, amazing.

Then there's a COLMAP structure-from-motion option, and some more depth stuff… and more on making it high res

Ok so after much fussing about, I found just what we need. I had an old copy of jetson-containers, and the slam code was added just 6 months ago. I might want to try the noetic one (the newest ROS1 release) instead of melodic. Good old ROS.

git clone https://github.com/dusty-nv/jetson-containers.git
cd jetson-containers

chicken@jetson:~/jetson-containers$ ./scripts/docker_build_ros.sh --distro melodic --with-slam


Successfully built 2eb4d9c158b0
Successfully tagged ros:melodic-ros-base-l4t-r32.5.0


chicken@jetson:~/jetson-containers$ ./scripts/docker_test_ros.sh melodic
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.5.0
l4t-base image:  nvcr.io/nvidia/l4t-base:r32.5.0
testing container ros:melodic-ros-base-l4t-r32.5.0 => ros_version
xhost:  unable to open display ""
xauth:  file /tmp/.docker.xauth does not exist
sourcing   /opt/ros/melodic/setup.bash
ROS_ROOT   /opt/ros/melodic/share/ros
ROS_DISTRO melodic
getting ROS version -
melodic
done testing container ros:melodic-ros-base-l4t-r32.5.0 => ros_version



Well other than the X display, looking good.

Maybe I should just plug in a monitor. Ideally I wouldn’t have to, though. I used GStreamer the other time. Maybe we do that again.

This looks good too… https://github.com/dusty-nv/ros_deep_learning but let’s stay focused. I’m also thinking maybe we upgrade early, to noetic. Ugh it looks like a whole new bunch of build tools and things to relearn. I’m sure it’s amazing. Let’s do ROS1, for now.

Let’s try build that FCNN one again.

CMake Error at tx2_fcnn_node/Thirdparty/fcrn-inference/CMakeLists.txt:121 (find_package):
  By not providing "FindOpenCV.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "OpenCV", but
  CMake did not find one.

  Could not find a package configuration file provided by "OpenCV" (requested
  version 3.0.0) with any of the following names:

    OpenCVConfig.cmake
    opencv-config.cmake

  Add the installation prefix of "OpenCV" to CMAKE_PREFIX_PATH or set
  "OpenCV_DIR" to a directory containing one of the above files.  If "OpenCV"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!

Ok hold on…

"Builds additional container with VSLAM packages,
including ORBSLAM2, RTABMAP, ZED, and Realsense.
This only applies to foxy and galactic and implies 
--with-pytorch as these containers use PyTorch."

Ok so not melodic then. ROS2 it is…

./scripts/docker_build_ros.sh --distro foxy --with-slam

Ok that hangs when it starts building the slam bits. Luckily, someone’s raised the bug, and though it’s not fixed, Dusty does have a docker already compiled.

sudo docker pull dustynv/ros:foxy-slam-l4t-r32.6.1

I started it up with

docker run -it --runtime nvidia --rm --network host --privileged --device /dev/video0 -v /home/chicken/:/dmc dustynv/ros:foxy-slam-l4t-r32.6.1

So, after some digging, I think we can solve the X problem (i.e. where are we going to see this alleged SLAMming occur?) with an RTSP server. Previously I used GStreamer to send RTP over UDP. But this makes more sense, to run a server on the Jetson. There’s a plugin for GStreamer, so I’m trying to get the ‘dev’ version, so I can compile the test-launch.c program.

apt-get install libgstrtspserver-1.0-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libgstrtspserver-1.0-dev is already the newest version (1.14.5-0ubuntu1~18.04.1).

ok... git clone https://github.com/GStreamer/gst-rtsp-server.git

root@jetson:/opt/gst-rtsp-server/examples# gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
test-launch.c: In function ‘main’:
test-launch.c:77:3: warning: implicit declaration of function ‘gst_rtsp_media_factory_set_enable_rtcp’; did you mean ‘gst_rtsp_media_factory_set_latency’? [-Wimplicit-function-declaration]
   gst_rtsp_media_factory_set_enable_rtcp (factory, !disable_rtcp);
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   gst_rtsp_media_factory_set_latency
/tmp/ccC1QgPA.o: In function `main':
test-launch.c:(.text+0x154): undefined reference to `gst_rtsp_media_factory_set_enable_rtcp'
collect2: error: ld returned 1 exit status




gst_rtsp_media_factory_set_enable_rtcp

Ok wait let’s reinstall gstreamer.

apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio


error...

Unpacking libgstreamer-plugins-bad1.0-dev:arm64 (1.14.5-0ubuntu1~18.04.1) ...
Errors were encountered while processing:
 /tmp/apt-dpkg-install-Ec7eDq/62-libopencv-dev_3.2.0+dfsg-4ubuntu0.1_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Ok then leave out that one... 

apt --fix-broken install
and that fails on
Errors were encountered while processing:
 /var/cache/apt/archives/libopencv-dev_3.2.0+dfsg-4ubuntu0.1_arm64.deb
 


It’s like a sign of being a good programmer, to solve this stuff. But damn. Every time. Suggestions continue, in the forums of those who came before. Let’s reload the docker.

root@jetson:/opt/gst-rtsp-server/examples# pkg-config --cflags --libs gstreamer-1.0

-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0

root@jetson:/opt/gst-rtsp-server/examples# pkg-config --cflags --libs gstreamer-rtsp-server-1.0
-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -lgstrtspserver-1.0 -lgstbase-1.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0
 

Ok I took a break and got lucky. The test-launch.c code is different from what the admin had.

Let’s diff it and see what changed…

#define DEFAULT_DISABLE_RTCP FALSE

from 

static gboolean disable_rtcp = DEFAULT_DISABLE_RTCP;



{"disable-rtcp", '\0', 0, G_OPTION_ARG_NONE, &disable_rtcp,
  "Whether RTCP should be disabled (default false)", NULL},

 from

gst_rtsp_media_factory_set_enable_rtcp (factory, !disable_rtcp);


so now this works (to compile).
gcc test.c -o test $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)

ok so back to it…

root@jetson:/opt/gst-rtsp-server/examples# ./test-launch "videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"
stream ready at rtsp://127.0.0.1:8554/test

So apparently now I can run this in VLC… when I open

rtsp://<jetson-ip>:8554/test

Um is that meant to happen?…. Yes!

Ok next, we want to see SLAM stuff happening. So, ideally, a video feed of the desktop, or something like that.

So here are the links I have open. Maybe I get back to them later. Need to get back to ORBSLAM2 first, and see where we’re at, and what we need. Not quite /dev/video0 to PC client. More like, ORBSLAM2 to dev/video0 to PC client. Or full screen desktop. One way or another.

Here's a cool pdf with some instructions, from doodlelabs, and their accompanying pdf about video streaming codecs and such.

Also, gotta check out this whole related thing. and the depthnet example, whose documentation is here.

Ok, so carrying on.

I try again today, and whereas yesterday we had

 libgstrtspserver-1.0-dev is already the newest version (1.14.5-0ubuntu1~18.04.1).

Today we have

E: Unable to locate package libgstrtspserver-1.0-dev
E: Couldn't find any package by glob 'libgstrtspserver-1.0-dev'
E: Couldn't find any package by regex 'libgstrtspserver-1.0-dev'

Did I maybe compile it outside of the docker? Hmm maybe. Why can’t I find it though? Let’s try the obvious… but also why does this take so long? Network is unreachable. Network is unreachable. Where have all the mirrors gone?

apt-get update

Ok so long story short, I made another docker file. to get gstreamer installed. It mostly required adding a key for the kitware apt repo.

./test "videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"

Ok and on my linux box now, so I’ll connect to it.

sudo apt install vlc
vlc rtsp://192.168.101.115:8554/Test

K all good… So let’s get the camera output next?

sheesh it’s not obvious.

I’m just making a note of this.

Since 1.14, the use of libv4l2 has been disabled due to major bugs in the emulation layer. To enable usage of this library, set the environment variable GST_V4L2_USE_LIBV4L2=1

but it doesn’t want to work anyway. Ok RTSP is almost a dead end.

I might attach a CSI camera instead of V4L2 (USB camera) maybe. Seems less troublesome. But yeah let’s take a break. Let’s get back to depthnet and ROS2 and ORB-SLAM2, etc.

depthnet: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8: file too short

Ok, let’s try ROS2.

(Sorry, this was supposed to be about SLAM, right?)

As a follow-up for this post…

I asked about mapping two argus (NVIDIA’s CSI camera driver) node topics, in order to fool their stereo_proc, on the github issues. No replies, cause they probably want to sell expensive stereo cameras, and I am asking how to do it with $15 Chinese cameras.

I looked at DustyNV’s Mono depth. Probably not going to work. It seems like you can get a good depth estimate for things in the scene, but everything around the edges reads as ‘close’. Not sure that’s practical enough for depth.

I looked at the NVIDIA DNN depth. Needs proper stereo cameras.

I looked at NVIDIA VPI Stereo Disparity pipeline It is the most promising yet, but the input either needs to come from calibrated cameras, or needs to be rectified on-the-fly using OpenCV. This seems like it might be possible in python, but it is not obvious yet how to do it in C++, which the rest of the code is in.

Self portrait using unusable stereo disparity data, using the c++ code in https://github.com/NVIDIA-AI-IOT/jetson-stereo-depth/

I tried calibration.

I removed the USB cameras.

I attached two RPi 2.1 CSI cameras, from older projects. Deep dived into ISAAC_ROS suite. Left ROS2 alone for a bit because it is just getting in the way. The one camera sensor had fuzzy lines going across, horizontally, occasionally, and calibration results were poor, fuzzy. Decided I needed new cameras.

IMX-219 was used by the github author, and I even printed out half of the holder, to hold the cameras 8cm apart.

I tried calibration using the ROS2 cameracalibrator, which is a wrapper for an OpenCV call, after starting up the camera driver node, inside the isaac ros docker.

ros2 run isaac_ros_argus_camera_mono isaac_ros_argus_camera_mono --ros-args -p device:=0 -p sensor:=4 -p output_encoding:="mono8"

(This publishes mono camera feed to topic /image_raw)

ros2 run camera_calibration cameracalibrator \
--size=8x6 \
--square=0.063 \
--approximate=0.3 \
--no-service-check \
--ros-args --remap /image:=/image_raw

(Because of a bug, I also sometimes need to remove --ros-args --remap.)

OpenCV was able to calibrate, via the ROS2 application, in both cases. So maybe I should just grab the outputs from that. We’ll do that again, now. But I think I need to print out a chessboard and just see how that goes first.

I couldn’t get more than a couple of matches using pictures of the chessboard on the screen, even with binary thresholding, in the author’s calibration notebooks.

Here’s what the NVIDIA VPI 1.2’s samples drew, for my chess boards:

Stereo Disparity
Confidence Map

Camera calibration seems to be a serious problem, in the IOT camera world. I want something approximating depth, and it is turning out that there’s some math involved.

Learning about epipolar geometry was not something I planned to do for this.

But this is like a major showstopper, so either, I must rectify, in real time, or I must calibrate.

https://upload.wikimedia.org/wikipedia/commons/9/9a/Image_rectification.svg

We’re not going to SLAM without it.

The pertinent forum post is here.

“The reason for the noisy result is that the VPI algorithm expects the rectified image pairs as input. Please do the rectification first and then feed the rectified images into the stereo disparity estimator.”

So can we use this info? The NVIDIA post references the snippet below as the solution, perhaps, within the context of the longer code that follows. Let's run it on the chessboard?

p1fNew = p1f.reshape((p1f.shape[0] * 2, 1))
p2fNew = p2f.reshape((p2f.shape[0] * 2, 1))

retBool ,rectmat1, rectmat2 = cv2.stereoRectifyUncalibrated(p1fNew,p2fNew,fundmat,imgsize)
import numpy as np
import cv2
import vpi

left  = cv2.imread('left.png')
right = cv2.imread('right.png')
left_gray  = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# SIFT keypoints and descriptors for both images (needs opencv-contrib for xfeatures2d)
detector = cv2.xfeatures2d.SIFT_create()
kp1, desc1 = detector.detectAndCompute(left_gray,  None)
kp2, desc2 = detector.detectAndCompute(right_gray, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(desc1, desc2, k=2)

ratio = 0.75
good, mkp1, mkp2 = [], [], []
for m in matches:
    if m[0].distance < m[1].distance * ratio:
        m = m[0]
        good.append(m)
        mkp1.append( kp1[m.queryIdx] )
        mkp2.append( kp2[m.trainIdx] )

p1 = np.float32([kp.pt for kp in mkp1])
p2 = np.float32([kp.pt for kp in mkp2])

H, status = cv2.findHomography(p1, p2, cv2.RANSAC, 20)
print('%d / %d  inliers/matched' % (np.sum(status), len(status)))

status = np.array(status, dtype=bool)
p1f = p1[status.view(np.ndarray).ravel()==1,:] #Remove Outliers
p2f = p2[status.view(np.ndarray).ravel()==1,:] #Remove Outliers
goodf = [good[i] for i in range(len(status)) if status.view(np.ndarray).ravel()[i]==1]

fundmat, mask = cv2.findFundamentalMat(p1f, p2f, cv2.RANSAC, 3, 0.99,)

#img = cv2.drawMatches(left_gray, kp1, right_gray, kp2, good, None, None, flags=2)
#cv2.imshow('Default Matches', img)
#img = cv2.drawMatches(left_gray, kp1, right_gray, kp2, goodf, None, None, flags=2)
#cv2.imshow('Filtered Matches', img)
#cv2.waitKey(0)

# uncalibrated rectification: H1/H2 warp each image so the epipolar lines line up
retBool, H1, H2 = cv2.stereoRectifyUncalibrated(p1f, p2f, fundmat, (left.shape[1],left.shape[0]))

# warp both images into the rectified frame on the GPU with VPI
with vpi.Backend.CUDA:
    left = vpi.asimage(left).convert(vpi.Format.NV12_ER)
    left = left.perspwarp(H1)
    left = left.convert(vpi.Format.RGB8)

    right = vpi.asimage(right).convert(vpi.Format.NV12_ER)
    right = right.perspwarp(H2)
    right = right.convert(vpi.Format.RGB8)

#cv2.imshow('Left', left.cpu())
#cv2.imshow('Right', right.cpu())
#cv2.waitKey(0)

cv2.imwrite('rectified_left.png', left.cpu())
cv2.imwrite('rectified_right.png', right.cpu())

Categories
AI/ML arxiv institutes links

MTank

Came across a ‘non-partisan’ group, with a github archive of RL links. It’s pretty epic.

Categories
bio control form Locomotion robots

Pantograph Legs

“They observed that many quadrupedal, mammalian animals feature a distinguished functional three-segment front leg and hind leg design, and proposed a “pantograph” leg abstraction for robotic research.”

1 DOF (degree of freedom). 1 motor. Miranda wants jointed legs, and I don’t want to work out inverse kinematics, so this looks ideal. Maybe a bit complicated still.

Biorobotics Laboratory, EPFL

The simpler force diagram:

Cheetah-cub leg mechanism, and leg compliance. A single leg is shown abstracted, detailed leg segment ratios are omitted for clarity, robot heading direction is to the left. (1) shows the three leg angles αprox, αmid, and αdist. Hip and knee RC servo motors are mounted proximally, the leg length actuation is transmitted by a cable mechanism. The pantograph structure was inspired by the work of Witte et al. (2003) and Fischer and Blickhan (2006). (2) The foot segment describes a simplified foot-locus, showing the leg in mid-swing. For ground clearance, the knee motor shortens the leg by pulling on the cable mechanism (green, Fcable). Fdiag is the major, diagonal leg spring. Its force extends the pantograph leg, against gravitational and dynamic forces. (3) The leg during mid-stance. (4) In case of an external translational perturbation, the leg will be compressed passively. (5) If an external perturbation torque applies e.g., through body pitching, the leg linkage will transmit it into a deflection of the parallel spring, not of the diagonal spring.
Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs (Alexander Badri-Spröwitz et al, 2014)

Compliance is a feature, made possible by springs typically.

Biologically Inspired Robots - nitishpuri.github.io
https://nitishpuri.github.io/posts/robotics/biologically-inspired-robots/

A homemade attempt here with the Mojo robot of the Totally Not Evil Robot Army. Their robot only uses 9g servos, and can’t quite pick itself up.

I did an initial design with what I had around, and it turns out compliance is a delicate balance. Too much spring, and it just mangles itself up. Too little spring and it can’t lift off the ground.

Further iterations removed the springs, which were too tight by far, and used cable ties to straighten the legs, but the weight of the robot is a little bit too much for the knee joints.

I will likely leave it until I have a 3d printer, some better springs, and will give it another try with more tools and materials available. Maybe even hydraulics, some day.

Some more research required, too.

https://www.mdpi.com/1424-8220/20/17/4911/htm

Categories
3D Research AI/ML arxiv control form Locomotion robots

Kinematic Motion Primitives

This post follows the ‘Finding where we left off’ post, focused on locomotion sim2real. In that post I tried to generalise and smooth the leg angle servo movements in their -PI/2 to PI/2 range.

I will likely try extracting kMPs, before this is all over, which from a skim read, and look at the pictures, are like, just taking a single slice of the wave data, and repeating that. Or, taking consecutive periodic waves, and extracting the average / normalized movement from them.

https://becominghuman.ai/introduction-to-timeseries-analysis-using-python-numpy-only-3a7c980231af

Cheetah-cub leg mechanism, and leg compliance. A single leg is shown abstracted, detailed leg segment ratios are omitted for clarity, robot heading direction is to the left. (1) shows the three leg angles αprox, αmid, and αdist. Hip and knee RC servo motors are mounted proximally, the leg length actuation is transmitted by a cable mechanism. The pantograph structure was inspired by the work of Witte et al. (2003) and Fischer and Blickhan (2006). (2) The foot segment describes a simplified foot-locus, showing the leg in mid-swing. For ground clearance, the knee motor shortens the leg by pulling on the cable mechanism (green, Fcable). Fdiag is the major, diagonal leg spring. Its force extends the pantograph leg, against gravitational and dynamic forces. (3) The leg during mid-stance. (4) In case of an external translational perturbation, the leg will be compressed passively. (5) If an external perturbation torque applies e.g., through body pitching, the leg linkage will transmit it into a deflection of the parallel spring, not of the diagonal spring.
Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs (Alexander Badri-Spröwitz et al, 2014)

It’s now December 6th 2021, as I continue here…

This paper is very relevant, “Realizing Learned Quadruped Locomotion Behaviors through Kinematic Motion Primitives”

Some Indian PhDs have summed up the process. Unfortunately I’m not quite on the exact same page. I understand the pictures, haha.

Here’s where this picture comes from, which is useful for explaining what I need to do: (Short paper)

In 2014, also, same thing, Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs

They just used PCA. (Principal Component Analysis). That’s like a common ML toolkit thing.

Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs (2014)

See now this is where they lose me: “The covariance matrix of the normalized dataset”. Come on guys. Throw us a bone.

I found this picture, which is worth 1000 words, in the discussion on stackexchange about PCA and SVD:

Rotating PCA animation

So, I’m not quite ready for PCA. That is two dimensions, anyway. Oh right, so I need to add a ‘time’ dimension. numpy’s expand_dims?
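As far as I understand it so far, “the covariance matrix of the normalized dataset” boils down to something like this, assuming the four leg-angle traces are stacked as columns of a (T, 4) array (layout assumed, not from the paper's code):

import numpy as np

def kmp_pca(angles, n_components=2):
    # angles: (T, 4) array, one column per leg servo angle over T timesteps
    X = (angles - angles.mean(axis=0)) / angles.std(axis=0)  # normalise per leg
    C = np.cov(X, rowvar=False)                  # the (4, 4) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues, ascending order
    order = np.argsort(eigvals)[::-1]            # biggest variance first
    components = eigvecs[:, order[:n_components]]
    return X @ components, eigvals[order]        # projected data, sorted eigenvalues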

I played around with Codex, to assist with finding the peaks, and to find the period length.

And I separated them out to different plots… and got the peaks matching once I passed in ( , distance=80).
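For the record, the peak finding was basically scipy's find_peaks; a sketch with synthetic data standing in for the real servo trace:

import numpy as np
from scipy.signal import find_peaks

# stand-in for one servo's angle trace (the real data comes from the sim logs)
t = np.arange(2000)
leg_angle = np.sin(2 * np.pi * t / 100) + 0.1 * np.random.randn(len(t))

peaks, _ = find_peaks(leg_angle, distance=80)  # ignore peaks closer than 80 steps apart
period = int(np.diff(peaks).mean())            # rough period length, in steps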

I had to install these, and restart the Jupyter kernel (and I think close and restart the Chrome tab), in order to get some matplotlib widgets.

Error message:
Jupyter Lab: Error displaying widget: model not found



!pip3 install --upgrade jupyterlab ipympl
%matplotlib widget
The matplotlib slider example (image thereof)

I started on a slider widget to draw a vertical line on top of the leg data, but I need to fix the refresh issue. Anyhow, it’s not quite what i want. What do I want?

So, I want the kMPs. The kMPs are like, a gif of a basic action, e.g. robot taking a full step forward, on all legs, which we can run once, twice, etc.

We can ‘average’ or ‘normalise’ or ‘phase’ the waves, and assume that gives us a decent average step forward.

I think there’s enough variation in this silly simulation walk that we should start with just the simplest, best single wave.

But since they ran PCA, let’s run it to see what it does for the data. We have a single integer value, which is 1D. To make it 2D, so we can run PCA on it… we add a time dimension?

But also, so I measured the period a few programs up, to be

67 steps (front right),

40 steps (front left),

59 steps (back right),

42 steps (back left).

So, as a starting point, it would be nice to be as close to servos at 90 degrees as possible. If I iterate the values, and track the lowest sum diff, yeah… is that it? I’m looking at this link at SO.

Ideally I could visualise the options..

Repeating a slice. Averaging the slices.

Ok, so I need a start index, end index, to index a range.

After some investigation, the index where the legs are closest to 90 degrees, is at 1739
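The search itself is just an argmin over the summed deviation from 90 degrees. A sketch, where 'legs' is assumed to be a (T, 4) array of the four servo angle traces:

import numpy as np

def closest_to_90(legs):
    # legs: (T, 4) array, one column per leg servo angle, in degrees
    diffs = np.abs(legs - 90).sum(axis=1)  # summed deviation from 90, per timestep
    return int(np.argmin(diffs))

# then slice a candidate kMP from there, e.g. legs[start:start + period]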

Computer Enhance

So that’s kinda close to our ideal kMPs, from about 1739 to about 1870 maybe, but clearly the data is messy. Could be tweaked. Wavetable editor, basically.

Alright, let's make an app. We can try running a Flask server on the Pi, with a JavaScript front end using chart.js.

pip3 install flask

Save the test web app, kmpapp.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello world'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0') 

python3 kmpapp.py

Ok good start. We need to get the x and y data into JSON so JavaScript can plot it in chart.js.
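Something like this for the data endpoint (a sketch; the route name and the stand-in data are mine), which the chart.js front end can then fetch:

from flask import Flask, jsonify
import numpy as np

app = Flask(__name__)

@app.route('/legdata')
def legdata():
    # placeholder data; in the real app this comes from the recorded servo angles
    y = np.sin(np.linspace(0, 10, 200)).tolist()
    x = list(range(len(y)))
    return jsonify({'x': x, 'y': y})

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')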

That’s looking good. Maybe too many points. Ok, so I want to edit, save, and run the KMPs on the robot.

Well it took a day but it’s working, and is pretty cool. Used smooth.js to allow smoother transitions. Took another day to add save and load features.

I’ll upload this to the project repo.

Many improvements added. Will update repo again closer to MFRU.

Categories
3D Research AI/ML CNNs deep dev envs evolution GANs Gripper Gripper Research Linux Locomotion sexing sim2real simulation The Sentient Table UI Vision

Simulation Vision

We’ve got an egg in the gym environment now, so we need to collect some data for training the robot to go pick up an egg.

I’m going to have it save the rgba, depth and segmentation images to disk for Unet training. I left out the depth image for now. The pictures don’t look useful. But some papers are using the depth, so I might reconsider. Some weed bot paper uses 14-channel images with all sorts of extra domain specific data relevant to plants.

I wrote some code to take pics if the egg was in the viewport, and it took 1000 rgb and segmentation pictures or so. I need to change the colour of the egg for sure, and probably randomize all the textures a bit. But the main thing is probably to make the segmentation layers with pixel colours 0,1,2, etc. so that it detects the egg and not so much the link in the foreground.

So sigmoid to softmax and so on. Switching to multi-class also raises the question of whether to switch to Pytorch & COCO panoptic segmentation based training. It will have to happen eventually, as I think all of the fastest implementations are currently in Pytorch and COCO based. Keras might work fine for multiclass or multiple binary classification, but it's sort of the beginning attempt. Something that works. More proof of concept than final implementation. But I think Keras will be good enough for these in-simulation 256×256 images.

Regarding multi-class segmentation, karolzak says “it's just a matter of changing num_classes argument and you would need to shape your mask in a different way (layer per class??), so for multiclass segmentation you would need a mask of shape (width, height, num_classes)”.

I’ll keep logging my debugging though, if you’re reading this.

So I ran segmask_linkindex.py to see what it does, and how to get more useful data. The code is not running because the segmentation image actually has an array of arrays. I presume it’s a numpy array. I think it must be the rows and columns. So anyway I added a second layer to the loop, and output the pixel values, and when I ran it in the one mode:

-1
-1
-1
83886081
obUid= 1 linkIndex= 4
83886081
obUid= 1 linkIndex= 4
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
16777217
obUid= 1 linkIndex= 0
16777217
obUid= 1 linkIndex= 0
-1
-1
-1

And in the other mode

-1
-1
-1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
-1
-1
-1
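For reference, those big numbers unpack the way the pybullet example implies: object uid in the low 24 bits, linkIndex + 1 above that. A quick sketch (helper name is my own):

def decode_seg_value(value):
    # pybullet packs the segmentation value as obUid + ((linkIndex + 1) << 24)
    if value < 0:
        return None, None            # -1 means nothing was hit
    ob_uid = value & ((1 << 24) - 1)
    link_index = (value >> 24) - 1   # -1 is the base link
    return ob_uid, link_index

print(decode_seg_value(83886081))    # (1, 4)  -> obUid=1, linkIndex=4
print(decode_seg_value(16777217))    # (1, 0)  -> obUid=1, linkIndex=0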

Ok I see. Hmm. Well the important thing is that this code is indeed for extracting the pixel information. I think it’s going to be best for the segmentation to use the simpler segmentation mask that doesn’t track the link info. Ok so I used that code from the guy’s thesis project, and that was interpolating the numbers. When I look at the unique elements of the mask without interpolation, I’ve got…

[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   1   2 255]
[  0   1   2 255]
[  0   2 255]
[  0   2 255]

Ok, so I think:

255 is the sky
0 is the plane
2 is the robotable
1 is the egg

So yeah, I was just confused because the segmentation masks were all black and white. But if you look closely with a pixel picker tool, the pixel values are (0,0,0), (1,1,1), (2,2,2), (255,255,255), so I just couldn’t see it.

The interpolation kinda helps, to be honest.

As per OpenAI’s domain randomization helping with Sim2Real, we want to randomize some textures and some other things like that. I also want to throw in some random chickens. Maybe some cats and dogs. I’m afraid of transfer learning, at this stage, because a lot of it has to do with changing the structure of the final layer of the neural network, and that might be tough. Let’s just do chickens and eggs.

An excerpt from OpenAI:

Costs

Both techniques increase the computational requirements: dynamics randomization slows training down by a factor of 3x, while learning from images rather than states is about 5-10x slower.

Ok that's a bit more complex than I was thinking. I want to randomize textures and colours first.

I’ve downloaded and unzipped the ‘Describable Textures Dataset’

And ok it’s loading a random texture for the plane

and random colour for the egg and chicken
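The randomization itself is only a couple of pybullet calls. A sketch, where the DTD path and the object handles (plane_uid, egg_uid, chicken_uid) are placeholders for whatever the env actually uses:

import os, random
import pybullet as p

dtd_dir = '/path/to/dtd/images'   # placeholder path to the Describable Textures Dataset
texture_files = [os.path.join(root, f)
                 for root, _, files in os.walk(dtd_dir)
                 for f in files if f.endswith('.jpg')]

# random texture for the plane
tex_id = p.loadTexture(random.choice(texture_files))
p.changeVisualShape(plane_uid, -1, textureUniqueId=tex_id)

# random colour for the egg and the chicken
for uid in (egg_uid, chicken_uid):
    p.changeVisualShape(uid, -1, rgbaColor=[random.random(),
                                            random.random(),
                                            random.random(), 1])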

Ok, next thing is the Simulation CNN.

Interpolation doesn’t work though, for this, cause it interpolates from what’s available in the image:

[  0  85 170 255]
[  0  63 127 191 255]
[  0  63 127 191 255]

I kind of need the basic UID segmentation.

[  0   1   2   3 255]

Ok, pity about the mask colours, but anyway.

Let’s train the UNet on the new dataset.

We’ll need to make karolzak’s changes.

I’ve saved 2000+ rgb.jpg and seg.png files and we’ve got [0,1,2,3,255] [plane, egg, robot, chicken, sky]

So num_classes=5

And

“for multiclass segmentation you would need a mask of shape (width, height, num_classes) “

What is y.shape?

(2001, 256, 256, 1)

which is 2001 files, of 256 x 256 pixels, and one class. So if I change that to 5…? ValueError: cannot reshape array of size 131137536 into shape (2001,256,256,5)

Um… Ok I need to do more research. Brb.

So the keras_unet library is set up to input binary masks per class, and output binary masks per class.

I would rather use the ‘integer’ class output, and have it output a single array, with the class id per pixel. Similar to this question. In preparation for karolzak probably not knowing how to do this with his library, I’ve asked on stackoverflow for an elegant way to make the binary masks from a multi-class mask, in the meantime.

I coded it up using the library author’s suggested method, as he pointed out that the gains of the integer encoding method are minimal. I’ll check it out another time. I think it might still make sense for certain cases.
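The conversion itself is small. A sketch for this dataset's pixel values (drop whichever classes you don't want, e.g. plane and sky):

import numpy as np

CLASS_VALUES = [0, 1, 2, 3, 255]   # plane, egg, robot, chicken, sky

def to_binary_masks(mask):
    # (H, W) integer class mask -> (H, W, num_classes) stack of 0/1 masks
    return np.stack([mask == v for v in CLASS_VALUES], axis=-1).astype(np.float32)

# e.g. y goes from (2001, 256, 256) to (2001, 256, 256, 5)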

Ok that’s pretty awesome. We have 4 masks. Human, chicken, egg, robot. I left out plane and sky for now. That was just 2000 images of training, and I have 20000. I trained on another 2000 images, and it’s down to 0.008 validation loss, which is good enough!

So now I want to load the CNN model in the locomotion code, and feed it the images from the camera, and then have a reward function related to maximizing the egg pixels.
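Roughly what I have in mind for plugging it in (a sketch; the model path and the position of the egg channel in the output are assumptions):

import numpy as np
from tensorflow.keras.models import load_model

model = load_model('unet_egg.h5')   # placeholder path to the trained UNet
EGG_CHANNEL = 1                     # assumed index of the egg mask in the output

def egg_pixel_reward(rgb):
    # rgb: (256, 256, 3) uint8 camera image from the sim
    pred = model.predict(rgb[None] / 255.0, verbose=0)[0]
    egg_mask = pred[..., EGG_CHANNEL] > 0.5
    return float(egg_mask.sum())    # more predicted egg pixels -> more reward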

I also need to look at the pybullet-planning project and see what it consists of, as I imagine they’ve made some progress on the next steps. “built-in implementations of standard motion planners, including PRM, RRT, biRRT, A* etc.” – I haven’t even come across these acronyms yet! Ok, they are motion planning. Solvers of some sort. Hmm.

Categories
AI/ML CNNs deep dev GANs Linux sexing Vision

Cloud GPUs: GCP

The attempted training of the U-Net on the Jetson NX has been a bit slow, making odd progress over 2 nights, and I’m not sure if it’s working. I’ve had to reduce batch size to 1, and the filter size, which has reduced the number of parameters by about a factor of 10, and still, loading the NN into memory sometimes dies on a concatenation call. The number of images per batch can also crash it, so perhaps some memory can be saved with a better image loading process.

Anyway, projects under an official NVIDIA repo are suggesting that we should be able to train smaller networks like resnet18, with 11 million parameters, on the Jetson. So maybe we can still avoid the cloud.

But judging by the NVIDIA TLT info, any training of resnet50s or 100s are going to need serious GPUs and memory and space for training.

After looking at Google, Amazon and Microsoft offerings, the AWS g4dn.xlarge instance looks like it might be the best option, at $0.526/hr, or Google’s got a T4 based compute engine for only $0.35/hr. These are good options, if 16GB of video ram will be enough. It should be, because we’re working with like 5GB on the Jetson.

Microsoft has the NC6 option, which looks good for a much more beefy GPU and memory, at $0.90/hr.

We’re just looking at Pay-as-you-go prices, as the 1-year and 3-year commitments will end up being expensive.

I’m still keen to try train on the Jetson, but the cloud is becoming more and more probable. In Sweden, visiting Miranda, we’re unable to order a Jetson AGX Xavier, the 32GB version. Arrow won’t ship here without a VAT number, and SiliconHighway is out of stock.

So, attempting Cloud GPUs. If you want to cut to the chase, read this one backwards. So many problems. In the end, it turned out setting it up yourself is practically impossible, but there is an ‘AI Platform’ section that works.

Amazon AWS. Tried to log in to AWS. “Authentication failed because your account has been suspended.” Tells me to create a new account. But then brings me back to the same failure screen. Ok, sending email to their accounts department. Next.

Google Cloud. I tried to create a VM and add a T4 GPU, but none of the regions have them. So I need to download the Gcloud SDK and CLI tool first, to run a command to describe the regions, according to the ‘Before you begin’ instructions.

Ok, GPUs will only run on N1 and A2 VMs. The A2 VMs are only for A100s, so I need an N1 VM in one of these regions, and we add a T4 GPU.

There’s an option to load a specific docker, and unfortunately they don’t seem to have one with both Pytorch and TF2. Let’s start with TF2 gcr.io/deeplearning-platform-release/tf2-gpu.2-4

So this looks like a good enough VM. 30GB RAM, 8 cpus. For europe-west3, the cost is about 50 cents / hr for the VM and 41 cents / hr for the GPU.

n1-standard-8: 8 vCPUs, 30 GB RAM, $0.4896/hr (preemptible: $0.09840/hr)
NVIDIA T4: 1 GPU, 16 GB GDDR6, $0.41 per GPU

So let’s round up to about $1/hour. I ended up picking the n1-standard-4 (4 cpus, 15 gb ram).

At these prices I’ll want to get things up and running asap. So I am going to prep a bit, before I click the Create VM button.

I had to try a few things to find a cloud instance with a gpu, because the official list didn’t really work. I eventually got one with a T4 GPU from europe-west4-c.

It seems like Google Drive isn’t really part of the google cloud platform ecosystem, so I started a storage bucket with 50GB of space, and am uploading the chicken images to it.

The instance doesn’t have pip or jupyter installed. So let’s do that…

ok so when I sudo’ed, I got this error

Jul 20 14:45:01 chicken-vm konlet-startup[1665]: {"errorDetail":{"message":"write /var/lib/docker/tmp/GetImageBlob362062711: no space left on device"},"error":"write /var/lib/docker/tmp/GetImageBl
 Jul 20 14:45:01 chicken-vm konlet-startup[1665]: ).
 Jul 20 14:45:01 chicken-vm konlet-startup[1665]: 2021/07/20 14:43:04 No containers created by previous runs of Konlet found.
 Jul 20 14:45:01 chicken-vm konlet-startup[1665]: 2021/07/20 14:43:04 Found 0 volume mounts in container chicken-vm declaration.
 Jul 20 14:45:01 chicken-vm konlet-startup[1665]: 2021/07/20 14:43:04 Error: Failed to start container: Error: No such image: gcr.io/deeplearning-platform-release/tf2-gpu.2-4
 Jul 20 14:45:01 chicken-vm konlet-startup[1665]: 2021/07/20 14:43:04 Saving welcome script to profile.d

So 10GB wasn’t enough to load gcr.io/deeplearning-platform-release/tf2-gpu.2-4 , I guess.

Ok deleting the VM. Next time, bigger hard drive. I’m now adding a cloud storage bucket and uploading the chicken images, so I can copy them to the VM’s drive later. It’s taking forever. Wow. Ok.

Now I am trying to spin up a VM again, and it’s practically impossible. I’ve tried every region and zone possible. Ok europe-west1-c. Finally. I also upped my ‘quota’ of gpus, under IAM->Quotas, in case that is a reason I couldn’t find a GPU VM. They reviewed and approved it in about 15 minutes.

+------------------+--------+-----------------+
|       Name       | Region | Requested Limit |
+------------------+--------+-----------------+
| GPUS_ALL_REGIONS | GLOBAL |        1        |
+------------------+--------+-----------------+

So after like 10 minutes of nothing, I see the docker container started up.

68ee22bf268f gcr.io/deeplearning-platform-release/tf2-gpu.2-4 "/entrypoint.sh /run…" 5 minutes ago Up 4 minutes klt-chicken-vm-template-1-ursn

I’ve enabled tcp:8080 port in the firewall settings, but the external ip and new port don’t seem to connect. https://35.195.66.139:8080/ Ah ha. http. We’re in!

Jupyter Lab starting up.

So I tried to download the gcloud tools to get gsutil to access my storage bucket, but was getting ‘Permission denied’, even as root. I chown’ed it to my user, but still no.

I had to go out, so I stopped the VM. Seems you can’t suspend a VM with a GPU. I also saw when I typed ‘sudo -i’ to switch user to root, it said to ‘docker attach’ to my container. But the container is just like a tty printing out logs, so you can get stuck in the docker, and need to ssh in again.

I think the issue was just that I need to be inside the docker to do things. The VM you log into is just a minimal container running environment. So I think that was my issue. Next time I install gsutil, I’ll run ‘docker exec -it 68ee22bf268f bash’ to get into the docker first.

Ok fired up the VM again. This time I exec’ed into the docker, and gsutil was already installed. gsutil cp -r gs://chicken-drive . is copying the files now. It’s slow, and it says to try with -m, for parallel copying, but I’m just going to let it carry on for now. It’s slow, but I can do some other stuff for now. So far our gcloud bill is $1.80.

Ok, /opt/jupyter/chicken-drive has my data now. But according to /opt/jupyter/.jupyter/jupyter_notebook_config.py, I need to move it under /home/jupyter.

Hmm. No space left on drive. What? 26GB all full. But it wasn’t full a second ago. How can moving files cause this? I guess the mv operation must copy and then delete. Ok, so deleting the new one. Let’s try again, one folder at a time. Oh boy. This is something a bit off about the google process. I didn’t start my container, and if I did, I’d probably map a volume. But the host is sort of read only. Anyway. We’re in. I can see the files in Jupyter Lab.

So now we’re training U-Net binary classification using keras-unet, by karolzak, based on the kz-isbi-chanllenge.ipynb notebook.

But now I’m getting this error when it’s clearly there…

FileNotFoundError: [Errno 2] No such file or directory: '/OID/v6/images/Chicken/train/'

Ok well I can't work it out but changing it to a path relative to the notebook worked. base_dir = "../../../"

Ok first test round of training, binary classification: chicken, not-chicken. Just 173 image/mask pairs, 10 epochs of 40 steps.

Now let’s try with the training set. 1989 chickens this time. 50/50 split. 30 epochs of 50 steps. Ok second round… hmm, not so good. Pretty much all black.

Ok I’m changing the parameters of the network, fixing some code, and starting again.

I see that the pngs were loading float values, whereas in the example, they were loading ints. I fixed it by adding an m = m.convert('L') to the mask (png) loading code. I think previously, it was training with the float values from 0 to 1, divided by 255, whereas the original example had int values from 0 to 255, divided by 255.

So I’m also resetting the parameters, to make this a larger network, since we’re training in the cloud. 512×512 instead of 256×256. Batch size of 3. Horizontal flip augmentation. 64 filters. 10 epochs of 100 steps. Go go go. Ok, out of memory. Batch size of 1. Still out of memory. Back to test set of 173 chickens. Ok it’s only maxing at 40% RAM now. I’ll let it run.

Ok, honestly I don't know anymore. What is it even doing? Looks like it's inverting black and white. That's not very useful.

Ok before giving up, I’m going to make some changes.

The next day, I’m starting up the VM. Total cost so far, $8.84. The files are all missing, so I’m recopying, though using the gsutil -m cp -R gs://chicken-drive . option, and yes it is a lot faster. Though it slows down.

I think the current setup is maybe failing because we’re using 173 images with one kind of augmentation. Instead of 10 epochs of 100 steps of the same shit, let’s rather swap out the training images.

First problem is that Keras is basically broken, in this regard. I’ve immediately discovered that saving and loading a checkpoint does not save and load the metrics, and so it keeps evaluating against a loss of infinity, instead of what your saved model achieved. Very annoying.

Now, after stopping and restarting the VM, and enabling all cloud APIs, I’m having a new problem. gsutil no longer works. After 4% copied, network throughput drops to 0.0B/s. I tried reconnecting and now get:

Connection via Cloud Identity-Aware Proxy Failed
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.

I’ve switched back to ‘Allow default access’. Still getting 4003.

Ok, I've deleted the instance. Trying again. Started it up. It's not installing the docker I asked for, after 22 minutes. Something is wrong. Let's try again. Stopping VM. I'm ticking the ‘Run as privileged’ box this time.

Ok now it’s working again. It even started up with the docker ready. I’m trying with the multiprocess copying again, and it slowed down at 55%, but is still going. Phew. Ok.

I changed to using the TF2 SavedModel format. Still restarts the ‘best’ metric. What a piece of shit. I can’t actually believe it. Ok I wrote my own code for finding the best, by saving all weights with the val_loss in the filename, and then loading the best weights for the next epoch. It’s still not perfect, but it’s better than Keras overwriting the best weights every time.

Interestingly, it seems like maybe my training on the Jetson was actually working, because the same weird little vignette-ing is occurring.

Ok we’re up to $20 billing, on gcloud. It’s adding up, but not too badly yet. Nothing seems to be beating a round of training from like 4 hours ago, so to keep things more exploratory, I added a 50/50 chance to pick from the saved weights at random, rather than loading the winner every time.
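Roughly, the weight shuffling looks like this (a sketch; the filename pattern matches the weights above, the rest of the glue is hypothetical):

import glob, random, re
from tensorflow.keras.callbacks import ModelCheckpoint

# save every epoch's weights with the val_loss baked into the filename
checkpoint = ModelCheckpoint('weights-{val_loss:.4f}.hdf5',
                             save_weights_only=True, save_best_only=False)

def pick_weights():
    files = glob.glob('weights-*.hdf5')
    if not files:
        return None
    if random.random() < 0.5:
        return random.choice(files)   # explore: any previous set of weights
    # exploit: the lowest recorded val_loss
    return min(files, key=lambda f: float(re.findall(r'weights-(.+)\.hdf5', f)[0]))

# before each round of training:
# w = pick_weights()
# if w: model.load_weights(w)
# model.fit(..., callbacks=[checkpoint])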

Something seems to be happening. The vignette is shrinking, but some chicken border action, maybe.

I left it running overnight, and this morning, we're up to $33 spent, and today, we can't log into the VM again. Pretty annoying. Of the 3 reasons for ‘Permission denied’, only one makes sense: ‘Your key expired and Compute Engine deleted your ~/.ssh/authorized_keys file.’

Same story if I run the gcloud commands: gcloud beta compute ssh --zone "europe-west4-c" "chicken-vm-template-1" --project "gpu-ggr"

So I apparently need to add a new public key to the Metadata section. I just know something is going to go wrong. Yeah, so I did everything I know I’m supposed to do, and it didn’t work. I generated an OpenSSH private/public key pair in PuttyGen, I changed the permissions on the private key so that only I have access, I updated the SSH Keys in the VM instance metadata, and the metadata for good measure. And ssh -i opensshprivate daniel_brownell@34.91.21.245 -v just ends up with Permission denied (publickey).

ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C daniel_brownell

Ok and then print the public key, and copy paste it to the VM Instance ‘Edit…’ / SSH Keys… and connect with PuTTY with the private key and… nope. Permission denied (publickey).. Ok I need to go through these answers and find one that works. Same error with windows cmd line ssh, except also complains that the openssh key is an invalid format. Try again later.

Fuck you gcloud. Ok I’m stopping and deleting the VM. $43 used so far.

Also, the training through the night didn’t improve on the val_loss score. Something’s fucked.

Ok I've started it up again a few days later. I was wondering about the warnings at the beginning of my training that various CUDA things were not installed. So apparently I need:

cos-extensions install gpu

and… no space left on device

Ok so more space.

/dev/sda1 31G 22G 9.2G 70% /mnt/stateful_partition

So I increased the boot disk to 35GB and called 'cos-extensions install gpu' again, after cd'ing into /mnt/stateful_partition, and it worked a bit better. Still has 'ERROR: Unable to load the kernel module 'nvidia.ko'.' in the logs though. But install logs at ./mnt/stateful_partition/var/lib/nvidia/nvidia-installer.log say it's ok…

So the error now is ‘Could not load dynamic library ‘libcuda.so.1′; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64’

And so we need to modify the docker container run command, something like the example in the instructions.

Ok so our container is… gcr.io/deeplearning-platform-release/tf2-gpu.2-4

According to this stackoverflow answer, this already has everything installed. Ok but the host needs the drivers installed.

tf.config.list_physical_devices('GPU')
[]

So yeah, I think I need to install the cos crap, and restart the container with those volume and device bits.

docker stop klt-chicken-vm-template-1-ursn
docker run \
  --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
  --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
  --device /dev/nvidia0:/dev/nvidia0 \
  --device /dev/nvidia-uvm:/dev/nvidia-uvm \
  --device /dev/nvidiactl:/dev/nvidiactl \
  gcr.io/deeplearning-platform-release/tf2-gpu.2-4 

...

[I 14:54:49.167 LabApp] Jupyter Notebook 6.3.0 is running at:
[I 14:54:49.168 LabApp] http://46fce08b5770:8080/
[I 14:54:49.168 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
^C^C^C^C^C^C^C^C^C^C

Not so good. Ok can’t access it either. -p 8080:8080 fixes that. It didn’t like --gpus all.

“Unable to determine GPU information”. Container optimised shit.

Ok I’m going to delete the VM again. Going to check out these nvidia cloud containers. There’s 21.07-tf2-py3 and NGC stuff.

So I can’t pull the dockers cause there’s no space, and even after attaching a persistent disk, not, because things are stored on the boot disk. Ok but I can tell docker to store stuff on a persistent disk.

/etc/docker/daemon.json:

{
    "data-root": "/mnt/x/y/docker_data"
}
root@nvidia-ngc-tensorflow-test-b-1-vm:/mnt/disks/disk# docker run --gpus all --rm -it -p 8080:8080 -p 6006:6006 nvcr.io/nvidia/tensorflow:21.07-tf2-py3

docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.

Followed the ubuntu 20.04 driver installation,

cuda : Depends: cuda-11-4 (>= 11.4.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Oh boy. Ok so I used this trick to make some /tmp space:

mount --bind /path/to/dir/with/plenty/of/space /tmp

and then as per this answer and the nvidia instructions:

wget https://developer.download.nvidia.com/compute/cuda/11.1.0/local_installers/cuda_11.1.0_455.23.05_linux.run
chmod +x cuda_11.1.0_455.23.05_linux.run 
sudo ./cuda_11.1.0_455.23.05_linux.run 

or some newer version:

wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda_11.4.1_470.57.02_linux.run
sudo sh cuda_11.4.1_470.57.02_linux.run

‘boost::filesystem::filesystem_error’

Ok using all the space again. 32GB. Not enough. Fuck this. I’m deleting the VM again. 64GB. SSD persistent disk. Ok installed driver. Running docker…

And…

FFS. Something is compromised. In the time it took to install CUDA and run docker on an Ubuntu VM, some opportunistic hackers managed to delete my root user.

Ok. Maybe it’s time to consider AWS again for GPUs. I think I can officially count GCP GPU as unusable. Learned a few useful things, but overall, yeesh.

I think maybe I’ll just run the training on a cheap non-GPU VM on GCP for now, so that I’m not paying for a GPU that I’m not using.

docker run -d -p 8080:8080 -v /home/daniel_brownell:/home/jupyter gcr.io/deeplearning-platform-release/tf2-cpu.2-4

Ok wow so now with the cpu version, the loss is improving like crazy. It went from 0.28 to 0.24 in 10 epochs (10 minutes or so). That sort of improvement was not happening after like 10 hours on the ‘gpu’.

So yeah, amazing. The code now does a sort of population based training, by picking a random previous set of weights instead of the best weights, half of the time. Overall it slows things down, but should result in a bit more variation in the end.
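
Roughly, the checkpoint-picking part looks like this. A minimal sketch, assuming a compiled Keras model is already in scope and checkpoints are saved as weights-<val_loss>.hdf5 like the ones mentioned below; the filenames and the 50% split are illustrative, not the exact training code:

import glob
import random

# checkpoints are named by validation loss, e.g. 'weights-0.2439.hdf5',
# so a lexical sort puts the best (lowest loss) weights first
checkpoints = sorted(glob.glob('weights-*.hdf5'))

if checkpoints:
    if random.random() < 0.5:
        # half the time: resume from a random earlier checkpoint, for variation
        resume_from = random.choice(checkpoints)
    else:
        # otherwise: resume from the best weights so far
        resume_from = checkpoints[0]
    model.load_weights(resume_from)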

What finally worked

Ok there’s also an ‘AI platform – notebook’ option. I might try that too.

Ok the instance started up. But it failed to start 4 services: nscd, unscd, crond, sshd. CPU use goes to zero. Nothing. Ok so I need to ssh tunnel, apparently.

gcloud compute ssh --project gpu-ggr --zone europe-west1-b notebook -- -L 8080:localhost:8080

Ok that was easy. Let’s try this.

Successfully opened dynamic library libcudart.so.11.0

‘ModelCheckpoint’ object has no attribute ‘_implements_train_batch_hooks’

Ok, needed to change all keras.* etc. to tensorflow.keras.*
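
For example, something like this (an illustration of the kind of change, not the exact lines from my code):

# before – standalone Keras imports, which trigger the
# '_implements_train_batch_hooks' error under TF2:
# from keras.models import load_model
# from keras.callbacks import ModelCheckpoint

# after – use the Keras that ships inside TensorFlow:
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import ModelCheckpoint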

Ok fuck me that’s a lot faster than CPU.

Permission denied: ‘weights-0.2439.hdf5’

Ok, let’s sudo it.

Ok there she goes. It’s like 20 times faster, maybe. Strangely, it isn’t doing much better than the CPU though. But I’ll let it run for a bit. It’s only been a minute. I think maybe the CPU doing well was just good luck. Perhaps we trained too well on the original set of about 173 images, and it was getting good results on those original images.

Ok now it’s been an hour or so, and it’s not beating the CPU. I’ve changed the train / validation set to 50/50 now, and the learning rate is randomly chosen between 0.001 and 0.0003. And I’m upping the epochs to 30. And the filters to 64. batch_size=4, use_batch_norm=True.
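
For the record, the randomised learning rate bit is about as simple as it sounds. A sketch only: it assumes the model, data arrays and the U-Net constructor (which is where filters=64 and use_batch_norm=True actually go) are set up elsewhere, and the binary cross-entropy loss is an assumption rather than a quote of my code:

import random
from tensorflow.keras.optimizers import Adam

# learning rate sampled per run, between the two values mentioned above
learning_rate = random.uniform(0.0003, 0.001)

model.compile(optimizer=Adam(learning_rate=learning_rate), loss='binary_crossentropy')
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),   # now a 50/50 train/validation split
          epochs=30,
          batch_size=4)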

We’re down to 0.233 after an hour and a half. 0.21 now… it’s been maybe 3 hours now.

Ok 5 hours, lets check:

Holy shit it’s working. That’s great. I’ll leave it running overnight. The overnight results didn’t improve much for some reason.

(TODO: learn about focal loss / dice loss / Jaccard distance as a possible change to the loss function? Less necessary now.)
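
For future reference, a Dice loss is only a few lines in TF2. A minimal sketch, not something that’s currently plugged into the training code:

import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # flatten both masks and compute 1 - Dice coefficient
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)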

So it’s cool, but it’s 364MB. We need it at about 1/4 of that size to run it on the Jetson NX, I think.

So, retraining, with filters=32. We’re already down to 0.24 after an hour. Ok I stopped at 0.2104 after a few hours.

So yeah. Good enough for now.

There’s some other things to train, too.

The eggs in simulation: generate views, save images to disk. save segmentation images to disk.

Train the walking again with the gripper.

Eggs in the real world. Use augmentation to place real egg pics in scenes. Possibly use Mask-RCNN/YOLACT code with COCO, instead of continuing in Keras.

The now-working U-net binary chicken segmentation is in Keras, so there will be some tricks required to run either a multi-class segmentation detector, or multiple binary classifiers. Advice for multi-class segmentation is here and the multiple binary classifier advice is here.
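
The gist of the two options, as I understand them, sketched as output heads only. This assumes a U-Net whose final feature map is x; the names and num_classes are placeholders, not code from the repo:

from tensorflow.keras.layers import Conv2D

# option 1: single multi-class head – one channel per class, softmax across channels,
# trained with a categorical (or sparse categorical) cross-entropy
multiclass_head = Conv2D(num_classes, (1, 1), activation='softmax')(x)

# option 2: multiple binary heads – one sigmoid channel per class (chicken, egg, ...),
# each trained with its own binary cross-entropy
binary_heads = [Conv2D(1, (1, 1), activation='sigmoid', name='class_%d' % i)(x)
                for i in range(num_classes)]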

When we finally try running it all on a Jetson, we will maybe need to shrink the neural network further. But that can be done last minute. It looks like we can save the HDF5 file to TF2’s SavedModel format with model.save(model_fname) and convert to a frozen graph, to import into TensorRT, the NVIDIA format. Similar to this. TensorRT can quantize the weights down to single bytes (INT8), I believe.
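
That conversion step would look something like this. A sketch only, assuming the HDF5 checkpoint loads cleanly (the filename is illustrative, and compile=False may be needed if custom metrics were used); the TensorRT import itself comes later:

from tensorflow import keras

# load the Keras HDF5 checkpoint and re-save it in TF2's SavedModel format,
# which the TensorRT / TF-TRT tooling can consume
model = keras.models.load_model('weights-0.2104.hdf5', compile=False)
model.save('unet_savedmodel')   # saving to a directory path produces a SavedModel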

Categories
3D 3D prototypes 3D Research control dev envs Gripper gripper prototypes Gripper Research Locomotion robots Vision

Gripper simulation

I’ve been scouring for existing code to help with developing the gripper in simulation. I was looking for a way to implement ‘eye-in-hand’ visual servoing, and came across a good resource, created for a masters thesis, which shows a ‘robot vision’ window and compares depth sensing algorithms. My approach was going to be, essentially, segmentation, in order to detect and localise chickens and eggs in the field of vision, then get their shape into an X-Y coordinate position, and once it’s over a certain size, initiate interaction.

This one uses an SDF model of a KUKA industrial 6 DOF robot with a two-finger gripper, but that has specific rotational movement that seems maybe different from a simpler robot arm. So it’s maybe a bit overkill, and I might just want to see his camera code.

Miranda’s gripper prototype isn’t a $50k KUKA industrial robot arm. It’s just v0.1, and it’s got an 11kg/cm MG945, some 5kg/cm MG5010s, a 1.3kg/cm SG90, and a sucker contraption I found on DFRobot that can suck eggs.

So, regarding the simulation, this will be on top of the robot, as its head.

So we need a URDF file. Or an SDF file. There are a couple of ways to go with this.

The other resource I’ve found that looks like just what I need is ur5pybullet.

Regarding the ‘visual servoing’, the state of the art appears to be QT-Opt, perhaps. Or maybe RCAN, built on top of it. But we’re not there just yet. Another project specifically uses pybullet. Some extra notes here, from Sergey Levine, and co., associated with most of these projects.

Another good one is Retina-GAN, where they convert both simulation and reality into a canonical format. I’ve also come across Dex-Net before, from UCB.

Everything is very complicated though.

I’ve managed to make a URDF that looks good enough to start with, though. I’ll put everything in a GitHub repo. We want to put two servos on the ‘head’ for animatronic emotional aesthetics, but there’s a sucker contraption there for the egg, so I think this is good enough for simulation, for now, anyway. I just need to put a camera on its head, put some eggs in the scene, and maybe reward stable contact with the tip. Of course it’s going to be a lot of work.

We also want to add extra leg parts, but I don’t want to use 4 more motors on it.

So I’m playing around with some aluminium and timing belts and pulleys to get 8 leg parts on 4 motors. Something like this, with springs if we can find some.

So, simulator camera vision. I can enable the GUI. Turns out I just need to press ‘g’ to toggle.

self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_RENDERING, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_GUI, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_SEGMENTATION_MARK_PREVIEW, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_RGB_BUFFER_PREVIEW, 1)

I’ve added the gripper, and now I’m printing out the _control_observation, because I need to work out what is in an observation.

self.GetTrueMotorAngles()
0.18136442583543283, 0.4339093246887722, -0.25269494256467184, 0.32002873424829736, -0.6635045784503064, 1.5700002984158676, -1.5700000606174402, -0.2723645141027962,

self.GetTrueMotorVelocities()
0.451696256678765, 0.48232988947216504, -4.0981980703534395, 0.4652986924553241, 0.3592921211587608, -6.978131098967118e-06, 1.5237597481713495e-06, -10.810712328063294,

self.GetTrueMotorTorques()
-3.5000000000000004, -3.5000000000000004, 3.5000000000000004, -3.5000000000000004, 3.5000000000000004, -3.5000000000000004, 3.5000000000000004, 3.5000000000000004, 

self.GetTrueBaseOrientation()
-0.008942336195953221, -0.015395612988274186, 0.00639837318132646, 0.9998210192552996, 

self.GetTrueBaseRollPitchYawRate()
-0.01937158793669886, -0.05133982438770338, 0.001050170752804882]

Ok so I need the link state of the end effector (8th link), to get its position and orientation.

    state = self._pybullet_client.getLinkState(self.quadruped, self._end_effector_index)
    pos = state[0]
    orn = state[1]

    print(pos)
    print(orn)

(0.8863188372297804, -0.4008813832608453, 3.1189486984341848)

(0.9217446940545668, 0.3504950513334899, -0.059006227834041206, -0.1551070696318658)

Since the orientation has 4 dimensions, it’s a quaternion:

  def gripper_camera(self, state):
    pos = state[0]
    ori = state[1]


    rot_matrix = self._pybullet_client.getMatrixFromQuaternion(ori)

    rot_matrix = np.array(rot_matrix).reshape(3, 3)
    
# Initial vectors
    init_camera_vector = (1, 0, 0) # x-axis
    init_up_vector = (0, 1, 0) # y-axis
    
# Rotated vectors
    camera_vector = rot_matrix.dot(init_camera_vector)
    up_vector = rot_matrix.dot(init_up_vector)

    self.view_matrix_gripper = self._pybullet_client.computeViewMatrix(pos, pos + 0.1 * camera_vector, up_vector)

    img = self._pybullet_client.getCameraImage(256, 256, self.view_matrix_gripper, self.projectionMatrix, shadow=0, flags = self._pybullet_client.ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX, renderer=self._pybullet_client.ER_BULLET_HARDWARE_OPENGL)
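
For reference, getCameraImage returns a tuple of buffers, so the image and the segmentation mask can be pulled out like this (a sketch; with the segmentation flag above, each mask value packs the object id and link index together):

import numpy as np

width, height, rgb, depth, seg = img
rgb = np.reshape(rgb, (height, width, 4))[:, :, :3]   # drop the alpha channel
seg = np.reshape(seg, (height, width))                # per-pixel object/link ids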

Ok I’ve got the visuals now, but I shouldn’t be seeing that shadow

The camera is like 90 degrees off maybe. Could be an issue with the camera setup, or maybe the URDF setup? Ok…

Changing the initial camera vector fixed the view somewhat:

    init_camera_vector = (0, 0, 1) # z-axis

Except that we’re looking backwards now.

init_camera_vector = (0, 0, -1) # negative z-axis

Ok well it’s correct now, but heh, hmm. Might need to translate the camera just a bit higher.

I found a cool free chicken obj file with Creative commons usage. And an egg.

Heh need to resize obj files. Collision physics is fun.

Ok I worked out how to move the camera a bit higher.

    pos = list(pos) 
    pos[2] += 0.3
    pos = tuple(pos)

Alright! Getting somewhere.

So, next, I add resized eggs and some chickens for good measure, to the scene.

Then we need to train it to stick its shnoz on the eggs.

Ok… gonna have to train this sucker now.

First, the table is falling from the sky, so I might need to stabilize it first. I also need to randomize the egg location a bit.

And I want to minimize the distance between the gripper attachment and the egg.

The smart way is probably to have it walk until some condition and then grasp, but in the spirit of letting the robot learn things by itself, I will probably ignore heuristics. If I do decide to use heuristics, it will probably be a finite state machine with ‘walking’ mode and ‘gripping’ mode. But we’ll come back to this when it’s necessary. Most of the time there won’t be any eggs in sight. So it will just need to walk around until it is sure there is an egg somewhere in sight.
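
For what it’s worth, that two-mode state machine would be something as dumb as this. Purely hypothetical; the names and threshold are invented:

WALKING, GRIPPING = 'walking', 'gripping'

def choose_mode(egg_pixel_count, threshold=500):
    # stay in walking mode until enough egg pixels show up in the prediction mask,
    # then hand over to the gripping behaviour
    return GRIPPING if egg_pixel_count > threshold else WALKING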

Ok I’ve added a random egg to the scene

self.rng = default_rng()
egg_position = np.r_[self.rng.uniform(-20, 20, 2), 0.1]
egg_orientation = transformations.random_quaternion(self.rng.random(3))
self._egg_mesh = self._pybullet_client.loadURDF("%s/egg.urdf" % self._urdf_root, egg_position, egg_orientation, globalScaling=0.1)
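
Given the egg body id above, the ‘minimize the distance between the gripper attachment and the egg’ idea from earlier could be a reward term as simple as this. A sketch only; the method name and the lack of any weighting are placeholders:

import numpy as np

def _egg_distance_reward(self):
    # world positions of the gripper tip (end effector link) and of the egg body
    tip_pos = self._pybullet_client.getLinkState(self.quadruped, self._end_effector_index)[0]
    egg_pos, _ = self._pybullet_client.getBasePositionAndOrientation(self._egg_mesh)
    distance = np.linalg.norm(np.array(tip_pos) - np.array(egg_pos))
    return -distance   # closer to the egg means higher reward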

And the end effector’s position should be something like the original camera position before we moved it up a bit, plus the length of the end effector in the URDF (0.618). I ended up doing this:

    pos = [pos[0] + 0.5*camera_vector[0], 
           pos[1] + 0.5*camera_vector[1], 
           pos[2] + 0.5*camera_vector[2]]
    pos = tuple(pos) 

And it’s closer to the tip now. But yeah. I will start a new post, Simulation Vision.