We’ve got an egg in the gym environment now, so we need to collect some data for training the robot to go pick up an egg.
I’m going to have it save the RGBA, depth and segmentation images to disk for U-Net training. I’ve left the depth images out for now, since they don’t look useful, but some papers do use depth, so I might reconsider. One weed-robot paper uses 14-channel images with all sorts of extra domain-specific data relevant to plants.
I wrote some code to take pictures whenever the egg was in the viewport, and it captured around 1000 RGB and segmentation images. I need to change the colour of the egg for sure, and probably randomize all the textures a bit, but the main thing is probably to make the segmentation masks use pixel values 0, 1, 2, etc., so that it detects the egg rather than the robot link in the foreground.
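A rough sketch of what that capture step looks like (the egg’s body id, the camera matrices and the output paths here are placeholders, not the exact values from my script):

```python
import numpy as np
import pybullet as p
from PIL import Image

WIDTH, HEIGHT = 256, 256
EGG_UID = 3  # placeholder: the unique id pybullet returned when the egg was loaded

def capture_if_egg_visible(frame_idx, view_matrix, proj_matrix, out_dir="train_data"):
    # Render the camera and only save the frame if the egg shows up in the mask.
    _, _, rgba, depth, seg = p.getCameraImage(
        WIDTH, HEIGHT, viewMatrix=view_matrix, projectionMatrix=proj_matrix)
    seg = np.reshape(seg, (HEIGHT, WIDTH))
    if EGG_UID not in np.unique(seg):
        return False  # egg not in the viewport, skip this frame
    rgb = np.reshape(rgba, (HEIGHT, WIDTH, 4))[:, :, :3].astype(np.uint8)
    Image.fromarray(rgb).save(f"{out_dir}/{frame_idx}_rgb.jpg")
    Image.fromarray(seg.astype(np.uint8)).save(f"{out_dir}/{frame_idx}_seg.png")
    return True
```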
So that means switching from sigmoid to softmax, and so on. Switching to multi-class also raises the question of whether to move to PyTorch and COCO panoptic segmentation based training. That will have to happen eventually, since I think all of the fastest implementations are currently PyTorch and COCO based. Keras might work fine for multi-class or multiple binary classifications, but this is the first attempt: something that works, more proof of concept than final implementation. I think Keras will be good enough for these in-simulation 256×256 images.
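For reference, the difference on the Keras side is basically the final layer’s activation and the loss; a minimal sketch (not the exact keras_unet internals):

```python
from tensorflow.keras import layers, losses

# Binary segmentation: one output channel, sigmoid + binary cross-entropy
binary_head = layers.Conv2D(1, (1, 1), activation="sigmoid")
binary_loss = losses.BinaryCrossentropy()

# Multi-class segmentation: one channel per class, softmax + categorical cross-entropy
NUM_CLASSES = 5  # e.g. plane, egg, robot, chicken, sky
multi_head = layers.Conv2D(NUM_CLASSES, (1, 1), activation="softmax")
multi_loss = losses.CategoricalCrossentropy()  # SparseCategoricalCrossentropy if masks stay integer-encoded
```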
Regarding multi-class segmentation, karolzak says “it’s just a matter of changing num_classes argument and you would need to shape your mask in a different way (layer per class??), so for multiclass segmentation you would need a mask of shape (width, height, num_classes)”.
I’ll keep logging my debugging though, if you’re reading this.
So I ran segmask_linkindex.py to see what it does, and how to get more useful data. The code wasn’t running because the segmentation image is actually an array of arrays, presumably a numpy array of rows and columns. So I added a second, inner loop and printed the pixel values, and when I ran it in the one mode I got:
-1
-1
-1
83886081 obUid= 1 linkIndex= 4
83886081 obUid= 1 linkIndex= 4
1 obUid= 1 linkIndex= -1
1 obUid= 1 linkIndex= -1
16777217 obUid= 1 linkIndex= 0
16777217 obUid= 1 linkIndex= 0
-1
-1
-1

And in the other mode:

-1
-1
-1
1 obUid= 1 linkIndex= -1
1 obUid= 1 linkIndex= -1
1 obUid= 1 linkIndex= -1
-1
-1
-1
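Those big numbers make sense once you see how pybullet packs them: with the ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX flag, each pixel encodes both the object unique id and the link index. Decoding it looks roughly like this:

```python
# pixel = objectUniqueId + ((linkIndex + 1) << 24); a value of -1 means nothing was hit
def decode_seg_pixel(pixel):
    if pixel < 0:
        return None, None            # background
    ob_uid = pixel & ((1 << 24) - 1)
    link_index = (pixel >> 24) - 1   # -1 is the base link
    return ob_uid, link_index

print(decode_seg_pixel(83886081))  # (1, 4) -> "obUid= 1 linkIndex= 4"
print(decode_seg_pixel(16777217))  # (1, 0) -> "obUid= 1 linkIndex= 0"
```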
Ok, I see. Hmm. Well, the important thing is that this code is indeed for extracting the per-pixel object information. I think it’s going to be best for the segmentation to use the simpler segmentation mask that doesn’t track the link info. The code I borrowed from that thesis project was interpolating the numbers; when I look at the unique elements of the mask without interpolation, I’ve got…
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   1   2 255]
[  0   1   2 255]
[  0   2 255]
[  0   2 255]

Ok, so I think: 255 is the sky, 0 is the plane, 2 is the robotable, and 1 is the egg.
So yeah, I was just confused because the segmentation masks were all black and white. But if you look closely with a pixel picker tool, the pixel values are (0,0,0), (1,1,1), (2,2,2), (255,255,255), so I just couldn’t see it.
The interpolation kinda helps, to be honest.
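For eyeballing the masks, a quick hack like this (assuming only a handful of class ids per image) spreads the values out so they’re actually visible:

```python
import numpy as np
from PIL import Image

def visualize_mask(seg, path="seg_vis.png"):
    # Stretch whatever class ids are present across the full 0-255 range.
    ids = np.unique(seg)
    lut = {v: int(i * 255 / max(len(ids) - 1, 1)) for i, v in enumerate(ids)}
    vis = np.vectorize(lut.get)(seg).astype(np.uint8)
    Image.fromarray(vis).save(path)
```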
As per OpenAI’s work on domain randomization helping with Sim2Real, we want to randomize some textures and some other things like that. I also want to throw in some random chickens, and maybe some cats and dogs. I’m wary of transfer learning at this stage, because a lot of it involves changing the structure of the final layer of the neural network, and that might be tough. Let’s just do chickens and eggs.
An excerpt from OpenAI:
Costs
Both techniques increase the computational requirements: dynamics randomization slows training down by a factor of 3x, while learning from images rather than states is about 5-10x slower.
Ok, that’s a bit more complex than I was expecting. I want to randomize textures and colours first.
I’ve downloaded and unzipped the ‘Describable Textures Dataset’, and now it’s loading a random texture for the plane and a random colour for the egg and chicken.
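A minimal sketch of that randomization, assuming the DTD images sit under a local dtd/ folder and that plane_uid, egg_uid and chicken_uid are the body ids returned when those models were loaded:

```python
import glob
import random
import pybullet as p

# Describable Textures Dataset, unzipped locally (the path is an assumption)
texture_paths = glob.glob("dtd/images/**/*.jpg", recursive=True)

def randomize_appearance(plane_uid, egg_uid, chicken_uid):
    # random texture on the ground plane
    tex = p.loadTexture(random.choice(texture_paths))
    p.changeVisualShape(plane_uid, -1, textureUniqueId=tex)
    # random colours for the egg and the chicken (link index -1 = base link)
    for uid in (egg_uid, chicken_uid):
        p.changeVisualShape(uid, -1,
                            rgbaColor=[random.random(), random.random(), random.random(), 1])
```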
Ok, next thing is the Simulation CNN.
Interpolation doesn’t work for this, though, because it interpolates from whatever values are available in the image:
[  0  85 170 255]
[  0  63 127 191 255]
[  0  63 127 191 255]

I kind of need the basic UID segmentation:

[  0   1   2   3 255]

Ok, pity about the mask colours, but anyway.
Let’s train the UNet on the new dataset.
We’ll need to make the changes karolzak described.
I’ve saved 2000+ rgb.jpg and seg.png files, and the mask values are [0, 1, 2, 3, 255] for [plane, egg, robot, chicken, sky].
So num_classes=5, and “for multiclass segmentation you would need a mask of shape (width, height, num_classes)”.
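So the model setup might look something like this; num_classes comes straight from that quote, but the other argument names are from memory of the keras_unet API, so treat them as assumptions:

```python
from keras_unet.models import custom_unet

# One output channel per class: plane, egg, robot, chicken, sky
model = custom_unet(
    input_shape=(256, 256, 3),
    num_classes=5,
    output_activation="softmax",   # assumed; per-channel sigmoid also works with binary masks
)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```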
What is y.shape?

(2001, 256, 256, 1)

That’s 2001 files of 256 × 256 pixels with one channel. So if I just change that to 5…?

ValueError: cannot reshape array of size 131137536 into shape (2001,256,256,5)

Right: 2001 × 256 × 256 × 1 is only 131,137,536 values, so there’s nothing to fill the extra four channels with.
Um… Ok I need to do more research. Brb.
So the keras_unet library is set up to input binary masks per class, and output binary masks per class.
I would rather use the ‘integer’ class output and have it produce a single array with the class id per pixel, similar to this question. In case karolzak doesn’t know how to do this with his library, I’ve asked on Stack Overflow, in the meantime, for an elegant way to make the binary masks from a multi-class mask.
I coded it up using the library author’s suggested method, as he pointed out that the gains of the integer encoding method are minimal. I’ll check it out another time. I think it might still make sense for certain cases.
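Roughly, that conversion looks like this (a sketch of the idea, not the exact code I ended up with; the class ids follow the values listed above):

```python
import numpy as np

CLASS_IDS = [0, 1, 2, 3, 255]   # plane, egg, robot, chicken, sky

def to_binary_masks(seg):
    """(H, W) integer mask -> (H, W, num_classes) stack of binary masks."""
    return np.stack([(seg == cid).astype(np.float32) for cid in CLASS_IDS], axis=-1)

def to_integer_mask(binary):
    """Inverse: collapse (H, W, num_classes) back to per-pixel class indices."""
    return np.argmax(binary, axis=-1)
```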
Ok, that’s pretty awesome. We have 4 masks: human, chicken, egg, robot. I left out plane and sky for now. That was just 2000 images of training, and I have 20,000. I trained on another 2000 images, and it’s down to 0.008 validation loss, which is good enough!
So now I want to load the CNN model in the locomotion code, and feed it the images from the camera, and then have a reward function related to maximizing the egg pixels.
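A first sketch of that reward, where the model path, the camera call and the egg-channel index are all placeholders:

```python
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("unet_egg.h5")   # placeholder path for the trained U-Net
EGG_CHANNEL = 1                     # assumed index of the egg mask in the model output

def egg_pixel_reward(rgb):
    """Reward = fraction of camera pixels the network classifies as egg."""
    x = rgb.astype(np.float32)[np.newaxis] / 255.0   # (1, 256, 256, 3)
    pred = model.predict(x, verbose=0)[0]            # (256, 256, num_classes)
    egg_mask = pred[..., EGG_CHANNEL] > 0.5
    return float(egg_mask.mean())
```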
I also need to look at the pybullet-planning project and see what it consists of, as I imagine they’ve made some progress on the next steps. “built-in implementations of standard motion planners, including PRM, RRT, biRRT, A* etc.” – I haven’t even come across these acronyms yet! Ok, they are motion planning. Solvers of some sort. Hmm.