Categories
3D Research AI/ML arxiv control form Locomotion robots

Kinematic Motion Primitives

This post follows on from the ‘Finding where we left off’ post, which focused on locomotion sim2real. In that post I tried to generalise and smooth the leg angle servo movements within their -PI/2 to PI/2 range.

I will likely try extracting kMPs before this is all over. From a skim read, and a look at the pictures, it seems to amount to taking a single slice of the wave data and repeating that, or taking consecutive periodic waves and extracting the average / normalised movement from them.

https://becominghuman.ai/introduction-to-timeseries-analysis-using-python-numpy-only-3a7c980231af

Cheetah-cub leg mechanism, and leg compliance. A single leg is shown abstracted, detailed leg segment ratios are omitted for clarity, robot heading direction is to the left. (1) shows the three leg angles αprox, αmid, and αdist. Hip and knee RC servo motors are mounted proximally, the leg length actuation is transmitted by a cable mechanism. The pantograph structure was inspired by the work of Witte et al. (2003) and Fischer and Blickhan (2006). (2) The foot segment describes a simplified foot-locus, showing the leg in mid-swing. For ground clearance, the knee motor shortens the leg by pulling on the cable mechanism (green, Fcable). Fdiag is the major, diagonal leg spring. Its force extends the pantograph leg, against gravitational and dynamic forces. (3) The leg during mid-stance. (4) In case of an external translational perturbation, the leg will be compressed passively. (5) If an external perturbation torque applies e.g., through body pitching, the leg linkage will transmit it into a deflection of the parallel spring, not of the diagonal spring.
Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs (Alexander Badri-Spröwitz et al, 2014)

It’s now December 6th 2021, as I continue here…

This paper is very relevant, “Realizing Learned Quadruped Locomotion Behaviors through Kinematic Motion Primitives”

Some Indian PhDs have summed up the process. Unfortunately I’m not quite on the exact same page. I understand the pictures, haha.

Here’s where this picture comes from, which is useful for explaining what I need to do: (Short paper)

There is also the same idea from 2014: Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs.

They just used PCA. (Principal Component Analysis). That’s like a common ML toolkit thing.

Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs (2014)

See now this is where they lose me: “The covariance matrix of the normalized dataset”. Come on guys. Throw us a bone.

I found this picture, which is worth 1000 words, in the discussion on stackexchange about PCA and SVD:

Rotating PCA animation

So, I’m not quite ready for PCA. That is two dimensions, anyway. Oh right, so I need to add a ‘time’ dimension. numpy’s expand_dims?

I played around with Codex, to assist with finding the peaks, and to find the period length.

And I separated them out into different plots… and got the peaks matching once I passed in ( , distance=80).
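Roughly, that step looks like this (a minimal sketch on a synthetic stand-in signal, since the real data came out of the logs; the function was presumably scipy.signal.find_peaks, which is what takes a distance argument):

import numpy as np
from scipy.signal import find_peaks

# synthetic stand-in for one leg's angle trace
t = np.arange(3000)
leg_angles = np.sin(2 * np.pi * t / 130) + 0.1 * np.random.randn(3000)

peaks, _ = find_peaks(leg_angles, distance=80)   # ignore peaks closer than 80 samples apart
print("peak indices:", peaks[:5])
print("estimated period:", np.diff(peaks).mean())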

I had to install these, and restart the Jupyter kernel (and, I think, close and restart the Chrome tab) in order to get some matplotlib widgets.

Error message:
Jupyter Lab: Error displaying widget: model not found



!pip3 install --upgrade jupyterlab ipympl
%matplotlib widget
The matplotlib slider example (image thereof)

I started on a slider widget to draw a vertical line on top of the leg data, but I need to fix the refresh issue. Anyhow, it’s not quite what I want. What do I want?

So, I want the kMPs. The kMPs are like, a gif of a basic action, e.g. robot taking a full step forward, on all legs, which we can run once, twice, etc.

We can ‘average’ or ‘normalise’ or ‘phase’ the waves, and assume that gives us a decent average step forward.

I think there’s enough variation in this silly simulation walk that we should start with just the simplest, best single wave.

But since they ran PCA, let’s run it to see what it does for the data. We have a single integer value, which is 1D. To make it 2D, so we can run PCA on it… we add a time dimension?
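A minimal sketch of what I think the PCA step looks like, on synthetic stand-in leg signals stacked as columns (so ‘time’ is just the row index):

import numpy as np
from sklearn.decomposition import PCA

# synthetic stand-in: four leg-angle traces as columns -> shape (timesteps, 4)
t = np.linspace(0, 20 * np.pi, 3000)
legs = np.column_stack([np.sin(t), np.sin(t + 1.5), np.sin(t + 3.0), np.sin(t + 4.5)])

pca = PCA(n_components=2)
primitives = pca.fit_transform(legs)       # dominant components over time, i.e. kMP-ish curves
print(pca.explained_variance_ratio_)       # how much of the motion each component explains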

Also, I measured the period, a few programs up, to be:

67 steps (front right),

40 steps (front left),

59 steps (back right),

42 steps (back left).

So, as a starting point, it would be nice to start as close as possible to all servos at 90 degrees. If I iterate over the values and track the lowest summed difference from 90, yeah… is that it? I’m looking at this link on SO.
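A sketch of that ‘lowest sum diff’ idea (angles_deg is a hypothetical stand-in for the (timesteps, 4) array of servo angles in degrees):

import numpy as np

# hypothetical stand-in for the (timesteps, 4) servo angle data, in degrees
angles_deg = np.random.uniform(0, 180, size=(3000, 4))

diff_from_90 = np.abs(angles_deg - 90).sum(axis=1)   # summed distance from neutral at each timestep
best_index = int(np.argmin(diff_from_90))
print(best_index, diff_from_90[best_index])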

Ideally I could visualise the options…

Repeating a slice. Averaging the slices.

Ok, so I need a start index, end index, to index a range.

After some investigation, the index where the legs are closest to 90 degrees is 1739.

Computer Enhance

So that’s kinda close to our ideal kMPs, from about 1739 to about 1870 maybe, but clearly the data is messy. Could be tweaked. Wavetable editor, basically.

Alright, let’s make an app. We can try running a Flask server on the Pi, with a Javascript front end using chart.js.

pip3 install flask

Save the test web app, kmpapp.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello world'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0') 

python3 kmpapp.py

Ok, good start. We need to get the x and y data into JSON so Javascript can plot it with chart.js.
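Something like this is what I have in mind (a sketch; the /data route and the .npy filename are placeholders, not the final app):

from flask import Flask, jsonify
import numpy as np

app = Flask(__name__)

@app.route('/data')
def data():
    # placeholder filename; really we'd serve whichever leg trace the front end asks for
    y = np.load('angle_front_right_180.npy').tolist()
    x = list(range(len(y)))
    return jsonify({'x': x, 'y': y})

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')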

That’s looking good. Maybe too many points. Ok, so I want to edit, save, and run the KMPs on the robot.

Well it took a day but it’s working, and is pretty cool. Used smooth.js to allow smoother transitions. Took another day to add save and load features.

I’ll upload this to the project repo.

Many improvements added. Will update repo again closer to MFRU.

Categories
3D Research AI/ML CNNs deep dev envs evolution GANs Gripper Gripper Research Linux Locomotion sexing sim2real simulation The Sentient Table UI Vision

Simulation Vision

We’ve got an egg in the gym environment now, so we need to collect some data for training the robot to go pick up an egg.

I’m going to have it save the rgba, depth and segmentation images to disk for Unet training. I left out the depth image for now. The pictures don’t look useful. But some papers are using the depth, so I might reconsider. Some weed bot paper uses 14-channel images with all sorts of extra domain specific data relevant to plants.

I wrote some code to take pics if the egg was in the viewport, and it took 1000 rgb and segmentation pictures or so. I need to change the colour of the egg for sure, and probably randomize all the textures a bit. But the main thing is probably to make the segmentation layers with pixel values 0, 1, 2, etc., so that it detects the egg and not so much the link in the foreground.

So, sigmoid to softmax and so on. Switching to multi-class also raises the question of whether to switch to Pytorch & COCO panoptic segmentation based training. It will have to happen eventually, as I think all of the fastest implementations are currently in Pytorch and COCO based. Keras might work fine for multiclass or multiple binary classification, but it’s sort of the beginning attempt. Something that works. More proof of concept than final implementation. But I think Keras will be good enough for these in-simulation 256×256 images.

Regarding multi-class segmentation, karolzak says “it’s just a matter of changing num_classes argument and you would need to shape your mask in a different way (layer per class??), so for multiclass segmentation you would need a mask of shape (width, height, num_classes)”.

I’ll keep logging my debugging though, if you’re reading this.

So I ran segmask_linkindex.py to see what it does, and how to get more useful data. The code wasn’t running as-is, because the segmentation image is actually an array of arrays. I presume it’s a numpy array; I think it must be the rows and columns. So anyway I added a second layer to the loop, and output the pixel values, and when I ran it in the one mode:

-1
-1
-1
83886081
obUid= 1 linkIndex= 4
83886081
obUid= 1 linkIndex= 4
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
16777217
obUid= 1 linkIndex= 0
16777217
obUid= 1 linkIndex= 0
-1
-1
-1

And in the other mode

-1
-1
-1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
-1
-1
-1

Ok I see. Hmm. Well the important thing is that this code is indeed for extracting the pixel information. I think it’s going to be best for the segmentation to use the simpler segmentation mask that doesn’t track the link info. Ok so I used that code from the guy’s thesis project, and that was interpolating the numbers. When I look at the unique elements of the mask without interpolation, I’ve got…

[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   1   2 255]
[  0   1   2 255]
[  0   2 255]
[  0   2 255]

Ok, so I think:

255 is the sky
0 is the plane
2 is the robotable
1 is the egg

So yeah, I was just confused because the segmentation masks were all black and white. But if you look closely with a pixel picker tool, the pixel values are (0,0,0), (1,1,1), (2,2,2), (255,255,255), so I just couldn’t see it.

The interpolation kinda helps, to be honest.
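For reference, those big numbers in the link-index mode seem to decode like this (a sketch, assuming pybullet packs the object uid and link index into one integer when the ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX flag is used, which matches the printout above):

def decode_seg_value(value):
    # -1 means the pixel hit nothing
    if value < 0:
        return None, None
    obj_uid = value & ((1 << 24) - 1)
    link_index = (value >> 24) - 1
    return obj_uid, link_index

print(decode_seg_value(83886081))   # (1, 4)  -> obUid=1, linkIndex=4
print(decode_seg_value(16777217))   # (1, 0)  -> obUid=1, linkIndex=0
print(decode_seg_value(1))          # (1, -1) -> obUid=1, base link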

As per OpenAI’s domain randomization helping with Sim2Real, we want to randomize some textures and some other things like that. I also want to throw in some random chickens. Maybe some cats and dogs. I’m afraid of transfer learning, at this stage, because a lot of it has to do with changing the structure of the final layer of the neural network, and that might be tough. Let’s just do chickens and eggs.

An excerpt from OpenAI:

Costs

Both techniques increase the computational requirements: dynamics randomization slows training down by a factor of 3x, while learning from images rather than states is about 5-10x slower.

Ok, that’s a bit more complex than I was thinking. I want to randomize textures and colours first.

I’ve downloaded and unzipped the ‘Describable Textures Dataset’

And ok it’s loading a random texture for the plane

and a random colour for the egg and chicken.
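Roughly what I mean, as a sketch (the DTD path is a placeholder, and the same changeVisualShape call is what randomizes the egg and chicken colours):

import glob
import random

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
plane_id = p.loadURDF("plane.urdf")

# random texture from the (placeholder path to the) Describable Textures Dataset
texture_paths = glob.glob('dtd/images/*/*.jpg')
if texture_paths:
    texture_id = p.loadTexture(random.choice(texture_paths))
    p.changeVisualShape(plane_id, -1, textureUniqueId=texture_id)

# random tint; the same call with the egg/chicken body ids randomizes their colours
p.changeVisualShape(plane_id, -1, rgbaColor=[random.random(), random.random(), random.random(), 1])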

Ok, next thing is the Simulation CNN.

Interpolation doesn’t work for this, though, because it interpolates from what’s available in the image:

[  0  85 170 255]
[  0  63 127 191 255]
[  0  63 127 191 255]

I kind of need the basic UID segmentation.

[  0   1   2   3 255]

Ok, pity about the mask colours, but anyway.

Let’s train the UNet on the new dataset.

We’ll need to make karolzak’s changes.

I’ve saved 2000+ rgb.jpg and seg.png files and we’ve got [0,1,2,3,255] [plane, egg, robot, chicken, sky]

So num_classes=5

And

“for multiclass segmentation you would need a mask of shape (width, height, num_classes) “

What is y.shape?

(2001, 256, 256, 1)

which is 2001 files, of 256 x 256 pixels, and one class. So if I change that to 5…? ValueError: cannot reshape array of size 131137536 into shape (2001,256,256,5)

Um… Ok I need to do more research. Brb.

So the keras_unet library is set up to input binary masks per class, and output binary masks per class.

I would rather use the ‘integer’ class output, and have it output a single array with the class id per pixel. Similar to this question. In case karolzak doesn’t know how to do this with his library, I’ve asked on stackoverflow, in the meantime, for an elegant way to make the binary masks from a multi-class mask.
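This is the kind of conversion I mean (a sketch, assuming the integer mask values [0, 1, 2, 3, 255] from above):

import numpy as np

def mask_to_binary_layers(mask, class_values=(0, 1, 2, 3, 255)):
    # (height, width) integer mask -> (height, width, num_classes) stack of 0/1 masks
    return np.stack([(mask == v).astype(np.uint8) for v in class_values], axis=-1)

mask = np.random.choice([0, 1, 2, 3, 255], size=(256, 256))
print(mask_to_binary_layers(mask).shape)   # (256, 256, 5)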

I coded it up using the library author’s suggested method, as he pointed out that the gains of the integer encoding method are minimal. I’ll check it out another time. I think it might still make sense for certain cases.

Ok that’s pretty awesome. We have 4 masks. Human, chicken, egg, robot. I left out plane and sky for now. That was just 2000 images of training, and I have 20000. I trained on another 2000 images, and it’s down to 0.008 validation loss, which is good enough!

So now I want to load the CNN model in the locomotion code, and feed it the images from the camera, and then have a reward function related to maximizing the egg pixels.
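As a first pass, the reward could just be the fraction of egg pixels in the camera’s segmentation image (a sketch; egg_uid is a placeholder for the egg body’s id, and it assumes the plain object-id segmentation mask):

import numpy as np

def egg_pixel_reward(segmentation, egg_uid):
    # segmentation: the integer mask from pybullet's getCameraImage
    seg = np.asarray(segmentation)
    return np.count_nonzero(seg == egg_uid) / seg.size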

I also need to look at the pybullet-planning project and see what it consists of, as I imagine they’ve made some progress on the next steps. “built-in implementations of standard motion planners, including PRM, RRT, biRRT, A* etc.” – I haven’t even come across these acronyms yet! Ok, they are motion planning. Solvers of some sort. Hmm.

Categories
3D 3D prototypes 3D Research control dev envs Gripper gripper prototypes Gripper Research Locomotion robots Vision

Gripper simulation

I’ve been scouring for existing code to help with developing the gripper in simulation. I was looking for a way to implement ‘eye-in-hand’ visual servoing, and came across a good resource, created for a masters thesis, which shows a ‘robot vision’ window and compares depth sensing algorithms. My approach was going to be, essentially, segmentation, in order to detect and localise chickens and eggs in the field of vision, then get their shape into an X-Y coordinate position, and once over a certain size, initiate interaction.

This one uses an SDF model of a KUKA industrial 6 DOF robot with a two-finger gripper, but that has specific rotational movement that seems different from a simpler robot arm. So it’s maybe a bit overkill, and I might just want to see his camera code.

Miranda’s gripper prototype isn’t a $50k KUKA industrial robot arm. It’s just v0.1, and it’s got an 11kg/cm MG945, some 5kg/cm MG5010s, a 1.3kg/cm SG90, and a sucker contraption I found on DFRobot, that can suck eggs.

So, regarding the simulation, this will be on top of the robot, as its head.

So we need a URDF file. Or an SDF file. There are a couple of ways to go with this.

The other resource I’ve found that looks like just what I need, is ur5pybullet

Regarding the ‘visual servoing’, the state of the art appears to be QT-Opt, perhaps. Or maybe RCAN, built on top of it. But we’re not there just yet. Another project specifically uses pybullet. Some extra notes here, from Sergey Levine, and co., associated with most of these projects.

Another good one is Retina-GAN, where they convert both simulation and reality into a canonical format. I’ve also come across Dex-Net before, from UCB.

Everything is very complicated though.

I’ve managed to make a URDF that looks good enough to start with, though. I’ll put everything in a github repo. We want to put two servos on the ‘head’ for animatronic emotional aesthetics, but there’s a sucker contraption there for the egg, so I think this is good enough for simulation, for now, anyway. I just need to put a camera on its head, put some eggs in the scene, and maybe reward stable contact with the tip. Of course it’s going to be a lot of work.

We also want to add extra leg parts, but I don’t want to use 4 more motors on it.

So I’m playing around with some aluminium and timing belts and pulleys to get 8 leg parts on 4 motors. Something like this, with springs if we can find some.

So, simulator camera vision. I can enable the GUI. Turns out I just need to press ‘g’ to toggle.

self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_RENDERING, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_GUI, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_SEGMENTATION_MARK_PREVIEW, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
self._pybullet_client.configureDebugVisualizer(self._pybullet_client.COV_ENABLE_RGB_BUFFER_PREVIEW, 1)

I’ve added the gripper, and now I’m printing out the _control_observation, because I need to work out what is in an observation.

self.GetTrueMotorAngles()
0.18136442583543283, 0.4339093246887722, -0.25269494256467184, 0.32002873424829736, -0.6635045784503064, 1.5700002984158676, -1.5700000606174402, -0.2723645141027962,

self.GetTrueMotorVelocities()
0.451696256678765, 0.48232988947216504, -4.0981980703534395, 0.4652986924553241, 0.3592921211587608, -6.978131098967118e-06, 1.5237597481713495e-06, -10.810712328063294,

self.GetTrueMotorTorques()
-3.5000000000000004, -3.5000000000000004, 3.5000000000000004, -3.5000000000000004, 3.5000000000000004, -3.5000000000000004, 3.5000000000000004, 3.5000000000000004, 

self.GetTrueBaseOrientation()
-0.008942336195953221, -0.015395612988274186, 0.00639837318132646, 0.9998210192552996, 

self.GetTrueBaseRollPitchYawRate()
-0.01937158793669886, -0.05133982438770338, 0.001050170752804882]

Ok so I need the link state of the end effector (8th link), to get its position and orientation.

    state = self._pybullet_client.getLinkState(self.quadruped, self._end_effector_index)
    pos = state[0]
    orn = state[1]

    print(pos)
    print(orn)

(0.8863188372297804, -0.4008813832608453, 3.1189486984341848)

(0.9217446940545668, 0.3504950513334899, -0.059006227834041206, -0.1551070696318658)

Since the orientation is 4-dimensional, it’s a quaternion, so I can convert it to a rotation matrix:

  def gripper_camera(self, state):
    pos = state[0]
    ori = state[1]


    rot_matrix = self._pybullet_client.getMatrixFromQuaternion(ori)

    rot_matrix = np.array(rot_matrix).reshape(3, 3)
    
# Initial vectors
    init_camera_vector = (1, 0, 0) # x-axis of the gripper link frame
    init_up_vector = (0, 1, 0) # y-axis
    
# Rotated vectors
    camera_vector = rot_matrix.dot(init_camera_vector)
    up_vector = rot_matrix.dot(init_up_vector)

    self.view_matrix_gripper = self._pybullet_client.computeViewMatrix(pos, pos + 0.1 * camera_vector, up_vector)

    img = self._pybullet_client.getCameraImage(256, 256, self.view_matrix_gripper, self.projectionMatrix, shadow=0, flags = self._pybullet_client.ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX, renderer=self._pybullet_client.ER_BULLET_HARDWARE_OPENGL)

Ok I’ve got the visuals now, but I shouldn’t be seeing that shadow

The camera is like 90 degrees off maybe. Could be an issue with the camera setup, or maybe the URDF setup? Ok…

Changing the initial camera vector fixed the view somewhat:

    init_camera_vector = (0, 0, 1) # z-axis

Except that we’re looking backwards now.

init_camera_vector = (0, 0, -1) # negative z-axis

Ok well it’s correct now, but heh, hmm. Might need to translate the camera just a bit higher.

I found a cool free chicken obj file with Creative commons usage. And an egg.

Heh need to resize obj files. Collision physics is fun.

Ok I worked out how to move the camera a bit higher.

    pos = list(pos) 
    pos[2] += 0.3
    pos = tuple(pos)

Alright! Getting somewhere.

So, next, I add resized eggs and some chickens for good measure, to the scene.

Then we need to train it to stick its shnoz on the eggs.

Ok… gonna have to train this sucker now.

First, the table is falling from the sky, so I might need to stabilize it first. I also need to randomize the egg location a bit.

And I want to minimize the distance between the gripper attachment and the egg.

The smart way is probably to have it walk until some condition and then grasp, but in the spirit of letting the robot learn things by itself, I will probably ignore heuristics. If I do decide to use heuristics, it will probably be a finite state machine with ‘walking’ mode and ‘gripping’ mode. But we’ll come back to this when it’s necessary. Most of the time there won’t be any eggs in sight. So it will just need to walk around until it is sure there is an egg somewhere in sight.

Ok I’ve added a random egg to the scene

self.rng = default_rng()
egg_position = np.r_[self.rng.uniform(-20, 20, 2), 0.1]
egg_orientation = transformations.random_quaternion(self.rng.random(3))
self._egg_mesh = self._pybullet_client.loadURDF("%s/egg.urdf" % self._urdf_root, egg_position, egg_orientation, globalScaling=0.1)

And the end effector’s position should be something like the original camera position before we moved it up a bit, plus the length of the end effector in the URDF (0.618). I ended up doing this:

    pos = [pos[0] + 0.5*camera_vector[0], 
           pos[1] + 0.5*camera_vector[1], 
           pos[2] + 0.5*camera_vector[2]]
    pos = tuple(pos) 

And it’s closer to the tip now. But yeah. I will start a new post, Simulation Vision.

Categories
form Hardware

SCARA

From a website called robots.com, which is a great URL, but which otherwise has broken links:


The SCARA or Selective Compliant Assembly (or Articulated) Robot Arm robot provides a circular work envelope.

Here’s from allaboutcircuits.com

Industrial Robotics Roundup:

Articulated vs. SCARA vs. Cartesian Robots

Hmm, interesting. I think we’re doing an Articulated build for the gripper. It will need Z axis movement. But SCARA is cool if you don’t need Z axis (up/down) movement.

Categories
AI/ML control dev Hardware hardware_ robots Vision

Jetson NX: A New Hope

After checking out the speed of image segmentation on the Raspberry Pi (like one frame every 10 seconds maybe?), and my i3 laptop not being much better, I realised I needed more computing power, at least to train the neural networks. I can probably still ultimately run the neural network on the Pi, but we’ll see.

Looking at computing options, I ultimately went with the $399 NVIDIA Jetson Xavier NX.

Developer Kit Technical Specifications
GPU: NVIDIA Volta architecture with 384 NVIDIA® CUDA® cores and 48 Tensor cores
CPU: 6-core NVIDIA Carmel ARM®v8.2 64-bit CPU, 6 MB L2 + 4 MB L3
DLA: 2x NVDLA Engines
Vision Accelerator: 7-Way VLIW Vision Processor
Memory: 8 GB 128-bit LPDDR4x, 51.2 GB/s
Storage: microSD (card not included)
Video Encode: 2x 4Kp30 | 6x 1080p60 | 14x 1080p30 (H.265/H.264)
Video Decode: 2x 4Kp60 | 4x 4Kp30 | 12x 1080p60 | 32x 1080p30 (H.265); 2x 4Kp30 | 6x 1080p60 | 16x 1080p30 (H.264)
Camera: 2x MIPI CSI-2 D-PHY lanes
Connectivity: Gigabit Ethernet, M.2 Key E (WiFi/BT included), M.2 Key M (NVMe)
Display: HDMI and DP
USB: 4x USB 3.1, USB 2.0 Micro-B
Others: GPIOs, I2C, I2S, SPI, UART
Mechanical: 103 mm x 90.5 mm x 34 mm

Also, did you know they made a dystopic reboot retcon of The Jetsons, the 1960s retro-futuristic Hanna-Barbera cartoon, in comic form? An ice meteor destroyed Earth. They were lucky to have had a place in space to go, working for Spacely Space Sprockets, incorporated. (YT link)

https://en.wikipedia.org/wiki/The_Jetsons

It took an hour to set up, and was mostly straightforward, though I had to get a ‘clover plug’ cable, and an SD card.

I used Etcher to load the latest 6GB Jetson Developer Kit SD image, and had a keyboard, mouse, and hdmi monitor that worked. So I was able to enter the wifi SSID and password while setting it up.

I learned that one option for headless installation is to use a USB cable from your computer to the micro-USB input of the Jetson. But ultimately this wasn’t necessary. I ran ifconfig on the Jetson, got an IP address, and connected with ssh.

After needing to change the wifi details, I used the usb cable, then connected with:

ssh chicken@192.168.55.1
sudo nmcli device wifi connect 'SSID' password 'PASSWORD'

Coming back to this later, I attempted the same, but with the Jetson Nano, instead of the Jetson Xavier, and it didn’t work. I learned that the Nano doesn’t come with a Wifi adapter.

I think with the Nano (“B01”) you need a monitor to install. I tried multiple tutorials: ssh’ing to 192.168.55.1, using screen to connect to /dev/ttyACM0 at 115200 baud, nope. Looked at the forums, and it’s complicated. I didn’t try the USB UART because my USB-TTL converter’s cable colours are different.

Another method that worked, with the nano, is plugging the ethernet cable from the Nano directly into the wifi router. It then shows up on the router’s network.

Later, when trying to install a D-Link ‘Wireless N Nano USB Adaptor’ (for the love of God, just get an Edimax – they work out of the box), I connected over ssh with the ethernet cable from the Jetson to the router, downloaded the driver, unzipped and untarred it, and then ran `make` and `make install` as per the instructions, though I had to run export ARCH=arm64 first, because it was looking in aarch64. Then rebooted. Then:

chicken@chicken:~$ sudo nmcli device wifi connect 'ssid' password 'password'
[sudo] password for chicken:
Device 'wlan0' successfully activated with '3a7997e6-c6b1-40f7-bf93-fba5b110282c'.

A lot of research will have to happen again now, though, because NVIDIA has its own software ecosystem. I’ll need a vision solution that is portable to the PI, with the hope that a model or neural net trained on the Jetson will still be able to run on the Raspberry Pi, since it’s 40X cheaper.

The lingo takes some time to get used to, but I believe JetPack is the name for this OS of preinstalled nvidia docs and libraries.

Since last year, there’s been a lot of noise about an algorithm called… a Transformer… which has just recently produced a hell of a chat bot, in GPT-3, and which underlies Google search as BERT (Bidirectional Encoder Representations from Transformers).

And there are hybrid convolutional nets and transformers, e.g. DETR, and there’s last year’s SOTA, EfficientNet, and then for some instances, or most, YOLOv4 is meant to be the new hot algorithm. It’s bigger than YOLOv3: more frames per second, though the accuracy (AP) is kinda so-so, about 5% less. I realised that YOLOv5, which I had seen, and which is a Pytorch implementation, is faster, though it’s technically just someone being a bit of a douchebag and calling his implementation of the original author’s peer-reviewed work ‘the next version’, YOLOv5. So what now?

Image for post

YOLOv4 vs. YOLOv5

https://github.com/ultralytics/yolov5#user-content-pretrained-checkpoints
it keeps getting better.

So, PyTorch vs. Tensorflow (vs. TensorRT).

NVIDIA has this 3d simulator environment in Unreal Engine! Isaac. Something like an API for robots, by NVIDIA. They got this robot working with it, apparently.

NVIDIA Carter — ISAAC 2020.2 documentation

It’s actually pretty good. I wonder if this https://developer.nvidia.com/deepstream-sdk is as cool as it sounds. Ah, closed source. Of course. But I can apply to join. Eh maybe.

So, I want to get these chickens into a convolutional neural network, or a transformer and output a pretty picture. I want the colour masks, not the bounding boxes.

I don’t want to get too caught up in proprietary NVIDIA-specific APIs, even if they have an Unreal Engine simulator. But it might be worth checking out. DeepStream is built on top of GStreamer, which is open source, so maybe it’s back on the menu.

But it’s a whole integrated thing. “The DeepStream SDK can be used to build end-to-end AI-powered applications to analyze video and sensor data. Some popular use cases are: retail analytics, parking management, managing logistics, robotics, optical inspection and managing operations.”

Nice. DeepStream supports several popular networks out of the box such as YOLO, FasterRCNN, SSD, RetinaNet and MaskRCNN.

I get the sense that Gems, in the Isaac/DeepStream world, are like ROS nodes, offering services on a port. ORB is a Gem. Ultimately, a prediction, or reconstruction in 3d, of the shape of objects in the world, would be ideal. I’m only doing the colour map stuff because the colours are nice, and it looks more impressive. But ultimately I will need to pick the best tool for the job.

NVIDIA also has DIGITS, the Deep Learning GPU Training System, which “puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train the highly accurate deep neural network (DNNs) for image classification, segmentation and object detection tasks.”

So as you can see, there’s more to find out. But ultimately I will probably have to repeat the task of getting labelled data into folders, and having the labels in the right format. Then generating the TFRecords, or doing whatever you do in PyTorch, I’m still biased to the TensorFlow ODI 2 implementation, because Google’s got the best dataset of chickens.

But lots to check out from NVIDIA.

The revelation of GStreamer seems to be one of the big wins here. It looks like an Apache Camel for video pipelines.

Categories
Hardware hardware_ Locomotion robots

Rotary Encoders and motors

We should check out rotary encoders and motors, while I’m in RSA. I’ve started looking at them now. There’s a wealth of info at a company that makes them, Dynapar.

Here’s the good stuff. Coupling: “Rotary encoders come in 3 major mounting styles: hollow-shaft (hollow-bore or through shaft), hub-shaft (hub-bore) or shafted. Hollow-shaft and hub-shaft rotary encoders mount directly to a motor shaft typically using a tether. Shafted rotary encoders mount using a flexible coupling.”

I asked a South African company about their Italian ‘Eltra‘ rotary encoders yesterday. Then Feetech from China asked this morning, if I needed servos. Well actually yes. I asked about their encoder motors, as they might be called, as a combination. They had 4.8V, 6V, and 7.4V such motors, whose encoders were of the magnetic measuring type, and give 12 bits of precision (2^12 = 4096, so 0 to 4095) as serial data.

The MG996Rs used for the basic table robot prototype were possibly adequate, but definitely on the hobby side. They felt more appropriate for a robot about half the size of the prototype. The prototype was successfully sat upon, by the chicken.

I started thinking about the practicalities of powering a Jetson NX (the devkit supports 9-20V), or a Jetson Nano (5V), or an RPi (5V), plus numerous motors. So it probably needs a battery in the 9-20V range, for the motors too. 14.4V is probably what lithium ion or similar batteries come as. You usually need to buy lithium ion batteries in the place where you’re going, rather than bring them on a plane. But you can always try. (Not a financial advisor (or otherwise)). They are probably still mostly imported for RC models (remote control, remember?).

But also, wow, on the topic of rotary encoders and servos, this guy achieved sub-millimetre accuracy with a 6 d.o.f. (degrees of freedom) robot arm.

He used his own interesting and as yet mysterious encoder, which he put inside a motor. He says his robot arm cost EUR 300. It’s super impressive.

The other option with power appears to be cool ‘just what you needed’ type products like DFRobot’s FIT0186, which has a built-in encoder! I ended up buying 4 x FIT0186s and 4 x FIT0185s (12V, 251RPM and 83RPM respectively).

I used an Arduino Nano, a DFRobot Dual motor driver, based on the TB6612 chip, plugging in the Nano for 5V, and 12V into the driver, to power the motors.

After much wiring up, the motors work fine, but the encoders do not. I fixed up their example code to use volatile variables, and run the minimal amount of code in the interrupt service routines (ISRs), and even used two interrupts for a single motor, one for each hall sensor output, and put a 0.1mF capacitor between signal and ground. Tried CHANGE/RISING/FALLING triggers. Tried different motors.

I’ve looked at every forum post, and tried everything, and now I’ve posted on DFRobot’s forum, in case someone has got the encoder working. Not looking promising.

So back to the MG996Rs, I find that I ordered two batches from Mantech, supposedly the same genuine product from Tower Pro, and yet the one batch does not work properly with the PCA9685, uses a lower minimum pulse, and moves in the opposite direction to the other ones when using the same code. The duds keep creeping around, after you turn off the throttle.

Ok so I’m going to try with the one batch of MG996Rs that do seem to work, but we’ll need to make a new plan. Possibly, rigging an actual rotary encoder up, on the motor shaft. But for now, back to MG996Rs and the drawing board. We left the old robotable in Switzerland last year. So I made a new one with the remaining MG996Rs.

So I was able to get them fairly well calibrated by setting the min and max pulse, as that seems to be how servos work. They have a centre PWM pulse width, around 1500µs, where they are still, and then as you decrease the pulse width, they go one way, and as you increase it, they go the other way.

Unfortunately after much fuss, these continuous rotation servos turned out to be duds. At least for the way I was trying to use them.

Set the throttle to 1, sleep for a second. Set the throttle to zero, sleep for a second. Set the throttle to -1, sleep for a second. …And you have a new position. Unfortunately the positional control is unusable. I guess these are for wheels or an application where position is not so important.

Categories
Hardware hardware_ Locomotion sim2real

Finding where we left off

Months have passed. It is 29 December 2020 now, and Miranda and I are at Bitwäscherei, in Zurich, during the remoteC3 (Chaos Computer Club)

We have a few things to get back to:

  • Locomotion
    • Sim2Real
  • Vision
    • Object detection / segmentation

We plan to work on a chicken cam, but are waiting for the raspberry pi camera to arrive. So first, locomotion.

Miranda would like to add more leg segments, but process-wise, we need the software and hardware working together with basics, before adding more parts.

Sentient Table prototype

I’ve tested the GPIO, and it’s working, after adjusting the minimum pulse value for the MG996R servo.

So, where did we leave off, before KonS MFRU / ICAF ?

Locomotion:

There was a walking table in simulation, and progress was being saved so that we could reload the state, in Ray and Tune.

I remember the best walker had a ‘reward’ of 902, so I searched for 902

grep -R 'episode_reward_mean\": 902'

And found these files:

 409982 Aug 1 07:52 events.out.tfevents.1596233543.chrx
334 Jul 31 22:12 params.json
304 Jul 31 22:12 params.pkl
132621 Aug 1 07:52 progress.csv
1542332 Aug 1 07:52 result.json

and there are checkpoint directories, with binary files.

So what are these files? How do I extract actions?

Well it looks like this info keeps track of Ray/Tune progress. If we want logs, we seem to need to make them ourselves. The original minitaur code used google protobuf to log state. So I set the parameter to log to a directory.

log_path="/media/chrx/0FEC49A4317DA4DA/logs"

So now when I run it again, it makes a file in the format below:


message RobotableEpisode {
// The state-action pair at each step of the log.
repeated RobotableStateAction state_action = 1;
}

message RobotableMotorState {
// The current angle of the motor.
double angle = 1;
// The current velocity of the motor.
double velocity = 2;
// The current torque exerted at this motor.
double torque = 3;
// The action directed to this motor. The action is the desired motor angle.
double action = 4;
}

message RobotableStateAction {
// Whether the state/action information is valid. It is always true if the
// proto is from simulation. It might be false when communication error
// happens on robotable hardware.
bool info_valid = 6;
// The time stamp of this step. It is computed since the reset of the
// environment.
google.protobuf.Timestamp time = 1;
// The position of the base of the minitaur.
robotics.messages.Vector3d base_position = 2;
// The orientation of the base of the minitaur. It is represented as (roll,
// pitch, yaw).
robotics.messages.Vector3d base_orientation = 3;
// The angular velocity of the base of the minitaur. It is the time derivative
// of (roll, pitch, yaw).
robotics.messages.Vector3d base_angular_vel = 4;
// The motor states (angle, velocity, torque, action) of eight motors.
repeated RobotableMotorState motor_states = 5;
}

I’m pretty much only interested in that last line,

repeated RobotableMotorState motor_states = 5;

So that’s the task, to decode the protobuf objects.

import os
import inspect

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

import argparse
from gym_robotable.envs import logging

if __name__ == "__main__":

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/logs/robotable_log_2020-12-29-191602')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode = logging.restore_episode(args.log_file)
    print(dir (episode))
    print("episode=",episode)
    fields = episode.ListFields()

    for field in fields:
        print(field)

This prints out some json-like info. On the right path.

time {
seconds: 5
}
base_position {
x: 0.000978083148860855
y: 1.7430418253385236
z: -0.0007063670972221042
}
base_orientation {
x: -0.026604138524100478
y: 0.00973575985636693
z: -0.08143286338936992
}
base_angular_vel {
x: 0.172553297157456
y: -0.011541306494121786
z: -0.010542314686643973
}
motor_states {
angle: -0.12088901254000686
velocity: -0.868766524998517
torque: 3.3721667267908284
action: 3.6539504528045654
}
motor_states {
angle: 0.04232669165311699
velocity: 1.5488756496627718
torque: 3.4934419908437704
action: 1.4116498231887817
}
motor_states {
angle: 0.8409251448232009
velocity: -1.617737108768752
torque: -3.3539541961507124
action: -3.7024881839752197
}
motor_states {
angle: 0.13926660037454777
velocity: -0.9575437158301312
torque: 3.563701581854714
action: 1.104300618171692
}
info_valid: true
])

We will have to experiment to work out how to translate this data to something usable. The servos are controlled with throttle, from -1 to 1.

Technically I should probably rewrite the simulation to output this “throttle” value. But let’s work with what we have, for now.

My first attempt will be to extract and normalize the torque values to get a sequence of actions… nope.

Ok so plan B. Some visualisation of the values should help. I have it outputting the JSON now.

episode_proto = logging.restore_episode(args.log_file)

jsonObj = MessageToJson(episode_proto)

print(jsonObj)

I decided to use StreamLit, which is integrated with various plotting libraries. After looking at the different plotting options, Plotly seems the most advanced.

Ok so in JSON,

{
  "time": "1970-01-01T00:00:05Z",
  "basePosition": {
    "x": 0.000978083148860855,
    "y": 1.7430418253385236,
    "z": -0.0007063670972221042
  },
  "baseOrientation": {
    "x": -0.026604138524100478,
    "y": 0.00973575985636693,
    "z": -0.08143286338936992
  },
  "baseAngularVel": {
    "x": 0.172553297157456,
    "y": -0.011541306494121786,
    "z": -0.010542314686643973
  },
  "motorStates": [
    {
      "angle": -0.12088901254000686,
      "velocity": -0.868766524998517,
      "torque": 3.3721667267908284,
      "action": 3.6539504528045654
    },
    {
      "angle": 0.04232669165311699,
      "velocity": 1.5488756496627718,
      "torque": 3.4934419908437704,
      "action": 1.4116498231887817
    },
    {
      "angle": 0.8409251448232009,
      "velocity": -1.617737108768752,
      "torque": -3.3539541961507124,
      "action": -3.7024881839752197
    },
    {
      "angle": 0.13926660037454777,
      "velocity": -0.9575437158301312,
      "torque": 3.563701581854714,
      "action": 1.104300618171692
    }
  ],
  "infoValid": true
}

Plotly uses Pandas dataframes, which are tabular data. 2 dimensions. So I need to transform this into something usable.

Something like time on the x-axis

and angle / velocity / torque / action on the y axis.

Ok so how to do this…?

Well, I’ve almost got it, but I mostly had to give up on StreamLit’s native line_chart for now. Plotly has line chart code that can handle multiple variables. So I’m getting sidetracked by this bug:

When I import plotly’s library,

import plotly.graph_objects as go

“No module named ‘plotly.graph_objects’; ‘plotly’ is not a package”

https://stackoverflow.com/questions/57105747/modulenotfounderror-no-module-named-plotly-graph-objects

import plotly.graph_objects as go ? no…

from plotly import graph_objs as go ? no…

from plotly import graph_objects as go ? no…

hmm.

pip3 install plotly==4.14.1 ?

Requirement already satisfied: plotly==4.14.1 in /usr/local/lib/python3.6/dist-packages (4.14.1)

no… why are the docs wrong then?

Ah ha.

“This is the well known name shadowing trap.” – stackoverflow

I named my file plotly.py – that is the issue.

So, ok run it again… (streamlit run plot.py) and open localhost:8501…

Now,

Running as root without --no-sandbox is not supported. See https://crbug.com/638180

Ah ha. I went back to StreamLit notation and it worked.

#fig.show()
st.plotly_chart(fig)

Ok excellent, so here is my first round of code:

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse
from gym_robotable.envs import logging


currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)


if __name__ == "__main__":


    st.title('Analyticz')


    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/logs/robotable_log_2020-12-29-191602')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)


    times = []
    angles = [[]]*4           # < bugs!
    velocities = [[]]*4
    torques = [[]]*4
    actions = [[]]*4

    for step in range(len(episode_proto.state_action)):

       step_log = episode_proto.state_action[step]
       times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))

       for i in range(4):
           angles[i].append(step_log.motor_states[i].angle)
           velocities[i].append(step_log.motor_states[i].velocity)
           torques[i].append(step_log.motor_states[i].torque)
           actions[i].append(step_log.motor_states[i].action)
 
    print(angles)
    print(times)
    print(len(angles))
    print(len(velocities))
    print(len(torques))
    print(len(actions))
    print(len(times))

    # Create traces
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=times, y=angles[0],
	            mode='lines',
	            name='Angles'))
    fig.add_trace(go.Scatter(x=times, y=velocities[0],
	            mode='lines+markers',
	            name='Velocities'))
    fig.add_trace(go.Scatter(x=times, y=torques[0],
	            mode='markers', 
                    name='Torques'))
    fig.add_trace(go.Scatter(x=times, y=actions[0],
	            mode='markers', 
                    name='Actions'))

    st.plotly_chart(fig)

And it’s plotting data for one leg.

If this is just 5 seconds of simulation, then velocities looks like it might be the closest match. You can imagine it going up a bit, back a bit, then a big step forward.

So, one idea is to do symbolic regression, to approximate the trigonometry equations for quadrupedal walking, (or just google them), and generalise to a walking algorithm, to use for locomotion. I could use genetic programming, like at university (https://gplearn.readthedocs.io/en/stable/examples.html#symbolic-regressor). But that’s overkill and probably won’t work. Gotta smooth the graph incrementally. Normalize it.

Let’s see what happens next, visually, after 5 seconds of data, and then view the same, for other legs.

Ok there is 30 seconds of walking.

The tools I wrote for the walker, are run with ‘python3 play_tune.py –replay 1’. It looks for the best checkpoint and replays it from there.

But now I seem to be getting the same graph for different legs. What? We’re going to have to investigate.

Ok turns out [[]]*4 is the wrong way to initialise arrays in python. It makes all sublists the same. Here’s the correct way:

velocities = [[] for i in range(4)]

Now I have 4 different legs.

The graph is very spiky, so I’ve added a rolling window average, and normalised it between -1 and 1 since that’s what the servo throttle allows.

I am thinking that maybe because the range between min and max for the 4 legs are:

3.1648572819886085
1.7581604444983845
5.4736002843351805
1.986915632875287

The rear legs aren’t moving as much, so maybe it doesn’t make sense to normalize them all to [-1, 1] all on the same scale. Like maybe the back right leg that moves so much should be normalized to [-1, 1] and then all the other legs are scaled down proportionally. Anyway, let’s see. Good enough for now.
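If I do come back to it, something like this is what I mean by proportional scaling (a sketch; legs would be the four velocity traces):

import numpy as np

def scale_legs_proportionally(legs):
    # centre each leg's trace, then scale everything by the single biggest swing,
    # so the most active leg spans roughly [-1, 1] and the others stay proportionally smaller
    centred = [np.asarray(leg) - np.mean(leg) for leg in legs]
    biggest_range = max(np.ptp(c) for c in centred)
    return [2 * c / biggest_range for c in centred]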

In the code, the motors order is:

front right, front left, back right, back left.

Ok so to save the outputs…

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse
from gym_robotable.envs import logging
import plotly.express as px

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

def normalize_negative_one(img):
    normalized_input = (img - np.amin(img)) / (np.amax(img) - np.amin(img))
    return 2*normalized_input - 1

if __name__ == "__main__":
    st.title('Analyticz')

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/walkinglogs/robotable_log_2021-01-17-231240')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)

    times = []
    velocities = [[] for i in range(4)]

    for step in range(len(episode_proto.state_action)):
        step_log = episode_proto.state_action[step]
        times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))
        for i in range(4):
            velocities[i].append(step_log.motor_states[i].velocity)

    # truncate because a bunch of trailing zeros
    velocities[0] = velocities[0][0:3000]
    velocities[1] = velocities[1][0:3000]
    velocities[2] = velocities[2][0:3000]
    velocities[3] = velocities[3][0:3000]
    times = times[0:3000]

    # get moving averages
    window_size_0 = 40
    numbers_series_0 = pd.Series(velocities[0])
    windows_0 = numbers_series_0.rolling(window_size_0)
    moving_averages_0 = windows_0.mean()
    moving_averages_list_0 = moving_averages_0.tolist()
    without_nans_0 = moving_averages_list_0[window_size_0 - 1:]

    window_size_1 = 40
    numbers_series_1 = pd.Series(velocities[1])
    windows_1 = numbers_series_1.rolling(window_size_1)
    moving_averages_1 = windows_1.mean()
    moving_averages_list_1 = moving_averages_1.tolist()
    without_nans_1 = moving_averages_list_1[window_size_1 - 1:]

    window_size_2 = 40
    numbers_series_2 = pd.Series(velocities[2])
    windows_2 = numbers_series_2.rolling(window_size_2)
    moving_averages_2 = windows_2.mean()
    moving_averages_list_2 = moving_averages_2.tolist()
    without_nans_2 = moving_averages_list_2[window_size_2 - 1:]

    window_size_3 = 40
    numbers_series_3 = pd.Series(velocities[3])
    windows_3 = numbers_series_3.rolling(window_size_3)
    moving_averages_3 = windows_3.mean()
    moving_averages_list_3 = moving_averages_3.tolist()
    without_nans_3 = moving_averages_list_3[window_size_3 - 1:]

    # normalize between -1 and 1
    avg_0 = np.asarray(without_nans_0)
    avg_1 = np.asarray(without_nans_1)
    avg_2 = np.asarray(without_nans_2)
    avg_3 = np.asarray(without_nans_3)
    avg_0 = normalize_negative_one(avg_0)
    avg_1 = normalize_negative_one(avg_1)
    avg_2 = normalize_negative_one(avg_2)
    avg_3 = normalize_negative_one(avg_3)

    np.save('velocity_front_right', avg_0)
    np.save('velocity_front_left', avg_1)
    np.save('velocity_back_right', avg_2)
    np.save('velocity_back_left', avg_3)
    np.save('times', times)

    # Create traces
    fig0 = go.Figure()
    fig0.add_trace(go.Scatter(x=times, y=velocities[0],
                mode='lines',
                name='Velocities 0'))
    fig0.add_trace(go.Scatter(x=times, y=avg_0.tolist(),
                mode='lines',
                name='Norm Moving Average 0'))
    st.plotly_chart(fig0)

    fig1 = go.Figure()
    fig1.add_trace(go.Scatter(x=times, y=velocities[1],
                mode='lines',
                name='Velocities 1'))
    fig1.add_trace(go.Scatter(x=times, y=avg_1.tolist(),
                mode='lines',
                name='Norm Moving Average 1'))
    st.plotly_chart(fig1)

    fig2 = go.Figure()
    fig2.add_trace(go.Scatter(x=times, y=velocities[2],
                mode='lines',
                name='Velocities 2'))
    fig2.add_trace(go.Scatter(x=times, y=avg_2.tolist(),
                mode='lines',
                name='Norm Moving Average 2'))
    st.plotly_chart(fig2)

    fig3 = go.Figure()
    fig3.add_trace(go.Scatter(x=times, y=velocities[3],
                mode='lines',
                name='Velocities 3'))
    fig3.add_trace(go.Scatter(x=times, y=avg_3.tolist(),
                mode='lines',
                name='Norm Moving Average 3'))
    st.plotly_chart(fig3)

Then I’m loading those npy files and iterating through them to drive the motors.

import time
import numpy as np
from board import SCL, SDA
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)
pca = PCA9685(i2c, reference_clock_speed=25630710)
pca.frequency = 50

servo0 = servo.ContinuousServo(pca.channels[0], min_pulse=685, max_pulse=2280)
servo1 = servo.ContinuousServo(pca.channels[1], min_pulse=810, max_pulse=2095)
servo2 = servo.ContinuousServo(pca.channels[2], min_pulse=700, max_pulse=2140)
servo3 = servo.ContinuousServo(pca.channels[3], min_pulse=705, max_pulse=2105)

velocity_front_right = np.load('velocity_front_right.npy')
velocity_front_left = np.load('velocity_front_left.npy')
velocity_back_right = np.load('velocity_back_right.npy')
velocity_back_left = np.load('velocity_back_left.npy')
times = np.load('times.npy')

# reverse left motors
velocity_front_left = -velocity_front_left
velocity_back_left = -velocity_back_left

print(velocity_front_right.size)
print(velocity_front_left.size)
print(velocity_back_right.size)
print(velocity_back_left.size)
print(times.size)

for t in times:
    print(t)

for a, b, c, d in np.nditer([velocity_front_right, velocity_front_left, velocity_back_right, velocity_back_left]):
    servo0.throttle = a/4
    servo1.throttle = b/4
    servo2.throttle = c/4
    servo3.throttle = d/4
    print(a, b, c, d)

servo0.throttle = 0
servo1.throttle = 0
servo2.throttle = 0
servo3.throttle = 0

pca.deinit()

Honestly it doesn’t look terrible, but these MG996R continuous rotation servos are officially garbage.

While we wait for new servos to arrive, I’m testing on SG90s. I’ve renormalised to around 90 degrees:

def normalize_0_180(img):
     normalized_0_180 = (180*(img - np.min(img))/np.ptp(img)).astype(int)   
     return normalized_0_180

That was still a bit too much variation, and even though we’ve got 180 degree servos now, it looked a bit off, so I halved the variance in the test.

velocity_front_right = ((velocity_front_right - 90)/2)+90
velocity_front_left = ((velocity_front_left - 90)/2)+90
velocity_back_right = ((velocity_back_right - 90)/2)+90
velocity_back_left = ((velocity_back_left - 90)/2)+90

# reverse left motors
velocity_front_left = 180-velocity_front_left
velocity_back_left = 180-velocity_back_left

lol. ok. Sim2Real.

So, it’s not terrible, but we’re not quite there either. Also I think it’s walking backwards.

I am not sure the math is correct.

I changed the smoothing code to use this function, which smooths based on the preceding values.

def anchor(signal, weight):
     buffer = []
     last = signal[0]
     for i in signal:
         smoothed_val = last * weight + (1 - weight) * i
         buffer.append(smoothed_val)
         last = smoothed_val
     return buffer
Derp.

OK i realised I was wrong all along. Two things.

First, I just didn’t see that the angles values were on that original graph. They were so small. Of course we’re supposed to use the angles, rather than the velocities, for 180 degree servos.

Second problem was, I was normalizing from min to max of the graph. Of course it should be -PI/2 to PI/2, since the simulator works with radians, obviously. Well anyway, hindsight is 20/20. Now we have a fairly accurate sim2real. I use the anchor code above twice, to get a really smooth line.

Here’s the final code.

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse
from gym_robotable.envs import logging
import plotly.express as px

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

def anchor(signal, weight):
    buffer = []
    last = signal[0]
    for i in signal:
        smoothed_val = last * weight + (1 - weight) * i
        buffer.append(smoothed_val)
        last = smoothed_val
    return buffer

# assume radians
def normalize_0_180(img):
    normalized_0_180 = np.array(img)*57.2958 + 90
    return normalized_0_180

if __name__ == "__main__":
    st.title('Analyticz')

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/walkinglogs/robotable_log_2021-01-17-231240')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)

    times = []
    angles = [[] for i in range(4)]

    for step in range(len(episode_proto.state_action)):
        step_log = episode_proto.state_action[step]
        times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))
        for i in range(4):
            print(step)
            print(step_log.motor_states[i].angle)
            angles[i].append(step_log.motor_states[i].angle)

    # truncate because a bunch of trailing zeros
    angles[0] = angles[0][0:3000]
    angles[1] = angles[1][0:3000]
    angles[2] = angles[2][0:3000]
    angles[3] = angles[3][0:3000]

    avg_0 = normalize_0_180(angles[0])
    avg_1 = normalize_0_180(angles[1])
    avg_2 = normalize_0_180(angles[2])
    avg_3 = normalize_0_180(angles[3])

    avg_0 = anchor(avg_0, 0.8)
    avg_1 = anchor(avg_1, 0.8)
    avg_2 = anchor(avg_2, 0.8)
    avg_3 = anchor(avg_3, 0.8)

    avg_0 = anchor(avg_0, 0.8)
    avg_1 = anchor(avg_1, 0.8)
    avg_2 = anchor(avg_2, 0.8)
    avg_3 = anchor(avg_3, 0.8)

    avg_0 = anchor(avg_0, 0.8)
    avg_1 = anchor(avg_1, 0.8)
    avg_2 = anchor(avg_2, 0.8)
    avg_3 = anchor(avg_3, 0.8)

    np.save('angle_front_right_180', avg_0)
    np.save('angle_front_left_180', avg_1)
    np.save('angle_back_right_180', avg_2)
    np.save('angle_back_left_180', avg_3)

    # Create traces
    fig0 = go.Figure()
    fig0.add_trace(go.Scatter(x=times, y=angles[0],
                mode='lines',
                name='Angles 0'))
    fig0.add_trace(go.Scatter(x=times, y=avg_0,
                mode='lines',
                name='Norm Moving Average 0'))
    st.plotly_chart(fig0)

    fig1 = go.Figure()
    fig1.add_trace(go.Scatter(x=times, y=angles[1],
                mode='lines',
                name='Angles 1'))
    fig1.add_trace(go.Scatter(x=times, y=avg_1,
                mode='lines',
                name='Norm Moving Average 1'))
    st.plotly_chart(fig1)

    fig2 = go.Figure()
    fig2.add_trace(go.Scatter(x=times, y=angles[2],
                mode='lines',
                name='Angles 2'))
    fig2.add_trace(go.Scatter(x=times, y=avg_2,
                mode='lines',
                name='Norm Moving Average 2'))
    st.plotly_chart(fig2)

    fig3 = go.Figure()
    fig3.add_trace(go.Scatter(x=times, y=angles[3],
                mode='lines',
                name='Angles 3'))
    fig3.add_trace(go.Scatter(x=times, y=avg_3,
                mode='lines',
                name='Norm Moving Average 3'))
    st.plotly_chart(fig3)

OK.

So there’s a milestone that took way too long. We’ve got Sim 2 Real working, ostensibly.

After some fortuitous googling, I found the Spot Micro, or, Spot Mini Mini project. The Spot Micro guys still have a big focus on inverse kinematics, which I’m trying to avoid for as long as I can.

They’ve done a very similar locomotion project using pyBullet, and I was able to find a useful paper, in the inspiration section, alerting me to kMPs.

Kinematic Motion Primitives. It’s a similar idea to what I did above.

Instead, what these guys did was to take a single wave of their leg data, and repeat that, and compare that to a standardized phase. (More or less). Makes sense. Looks a bit complicated to work out the phase of the wave in my case.
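Still, the repeat-one-wave part seems simple enough; I imagine it looks something like this (a sketch, reusing scipy's find_peaks to cut one period out of a leg trace):

import numpy as np
from scipy.signal import find_peaks

def repeat_one_period(signal, n_repeats=5, min_distance=80):
    # cut the slice between the first two detected peaks and tile it as a crude primitive
    signal = np.asarray(signal)
    peaks, _ = find_peaks(signal, distance=min_distance)
    one_period = signal[peaks[0]:peaks[1]]
    return np.tile(one_period, n_repeats)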

I’ll make a new topic, and try to extract kMPs from the data, for the next round of locomotion sim2real. I will probably also train the robot for longer, to try evolve a gait that isn’t so silly.

Categories
doin a heckin inspire Hardware hardware_ Locomotion robots

Hardware Inspiration

MIT’s Mini Cheetah breakdown. O-Drive, nice motors, etc…

James Bruton’s Mini Dog, with servos and arduino

James Bruton’s OpenDog with badass DC motors

Categories
dev Locomotion The Sentient Table

Spinning Up

OpenAI Spinning Up https://spinningup.openai.com/en/latest/

So I’ve got it working, OpenAI’s PPO. https://github.com/openai/spinningup/issues/142 – Needs a workaround to run your own envs.

But I can’t work out how to increase the exploration “factor”.

It’s some sort of application of a Gaussian distribution of noise, I think, which is the simple idea.

It looks like clip_ratio is maybe what I need.

Hmm, but PPO doesn’t want it.

parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=64)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--cpu', type=int, default=4)
parser.add_argument('--steps', type=int, default=4000)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='ppo')


def ppo(env_fn, actor_critic=core.mlp_actor_critic, ac_kwargs=dict(), seed=0,
        steps_per_epoch=4000, epochs=50, gamma=0.99, 


clip_ratio=0.2, 


pi_lr=3e-4,
        vf_lr=1e-3, train_pi_iters=80, train_v_iters=80, lam=0.97, max_ep_len=1000,
        target_kl=0.01, logger_kwargs=dict(), save_freq=10):

https://github.com/openai/spinningup/issues/12

Ok, so I tried the SAC algo too, and the issue I have now is:

(, AttributeError("'list' object has no attribute 'reshape'",), )

So the thing is the dimensionality

“FetchReach environment has Dict observation space (because it packages not only arm position, but also the target location into the observation), and spinning up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance,

env = FlattenDictWrapper(env, ['observation', 'desired_goal'])

Spinning Up implementations currently only support envs with Box observation spaces (where observations are real-valued vectors). These environments have Dict observation spaces, so each obs is a dict of (key, vector) pairs. If you want to test things out in these envs, I recommend doing it as a hacking project! 🙂 “

Categories
control envs Gripper Research Locomotion simulation

Meta-world

It is a training task set for 6 DOF (degree-of-freedom) robot arms.

https://github.com/rlworkgroup/metaworld

Here’s a 38MB gif explaining it: https://meta-world.github.io/figures/ml45-1080p.gif

“Meta-World is an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. We aim to provide task distributions that are sufficiently broad to evaluate meta-RL algorithms’ generalization ability to new behaviors.”