Categories
AI/ML Locomotion robots sim2real simulation

Imitation Learning

This is the real ticket. Basically motion capture to speed up training. But when a robot can do this, we don’t need human workers anymore. (Except to provide examples of the actions to perform, and to build the first robot-building machine, or robot-building-building machines, etc.)

videos: https://sites.google.com/view/nips2017-one-shot-imitation/home

arxiv: https://arxiv.org/pdf/1703.07326.pdf

abstract: https://arxiv.org/abs/1703.07326

Learning Agile Robotic Locomotion Skills by Imitating Animals: https://xbpeng.github.io/projects/Robotic_Imitation/2020_Robotic_Imitation.pdf

Imitation is the ability to recognize and reproduce others’ actions – By extension, imitation learning is a means of learning and developing new skills from observing these skills performed by another agent. Imitation learning (IL) as applied to robots is a technique to reduce the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution, by either starting the search from the observed good solution (local optima), or conversely, by eliminating from the search space what is known as a bad solution. Imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a “natural” means of training a machine, meant to be accessible to lay people. – (https://link.springer.com/referenceworkentry/10.1007%2F978-1-4419-1428-6_758)
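
In its simplest form (behavioural cloning), that boils down to supervised learning on recorded state-action pairs: fit a policy to the demonstrations, then query it at runtime. A minimal sketch of that idea, assuming the demonstrations have already been dumped to numpy arrays (the file names and the small sklearn regressor are placeholders, not anything from the papers above):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical demonstration data: one row per timestep.
# states = what the robot observed, actions = what the demonstrator did.
states = np.load("demo_states.npy")    # shape (N, state_dim)
actions = np.load("demo_actions.npy")  # shape (N, action_dim)

# Behavioural cloning is plain supervised regression from state to action.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(states, actions)

# At runtime the robot feeds its current observation to the learned policy.
current_state = states[0:1]
print(policy.predict(current_state))

The one-shot imitation and animal-imitation papers above are much fancier than this (meta-learning across tasks, motion retargeting, RL fine-tuning), but the underlying idea of learning from demonstrated trajectories is the same.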

OpenAI’s “Robots that Learn”: https://openai.com/blog/robots-that-learn/

“We’ve created a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.”

Categories
Locomotion robots

Soft Tensegrity Robots

Soft Tensegrity Robots, jiggling around:

https://www.youtube.com/watch?v=SuLQDhrk9tQ

“Neat” – Youtube comment

Categories
AI/ML evolution robots

resibots

Another PhD collab thing, funded by a European Research Council grant from 2015-2020: https://www.resibots.eu/videos.html. Nice. They’re the ones who developed MAP-Elites: https://arxiv.org/abs/1504.04909

They had a paper published in Nature, for their bots that fix themselves: https://members.loria.fr/JBMouret/nature_press.html

MAP-Elites is interesting. It categorises behaviours and keeps the best local optimum for each combination of some user-chosen variables. Haven’t read the paper yet. It is windy.

“It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. “
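
As I understand it from the abstract: keep one elite per cell of a low-dimensional behaviour grid, mutate random elites, and only overwrite a cell when the new solution performs better there. A minimal sketch of that loop, with made-up placeholder fitness and behaviour-descriptor functions (not the ones from the paper):

import random

GRID = 10           # cells per behaviour dimension
ITERATIONS = 10000

def random_solution():
    return [random.uniform(-1, 1) for _ in range(8)]

def mutate(x):
    return [v + random.gauss(0, 0.1) for v in x]

def fitness(x):
    # placeholder objective
    return -sum(v * v for v in x)

def behaviour(x):
    # placeholder 2D behaviour descriptor, mapped into [0, 1)
    return (abs(x[0]) % 1.0, abs(x[1]) % 1.0)

archive = {}        # behaviour cell -> (fitness, solution)

for _ in range(ITERATIONS):
    if archive and random.random() < 0.9:
        # vary an existing elite
        candidate = mutate(random.choice(list(archive.values()))[1])
    else:
        candidate = random_solution()

    b = behaviour(candidate)
    cell = (int(b[0] * GRID), int(b[1] * GRID))
    f = fitness(candidate)

    # keep the candidate only if its cell is empty or it beats the current elite
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, candidate)

print(len(archive), "cells filled")

The result is not a single best solution but a whole map of good-and-different ones, which is what makes the damage-recovery trick in the Nature paper possible.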

Categories
control robots sim2real simulation

Sim2Real links

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping:

https://arxiv.org/pdf/1805.07831.pdf

Optimizing Simulations with Noise-Tolerant Structured Exploration: https://arxiv.org/pdf/1805.07831.pdf

Categories
dev robots Vision

ROS Camera Topic

What is a ROS topic? http://wiki.ros.org/Topics
ROS can publish the webcam stream to a “topic”, and any part of the robot can subscribe to it, by name, if it is interested in that data. ROS is almost like a program where everything is a global variable.
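
A minimal Python node that subscribes to the camera topic by name might look like this (a sketch using standard rospy boilerplate; /image_raw matches what the uvc_camera node further down publishes):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # msg is a sensor_msgs/Image; every node subscribed to the topic
    # name gets each frame the camera node publishes.
    rospy.loginfo("frame %dx%d, encoding %s", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("camera_listener")
    rospy.Subscriber("/image_raw", Image, on_image)
    rospy.spin()  # hand control to ROS; the callback fires as frames arrive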

https://answers.ros.org/question/218228/ros-example-program-doesnt-work-with-the-laptop-webcam/

I made this file for the laptop webcam, but then didn’t end up using it.

<launch>
  <group ns="camera">
    <node pkg="libuvc_camera" type="camera_node" name="mycam">
      <!-- Parameters used to find the camera -->
      <param name="vendor" value="0x2232"/>
      <param name="product" value="0x1082"/>
      <param name="serial" value=""/>
      <!-- If the above parameters aren't unique, choose the first match: -->
      <param name="index" value="0"/>

      <!-- Image size and type -->
      <param name="width" value="640"/>
      <param name="height" value="480"/>
      <!-- choose whichever uncompressed format the camera supports: -->
      <param name="video_mode" value="uncompressed"/> <!-- or yuyv/nv12/mjpeg -->
      <param name="frame_rate" value="15"/>

      <param name="timestamp_method" value="start"/> <!-- start of frame -->
      <param name="camera_info_url" value="file:///tmp/cam.yaml"/>

      <param name="auto_exposure" value="3"/> <!-- use aperture_priority auto exposure -->
      <param name="auto_white_balance" value="false"/>
    </node>
  </group>
</launch>

roscore

apt install ros-melodic-uvc-camera

rospack listnames

rosrun uvc_camera uvc_camera_node _device:=/dev/video0

rostopic list

(should show /image_raw now…)

rosrun dso_ros dso_live calib=/opt/catkin_ws/src/dso_ros/camera.txt image:=/image_raw/

Categories
dev robots simulation

Catkin

ROS build tool. These are the patterns of use:

In order to help automate the merged build process, Catkin was distributed with a command-line tool called catkin_make. This command automated the underlying CMake workflow while setting some variables according to standard conventions. These defaults would result in the execution of the following commands:

$ mkdir build
$ cd build
$ cmake ../src -DCATKIN_DEVEL_SPACE=../devel -DCMAKE_INSTALL_PREFIX=../install
$ make -j<number of cores> -l<number of cores> [optional target, e.g. install]

To get DSO (Direct Sparse Odometry) working, I followed these instructions: https://github.com/JakobEngel/dso_ros/issues/32

I made /opt/catkin_ws

git clone --single-branch --branch cmake https://github.com/NikolausDemmel/dso.git
git clone --single-branch --branch catkin https://github.com/NikolausDemmel/dso_ros.git

catkin init

catkin config -DCMAKE_BUILD_TYPE=Release

catkin build

Categories
3D Research robots simulation

Gazebo Cassie Sim

Checked out https://github.com/agilityrobotics/cassie-gazebo-sim

There were some extra steps, as per usual. On Ubuntu 18.04 (https://automaticaddison.com/how-to-launch-gazebo-in-ubuntu/), we need to edit ~/.ignition/fuel/config.yaml and change the server URL to url: https://api.ignitionrobotics.org

and I needed to set some envs

GAZEBO_PLUGIN_PATH=/opt/cassie-gazebo-sim/plugin/build
GAZEBO_MODEL_PATH=/.gazebo/models

/.gazebo/models/cassie# gazebo cassie.world

It loads a derpy Cassie robot.

Then /opt/cassie-gazebo-sim/plugin/build# ./cassiectrl

runs the sim, which doesn’t do anything.

But the code at https://github.com/agilityrobotics/cassie-gazebo-sim/tree/master/plugin/include is interesting: remote control in C over UDP. UDP is a good idea for remote control. It sends raw structs. Very straightforward. Nice. ZMQ is probably nicer though.
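
The same pattern is easy to mimic in Python: pack a fixed-layout struct and send it to the simulator's UDP port. The field layout and port below are made-up placeholders to show the idea, not the actual cassie-gazebo-sim structs (those are defined in the C headers in that include directory):

import socket
import struct

# Hypothetical command packet: 4 float32 joint targets + a uint32 sequence number.
PACKET_FORMAT = "<4fI"            # little-endian: 4 floats, 1 unsigned int
SIM_ADDR = ("127.0.0.1", 25000)   # placeholder host/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

seq = 0
targets = [0.0, 0.1, -0.1, 0.0]
sock.sendto(struct.pack(PACKET_FORMAT, *targets, seq), SIM_ADDR)

# Receiving telemetry is the mirror image: recvfrom() a datagram and
# struct.unpack() it with the matching format string.

No connection setup, no framing, no serialisation library: that is the appeal, and also why ZMQ (or anything that handles reconnection and message boundaries for you) is probably nicer for anything bigger.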

Looks like it integrates with a fancy motion control company thing (https://www.elmomc.com/). Nice big UI. But yeah. The Cassie robot is much too complicated.

Categories
robots

Stanley: The Robot that won DARPA’s heart

http://robots.stanford.edu/papers/thrun.stanley05.pdf

the interesting part:

Categories
Hardware hardware_ robots

Robot prep: PWM control

I started up a Raspberry Pi with Raspbian installed on it (I used balena’s Etcher to flash an SD card): the Lite version, basically just a Debian OS for the RPi: https://www.raspberrypi.org/downloads/raspbian/

https://www.balena.io/etcher/

The RPi needs a proper keyboard, at least until you set up ssh and can access it remotely.

We’re interested in making a robot, and we’re using a Raspberry Pi, so we need to control servos. The RPi only has a couple of hardware PWM pins, so we need to use an I2C module, the 16-channel PCA9685 servo driver (https://learn.adafruit.com/16-channel-pwm-servo-driver), to control however many servos our robot needs, plus the software libs to run it: https://github.com/adafruit/Adafruit_Python_PCA9685
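
With the board wired up and that library installed, driving a servo is a few lines of Python. This is a minimal sketch based on the library's standard PCA9685/set_pwm API; the pulse tick values are rough guesses for a typical hobby servo and need tuning per servo:

import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()   # defaults to I2C address 0x40
pwm.set_pwm_freq(60)               # 60 Hz is a common hobby-servo update rate

# set_pwm(channel, on, off): the pulse is high for ticks [on, off) out of 4096
# per cycle. At 60 Hz, roughly 150-600 ticks spans a typical servo's travel
# (placeholder values; check your servo's datasheet).
SERVO_MIN = 150
SERVO_MAX = 600
CHANNEL = 0

while True:
    pwm.set_pwm(CHANNEL, 0, SERVO_MIN)
    time.sleep(1)
    pwm.set_pwm(CHANNEL, 0, SERVO_MAX)
    time.sleep(1)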

[image: adafruit_products_ID815servo_LRG.jpg]

and we need an external 5V PSU, to power the servos.

Configuring Your Pi for I2C:

sudo apt-get install python-smbus
sudo apt-get install i2c-tools

Need to connect the RPi to the servo driver. This was a picture taken when testing it on the RPi Zero W in 2019. The instructions for pinout connections: https://learn.adafruit.com/16-channel-pwm-servo-driver/pinouts

Or from Adafruit, for RPi:

[image: adafruit_products_raspi_pca9685_i2c_bb.jpg]

(or Arduino)

[image: adafruit_products_AllServos_bb-1024.jpg]

Here’s an rpi zero layout.

Ok but how do we power the servos? We can’t run them off the RPi’s 5V 2A supply. Oh duh, there’s that big DC socket connected to the PCA9685.

There is something to be said for running the robot on an Arduino. You get something robotic. You upload someone’s hexapod spider code, and it does a little dance. You can control it remotely. It can interact with sensors.

Arduinos are dirt cheap. So I bet we could have tiny neural networks running on Arduinos… shit, do we already have them? Let’s see… ok wow, there is like a whole thing: https://blog.arduino.cc/2019/10/15/get-started-with-machine-learning-on-arduino/ – ok, but 2K of RAM is not much. You would need to be a real demoscene junkie to use a 2KB NN, but yeah, you could do something crudely intelligent with it.
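
For scale, at 2 KB you are counting individual weights. A quick back-of-the-envelope, with arbitrary example layer sizes:

# Memory for a tiny fully-connected net, weights + biases only.
def param_count(layers):
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

layers = [6, 8, 4]       # e.g. 6 sensor inputs, 8 hidden units, 4 motor outputs
n = param_count(layers)  # (6*8 + 8) + (8*4 + 4) = 92 parameters
print(n, "params =", n * 4, "bytes as float32, or", n, "bytes as int8")

So a net that size is a few hundred bytes as float32; quantise to int8 and you have room to spare, but nothing remotely deep.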

Robots on ESP32s are definitely a thing. But ok no, Raspberry Pi for real robot. We need Linux for this.

Ok so I need to wire this up. But I also need a chassis for the robot.

Categories
AI/ML Locomotion robots sim2real simulation

Pre-training your dragon

The links here died, so I updated them to more generic links.

https://neurorobotics.net

https://hal.inria.fr/

Also

“Sim-to-Real Transfer with Neural-Augmented Robot Simulation” https://proceedings.mlr.press/v87/golemo18a.html