Categories
dev The Sentient Table

Gym Env

Starting from pybullet/gym/pybullet_envs/bullet/minitaur_gym_env.py

and going by these instructions https://github.com/openai/gym/blob/master/docs/creating-environments.md

First had to remember the protocol buffer stuff https://developers.google.com/protocol-buffers/docs/reference/python-generated

protoc --proto_path=src --python_out=build/gen src/foo.proto src/bar/baz.proto
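As a reminder of what the generated module gives you: a minimal sketch, assuming src/foo.proto declares a hypothetical message Foo with a single string field called name:

# hypothetical use of the foo_pb2 module protoc generates from src/foo.proto,
# assuming it declares: message Foo { string name = 1; }
import foo_pb2

msg = foo_pb2.Foo()
msg.name = 'robotable'
data = msg.SerializeToString()  # bytes, ready to send or store

parsed = foo_pb2.Foo()
parsed.ParseFromString(data)
print(parsed.name)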

https://stackoverflow.com/questions/45068568/how-to-create-a-new-gym-environment-in-openai

/opt/gym-robotable# pip install -e .
Obtaining file:///opt/gym-robotable
Requirement already satisfied: gym>=0.2.3 in /usr/local/lib/python3.6/dist-packages (from gym-robotable==0.0.1) (0.17.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.3.0)
Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.18.1)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.11.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.2.3->gym-robotable==0.0.1) (0.18.2)
Installing collected packages: gym-robotable
Running setup.py develop for gym-robotable
Successfully installed gym-robotable

When I try to run the gym env, I get:

raise error.UnregisteredEnv('No registered env with id: {}'.format(id))

Ok it was some typo in the __init__.py
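For the record, the registration is what ties the string id to the env class, in the package’s top-level __init__.py. A minimal sketch (the id and entry point here are assumptions based on my layout, following the gym docs linked above):

# gym_robotable/__init__.py -- minimal sketch; 'RobotableEnv-v0' and the
# entry point path are assumptions based on my layout, not fixed names
from gym.envs.registration import register

register(
    id='RobotableEnv-v0',
    entry_point='gym_robotable.envs:RobotableEnv',
)

Then gym.make('RobotableEnv-v0') finds it; the UnregisteredEnv above was just the id passed to gym.make not matching the registered one.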

Fixed that up in the imports and __inits__ and now got this issue: https://answers.ros.org/question/326226/importerror-dynamic-module-does-not-define-module-export-function-pyinit__tf2/

Ok, I think that was some file importing TensorFlow 2. I don’t want to use TensorFlow; it’s like 400MB.

URDF file '/root/.local/lib/python3.6/site-packages/pybullet_data/robot.urdf' not found

so ok, let’s put it there.

/opt/gym-robotable/gym_robotable/envs# cp robot.urdf /root/.local/lib/python3.6/site-packages/pybullet_data/

AttributeError: 'Robotable' object has no attribute 'Reset'

Ok so I’ve got robot.urdf looking like a table, and loading. But there are various differences between the minitaur gym env and my own minitaur, the robotable. Gotta compare, etc. Ok, it wants reset(), not Reset(…).

just a warning:

b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
torso

Happy with defaults of 1 for the mass and inertia settings, for now.

Ok so apparently I’m just working my way through the bugs one by one. That’s probably as good a method as any:

File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
observation.extend(self.robotable.GetMotorAngles().tolist())
AttributeError: 'Robotable' object has no attribute 'GetMotorAngles'

K did a :%s/Get/get/g in vi. NEXT.

File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 301, in reset
return self._get_observation()
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
observation.extend(self.robotable.getMotorAngles().tolist())
File "/opt/gym-robotable/gym_robotable/envs/robotable.py", line 88, in getMotorAngles
jointState = p.getJointState(self.quadruped, self.motorIdList[i])
IndexError: list index out of range

Ok, self.nMotors = 4.

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 527, in _get_observation_upper_bound
    num_motors = self.robotable.num_motors
AttributeError: 'Robotable' object has no attribute 'num_motors'

Ok, let’s change nMotors to num_motors. NEXT

File "ars.py", line 77, in ExploreWorker
state, reward, done, _ = env.step(action)
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
action = self._transform_action_to_motor_command(action)
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'

Categories
AI/ML simulation

Hindsight Experience Replay (HER)

Hindsight Experience Replay

(Instead of getting a crap result and saying, wow what a crap result, you say, ah well if that’s what I had wanted to do, then that would have worked.)

It’s a clever idea, OpenAI.

One ability humans have, unlike the current generation of model-free RL algorithms, is to learn almost as much from achieving an undesired outcome as from the desired one.

https://arxiv.org/abs/1707.01495

“So why not just pretend that we wanted to achieve this goal to begin with, instead of the one that we set out to achieve originally?”

The HER algorithm achieves this by using what is called “sparse and binary” rewards, which only tell the agent whether it has failed or succeeded. In contrast, the “dense”, “shaped” rewards used in conventional reinforcement learning tip agents off as to whether they are getting “close”, “closer”, “much closer”, or “very close” to hitting their goal. Dense rewards can speed up the learning process, but they can be difficult to design and implement for real-world applications, and sparse rewards on their own often don’t contain much of a learning signal for the agent to learn from. HER gets the best of both by relabeling failed attempts as successes at the goals that were actually reached.
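Mechanically, the trick is just relabeling stored transitions before they go into the replay buffer. A minimal sketch of the “final” goal-substitution strategy (the transition layout and reward function here are my assumptions, not OpenAI’s code):

import numpy as np

def sparse_reward(achieved, desired, tol=0.05):
    # binary: 0 if we hit the goal, -1 otherwise (goals are numpy arrays)
    return 0.0 if np.linalg.norm(achieved - desired) < tol else -1.0

def her_relabel(episode):
    # episode: list of dicts with keys 'achieved_goal', 'desired_goal',
    # 'reward', etc. -- this layout is an assumption for illustration
    final_goal = episode[-1]['achieved_goal']  # pretend this was the plan all along
    relabeled = []
    for t in episode:
        r = sparse_reward(t['achieved_goal'], final_goal)
        relabeled.append({**t, 'desired_goal': final_goal, 'reward': r})
    return relabeled  # store alongside the original transitions in the buffer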

https://thenewstack.io/openai-algorithm-allows-ai-to-learn-from-its-mistakes/

G-HER

Hindsight experience replay (HER) based on universal value functions shows promising results in such multi-goal settings by substituting achieved goals for the original goal, frequently giving the agent rewards. However, the achieved goals are limited to the current policy level and lack guidance for learning. We propose a novel guided goal-generation model for multi-goal RL named G-HER. Our method uses a conditional generative recurrent neural network (RNN) to explicitly model the relationship between policy level and goals, enabling the generation of various goals conditioned on different policy levels.

https://sites.google.com/view/gher-algorithm

Categories
Locomotion

Robot “Forward Kinematics”

This doesn’t seem necessary any more, because we have simulators. But it’s interesting to see what people had to do back in the day:

Denavit-Hartenberg (DH) parameters 

https://blog.robotiq.com/how-to-calculate-a-robots-forward-kinematics-in-5-easy-steps
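The recipe boils down to one homogeneous transform per joint, multiplied together down the chain. A sketch, with a made-up DH table for a planar two-link arm (the parameters are invented for illustration):

import numpy as np

def dh_transform(theta, d, a, alpha):
    # one standard Denavit-Hartenberg homogeneous transform
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# made-up DH table for a planar two-link arm: (theta, d, a, alpha) per joint
rows = [(np.pi / 4, 0.0, 1.0, 0.0),
        (np.pi / 6, 0.0, 0.8, 0.0)]

T = np.eye(4)
for row in rows:
    T = T @ dh_transform(*row)

print(T[:3, 3])  # end-effector position in the base frame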

Categories
GANs

ALAE – Adversarial Latent Autoencoders

https://github.com/podgorskiy/ALAE https://arxiv.org/pdf/2004.04467.pdf

Similar to GANs, but a bit cleaner. Image training encodes a latent representation of the ur-Celebrity, and new images are generated from it using another image as an input.

Latent: (of a quality or state) existing but not yet developed or manifest; hidden or concealed.

Autoencoder: a neural network trained to reconstruct its own input, learning a compressed latent representation in the middle.
Categories
dev robots Vision

ROS Camera Topic

What is a ros topic? http://wiki.ros.org/Topics
ROS can publish the webcam stream to a “topic”, and any part of the robot can subscribe to it, by name, if it is interested in that data. ROS is almost like a program where everything is a global variable.
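Subscribing takes only a few lines. A minimal rospy sketch, assuming the camera node publishes sensor_msgs/Image on /image_raw (which is what shows up in rostopic list later in this post):

#!/usr/bin/env python
# minimal rospy subscriber sketch for the webcam topic
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # the callback fires once per published frame
    rospy.loginfo("frame %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node('camera_listener')
rospy.Subscriber('/image_raw', Image, on_image)
rospy.spin()  # hand control to ROS and process callbacks until shutdown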

https://answers.ros.org/question/218228/ros-example-program-doesnt-work-with-the-laptop-webcam/

I made this file for the laptop webcam, but then didn’t end up using it.

<launch>
  <group ns="camera">
    <node pkg="libuvc_camera" type="camera_node" name="mycam">
      <!-- Parameters used to find the camera -->
      <param name="vendor" value="0x2232"/>
      <param name="product" value="0x1082"/>
      <param name="serial" value=""/>
      <!-- If the above parameters aren't unique, choose the first match: -->
      <param name="index" value="0"/>

      <!-- Image size and type -->
      <param name="width" value="640"/>
      <param name="height" value="480"/>
      <!-- choose whichever uncompressed format the camera supports: -->
      <param name="video_mode" value="uncompressed"/> <!-- or yuyv/nv12/mjpeg -->
      <param name="frame_rate" value="15"/>

      <param name="timestamp_method" value="start"/> <!-- start of frame -->
      <param name="camera_info_url" value="file:///tmp/cam.yaml"/>

      <param name="auto_exposure" value="3"/> <!-- use aperture_priority auto exposure -->
      <param name="auto_white_balance" value="false"/>
    </node>
  </group>
</launch>

roscore

apt install ros-melodic-uvc-camera

rospack list-names

rosrun uvc_camera uvc_camera_node _device:=/dev/video0

rostopic list

(should show /image_raw now…)

rosrun dso_ros dso_live calib=/opt/catkin_ws/src/dso_ros/camera.txt image:=/image_raw/

Categories
dev robots simulation

Catkin

ROS build tool. These are the patterns of use:

In order to help automate the merged build process, Catkin was distributed with a command-line tool called catkin_make. This command automated the above CMake work flow while setting some variables according to standard conventions. These defaults would result in the execution of the following commands:

$ mkdir build
$ cd build
$ cmake ../src -DCATKIN_DEVEL_SPACE=../devel -DCMAKE_INSTALL_PREFIX=../install
$ make -j<number of cores> -l<number of cores> [optional target, e.g. install]

To get DSO (Direct Sparse Odometry) working.

I followed these instructions: https://github.com/JakobEngel/dso_ros/issues/32

I made /opt/catkin_ws

git clone --single-branch --branch cmake https://github.com/NikolausDemmel/dso.git
git clone --single-branch --branch catkin https://github.com/NikolausDemmel/dso_ros.git

catkin init

catkin config -DCMAKE_BUILD_TYPE=Release

catkin build

Categories
AI/ML control Locomotion simulation The Sentient Table

ARS and PPO

https://pybullet.org/Bullet/phpBB3/viewtopic.php?t=12553

A couple of advanced data science algorithms. Implemented both for the walking table. ARS is great. We hear a lot about deep learning; this one is shallow learning, and does very well on simpler tasks. Just inputs and outputs, no hidden layers: a plain linear policy.

It’s similar to the Evolution Strategies algorithm: generally trying some random stuff out, and slowly changing the model based on what gets you closer to the goal.
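A stripped-down sketch of one update of basic random search (the “v1” version from the paper, without the reward normalization and top-performing-direction filtering that the augmented versions add):

import numpy as np

def ars_step(policy, rollout, n_dirs=8, noise=0.03, lr=0.02):
    # policy: weight matrix, action = policy @ observation (no hidden layers)
    # rollout: function taking a weight matrix, returning an episode's total reward
    deltas = [np.random.randn(*policy.shape) for _ in range(n_dirs)]
    r_pos = [rollout(policy + noise * d) for d in deltas]  # perturb up
    r_neg = [rollout(policy - noise * d) for d in deltas]  # perturb down
    # step along each random direction, weighted by how much it helped
    step = sum((rp - rn) * d for rp, rn, d in zip(r_pos, r_neg, deltas))
    return policy + (lr / n_dirs) * step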

ARS: https://arxiv.org/pdf/1803.07055.pdf

Good lecture slides http://eddiesagra.com/wp-content/uploads/2019/03/Introduction-to-Machine-Learning-v1.2-Mar-11-2019.pdf

ARS – Augmented Random Search

https://github.com/colinskow/move37/blob/master/ars/ars.py

https://towardsdatascience.com/introduction-to-augmented-random-search-d8d7b55309bd

PPO – Proximal Policy Optimization

https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/gym/pybullet_envs/agents/ppo/algorithm.py

Categories
3D Research robots simulation

Gazebo Cassie Sim

Checked out https://github.com/agilityrobotics/cassie-gazebo-sim

There were some extra steps, as per usual (Ubuntu 18.04). Per https://automaticaddison.com/how-to-launch-gazebo-in-ubuntu/, we need to change the URL in ~/.ignition/fuel/config.yaml to url: https://api.ignitionrobotics.org

and I needed to set some env vars (export them, or Gazebo won’t see them):

export GAZEBO_PLUGIN_PATH=/opt/cassie-gazebo-sim/plugin/build
export GAZEBO_MODEL_PATH=/.gazebo/models

/.gazebo/models/cassie# gazebo cassie.world

It loads a derpy Cassie robot.

Then /opt/cassie-gazebo-sim/plugin/build# ./cassiectrl

runs the sim, which doesn’t do anything.

But the https://github.com/agilityrobotics/cassie-gazebo-sim/tree/master/plugin/include code is interesting: remote control code in C over UDP. UDP is a good idea for remote control. It sends structs. Very straightforward. Nice. ZMQ is probably nicer, though.
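The same idea in Python, for reference; the packet layout here is invented, not Cassie’s actual format:

import socket
import struct

# invented packet layout: four motor torques as little-endian floats
PACKET = struct.Struct('<4f')

# "robot" side: bind a port and wait for commands
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(('127.0.0.1', 25000))

# controller side: pack a struct and fire it off -- no handshake needed
ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ctrl.sendto(PACKET.pack(0.1, -0.2, 0.3, 0.0), ('127.0.0.1', 25000))

# UDP drops and reorders under load; fine for a steady stream of
# control commands where stale packets can simply be ignored
data, addr = robot.recvfrom(PACKET.size)
print(PACKET.unpack(data))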

Looks like it integrates with a fancy https://www.elmomc.com/ motion control company thing. Nice big UI. But yeah. Cassie robot is much too complicated.

Categories
Locomotion The Sentient Table

Robot prep: URDF

After much confusion, I made a table with legs that can move.

So I raised the table to the z=1 plane and put the origins of the joints in the right place. THEN you set the legs/links with origin at -0.5: the joint is at z=1, the leg is 1 long, and it’s -0.5 because that’s the centre of the box.

I did visualisation by changing the URDF filename in the biped2d_pybullet.py example in the bullet3 folders, and running it in Python:

python3 biped2d_pybullet.py

This seemed better than RViz.
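A quick way to sanity-check the URDF below without any ROS machinery; a sketch assuming the file is saved as robot.urdf next to the script:

import time
import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # for plane.urdf
p.loadURDF("plane.urdf")
table = p.loadURDF("robot.urdf")  # the URDF below, saved locally

# print joint indices and names, handy for setJointMotorControl2 later
for i in range(p.getNumJoints(table)):
    print(i, p.getJointInfo(table, i)[1].decode())

p.setGravity(0, 0, -9.8)
for _ in range(2400):  # ten seconds at the default 240Hz timestep
    p.stepSimulation()
    time.sleep(1.0 / 240.0)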

<?xml version="1.0"?>
<robot name="table">

  <material name="white">
    <color rgba="1 1 1 1"/>
  </material>

  <material name="black">
    <color rgba="0.2 0.2 0.2 1"/>
  </material>

  <material name="blue">
    <color rgba="0 0 1 1"/>
  </material>

  <link name="torso">
    <visual>
      <geometry>
        <box size="2 2 0.1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 1"/>
      <material name="white"/>
    </visual>
    <collision>
      <geometry>
        <box size="2 2 0.1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 1"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>
 
  <link name="r_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  
  <joint name="torso_to_rightleg" type="revolute">
    <parent link="torso"/>
    <child link="r_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="0.0" lower="-1.57." upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="1 1 1"/>
  </joint>

  <link name="l_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  
  <joint name="torso_to_leftleg" type="revolute">
    <parent link="torso"/>
    <child link="l_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57." upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="-1 1 1"/>
  </joint>

  <link name="front_r_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>


  <joint name="torso_to_frontrightleg" type="revolute">
    <parent link="torso"/>
    <child link="front_r_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="0.0" lower="-1.57." upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="1 -1 1"/>
  </joint>

  <link name="front_l_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>


  <joint name="torso_to_frontleftleg" type="revolute">
    <parent link="torso"/>
    <child link="front_l_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57." upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="-1 -1 1"/>
  </joint>

</robot>
Categories
AI/ML arxiv Vision

Instance Segmentation

https://arxiv.org/pdf/2003.10152.pdf – SOLOv2

https://arxiv.org/pdf/2003.06148.pdf – PointINS: Point-based Instance Segmentation

Cool site, paperswithcode:

https://paperswithcode.com/task/instance-segmentation?page=4