Categories
Locomotion

Robot “Forward Kinematics”

This doesn't seem necessary any more, because we have simulators. But it's interesting to see what people had to do back in the day:

Denavit-Hartenberg (DH) parameters 

https://blog.robotiq.com/how-to-calculate-a-robots-forward-kinematics-in-5-easy-steps
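As a refresher, each DH row (theta, d, a, alpha) defines one homogeneous transform, and forward kinematics is just the product of those transforms down the chain. A minimal numpy sketch of the classic convention (function names are mine):

import numpy as np

def dh_transform(theta, d, a, alpha):
    # homogeneous transform from one DH frame to the next (classic convention)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0,        sa,       ca,      d],
        [0,         0,        0,      1],
    ])

def forward_kinematics(dh_rows):
    # chain the per-joint transforms: base frame -> end effector pose
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T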

Categories
AI/ML control Locomotion simulation The Sentient Table

ARS and PPO

https://pybullet.org/Bullet/phpBB3/viewtopic.php?t=12553

A couple of reinforcement learning algorithms. I implemented both for the walking table. ARS is great. We hear a lot about deep learning; this is shallow learning, and it does very well on simpler tasks. The policy is just a linear map from inputs to outputs, no hidden layers.

It's similar to the Evolution Strategies algorithm: try random perturbations of the policy, and slowly shift the model towards whichever perturbations get you closer to the goal.

ARS: https://arxiv.org/pdf/1803.07055.pdf

Good lecture slides http://eddiesagra.com/wp-content/uploads/2019/03/Introduction-to-Machine-Learning-v1.2-Mar-11-2019.pdf

ARS – Augmented Random Search

https://github.com/colinskow/move37/blob/master/ars/ars.py

https://towardsdatascience.com/introduction-to-augmented-random-search-d8d7b55309bd
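To make the update concrete, here's a minimal sketch of one ARS (V1) step on a linear policy. env_rollout is a hypothetical helper that runs one episode with the given weight matrix and returns the total reward:

import numpy as np

def ars_step(theta, env_rollout, n_dirs=8, step_size=0.02, noise=0.03):
    # sample random directions and evaluate the policy nudged both ways
    deltas = [np.random.randn(*theta.shape) for _ in range(n_dirs)]
    r_pos = np.array([env_rollout(theta + noise * d) for d in deltas])
    r_neg = np.array([env_rollout(theta - noise * d) for d in deltas])
    sigma_r = np.concatenate([r_pos, r_neg]).std() + 1e-8  # scale by reward std
    step = sum((rp - rn) * d for rp, rn, d in zip(r_pos, r_neg, deltas))
    return theta + step_size / (n_dirs * sigma_r) * step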

PPO – Proximal Policy Optimization

https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/gym/pybullet_envs/agents/ppo/algorithm.py
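The heart of PPO, for contrast, is the clipped surrogate objective from the paper. A minimal numpy sketch (names are mine, not from the pybullet implementation):

import numpy as np

def ppo_clip_loss(ratio, advantage, epsilon=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s) for each sampled action
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - epsilon, 1 + epsilon) * advantage
    return -np.minimum(unclipped, clipped).mean()  # negated for gradient descent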

Categories
Locomotion The Sentient Table

Robot prep: URDF

After much confusion, I made a table with legs that can move.

So I raised the table to the z=1 plane and put the origins of the joints in the right places. Then you set the legs/links with origin at -0.5: the joint sits at z=1, the leg is 1 unit long, and the -0.5 offset is needed because the origin is the centre of the box.

For visualisation, I changed the URDF filename in this example file in the bullet3 folders, and ran it in Python:

python3 biped2d_pybullet.py

This seemed better than RViz.

<?xml version="1.0"?>
<robot name="table">

  <material name="white">
    <color rgba="1 1 1 1"/>
  </material>

  <material name="black">
    <color rgba="0.2 0.2 0.2 1"/>
  </material>

  <material name="blue">
    <color rgba="0 0 1 1"/>
  </material>

  <link name="torso">
    <visual>
      <geometry>
        <box size="2 2 0.1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 1"/>
      <material name="white"/>
    </visual>
    <collision>
      <geometry>
        <box size="2 2 0.1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 1"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  <link name="r_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  <joint name="torso_to_rightleg" type="revolute">
    <parent link="torso"/>
    <child link="r_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57" upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="1 1 1"/>
  </joint>

  <link name="l_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  <joint name="torso_to_leftleg" type="revolute">
    <parent link="torso"/>
    <child link="l_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57" upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="-1 1 1"/>
  </joint>

  <link name="front_r_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  <joint name="torso_to_frontrightleg" type="revolute">
    <parent link="torso"/>
    <child link="front_r_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57" upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="1 -1 1"/>
  </joint>

  <link name="front_l_upperleg">
    <visual>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 1"/>
      </geometry>
      <origin rpy="0 0 0" xyz="0 0 -0.5"/>
      <contact_coefficients mu="0.08" />
    </collision>
  </link>

  <joint name="torso_to_frontleftleg" type="revolute">
    <parent link="torso"/>
    <child link="front_l_upperleg"/>
    <axis xyz="1 0 0"/>
    <limit effort="10.0" lower="-1.57" upper="1.57" velocity="1000.0"/>
    <origin rpy="0 0 0" xyz="-1 -1 1"/>
  </joint>

</robot>
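For what it's worth, a minimal sketch of loading and stepping a URDF like this in pybullet (assuming the XML above is saved as table.urdf):

import time
import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # for plane.urdf
p.loadURDF("plane.urdf")
table = p.loadURDF("table.urdf")
p.setGravity(0, 0, -10)
for _ in range(10000):
    p.stepSimulation()
    time.sleep(1.0 / 240.0)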
Categories
Hardware Locomotion Vision

Robot prep 2: GPIO and Camera

So I've got the RPi camera images streaming to my laptop now, after installing OpenCV 4 and running the test code from https://github.com/jeffbass/imagezmq

Next, we need to try to move the servos with code.

https://learn.adafruit.com/16-channel-pwm-servo-driver/python-circuitpython

OK, so I installed these.

First, if there's no pip3:

sudo apt-get update

sudo apt-get install python3-pip

Then,

sudo pip3 install adafruit-circuitpython-pca9685

sudo pip3 install adafruit-circuitpython-servokit

Adafruit makes you copy-paste line by line from here…

https://github.com/adafruit/Adafruit_CircuitPython_Motor

OK, looking in the examples folder of that…

from board import SCL, SDA
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)

pca = PCA9685(i2c)
pca.frequency = 50  # standard 50 Hz servo signal

servo2 = servo.Servo(pca.channels[2])

# sweep up to 90 degrees and back down
for i in range(90):
    servo2.angle = i
for i in range(90):
    servo2.angle = 90 - i
pca.deinit()

I changed it to 90 degrees and got rid of the comments. The example suggests setting a min and max pulse width for the servo.

I ran it and the servo got angry with me and wouldn’t stop. I had to unplug everything because it was eating up the cables in its madness.

Ok so datasheet of MG996R: https://www.electronicoscaldas.com/datasheet/MG996R_Tower-Pro.pdf

It keeps going if I plug just the power back in; it seems to rotate continuously. So something is f***ed. Rebooting the RPi. It's supposed to have 180-degree rotation. I'll need to read up on servo GPIO forums.

I also tried the ‘fraction’ style code: https://github.com/adafruit/Adafruit_CircuitPython_Motor/blob/master/examples/motor_pca9685_servo_sweep.py

and it rotated and rotated.

So, I think it must be a continuous servo. Now that I look at the product https://mantech.co.za/ProductInfo.aspx?Item=15M8959 I see it was a continuous servo. Derp.

Ok so let’s see… continuous servo: https://github.com/adafruit/Adafruit_CircuitPython_Motor/blob/master/examples/motor_pca9685_continuous_servo.py

We need to set some limits; apparently these are the defaults:

servo7 = servo.ContinuousServo(pca.channels[7], min_pulse=750, max_pulse=2250)

and possibly set the reference clock speed to a calibrated value, obtained with the calibrate.py program:

pca = PCA9685(i2c, reference_clock_speed=25630710)

https://github.com/adafruit/Adafruit_CircuitPython_PCA9685/tree/master/examples

ok. cool.


At a later date, testing the MG996R servo,

I needed to initialise the min_pulse value at 550, or it continuously rotated.

servo7 = servo.ContinuousServo(pca.channels[1], min_pulse=550, max_pulse=2250)
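For the record, a continuous servo is commanded by throttle (speed and direction) rather than angle. A minimal sketch, assuming the default pulse range from earlier:

import time
import busio
from board import SCL, SDA
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)
pca = PCA9685(i2c)
pca.frequency = 50

servo7 = servo.ContinuousServo(pca.channels[7], min_pulse=750, max_pulse=2250)
servo7.throttle = 0.5   # half speed forward
time.sleep(1)
servo7.throttle = -0.5  # half speed reverse
time.sleep(1)
servo7.throttle = 0.0   # stop
pca.deinit()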

Categories
dev Hardware hardware_ Linux

RPi without keyboard and mouse

https://sendgrid.com/blog/complete-guide-set-raspberry-pi-without-keyboard-mouse/

https://github.com/motdotla/ansible-pi

The first thing is that you need an (empty) file called 'ssh' in the boot partition of the Raspbian SD card, to enable SSH:

https://www.raspberrypi.org/forums/viewtopic.php?t=144839

OK, so I found the IP address of the Pi:

root@chrx:~# nmap -sP 192.168.101.0/24

Starting Nmap 7.60 ( https://nmap.org ) at 2020-04-05 17:06 UTC
Nmap scan report for _gateway (192.168.101.1)
Host is up (0.0026s latency).
MAC Address: B8:69:F4:1B:D5:0F (Unknown)
Nmap scan report for 192.168.101.43
Host is up (0.042s latency).
MAC Address: 28:0D:FC:76:BB:3E (Sony Interactive Entertainment)
Nmap scan report for 192.168.101.100
Host is up (0.049s latency).
MAC Address: 18:F0:E4:E9:AF:E3 (Unknown)
Nmap scan report for 192.168.101.101
Host is up (0.015s latency).
MAC Address: DC:85:DE:22:AC:5D (AzureWave Technology)
Nmap scan report for 192.168.101.103
Host is up (-0.057s latency).
MAC Address: 74:C1:4F:31:47:61 (Unknown)
Nmap scan report for 192.168.101.105
Host is up (-0.097s latency).
MAC Address: B8:27:EB:03:24:B0 (Raspberry Pi Foundation)

Nmap scan report for 192.168.101.111
Host is up (-0.087s latency).
MAC Address: 00:24:D7:87:78:EC (Intel Corporate)
Nmap scan report for 192.168.101.121
Host is up (-0.068s latency).
MAC Address: AC:E0:10:C0:84:26 (Liteon Technology)
Nmap scan report for 192.168.101.130
Host is up (-0.097s latency).
MAC Address: 80:5E:C0:52:7A:27 (Yealink(xiamen) Network Technology)
Nmap scan report for 192.168.101.247
Host is up (0.15s latency).
MAC Address: DC:4F:22:FB:0B:27 (Unknown)
Nmap scan report for chrx (192.168.101.127)
Host is up.
Nmap done: 256 IP addresses (11 hosts up) scanned in 2.45 seconds

if nmap is not installed,

apt-get install nmap

Connect to whatever IP it is

ssh -vvvv pi@192.168.101.105

Are you sure you want to continue connecting (yes/no)? yes

Cool, and to set up wifi, let’s check out this ansible script https://github.com/motdotla/ansible-pi

$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible

ok 58MB install…

# ansible-playbook playbook.yml -i hosts --ask-pass --become -c paramiko

PLAY [Ansible Playbook for configuring brand new Raspberry Pi]

TASK [Gathering Facts]

TASK [pi : set_fact]
ok: [192.168.101.105]

TASK [pi : Configure WIFI] **
changed: [192.168.101.105]

TASK [pi : Update APT package cache]
[WARNING]: Updating cache and auto-installing missing dependency: python-apt
ok: [192.168.101.105]

TASK [pi : Upgrade APT to the lastest packages] *
changed: [192.168.101.105]

TASK [pi : Reboot] **
changed: [192.168.101.105]

TASK [pi : Wait for Raspberry PI to come back] **
ok: [192.168.101.105 -> localhost]

PLAY RECAP ****
192.168.101.105 : ok=7 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

And I'll unplug the ethernet and try to connect by ssh again.

Ah, but it's moved up to 192.168.101.106 now.

I ran nmap -sP 192.168.101.0/24 again, and this time the Pi showed up as 'Unknown', but ssh pi@192.168.101.106 worked.

(If you can connect to your router, eg. 192.168.0.1 for most D-Link routers, you can go to something like Status -> Wireless, to see connected devices too, and skip the nmap stuff.)

I log in, then to configure some stuff:

sudo raspi-config

Under the Interfacing Options section, enable the camera and I2C.

sudo apt-get install python-smbus
sudo apt-get install i2c-tools

OK, tested the camera with

raspistill -o out.jpg

Then copied it across to my laptop with

scp pi@192.168.101.106:/home/pi/out.jpg out.jpg

and then made it smaller (no point trying to upload the 4MB version):

convert out.jpg -resize 800x600 new.jpg

Cool. It looks like we also need to expand the filesystem:

sudo raspi-config again (Advanced Options, and the first option)


Upon configuring the latest pi, I needed to first use the ethernet cable,

and then once logged in, use

sudo rfkill unblock 0

to turn on the wifi. The SSID and wifi password could be configured in raspi-config.


At Bitwäsherei, the ethernet cable to the router trick didn’t work.

Instead, as per the resident Gandalf’s advice, the instructions here

https://raspberrypi.stackexchange.com/questions/10251/prepare-sd-card-for-wifi-on-headless-pi

worked for setting up wireless access on the sd card.

“Since May 2016, Raspbian has been able to copy wifi details from /boot/wpa_supplicant.conf into /etc/wpa_supplicant/wpa_supplicant.conf to automatically configure wireless network access”

The file contains

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=«your_ISO-3166-1_two-letter_country_code»

network={
    ssid="«your_SSID»"
    psk="«your_PSK»"
    key_mgmt=WPA-PSK
}

Save, and put sd card in RPi. Wireless working and can ssh in again!

2022 News flash:

Incredibly, some more issues.

New issue; the user guide isn't updated yet:

https://stackoverflow.com/questions/71804429/raspberry-pi-ssh-access-denied

In essence, the default pi user no longer exists, so you have to create it and set its password, either using the official Imager tool or by creating a userconf file in the boot partition of your microSD card, which should contain a single line of text: username:hashed-password

The default pi / raspberry pair:

pi:$6$/4.VdYgDm7RJ0qM1$FwXCeQgDKkqrOU3RIRuDSKpauAbBvP11msq9X58c8Que2l1Dwq3vdJMgiZlQSbEXGaY5esVHGBNbCxKLVNqZW1
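To generate that hash yourself (the line above is just pi with password raspberry), one option is Python's crypt module (an assumption: it's available up to Python 3.12, removed in 3.13):

import crypt

# paste the printed line into a file called userconf in the boot partition
print("pi:" + crypt.crypt("raspberry", crypt.mksalt(crypt.METHOD_SHA512)))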

Categories
Hardware hardware_ robots

Robot prep: PWM control

I started up a Raspberry Pi with Raspbian installed on it (I used balena's Etcher to flash an SD card) – the Lite version, basically just Debian OS for the RPi: https://www.raspberrypi.org/downloads/raspbian/

https://www.balena.io/etcher/

The RPi needs a proper keyboard, at least until you set up ssh and can access it remotely.

We're interested in making a robot, and we're using a Raspberry Pi, so we need to control servos. The RPi only has limited hardware PWM, so we need an I2C module https://learn.adafruit.com/16-channel-pwm-servo-driver to control however many servos our robot needs, plus the software libs to run it: https://github.com/adafruit/Adafruit_Python_PCA9685

[image: Adafruit 16-channel PWM/servo driver]

and we need an external 5V PSU, to power the servos.

Configuring Your Pi for I2C:

sudo apt-get install python-smbus
sudo apt-get install i2c-tools
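A quick sanity check of the wiring from Python, using the same Blinka busio API as the servo code; the PCA9685 should show up at its default address, 0x40:

import busio
from board import SCL, SDA

i2c = busio.I2C(SCL, SDA)
while not i2c.try_lock():
    pass  # wait until the bus is free
print([hex(addr) for addr in i2c.scan()])  # expect '0x40' in the list
i2c.unlock()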

Need to connect the RPi to the servo driver. This was a picture taken when testing it on the RPi Zero W in 2019. The instructions for pinout connections: https://learn.adafruit.com/16-channel-pwm-servo-driver/pinouts

Or from Adafruit, for RPi:

[image: Adafruit wiring diagram, RPi to PCA9685 over I2C]

(or Arduino)

[image: Adafruit wiring diagram, many servos on a PCA9685]

Here’s an rpi zero layout.

OK, but how do we power the servos? We can't run them off the RPi's 5V 2A. Oh duh, there's that big DC socket connected to the PCA9685.

There is something to be said for running the robot on an Arduino. You get something robotic. You upload someone’s hexapod spider code, and it does a little dance. You can control it remotely. It can interact with sensors.

Arduinos are dirt cheap, so I bet we could have tiny neural networks running on Arduinos… shit, do we already have them? Let's see… OK, wow, there is like a whole thing: https://blog.arduino.cc/2019/10/15/get-started-with-machine-learning-on-arduino/ But 2K of RAM is not much. You would need to be a real demoscene junkie to use a 2KB NN, but yeah, you could do something crudely intelligent with it.

Robots on ESP32s are definitely a thing. But ok no, Raspberry Pi for real robot. We need Linux for this.

Ok so I need to wire this up. But I also need a chassis for the robot.

Categories
dev evolution Locomotion simulation

Evolution (Part 1)

Ok next we’re going to be looking at https://github.com/bulletphysics/bullet3/tree/master/examples/Evolution

According to some blog bit: https://gitmemory.com/issue/bulletphysics/bullet3/2118/468943185 there might be an issue: “The problem with this demo is the following: It can not optimize since the robots are not properly restored. Their physical settings are not properly restored and thus their performance varies. I think it should only serve as a bullet example that does some simulation. If you can improve it to restore the physics correctly, that is fine, but this might be hard to do if you are new to bullet. It would be best to reimplement it in pybullet, because it features deterministic restoring of robots.”

So will have to keep that in mind.

So, starting in the NN3DWalkersTimeWarp code, we quickly run into ERP and CFM http://www.ode.org/ode-latest-userguide.html#sec_3_7_0 – these are the error reduction parameter and constraint force mixing, which help fix errors caused by joints that aren't set up perfectly.

The code also loads a number of Constraint Solvers:

btSequentialImpulseConstraintSolver.h
btNNCGConstraintSolver.h
btMultiBodyConstraintSolver.h
btMultiBodyDynamicsWorld.h
btDantzigSolver.h
btSolveProjectedGaussSeidel.h
btLemkeSolver.h
btMLCPSolver.h

Hmm. Not sure what they do, but they enforce the joint constraints in the physics we’ve set up.

The TimeWarp class mostly deals with configuration settings for running the simulation at different speeds, and with different solvers, etc.

The setupBasicParamInterface() sets up the UI sliders. Then,

m_guiHelper->setUpAxis(1); // Set Y axis as Up axis

createEmptyDynamicsWorld(); // create an empty dynamic world

m_guiHelper->autogenerateGraphicsObjects(m_dynamicsWorld);

The Dynamics world is configured with

btDefaultCollisionConfiguration in a btCollisionDispatcher

a default btDbvtBroadphase (a sweeping style collision detection)

and whichever solver is being used.

The Dynamics world is instantiated as a btDiscreteDynamicsWorld.

There's timeWarpSimulation, which we override, and stepSimulation, which runs in a loop called the 'canonical game loop'. It updates any variables changed by the UI parameters, then calls our code, then updates any time variables that need refreshing, and gives you a chance to update some graphics. At the end of the loop, it checks the time to decide whether to render the graphics or to run the loop again.

That’s sufficient for now, to move on with the CPP file code.

https://github.com/bulletphysics/bullet3/blob/master/examples/Evolution/NN3DWalkers.cpp

Another interesting project I saw now, using OGRE, which is a 3D rendering engine, is https://github.com/benelot/minemonics – it has some evolved life too.

[image: Jumper 1]

But we are evolving something that looks more like these creepy rainbow spider bombs, that ‘we made’ in MotorDemo.cpp.

class NNWalker
{
	btDynamicsWorld* m_ownerWorld;
	btCollisionShape* m_shapes[BODYPART_COUNT];
	btRigidBody* m_bodies[BODYPART_COUNT];
	btTransform m_bodyRelativeTransforms[BODYPART_COUNT];
	btTypedConstraint* m_joints[JOINT_COUNT];
	btHashMap<btHashPtr, int> m_bodyTouchSensorIndexMap;
	bool m_touchSensors[BODYPART_COUNT];
	btScalar m_sensoryMotorWeights[BODYPART_COUNT * JOINT_COUNT];

	bool m_inEvaluation;
	btScalar m_evaluationTime;
	bool m_reaped;
	btVector3 m_startPosition;
	int m_index;

These above are the variables for the walker, which is similar to the MotorDemo critter. The code has been upgraded here and there since MotorDemo.cpp. It's pretty great that people share their hard work so that we can just waltz on in and hack a quick robot together. This guy https://github.com/erwincoumans founded the Bullet project, and now works at Google Brain.

So these are some of the public methods for the WalkersExample

	// Evaluation

	void update(const btScalar timeSinceLastTick);

	void updateEvaluations(const btScalar timeSinceLastTick);

	void scheduleEvaluations();

	void drawMarkings();



	// Reaper

	void rateEvaluations();

	void reap();

	void sow();

	void crossover(NNWalker* mother, NNWalker* father, NNWalker* offspring);

	void mutate(NNWalker* mutant, btScalar mutationRate);

	NNWalker* getRandomElite();

	NNWalker* getRandomNonElite();

	NNWalker* getNextReaped();

	void printWalkerConfigs();

Let’s just go through these for now.

We want to evolve behaviours based on stimulus, and the NNWalkers even have m_touchSensors, which we'll have a look at later, to see what they let us do.

Interlude: http://news.mit.edu/2015/algorithm-helps-robots-handle-uncertainty-0602 says the following, and also has a video with sick drone rock music. This is a good distinction to remember: rather than pausing, evaluating, pausing, evaluating during execution, the robots accomplish the behaviours using 'online' observations with an offline-planned policy.

“There’s an offline planning phase where the agents can figure out a policy together that says, ‘If I take this set of actions, given that I’ve made these observations during online execution, and you take these other sets of actions, given that you’ve made these observations, then we can all agree that the whole set of actions that we take is pretty close to optimal,’” says Shayegan Omidshafiei, an MIT graduate student in aeronautics and astronautics and first author on the new paper. “There’s no point during the online phase where the agents stop and say, ‘This is my belief. This is your belief. Let’s come up with a consensus on the best overall belief and replan.’ Each one just does its own thing.”

So the parts to do with Evaluation are:

// Evaluation

void update(const btScalar timeSinceLastTick);

void updateEvaluations(const btScalar timeSinceLastTick);

void scheduleEvaluations();

void drawMarkings();

And here’s the code for all the evolution stuff:

bool fitnessComparator(const NNWalker* a, const NNWalker* b)
{
	return a->getFitness() > b->getFitness();  // sort walkers descending
}

So we are maximising fitness, and this is the comparator function.

The rateEvaluations function sorts by fitness and prints the square root of the best-performing individual's distance fitness. Then we iterate through the walkers, insert each result into the time series canvas, and advance the tick (next generation). Finally, we zero the evaluation times and counters.

void NN3DWalkersExample::rateEvaluations()
{
	m_walkersInPopulation.quickSort(fitnessComparator);  // Sort walkers by fitness
	b3Printf("Best performing walker: %f meters", btSqrt(m_walkersInPopulation[0]->getDistanceFitness()));

	for (int i = 0; i < NUM_WALKERS; i++)
	{
		m_timeSeriesCanvas->insertDataAtCurrentTime(btSqrt(m_walkersInPopulation[i]->getDistanceFitness()), 0, true);
	}
	m_timeSeriesCanvas->nextTick();

	for (int i = 0; i < NUM_WALKERS; i++)
	{
		m_walkersInPopulation[i]->setEvaluationTime(0);
	}
	m_nextReaped = 0;
}

The reap function uses REAP_QTY (0.3) to iterate backwards through the worst 30%, setting them to reaped.

void NN3DWalkersExample::reap()
{
	int reaped = 0;
	for (int i = NUM_WALKERS - 1; i >= (NUM_WALKERS - 1) * (1 - REAP_QTY); i--)
	{  // reap a certain percentage
		m_walkersInPopulation[i]->setReaped(true);
		reaped++;
		b3Printf("%i Walker(s) reaped.", reaped);
	}
}

getRandomElite and getRandomNonElite use SOW_ELITE_QTY (set to 0.2). They return a random walker from the first 20% and from the last 80%, respectively.

NNWalker* NN3DWalkersExample::getRandomElite()
{
	return m_walkersInPopulation[((NUM_WALKERS - 1) * SOW_ELITE_QTY) * (rand() / RAND_MAX)];
}

NNWalker* NN3DWalkersExample::getRandomNonElite()
{
	return m_walkersInPopulation[(NUM_WALKERS - 1) * SOW_ELITE_QTY + (NUM_WALKERS - 1) * (1.0f - SOW_ELITE_QTY) * (rand() / RAND_MAX)];
}

The getNextReaped() function checks if we’ve reaped the REAP_QTY percentage yet. If not, increment counter and return the next reaped individual.

NNWalker* NN3DWalkersExample::getNextReaped()
{
	if ((NUM_WALKERS - 1) - m_nextReaped >= (NUM_WALKERS - 1) * (1 - REAP_QTY))
	{
		m_nextReaped++;
	}

	if (m_walkersInPopulation[(NUM_WALKERS - 1) - m_nextReaped + 1]->isReaped())
	{
		return m_walkersInPopulation[(NUM_WALKERS - 1) - m_nextReaped + 1];
	}
	else
	{
		return NULL;  // we asked for too many
	}
}

Next is sow()…

SOW_CROSSOVER_QTY is 0.2,
SOW_ELITE_PARTNER is 0.8.
SOW_MUTATION_QTY is 0.5.
MUTATION_RATE is 0.5.
SOW_ELITE_QTY is 0.2.
So we iterate over 20% of the population, increase the sow count, and get a random elite mother and, 80% of the time, a random elite father (20% of the time a non-elite one). Grab a reaped individual, and make it into a crossover of the parents.

Then mutations are performed on the part of the population from 20% to 70%, passing a scalar into mutate().

Finally, REAP_QTY – SOW_CROSSOVER_QTY is 10%, who are resown as individuals with randomized motor weights.

Ok…

void NN3DWalkersExample::sow()
{
	int sow = 0;
	for (int i = 0; i < NUM_WALKERS * (SOW_CROSSOVER_QTY); i++)
	{  // create number of new crossover creatures
		sow++;
		b3Printf("%i Walker(s) sown.", sow);
		NNWalker* mother = getRandomElite();                                                                  // Get elite partner (mother)
		NNWalker* father = (SOW_ELITE_PARTNER < rand() / RAND_MAX) ? getRandomElite() : getRandomNonElite();  // Get elite or random partner (father)
		NNWalker* offspring = getNextReaped();
		crossover(mother, father, offspring);
	}

	for (int i = NUM_WALKERS * SOW_ELITE_QTY; i < NUM_WALKERS * (SOW_ELITE_QTY + SOW_MUTATION_QTY); i++)
	{  // create mutants
		mutate(m_walkersInPopulation[i], btScalar(MUTATION_RATE / (NUM_WALKERS * SOW_MUTATION_QTY) * (i - NUM_WALKERS * SOW_ELITE_QTY)));
	}

	for (int i = 0; i < (NUM_WALKERS - 1) * (REAP_QTY - SOW_CROSSOVER_QTY); i++)
	{
		sow++;
		b3Printf("%i Walker(s) sown.", sow);
		NNWalker* reaped = getNextReaped();
		reaped->setReaped(false);
		reaped->randomizeSensoryMotorWeights();
	}
}


Crossover goes through the sensory-motor weights and, for each one, randomly copies it from either the mother or the father (50/50).

void NN3DWalkersExample::crossover(NNWalker* mother, NNWalker* father, NNWalker* child)
{
	for (int i = 0; i < BODYPART_COUNT * JOINT_COUNT; i++)
	{
		btScalar random = ((double)rand() / (RAND_MAX));

		if (random >= 0.5f)
		{
			child->getSensoryMotorWeights()[i] = mother->getSensoryMotorWeights()[i];
		}
		else
		{
			child->getSensoryMotorWeights()[i] = father->getSensoryMotorWeights()[i];
		}
	}
}

mutate() takes a mutation rate and re-randomizes motor weights. Note that as written, a weight is replaced when the uniform draw is >= mutationRate, i.e. with probability 1 – mutationRate.


void NN3DWalkersExample::mutate(NNWalker* mutant, btScalar mutationRate)
{
	for (int i = 0; i < BODYPART_COUNT * JOINT_COUNT; i++)
	{
		btScalar random = ((double)rand() / (RAND_MAX));

		if (random >= mutationRate)
		{
			mutant->getSensoryMotorWeights()[i] = ((double)rand() / (RAND_MAX)) * 2.0f - 1.0f;
		}
	}
}

Ok good enough.
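To tie the reaper/sower pieces together, here is a compact Python sketch of one generation. Names and structure are mine, following the percentages described above rather than the exact C++ quirks (e.g. the comparison direction in mutate()):

import random

NUM_WALKERS = 50
REAP_QTY = 0.3           # reap the worst 30%
SOW_CROSSOVER_QTY = 0.2  # refill 20% of the population via crossover
SOW_ELITE_QTY = 0.2      # elites are the top 20%
SOW_ELITE_PARTNER = 0.8  # father is an elite 80% of the time
SOW_MUTATION_QTY = 0.5   # mutate the middle 50%
MUTATION_RATE = 0.5

def next_generation(population, n_weights):
    # population: list of [fitness, weights] pairs; weights is a flat list
    population.sort(key=lambda w: w[0], reverse=True)
    elites = population[:int(NUM_WALKERS * SOW_ELITE_QTY)]
    non_elites = population[int(NUM_WALKERS * SOW_ELITE_QTY):]
    reaped = population[int(NUM_WALKERS * (1 - REAP_QTY)):]

    # crossover: overwrite some reaped individuals with offspring of elites
    n_cross = int(NUM_WALKERS * SOW_CROSSOVER_QTY)
    for _, child in reaped[:n_cross]:
        mother = random.choice(elites)[1]
        father = random.choice(elites if random.random() < SOW_ELITE_PARTNER else non_elites)[1]
        for i in range(n_weights):
            child[i] = mother[i] if random.random() < 0.5 else father[i]

    # mutation: the rate ramps up through the middle of the pack
    start = int(NUM_WALKERS * SOW_ELITE_QTY)
    end = int(NUM_WALKERS * (SOW_ELITE_QTY + SOW_MUTATION_QTY))
    for rank in range(start, end):
        rate = MUTATION_RATE * (rank - start) / (end - start)
        weights = population[rank][1]
        for i in range(n_weights):
            if random.random() < rate:
                weights[i] = random.uniform(-1.0, 1.0)

    # any remaining reaped slots are resown with fresh random weights
    for _, weights in reaped[n_cross:]:
        for i in range(n_weights):
            weights[i] = random.uniform(-1.0, 1.0)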

Let’s move on to the evaluation code…

Categories
bio Locomotion

Magnetic BioRobots

https://newatlas.com/robotics/tiny-magnetized-robots-warehouse-biotech/

Categories
AI/ML Locomotion robots sim2real simulation

Pre-training your dragon

The links here died, so I've updated them to more generic links:

https://neurorobotics.net

https://hal.inria.fr/

Also

“Sim-to-Real Transfer with Neural-Augmented Robot Simulation” https://proceedings.mlr.press/v87/golemo18a.html

Categories
doin a heckin inspire Locomotion

Dinosaur Robot ‘Trudy’