Categories
dev

Installing MultiNEAT

import MultiNEAT as NEAT

I need to install MultiNEAT.

I tried to install it using these instructions:
https://github.com/peter-ch/MultiNEAT

It said success despite this:

byte-compiling build/bdist.linux-x86_64/egg/MultiNEAT/__init__.py to __init__.pyc
File "build/bdist.linux-x86_64/egg/MultiNEAT/__init__.py", line 45
t = {**traits, 'w':w}
^
SyntaxError: invalid syntax


Installed /usr/local/lib/python2.7/dist-packages/multineat-0.5-py2.7-linux-x86_64.egg
Processing dependencies for multineat==0.5
Finished processing dependencies for multineat==0.5
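
That {**traits, 'w': w} on line 45 is Python 3.5+ dict unpacking, which Python 2.7 can't parse, hence the SyntaxError. For comparison (toy values, just to show the two spellings):

# Python 3.5+ dict unpacking, the syntax the failing line uses
traits = {'a': 1}
w = 0.5
t = {**traits, 'w': w}

# the closest Python 2.7 equivalent
t2 = dict(traits)
t2['w'] = w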


then tried
pip install .
https://stackoverflow.com/questions/1471994/what-is-setup-py
It gets these errors:

/usr/bin/ld: cannot find -lboost_python36
/usr/bin/ld: cannot find -lboost_numpy36

The boost libs are missing. Apparently there's a conda install, but I don't entirely understand whether conda and pip will play nicely together. conda is also a big install, and the Chromebook is not big on space. Ok, Miniconda is smaller.

https://anaconda.org/conda-forge/multineat

To install this package with conda run one of the following:
conda install -c conda-forge multineat
conda install -c conda-forge/label/cf201901 multineat
conda install -c conda-forge/label/cf202003 multineat

apt-get install libboost-all-dev
...already installed

https://stackoverflow.com/questions/36881958/c-program-cannot-find-boost possible solution.

ah…

/usr/local/lib/python2.7/dist-packages

root@chrx:/opt/MultiNEAT# ls -l /usr/local/lib/python2.7/dist-packages
total 7312
-rw-r--r-- 1 root staff 39 Apr 8 21:50 easy-install.pth
-rw-r--r-- 1 root staff 7482106 Apr 8 21:50 multineat-0.5-py2.7-linux-x86_64.egg

so it’s compiling with python2.

I’m gonna go with conda instead.

conda install MultiNEAT

https://anaconda.org/conda-forge/multineat

https://docs.conda.io/en/latest/miniconda.html#linux-installers

you need to chmod 755 the install file, and don't forget to add it to $PATH.

echo 'export PATH=$PATH:/opt/miniconda3/bin' >> ~/.bashrc

then close the window and open a new bash window, to refresh envs. (envs are environment variables, if someone is actually reading this)

conda install -c conda-forge multineat

(base) root@chrx:~# conda install -c conda-forge multineat
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: | 
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed                                                                          

UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:

Specifications:

  - multineat -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0']

Your python: python=3.7

If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.

ok…

root@chrx:~# conda info

     active environment : base
    active env location : /opt/miniconda3
            shell level : 1
       user config file : /root/.condarc
 populated config files : /root/.condarc
          conda version : 4.8.2
    conda-build version : not installed
         python version : 3.7.6.final.0
       virtual packages : __glibc=2.27
       base environment : /opt/miniconda3  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /opt/miniconda3/pkgs
                          /root/.conda/pkgs
       envs directories : /opt/miniconda3/envs
                          /root/.conda/envs
               platform : linux-64
             user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.16.18-galliumos galliumos/3.1 glibc/2.27
                UID:GID : 0:0
             netrc file : None
           offline mode : False


ok, let's go back to compiling boost from source, cause conda doesn't like 3.7. Seems like I should be able to do a --force or something, cause wtf. So here's the theory of C++ linking (fml):

https://stackoverflow.com/questions/16710047/usr-bin-ld-cannot-find-lnameofthelibrary

here’s more relevant

https://askubuntu.com/questions/944035/installing-libboost-python-dev-for-python3-without-installing-python2-7

https://stackoverflow.com/questions/12578499/how-to-install-boost-on-ubuntu

Here’s some Dockerfile on it:

RUN cd /usr/src && \
 wget --no-verbose https://dl.bintray.com/boostorg/release/1.65.1/source/boost_1_65_1.tar.gz && \
 tar xzf boost_1_65_1.tar.gz && \
 cd boost_1_65_1 && \
 ln -s /usr/local/include/python3.6m /usr/local/include/python3.6 && \
 ./bootstrap.sh --with-python=$(which python3) && \
 ./b2 install && \
 rm /usr/local/include/python3.6 && \
 ldconfig && \
 cd / && rm -rf /usr/src/*

I'm doing the steps one by one; ./b2 made it compile. It's taking a good few minutes now.

The Boost C++ Libraries were successfully built!

The following directory should be added to compiler include paths:

    /opt/boost_1_65_1

The following directory should be added to linker library paths:

    /opt/boost_1_65_1/stage/lib

ok let’s try again…

/usr/bin/ld: cannot find -lboost_python36
/usr/bin/ld: cannot find -lboost_numpy36

ldconfig

nope ok where is the library?

# find | grep boost_python

(there are all the boost libs I built under /opt, and also these:)

./usr/lib/x86_64-linux-gnu/libboost_python-py36.so
./usr/lib/x86_64-linux-gnu/libboost_python3-py36.so.1.65.1
./usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.65.1
./usr/lib/x86_64-linux-gnu/libboost_python3-py36.so
./usr/lib/x86_64-linux-gnu/libboost_python-py27.a
./usr/lib/x86_64-linux-gnu/libboost_python-py36.a
./usr/lib/x86_64-linux-gnu/libboost_python.so
./usr/lib/x86_64-linux-gnu/libboost_python3.a
./usr/lib/x86_64-linux-gnu/libboost_python-py27.so
./usr/lib/x86_64-linux-gnu/libboost_python3.so
./usr/lib/x86_64-linux-gnu/libboost_python3-py36.a
./usr/lib/x86_64-linux-gnu/libboost_python.a

Ok, /usr/lib/x86_64-linux-gnu/ is where the .so files are.

https://stackoverflow.com/questions/3808775/cmake-doesnt-find-boost

https://askubuntu.com/questions/449348/why-are-boost-package-libs-installed-to-usr-lib-x86-64-linux-gnu

https://stackoverflow.com/questions/36881958/c-program-cannot-find-boost

The last one had the sauce that worked for someone with a similar problem.

So I’m changing this bit of setup.py

        include_dirs = ['/opt/boost_1_65_1']
        library_dirs = ['/opt/boost_1_65_1/stage/lib']
        extra.extend(['-DUSE_BOOST_PYTHON', '-DUSE_BOOST_RANDOM', #'-O0',
                      #'-DVDEBUG',
                      ])
        exx = Extension('MultiNEAT._MultiNEAT',
                        sources,
                        libraries=libs,
                        library_dirs=library_dirs,
                        include_dirs=include_dirs,
                        extra_compile_args=extra)

nope

https://unix.stackexchange.com/questions/423821/gcc-usr-bin-ld-cannot-find-lglut32-lopengl32-lglu32-lfreegut-but-these

So we need to help the linker (ld) find the library somehow.

You need:

  1. To actually have the library in your computer
  2. Help gcc/the linker to find the library by providing the path to the library
    • You can add -Ldir-name to the gcc command
    • You can add the library location to the LD_LIBRARY_PATH environment variable
  3. Update the "dynamic linker" cache: sudo ldconfig

Let’s see what it’s running…

x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/src/PythonBindings.o build/temp.linux-x86_64-3.6/src/Genome.o build/temp.linux-x86_64-3.6/src/Innovation.o build/temp.linux-x86_64-3.6/src/NeuralNetwork.o build/temp.linux-x86_64-3.6/src/Parameters.o build/temp.linux-x86_64-3.6/src/PhenotypeBehavior.o build/temp.linux-x86_64-3.6/src/Population.o build/temp.linux-x86_64-3.6/src/Random.o build/temp.linux-x86_64-3.6/src/Species.o build/temp.linux-x86_64-3.6/src/Substrate.o build/temp.linux-x86_64-3.6/src/Utils.o -L/opt/boost_1_65_1/stage/lib -lboost_system -lboost_serialization -lboost_python36 -lboost_numpy36 -o build/lib.linux-x86_64-3.6/MultiNEAT/_MultiNEAT.cpython-36m-x86_64-linux-gnu.so

ok and indeed there doesn’t seem to be anything under /opt/boost_1_65_1/stage/lib with python in the name. We found it in /usr/lib/x86_64-linux-gnu/ earlier.

nope nope nope

https://stackoverflow.com/questions/24173330/cmake-is-not-able-to-find-boost-libraries

SET (BOOST_ROOT "/opt/boost_1_65_1")
SET (BOOST_INCLUDEDIR "/opt/boost_1_65_1/boost")
SET (BOOST_LIBRARYDIR "/opt/boost_1_65_1/libs")

ok so I think it's just that the Boost.Python library isn't here. It's in /usr/lib … hmm, more later

Here’s something promising: https://github.com/andrewssobral/bgslibrary/issues/96

#:/usr/lib/x86_64-linux-gnu# find | grep boost_python
./libboost_python-py36.so
./libboost_python3-py36.so.1.65.1
./libboost_python-py27.so.1.65.1
./libboost_python3-py36.so
./libboost_python-py27.a
./libboost_python-py36.a
./libboost_python.so
./libboost_python3.a
./libboost_python-py27.so
./libboost_python3.so
./libboost_python3-py36.a
./libboost_python.a

Hmm the answer doesn’t make sense here. Half of those are already symlinks. Let’s go back and make sure we compiled correctly.

./bootstrap.sh --with-libraries=python --with-python=python3.6

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

nope. ok what is cmake doing?

It calls FindBoost,

dies on:

No header defined for python36; skipping header check

No header defined for numpy36; skipping header check

so, ok, I worked it out.

I changed the CMakeLists.txt line to -lboost_python3 -lboost_numpy3

Installed /usr/local/lib/python3.6/dist-packages/multineat-0.5-py3.6-linux-x86_64.egg
Processing dependencies for multineat==0.5
Finished processing dependencies for multineat==0.5

ok but when I try to run a python boi in the examples folder,

Boost.Python.ArgumentError: Python argument types in
Genome.__init__(Genome, int, int, int, int, bool, ActivationFunction, ActivationFunction, int, Parameters, int)
did not match C++ signature:
__init__(_object*, unsigned int, unsigned int, unsigned int, unsigned int, bool, NEAT::ActivationFunction, NEAT::ActivationFunction, int, NEAT::Parameters, unsigned int, unsigned int)

ok fuck it, github, help me.

https://github.com/peter-ch/MultiNEAT/issues

Ok so the guy pointed out that the Genome constructor changed. It works fine now, after adding the extra argument.
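
For reference, roughly what the fixed call looks like (a sketch pieced together from the C++ signature above and the old examples; the argument comments are my guesses, so check Genome.h for the real meanings):

import MultiNEAT as NEAT

params = NEAT.Parameters()
genome = NEAT.Genome(
    0,        # genome ID
    3,        # number of inputs
    0,        # number of hidden neurons
    2,        # number of outputs
    False,    # FS-NEAT
    NEAT.ActivationFunction.UNSIGNED_SIGMOID,   # output activation
    NEAT.ActivationFunction.UNSIGNED_SIGMOID,   # hidden activation
    0,        # seed type
    params,
    0,        # first trailing unsigned int from the C++ signature
    0)        # the newly added field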

Categories
meta

Musing on audio/visual/motor brain

Just some notes to myself. We're going to be building some advanced shitty robots here, with Sim-To-Real policy transfer.

ENSEMBLE NNs

I had a look at merging NNs, and found this https://machinelearningmastery.com/ensemble-methods-for-deep-learning-neural-networks/ with this link as one of the most recent articles: https://arxiv.org/abs/1803.05407. It recommends using averages of multiple NNs.

AUDIO

For audio there’s https://github.com/google-coral/project-keyword-spotter which uses a 72 euro TPU https://coral.ai/products/accelerator/ for fast processing.

I've seen convolutional-network-style NNs applied to spectrograms of audio (e.g. https://medium.com/gradientcrescent/urban-sound-classification-using-convolutional-neural-networks-with-keras-theory-and-486e92785df4). Anyway, it's secondary. We can have it work with a mic and a volume threshold to start with.

MOTION

Various neural networks will be trained in simulation to perform different tasks, with egg-, chicken- and human-looking objects. Ideally we develop a robot that can't really fall over.

We need to decide whether we're giving it spatial awareness in 3D, using point clouds maybe? Creating mental maps of the environment?

VISION

Convolutional networks are typical for vision tasks. We can, however, use HyperNEAT for visual discrimination, as here: https://github.com/PacktPublishing/Hands-on-Neuroevolution-with-Python/tree/master/Chapter7

But what will make sense is to have the RPi take pics, send them across to a server on a desktop computer, play around with the image in OpenCV first, and then feed that to the neuro-evolution process.

Categories
Hardware Locomotion Vision

Robot prep 2: GPIO and Camera

So I've got the RPi camera images streaming to my laptop now, after installing OpenCV 4 and running the test code from https://github.com/jeffbass/imagezmq
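
The pattern from those test scripts is roughly this (my own sketch rather than the repo's exact code; the laptop IP and camera index are placeholders):

# On the RPi: grab frames and send them to the laptop
import socket
import cv2
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://192.168.101.127:5555')  # laptop IP (placeholder)
rpi_name = socket.gethostname()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    sender.send_image(rpi_name, frame)

# On the laptop: receive and display
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:
    name, image = image_hub.recv_image()
    cv2.imshow(name, image)
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')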

Next, we need to try to move the servos with code.

https://learn.adafruit.com/16-channel-pwm-servo-driver/python-circuitpython

ok so I installed these.

First, if no pip3,

sudo apt-get update

sudo apt-get install python3-pip

Then,

sudo pip3 install adafruit-circuitpython-pca9685

sudo pip3 install adafruit-circuitpython-servokit

Adafruit makes you copy-paste line by line from here…

https://github.com/adafruit/Adafruit_CircuitPython_Motor

Ok, looking in the examples folder of that repo…

from board import SCL, SDA
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)

pca = PCA9685(i2c)
pca.frequency = 50  # standard 50 Hz servo PWM

servo2 = servo.Servo(pca.channels[2])

# sweep up to 90 degrees and back down
for i in range(90):
    servo2.angle = i
for i in range(90):
    servo2.angle = 90 - i

pca.deinit()

I changed it to 90 degrees and got rid of the comments. It suggests setting a min and max pulse for the servo.

I ran it and the servo got angry with me and wouldn’t stop. I had to unplug everything because it was eating up the cables in its madness.

Ok so datasheet of MG996R: https://www.electronicoscaldas.com/datasheet/MG996R_Tower-Pro.pdf

It keeps going if I plug just the power back in. It seems to rotate continuously. So something is f***ed. Rebooting RPi. It’s supposed to be 180 degree rotation. Will need to read up on servo GPIO forums.

I also tried the ‘fraction’ style code: https://github.com/adafruit/Adafruit_CircuitPython_Motor/blob/master/examples/motor_pca9685_servo_sweep.py

and it rotated and rotated.

So, I think it must be a continuous servo. Now that I look at the product https://mantech.co.za/ProductInfo.aspx?Item=15M8959 I see it was a continuous servo. Derp.

Ok so let’s see… continuous servo: https://github.com/adafruit/Adafruit_CircuitPython_Motor/blob/master/examples/motor_pca9685_continuous_servo.py

We need to set some limits; apparently these are the defaults:

servo7 = servo.ContinuousServo(pca.channels[7], min_pulse=750, max_pulse=2250)

and possibly set the reference clock speed using a calibrated value, from the calibrate.py program:

pca = PCA9685(i2c, reference_clock_speed=25630710)

https://github.com/adafruit/Adafruit_CircuitPython_PCA9685/tree/master/examples

ok. cool.


At a later date, testing the MG996R servo, I needed to initialise the min_pulse value at 550, or it rotated continuously.

servo7 = servo.ContinuousServo(pca.channels[1], min_pulse=550, max_pulse=2250)
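
Putting it together, a minimal continuous-servo test looks something like this (channel and pulse values are the ones that worked above; throttle runs from -1.0 to 1.0, with 0 meaning stop):

import time
from board import SCL, SDA
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)
pca = PCA9685(i2c)
pca.frequency = 50

s = servo.ContinuousServo(pca.channels[1], min_pulse=550, max_pulse=2250)
s.throttle = 0.5    # half speed in one direction
time.sleep(1)
s.throttle = -0.5   # half speed in the other direction
time.sleep(1)
s.throttle = 0      # stop
pca.deinit()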

Categories
Vision

Installing OpenCV on RPi

I followed these instructions:

Install OpenCV 4 on Raspberry Pi 4 and Raspbian Buster

sudo apt-get -y update && sudo apt-get -y upgrade

sudo apt-get -y install build-essential cmake pkg-config

sudo apt-get -y install libjpeg-dev libtiff5-dev libjasper-dev libpng-dev

sudo apt-get -y install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

sudo apt-get -y install libxvidcore-dev libx264-dev

sudo apt-get -y install libfontconfig1-dev libcairo2-dev

sudo apt-get -y install libgdk-pixbuf2.0-dev libpango1.0-dev

sudo apt-get -y install libgtk2.0-dev libgtk-3-dev

sudo apt-get -y install libatlas-base-dev gfortran

sudo apt-get -y install libhdf5-dev libhdf5-serial-dev libhdf5-103

sudo apt-get -y install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5

sudo apt-get -y install python3-dev

but then needed two more libs installed, which I found here:

https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/18

sudo apt install libilmbase23
sudo apt install libopenexr-dev

According to the PyImageSearch guy, OpenCV will be faster on the RPi if we build it from source, but that takes an extra 2 hours. So no.

So the remaining parts were:

sudo wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo python3 get-pip.py
sudo rm -rf ~/.cache/pip
sudo pip install virtualenv virtualenvwrapper 

Add these to ~/.bashrc

export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Then

source ~/.bashrc
mkvirtualenv cv -p python3
pip install "picamera[array]"
pip install opencv-contrib-python 

This gave this error: Could not find OpenSSL

ok so

sudo apt-get install libssl-dev
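
With libssl-dev in place the pip install should go through. A quick sanity check inside the cv virtualenv:

# run inside the 'cv' virtualenv
import cv2
print(cv2.__version__)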

Categories
dev Hardware hardware_ Linux

RPi without keyboard and mouse

https://sendgrid.com/blog/complete-guide-set-raspberry-pi-without-keyboard-mouse/

https://github.com/motdotla/ansible-pi

First thing is you need an empty file called 'ssh' in the boot partition of the Raspbian SD card, to enable SSH:

https://www.raspberrypi.org/forums/viewtopic.php?t=144839

ok so I found the IP address of the Pi:

root@chrx:~# nmap -sP 192.168.101.0/24

Starting Nmap 7.60 ( https://nmap.org ) at 2020-04-05 17:06 UTC
Nmap scan report for _gateway (192.168.101.1)
Host is up (0.0026s latency).
MAC Address: B8:69:F4:1B:D5:0F (Unknown)
Nmap scan report for 192.168.101.43
Host is up (0.042s latency).
MAC Address: 28:0D:FC:76:BB:3E (Sony Interactive Entertainment)
Nmap scan report for 192.168.101.100
Host is up (0.049s latency).
MAC Address: 18:F0:E4:E9:AF:E3 (Unknown)
Nmap scan report for 192.168.101.101
Host is up (0.015s latency).
MAC Address: DC:85:DE:22:AC:5D (AzureWave Technology)
Nmap scan report for 192.168.101.103
Host is up (-0.057s latency).
MAC Address: 74:C1:4F:31:47:61 (Unknown)
Nmap scan report for 192.168.101.105
Host is up (-0.097s latency).
MAC Address: B8:27:EB:03:24:B0 (Raspberry Pi Foundation)

Nmap scan report for 192.168.101.111
Host is up (-0.087s latency).
MAC Address: 00:24:D7:87:78:EC (Intel Corporate)
Nmap scan report for 192.168.101.121
Host is up (-0.068s latency).
MAC Address: AC:E0:10:C0:84:26 (Liteon Technology)
Nmap scan report for 192.168.101.130
Host is up (-0.097s latency).
MAC Address: 80:5E:C0:52:7A:27 (Yealink(xiamen) Network Technology)
Nmap scan report for 192.168.101.247
Host is up (0.15s latency).
MAC Address: DC:4F:22:FB:0B:27 (Unknown)
Nmap scan report for chrx (192.168.101.127)
Host is up.
Nmap done: 256 IP addresses (11 hosts up) scanned in 2.45 seconds

if nmap is not installed,

apt-get install nmap

Connect to whatever IP it is

ssh -vvvv pi@192.168.101.105

Are you sure you want to continue connecting (yes/no)? yes

Cool, and to set up wifi, let’s check out this ansible script https://github.com/motdotla/ansible-pi

$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible

ok 58MB install…

# ansible-playbook playbook.yml -i hosts --ask-pass --become -c paramiko

PLAY [Ansible Playbook for configuring brand new Raspberry Pi]

TASK [Gathering Facts]

TASK [pi : set_fact]
ok: [192.168.101.105]

TASK [pi : Configure WIFI] **
changed: [192.168.101.105]

TASK [pi : Update APT package cache]
[WARNING]: Updating cache and auto-installing missing dependency: python-apt
ok: [192.168.101.105]

TASK [pi : Upgrade APT to the lastest packages] *
changed: [192.168.101.105]

TASK [pi : Reboot] **
changed: [192.168.101.105]

TASK [pi : Wait for Raspberry PI to come back] **
ok: [192.168.101.105 -> localhost]

PLAY RECAP ****
192.168.101.105 : ok=7 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

And I'll unplug the ethernet and try to connect by ssh again.

Ah, but it's moved up to 192.168.101.106 now.

nmap -sP 192.168.101.0/24 (I checked again) and now it showed up as 'Unknown', but ssh pi@192.168.101.106 worked.

(If you can connect to your router, eg. 192.168.0.1 for most D-Link routers, you can go to something like Status -> Wireless, to see connected devices too, and skip the nmap stuff.)

I log in, then configure some stuff:

sudo raspi-config

Under the Interfacing Options section, enable the camera and I2C.

sudo apt-get install python-smbus
sudo apt-get install i2c-tools

ok tested with

raspistill -o out.jpg

Then copied it across to my computer with

scp pi@192.168.101.106:/home/pi/out.jpg out.jpg

and then made it smaller (because there's no way I'm uploading the 4MB version):

convert out.jpg -resize 800x600 new.jpg

Cool, and it looks like we also need to expand the partition:

sudo raspi-config again (Advanced Options, then the first option)


Upon configuring the latest Pi, I needed to first use the ethernet cable,

and then once logged in, use

sudo rfkill unblock 0

to turn on the wifi. The SSID and wifi password could be configured in raspi-config.


At Bitwäsherei, the ethernet cable to the router trick didn’t work.

Instead, as per the resident Gandalf’s advice, the instructions here

https://raspberrypi.stackexchange.com/questions/10251/prepare-sd-card-for-wifi-on-headless-pi

worked for setting up wireless access on the sd card.

“Since May 2016, Raspbian has been able to copy wifi details from /boot/wpa_supplicant.conf into /etc/wpa_supplicant/wpa_supplicant.conf to automatically configure wireless network access”

The file contains

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=«your_ISO-3166-1_two-letter_country_code»

network={
    ssid="«your_SSID»"
    psk="«your_PSK»"
    key_mgmt=WPA-PSK
}

Save, and put sd card in RPi. Wireless working and can ssh in again!

2022 News flash:

Incredibly, some more issues.

New issue, and the user guide isn't updated for it yet:

https://stackoverflow.com/questions/71804429/raspberry-pi-ssh-access-denied

In essence, the default pi user no longer exists, so you have to create it and set its password using either the official Imager tool or by creating a userconf file in the boot partition of your microSD card, which should contain a single line of text: username:hashed-password

The default pi / raspberry credentials as a userconf line:

pi:$6$/4.VdYgDm7RJ0qM1$FwXCeQgDKkqrOU3RIRuDSKpauAbBvP11msq9X58c8Que2l1Dwq3vdJMgiZlQSbEXGaY5esVHGBNbCxKLVNqZW1
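
To generate that hashed password for your own username and password (instead of using the default), something like this works on any Linux box with Python 3 (the crypt module is standard library there, though it was removed in Python 3.13):

import crypt

username = 'pi'           # the user you want created
password = 'raspberry'    # pick your own
hashed = crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512))
print('%s:%s' % (username, hashed))   # paste this line into the userconf file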

Categories
AI/ML deep

Hyperparameters

https://deepai.org/machine-learning-glossary-and-terms/hyperparameter

“A hyperparameter is a parameter that is set before the learning process begins. These parameters are tunable and can directly affect how well a model trains.”

There are a few hyperparameter optimizers:

https://optuna.org/

http://hyperopt.github.io/hyperopt/

(Ended up using Tune, in the Ray framework)

https://docs.ray.io/en/master/tune/
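
A minimal Tune sketch with a made-up objective, just to show the shape of it (this is the tune.run API from the Ray 1.x era; newer releases moved to tune.Tuner):

from ray import tune

def objective(config):
    # stand-in for "train a model and measure it"
    score = config['lr'] * 100 + config['hidden'] * 0.01
    tune.report(score=score)

analysis = tune.run(
    objective,
    config={
        'lr': tune.loguniform(1e-4, 1e-1),
        'hidden': tune.choice([32, 64, 128]),
    },
    num_samples=10,
)
print(analysis.get_best_config(metric='score', mode='max'))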

Categories
robots

Stanley: The Robot that won DARPA’s heart

http://robots.stanford.edu/papers/thrun.stanley05.pdf

the interesting part:

Categories
evolution institutes

MIT Press

https://www.mitpressjournals.org/doi/full/10.1162/ARTL_a_00210 this was such a good find, but the rest of their site wasn’t cooperating

http://cognet.mit.edu/journals/evolutionary-computation/28/1 It would be cool if I could view these PDFs. SA IP range banned 🙁

Categories
dev envs simulation

OpenAI Gym

pip3 install gym

git clone https://github.com/openai/gym.git

cd gym/examples/agents/

python3 random_agent.py

root@root:/opt/gym/examples/agents# python3 random_agent.py
INFO: Making new env: CartPole-v0
INFO: Creating monitor directory /tmp/random-agent-results
INFO: Starting new video recorder writing to /tmp/random-agent-results/openaigym.video.0.4726.video000000.mp4
INFO: Starting new video recorder writing to /tmp/random-agent-results/openaigym.video.0.4726.video000001.mp4
INFO: Starting new video recorder writing to /tmp/random-agent-results/openaigym.video.0.4726.video000008.mp4
INFO: Starting new video recorder writing to /tmp/random-agent-results/openaigym.video.0.4726.video000027.mp4
INFO: Starting new video recorder writing to /tmp/random-agent-results/openaigym.video.0.4726.video000064.mp4
INFO: Finished writing results. You can upload them to the scoreboard via gym.upload(‘/tmp/random-agent-results’)
root@chrx:/opt/gym/examples/agents#
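
That random agent boils down to a loop like this (a minimal sketch against the Gym API of the time; newer Gym releases changed the reset/step return values):

import gym

env = gym.make('CartPole-v0')
for episode in range(3):
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()         # random action
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print('episode %d: reward %.0f' % (episode, total_reward))
env.close()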

https://github.com/openai/gym/blob/master/docs/environments.md

https://gym.openai.com/envs/#mujoco of course, we're using Bullet instead of MuJoCo as the physics engine, as it's free.

Categories
AI/ML evolution neuro

HyperNEAT

https://github.com/PacktPublishing/Hands-on-Neuroevolution-with-Python.git

Copy pasta:

HyperNEAT: Powerful, Indirect Neural Network Evolution
by Hunter Heidenreich, Jan 10, 2019
https://towardsdatascience.com/hyperneat-powerful-indirect-neural-network-evolution-fba5c7c43b7b

Expanding NeuroEvolution

Last week, I wrote an article about NEAT (NeuroEvolution of Augmenting Topologies) and we discussed a lot of the cool things that surrounded the algorithm. We also briefly touched upon how this older algorithm might even impact how we approach network building today, alluding to the fact that neural networks need not be built entirely by hand.

Today, we are going to dive into a different approach to neuroevolution, an extension of NEAT called HyperNEAT. NEAT, as you might remember, had a direct encoding for its network structure. This was so that networks could be more intuitively evolved, node by node and connection by connection. HyperNEAT drops this idea because in order to evolve a network like the brain (with billions of neurons), one would need a much faster way of evolving that structure.

HyperNEAT is a much more conceptually complex algorithm (in my opinion, at least) and even I am working on understanding the nuts and bolts of how it all works. Today, we will take a look under the hood and explore some of the components of this algorithm so that we might better understand what makes it so powerful and reason about future extensions in this age of deep learning.

HyperNEAT

Motivation

Before diving into the paper and algorithm, I think it’s worth exploring a bit more the motivation behind HyperNEAT.

The full name of the paper is “A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks”, which is quite the mouthful! But already, we can see two of the major points. It’s a hypercube-based indirect encoding. We’ll get into the hypercube part later, but already we know that it’s a move from direct encodings to indirect encodings (see my last blog on NEAT for a more detailed description of some of the differences between the two). Furthermore, we get the major reasoning behind it as well: For evolving big neural nets!

More than that, the creators of this algorithm highlight that if one were to look at the brain, they see a “network” with billions of nodes and trillions of connections. They see a network that uses repetition of structure, reusing a mapping of the same gene to generate the same physical structure multiple times. They also highlight that the human brain is constructed in a way so as to exploit physical properties of the world: symmetry (have mirrors of structures, two eyes for input for example) and locality (where nodes are in the structure influences their connections and functions).

Contrast this with what we know about neural networks, either built through an evolution procedure or constructed by hand and trained. Do any of these properties hold? Sure, if we force the network to have symmetry and locality, maybe… However, even then, take a dense, feed-forward network where all nodes in one layer are connected to all nodes in the next! And when looking at the networks constructed by the vanilla NEAT algorithm? They tend to be disorganized, sporadic, and not exhibit any of these nice regularities.

Enter in HyperNEAT! Utilizing an indirect encoding through something called connective Compositional Pattern Producing Networks (CPPNs), HyperNEAT attempts to exploit geometric properties to produce very large neural networks with these nice features that we might like to see in our evolved networks.

What’s a Compositional Pattern Producing Network?

In the previous post, we discussed encodings and today we’ll dive deeper into the indirect encoding used for HyperNEAT. Now, indirect encodings are a lot more common than you might think. In fact, you have one inside yourself!

DNA is an indirect encoding because the phenotypic results (what we actually see) are orders of magnitude larger than the genotypic content (the genes in the DNA). If you look at a human genome, we’ll say it has about 30,000 genes coding for approximately 3 billion amino acids. Well, the brain has 3 trillion connections. Obviously, there is something indirect going on here!

Something borrowed from the ideas of biology is an encoding scheme called developmental encoding. This is the idea that all genes should be able to be reused at any point in time during the developmental process and at any location within the individual. Compositional Pattern Producing Networks (CPPNs) are an abstraction of this concept that have been shown to be able to create patterns for repeating structures in Cartesian space. (The original post shows some structures that were produced with CPPNs.)

Pure CPPNs

A phenotype can be described as a function of n dimensions, where n is the number of phenotypic traits. What we see is the result of some transformation from genetic encoding to the exhibited traits. By composing simple functions, complex patterns can actually be easily represented. Things like symmetry, repetition, asymmetry, and variation all easily fall out of an encoding structure like this depending on the types of networks that are produced.

We’ll go a little bit deeper into the specifics of how CPPNs are specifically used in this context, but hopefully this gives you the general feel for why and how they are important in the context of indirect encodings.

Tie In to NEAT

In HyperNEAT, a bunch of familiar properties reappear from the original NEAT paper. Things like complexification over time are important (we'll start simple and evolve complexity if and when it's needed). Historical markings will be used so that we can properly line up encodings for any sort of crossover. Uniform starting populations will also be used so that there are no wildcard, incompatible networks from the start.

The major difference in how NEAT is used in this paper and the previous? Instead of using the NEAT algorithm to evolve neural networks directly, HyperNEAT uses NEAT to evolve CPPNs. This means that more “activation” functions are used for CPPNs since things like Gaussians give rise to symmetry and trigonometric functions help with repetition of structure.

The Algorithm

So now that we've talked about what a CPPN is and that we use the NEAT algorithm to evolve and adjust them, the question becomes: how are these actually used in the overall HyperNEAT context?

First, we need to introduce the concept of a substrate. In the scope of HyperNEAT, a substrate is simply a geometric ordering of nodes. The simplest example could be a plane or a grid, where each discrete (x, y) point is a node. A connective CPPN will actually take two of these points and compute the weight between those two nodes. We could think of that as the following equation:

CPPN(x1, y1, x2, y2) = w

Where CPPN is an evolved CPPN, like that of what we’ve discussed in previous sections. We can see that in doing this, every single node will actually have some sort of weight connection between them (even allowing for recurrent connections). Connections can be positive or negative, and a minimum weight magnitude can also be defined so that any outputs below that threshold will result in no connection.

The geometric layout of nodes must be specified prior to the evolution of any CPPN. As a result, as the CPPN is evolved, the actual connection weights and network topology will result in a pattern that is geometric (all inputs are based on the positions of nodes).

In the case where the nodes are arranged on some sort of 2 dimensional plane or grid, the CPPN is a function of four dimensions and thus we can say it is being evolved on a four dimensional hypercube. This is where we get the name of the paper!
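
To make the substrate-query idea concrete, here's a toy sketch in plain Python (not the actual HyperNEAT code: just a hand-written 'CPPN' composed of a Gaussian and a sine, queried over a small 2D grid with a minimum weight magnitude):

import math

def cppn(x1, y1, x2, y2):
    # composed simple functions: Gaussian of distance (locality/symmetry),
    # sine over x2 (repetition)
    d = math.hypot(x2 - x1, y2 - y1)
    return math.exp(-d * d) * math.sin(3.0 * x2)

grid = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]  # 5x5 substrate
threshold = 0.2   # minimum weight magnitude; below this, no connection
weights = {}
for i, (x1, y1) in enumerate(grid):
    for j, (x2, y2) in enumerate(grid):
        w = cppn(x1, y1, x2, y2)
        if abs(w) >= threshold:
            weights[(i, j)] = w

print('%d connections above threshold out of %d' % (len(weights), len(grid) ** 2))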

Regularities in the Produced Patterns

All regularities that we’ve mentioned before can easily fall out of an encoding like this. Symmetry can occur by using symmetric functions over something like x1 and x2. This can be a function like a Gaussian function. Imperfect symmetry can occur when symmetry is used over things like both x and y, but only with respect to one axis.

Repetition also falls out, like we’ve mentioned before, with periodic functions such as sine, cosine, etc. And like with symmetry, variation against repetition can be introduced by inducing a periodic function over a non-repeating aspect of the substrate. Thus all four major regularities that were aimed for are able to develop from this encoding.

Substrate Configuration

You may have guessed from the above that the configuration of the substrate is critical. And that makes a lot of sense. In biology, the structure of something is tied to its functionality. Therefore, in our own evolution schema, the structure of our nodes is tightly linked to the functionality and performance that may be seen on a particular task.

The original paper specifically outlines a couple of substrate configurations (shown in its figures).

I think it’s very important to look at the configuration that is a three dimensional cube and note how it simply adjusts our CPPN equation from four dimensional to six dimensional:

CPPN(x1, y1, z1, x2, y2, z2) = w

Also, the grid can be extended to the sandwich configuration by only allowing for nodes on one half to connect to the other half. This can be seen easily as an input/output configuration! The authors of the paper actually use this configuration to take in visual activation on the input half and use it to activate certain nodes on the output half.

The circular layout is also interesting, as geometry need not be a grid for a configuration. A radial geometry can be used instead, allowing for interesting behavioral properties to spawn out of the unique geometry that a circle can represent.

Input and Output Layout

Inputs and outputs are laid out prior to the evolution of CPPNs. However, unlike a traditional neural network, our HyperNEAT algorithm is made aware of the geometry of the inputs and outputs and can learn to exploit and embrace the regularities of it. Locality and repetition of inputs and outputs can be easily exploited through this extra information that HyperNEAT receives.

Substrate Resolution

Another powerful and unique property of HyperNEAT is the ability to scale the resolution of a substrate up and down. What does that mean? Well, let’s say you evolve a HyperNEAT network based on images of a certain size. The underlying geometry that was exploited to perform well at that size results in the same pattern when scaled to a new size. Except, no extra training is needed. It simply scales to another size!

Summarization of the Algorithm

I think with all that information about how this algorithm works, it’s worth summarizing the steps of it.

  1. Choose a substrate configuration (the layout of nodes and where input/output is located)
  2. Create a uniform, minimal initial population of connective CPPNs
  3. Repeat until a solution is found:
  4. For each CPPN:
    • (a) Generate connections for the neural network using the CPPN
    • (b) Evaluate the performance of the neural network
  5. Reproduce CPPNs using the NEAT algorithm

Conclusion

And there we have it! That’s the HyperNEAT algorithm. I encourage you to take a look at the paper if you wish to explore more of the details or wish to look at the performance on some of the experiments they did with the algorithm (I particularly enjoy their food gathering robot experiment).

What are the implications for the future? That's something I've been thinking about recently as well. Is there a tie-in from HyperNEAT to training traditional deep networks today? Is this a better way to train deep networks? There's another paper on Evolvable Substrate HyperNEAT, where the actual substrates are evolved as well, a paper I wish to explore in the future! But is there something hidden in that paper that bridges the gap between HyperNEAT and deep neural networks? Only time will tell and only we can answer that question!

Hopefully, this article was helpful! If I missed anything or if you have questions, let me know. I’m still learning and exploring a lot of this myself so I’d be more than happy to have a conversation about it on here or on Twitter.

Originally hosted at hunterheidenreich.com.