Categories
Hardware hardware_ Locomotion sim2real

Finding where we left off

Months have passed. It is 29 December 2020 now, and Miranda and I are at Bitwäscherei, in Zurich, during the remoteC3 (Chaos Computer Club) event.

We have a few things to get back to:

  • Locomotion
    • Sim2Real
  • Vision
    • Object detection / segmentation

We plan to work on a chicken cam, but we're waiting for the Raspberry Pi camera to arrive. So first, locomotion.

Miranda would like to add more leg segments, but process-wise, we need the software and hardware working together on the basics before adding more parts.

Sentient Table prototype

I’ve tested the GPIO, and it’s working, after adjusting the minimum pulse value for the MG996R servo.

So, where did we leave off, before KonS MFRU / ICAF?

Locomotion:

There was a walking table in simulation, and progress was being saved with Ray and Tune, so that we could reload the trained state.

I remember the best walker had a 'reward' of 902, so I searched for 902:

grep -R 'episode_reward_mean\": 902'

And found these files:

 409982 Aug 1 07:52 events.out.tfevents.1596233543.chrx
334 Jul 31 22:12 params.json
304 Jul 31 22:12 params.pkl
132621 Aug 1 07:52 progress.csv
1542332 Aug 1 07:52 result.json

and there are checkpoint directories, with binary files.

So what are these files? How do I extract actions?

Well, it looks like these files just keep track of Ray/Tune progress. If we want logs of the robot state, we need to make them ourselves. The original minitaur code used Google protobuf to log state, so I set the parameter to log to a directory.

log_path="/media/chrx/0FEC49A4317DA4DA/logs"

So now when I run it again, it makes a file in the format below:


message RobotableEpisode {
// The state-action pair at each step of the log.
repeated RobotableStateAction state_action = 1;
}

message RobotableMotorState {
// The current angle of the motor.
double angle = 1;
// The current velocity of the motor.
double velocity = 2;
// The current torque exerted at this motor.
double torque = 3;
// The action directed to this motor. The action is the desired motor angle.
double action = 4;
}

message RobotableStateAction {
// Whether the state/action information is valid. It is always true if the
// proto is from simulation. It might be false when communication error
// happens on robotable hardware.
bool info_valid = 6;
// The time stamp of this step. It is computed since the reset of the
// environment.
google.protobuf.Timestamp time = 1;
// The position of the base of the minitaur.
robotics.messages.Vector3d base_position = 2;
// The orientation of the base of the minitaur. It is represented as (roll,
// pitch, yaw).
robotics.messages.Vector3d base_orientation = 3;
// The angular velocity of the base of the minitaur. It is the time derivative
// of (roll, pitch, yaw).
robotics.messages.Vector3d base_angular_vel = 4;
// The motor states (angle, velocity, torque, action) of eight motors.
repeated RobotableMotorState motor_states = 5;
}

I’m pretty much only interested in that last line,

repeated RobotableMotorState motor_states = 5;

So that’s the task, to decode the protobuf objects.

import os
import inspect

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

import argparse
from gym_robotable.envs import logging

if __name__ == "__main__":

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/logs/robotable_log_2020-12-29-191602')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode = logging.restore_episode(args.log_file)
    print(dir (episode))
    print("episode=",episode)
    fields = episode.ListFields()

    for field in fields:
        print(field)

This prints out some JSON-like info. We're on the right path.

time {
seconds: 5
}
base_position {
x: 0.000978083148860855
y: 1.7430418253385236
z: -0.0007063670972221042
}
base_orientation {
x: -0.026604138524100478
y: 0.00973575985636693
z: -0.08143286338936992
}
base_angular_vel {
x: 0.172553297157456
y: -0.011541306494121786
z: -0.010542314686643973
}
motor_states {
angle: -0.12088901254000686
velocity: -0.868766524998517
torque: 3.3721667267908284
action: 3.6539504528045654
}
motor_states {
angle: 0.04232669165311699
velocity: 1.5488756496627718
torque: 3.4934419908437704
action: 1.4116498231887817
}
motor_states {
angle: 0.8409251448232009
velocity: -1.617737108768752
torque: -3.3539541961507124
action: -3.7024881839752197
}
motor_states {
angle: 0.13926660037454777
velocity: -0.9575437158301312
torque: 3.563701581854714
action: 1.104300618171692
}
info_valid: true
])

We will have to experiment to work out how to translate this data to something usable. The servos are controlled with throttle, from -1 to 1.

Technically I should probably rewrite the simulation to output this “throttle” value. But let’s work with what we have, for now.
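A naive mapping, just to have something concrete in mind (this is a guess at a conversion, not something the simulation actually outputs):

import numpy as np

def to_throttle(values):
    # scale a simulated signal into the servo's [-1, 1] throttle range
    values = np.asarray(values)
    return np.clip(values / np.max(np.abs(values)), -1.0, 1.0)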

My first attempt will be to extract and normalize the torque values to get a sequence of actions… nope.

Ok so plan B. Some visualisation of the values should help. I have it outputting the JSON now.

from google.protobuf.json_format import MessageToJson

episode_proto = logging.restore_episode(args.log_file)

jsonObj = MessageToJson(episode_proto)

print(jsonObj)

I decided to use Streamlit, which integrates with various plotting libraries. After looking at the different plotting options, Plotly seems the most advanced.

Ok so in JSON,

{
  "time": "1970-01-01T00:00:05Z",
  "basePosition": {
    "x": 0.000978083148860855,
    "y": 1.7430418253385236,
    "z": -0.0007063670972221042
  },
  "baseOrientation": {
    "x": -0.026604138524100478,
    "y": 0.00973575985636693,
    "z": -0.08143286338936992
  },
  "baseAngularVel": {
    "x": 0.172553297157456,
    "y": -0.011541306494121786,
    "z": -0.010542314686643973
  },
  "motorStates": [
    {
      "angle": -0.12088901254000686,
      "velocity": -0.868766524998517,
      "torque": 3.3721667267908284,
      "action": 3.6539504528045654
    },
    {
      "angle": 0.04232669165311699,
      "velocity": 1.5488756496627718,
      "torque": 3.4934419908437704,
      "action": 1.4116498231887817
    },
    {
      "angle": 0.8409251448232009,
      "velocity": -1.617737108768752,
      "torque": -3.3539541961507124,
      "action": -3.7024881839752197
    },
    {
      "angle": 0.13926660037454777,
      "velocity": -0.9575437158301312,
      "torque": 3.563701581854714,
      "action": 1.104300618171692
    }
  ],
  "infoValid": true
}

Plotly works with pandas DataFrames, which are tabular: two dimensions. So I need to transform this nested data into something usable.

Something like time on the x-axis

and angle / velocity / torque / action on the y axis.

Ok so how to do this…?
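The rough shape of what I'm after is a long-form DataFrame with one row per timestep per motor, something like this sketch (reusing the episode_proto loaded above; Plotly Express here just for illustration):

import pandas as pd
import plotly.express as px

# one row per (timestep, motor), so each column is a plain 1-D series
rows = []
for step_log in episode_proto.state_action:
    t = step_log.time.seconds + step_log.time.nanos * 1e-9
    for i, motor in enumerate(step_log.motor_states):
        rows.append({'time': t, 'motor': i,
                     'angle': motor.angle, 'velocity': motor.velocity,
                     'torque': motor.torque, 'action': motor.action})

df = pd.DataFrame(rows)
fig = px.line(df, x='time', y='angle', color='motor')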

Well, I've almost got it, but I mostly had to give up on Streamlit's native line_chart for now. Plotly has line chart code that can handle multiple variables. But I'm getting sidetracked by this bug:

When I import plotly’s library,

import plotly.graph_objects as go

“No module named ‘plotly.graph_objects’; ‘plotly’ is not a package”

https://stackoverflow.com/questions/57105747/modulenotfounderror-no-module-named-plotly-graph-objects

import plotly.graph_objects as go ? no…

from plotly import graph_objs as go ? no…

from plotly import graph_objects as go ? no…

hmm.

pip3 install plotly==4.14.1 ?

Requirement already satisfied: plotly==4.14.1 in /usr/local/lib/python3.6/dist-packages (4.14.1)

no… why are the docs wrong then?

Ah ha.

“This is the well known name shadowing trap.” – stackoverflow

I named my file plotly.py – that is the issue.

So, ok run it again… (streamlit run plot.py) and open localhost:8501…

Now,

Running as root without --no-sandbox is not supported. See https://crbug.com/638180

Ah ha. I went back to the Streamlit notation and it worked.

#fig.show()
st.plotly_chart(fig)

Ok excellent, so here is my first round of code:

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

from gym_robotable.envs import logging


if __name__ == "__main__":


    st.title('Analyticz')


    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/logs/robotable_log_2020-12-29-191602')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)


    times = []
    angles = [[]]*4           # < bugs!
    velocities = [[]]*4
    torques = [[]]*4
    actions = [[]]*4

    for step in range(len(episode_proto.state_action)):

       step_log = episode_proto.state_action[step]
       times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))

       for i in range(4):
           angles[i].append(step_log.motor_states[i].angle)
           velocities[i].append(step_log.motor_states[i].velocity)
           torques[i].append(step_log.motor_states[i].torque)
           actions[i].append(step_log.motor_states[i].action)
 
    print(angles)
    print(times)
    print(len(angles))
    print(len(velocities))
    print(len(torques))
    print(len(actions))
    print(len(times))

    # Create traces
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=times, y=angles[0],
	            mode='lines',
	            name='Angles'))
    fig.add_trace(go.Scatter(x=times, y=velocities[0],
	            mode='lines+markers',
	            name='Velocities'))
    fig.add_trace(go.Scatter(x=times, y=torques[0],
	            mode='markers', 
                    name='Torques'))
    fig.add_trace(go.Scatter(x=times, y=actions[0],
	            mode='markers', 
                    name='Actions'))

    st.plotly_chart(fig)

And it’s plotting data for one leg.

If this is just 5 seconds of simulation, then velocities looks like it might be the closest match. You can imagine it going up a bit, back a bit, then a big step forward.

So, one idea is to do symbolic regression, to approximate the trigonometric equations for quadrupedal walking (or just google them), and generalise that into a walking algorithm for locomotion. I could use genetic programming, like at university (https://gplearn.readthedocs.io/en/stable/examples.html#symbolic-regressor). But that's overkill and probably won't work. Gotta smooth the graph incrementally, and normalize it.

Let’s see what happens next, visually, after 5 seconds of data, and then view the same, for other legs.

Ok there is 30 seconds of walking.

The tools I wrote for the walker are run with 'python3 play_tune.py --replay 1'. It looks for the best checkpoint and replays it from there.

But now I seem to be getting the same graph for different legs. What? We’re going to have to investigate.

Ok, turns out [[]]*4 is the wrong way to initialise a list of lists in Python: it makes every sublist a reference to the same list. Here's the correct way:

velocities = [[] for i in range(4)]
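A quick demo of the difference:

a = [[]] * 4                # four references to the same list
a[0].append(1)
print(a)                    # [[1], [1], [1], [1]]

b = [[] for i in range(4)]  # four separate lists
b[0].append(1)
print(b)                    # [[1], [], [], []]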

Now I have 4 different legs.

The graph is very spiky, so I’ve added a rolling window average, and normalised it between -1 and 1 since that’s what the servo throttle allows.

I am thinking it's maybe because the ranges between min and max for the 4 legs are:

3.1648572819886085
1.7581604444983845
5.4736002843351805
1.986915632875287

The rear legs aren't moving as much, so maybe it doesn't make sense to normalize each of them to [-1, 1] on its own scale. Maybe the back right leg, which moves the most, should be normalized to [-1, 1], and the other legs scaled down proportionally (a minimal sketch of that is below). Anyway, let's see. Good enough for now.
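Something like dividing every leg by the same (largest) half-range, rather than each leg by its own range:

# sketch: scale all legs by the largest per-leg range, preserving relative amplitude
ranges = [np.ptp(v) for v in velocities]
scale = max(ranges) / 2.0
proportional = [(np.asarray(v) - np.mean(v)) / scale for v in velocities]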

In the code, the motors order is:

front right, front left, back right, back left.

Ok so to save the outputs…

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse
import plotly.express as px

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

from gym_robotable.envs import logging

def normalize_negative_one(img):
    normalized_input = (img - np.amin(img)) / (np.amax(img) - np.amin(img))
    return 2*normalized_input - 1

if __name__ == "__main__":
    st.title('Analyticz')

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/walkinglogs/robotable_log_2021-01-17-231240')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)

    times = []
    velocities = [[] for i in range(4)]

    for step in range(len(episode_proto.state_action)):
        step_log = episode_proto.state_action[step]
        times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))
        for i in range(4):
            velocities[i].append(step_log.motor_states[i].velocity)

    # truncate because of a bunch of trailing zeros
    velocities = [v[0:3000] for v in velocities]
    times = times[0:3000]

    # get moving averages, then normalize between -1 and 1
    window_size = 40
    averages = []
    for i in range(4):
        moving_average = pd.Series(velocities[i]).rolling(window_size).mean().tolist()
        without_nans = moving_average[window_size - 1:]
        averages.append(normalize_negative_one(np.asarray(without_nans)))

    np.save('velocity_front_right', averages[0])
    np.save('velocity_front_left', averages[1])
    np.save('velocity_back_right', averages[2])
    np.save('velocity_back_left', averages[3])
    np.save('times', times)

    # Create traces
    for i in range(4):
        fig = go.Figure()
        fig.add_trace(go.Scatter(x=times, y=velocities[i],
                    mode='lines',
                    name='Velocities ' + str(i)))
        fig.add_trace(go.Scatter(x=times, y=averages[i].tolist(),
                    mode='lines',
                    name='Norm Moving Average ' + str(i)))
        st.plotly_chart(fig)

Then I'm loading those npy files and iterating them out to the motors.

import time
import numpy as np
from board import SCL, SDA
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(SCL, SDA)
pca = PCA9685(i2c, reference_clock_speed=25630710)
pca.frequency = 50
servo0 = servo.ContinuousServo(pca.channels[0], min_pulse=685, max_pulse=2280)
servo1 = servo.ContinuousServo(pca.channels[1], min_pulse=810, max_pulse=2095)
servo2 = servo.ContinuousServo(pca.channels[2], min_pulse=700, max_pulse=2140)
servo3 = servo.ContinuousServo(pca.channels[3], min_pulse=705, max_pulse=2105)

velocity_front_right = np.load('velocity_front_right.npy')
velocity_front_left = np.load('velocity_front_left.npy')
velocity_back_right = np.load('velocity_back_right.npy')
velocity_back_left = np.load('velocity_back_left.npy')
times = np.load('times.npy')

# reverse left motors
velocity_front_left = -velocity_front_left
velocity_back_left = -velocity_back_left

print(velocity_front_right.size)
print(velocity_front_left.size)
print(velocity_back_right.size)
print(velocity_back_left.size)
print(times.size)

for t in times:
    print(t)

for a, b, c, d in np.nditer([velocity_front_right, velocity_front_left, velocity_back_right, velocity_back_left]):
    servo0.throttle = a/4
    servo1.throttle = b/4
    servo2.throttle = c/4
    servo3.throttle = d/4
    print(a, b, c, d)

servo0.throttle = 0
servo1.throttle = 0
servo2.throttle = 0
servo3.throttle = 0
pca.deinit()

Honestly it doesn’t look terrible, but these MG996R continuous rotation servos are officially garbage.

While we wait for new servos to arrive, I'm testing on SG90s. I've renormalized around 90 degrees:

def normalize_0_180(img):
     normalized_0_180 = (180*(img - np.min(img))/np.ptp(img)).astype(int)   
     return normalized_0_180

That still had a bit too much variation, and even with the 180 degree servos we've got now, it looked a bit off, so I halved the variance in the test.

velocity_front_right = ((velocity_front_right - 90)/2)+90
velocity_front_left = ((velocity_front_left - 90)/2)+90
velocity_back_right = ((velocity_back_right - 90)/2)+90
velocity_back_left = ((velocity_back_left - 90)/2)+90

# reverse left motors
velocity_front_left = 180-velocity_front_left
velocity_back_left = 180-velocity_back_left

lol. ok. Sim2Real.

So, it’s not terrible, but we’re not quite there either. Also i think it’s walking backwards.

I am not sure the math is correct.

I changed the smoothing code to the function below, which smooths each value based on the preceding smoothed value (exponential smoothing).

def anchor(signal, weight):
     buffer = []
     last = signal[0]
     for i in signal:
         smoothed_val = last * weight + (1 - weight) * i
         buffer.append(smoothed_val)
         last = smoothed_val
     return buffer
Derp.

OK i realised I was wrong all along. Two things.

First, I just didn't see that the angle values were on that original graph; they were so small. Of course we're supposed to use the angles, rather than the velocities, for 180 degree servos.

The second problem was that I was normalizing from the min to the max of the graph. Of course it should be -PI/2 to PI/2, since the simulator works in radians. Well anyway, hindsight is 20/20. Now we have a fairly accurate sim2real. I apply the anchor code above three times, to get a really smooth line.
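In other words, the conversion from the simulator's radians to servo degrees is just:

degrees = angle_radians * (180.0 / 3.14159) + 90    # -pi/2 -> 0, 0 -> 90, +pi/2 -> 180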

Here’s the final code.

import pandas as pd
import numpy as np
import streamlit as st
import time
from plotly import graph_objects as go
import os
import inspect
from google.protobuf.json_format import MessageToJson
import argparse
import plotly.express as px

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)

from gym_robotable.envs import logging

def anchor(signal, weight):
    buffer = []
    last = signal[0]
    for i in signal:
        smoothed_val = last * weight + (1 - weight) * i
        buffer.append(smoothed_val)
        last = smoothed_val
    return buffer

# assumes radians
def normalize_0_180(img):
    normalized_0_180 = np.array(img)*57.2958 + 90
    return normalized_0_180

if __name__ == "__main__":
    st.title('Analyticz')

    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--log_file', help='path to protobuf file', default='/media/chrx/0FEC49A4317DA4DA/walkinglogs/robotable_log_2021-01-17-231240')
    args = parser.parse_args()
    logging = logging.RobotableLogging()
    episode_proto = logging.restore_episode(args.log_file)

    times = []
    angles = [[] for i in range(4)]

    for step in range(len(episode_proto.state_action)):
        step_log = episode_proto.state_action[step]
        times.append(str(step_log.time.seconds) + '.' + str(step_log.time.nanos))
        for i in range(4):
            print(step)
            print(step_log.motor_states[i].angle)
            angles[i].append(step_log.motor_states[i].angle)

    # truncate because of a bunch of trailing zeros
    angles = [a[0:3000] for a in angles]

    # convert radians to 0-180 degrees, then smooth three times
    averages = []
    for i in range(4):
        avg = normalize_0_180(angles[i])
        for _ in range(3):
            avg = anchor(avg, 0.8)
        averages.append(avg)

    np.save('angle_front_right_180', averages[0])
    np.save('angle_front_left_180', averages[1])
    np.save('angle_back_right_180', averages[2])
    np.save('angle_back_left_180', averages[3])

    # Create traces
    for i in range(4):
        fig = go.Figure()
        fig.add_trace(go.Scatter(x=times, y=angles[i],
                    mode='lines',
                    name='Angles ' + str(i)))
        fig.add_trace(go.Scatter(x=times, y=averages[i],
                    mode='lines',
                    name='Norm Moving Average ' + str(i)))
        st.plotly_chart(fig)

OK.

So there’s a milestone that took way too long. We’ve got Sim 2 Real working, ostensibly.

After some fortuitous googling, I found the Spot Micro, or Spot Mini Mini, project. The Spot Micro guys still have a big focus on inverse kinematics, which I'm trying to avoid for as long as I can.

They’ve done a very similar locomotion project using pyBullet, and I was able to find a useful paper, in the inspiration section, alerting me to kMPs.

Kinematic Motion Primitives. It’s a similar idea to what I did above.

Instead, what these guys did was to take a single wave of their leg data, and repeat that, and compare that to a standardized phase. (More or less). Makes sense. Looks a bit complicated to work out the phase of the wave in my case.

I’ll make a new topic, and try to extract kMPs from the data, for the next round of locomotion sim2real. I will probably also train the robot for longer, to try evolve a gait that isn’t so silly.

Categories
meta MFRU

Artwork Feedback

The Slovenian philosopher Maja Pan has written an interesting article on our work, at https://www.animot-vegan.com/tehnologizacija-skrbi/

I’ll need to reread it a few times, and think about it, to address specifics.

But for first remarks, just to address the most general concerns:

We're glad that the artwork is provoking some critical discussion.

After MFRU, the five chickens went on to live a free range life, with a spacious fox-proof coop that we built, in the Maribor hills, with a family that looks after them now. We’ve received (obviously subjective) reports that the chickens are doing well, and are happy.

Philosophically, I think we don’t see improved welfare as counter-productive to abolitionist ideology.

But there are some 40 billion chickens in intensive (factory) farms. So there is, in reality, much room for improvement.

That is not to say we entirely disagree with abolitionist goals: it would probably be ethical to bring about the extinction of the modern broiler chicken.

But at least personally, I see abolitionism as unrealistic, when juxtaposed with the magnitude of the intensive poultry farming industry in 2022, and misguided, if it suggests that improving welfare is a bad idea because it acts as a balm to soothe the unthinking consumer's conscience, allowing them to continue buying into fundamentally unethical practices. It's an interesting idea, but people still need to make their choices, with individual responsibility. The hope for a silver-bullet legal solution, like the EU outright banning animal farming on ethical grounds, is wishful thinking, because of the economic impact. That is also not to say that it's not possible: one suggestion at the discussion was to subsidize the transition of poultry farming to hemp farming, which is capable of producing similar levels of protein, and profit, assuming those are the underlying basic goals of the industry.

Our ongoing collaboration with the poultry scientist, Dr. Brus, is to develop statistical, or machine-learned comparative welfare indicators, based on audio recordings. But it is not done to justify existing industrial practice. It is done to encourage its improvement, by enabling differentiable rankings of environments. Widespread application of objective welfare rankings would likely have the effect of artificial selection for more ‘humane’ treatment of animals, as farms seek to improve their scores. It’s not the binary ethics of abolitionist veganism, but it is at least an idea, to open the ethical gradient between zero and one, for the estimated 99% of the planet who are not vegan. It’s hard to argue with veganism, because it’s correct. But reality is something else.

Anyway, those are my initial thoughts.

Categories
control envs form Gripper Hardware hardware_ Locomotion power robots

Exhibition Robots

For the MFRU exhibition, we presented a variety of robots. The following is some documentation, on the specifications, and setup instructions. We are leaving the robots with konS.

All Robots

Li-Po batteries need to be stored at 3.8V per cell. For exhibition, they can be charged to 4.15V per cell, and run with a battery level monitor until they display 3.7V, at which point they should be swapped out. Future iterations of robotic projects will make use of splitter cables to allow hot swapping batteries, for zero downtime.

We leave our ISDT D2 Mark 2 charger, for maintaining and charging Li-Po batteries.

At setup time, in a new location, Raspberry Pi SD cards need to be updated to connect to the new Wi-Fi network. The simplest method is to put the SD card in a laptop and transfer a wpa_supplicant.conf file, with the credentials and locale below changed for the new network, plus a blank file called ssh, to allow remote login.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=si

network={
    ssid="Galerija"
    psk="galerija"
    key_mgmt=WPA-PSK
}

Then, after starting up with the updated SD card, robot IP addresses need to be determined, typically using `nmap -sP 192.168.xxx.xxx` (or a Windows client like Zenmap).

Usernames and passwords used are:

LiDARbot – pi/raspberry

Vacuumbot – pi/raspberry and chicken/chicken

Pinkbot – pi/raspberry

Gripperbot – pi/raspberry

Birdbot – daniel/daniel

Nipplebot – just arduino

Lightswitchbot – just arduino and analog timer

For now, it is advised to shut down robots by connecting to their IP address, and typing sudo shutdown -H now and waiting for the lights to turn off, before unplugging. It’s not 100% necessary, but it reduces the chances that the apt cache becomes corrupted, and you need to reflash the SD card and start from scratch.

Starting from scratch involves reflashing the SD card using Raspberry Pi Imager, cloning the git repository, running pi_boot.sh and pip3 install -r requirements.txt, configuring config.py, and running create_service.sh to automate the startup.

LiDARbot

Raspberry Pi Zero W x 1
PCA9685 PWM controller x 1
RPLidar A1M8 x 1
FT5835M servo x 4

Powered by: Standard 5V Power bank [10Ah and 20Ah]

Startup Instructions:
– Plug in USB cables.
– Wait for service startup and go to URL.
– If Lidar chart is displaying, click ‘Turn on Brain’

LiDARbot has a lidar: a laser head spinning around, measuring distances from the light that bounces back, allowing it to maintain a 2D (top-down) map of its surroundings.
It is able to avoid bumping into things.

Vacuumbot

Raspberry Pi 3b x 1
LM2596 stepdown converter x 1
RDS60 servo x 4

Powered by: 7.4V 4Ah Li-Po battery

NVIDIA Jetson NX x 1
Realsense D455 depth camera x 1

Powered by: 11.1V 4Ah Li-Po battery

Instructions:
– Plug the Jetson assembly connector into the 11.1V battery, and the RPi assembly connector into the 7.4V battery
– Connect to Jetson:

cd ~/jetson-server
python3 inference_server.py

– Go to the Jetson URL to view depth and object detection.
– Wait for Rpi service to start up.
– Connect to RPi URL, and click ‘Turn on Brain’

It can scratch around, like the chickens.
Vacuumbot has a depth camera, so it can update a 3d map of its surroundings, and it runs an object detection neural network, so it can interact with its environment. It uses 2 servos per leg, 1 for swivelling its hips in and out, and 1 for the leg rotation.

Pinkbot

Raspberry Pi Zero W x 1
PCA9685 PWM controller x 1
LM2596 stepdown converter x 1
RDS60 servo x 8
Ultrasonic sensors x 3

Powered by: 7.4V 6.8Ah Li-Po battery

Instructions:
– Plug in to Li-Po battery
– Wait for Rpi service to start up.
– Connect to RPi URL, and click ‘Turn On Brain’

Pinkbot has 3 ultrasonic distance sensors, so it has a basic “left” “forward” “right” sense of its surroundings. It uses 8 x 60kg-cm servos, (2 per leg), and 2 x 35kg-cm servos for the head. The servos are powerful, so it can walk, and even jump around.

Gripperbot

Gripperbot has 4 x 60kg-cm servos and 1 x 35kg-cm continuous rotation servo with a worm gear, to open and close the gripper. It uses two spring switches which let it know when the hand is closed. A metal version would be cool.

Raspberry Pi Zero W x 1
150W stepdown converter (to 7.4V) x 1
LM2596 stepdown converter (to 5V) x 1
RDS60 servo x 4
MG996R servo x 1

Powered by: 12V 60W power supply

Instructions:
– Plug in to wall
– Wait for Rpi service to start up.
– Connect to RPi URL, and click ‘Fidget to the Waves’

Birdbot

Raspberry Pi Zero W x 1
FT SM-85CL-C001 servo x 4
FE-URT-1 serial controller x 1
12V input step-down converter (to 5V) x 1
Ultrasonic sensor x 1
RPi camera v2.1 x 1

Powered by: 12V 60W power supply

Instructions:
– Plug in to wall
– Wait for Rpi service to start up.
– Connect to RPi URL, and click ‘Fidget to the Waves’

Birdbot is based on the Max Planck Institute BirdBot, and uses some nice 12V servos. It has a camera and distance sensor, and can take pictures when chickens pass by. We didn’t implement the force sensor central pattern generator of the original paper, however. Each leg uses 5 strings held in tension, making it possible, with one servo moving the leg, and the other servo moving the string, to lift and place the leg with a more sophisticated, natural, birdlike motion.

Lightswitchbot

Turns on the light, in the morning
Categories
CNNs GANs highly_speculative

DALL-E / 2

and some stable diffusion tests

Categories
dev envs hardware_ Locomotion robots

Feetech SCServo

I got the Feetech Smart Bus servos running on the RPi. Using them for the birdbot.

Some gotchas:

  • Need to wire TX to TX, RX to RX.
  • Despite claiming a 1000000 baud rate, 115200 was required, or it says "There is no status packet!"
  • For a while, only one servo was working. Then I found their FAQ #5, installed their debugging software, plugged each servo in individually, and changed their IDs to 1/2/3/4. It was only running the first one because all of their IDs were still 1.
  • For Python, you need to pip3 install pyserial, and then import serial (a minimal sketch follows below).
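Just as a sanity check that the bus opens at the right baud rate (the port name is an assumption, and this doesn't speak the Feetech protocol itself):

import serial

ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=0.5)
print(ser.is_open)
ser.close()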

Categories
control dev envs hardware_ robots UI Vision

Slamtec RPLidar

I got the RPLidar A1M8-R6 with firmware 1.29, and at first, it was just plastic spinning around, and none of the libraries worked.

But I got it working on Windows, as a sanity check, so it wasn’t broken. So I started again on getting it working on the Raspberry Pi Zero W.

Tried the Adafruit python libs, but v1.29 had some insurmountable issue, and I couldn’t downgrade to v1.27.

So needed to compile the Slamtec SDK.

A helpful post pointed out how to fix the compile error and I was able to compile.

I soldered on some extra wires to the motor + and -, to power the motor separately.

Wasn’t getting any luck, but it turned out to be the MicroUSB cable (The OTG cable was ok). After swapping it out, I was able to run the simple_grabber app and confirm that data was coming out.

pi@raspberrypi:~/rplidar_sdk/output/Linux/Release $ ./simple_grabber --channel --serial /dev/ttyUSB0 115200
theta: 59.23 Dist: 00160.00
theta: 59.50 Dist: 00161.00
theta: 59.77 Dist: 00162.00
theta: 59.98 Dist: 00164.00
theta: 60.29 Dist: 00165.00
theta: 61.11 Dist: 00168.00

I debugged the Adafruit v1.29 issue too. So now I’m able to get the data in python, which will probably be nicer to work with, as I haven’t done proper C++ in like 20 years. But this Slamtec code would be the cleanest example to work with.

So I added in some C socket code and recompiled, so now the demo app takes a TCP connection and starts dumping data.

./ultra_simple --channel --serial /dev/ttyUSB0 115200

It was actually A LOT faster than the Python libraries. But I started getting ECONNREFUSED errors. I thought it might be because the Pi Zero W only has a single CPU, and the Python WSGI worker engine was eventlet (which only handles 1 worker, for flask-socketio), so running a socket server, a client, and socket-io on a single CPU was creating some sort of resource contention. But I couldn't solve it.

I found a Python-wrapped C++ project, but it was compiled for 64 bit, and SWIG, the software I'd need to recompile it for 32 bit, seemed a bit complicated.

So, back to Python.

Actually, back to javascript, to get some visuals in a browser. The Adafruit example is for pygame, but we’re over a network, so that won’t work. Rendering Matplotlib graphs is going to be too slow. Need to stream data, and render it on the front end.

Detour #1: NPM

Ok… so, I need to install Node.js to install this one, which for the Raspberry Pi Zero W means ARMv6.

This is the most recent ARMv6 nodejs tarball:

wget https://nodejs.org/dist/latest-v11.x/node-v11.15.0-linux-armv6l.tar.gz

tar xzvf node-v11.15.0-linux-armv6l.tar.gz
cd node-v11.15.0-linux-armv6l
sudo cp -R * /usr/local/
sudo ldconfig
npm install --global yarn
sudo npm install --global yarn

npm install rplidar

npm ERR! serialport@4.0.1 install: `node-pre-gyp install --fallback-to-build`
 
Ok...  never mind javascript for now.

Detour #2: Dash/Plotly

Let’s try this python code. https://github.com/Hyun-je/pyrplidar

Ok well it looks like it works maybe, but where is s/he getting that nice plot from? Not in the code. I want the plot.

So, theta and distance are just polar coordinates. So I need to plot polar coordinates.

PolarToCartesian.

Convert a polar coordinate (r,θ) to cartesian (x,y): x = r cos(θ), y = r sin(θ)
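In Python terms (converting the lidar's degrees to radians while we're at it), that's roughly:

import math

def polar_to_cartesian(theta_degrees, dist):
    theta = math.radians(theta_degrees)
    x = dist * math.cos(theta)
    y = dist * math.sin(theta)
    return x, y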

Ok that is easy, right? So here’s a javascript library with a polar coordinate plotter

So, the plan is: set up a Flask route, read RPLidar data, publish it to a front end, and plot it in javascript.

Ok after some googling, Dash / Plotly looks like a decent option.

Found this code. Cool project! And though this guy used a different Lidar, it’s pretty much what I’m trying to do, and he’s using plotly.

pip3 install pandas
pip3 install dash

k let's try...
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 48 from C header, got 40 from PyObject

ok
pip3 install --upgrade numpy     
(if your numpy version is < 1.20.0)

ok now  bad marshal data (unknown type code)
sheesh, what garbage.  
Posting issue to their github and going back to the plan.

Reply from Plotly devs: pip3 won’t work, will need to try conda install, for ARM6

Ok let’s see if we can install plotly again….

Going to try miniconda – they have a arm6 file here

Damn. 2014. Python 2. Nope. Ok Plotly is not an option for RPi Zero W. I could swap to another RPi, but I don’t think the 1A output of the power bank can handle it, plus camera, plus lidar motor, and laser. (I am using the 2.1A output for the servos).

Solution #1: D3.js

Ok, Just noting this link, as it looks useful for the lidar robot, later.

So, let’s install socket io and websockets

pip3 install flask_socketio
pip3 install simple-websocket
pip3 install flask-executor

(looking at this link) for flask and socket-io, and this link for d3 polar chart

The app isn't starting though, since adding socket-io. So, hmm. Ok, found this issue. Right, it needs 0.0.0.0:

socketio.run(app, debug=True, host='0.0.0.0')
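For the record, the rough shape of the server side I'm aiming for (the 'scan' event name and the placeholder data are just illustrative, not the final code):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def lidar_worker():
    # placeholder loop; the real version reads (theta, dist) pairs from the lidar
    while True:
        socketio.emit('scan', {'theta': 0.0, 'dist': 0.0})
        socketio.sleep(0.1)

@socketio.on('connect')
def on_connect():
    socketio.start_background_task(lidar_worker)

if __name__ == '__main__':
    socketio.run(app, debug=True, host='0.0.0.0')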

Back to it…

K. Let’s carry on with Flask/d3.js though.

I think if we’re doing threading, I need to use a WSGI server.

pip install waitress

ok that won’t work with flask-socketio. Needs gevent or eventlet.

"eventlet is the best performant option, with support for long-polling and WebSocket transports."

apparently needs redis for message queueing…

pip install eventlet
pip install redis

Ok, and we need gunicorn, because eventlet is just for workers...

pip3 install gunicorn

gunicorn --worker-class eventlet -w 1 module:app

k, that throws an error.
I need to downgrade eventlet, or do some complicated thing.
pip install eventlet==0.30.2

gunicorn --bind 0.0.0.0 --worker-class eventlet -w 1 kmp8servo:app
(my service is called kmp8servo.py)


ok so do i need redis?
sudo apt-get install redis
ok it's already running now, 
at /usr/bin/redis-server 127.0.0.1:6379
no, i don't really need redis.  Could use sqlite, too. But let's use it anyway.

Ok amazing, gunicorn works.  It's running on port 8000

Ok, after some work,  socket-io is also working.

Received #0: Connected
Received #1: I'm connected!
Received #2: Server generated event
Received #3: Server generated event
Received #4: Server generated event
Received #5: Server generated event
Received #6: Server generated event
Received #7: Server generated event

So, I’m going to go with d3.js instead of P5js, just cause it’s got a zillion more users, and there’s plenty of polar coordinate code to look at, too.

Got it drawing the polar background… but I gotta change the scale a bit. The code uses a linear scale from 0 to 1, so I need to get my distances down to something between 0 and 1. Also need radians, instead of the degrees that the lidar is putting out.

ok finally. what an ordeal.

But we still need to get the Python lidar code working, or switch back to the C socket code I got working.

Ok, well, so I added D3 update code with transitions, and the javascript looks great.

But the C Slamtec SDK, and the Python RP Lidar wrappers are a source of pain.

I had the C sockets working briefly, but it stopped working, seemingly while I added more Python code between each socket read. I got frustrated and gave up.

The Adafruit library, with the fixes I made, seems to work now, but it's in a very precarious state, where looking at it funny causes a bad descriptor field, or a checksum error.

But I managed to get the brain turning on, with the lidar. I’m using Redis to track the variables, using the memory.py code from this K9 repo. Thanks.

I will come back to trying to fix the remaining python library issues, but for now, the robot is running, so, on to the next.

Categories
Behaviour chicken_research The Chicken Experience

Notes for looking after my chicken

Below are the notes I left, for my housemates, to look after my chicken, Alpha, while I’m away in Europe, for MFRU.

Alpha feeding

What she’s supposed to eat, as a chicken:

Maize, sorghum, sunflower seeds, leftover vegetable scraps, and enough calcium for egg shell development. So, crumple some egg shells in the garden occasionally. 

But she’ll only eat maize and sorghum if she’s really hungry. She likes sunflower seeds.

Her faves in decreasing order are:

Superworms 

Live Larvae

Dead Larvae

Bread 

Cooked rice

(De-shelled) Sunflower seeds

There’s various grains and pulses like millet, rice, lentils, which she’ll eat, if it’s cooked. 

When she does the loud alarm noises, or jumps on the table outside Chris’s room, it’s usually the black and white cat. But sometimes it means she’s looking for a place to lay an egg. 95% of the time, she will go back and lay the egg in her box/bed. I think when she crouches down, when you approach, it’s a similar vibe. 

Her house has a few features, which need adjustment occasionally. There’s two waterproof umbrella cloths on top, for the rain, usually held in place by a branch, and there’s some polystyrene directly above the box. The box is raised, because she prefers to be higher off the ground, when sleeping. It’s not necessary but after about 6:30pm, when she’s in bed, you can put the ‘satanic apron’ over her box. But first have a quick check for mosquitoes in the box. If there’s lots of mosquitoes, make sure there’s not some sitting water with mosquito larvae in the backyard somewhere. The apron offers a bit more protection from the cold, and light. The chicken house could use some work, but usually it’s fine unless it’s been very windy or rainy.

I change her bedding once a month or so, or if it gets wet in there, after a rain, or if she poops her bed, I’ll take the poop out. Straw or that Alfalfa/Lucerne in the plastic bag works for bedding. 

Um… what else… when you steal the eggs, try not to let her see the egg, or she’ll make disapproving sounds.

I change the water every day or two. There doesn’t need to be as many water containers as there are, but just make sure there’s some water around, that the little rat doves haven’t shat in. 

Alpha won’t eat the maize, so you can throw a handful in a spot, and the little doves will eat it.

The worms and larvae are pretty good at hiding but there should be a few hundred of each left. Just throw some wet scraps in, occasionally. They’re eating melted instant coffee, and cardboard and grass at the moment. 

Alpha will eat as many worms as you give her, so try limit to 10. Internet says 2/day. But that’s for normal, not-spoiled chickens. 

She’s picky, until there’s no choices, and then if she’s hungry, she’ll eat whatever’s going. The internet usually knows what she can eat, if you’re going to feed her something new. Usually if it’s grainy or pulsey, or crumby, or anything meaty, or insecty, she’ll give it a try.

Leave some sunflower seeds on top of the bucket where the little rat doves can’t see.

She doesn’t understand pointing at things, or English. But she will usually understand food in hand, or food put in front of her. 

Ok that’s the gist. Thanks.

Categories
highly speculative highly_speculative The Chicken Experience

Consciousness and Potential

A controversial thought… but, when considering the 7 billion male chicks that are destroyed within a day of hatching, every year, what sort of difference exists between the mind of a baby chick and a baby human? The conscious experience (perhaps not yet encumbered by self-modeled reflection, and the psychological maturity of complex thought) still exists in some way. It is like something, to be a living being. Mammals and birds share a common ancestor, a few hundred million years back, and we are both born innately knowing, and preferring, some things over others.

If there were a debate, with one side seeing infanticide as worse than what we do to chickens, it would probably just come down to ‘the argument from potential’ – and yet… chickens are smarter than human toddlers. I guess that argument needs to change to ‘long term potential’?

Hmm. Interesting thought, anyway.

The topic of how babies and chicks are similar interests others too. (“Born Knowing”, Giorgio Vallortigara)

Categories
AI/ML CNNs dev Locomotion OpenCV robots UI Vision

Realsense Depth and TensorRT object detection

A seemingly straightforward idea for robot control involves using depth, and object detection, to form a rough model of the environment.

After failed attempts to create our own stereo camera using two monocular cameras, we eventually decided to buy a commercial product instead: the Intel D455 depth camera.

After a first round of running a COCO-trained MobileNet SSDv2 object detection network in TensorFlow 2 Lite, on the colour images obtained from the Realsense camera, on the Jetson Nano, the results were just barely acceptable (~2 FPS) for a localhost stream, and totally unacceptable (~0.25 FPS) when served as JPEG over HTTP, to a browser on the network.

Looking at the options, one solution was to redo the network using TensorRT, the NVIDIA-specific, quantized (16 bit on the Nano, 8 bit on the NX/AGX) neural network framework. The other was to investigate alternatives to simple JPEG compression over HTTP, such as RTSP and WebRTC.

The difficult part was setting up the environment. We used the NVIDIA detectnet code, adapted to take the realsense camera images as input, and to display the distance to the objects. An outdated example was found at CAVEDU robotics blog/github. Fixed up below.

#!/usr/bin/python3



import jetson_inference
import jetson_utils
import argparse
import sys
import os
import cv2
import re
import numpy as np
import io
import time
import json
import random
import pyrealsense2 as rs
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput, logUsage, cudaFromNumpy, cudaAllocMapped, cudaConvertColor

parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.",
formatter_class=argparse.RawTextHelpFormatter, epilog=jetson_utils.logUsage())
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2",
help="pre-trained model to load (see below for options)")
parser.add_argument("--threshold", type=float, default=0.5,
help="minimum detection threshold to use")
parser.add_argument("--width", type=int, default=640,
help="set width for image")
parser.add_argument("--height", type=int, default=480,
help="set height for image")
opt = parser.parse_known_args()[0]

# load the object detection network
net = detectNet(opt.network, sys.argv, opt.threshold)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, opt.width, opt.height, rs.format.z16, 30)
config.enable_stream(rs.stream.color, opt.width, opt.height, rs.format.bgr8, 30)
# Start streaming
pipeline.start(config)


press_key = 0
while (press_key==0):
	# Wait for a coherent pair of frames: depth and color
	frames = pipeline.wait_for_frames()
	depth_frame = frames.get_depth_frame()
	color_frame = frames.get_color_frame()
	if not depth_frame or not color_frame:
		continue
	# Convert images to numpy arrays
	depth_image = np.asanyarray(depth_frame.get_data())
	show_img = np.asanyarray(color_frame.get_data())
	
	# convert to CUDA (cv2 images are numpy arrays, in BGR format)
	bgr_img = cudaFromNumpy(show_img, isBGR=True)
	# convert from BGR -> RGB
	img = cudaAllocMapped(width=bgr_img.width,height=bgr_img.height,format='rgb8')
	cudaConvertColor(bgr_img, img)

	# detect objects in the image (with overlay)
	detections = net.Detect(img)

	for num in range(len(detections)) :
		score = round(detections[num].Confidence,2)
		box_top=int(detections[num].Top)
		box_left=int(detections[num].Left)
		box_bottom=int(detections[num].Bottom)
		box_right=int(detections[num].Right)
		box_center=detections[num].Center
		label_name = net.GetClassDesc(detections[num].ClassID)

		point_distance=0.0
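		# average 10 depth samples at the detection centre, to smooth out sensor noise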
		for i in range (10):
			point_distance = point_distance + depth_frame.get_distance(int(box_center[0]),int(box_center[1]))

		point_distance = np.round(point_distance / 10, 3)
		distance_text = str(point_distance) + 'm'
		cv2.rectangle(show_img,(box_left,box_top),(box_right,box_bottom),(255,0,0),2)
		cv2.line(show_img,
			(int(box_center[0])-10, int(box_center[1])),
			(int(box_center[0]+10), int(box_center[1])),
			(0, 255, 255), 3)
		cv2.line(show_img,
			(int(box_center[0]), int(box_center[1]-10)),
			(int(box_center[0]), int(box_center[1]+10)),
			(0, 255, 255), 3)
		cv2.putText(show_img,
			label_name + ' ' + distance_text,
			(box_left+5,box_top+20),cv2.FONT_HERSHEY_SIMPLEX,0.4,
			(0,255,255),1,cv2.LINE_AA)

	cv2.putText(show_img,
		"{:.0f} FPS".format(net.GetNetworkFPS()),
		(int(opt.width*0.8), int(opt.height*0.1)),
		cv2.FONT_HERSHEY_SIMPLEX,1,
		(0,255,255),2,cv2.LINE_AA)


	display = cv2.resize(show_img,(int(opt.width*1.5),int(opt.height*1.5)))
	cv2.imshow('Detecting...',display)
	keyValue=cv2.waitKey(1)
	if keyValue & 0xFF == ord('q'):
		press_key=1


cv2.destroyAllWindows()
pipeline.stop()

Assuming you have a good cmake version and CUDA is available (if nvcc doesn't work, you need to configure linker paths, check this link), and noting that if you have a cmake version around ~3.22-3.24 or so, you need an older one… the prerequisite sudo apt-get install libssl-dev is also required.

The hard part was actually setting up the Realsense python bindings.

Clone the repo…

git clone https://github.com/IntelRealSense/librealsense.git

The trick being, to request the python bindings, and cuda, during the cmake phase. Note that often, none of this works. Some tips include…

sudo apt-get install xorg-dev libglu1-mesa-dev

and changing PYTHON to Python

mkdir build
cd build
cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true -DPYTHON_EXECUTABLE=/usr/bin/python3 -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=true -DBUILD_GRAPHICAL_EXAMPLES=true -DBUILD_WITH_CUDA:bool=true

The above worked on Jetpack 4.6.1, while the below worked on Jetpack 5.0.2

cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true -DPython_EXECUTABLE=/usr/bin/python3.8 -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=true -DBUILD_GRAPHICAL_EXAMPLES=true -DBUILD_WITH_CUDA:bool=true -DPYTHON_INCLUDE_DIRS=/usr/include/python3.8 -DPython_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython3.8.so

(and sudo make install)
Update the python path

export PYTHONPATH=$PYTHONPATH:/usr/local/lib
(or a specific python if you have more than one)
if installed in /usr/lib, change accordingly

Check that the folder is in the correct location (it isn't, after following official instructions).

./usr/local/lib/python3.6/dist-packages/pyrealsense2/

Check that the shared object files (.so) are in the right place: 

chicken@chicken:/usr/local/lib$ ls
cmake       libjetson-inference.so  librealsense2-gl.so.2.50    librealsense2.so.2.50    pkgconfig
libfw.a     libjetson-utils.so      librealsense2-gl.so.2.50.0  librealsense2.so.2.50.0  python2.7
libglfw3.a  librealsense2-gl.so     librealsense2.so            librealsense-file.a      python3.6


If it can't find 'pipeline', it means you need to copy the missing __init__.py file.

sudo cp ./home/chicken/librealsense/wrappers/python/pyrealsense2/__init__.py ./usr/local/lib/python3.6/dist-packages/pyrealsense2/

Some extra things to do, 
sudo cp 99-realsense-libusb.rules  /etc/udev/rules.d/

Eventually, I am able to run the inference on the realsense camera, at an apparent 25 FPS, on the localhost, drawing to an OpenGL window.

I also developed a Dockerfile, for the purpose, which benefits from an updated Pytorch version, but various issues were encountered, making a bare-metal install far simpler, ultimately. Note that building jetson-inference, and Realsense SDK on the Nano require increasing your swap size, beyond the 2GB standard. Otherwise, the Jetson freezes once memory paging leads to swap death.

Anyway, since the objective is remote human viewing, (while providing depth information for the robot to use), the next step will require some more tests, to find a suitable option.

The main blocker is the power usage limitations on the Jetson Nano. I can’t seem to run Wifi and the camera at the same time. According to the tegrastats utility, the POM_5V_IN usage goes over the provided 4A, under basic usage. There are notes saying that 3A can be provided to 2 of the 5V GPIO pins, in order to get 6A total input. That might end up being necessary.

Initial investigation into serving RTSP resulted in inferior, and compressed results compared to a simple python server streaming image by image. The next investigation will be into WebRTC options, which are supposedly the current state of the art, for browser based video streaming. I tried aiortc, and momo, so far, both failed on the Nano.

I’ve decided to try on the Xavier NX, too, just to replicate the experiment, and see how things change. The Xavier has some higher wattage settings, and the wifi is internal, so worth a try. Also, upgraded to Jetpack 5.0.2, which was a gamble. Thought surely it would be better than upgrading to a 5.0.1 dev preview, but none of their official products support 5.0.2 yet, so there will likely be much pain involved. On the plus side, python 3.8 is standard, so some libraries are back on the menu.

On the Xavier, we’re getting 80 FPS, compared to 25 FPS on the Nano. Quite an upgrade. Also, able to run wifi and realsense at the same time.

Looks like a success. Getting multiple frames per second with about a second of lag over the network.

Categories
control dev envs robots Vision

RTSP

This was simple to set up, and is meant to be fast.

https://github.com/aler9/rtsp-simple-server#configuration

Unfortunately, the results of using the OpenCV/GStreamer example code to transmit over the network using H264 compression were even worse than the JPEG over HTTP attempt I'm trying to improve on. Much worse. That was surprising. It could be this wifi dongle though, which is very disappointing on the Jetson Nano. It appears as though the Jetson Nano tries to keep total wattage around 10W, but plugging in the Realsense camera and a wifi dongle pulls way more than that (all 4A @ 5V supplied by the barrel jack). It may mean that wireless robotics with the Realsense is not practical, on the Jetson.

Required gstreamer1.0-rtsp to be installed (apt install gstreamer1.0-rtsp).

Back to drawing board for getting the RealSense colour and depth transmitting to a different viewing machine, on the network (while still providing distance data for server side computation).

Categories
dev Hardware Vision

Object Detection on RPi Zero?

A quick post, because I looked into this, and decided it wasn’t a viable option. We’re using RPi Zero W for the simplest robot, and I was thinking that with object detection, and ultrasound sensors for depth, one could approximate the far more complicated Realsense on Jetson option.

QEngineering managed to get 11FPS on classification, on the RPi.

But the simplest object detection, MobileNet SSD on Tensorflow 2 Lite, (supposedly faster than Tiny-YOLO3), appears to be narrowly possible, but it is limited to running inference on a picture, in about 6 or 7 seconds.

There is a TensorFlow Lite Micro, and some people have ported it to the RPi Zero (e.g. tflite_micro_runtime), but I wasn't able to install the pip wheel, and gave up.

This guy may have got it working, though it’s hard to tell. I followed the method for installing tensorflow 2 lite, and managed to corrupt my SD card, with “Structure needs cleaning” errors.

So maybe I try again some day, but it doesn’t look like a good option. The RPi 3 or 4 is a better bet. Some pages mentioned NNPack, which allows the use of multiple cores, for NNs. But since the RPi Zero has a single core, it’s likely that if I got it working, it would only achieve inference on a single image frame in 7 seconds, which isn’t going to cut it.