Categories
dev evolution Locomotion

DEAP

Distributed Evolutionary Algorithms in Python

deap.readthedocs.org/

An oldie but a goodie. I was thinking of just implementing the locomotion using good old genetic programming.

You could probably generate a walking robot using genetic programming, repeating the evolved tree's actions a number of times. I bet it would be faster than the whole neural network policy / training / replay buffer spiel.

https://deap.readthedocs.io/en/master/tutorials/advanced/gp.html
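Here's a rough sketch of what that might look like with DEAP's GP module (none of this is from the robotable code; the primitive set and the distance_walked() fitness stub are made up):

import operator
import random

from deap import algorithms, base, creator, gp, tools

# One input: the timestep t. The evolved tree maps t to a motor command.
pset = gp.PrimitiveSet("MAIN", 1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.addEphemeralConstant("rand", lambda: random.uniform(-1, 1))
pset.renameArguments(ARG0="t")

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def distance_walked(controller):
    # Stand-in fitness: in reality this would run the PyBullet sim, feeding
    # controller(t) to the motors each timestep and measuring distance travelled.
    return sum(abs(controller(t)) for t in range(100))

def eval_gait(individual):
    controller = toolbox.compile(expr=individual)
    return (distance_walked(controller),)

toolbox.register("evaluate", eval_gait)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

pop = toolbox.population(n=50)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=10, verbose=False)

Swap the fitness stub for a call into the simulator and that's basically the whole loop.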

Categories
AI/ML deep dev Locomotion

Ray

https://docs.ray.io/en/master/rllib-algorithms.html – seems like this might be the most up-to-date baselines repo.

Ray is a fast and simple framework for building and running distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning
  • RLlib: Scalable Reinforcement Learning
  • RaySGD: Distributed Training Wrappers

ARS implementation: https://github.com/ray-project/ray/blob/master/rllib/agents/ars/ars.py
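A minimal sketch of kicking off ARS through RLlib (assuming the Ray 1.x-era API where tune.run takes the algorithm name; the stop condition and worker count are just guesses):

import ray
from ray import tune

ray.init()

# Train ARS on a toy env. For the Bullet envs you'd also need pybullet_envs
# importable on the workers so the env id is registered there.
tune.run(
    "ARS",
    stop={"timesteps_total": 100000},
    config={
        "env": "CartPole-v0",
        "num_workers": 2,
    },
)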

Categories
dev GANs sim2real

GAN SimToReal

https://github.com/ugurkanates/awesome-real-world-rl#simulation-to-real-with-gans

GraspGAN: https://arxiv.org/pdf/1709.07857.pdf

RL-CycleGAN https://arxiv.org/pdf/2006.09001.pdf

And this whole website is interesting: https://sim2realai.github.io/Quantifying-Transferability/

Categories
AI/ML dev simulation

Prioritized Experience Replay

While looking at the stable baselines docs, I came across PER: 
https://arxiv.org/pdf/1511.05952.pdf

I saw it in the parameters of DQN (https://openai.com/blog/openai-baselines-dqn/), the implementation OpenAI released with their baselines in 2017.

from stable_baselines import DQN

model = DQN('MlpPolicy', env, learning_rate=1e-3, prioritized_replay=True, verbose=1)

Categories
dev Locomotion The Sentient Table

Stable baselines

Need something to compare results to.

To install,

https://stable-baselines.readthedocs.io/en/master/guide/install.html

pip install git+https://github.com/hill-a/stable-baselines

Successfully installed absl-py-0.9.0 astor-0.8.1 gast-0.2.2 google-pasta-0.2.0 grpcio-1.30.0 h5py-2.10.0 keras-applications-1.0.8 keras-preprocessing-1.1.2 opt-einsum-3.2.1 stable-baselines-2.10.1a1 tensorboard-1.15.0 tensorflow-1.15.3 tensorflow-estimator-1.15.1 termcolor-1.1.0 wrapt-1.12.1

I'd originally done

pip install stable-baselines[mpi]

but the GitHub install pulls in the dependencies too.

Ok so PyBullet comes with an 'enjoy' program, which lives in

~/.local/lib/python3.6/site-packages/pybullet_envs/stable_baselines

You can run it using:

python3 -m pybullet_envs.stable_baselines.enjoy --algo td3 --env HalfCheetahBulletEnv-v0

Ok I set up ppo2 and tried to run python3 ppo2.py

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 106, in spec
    importlib.import_module(mod_name)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'gym-robotable'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "ppo2.py", line 29, in <module>
    env = gym.make(hp.env_name)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 142, in make
    return registry.make(id, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 86, in make
    spec = self.spec(path)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 109, in spec
    raise error.Error('A module ({}) was specified for the environment but was not found, make sure the package is installed with `pip install` before calling `gym.make()`'.format(mod_name))
gym.error.Error: A module (gym-robotable) was specified for the environment but was not found, make sure the package is installed with `pip install` before calling `gym.make()`

Registration… hmm ARS.py doesn’t complain. We had this problem before.

pip3 install -e .
python3 setup.py install

Nope… it's presumably here somewhere… https://stackoverflow.com/questions/14295680/unable-to-import-a-module-that-is-definitely-installed

root@chrx:/opt/gym-robotable# pip show gym-robotable
Name: gym-robotable
Version: 0.0.1
Summary: UNKNOWN
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /opt/gym-robotable
Requires: gym
Required-by: 

https://github.com/openai/gym/issues/1818 says you need to either import <name of your package> or do gym.make("<name of your package>:tic_tac_toe-v1"); see the creating environments guide for more information: https://github.com/openai/gym/blob/master/docs/creating-environments.md

Is it some fuckin gym-robotable vs gym_robotable thing?

Yes. Yes it is.


self.env_name = 'gym_robotable:RobotableEnv-v0'
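For reference, here's a minimal sketch of the register() call that makes that env id resolvable (assuming the env class is RobotableEnv under gym_robotable/envs; the real __init__.py may differ):

# gym_robotable/__init__.py
from gym.envs.registration import register

register(
    id='RobotableEnv-v0',
    entry_point='gym_robotable.envs:RobotableEnv',
)

The pip package name keeps the hyphen (gym-robotable), but the prefix in gym.make() has to be the importable module name, with the underscore.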

Ok so now it's almost working. But it falls down sometimes and then the algorithm stops. Ah, I needed to define 'is_fallen' correctly…

  def is_fallen(self):
    orientation = self.robotable.GetBaseOrientation()
    rot_mat = self._pybullet_client.getMatrixFromQuaternion(orientation)
    local_up = rot_mat[6:]
    pos = self.robotable.GetBasePosition()
    # return (np.dot(np.asarray([0, 0, 1]), np.asarray(local_up)) < 0.85 or pos[2] < -0.25)
    #print("POS", pos)
    #print("DOT", np.dot(np.asarray([0, 0, 1]), np.asarray(local_up)))

    return (pos[2] < -0.28)  #changing fallen definition for now, to height of table
    #return False

  def _termination(self):
    position = self.robotable.GetBasePosition()
    distance = math.sqrt(position[0]**2 + position[1]**2)
    return self.is_fallen() or distance > self._distance_limit

Ok so now:


if __name__ == "__main__":

  hp = Hp()
  env = gym.make(hp.env_name)

  model = PPO2(MlpPolicy, env, verbose=1)
  model.learn(total_timesteps=10000)

  for episode in range(100):
    obs = env.reset()
    for i in range(1000):
      action, _states = model.predict(obs)
      obs, rewards, dones, info = env.step(action)
      #env.render()
      if dones:
        print("Episode finished after {} timesteps".format(i + 1))
        break
      env.render(mode="human")

Now to continue training… https://github.com/hill-a/stable-baselines/issues/599
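Presumably something like this (a sketch, continuing the script above and assuming the standard stable-baselines save/load API; "ppo2_robotable" is a made-up filename):

# save after a training run
model.save("ppo2_robotable")

# later: reload and keep training without resetting the timestep counter
model = PPO2.load("ppo2_robotable", env=env)
model.learn(total_timesteps=10000, reset_num_timesteps=False)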

Categories
dev Locomotion The Sentient Table

ConvertFromLegModel

This is a confusing bit of code


  def ConvertFromLegModel(self, actions):
    """Convert the actions that use leg model to the real motor actions.
    Args:
      actions: The theta, phi of the leg model.
    Returns:
      The eight desired motor angles that can be used in ApplyActions().
    """
  

COPY THE ACTIONS.

    motor_angle = copy.deepcopy(actions)

DEFINE SOME THINGS

    scale_for_singularity = 1
    offset_for_singularity = 1.5
    half_num_motors = int(self.num_motors / 2)
    quarter_pi = math.pi / 4

FOR EVERY MOTOR

    for i in range(self.num_motors):

THE ACTION INDEX IS THE FLOOR OF HALF. 00112233
      action_idx = int(i // 2)

WELL, SO, THE FORWARD BACKWARD COMPONENT is 
negative thingy times 45 degrees times (the action of the index plus half the motors.... plus the offset thingy)

      forward_backward_component = (
          -scale_for_singularity * quarter_pi *
          (actions[action_idx + half_num_motors] + offset_for_singularity))

AND SO THE EXTENSION COMPONENT IS either + or - 45 degrees times the action.

      extension_component = (-1)**i * quarter_pi * actions[action_idx]

IF 4,5,6,7 MAKE THAT THING NEGATIVE.

      if i >= half_num_motors:
        extension_component = -extension_component

THE ANGLE IS... PI + thingy 1 + thingy 2.

      motor_angle[i] = (math.pi + forward_backward_component + extension_component)



    return motor_angle
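Working a couple of entries by hand (assuming the usual eight-motor setup, so half_num_motors = 4, and writing the 8-element action as a; these lines are mine, not from the code):

# i = 0: action_idx = 0, no sign flips
motor_angle[0] = math.pi - quarter_pi * (a[4] + 1.5) + quarter_pi * a[0]

# i = 5: action_idx = 2, (-1)**5 flips the extension, then i >= half_num_motors flips it back
motor_angle[5] = math.pi - quarter_pi * (a[6] + 1.5) + quarter_pi * a[2]

So the second half of the action vector sets the forward/backward swing, the first half sets the leg extension, and everything is centred around pi.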

Ok my error is,

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
    action = self._transform_action_to_motor_command(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
    action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'

Ok anyway, I debugged for an hour and now it's doing something. It's saving numpy files now.

policy_RobotableEnv-v0_20200516-192435.npy contains:

\x93NUMPY… {'descr': '<f8', 'fortran_order': False, 'shape': (4, 16), }

Cool.

But yeah, I had to comment out a lot of stuff. It seems like the actions it's generating are mostly 0.

Since I simplified to a table, turns out I don’t need any of that ConvertFromLegModel code.


Ok anyway, I started over with minitaur. Lol, why are there two tables? Changing the motorDirections gave me this. Good progress.

Categories
dev envs

Google Football Env

https://github.com/google-research/football
Google made an OpenAI Gym environment for playing football.

It looks better than FIFA.

Categories
AI/ML CNNs deep dev institutes

wandb

https://app.wandb.ai/gabesmed/examples-tf-estimator-mnist/runs/98nmh0vy/tensorboard?workspace=user-

Hope that works. It's that guy on YouTube who says 'dear fellow scholars' and 'what a time to be alive'.

The advertising was: Lambda GPU Cloud, $20 for ImageNet training, no setup required. Good to know.

Looks like a nice UI for tracking experiments: https://www.wandb.com/articles
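The basic logging API is only a couple of lines (a sketch; the project name and the training stub are made up):

import random
import wandb

def train_one_rollout():
    # Stand-in for the real training step; returns a fake reward.
    return random.random()

wandb.init(project="robotable-locomotion")  # hypothetical project name

for step in range(100):
    wandb.log({"reward": train_one_rollout(), "step": step})

Everything logged shows up as live charts in that web UI.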

Categories
dev The Sentient Table

Gym Env

Starting from pybullet/gym/pybullet_envs/bullet/minitaur_gym_env.py

and going by these instructions https://github.com/openai/gym/blob/master/docs/creating-environments.md

First had to remember the protocol buffer stuff https://developers.google.com/protocol-buffers/docs/reference/python-generated

protoc --proto_path=src --python_out=build/gen src/foo.proto src/bar/baz.proto

https://stackoverflow.com/questions/45068568/how-to-create-a-new-gym-environment-in-openai
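Following that guide, the package layout ends up looking roughly like this (a sketch from memory; the actual gym-robotable files may differ a bit):

gym-robotable/
  setup.py
  gym_robotable/
    __init__.py              # the register('RobotableEnv-v0', ...) call
    envs/
      __init__.py            # imports RobotableEnv
      robotable_gym_env.py   # the env, adapted from minitaur_gym_env.py
      robotable.py           # the robot, adapted from minitaur.py

# setup.py
from setuptools import setup, find_packages

setup(
    name='gym-robotable',
    version='0.0.1',
    packages=find_packages(),
    install_requires=['gym'],
)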

/opt/gym-robotable# pip install -e .
Obtaining file:///opt/gym-robotable
Requirement already satisfied: gym>=0.2.3 in /usr/local/lib/python3.6/dist-packages (from gym-robotable==0.0.1) (0.17.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.3.0)
Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.18.1)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.11.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.2.3->gym-robotable==0.0.1) (0.18.2)
Installing collected packages: gym-robotable
Running setup.py develop for gym-robotable
Successfully installed gym-robotable

When I try to run the gym env, I get:

raise error.UnregisteredEnv('No registered env with id: {}'.format(id))

Ok, it was some typo in the __init__.py.

Fixed that up in the imports and __init__s, and now got this issue: https://answers.ros.org/question/326226/importerror-dynamic-module-does-not-define-module-export-function-pyinit__tf2/

Ok, I think that was some file that was importing TensorFlow 2. I don't want to use TensorFlow; it's like 400MB.

URDF file '/root/.local/lib/python3.6/site-packages/pybullet_data/robot.urdf' not found

So ok, let's put it there.

/opt/gym-robotable/gym_robotable/envs# cp robot.urdf /root/.local/lib/python3.6/site-packages/pybullet_data/

AttributeError: 'Robotable' object has no attribute 'Reset'

Ok so I've got robot.urdf looking like a table, and loading. But there are various differences between the minitaur gym env and my own minitaur, the robotable. Gotta compare, etc. Ok, it wants reset() not Reset(…).

just a warning:

b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
torso

I'm happy with the defaults of 1 for the mass and inertia settings, for now.

Ok, so apparently I'm just working my way through the bugs one by one. That's probably as good a method as any:

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
    observation.extend(self.robotable.GetMotorAngles().tolist())
AttributeError: 'Robotable' object has no attribute 'GetMotorAngles'

K did a :%s/Get/get/g in vi. NEXT.

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 301, in reset
    return self._get_observation()
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
    observation.extend(self.robotable.getMotorAngles().tolist())
  File "/opt/gym-robotable/gym_robotable/envs/robotable.py", line 88, in getMotorAngles
    jointState = p.getJointState(self.quadruped, self.motorIdList[i])
IndexError: list index out of range

Ok, self.nMotors = 4.

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 527, in _get_observation_upper_bound
    num_motors = self.robotable.num_motors
AttributeError: 'Robotable' object has no attribute 'num_motors'

K, let's change nMotors to num_motors. NEXT.

  File "ars.py", line 77, in ExploreWorker
    state, reward, done, _ = env.step(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
    action = self._transform_action_to_motor_command(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
    action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'

Categories
dev robots Vision

ROS Camera Topic

What is a ROS topic? http://wiki.ros.org/Topics
ROS can publish the webcam stream to a “topic”, and any part of the robot can subscribe to it, by name, if it is interested in that data. ROS is almost like a program where everything is a global variable.

https://answers.ros.org/question/218228/ros-example-program-doesnt-work-with-the-laptop-webcam/

I made this file for the laptop webcam, but then didn’t end up using it.

<launch>
  <group ns="camera">
    <node pkg="libuvc_camera" type="camera_node" name="mycam">
      <!-- Parameters used to find the camera -->
      <param name="vendor" value="0x2232"/>
      <param name="product" value="0x1082"/>
      <param name="serial" value=""/>
      <!-- If the above parameters aren't unique, choose the first match: -->
      <param name="index" value="0"/>

      <!-- Image size and type -->
      <param name="width" value="640"/>
      <param name="height" value="480"/>
      <!-- choose whichever uncompressed format the camera supports: -->
      <param name="video_mode" value="uncompressed"/> <!-- or yuyv/nv12/mjpeg -->
      <param name="frame_rate" value="15"/>

      <param name="timestamp_method" value="start"/> <!-- start of frame -->
      <param name="camera_info_url" value="file:///tmp/cam.yaml"/>

      <param name="auto_exposure" value="3"/> <!-- use aperture_priority auto exposure -->
      <param name="auto_white_balance" value="false"/>
    </node>
  </group>
</launch>

roscore

apt install ros-melodic-uvc-camera

rospack listnames

rosrun uvc_camera uvc_camera_node _device:=/dev/video0

rostopic list

(should show /image_raw now…)

rosrun dso_ros dso_live calib=/opt/catkin_ws/src/dso_ros/camera.txt image:=/image_raw/
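For completeness, subscribing to that topic from a Python node only takes a few lines (a sketch; the node name is made up, and it assumes cv_bridge is installed):

#!/usr/bin/env python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    # Convert the ROS Image message into an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo("got a %dx%d frame", msg.width, msg.height)

rospy.init_node("image_listener")  # hypothetical node name
rospy.Subscriber("/image_raw", Image, callback)
rospy.spin()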