Categories
arxiv institutes

paperswithcode

Somehow didn’t find this until now, but it divides papers into categories within machine learning topics/tasks.

https://paperswithcode.com/ https://paperswithcode.com/sota

ooh https://github.com/CorentinJ/Real-Time-Voice-Cloning

https://medium.com/paperswithcode/a-home-for-results-in-ml-e25681c598dc

Categories
AI/ML CNNs deep dev institutes

wandb

https://app.wandb.ai/gabesmed/examples-tf-estimator-mnist/runs/98nmh0vy/tensorboard?workspace=user-

Hope that works. It’s that guy on YouTube (Two Minute Papers) who says ‘dear scholars’ and ‘what a time to be alive’.

The advertising was: Lambda GPU Cloud, $20 for ImageNet training, no setup required. Good to know.

Looks like a nice UI for stuff: https://www.wandb.com/articles
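For reference, a minimal sketch of what the logging loop looks like with wandb (the project name and metrics below are made up, not from the linked run):

import wandb

# Start a run; config values get recorded alongside the logged metrics.
wandb.init(project="my-mnist-experiments", config={"lr": 1e-3, "epochs": 5})

for epoch in range(wandb.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    wandb.log({"epoch": epoch, "train_loss": train_loss})

wandb.finish()

Every wandb.log call becomes a point on a chart in the web UI, which is basically what the dashboard linked above is showing.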

Categories
control robots sim2real simulation

Sim2Real links

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping: https://arxiv.org/pdf/1709.07857.pdf

Optimizing Simulations with Noise-Tolerant Structured Exploration: https://arxiv.org/pdf/1805.07831.pdf

Categories
AI/ML Math

Autograd

The ‘magic’ underlying PyTorch https://towardsdatascience.com/pytorch-autograd-understanding-the-heart-of-pytorchs-magic-2686cd94ec95

“That is true. As I wrote earlier, PyTorch is a Jacobian-vector product engine. In the process it never explicitly constructs the whole Jacobian. It’s usually simpler and more efficient to compute the JVP directly.”

Source: https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf

jacobian-vector products?
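As far as I can tell, the ‘vector’ in the product is the grad_outputs argument: backward-mode autograd computes vᵀJ for whatever v you hand it, without ever materialising J. A small sketch (the function and shapes are made up):

import torch

x = torch.randn(3, requires_grad=True)
# y = f(x); its Jacobian J is 3x3, but autograd never builds it explicitly
y = torch.stack([x[0] * x[1], x[1] * x[2], x[0] + x[2]])

v = torch.tensor([1.0, 0.0, 0.0])                  # the vector in the product
vjp, = torch.autograd.grad(y, x, grad_outputs=v)   # returns v^T J, shape (3,)
print(vjp)

Calling y.backward(v) does the same thing, accumulating vᵀJ into x.grad.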

Categories
ears

OpenAI Jukebox

This is trippy shit. https://jukebox.openai.com/

AI music https://openai.com/blog/jukebox/

Categories
dev The Sentient Table

Gym Env

Starting from pybullet/gym/pybullet_envs/bullet/minitaur_gym_env.py

and going by these instructions https://github.com/openai/gym/blob/master/docs/creating-environments.md

First had to remember the protocol buffer stuff https://developers.google.com/protocol-buffers/docs/reference/python-generated

protoc --proto_path=src --python_out=build/gen src/foo.proto src/bar/baz.proto

https://stackoverflow.com/questions/45068568/how-to-create-a-new-gym-environment-in-openai
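The registration step from those instructions boils down to something like this (the id and entry_point strings are what I’d expect for this repo, so treat them as a guess):

# gym_robotable/__init__.py
from gym.envs.registration import register

register(
    id='RobotableEnv-v0',
    entry_point='gym_robotable.envs:RobotableEnv',
)

# gym_robotable/envs/__init__.py
from gym_robotable.envs.robotable_gym_env import RobotableEnv

If the id passed to gym.make() doesn’t match the one registered here, you get the UnregisteredEnv error that shows up further down.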

/opt/gym-robotable# pip install -e .
Obtaining file:///opt/gym-robotable
Requirement already satisfied: gym>=0.2.3 in /usr/local/lib/python3.6/dist-packages (from gym-robotable==0.0.1) (0.17.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.3.0)
Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.18.1)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.11.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.2.3->gym-robotable==0.0.1) (0.18.2)
Installing collected packages: gym-robotable
Running setup.py develop for gym-robotable
Successfully installed gym-robotable

When I try to run the gym env, I get:

raise error.UnregisteredEnv('No registered env with id: {}'.format(id))

OK, it was some typo in the __init__.py.

Fixed that up in the imports and __init__s, and now got this issue: https://answers.ros.org/question/326226/importerror-dynamic-module-does-not-define-module-export-function-pyinit__tf2/

OK, I think that was some file that was importing TensorFlow 2. I don’t want to use TensorFlow; it’s like 400MB.

URDF file '/root/.local/lib/python3.6/site-packages/pybullet_data/robot.urdf' not found

So OK, let’s put it there.

/opt/gym-robotable/gym_robotable/envs# cp robot.urdf /root/.local/lib/python3.6/site-packages/pybullet_data/
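An alternative to copying the URDF into pybullet_data is to add the env directory to pybullet’s search path; a sketch, assuming the file stays where this repo keeps it:

import pybullet as p

p.connect(p.DIRECT)
p.setAdditionalSearchPath("/opt/gym-robotable/gym_robotable/envs")
robot_id = p.loadURDF("robot.urdf")  # resolves without touching pybullet_data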

AttributeError: 'Robotable' object has no attribute 'Reset'

OK, so I’ve got robot.urdf looking like a table, and loading. But there are various differences between the minitaur gym env and my own minitaur, the robotable. Gotta compare, etc. OK, it wants reset(), not Reset(…).

Just a warning:

No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
torso
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:

Happy with the defaults of 1 for the mass and inertia settings, for now.

OK, so apparently I’m just working my way through the bugs one by one. That’s probably as good a method as any:

File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
observation.extend(self.robotable.GetMotorAngles().tolist())
AttributeError: 'Robotable' object has no attribute 'GetMotorAngles'

K did a :%s/Get/get/g in vi. NEXT.

File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 301, in reset
return self._get_observation()
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
observation.extend(self.robotable.getMotorAngles().tolist())
File "/opt/gym-robotable/gym_robotable/envs/robotable.py", line 88, in getMotorAngles
jointState = p.getJointState(self.quadruped, self.motorIdList[i])
IndexError: list index out of range

ok self.nMotors = 4
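The index error suggests motorIdList has fewer entries than nMotors. A sketch of how I’d rebuild it from the joint names, assuming the motor joints can be picked out by name (the "motor" substring is a guess at the naming convention in robot.urdf):

import pybullet as p

def build_motor_id_list(robot_id, num_motors):
    # Collect joint indices whose names look like motor joints.
    motor_ids = []
    for i in range(p.getNumJoints(robot_id)):
        joint_name = p.getJointInfo(robot_id, i)[1].decode("utf-8")
        if "motor" in joint_name:
            motor_ids.append(i)
    assert len(motor_ids) == num_motors, (motor_ids, num_motors)
    return motor_ids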

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 527, in _get_observation_upper_bound
    num_motors = self.robotable.num_motors
AttributeError: 'Robotable' object has no attribute 'num_motors'

K, let’s change nMotors to num_motors. NEXT.

File "ars.py", line 77, in ExploreWorker
state, reward, done, _ = env.step(action)
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
action = self._transform_action_to_motor_command(action)
File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'
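The minitaur’s ConvertFromLegModel remaps a leg-space action into motor angles; the robotable doesn’t have that leg model, so a placeholder that just clips the action to a motor range should be enough to get past this (the range below is arbitrary):

import numpy as np

def ConvertFromLegModel(self, actions):
    # Pass-through stand-in for the minitaur's leg-space remapping.
    return np.clip(np.asarray(actions), -np.pi / 2, np.pi / 2)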

Categories
AI/ML simulation

Hindsight Experience Replay (HER)

Hindsight Experience Replay

(Instead of getting a crap result and saying, wow, what a crap result, you say: ah well, if that’s what I had wanted to do, then that would have worked.)

It’s a clever idea, OpenAI.

One ability humans have, unlike the current generation of model-free RL algorithms, is to learn almost as much from achieving an undesired outcome as from the desired one.

https://arxiv.org/abs/1707.01495

“So why not just pretend that we wanted to achieve this goal to begin with, instead of the one that we set out to achieve originally?”

The HER algorithm achieves this by using what are called “sparse and binary” rewards, which only indicate to the agent whether it has failed or succeeded. In contrast, the “dense”, “shaped” rewards used in conventional reinforcement learning tip agents off as to whether they are getting “close,” “closer,” “much closer,” or “very close” to hitting their goal. Such dense rewards can speed up the learning process, but they can be difficult to design and implement for real-world applications; sparse rewards, on the other hand, often don’t give the agent much of a learning signal on their own, which is the gap HER fills by relabelling failed attempts as successes for the goals that were actually reached.

https://thenewstack.io/openai-algorithm-allows-ai-to-learn-from-its-mistakes/
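The mechanism is just relabelling: after an episode, store extra copies of the transitions with the goal replaced by something that was actually achieved, and recompute the sparse reward against that goal. A rough sketch of the ‘final’ relabelling strategy (all names and the distance threshold are illustrative; goals are numpy arrays):

import numpy as np

def sparse_reward(achieved_goal, goal, eps=0.05):
    # 0 on success, -1 otherwise: the "sparse and binary" reward
    return 0.0 if np.linalg.norm(achieved_goal - goal) < eps else -1.0

def relabel_episode(episode):
    # episode: list of dicts with keys state, action, next_state, achieved_goal, goal
    new_goal = episode[-1]["achieved_goal"]  # pretend this is what we wanted all along
    relabelled = []
    for t in episode:
        r = sparse_reward(t["achieved_goal"], new_goal)
        relabelled.append({**t, "goal": new_goal, "reward": r})
    return relabelled  # store these alongside the originals in the replay buffer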

G-HER

Hindsight experience replay (HER) based on universal value functions shows promising results in such multi-goal settings by substituting achieved goals for the original goal, frequently giving the agent rewards. However, the achieved goals are limited to the current policy level and lack guidance for learning. We propose a novel guided goal-generation model for multi-goal RL named G-HER. Our method uses a conditional generative recurrent neural network (RNN) to explicitly model the relationship between policy level and goals, enabling the generation of various goals conditioned on different policy levels.

https://sites.google.com/view/gher-algorithm

Categories
Locomotion

Robot “Forward Kinematics”

This doesn’t seem necessary any more, because we have simulators. But interesting to see what people had to do back in the day:

Denavit-Hartenberg (DH) parameters 

https://blog.robotiq.com/how-to-calculate-a-robots-forward-kinematics-in-5-easy-steps
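For the record, each row of a DH table turns into one homogeneous transform, and forward kinematics is just the product of those transforms down the chain. A sketch using the standard (non-modified) DH convention:

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    # dh_rows: list of (theta, d, a, alpha) per joint, base to end-effector
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T  # end-effector pose in the base frame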

Categories
GANs

ALAE – Adversarial Latent Autoencoders

https://github.com/podgorskiy/ALAE https://arxiv.org/pdf/2004.04467.pdf

Similar to GANs, but a bit cleaner. Training on images encodes a latent representation of the ur-Celebrity, and new images are generated from it using another image as input.

Latent: (of a quality or state) existing but not yet developed or manifest; hidden or concealed.

Autoencoder
Categories
dev robots Vision

ROS Camera Topic

What is a ROS topic? http://wiki.ros.org/Topics
ROS can publish the webcam stream to a “topic”, and any part of the robot can subscribe to it, by name, if it is interested in that data. ROS is almost like a program where everything is a global variable.
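In rospy that pattern is just a Publisher and a Subscriber pointed at the same topic name; a minimal sketch (node and topic names made up):

import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

rospy.init_node('demo_node')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rospy.Subscriber('/chatter', String, callback)

rate = rospy.Rate(1)  # publish at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data="hello"))
    rate.sleep()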

https://answers.ros.org/question/218228/ros-example-program-doesnt-work-with-the-laptop-webcam/

I made this file for the laptop webcam, but then didn’t end up using it.

<launch>
  <group ns="camera">
    <node pkg="libuvc_camera" type="camera_node" name="mycam">
      <!-- Parameters used to find the camera -->
      <param name="vendor" value="0x2232"/>
      <param name="product" value="0x1082"/>
      <param name="serial" value=""/>
      <!-- If the above parameters aren't unique, choose the first match: -->
      <param name="index" value="0"/>

      <!-- Image size and type -->
      <param name="width" value="640"/>
      <param name="height" value="480"/>
      <!-- choose whichever uncompressed format the camera supports: -->
      <param name="video_mode" value="uncompressed"/> <!-- or yuyv/nv12/mjpeg -->
      <param name="frame_rate" value="15"/>

      <param name="timestamp_method" value="start"/> <!-- start of frame -->
      <param name="camera_info_url" value="file:///tmp/cam.yaml"/>

      <param name="auto_exposure" value="3"/> <!-- use aperture_priority auto exposure -->
      <param name="auto_white_balance" value="false"/>
    </node>
  </group>
</launch>

roscore

apt install ros-melodic-uvc-camera

rospack listnames

rosrun uvc_camera uvc_camera_node _device:=/dev/video0

rostopic list

(should show /image_raw now… there’s a quick Python check for this after these commands)

rosrun dso_ros dso_live calib=/opt/catkin_ws/src/dso_ros/camera.txt image:=/image_raw/
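A quick way to confirm frames are actually arriving on /image_raw from Python, assuming cv_bridge is installed (node name is made up):

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo("got frame %dx%d", frame.shape[1], frame.shape[0])

rospy.init_node('image_raw_check')
rospy.Subscriber('/image_raw', Image, on_image)
rospy.spin()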