Categories
Simulation Vision

We’ve got an egg in the gym environment now, so we need to collect some data for training the robot to go pick up an egg.

I’m going to have it save the RGBA, depth and segmentation images to disk for U-Net training. I left out the depth image for now, since the pictures don’t look useful, but some papers do use depth, so I might reconsider. One weed-bot paper uses 14-channel images with all sorts of extra domain-specific data relevant to plants.
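
For the record, the capture code is roughly this shape (a sketch: I’m leaving the camera matrices at pybullet’s defaults here, and the helper name is made up):

import numpy as np
import pybullet as p
from PIL import Image

def save_camera_images(step, width=256, height=256):
    # getCameraImage returns (width, height, rgbPixels, depthPixels, segmentationMask)
    _, _, rgba, depth, seg = p.getCameraImage(width, height)  # depth unused for now
    rgba = np.reshape(rgba, (height, width, 4)).astype(np.uint8)
    Image.fromarray(rgba[:, :, :3]).save('rgb_%05d.jpg' % step)
    # casting int32 to uint8 turns the -1 "background" pixels into 255
    seg = np.reshape(seg, (height, width)).astype(np.uint8)
    Image.fromarray(seg).save('seg_%05d.png' % step)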

I wrote some code to take pics whenever the egg was in the viewport, and it collected around 1000 RGB and segmentation pictures. I need to change the colour of the egg for sure, and probably randomize all the textures a bit. But the main thing is probably to make the segmentation layers use pixel values 0, 1, 2, etc., so that it detects the egg, and not so much the link in the foreground.
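
The viewport check is just “does the egg’s uid appear in the segmentation buffer”; something like this, where egg_uid is whatever loadURDF returned for the egg:

import numpy as np
import pybullet as p

def egg_visible(egg_uid, width=256, height=256):
    # only keep frames where at least one pixel belongs to the egg body
    _, _, _, _, seg = p.getCameraImage(width, height)
    return egg_uid in np.unique(seg)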

So sigmoid to softmax, and so on. Switching to multi-class also raises the question of whether to switch to PyTorch and COCO panoptic segmentation based training. It will have to happen eventually, as I think all of the fastest implementations are currently PyTorch and COCO based. Keras might work fine for multi-class or multiple binary classification, but it’s a first attempt: something that works, more proof of concept than final implementation. I think Keras will be good enough for these in-simulation 256×256 images.
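
For reference, in Keras the switch is basically just the output layer and the loss; a sketch, with a stand-in tensor for the U-Net’s final decoder feature map:

from keras.models import Model
from keras.layers import Input, Conv2D

num_classes = 5
x = Input((256, 256, 64))  # stand-in for the U-Net's final decoder feature map

# binary segmentation: one sigmoid channel, binary_crossentropy loss
binary_head = Conv2D(1, (1, 1), activation='sigmoid')(x)

# multi-class: one softmax channel per class, categorical_crossentropy loss
multi_head = Conv2D(num_classes, (1, 1), activation='softmax')(x)

model = Model(x, multi_head)
model.compile(optimizer='adam', loss='categorical_crossentropy')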

Regarding multi-class segmentation, karolzak says “it’s just a matter of changing the num_classes argument and you would need to shape your mask in a different way (layer per class?), so for multiclass segmentation you would need a mask of shape (width, height, num_classes)”.

I’ll keep logging my debugging though, if you’re reading this.

So I ran segmask_linkindex.py to see what it does, and how to get more useful data. The code wasn’t running at first, because the segmentation image is actually an array of arrays. I presume it’s a numpy array; I think it must be the rows and columns. So anyway, I added a second level to the loop and printed out the pixel values, and when I ran it in the one mode:

-1
-1
-1
83886081
obUid= 1 linkIndex= 4
83886081
obUid= 1 linkIndex= 4
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
16777217
obUid= 1 linkIndex= 0
16777217
obUid= 1 linkIndex= 0
-1
-1
-1

And in the other mode

-1
-1
-1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
1
obUid= 1 linkIndex= -1
-1
-1
-1
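
Mystery numbers decoded: going by pybullet’s segmask_linkindex.py, each pixel packs the object uid and the link index together as value = objectUniqueId + ((linkIndex + 1) << 24), with -1 meaning nothing was hit. So 83886081 is uid 1, link 4, and 16777217 is uid 1, link 0:

# decode a pixel from the OBJECT_AND_LINKINDEX segmentation mode
def decode_seg_pixel(pixel):
    obUid = pixel & ((1 << 24) - 1)
    linkIndex = (pixel >> 24) - 1
    return obUid, linkIndex

print(decode_seg_pixel(83886081))  # (1, 4)
print(decode_seg_pixel(16777217))  # (1, 0)
print(decode_seg_pixel(1))         # (1, -1), i.e. the base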

Ok I see. Hmm. Well, the important thing is that this code is indeed for extracting the pixel information. I think it’s going to be best for the segmentation to use the simpler segmentation mask that doesn’t track the link info. The code I’d taken from that guy’s thesis project was interpolating the numbers. When I look at the unique elements of the mask without interpolation, I’ve got…

[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   2 255]
[  0   1   2 255]
[  0   1   2 255]
[  0   2 255]
[  0   2 255]

Ok, so I think:

255 is the sky
0 is the plane
2 is the robotable
1 is the egg

So yeah, I was just confused because the segmentation masks looked all black and white. But if you look closely with a pixel-picker tool, the pixel values are (0,0,0), (1,1,1), (2,2,2) and (255,255,255); I just couldn’t see the difference.

The interpolation kinda helps, to be honest.

As per OpenAI’s finding that domain randomization helps with Sim2Real, we want to randomize some textures and some other things like that. I also want to throw in some random chickens. Maybe some cats and dogs. I’m afraid of transfer learning at this stage, because a lot of it has to do with changing the structure of the final layer of the neural network, and that might be tough. Let’s just do chickens and eggs.

An excerpt from OpenAI:

Costs

Both techniques increase the computational requirements: dynamics randomization slows training down by a factor of 3x, while learning from images rather than states is about 5-10x slower.

Ok, that’s a bit more complex than I was thinking. I want to randomize textures and colours first.

I’ve downloaded and unzipped the ‘Describable Textures Dataset’, and ok, it’s loading a random texture for the plane, and a random colour for the egg and chicken.
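
Roughly what that looks like (uids and paths assumed; the DTD unzips to a dtd/images/<category>/ tree):

import glob
import random
import pybullet as p

textures = glob.glob('dtd/images/*/*.jpg')

def randomize_scene(plane_uid, egg_uid, chicken_uid):
    # random texture for the plane
    tex = p.loadTexture(random.choice(textures))
    p.changeVisualShape(plane_uid, -1, textureUniqueId=tex)
    # random colour for the egg and chicken
    for uid in (egg_uid, chicken_uid):
        p.changeVisualShape(uid, -1,
                            rgbaColor=[random.random(), random.random(), random.random(), 1])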

Ok, next thing is the Simulation CNN.

Interpolation doesn’t work for this, though, because it interpolates from whatever values are available in the image:

[  0  85 170 255]
[  0  63 127 191 255]
[  0  63 127 191 255]

I kind of need the basic UID segmentation.

[  0   1   2   3 255]

Ok, pity about the mask colours, but anyway.

Let’s train the UNet on the new dataset.

We’ll need to make karolzak’s changes.

I’ve saved 2000+ rgb.jpg and seg.png files, and we’ve got [0, 1, 2, 3, 255] = [plane, egg, robot, chicken, sky].

So num_classes=5

And

“for multiclass segmentation you would need a mask of shape (width, height, num_classes) “

What is y.shape?

(2001, 256, 256, 1)

which is 2001 files of 256 × 256 pixels, with one channel: the integer class id. So if I change that to 5…? ValueError: cannot reshape array of size 131137536 into shape (2001,256,256,5). Makes sense, in hindsight: 131137536 = 2001 × 256 × 256, so reshaping can’t conjure five channels out of one; the masks need to be re-encoded, not reshaped.

Um… Ok I need to do more research. Brb.

So the keras_unet library is set up to input binary masks per class, and output binary masks per class.

I would rather use the ‘integer’ class output, and have it output a single array with the class id per pixel, similar to this question. In case karolzak doesn’t know how to do this with his library, I’ve asked on Stack Overflow, in the meantime, for an elegant way to make the binary masks from a multi-class mask.

I coded it up using the library author’s suggested method, as he pointed out that the gains of the integer encoding method are minimal. I’ll check it out another time; I think it might still make sense for certain cases.
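
The conversion itself is basically one numpy expression: a binary channel per class value. Something like:

import numpy as np

CLASS_VALUES = [0, 1, 2, 3, 255]  # plane, egg, robot, chicken, sky

def to_binary_masks(y):
    # y: (n, height, width, 1) integer-encoded masks
    # returns (n, height, width, num_classes), one binary channel per class
    return np.concatenate([(y == v).astype(np.float32) for v in CLASS_VALUES], axis=-1)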

Ok, that’s pretty awesome. We have 4 masks: human, chicken, egg, robot. I left out plane and sky for now. That was just 2000 images of training, and I have 20000. I trained on another 2000 images, and it’s down to 0.008 validation loss, which is good enough!

So now I want to load the CNN model in the locomotion code, feed it the images from the camera, and then have a reward function related to maximizing the egg pixels.
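
The reward should be simple enough; a hypothetical sketch, assuming the Keras model outputs softmax masks and the egg is channel 1:

import numpy as np

EGG_CHANNEL = 1  # assumption: index of the egg class in the model output

def egg_pixel_reward(model, rgb):
    # rgb: (256, 256, 3) uint8 image from the simulated camera
    pred = model.predict(rgb[np.newaxis].astype(np.float32) / 255.0)[0]
    egg_pixels = np.sum(np.argmax(pred, axis=-1) == EGG_CHANNEL)
    return egg_pixels / float(pred.shape[0] * pred.shape[1])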

I also need to look at the pybullet-planning project and see what it consists of, as I imagine they’ve made some progress on the next steps. “Built-in implementations of standard motion planners, including PRM, RRT, biRRT, A*, etc.” – I haven’t even come across these acronyms yet! Ok, they’re motion planners. Solvers of some sort. Hmm.

Categories
dev Locomotion The Sentient Table

Spinning Up

OpenAI Spinning Up https://spinningup.openai.com/en/latest/

So I’ve got it working, OpenAI’s PPO. https://github.com/openai/spinningup/issues/142 – Needs a workaround to run your own envs.

But I can’t work out how to increase the exploration “factor”.

It’s some sort of application of Gaussian noise, I think, which is the simple idea.

It looks like clip_ratio is maybe what I need. Hmm, but the PPO script’s argument parser doesn’t expose it:

parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=64)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--cpu', type=int, default=4)
parser.add_argument('--steps', type=int, default=4000)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='ppo')


def ppo(env_fn, actor_critic=core.mlp_actor_critic, ac_kwargs=dict(), seed=0,
        steps_per_epoch=4000, epochs=50, gamma=0.99, clip_ratio=0.2, pi_lr=3e-4,
        vf_lr=1e-3, train_pi_iters=80, train_v_iters=80, lam=0.97, max_ep_len=1000,
        target_kl=0.01, logger_kwargs=dict(), save_freq=10):

https://github.com/openai/spinningup/issues/12

Ok, so I tried the SAC algo too, and the issue I have now is:

AttributeError("'list' object has no attribute 'reshape'")

So the thing is the dimensionality:

“FetchReach environment has Dict observation space (because it packages not only arm position, but also the target location into the observation), and spinning up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance,

env = FlattenDictWrapper(env, ['observation', 'desired_goal'])

Spinning Up implementations currently only support envs with Box observation spaces (where observations are real-valued vectors). These environments have Dict observation spaces, so each obs is a dict of (key, vector) pairs. If you want to test things out in these envs, I recommend doing it as a hacking project! 🙂 “
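
For completeness, the suggested wrapper in context (a sketch; note that in newer gym versions FlattenDictWrapper has been replaced by FilterObservation plus FlattenObservation, so this is version-dependent):

import gym
from gym.wrappers import FlattenDictWrapper

env = gym.make('FetchReach-v1')
env = FlattenDictWrapper(env, ['observation', 'desired_goal'])
print(env.observation_space)  # a flat Box now, instead of a Dict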

Categories
Locomotion The Sentient Table

Locomotion success with PBT+ARS

(Population Based Training, and Augmented Random Search)

Got a reward of 902 with this robotable. That’s a success. It’s an amusing walk. Still has a way to go, probably.

Miranda doesn’t want to train it with that dodge-ball algorithm you sometimes see for toughening up AIs. I’ll see about adding in the uneven terrain, though, and maybe trying to run that obstacle course library.

But there are other, big things to do, which take some doing.

The egg-scooper, or candler, or handler, or picker-upper, will likely use an approach similar to the OpenAI Rubik’s cube solver, with a camera in simulation as the input to a convolutional neural network of some sort, so that there is a transferred mapping between the simulated camera and the real camera.

Also, getting started on sim-to-real attempts: transferring locomotion policies to the RPi robot, to see if it will walk.

The PBT algorithm changes up the hyperparameters occasionally.

It might be smart to use ensemble or continuous learning by switching to a PPO implementation at the 902 reward checkpoint.

I get the sense that gradient descent becomes more useful once you’ve got past the novelty pitfalls, like learning to step forward instead of falling over. It can probably speed up learning at this point.

Categories
The Sentient Table

Yee hah!

Categories
AI/ML Locomotion The Sentient Table

Tensorboard

Tensorboard is TensorFlow’s graph-plotting dashboard, served at localhost:6006:

tensorboard --logdir=.

or, for all the experiments:

tensorboard --logdir=/root/ray_results/

I ran the ARS algorithm with Ray on the robotable environment, and left it running for a day with the UI off. I set it up to run Tune, but the environments take 400MB of RAM each, which is pretty close to the 4GB in this laptop, so I was only running a single experiment.

So the next thing is to get it to start play back from a checkpoint.

(A few days pass, the github issue I had was something basic, that I thought I’d checked.)

So now I have a process where it runs 100 iterations, then uses the best checkpoint as the starting policy for the next 100 iterations. It might just be wishful thinking, but I do actually see a positive trend through the graphs, in ‘wall’ view. There’s also lots of variation from falling over, so I think we might just need to get these hyperparameters tuning. (Probably need to tweak the reward weights too. But lol, giving an AI access to its own reward function…)

Just a note on that reward-function idea: the AI will definitely just be like, *999999.
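
For the curious, the checkpoint loop is roughly this shape (a sketch against Ray Tune as I understand it; the Analysis method names have moved around between Ray versions, so treat them as approximate):

from ray import tune

checkpoint = None
for round_num in range(10):
    analysis = tune.run(
        "ARS",
        config={"env": "gym_robotable:RobotableEnv-v0"},
        stop={"training_iteration": 100},
        restore=checkpoint,        # None on the first round
        checkpoint_at_end=True,
    )
    # carry the best policy forward as the next starting point
    trial = analysis.get_best_trial("episode_reward_mean", mode="max")
    checkpoint = analysis.get_best_checkpoint(trial, "episode_reward_mean", mode="max")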

After training it overnight, with the PBT & ARS, it looks like one policy really beat out the other ones.

It ran a lot longer than the others.

Categories
The Sentient Table

Hyperparameters and Rewards

I got the table walking with ARS, but pybullet saving just the perceptron weights didn’t seem to reload progress.

So I switched to PPO, which is a bit more complicated. Stable Baselines PPO1 and PPO2 converged too easily, with the table opting to fall over all the time.

So I started editing the reward function weights, changing it from weighing X-axis movement by 1 and Z-axis movement by 0.5, to the opposite. So standing up is more important now. I also penalised falling over by a constant value. It’s not looking particularly smart after 11 rounds, but it’s not falling over forward anymore, at least. Yet.
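
In other words, something like this (hypothetical names and constants, just to pin down the shape of it):

X_WEIGHT = 0.5       # was 1.0: forward movement matters less now
Z_WEIGHT = 1.0       # was 0.5: standing up matters more
FALL_PENALTY = 10.0  # constant penalty when is_fallen() fires

def compute_reward(dx, dz, fallen):
    reward = X_WEIGHT * dx + Z_WEIGHT * dz
    if fallen:
        reward -= FALL_PENALTY
    return reward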

I also changed some PPO hyperparams:

clip_param=0.4, entcoeff=0.2, timesteps_per_actorbatch=1024, 

Basically, more exploration than before: allowing more variation in policy changes, increasing some sort of entropy (can’t hurt, right?), and giving it more time to evaluate per batch, as maybe falling over was as good as you could hope for in a smaller batch.

This is a good summary of the state of the art in hyperparameter tuning, which I’ll probably need to do soon: https://medium.com/criteo-labs/hyper-parameter-optimization-algorithms-2fe447525903

Combine PPO with NES to Improve Exploration https://arxiv.org/pdf/1905.09492.pdf

PBT https://arxiv.org/abs/1711.09846

https://deepmind.com/blog/article/population-based-training-neural-networks

Policy Optimization with Model-based Explorations https://arxiv.org/pdf/1811.07350.pdf

It seems like my fiddling with hyperparams caused ‘kl’ to go to NaN. I dunno.

(When I first searched ‘KL’ I found the Karhunen-Loeve transform, some stochastic ‘eigenvector transform’ similar to the Fourier transform for sound. But the ‘kl’ in the PPO logs is actually the Kullback-Leibler divergence: a measure of how far the updated policy’s action distribution has drifted from the old one’s, which PPO uses for early stopping.)

So I need to tune hyperparams.

Stable Baselines lets you access and modify model parameters: https://stable-baselines.readthedocs.io/en/master/guide/examples.html#accessing-and-modifying-model-parameters

So kl becoming NaN could mean I’m returning a zero somewhere from the model.

“In my case, adding 1e-8 to the divisor made the trick… ” – https://github.com/modestyachts/ARS/issues/1

https://stable-baselines.readthedocs.io/en/master/guide/checking_nan.html
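
That checking_nan page suggests wrapping the env in VecCheckNan, so it raises at the first NaN instead of silently diverging; roughly:

import gym
from stable_baselines.common.vec_env import DummyVecEnv, VecCheckNan

env = DummyVecEnv([lambda: gym.make('gym_robotable:RobotableEnv-v0')])
env = VecCheckNan(env, raise_exception=True)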

Categories
Locomotion The Sentient Table

RL Toolboxes

stable baselines – https://stable-baselines.readthedocs.io/en/master/

Ray / RLLib – https://docs.ray.io/en/master/

OpenAI Spinning Up – https://spinningup.openai.com/en/latest/spinningup/keypapers.html

RLKit – https://github.com/vitchyr/rlkit

Garage – https://github.com/rlworkgroup/garage

Categories
dev Locomotion The Sentient Table

Stable baselines

Need something to compare results to.

To install,

https://stable-baselines.readthedocs.io/en/master/guide/install.html

pip install git+https://github.com/hill-a/stable-baselines

Successfully installed absl-py-0.9.0 astor-0.8.1 gast-0.2.2 google-pasta-0.2.0 grpcio-1.30.0 h5py-2.10.0 keras-applications-1.0.8 keras-preprocessing-1.1.2 opt-einsum-3.2.1 stable-baselines-2.10.1a1 tensorboard-1.15.0 tensorflow-1.15.3 tensorflow-estimator-1.15.1 termcolor-1.1.0 wrapt-1.12.1

I’d originally done

pip install stable-baselines[mpi]

but the GitHub install pulls in the dependencies too.

Ok, so pybullet comes with an ‘enjoy’ program, which lives in

~/.local/lib/python3.6/site-packages/pybullet_envs/stable_baselines

You can run it using:

python3 -m pybullet_envs.stable_baselines.enjoy --algo td3 --env HalfCheetahBulletEnv-v0

Ok, I set up ppo2 and tried to run python3 ppo2.py:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 106, in spec
    importlib.import_module(mod_name)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'gym-robotable'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "ppo2.py", line 29, in <module>
    env = gym.make(hp.env_name)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 142, in make
    return registry.make(id, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 86, in make
    spec = self.spec(path)
  File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 109, in spec
    raise error.Error('A module ({}) was specified for the environment but was not found, make sure the package is installed with `pip install` before calling `gym.make()`'.format(mod_name))
gym.error.Error: A module (gym-robotable) was specified for the environment but was not found, make sure the package is installed with `pip install` before calling `gym.make()`

Registration… hmm ARS.py doesn’t complain. We had this problem before.

pip3 install -e .
python3 setup.py install

Nope… https://stackoverflow.com/questions/14295680/unable-to-import-a-module-that-is-definitely-installed It’s presumably here somewhere…

root@chrx:/opt/gym-robotable# pip show gym-robotable
Name: gym-robotable
Version: 0.0.1
Summary: UNKNOWN
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /opt/gym-robotable
Requires: gym
Required-by: 

https://github.com/openai/gym/issues/1818 says: you need to either import <name of your package> or do gym.make("<name of your package>:tic_tac_toe-v1") – see the creating-environments guide for more information: https://github.com/openai/gym/blob/master/docs/creating-environments.md

Is it some fuckin gym-robotable vs gym_robotable thing?

Yes. Yes it is.


self.env_name = 'gym_robotable:RobotableEnv-v0'

Ok, so now it’s almost working. But it falls down sometimes, and then the algorithm stops. Ah, needed to define ‘is_fallen’ correctly…

  def is_fallen(self):
    orientation = self.robotable.GetBaseOrientation()
    rot_mat = self._pybullet_client.getMatrixFromQuaternion(orientation)
    local_up = rot_mat[6:]
    pos = self.robotable.GetBasePosition()
    # return (np.dot(np.asarray([0, 0, 1]), np.asarray(local_up)) < 0.85 or pos[2] < -0.25)
    #print("POS", pos)
    #print("DOT", np.dot(np.asarray([0, 0, 1]), np.asarray(local_up)))

    return (pos[2] < -0.28)  #changing fallen definition for now, to height of table
    #return False

  def _termination(self):
    position = self.robotable.GetBasePosition()
    distance = math.sqrt(position[0]**2 + position[1]**2)
    return self.is_fallen() or distance > self._distance_limit

Ok, so now:


if __name__ == "__main__":

  hp = Hp()
  env = gym.make(hp.env_name)

  model = PPO2(MlpPolicy, env, verbose=1)
  model.learn(total_timesteps=10000)

  for episode in range(100):
      obs = env.reset()
      for i in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        #env.render()
        if dones:
            print("Episode finished after {} timesteps".format(i + 1))
            break
        env.render(mode="human")

Now to continue training… https://github.com/hill-a/stable-baselines/issues/599
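
Going by that issue and the docs, continuing should just be save/load; a sketch, with model and env as above (reset_num_timesteps=False keeps the timestep count and logging continuous):

model.save("ppo2_robotable")

# later, to pick up where we left off:
model = PPO2.load("ppo2_robotable", env=env)
model.learn(total_timesteps=10000, reset_num_timesteps=False)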

Categories
dev Locomotion The Sentient Table

ConvertFromLegModel

This is a confusing bit of code


  def ConvertFromLegModel(self, actions):
    """Convert the actions that use leg model to the real motor actions.
    Args:
      actions: The theta, phi of the leg model.
    Returns:
      The eight desired motor angles that can be used in ApplyActions().
    """
  

 COPY THE ACTIONS.

    motor_angle = copy.deepcopy(actions)

DEFINE SOME THINGS

    scale_for_singularity = 1
    offset_for_singularity = 1.5
    half_num_motors = int(self.num_motors / 2)
    quarter_pi = math.pi / 4

FOR EVERY MOTOR

    for i in range(self.num_motors):

THE ACTION INDEX IS THE FLOOR OF HALF. 00112233
      action_idx = int(i // 2)

WELL, SO, THE FORWARD BACKWARD COMPONENT is 
negative thingy times 45 degrees times (the action of the index plus half the motors.... plus the offset thingy)

      forward_backward_component = (
          -scale_for_singularity * quarter_pi *
          (actions[action_idx + half_num_motors] + offset_for_singularity))

AND SO THE EXTENSION COMPONENT IS either + or - 45 degrees times the action.

      extension_component = (-1)**i * quarter_pi * actions[action_idx]

IF 4,5,6,7 MAKE THAT THING NEGATIVE.

      if i >= half_num_motors:
        extension_component = -extension_component

THE ANGLE IS... PI + thingy 1 + thingy 2.

      motor_angle[i] = (math.pi + forward_backward_component + extension_component)



    return motor_angle

Ok, my error is:

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
    action = self._transform_action_to_motor_command(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
    action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'

Ok, anyway, I debugged for an hour and now it’s doing something: it’s saving numpy files now.

policy_RobotableEnv-v0_20200516-192435.npy contains:

the .npy magic header, then: {'descr': '<f8', 'fortran_order': False, 'shape': (4, 16), }

Cool.

But yeah, I had to comment out a lot of stuff. It seems like the actions it’s generating are mostly 0.

Since I simplified the robot to a table, it turns out I don’t need any of that ConvertFromLegModel code.

Ok, anyway, I started over with the minitaur code. Lol. Why are there two tables? Changing the motorDirections gave me this. Good progress.

Categories
dev The Sentient Table

Gym Env

Starting from pybullet/gym/pybullet_envs/bullet/minitaur_gym_env.py

and going by these instructions https://github.com/openai/gym/blob/master/docs/creating-environments.md

First had to remember the protocol buffer stuff https://developers.google.com/protocol-buffers/docs/reference/python-generated

protoc --proto_path=src --python_out=build/gen src/foo.proto src/bar/baz.proto

https://stackoverflow.com/questions/45068568/how-to-create-a-new-gym-environment-in-openai
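
The registration boilerplate from that guide ends up looking something like this (a sketch, with names matching my package):

# gym_robotable/__init__.py
from gym.envs.registration import register

register(
    id='RobotableEnv-v0',
    entry_point='gym_robotable.envs:RobotableEnv',
)

# gym_robotable/envs/__init__.py
# from gym_robotable.envs.robotable_gym_env import RobotableEnv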

/opt/gym-robotable# pip install -e .
Obtaining file:///opt/gym-robotable
Requirement already satisfied: gym>=0.2.3 in /usr/local/lib/python3.6/dist-packages (from gym-robotable==0.0.1) (0.17.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.3.0)
Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.18.1)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.11.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym>=0.2.3->gym-robotable==0.0.1) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.2.3->gym-robotable==0.0.1) (0.18.2)
Installing collected packages: gym-robotable
Running setup.py develop for gym-robotable
Successfully installed gym-robotable

When I try to run the gym env, I get:

raise error.UnregisteredEnv('No registered env with id: {}'.format(id))

Ok, it was some typo in the __init__.py.

Fixed that up in the imports and __init__s, and then got this issue: https://answers.ros.org/question/326226/importerror-dynamic-module-does-not-define-module-export-function-pyinit__tf2/

Ok, I think that was some file that was importing TensorFlow 2. I don’t want to use TensorFlow; it’s like 400MB.

URDF file '/root/.local/lib/python3.6/site-packages/pybullet_data/robot.urdf' not found

So ok, let’s put it there:

/opt/gym-robotable/gym_robotable/envs# cp robot.urdf /root/.local/lib/python3.6/site-packages/pybullet_data/

AttributeError: 'Robotable' object has no attribute 'Reset'

Ok, so I’ve got robot.urdf looking like a table, and loading. But there are various differences between the minitaur gym env and my own minitaur, the robotable. Gotta compare, etc. Ok, it wants reset(), not Reset(…).

Just a warning:

b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
torso

I’m happy with the default mass and inertia settings, for now.

Ok, so apparently I’m just working my way through the bugs one by one. That’s probably as good a method as any:

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
    observation.extend(self.robotable.GetMotorAngles().tolist())
AttributeError: 'Robotable' object has no attribute 'GetMotorAngles'

Ok, did a :%s/Get/get/g in vi. NEXT.

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 301, in reset
    return self._get_observation()
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 493, in _get_observation
    observation.extend(self.robotable.getMotorAngles().tolist())
  File "/opt/gym-robotable/gym_robotable/envs/robotable.py", line 88, in getMotorAngles
    jointState = p.getJointState(self.quadruped, self.motorIdList[i])
IndexError: list index out of range

ok self.nMotors = 4

  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 527, in _get_observation_upper_bound
    num_motors = self.robotable.num_motors
AttributeError: 'Robotable' object has no attribute 'num_motors'

Ok, let’s change nMotors to num_motors. NEXT.

  File "ars.py", line 77, in ExploreWorker
    state, reward, done, _ = env.step(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 350, in step
    action = self._transform_action_to_motor_command(action)
  File "/opt/gym-robotable/gym_robotable/envs/robotable_gym_env.py", line 313, in _transform_action_to_motor_command
    action = self.robotable.ConvertFromLegModel(action)
AttributeError: 'Robotable' object has no attribute 'ConvertFromLegModel'