Categories
dev Locomotion robots simulation

pyBullet

Let’s get back to the main thing we wanted to look at.

pip3 install pybullet --upgrade --user

cd /opt

git clone https://github.com/bulletphysics/bullet3.git

cd bullet3/

cd build3/

cmake -G"Eclipse CDT4 - Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DBUILD_BULLET2_DEMOS=ON -DBUILD_CPU_DEMOS=ON -DBUILD_BULLET3=ON -DBUILD_OPENGL3_DEMOS=ON -DBUILD_EXTRAS=ON -DBUILD_SHARED_LIBS=ON -DINSTALL_EXTRA_LIBS=ON -DUSE_DOUBLE_PRECISION=ON ..

make

make install

ldconfig

OK, that installs it and builds the examples, but it seems to be for Eclipse CDT. Not sure I need a C++ IDE.

OK, so I was supposed to do this for cmake:

./build_cmake_pybullet_double.sh

But anyway,

cd ./examples/RobotSimulator

./App_RobotSimulator

COOL.

OK, let's run build_cmake_pybullet_double.sh

OK, there we go.

cd ./build_cmake/examples/ExampleBrowser

./App_ExampleBrowser

OK this is great.
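
And just to check the Python side works too, here's a minimal pybullet script along the lines of the quickstart guide (plane.urdf and r2d2.urdf ship with pybullet_data, so no extra files are needed):

import pybullet as p
import pybullet_data
import time

# Connect with the GUI, load a ground plane and a simple robot, then just step the sim
physics_client = p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane_id = p.loadURDF("plane.urdf")
robot_id = p.loadURDF("r2d2.urdf", [0, 0, 0.5])

for _ in range(1000):
    p.stepSimulation()
    time.sleep(1.0 / 240.0)

print(p.getBasePositionAndOrientation(robot_id))
p.disconnect()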

Categories
robots

Integrating sonar and IR sensor plugin to robot model in Gazebo with ROS.

copied from this dude https://medium.com/teamarimac/integrating-sonar-and-ir-sensor-plugin-to-robot-model-in-gazebo-with-ros-656fd9452607

what is ROS?

ThiruVenthan, Dec 7, 2018

The Robot Operating System (ROS) is open-source robotics middleware licensed under the BSD license. ROS provides services like communication between processes, low-level device control, hardware abstraction, package management, and visualization tools for debugging. A ROS-based system can be represented as a graph, where processing happens in nodes and nodes communicate with each other to carry out the overall task.

what is Gazebo?

Gazebo is a robotics simulator which allows us to simulate and test our algorithms in indoor and outdoor environments. Some of the great features of the Gazebo simulator are advanced 3D visualization, support for various physics engines (ODE, Bullet, Simbody, and DART), and the ability to simulate sensors with noise, which ultimately results in more realistic simulations.

Gazebo user interface

Requirements

  1. Computer with Ubuntu 16.04.5 LTS
  2. ROS (Kinetic) installed and a basic understanding of ROS (tutorials)
  3. The gazebo_ros package installed.
  4. A catkin workspace with robot URDF and world files (a sample workspace: git clone https://thiruashok@bitbucket.org/thiruashok/rover_ws.git)

Launching the Gazebo with the robot model

Go to the cloned directory, open a terminal (Ctrl+Alt+T), and run the following commands.

cd rover_ws
catkin_make
source devel/setup.bash
roslaunch rover_gazebo rover_world.world

You can see a robot in a simulation world in Gazebo, as shown in the figure below.

rover bot in gazebo simulation

Open another terminal and run the following command to see the available topics.

rostopic list

You will get the following as the output.

/clock
/cmd_vel
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/gazebo_gui/parameter_descriptions
/gazebo_gui/parameter_updates
/joint_states
/odom
/rosout
/rosout_agg
/tf
/tf_static

From the output you can see that there are no topics related to sensors, but the cmd_vel topic is available, so we can navigate the robot by sending commands (given below) to this topic. Since the robot uses a differential drive mechanism, you can move it around by changing the linear x and angular z values.

rostopic pub /cmd_vel geometry_msgs/Twist "linear:
x: 1.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0"

Adding sonar and IR sensor models to the robot model

Open the rover.xacro file in the rover_ws/src/rover_description directory using your favorite text editor. Add the following code above the </robot> tag.

<joint name="ir_front_joint" type="fixed">
<axis xyz="0 1 0" />
<origin rpy="0 0 0" xyz="0.5 0 0" />
<parent link="base_footprint"/>
<child link="base_ir_front"/>
</joint><link name="base_ir_front">
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.01 0.01 0.01"/>
</geometry>
</collision> <visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.01 0.01 0.01"/>
</geometry>
</visual> <inertial>
<mass value="1e-5" />
<origin xyz="0 0 0" rpy="0 0 0"/>
<inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6" />
</inertial>
</link><joint name="sonar_front_joint" type="fixed">
<axis xyz="0 1 0" />
<origin rpy="0 0 0" xyz="0.5 0 0.25" />
<parent link="base_footprint"/>
<child link="base_sonar_front"/>
</joint><link name="base_sonar_front">
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.01 0.01 0.01"/>
</geometry>
</collision> <visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.01 0.01 0.01"/>
</geometry>
</visual> <inertial>
<mass value="1e-5" />
<origin xyz="0 0 0" rpy="0 0 0"/>
<inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6" />
</inertial>
</link>

The above lines of code attach the sensor models (simple boxes) to the robot model. For a basic understanding of the URDF file of a robot, refer to this. Now launch the world again with the following command.

roslaunch rover_gazebo rover_world.world

The sensor position can be changed by adjusting the origin rpy and xyz values within the joint tag.

<origin rpy="0 0 0" xyz="0.5 0 0.25" />

rover with sensor fitted on body

You can notice that the sensor model is now visible on top of the robot model.

Adding the sensor plugin for Sonar and IR

The gazebo_ros_range plugin can be used to model both the sonar and the IR sensor. The plugin publishes messages in the sensor_msgs/Range message format, so integration with ROS is straightforward. To add the plugin, open the rover_ws/src/rover_description/urdf/rover.gazebo file in your favorite text editor and add the following lines above the </robot> tag.

<gazebo reference="base_ir_front">
  <sensor type="ray" name="TeraRanger">
    <pose>0 0 0 0 0 0</pose>
    <visualize>true</visualize>
    <update_rate>50</update_rate>
    <ray>
      <scan>
        <horizontal>
          <samples>10</samples>
          <resolution>1</resolution>
          <min_angle>-0.14835</min_angle>
          <max_angle>0.14835</max_angle>
        </horizontal>
        <vertical>
          <samples>10</samples>
          <resolution>1</resolution>
          <min_angle>-0.14835</min_angle>
          <max_angle>0.14835</max_angle>
        </vertical>
      </scan>
      <range>
        <min>0.01</min>
        <max>2</max>
        <resolution>0.02</resolution>
      </range>
    </ray>
    <plugin filename="libgazebo_ros_range.so" name="gazebo_ros_range">
      <gaussianNoise>0.005</gaussianNoise>
      <alwaysOn>true</alwaysOn>
      <updateRate>50</updateRate>
      <topicName>sensor/ir_front</topicName>
      <frameName>base_ir_front</frameName>
      <radiation>INFRARED</radiation>
      <fov>0.2967</fov>
    </plugin>
  </sensor>
</gazebo>

The above block is for the IR sensor. You can simply copy-paste it again, set the gazebo reference to base_sonar_front, and change the topicName and frameName to appropriate ones. Now launch Gazebo again.

roslaunch rover_gazebo rover_world.world
rover with sensor plugin loaded

Sonar and IR sensor rays can be seen in the simulation world. To see the sensor readings, subscribe to the appropriate topics. See the commands below.

rostopic list

Now the output will be like this:

/clock
/cmd_vel
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/gazebo_gui/parameter_descriptions
/gazebo_gui/parameter_updates
/joint_states
/odom
/rosout
/rosout_agg
/sensor/ir_front
/sensor/sonar_front
/tf
/tf_static

You can see that the sonar and IR sensors are publishing to new topics, namely /sensor/ir_front and /sensor/sonar_front.

rostopic echo /sensor/ir_front

Type the above command in the terminal to observe the output from the IR sensor:

header: 
seq: 1
stamp:
secs: 1888
nsecs: 840000000
frame_id: "base_ir_front"
radiation_type: 1
field_of_view: 0.296700000763
min_range: 0.00999999977648
max_range: 2.0
range: 0.0671204701066
---
header:
seq: 2
stamp:
secs: 1888
nsecs: 860000000
frame_id: "base_ir_front"
radiation_type: 1
field_of_view: 0.296700000763
min_range: 0.00999999977648
max_range: 2.0
range: 0.0722889602184
---
header:
seq: 3
stamp:
secs: 1888
nsecs: 880000000
frame_id: "base_ir_front"
radiation_type: 1
field_of_view: 0.296700000763
min_range: 0.00999999977648
max_range: 2.0
range: 0.0635933056474
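
To actually use those readings in code rather than just echoing them, a minimal rospy subscriber would look something like this (my own sketch, not from the article):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Range

# Print the distance reported by the front IR sensor
def on_range(msg):
    rospy.loginfo('IR front range: %.3f m', msg.range)

rospy.init_node('ir_listener')
rospy.Subscriber('/sensor/ir_front', Range, on_range)
rospy.spin()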

In this way, all the sensors (lidar, camera, IMU) can be integrated into the robot model. This helps a lot in validating algorithms and finding the optimal sensor positions without fully building the actual hardware.

The Official Blog of Arimac. Thanks to Ampatishan Sivalingam.

Categories
dev robots

ROS project setup and overview

As an interlude from checking out Blender/phobos, I saw that the big free robot technology effort of the 2010s was probably ROS, the robot operating system, and as it has been around for a long time, it has had its own established ways, in the pre-phobos days. Before these kids could just click their doodads and evolve silicon lifeforms when they felt like it.

No, the phobos documentation is actually just a bit terse, so let's watch some Gazebo videos. This dude just jumps in, expecting you to have things set up. https://stackoverflow.com/questions/41234957/catkin-command-not-found but this guy had the same problem as me: "Probably forgot to set up the environment after installing ROS."

So yeah, what the hell is catkin? It's like how Ruby on Rails wants you to ask it to make boilerplate for you. So we had to run:

mkdir chicken_project
cd chicken_project/
ls    # (nothing here, boss)
mkdir src
cd src
catkin_init_workspace
cd ..
catkin_make
source devel/setup.bash

So, starting 53 seconds in:

https://youtu.be/qi2A32WgRqI?t=53

Ok so his time guesstimate is a shitload of time. We’re going to go for the shoot first approach. Ok he copy-pastes some code, and apparently you have to copy it from the youtube video. No thanks.

So this was his directory structure, anyway.

cd src
catkin_create_pkg my_simulations
cd my_simulations/
mkdir launch
cd launch/
touch my_world.launch
cd ..
mkdir world
cd world
touch empty_world.world

That is funny. Touch my world. But he’s leading us on a wild goose chase. We can’t copy paste from youtube, dumbass.

Categories
dev Linux robots

ROS installation

Robot Operating System installation http://wiki.ros.org/melodic/Installation/Ubuntu

Categories
control Locomotion Math robots

geometric, kinematic and dynamic models

The geometric model describes the geometric relationships that specify the spatial extent of a given component.

From this website: https:

Kinematics is the study of the motion of bodies without regard to the forces that cause the motion. Dynamics, on the other hand, is the study of the motion of bodies due to applied forces (think F=ma). For example, consider orbital mechanics: Kepler's Laws are kinematic, in that they describe characteristics of a satellite's orbit such as its elliptical shape without considering the forces that cause that motion, whereas Newton's Law of Gravity is dynamic, as it incorporates the force of gravity to describe why the orbit is elliptical.
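
To make that concrete for a robot leg, here's a tiny sketch of my own (made-up link lengths and inertia): the kinematic part answers "where is the foot for given joint angles", the dynamic part answers "how does a joint move under an applied torque".

import numpy as np

# Kinematics: where is the foot, given the joint angles? No forces involved.
def forward_kinematics(theta1, theta2, l1=0.1, l2=0.1):
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Dynamics: how does a joint angle evolve under an applied torque? (F = ma territory)
def step_joint(theta, omega, torque, inertia=0.01, dt=0.001):
    alpha = torque / inertia          # Newton's second law, rotational form
    omega = omega + alpha * dt        # integrate acceleration -> velocity
    theta = theta + omega * dt        # integrate velocity -> angle
    return theta, omega

print(forward_kinematics(0.3, -0.6))  # foot position for a given pose
print(step_joint(0.0, 0.0, 0.05))     # one tiny time step under a small torque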

Categories
Behaviour bio

Ethology

Ethology lecture, Sapolsky: https://www.youtube.com/watch?v=ISVaoLlW104

Possibly very interesting… Neuroethology: https://en.wikipedia.org/wiki/Neuroethology

Also, neurorobotics:

https://github.com/HBPNeurorobotics/BlenderRobotDesigner

Categories
Behaviour

Robot Behaviour

https://www.cpp.edu/~ftang/courses/CS521/notes/robot%20behavior.pdf

Categories
blender

Phobos /Blender

This is probably the best way to create the URDF file required for further physics modelling with Bullet/Gazebo.

Here’s the Bremen University link: https://robotik.dfki-bremen.de/en/research/softwaretools/phobos.html

https://github.com/dfki-ric/phobos/wiki/Installation

“As of release 0.8 of Phobos, we only support Blender 2.79. This means it will not function properly any more for older Blender versions and might not function with later versions; Blender 2.8 is expected to include major changes that will not be compatible with Phobos.”

https://download.blender.org/release/Blender2.79/

Download it, then extract:

tar xvf blender-2.79b-linux-glibc219-x86_64.tar.gz

And install phobos…

git clone https://github.com/dfki-ric/phobos.git

python3 setup.py

https://github.com/dfki-ric/phobos/wiki/First-Steps

And then begin… https://github.com/dfki-ric/phobos/wiki/Modeling-Walkthrough

Essentially, what you want to build is a hierarchy of objects, parented to one another in such a way that pairs of objects can be connected with joints to represent the robot’s kinematics later on. For this purpose, it is easiest to build the robot in its rest pose, i.e. the way it will look like when all its joints are at their origin position.
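
For reference, the parenting itself is plain Blender Python. A rough sketch from the Blender console (object names are made up, and Phobos adds its own link/joint metadata on top of this):

import bpy

# Two primitive cubes standing in for body parts
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 0.3))
torso = bpy.context.active_object
torso.name = "torso"

bpy.ops.mesh.primitive_cube_add(location=(0.3, 0, 0.3))
upper_leg = bpy.context.active_object
upper_leg.name = "upper_leg"

# Parent the leg to the torso, keeping its current world transform
upper_leg.parent = torso
upper_leg.matrix_parent_inverse = torso.matrix_world.inverted()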

So we need to plan the robot now.

Categories
OpenCV Vision

Installing OpenCV on computer

OpenCV will be needed for the robot to make sense of the camera input. We'll have the camera on a Raspberry Pi of some sort.

pip install opencv-contrib-python

Someone made a useful website on OpenCV here https://www.pyimagesearch.com/

root@chrx:/opt/imagezmq/tests# python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'cv2.cv2' has no attribute '__version'
>>> cv2.__version__
'4.2.0'

Just gotta make sure it uses python3, not python2.

We’ll run it on the raspberry pi (https://www.pyimagesearch.com/2018/09/19/pip-install-opencv/) eventually, but for now, just testing core features on GalliumOS. It’s also required by imagezmq (https://github.com/jeffbass/imagezmq)

The robot has to use its sensors to find chickens, eggs, and decide whether to walk or turn. Stimuli to trigger actions.
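
As a placeholder for that, here's a rough colour-threshold sketch with OpenCV (the HSV range and the image path are made-up stand-ins; a real chicken/egg detector will need proper tuning or a trained model):

import cv2
import numpy as np

# Grab a frame (this would come from the Pi camera in practice)
frame = cv2.imread('test_frame.jpg')
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Very rough "egg-ish white/cream" HSV range: made-up numbers that need tuning
lower = np.array([0, 0, 180])
upper = np.array([40, 60, 255])
mask = cv2.inRange(hsv, lower, upper)

# Find blobs and keep the biggest one as the candidate target
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    cx = x + w // 2
    # Dumb steering rule: turn towards the blob, walk if it's roughly centred
    if cx < frame.shape[1] // 3:
        print('turn left')
    elif cx > 2 * frame.shape[1] // 3:
        print('turn right')
    else:
        print('walk forward')
else:
    print('no target, keep wandering')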

pip install zmq

Might also need ‘pip install imutils’

Copying imagezmq.py to the tests folder cause I don’t know shit about python and how to import it from the other folder. So here’s the server:

python3 test_1_receive_images.py

import sys
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:  # press Ctrl-C to stop image display program
    image_name, image = image_hub.recv_image()
    cv2.imshow(image_name, image)
    cv2.waitKey(1)  # wait until a key is pressed
    image_hub.send_reply(b'OK')

And the client program:

python3 test_1_send_images.py


import sys
import time
import numpy as np
import cv2
import imagezmq

# Send a stream of simple test images:
# a green square on a black background, with a frame counter drawn on it

sender = imagezmq.ImageSender()
i = 0
image_window_name = 'From Sender'
while True:  # press Ctrl-C to stop image sending program
    # Increment a counter and print its value to the console
    i = i + 1
    print('Sending ' + str(i))

    # Create a simple image
    image = np.zeros((400, 400, 3), dtype='uint8')
    green = (0, 255, 0)
    cv2.rectangle(image, (50, 50), (300, 300), green, 5)

    # Add counter value to the image and send it to the queue
    cv2.putText(image, str(i), (100, 150), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 255), 4)
    sender.send_image(image_window_name, image)
    time.sleep(1)

Cool it sent pics to my screen. Heh “Hershey Simplex” sounds more like a virus than a font.

Categories
arxiv

Machine Learning reddit arxivs

Index of weekly machine learning arXiv/Reddit link roundups, Week 1 through Week 82 (grouped in tens: 1-10, 11-20, ..., 81-90).