Robot Operating System (ROS) is open-source robotics middleware licensed under the BSD license. ROS provides services like communication between processes, low-level device control, hardware abstraction, package management, and visualization tools for debugging. A ROS-based system can be represented as a graph where processing happens in nodes and nodes communicate with each other to carry out the overall task.
What is Gazebo?
Gazebo is a robotics simulator which allows us to simulate and test our algorithms in indoor and outdoor environments. Some of the great features of the Gazebo simulator are advanced 3D visualization, support for various physics engines (ODE, Bullet, Simbody, and DART), and the ability to simulate sensors with noise, which ultimately leads to more realistic simulation results.
Requirements
Computer with Ubuntu 16.04.5 LTS
ROS (Kinetic) installed and a basic understanding of ROS (tutorials)
From the results you obtained you can observe that there are no topics related to sensors, but the cmd_vel topic is available, so we can navigate the robot by sending commands (given below) to this topic. Since the robot uses a differential drive mechanism, you can move it around by changing the linear x and angular z values.
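For example, a velocity command can be published straight from the terminal; this is a sketch assuming the topic is /cmd_vel and the robot listens for geometry_msgs/Twist messages:

rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.5, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.2}}'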
Adding sonar and IR sensor models to the robot model
Open the rover.xacro file in the rover_ws/src/rover_description directory using your favorite text editor. Add the following code above the "</robot>" tag.
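That code isn't reproduced in this post, but a minimal sketch of such a block could look like the following (the link/joint names, the base_link parent, and the box size and inertia values are assumptions, not the tutorial's exact snippet):

<!-- hypothetical IR sensor body: a small box fixed to the chassis -->
<link name="base_ir_front">
  <collision>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <box size="0.05 0.05 0.05"/>
    </geometry>
  </collision>
  <visual>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <box size="0.05 0.05 0.05"/>
    </geometry>
  </visual>
  <inertial>
    <mass value="1e-5"/>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6"/>
  </inertial>
</link>

<joint name="ir_front_joint" type="fixed">
  <origin rpy="0 0 0" xyz="0.5 0 0.25"/>
  <parent link="base_link"/>
  <child link="base_ir_front"/>
</joint>

A matching base_sonar_front link and joint can be added the same way.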
This block attaches the sensor model (a simple box) to the robot model. For a basic understanding of the URDF file of a robot, refer to this. Now launch the world again with the following command.
roslaunch rover_gazebo rover_world.world
By changing the origin rpy and xyz values within the joint tag, the sensor position can be changed.
<origin rpy="0 0 0" xyz="0.5 0 0.25" />
You can notice that the sensor model is now visible on top of the robot model.
Adding the sensor plugin for Sonar and IR
The gazebo_ros_range plugin can be used to model both the sonar and the IR sensor. This plugin publishes messages in the sensor_msgs/Range message format, so integration with ROS is straightforward. To add the plugin, open the rover_ws/src/rover_description/urdf/rover.gazebo file in your favorite text editor and add the following lines above the "</robot>" tag.
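The exact lines aren't included here either; a sketch of a gazebo_ros_range block for the IR could look like this (the reference link, plugin/topic/frame names, and the ray and range numbers are assumptions to illustrate the structure):

<!-- hypothetical IR range sensor using the gazebo_ros_range plugin -->
<gazebo reference="base_ir_front">
  <sensor type="ray" name="ir_front_sensor">
    <pose>0 0 0 0 0 0</pose>
    <visualize>true</visualize>
    <update_rate>10</update_rate>
    <ray>
      <scan>
        <horizontal>
          <samples>5</samples>
          <resolution>1.0</resolution>
          <min_angle>-0.05</min_angle>
          <max_angle>0.05</max_angle>
        </horizontal>
        <vertical>
          <samples>5</samples>
          <resolution>1.0</resolution>
          <min_angle>-0.05</min_angle>
          <max_angle>0.05</max_angle>
        </vertical>
      </scan>
      <range>
        <min>0.01</min>
        <max>2.0</max>
        <resolution>0.01</resolution>
      </range>
    </ray>
    <plugin name="gazebo_ros_ir_front" filename="libgazebo_ros_range.so">
      <gaussianNoise>0.005</gaussianNoise>
      <alwaysOn>true</alwaysOn>
      <updateRate>10</updateRate>
      <topicName>sensor/ir_front</topicName>
      <frameName>base_ir_front</frameName>
      <radiation>INFRARED</radiation>
      <fov>0.1</fov>
    </plugin>
  </sensor>
</gazebo>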
The above one is for the IR; you can simply copy and paste it again, set the gazebo reference to base_sonar_front, and change the topicName and frameName to appropriate values. Now launch Gazebo again.
roslaunch rover_gazebo rover_world.world
Sonar and IR sensor rays can be seen in the simulation world. To see the sensor readings, subscribe to the appropriate topics. See the commands below.
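For example (assuming the topic names set in the plugin were sensor/ir_front and sensor/sonar_front):

rostopic list
rostopic echo /sensor/ir_front
rostopic echo /sensor/sonar_front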
Like this, all the sensors (LiDAR, camera, IMU) can be integrated into the robot model. This helps a lot in validating algorithms and finding the optimal sensor positions without fully building the actual hardware.
As an interlude from checking out Blender/Phobos, I saw that the big free robot technology effort of the 2010s was probably ROS, the Robot Operating System, and since it has been around for a long time, it has its own established ways from the pre-Phobos days. Before these kids could just click their doodads and evolve silicon lifeforms whenever they felt like it.
So yeah, what the hell is catkin? It's like how Ruby on Rails wants you to ask it to generate boilerplate for you. So we had to run:
mkdir chicken_project
cd chicken_project/
ls   # (nothing here, boss)
mkdir src
cd src
catkin_init_workspace
cd ..
catkin_make
source /opt/ros/chicken_project/devel/setup.bash
So, starting 53 seconds in:
Ok so his time guesstimate is a shitload of time. We’re going to go for the shoot first approach. Ok he copy-pastes some code, and apparently you have to copy it from the youtube video. No thanks.
So this was his directory structure, anyway.
cd src
catkin_create_pkg my_simulations
cd my_simulations/
mkdir launch
cd launch/
touch my_world.launch
cd ..
mkdir world
cd world
touch empty_world.world
That is funny. Touch my world. But he’s leading us on a wild goose chase. We can’t copy paste from youtube, dumbass.
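Since we're not transcribing from a video, here's my guess at a bare-bones my_world.launch, assuming the stock empty_world.launch from gazebo_ros and the my_simulations package created above (a sketch, not his exact file):

<launch>
  <!-- guessed contents: include Gazebo's empty world and point it at our own world file -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find my_simulations)/world/empty_world.world"/>
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="gui" value="true"/>
  </include>
</launch>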
Kinematics is the study of the motion of bodies without regard to the forces that cause the motion. Dynamics, on the other hand, is the study of the motion of bodies due to applied forces (think F=ma). For example, consider orbital mechanics: Kepler's laws are kinematic, in that they describe characteristics of a satellite's orbit, such as its elliptical shape, without considering the forces that cause that motion, whereas Newton's law of gravity is dynamic, as it incorporates the force of gravity to describe why the orbit is elliptical.
“As of release 0.8 of Phobos, we only support Blender 2.79. This means it will not function properly any more for older Blender versions and might not function with later versions; Blender 2.8 is expected to include major changes that will not be compatible with Phobos.”
Essentially, what you want to build is a hierarchy of objects, parented to one another in such a way that pairs of objects can be connected with joints to represent the robot's kinematics later on. For this purpose, it is easiest to build the robot in its rest pose, i.e. the way it will look when all its joints are at their origin positions.
root@chrx:/opt/imagezmq/tests# python3
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'cv2.cv2' has no attribute '__version'
>>> cv2.__version__
'4.2.0'
Just gotta make sure it uses python3, not python2.
The robot has to use its sensors to find chickens and eggs, and decide whether to walk or turn. Stimuli to trigger actions.
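To pin that down a bit, a hypothetical stimulus-to-action loop might be shaped like this (detect_chicken, detect_egg, get_camera_frame, and the motion commands are made-up placeholders, not code from this project):

import time

def get_camera_frame():
    # placeholder: grab a frame from whatever camera stream we end up using
    return None

def detect_chicken(frame):
    # placeholder: some vision routine returning True/False
    return False

def detect_egg(frame):
    # placeholder: some vision routine returning True/False
    return False

while True:
    frame = get_camera_frame()
    # stimulus -> action: walk toward anything interesting, otherwise turn and keep scanning
    if detect_chicken(frame) or detect_egg(frame):
        print('walk')   # stand-in for the real walk command
    else:
        print('turn')   # stand-in for the real turn command
    time.sleep(0.1)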
pip install zmq
Might also need ‘pip install imutils’
Copying imagezmq.py to the tests folder cause I don’t know shit about python and how to import it from the other folder. So here’s the server:
python3 test_1_receive_images.py
import sys
import cv2
import imagezmq
image_hub = imagezmq.ImageHub()
while True:  # press Ctrl-C to stop image display program
    image_name, image = image_hub.recv_image()
    cv2.imshow(image_name, image)
    cv2.waitKey(1)  # give the window 1 ms to refresh
    image_hub.send_reply(b'OK')
And the client program:
python3 test_1_send_images.py
import sys
import time
import numpy as np
import cv2
import imagezmq
# Create a test image to send:
# a green square on a black background
sender = imagezmq.ImageSender()
i = 0
image_window_name = 'From Sender'
while True:  # press Ctrl-C to stop image sending program
    # Increment a counter and print its value to the console
    i = i + 1
    print('Sending ' + str(i))
    # Create a simple image: a green rectangle on a black background
    image = np.zeros((400, 400, 3), dtype='uint8')
    green = (0, 255, 0)
    cv2.rectangle(image, (50, 50), (300, 300), green, 5)
    # Add the counter value to the image and send it
    cv2.putText(image, str(i), (100, 150), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 255), 4)
    sender.send_image(image_window_name, image)
    time.sleep(1)
Cool it sent pics to my screen. Heh “Hershey Simplex” sounds more like a virus than a font.