Older blog entries for Pi Robot (starting at number 10)

26 Nov 2010 (updated 26 Nov 2010 at 15:02 UTC) »
Robot Cartography: ROS + SLAM

In this short article on using SLAM with ROS, I have posted a couple of videos showing Pi Robot mapping out part of an apartment using a Hokuyo laser scanner and the gmapping package. See


Pi Robot Meets ROS

For the past several months, I have been learning the basics of ROS from Willow Garage. At the same time, I have been testing Mike Ferguson's "Poor Man's Lidar" or PML as an alternative to a more expensive laser range finder. The results are encouraging--at least for obstacle avoidance and simple navigation tasks. You can see the report at:


10 Aug 2010 (updated 25 Aug 2010 at 14:54 UTC) »
Robot Agents, Messages and The Society of Mind

I recently converted most of the C# code for my Pi Robot project to Python. At the same time, I am changing the programming architecture to use message passing among nodes. To get started, I wrote up a little introduction to the topic at:

Robot Agents, Messages and The Society of Mind
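Sketched below (this is not the actual Pi Robot code, just a minimal illustration of the idea) is one way message passing among nodes can look in Python: a bus object where nodes subscribe to named topics and receive messages through thread-safe queues. The `MessageBus` class and the topic name are my own inventions for the example.

```python
import queue
import threading

class MessageBus:
    """Minimal publish/subscribe hub: each subscriber gets its own inbox queue."""

    def __init__(self):
        self._subscribers = {}        # topic -> list of inbox queues
        self._lock = threading.Lock()

    def subscribe(self, topic):
        """Register interest in a topic; returns this node's inbox queue."""
        inbox = queue.Queue()
        with self._lock:
            self._subscribers.setdefault(topic, []).append(inbox)
        return inbox

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        with self._lock:
            inboxes = list(self._subscribers.get(topic, []))
        for inbox in inboxes:
            inbox.put(message)

# Example: a "sonar" node publishes a range reading; a "motor" node receives it.
bus = MessageBus()
motor_inbox = bus.subscribe("sonar_range")
bus.publish("sonar_range", 0.42)
```

Because each node only sees its own inbox, nodes can run in separate threads without knowing about one another, which is the appeal of the architecture.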

30 Apr 2010 (updated 30 Apr 2010 at 23:32 UTC) »
An Introduction to Robot Coordinate Frames

I finally had a chance to write up the math behind the Pi Robot arm tracking video. Keep in mind that I am only using the two shoulder joints in each arm--the elbow and wrist servos are fixed--so the inverse kinematics is fairly straightforward. Later on I'll have to deal with the other joints...
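The two-joint pointing geometry can be sketched like this (my own frame convention and function names, not necessarily what runs on Pi Robot): with the elbow and wrist fixed, aiming the straight arm at a target reduces to two atan2 calculations.

```python
import math

def shoulder_angles(x, y, z):
    """
    Pointing 'IK' for a two-DOF shoulder with the elbow and wrist fixed.
    Assumed frame (mine, for illustration): x forward, y to the robot's
    left, z up, origin at the shoulder joint.
    Returns (yaw, pitch) in radians that aim the straight arm at (x, y, z).
    """
    yaw = math.atan2(y, x)                    # swing the arm toward the target in the horizontal plane
    pitch = math.atan2(z, math.hypot(x, y))   # then raise or lower it to the target's elevation
    return yaw, pitch
```

A target straight ahead at shoulder height gives (0, 0); one directly to the left gives a yaw of pi/2 with zero pitch.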

Here is the link to the write-up:



23 Mar 2010 (updated 24 Mar 2010 at 13:51 UTC) »
Visually-Guided Grasping

Here is a follow-up video to my previous blog entry. In this video, a number of independent behavioral threads run concurrently to enable the robot to track and grasp the green balloon. Whenever the green balloon is grasped, the robot turns its attention to the red balloon; when the green balloon is released, tracking returns to it and the red balloon is ignored. I use RoboRealm to do the green/red tracking. A sonar sensor on the inside of the left hand tells the robot when something is ready to be grasped. The robot can also detect this using vision alone plus some trigonometry, but the result is more reliable with the sonar sensor.
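The attention-switching logic can be sketched roughly as follows (illustrative names and threshold, not the actual code): the hand-sonar behavior sets a shared flag, and the tracking behavior reads that flag to pick its current target.

```python
import threading

# Set by the sonar-monitor behavior whenever the hand is holding something.
hand_full = threading.Event()

def current_target():
    """Attention rule from the video: track the green balloon until it is
    grasped, then attend to the red one; on release, return to green."""
    return "red" if hand_full.is_set() else "green"

def sonar_monitor(read_range_cm):
    """One polling step of the hand-sonar behavior (on the robot this runs
    in its own thread). read_range_cm stands in for the real sensor query;
    the 5 cm threshold is illustrative."""
    if read_range_cm() < 5.0:      # something is inside the left hand
        hand_full.set()
    else:
        hand_full.clear()
```

On the robot each behavior runs in its own loop; the `Event` is what keeps the threads decoupled from one another.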



Robotic Eye-Hand Coordination

I just finished up some work on using RoboRealm to guide my robot as it reaches toward a target object. The ultimate goal is for the robot to be able to pick up the object from a random location or take it from someone's hands. For now, I simply wanted to work out the coordinate transformations from visual space to arm space to get the two hands to point in the right direction as the target is moved about. The following video shows the results so far:

I don't have a full write-up yet on how I did this, but it basically uses 3-D coordinate transformations from the head angles and the distance to the target (as measured by sonar and IR sensors mounted near the camera lens) to a frame of reference attached to each shoulder joint. The Dynamixel AX-12 servos are nice for this application since they can be queried for their current position. The distance to the balloon as measured by the sonar and IR sensors is a little hit-and-miss, and I think I'd get better performance using stereo vision instead.
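A rough sketch of the kind of transformation involved (my own frame convention, function names, and offsets, for illustration only): project the range reading out along the camera axis using the current head angles, then translate the point to the shoulder origin.

```python
import math

def target_in_shoulder_frame(pan, tilt, range_m, shoulder_offset):
    """
    Map a visually tracked target into a shoulder-centered frame.
    pan/tilt are the head servo angles in radians (read back from the
    AX-12s), range_m is the sonar/IR distance along the camera axis, and
    shoulder_offset is the shoulder origin expressed in the head frame.
    Assumed axes (mine): x forward, y to the robot's left, z up.
    """
    # The target sits range_m out along the camera axis, which is the
    # forward axis rotated by the current pan (about z) and tilt (about y).
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    ox, oy, oz = shoulder_offset
    return (x - ox, y - oy, z - oz)   # pure translation from head origin to shoulder origin
```

With the head level and the target 1 m straight ahead, a shoulder mounted 15 cm to the right and 20 cm below the head sees the target at (1.0, 0.15, 0.2).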



8 Jan 2010 (updated 8 Jan 2010 at 22:32 UTC) »


I put together a new robot using Dynamixel AX-12+ servos, and I wanted to test an algorithm for tracking a moving object. The camera is a DLink 920 wireless camera operating over 802.11g, and the visual tracking is done using RoboRealm. All processing is done on my desktop PC. The full writeup can be found here:
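The tracking loop amounts to a simple proportional controller on the blob position reported by the vision system; here is an illustrative sketch (the gains, image size, and sign conventions are my assumptions, not the actual implementation):

```python
def track_step(pan, tilt, cx, cy, img_w=320, img_h=240, gain=0.05):
    """
    One update of a proportional head-tracking loop. (cx, cy) is the
    tracked blob centre reported by the vision system (RoboRealm in the
    video); the pan/tilt servo angles are nudged toward centring the blob.
    Assumed convention: positive pan turns the head left, positive tilt up.
    """
    err_x = (cx - img_w / 2) / (img_w / 2)   # -1..1, positive means target is right of centre
    err_y = (cy - img_h / 2) / (img_h / 2)   # -1..1, positive means target is below centre
    return pan - gain * err_x, tilt - gain * err_y
```

Run at the camera frame rate, each small correction keeps the target near the image centre without needing an explicit motion model.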



24 Nov 2009 (updated 24 Nov 2009 at 05:09 UTC) »

This video demonstrates learning by example in an artificial neural network that controls the motion of a mobile robot. The robot uses four sonar sensors and three IR sensors to detect the ranges to nearby objects. A wireless controller is first used to remote-control the robot past some test objects while the robot records the sensor readings and motor control signals. This data is then used to train a 2x7 artificial neural network (2 motors and 7 sensors). Once the network is trained, it is used to control the robot without intervention from the operator.

For more information, see http://www.pirobot.org/blog/0007/
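The training step is the standard delta rule applied to the recorded (sensor, motor) pairs; here is a stand-alone illustrative sketch (the data format, learning rate, and function names are my own, not the actual code):

```python
import random

def train_delta(samples, lr=0.1, epochs=200, seed=0):
    """
    Train a 7-sensor -> 2-motor linear network by the delta rule on
    recorded (sensors, motors) pairs, as gathered during the
    remote-control session. Weights start random, as in the demo.
    """
    rng = random.Random(seed)
    w = [[rng.uniform(-0.5, 0.5) for _ in range(7)] for _ in range(2)]  # 2 motors x 7 sensors
    for _ in range(epochs):
        for sensors, motors in samples:
            for m in range(2):
                out = sum(w[m][s] * sensors[s] for s in range(7))
                err = motors[m] - out                    # teaching signal: recorded minus predicted
                for s in range(7):
                    w[m][s] += lr * err * sensors[s]     # delta-rule weight update
    return w

def run(w, sensors):
    """Drive the motors from the sensors using the trained weights."""
    return [sum(w[m][s] * sensors[s] for s in range(7)) for m in range(2)]
```

After training, `run` replaces the human operator: the same sensor readings that were logged now produce the motor commands directly.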


21 Nov 2009 (updated 21 Nov 2009 at 14:59 UTC) »

This is a follow-up to my earlier post describing the use of a simple neural network to control a light-following robot. In the original demonstration, the connections between input and output neurons were hard-coded with values that were known to steer the robot correctly. In the current demonstration, the neural network is initialized with random connections and the correct behavior has to be learned.

In the video below, the robot begins with a random 2x2 neural network for controlling the motors based on the values of the two light sensors mounted on the front. A supervised learning algorithm employing the Delta Rule is used to train the network by utilizing a known solution to provide the teaching signals five times per second. At the beginning of the video, you can see that the robot turns away from the light and even goes backward. However, within 10-15 seconds, the network is already sufficiently trained to follow the light beam.

For more information, see http://www.pirobot.org/blog/0006/
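The delta-rule update itself can be sketched like this (illustrative code, not the robot's actual implementation): the known-good crossed wiring acts as the teacher, and the random 2x2 network is nudged toward it after every sensor reading.

```python
def delta_step(w, lights, teacher, lr=0.2):
    """
    One delta-rule update of the 2x2 light-follower network.
    w[m][s] connects light sensor s to motor m; `teacher` is the known
    solution that supplies the training signal (five times per second in
    the video). The learning rate here is illustrative.
    """
    targets = teacher(lights)
    for m in range(2):
        out = w[m][0] * lights[0] + w[m][1] * lights[1]
        err = targets[m] - out
        w[m][0] += lr * err * lights[0]
        w[m][1] += lr * err * lights[1]
    return w

def crossed_teacher(lights):
    """Known solution: each motor is driven by the opposite light sensor,
    which steers a differential-drive robot toward the brighter side."""
    return [lights[1], lights[0]]
```

Starting from random weights, repeated calls drive the network toward the crossed wiring, which matches the 10-15 seconds of visible improvement in the video.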


7 Oct 2009 (updated 8 Oct 2009 at 01:15 UTC) »

Greetings Roboteers,

I just finished up a little demo on the use of a simple artificial neural network (ANN) to control a mobile robot. The demonstration is only meant to introduce the concepts and terminology of neural nets rather than to be something particularly useful. Also, this blog entry does not deal with *learning* in ANNs, which is what they are most famous for. That will be the topic of a forthcoming blog entry and demo.

Here is the link to the report. If you get bored with the math at the beginning, you can scroll down toward the end where there is a YouTube video demonstrating the robot in action.
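For readers who want the gist before the report: a neuron here is just a weighted sum of its inputs, and the demo's network is two such neurons with hand-set weights (the exact weights below are illustrative, not the ones from the report):

```python
def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: weighted sum of its inputs plus a bias.
    The demo needs no nonlinearity or learning."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def light_follower(left_light, right_light):
    """Two output neurons, one per motor, with hand-set crossed weights:
    the brighter side drives the opposite motor harder, turning a
    differential-drive robot toward the light."""
    left_motor = neuron([left_light, right_light], [0.0, 1.0])
    right_motor = neuron([left_light, right_light], [1.0, 0.0])
    return left_motor, right_motor
```

With the light entirely on the left, the right motor runs at full speed and the left motor stops, spinning the robot toward the light.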


