Older blog entries for Pi Robot (starting at number 11)

ROS Head Tracking Tutorial Available

For those of you getting started with ROS, I have written up a do-it-yourself tutorial for tracking a colored object using a web cam and AX-12 pan and tilt servos. You can find it here:

ROS by Example: Visual Object Tracking
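
The tutorial goes through the details, but for a taste of what's involved, here is a rough sketch of one way to handle the pan-and-tilt part in Python (a simplified illustration of the general idea, not the code from the tutorial):

    # Nudge the pan and tilt servos toward the tracked object's pixel position.
    def update_servos(target_x, target_y, image_width, image_height,
                      pan, tilt, gain=0.1):
        """Return new pan/tilt angles (radians) given the target's pixel location."""
        # Normalized offsets in [-0.5, 0.5]; zero when the target is centered.
        offset_x = (target_x - image_width / 2.0) / image_width
        offset_y = (target_y - image_height / 2.0) / image_height
        # Proportional update: move each servo a fraction of the offset per frame.
        # (The signs depend on how the servos are mounted.)
        pan -= gain * offset_x
        tilt -= gain * offset_y
        return pan, tilt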

 

26 Nov 2010 (updated 26 Nov 2010 at 15:02 UTC) »
Robot Cartography: ROS + SLAM

In this short article on using SLAM with ROS, I have posted a couple of videos showing Pi Robot mapping out part of an apartment using a Hokuyo laser scanner and the gmapping package. See

http://www.pirobot.org/blog/0015/
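
For anyone curious what gmapping is doing with all those scans, here is a toy illustration of the occupancy-grid idea it builds on (this is not the gmapping code, just a simplified sketch): each laser beam lowers the occupancy log-odds of the cells it passes through and raises the log-odds of the cell where it ends.

    import numpy as np

    def update_grid(log_odds, free_cells, hit_cell, l_free=-0.4, l_occ=0.85):
        """Update a log-odds occupancy grid from a single laser beam."""
        for (r, c) in free_cells:      # cells the beam passed through
            log_odds[r, c] += l_free   # evidence that these cells are free
        r, c = hit_cell                # cell where the beam ended
        log_odds[r, c] += l_occ        # evidence that this cell is occupied
        return log_odds

    def to_probability(log_odds):
        """Convert log-odds back to occupancy probabilities in [0, 1]."""
        return 1.0 - 1.0 / (1.0 + np.exp(log_odds))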

Pi Robot Meets ROS

For the past several months, I have been learning the basics of ROS from Willow Garage. At the same time, I have been testing Mike Ferguson's "Poor Man's Lidar" or PML as an alternative to a more expensive laser range finder. The results are encouraging--at least for obstacle avoidance and simple navigation tasks. You can see the report at:

http://www.pirobot.org/blog/0014/
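
To give a sense of how PML data plugs into ROS, here is a rough sketch (not Mike Ferguson's actual driver) of how one sweep of readings from a panning IR/sonar sensor can be repackaged as a sensor_msgs/LaserScan message so the navigation stack can treat it like laser data. The frame name and range limits below are just placeholders:

    import math
    import rospy
    from sensor_msgs.msg import LaserScan

    def publish_pml_scan(pub, ranges, sweep_angle=math.radians(180), scan_time=1.0):
        """Publish one sweep of range readings (in meters) as a LaserScan."""
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'pml_link'          # placeholder frame name
        scan.angle_min = -sweep_angle / 2.0
        scan.angle_max = sweep_angle / 2.0
        scan.angle_increment = sweep_angle / (len(ranges) - 1)
        scan.scan_time = scan_time                 # one slow sweep per second or so
        scan.time_increment = scan_time / len(ranges)
        scan.range_min = 0.10                      # placeholder sensor limits (meters)
        scan.range_max = 5.0
        scan.ranges = ranges
        pub.publish(scan)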

10 Aug 2010 (updated 25 Aug 2010 at 14:54 UTC) »
Robot Agents, Messages and The Society of Mind

I recently converted most of the C# code for my Pi Robot project to Python. At the same time, I am changing the programming architecture to use message passing among nodes. To get started, I wrote up a little introduction to the topic at:

Robot Agents, Messages and The Society of Mind
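
As a taste of the message-passing style, here is a toy example in Python (my own illustration, not the code from the write-up): each agent runs in its own thread and communicates with the others only by posting messages to their queues.

    import queue
    import threading
    import time

    class Agent(threading.Thread):
        """A minimal agent that receives messages through its own queue."""
        def __init__(self, name):
            super().__init__(daemon=True)
            self.name = name
            self.inbox = queue.Queue()
            self.peers = {}                          # name -> other Agent

        def send(self, peer_name, message):
            self.peers[peer_name].inbox.put((self.name, message))

        def run(self):
            while True:
                sender, message = self.inbox.get()   # block until a message arrives
                self.handle(sender, message)

        def handle(self, sender, message):
            print(f"{self.name} received {message!r} from {sender}")

    # Example: a sonar agent tells a head-tracking agent about a nearby object.
    sonar = Agent("sonar")
    head = Agent("head_tracker")
    sonar.peers["head_tracker"] = head
    head.start()
    sonar.send("head_tracker", {"range_cm": 42})
    time.sleep(0.2)                                  # give the message time to arrive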

30 Apr 2010 (updated 30 Apr 2010 at 23:32 UTC) »
An Introduction to Robot Coordinate Frames

I finally had a chance to write up the math behind the Pi Robot arm tracking video. Keep in mind that I am only using the two shoulder joints in each arm--the elbow and wrist servos are fixed--so the inverse kinematics is fairly straightforward. Later on I'll have to deal with the other joints...
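
For anyone who wants the flavor of it before reading the write-up, the two-joint case roughly boils down to aiming the arm along the line from the shoulder to the target. Here is a minimal sketch, assuming a shoulder frame with x forward, y to the robot's left, and z up (those axis conventions are mine, not necessarily the ones in the write-up):

    import math

    def shoulder_angles_to_point_at(x, y, z):
        """
        Given a target position (meters) in the shoulder frame, return the two
        shoulder angles (radians) that aim the arm at the target. With the elbow
        and wrist servos fixed, pointing is all that is needed here.
        """
        yaw = math.atan2(y, x)                   # swing the arm left or right
        pitch = math.atan2(z, math.hypot(x, y))  # raise or lower the arm
        return yaw, pitch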

Here is the link to the write-up:

http://www.pirobot.org/blog/0011/

--patrick

23 Mar 2010 (updated 24 Mar 2010 at 13:51 UTC) »
Visually-Guided Grasping

Here is a follow-up video to my previous blog entry. In this video, a number of independent behavioral threads are running to enable the robot to track and grasp the green balloon. Whenever the green balloon is grasped, the robot turns its attention to the red balloon. When the green balloon is released, tracking returns to the green balloon and the red balloon is ignored. I use RoboRealm to do the green/red tracking. A sonar sensor on the inside of the left hand tells the robot when something is ready to be grasped. The robot can also detect this using vision alone along with some trigonometry, but the result is more reliable with the sonar sensor.
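
For the curious, here is a toy sketch of the "independent behavioral threads" idea in Python (a simplified illustration, not the actual Pi Robot code; the sensor function and the 5 cm grasp threshold are made up for the example):

    import threading
    import time

    class SharedState:
        """State shared between behaviors, protected by a lock."""
        def __init__(self):
            self.lock = threading.Lock()
            self.target_color = 'green'   # which balloon the tracker should follow
            self.object_in_hand = False   # set by the grasp-monitor behavior

    def grasp_monitor(state, read_hand_sonar):
        """Watch the hand sonar and switch attention once something is grasped."""
        while True:
            grasped = read_hand_sonar() < 0.05   # made-up 5 cm grasp threshold
            with state.lock:
                state.object_in_hand = grasped
                state.target_color = 'red' if grasped else 'green'
            time.sleep(0.1)

    # Each behavior runs in its own thread, for example:
    # threading.Thread(target=grasp_monitor, args=(state, read_hand_sonar), daemon=True).start()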

--patrick

http://www.pirobot.org

Robotic Eye-Hand Coordination

I just finished up some work on using RoboRealm to guide my robot as it reaches toward a target object. The ultimate goal is for the robot to be able to pick up the object from a random location or take it from someone's hands. For now, I simply wanted to work out the coordinate transformations from visual space to arm space to get the two hands to point in the right direction as the target is moved about. The following video shows the results so far:

I don't have a full write-up yet on how I did this, but it basically uses 3-D coordinate transformations from the head angles and the distance to the target (as measured by sonar and IR sensors mounted near the camera lens) into a frame of reference attached to each shoulder joint. The Dynamixel AX-12 servos are nice for this application since they can be queried for their current position. The distance to the balloon as measured by the sonar and IR sensors is a little hit-and-miss, and I think I'd get better performance using stereo vision instead.
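
Roughly speaking, the calculation looks something like the sketch below (a simplified version using my own axis conventions of x forward, y to the robot's left, and z up; the actual code has more to it):

    import numpy as np

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def target_in_shoulder_frame(pan, tilt, distance, shoulder_to_head):
        """
        pan, tilt:        head servo angles in radians (queried from the AX-12s)
        distance:         range to the target along the camera axis (sonar/IR), meters
        shoulder_to_head: fixed offset from the shoulder origin to the head/camera
                          origin, expressed in the torso frame
        Returns the target position in the shoulder's frame of reference.
        """
        # The target sits straight ahead of the camera at the measured distance.
        target_in_camera = np.array([distance, 0.0, 0.0])
        # Rotate into the torso frame using the head pan and tilt angles.
        target_from_head = rot_z(pan) @ rot_y(tilt) @ target_in_camera
        # Shift the origin from the head to the shoulder.
        return shoulder_to_head + target_from_head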

--patrick

http://www.pirobot.org

8 Jan 2010 (updated 8 Jan 2010 at 22:32 UTC) »

Hello,

I put together a new robot using Dynamixel AX-12+ servos, and I wanted to test an algorithm for tracking a moving object. The camera is a D-Link 920 wireless camera operating over 802.11g, and the visual tracking is done using RoboRealm. All processing is done on my desktop PC. The full writeup can be found here:

http://www.pirobot.org/blog/0008/
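
One small detail that comes up with the AX-12s: the servos take goal positions as register values from 0 to 1023 spread over roughly a 300-degree range, with 512 near the center, so the tracking angles have to be converted. Here is a quick sketch of that conversion (my own illustration, not code from the writeup):

    def angle_to_ax12_position(angle_deg):
        """Convert a joint angle in degrees (0 = centered) to an AX-12 goal position."""
        position = int(round(512 + angle_deg * 1023.0 / 300.0))
        return max(0, min(1023, position))   # clamp to the valid 0-1023 range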

--patrick

24 Nov 2009 (updated 24 Nov 2009 at 05:09 UTC) »

This video demonstrates learning by example in an artificial neural network that controls the motion of a mobile robot. The robot uses four sonar sensors and three IR sensors to detect the ranges to nearby objects. A wireless controller is used to initially remote control the robot past some test objects while the robot records the sensor readings and motor control signals. This data is then used to train a 2x7 artificial neural network (2 motor outputs by 7 sensor inputs). Once the network is trained, it is used to control the robot without intervention from the operator.
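
Here is a rough sketch of the training setup in Python (a simplified illustration, not the actual code; the array names and learning rate are placeholders): record sensor readings and motor commands while driving by remote control, then fit a linear network to the recorded pairs with a simple error-correction update.

    import numpy as np

    def train_from_demonstration(sensor_log, motor_log, learning_rate=0.01, epochs=50):
        """
        sensor_log: N x 7 array of sonar/IR readings recorded during teleoperation
        motor_log:  N x 2 array of the operator's motor commands at the same times
        Returns a 2x7 weight matrix mapping sensor readings to motor commands.
        """
        X = np.asarray(sensor_log, dtype=float)
        Y = np.asarray(motor_log, dtype=float)
        W = np.random.uniform(-0.1, 0.1, size=(2, 7))
        for _ in range(epochs):
            for x, y in zip(X, Y):
                error = y - W @ x                        # teacher minus network output
                W += learning_rate * np.outer(error, x)  # simple error-correction update
        return W

    def drive(W, sensors):
        """After training, the network maps sensor readings directly to motor commands."""
        return W @ np.asarray(sensors, dtype=float)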

For more information, see http://www.pirobot.org/blog/0007/

 

21 Nov 2009 (updated 21 Nov 2009 at 14:59 UTC) »

This is a followup to my earlier post describing the use of a simple neural network to control a light following robot. In the original demonstration, the connections between input and output neurons were hard coded with values that were known to steer the robot in the right way. In the current demonstration, the neural network is initialized with random connections and the correct behavior has to be learned.

In the video below, the robot begins with a random 2x2 neural network that controls the motors based on the values of the two light sensors mounted on the front. A supervised learning algorithm employing the Delta Rule trains the network, with a known solution providing the teaching signals five times per second. At the beginning of the video, you can see that the robot turns away from the light and even goes backward. However, within 10-15 seconds, the network is already sufficiently trained to follow the light beam.
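
For those who want to see the update itself, here is a minimal sketch of a Delta Rule step for this 2x2 case (my own simplified version, not the robot's actual code):

    import numpy as np

    def delta_rule_step(W, sensors, teacher, learning_rate=0.1):
        """One supervised update of the 2x2 network (the weights start out random)."""
        x = np.asarray(sensors, dtype=float)       # [left_light, right_light]
        target = np.asarray(teacher, dtype=float)  # desired [left_motor, right_motor]
        output = W @ x                             # the network's current motor commands
        error = target - output
        W += learning_rate * np.outer(error, x)    # Delta Rule: dW = eta * error * input
        return W, output

    # Start from random weights and apply the update about five times per second.
    W = np.random.uniform(-1.0, 1.0, size=(2, 2))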

For more information, see http://www.pirobot.org/blog/0006/

 

