Older Articles


Scientists Map Skin's Sensory Nerves

Posted 8 Jan 2013 at 20:02 UTC by steve

Skin is the human body's largest sensory organ. Understanding how it works will help roboticists create more useful android skins. We're a step closer to understanding the skin's sensory system thanks to a new report from Johns Hopkins researchers. The scientists created detailed maps of the branching patterns of sensory nerves in mouse skin. The resulting maps revealed ten distinct groups that seem to correspond to differences in nerve function. For example, some nerve types gather information from a single hair follicle while others branch into groups that collect averaged information from 200 or more different locations. From the news release:

Nathans says the images now in hand will help scientists “make more sense” out of known responses to stimulation of the skin. For example, if a single nerve cell is responsible for monitoring a patch of skin a quarter of an inch square, multiple simultaneous points of pressure within that patch will only be perceived by the brain as a single signal. “That is why we can’t read Braille using the skin on our backs: the multiple bumps that make up a Braille symbol are within such a small area that the axon branches can’t distinguish them. By contrast, each sensory axon on the fingertip occupies a much smaller territory and this permits our fingertips to accurately distinguish small objects.”
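The receptive-field idea in the quote can be captured in a toy pooling model (a hypothetical illustration, not the researchers' model; the function and territory sizes are made up for the example): each axon reports one signal for its whole territory, so two pressure points inside the same territory are indistinguishable to the brain.

```python
# Illustrative sketch (not the researchers' model): each axon pools all
# pressure points inside its territory into a single signal.

def axon_signals(points, territory_size):
    """Count distinct signals for pressure points (positions in mm).

    Each axon covers one contiguous territory of `territory_size` mm;
    multiple points in the same territory merge into one signal.
    """
    territories = {int(p // territory_size) for p in points}
    return len(territories)

braille_dots = [0.0, 2.5]  # two dots about 2.5 mm apart

# Back skin: a single axon's territory spans both dots -> one signal.
print(axon_signals(braille_dots, territory_size=6.0))  # 1

# Fingertip: much smaller territories -> the dots reach different axons.
print(axon_signals(braille_dots, territory_size=1.0))  # 2
```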

For all the details on the research, including lots of diagrams and images of the nerve networks, see the paper, "Morphological diversity of cutaneous sensory afferents revealed by genetically directed sparse labeling" (PDF format). In a related news release, Johns Hopkins researchers announced the discovery of strong evidence that there are specific nerve cells responsible for itch signals, distinct from nerves involved in pain.

Read more... (0 replies)

Best Robot Photos of the Week

Posted 7 Jan 2013 at 21:16 UTC by steve

Today's edition of best robot photos of the week just goes to show that humans and robots love to hang out together. Whether it's at dance parties, school, coffee shops, bus stops, or the park, humans and robots can run into each other anywhere and enjoy some much needed digital to analog social interaction. Every week we post a collection of the best robot photos submitted by our readers to our robots.net flickr group. Why? Because everyone likes to see cool new robots! Want to see your robot here? Post it to flickr and add it to the robots.net flickr group. It's easy. If you're not already a flickr member, it's free and easy to sign up. Read on to see the best robot photos of the week!

Read more... (0 replies)

Random Robot Roundup

Posted 4 Jan 2013 at 20:34 UTC by steve

Several mainstream news items on robots this week including an interesting piece in the New Yorker titled, Why Making Robots Is So Darn Hard. Meanwhile Salon and the New York Times dragged robots into the growing debate over "profits without prosperity" - the recent phenomenon in which big corporations are making more and more profit but without the traditional increase in general prosperity among corporate employees. In the New York Times article, Robots and Robber Barons, economist Paul Krugman cites robots as one of two possible causes of the problem. Salon, in the article, Robots don't destroy jobs by economist William Lazonick, counters that for every human worker a robot replaces, it adds multiple new job opportunities for humans. The Salon article posits that the real problem is not robots but corporate abuse of profits - the fault of humans, not machines. Leaving economics behind, The Swirling Brain pointed us to some cool photos and video of a robot ornithopter made from 3D printed parts. And everybody loves a top ten list, right? The Public Library of Science (PLOS) recently posted a list of Ten Simple Rules for the Open Development of Scientific Software. Know any other robot news, gossip, or amazing facts we should report? Send 'em our way please. Don't forget to follow us on Twitter and Facebook. And now you can add us to your Google+ circles too.

Read more... (0 replies)

ROS Groovy Galapagos Released

Posted 3 Jan 2013 at 20:27 UTC by steve

The latest version of the popular robot operating system ROS, nicknamed Groovy Galapagos, was released on 31 December. The Groovy release includes a lot of changes to the core infrastructure aimed at making ROS easier to use, more modular, and more scalable. Portability has also been improved with support for most GNU/Linux distros, Android, and even some proprietary operating systems such as Mac OS X and Windows. Developers will also be happy to see that all ROS packages have been consolidated on GitHub:

Traditionally, ROS code has been scattered across numerous version control systems (git, svn, hg, etc) across different hosting services throughout the world. Though the ROS wiki has acted as a central point of documentation, issue/ticket tracking has been just as disparate as the usage of VCS tools. With ROS Groovy, an effort has been made to move core packages to GitHub along with all issue tracking. This has brought several benefits including making ROS more available to the wider open source community and providing VCS consistency for ROS packages. Most importantly, utilizing GitHub has involved the ROS community more and given it more ownership of the codebase. GitHub's pull requests have made it much easier for the core ROS development team to apply patches from the community as well as respond to design feedback more rapidly.

Developers should expect to see a few changes in the build tools as well. Stacks have been removed, rosbuild has been replaced with a new build system called catkin, the core ROS GUI tools have been consolidated into a single tool called rqt, and the Wx toolkit has been replaced with Qt. For a full list of changes, see the ROS Groovy Galapagos release notes. ROS is free software released under a variety of licenses that meet the guidelines of the Free Software Foundation and Open Source Initiative. If you'd like to try ROS, you can download the source or pre-packaged binaries for most systems.

Read more... (0 replies)

The OpenROV Project

Posted 2 Jan 2013 at 20:19 UTC by steve

The National Geographic Explorers Journal blog brought to our attention the OpenROV Project, which was recently funded by a successful Kickstarter campaign that raised over $100,000 USD. The project was founded by friends Eric and David, who wanted to build an ROV from low-cost off-the-shelf parts. The OpenROV can be used for educational purposes or for actual underwater exploration. The current version of the OpenROV is limited to a depth of 100 meters but the design is open source and you're invited to modify and improve it. In addition to the open source hardware, this little underwater robot relies on open source software running on a GNU/Linux-based embedded processor. There's a USB HD video camera and LED light arrays on board too so you can see where you're going. At present the OpenROV is strictly a DIY project that you build from the designs and source code available on the OpenROV wiki. But kits for about $750 and even fully assembled ROVs should be available soon. Read on to see the original Kickstarter video that describes the ROV and a more recent video of the ROV in action.

Read more... (0 replies)

Robots Podcast #120: Mel Torrie of Autonomous Solutions

Posted 31 Dec 2012 at 00:13 UTC by John_RobotsPodcast

In episode #120, reporter Per speaks with Mel Torrie of Autonomous Solutions, Inc., which he founded in 2000, with encouragement from John Deere, as a spin-off of the Center for Self Organizing and Intelligent Systems (CSOIS) at Utah State University (USU). From agriculture, the company branched out into mining and construction, then survived one lean period because it had also invested in golf course mowing. ASI has also participated in three DARPA challenges, supporting the University of Florida's team for both runnings of the DARPA Grand Challenge, and then as an independent entrant in the DARPA Urban Challenge. ASI has distilled its autonomous vehicle experience into a kit that can be quickly and easily installed in new vehicles. The company now markets this kit to automotive manufacturers for use in their internal testing programs, allowing them to push their cars through grueling tests more quickly than human drivers can tolerate.

Read On | Tune In

Read more... (0 replies)

TOMBOT: A Behavior-based Autonomous Robot

Posted 30 Dec 2012 at 16:11 UTC by steve

Circuit Cellar magazine recently posted in full their two-part article by Tom Kibalo on the construction of his subsumption-based mobile robot, called TOMBOT. Part 1 of the article covers construction of the hardware and Part 2 covers the subsumption software and basic behaviors for obstacle avoidance, collisions, and light tracking. The robot is a differential drive design using continuous-rotation RC servos as motors. It lacks wheel encoders. The robot sports an XBee radio, a PIC32 CPU, and a small LCD. It's a good basic introduction to behavior-based robots and well worth a read.
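The subsumption idea behind robots like TOMBOT can be sketched in a few lines (a generic illustration, not Kibalo's actual code; the behavior and sensor names here are made up): behaviors are ranked by priority, and the arbiter runs the highest-priority behavior whose trigger condition fires, suppressing everything below it.

```python
# Generic subsumption sketch (not TOMBOT's code): behaviors listed
# highest priority first; the first one that fires wins the actuators.

def collide(sensors):
    if sensors.get("bumper"):
        return "back up and turn"

def avoid(sensors):
    if sensors.get("range_cm", 999) < 20:
        return "veer away"

def track_light(sensors):
    if sensors.get("light_dir") is not None:
        return "steer toward light"

def cruise(sensors):
    return "drive forward"  # default behavior, always fires

BEHAVIORS = [collide, avoid, track_light, cruise]  # priority order

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:  # higher priority subsumes the rest
            return action

print(arbitrate({"bumper": True, "light_dir": 0.5}))  # back up and turn
print(arbitrate({"range_cm": 12}))                    # veer away
print(arbitrate({}))                                  # drive forward
```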

Read more... (0 replies)

Generalized Representational Information Theory

Posted 29 Dec 2012 at 19:35 UTC by steve

Just as researchers today struggle to find working definitions for words like consciousness and intelligence, they struggled to find a standardized meaning for the word information in the early 1900s. Ralph Hartley, a researcher at Bell Laboratories, first introduced a theory of information based on the idea that information consisted of strings of symbols, a reasonable idea in the age of the telegraph, telephone, and radio. Shannon and Weaver moved things along in the 1940s, resulting in Shannon-Weaver Information Theory (SWIT). While Hartley's theory was concerned primarily with sets of symbols, SWIT was concerned with the probability or uncertainty of events (the likelihood that a particular structure or sequence of symbols is meaningful). Both theories fall far short of describing what a modern cognitive scientist or AI researcher means when they talk about information. A newer theory, Representational Information Theory (RIT), was developed in the field of psychological research. The idea behind RIT is that communication between animals and their environment is mediated by concepts, and it looks at information in terms of complexity rather than uncertainty, as SWIT does. Its main drawback is that it supports only binary dimensions. In a new paper published in the journal Information, researchers describe a generalized version of RIT, called GRIT, that may be useful in the fields of AI and robotics:
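The contrast between Hartley's measure and Shannon's can be seen in a few lines of arithmetic (these are the standard textbook formulas, not taken from the GRIT paper): Hartley information depends only on the size of the symbol alphabet, while Shannon entropy weights each symbol by its probability.

```python
import math

def hartley_information(num_symbols):
    """Hartley (1928): information per symbol = log2 of alphabet size."""
    return math.log2(num_symbols)

def shannon_entropy(probabilities):
    """Shannon (1948): expected information, weighted by probability."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A 4-symbol alphabet carries 2 bits per symbol under Hartley's measure.
print(hartley_information(4))                     # 2.0

# Shannon agrees only when every symbol is equally likely...
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# ...and reports less information when some symbols dominate.
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36 bits
```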

"concepts live in the mental space of organisms ranging from Aplysia to insects and from dolphins to humans. Some may argue that they also live in the mental spaces of intelligent robots and expert systems. Regardless, the point is that only by using concepts as mediators can information as a measurable quantity reflect human intuitions as to what is informative."

The paper includes a technical appendix with mathematical examples of GRIT, including examples with three dimensions (shape, color, and size). For all the details, read the paper, titled, "Complexity over Uncertainty in Generalized Representational Information Theory (GRIT): A Structure-Sensitive General Theory of Information" (PDF format). The paper was written by Ronaldo Vigo of the Center for the Advancement of Cognitive Science, Psychology Department, Ohio University. Hartley's 1928 paper is available online as Transmission of Information (PDF format). Shannon's 1948 paper (on which the later Shannon-Weaver book was based) can be found as "A Mathematical Theory of Communication" (PDF format).

Read more... (0 replies)

Best Robot Photos of the Week: Xmas Edition

Posted 24 Dec 2012 at 19:32 UTC by steve

This week's edition of Best Robot Photos of the Week is a special holiday collection of Christmas robots submitted by our readers. We also received one holiday photo made by Hanukkah nanobots. No one posted photos of Kwanzaa bots or Festivus droids this year. Whatever your preferred winter holiday, just remember that Axial Tilt is the reason for the season and enjoy these photos of holiday robots. Want to see your robot photo here? Post it to flickr and add it to the robots.net flickr group. If you're not a flickr member yet, it's free and easy to sign up. Read on to see the best robot photos of the week!

Read more... (0 replies)

Brain's Semantic Mapping System Decoded

Posted 21 Dec 2012 at 21:04 UTC (updated 22 Dec 2012 at 05:11 UTC) by steve

Yet another brain mapping project has announced some pretty amazing new findings. Researchers at UC Berkeley's Gallant Lab have succeeded in decoding the semantic mapping space in which the brain stores all the information we take in. They've mapped the space both as abstract, multi-dimensional graphics and as the actual locations where the information nodes are stored in the physical brain. They've learned all sorts of new things about how the brain categorizes things. For example, one semantic dimension (abbreviated PC) of our brain space categorizes things by whether they move: cars, motorcycles, and people versus buildings, cities, and the sky. Another dimension distinguishes between things involved in social interaction (people, verbs, furniture) and things involved in less interactive outdoor activities (geological formations, animals, vehicles). They've identified four semantic dimensions so far but believe that with higher resolution scans and more work, many more will be revealed.

"Across the cortex, semantic representation is organized along smooth gradients that seem to be distributed systematically. Functional areas defined using classical contrast methods are merely peaks or nodal points within these broad semantic gradients. Furthermore, cortical maps based on the group semantic space are significantly smoother than expected by chance. These results suggest that semantic representation is analogous to retinotopic representation, in which many smooth gradients of visual eccentricity and angle selectivity tile the cortex (Engel, Glover, & Wandell, 1997; Hansen, Kay, & Gallant, 2007). Unlike retinotopy, however, the relevant dimensions of the space underlying semantic representation are not known a priori, and so must be derived empirically."

The mapping of the semantic space onto the brain reveals that as much as 20% of the brain, including parts of the somatosensory and frontal cortices, is devoted to storing these highly organized semantic maps. Less surprisingly, the maps confirm the location of previously established specialized areas. Information about humans, for example, overlaps the fusiform face area (FFA) of the brain, which is known to be involved in face recognition. For more see the paper "A continuous semantic space describes the representation of thousands of object and action categories across the human brain" (PDF format). The paper appears in Neuron, Vol. 76, Issue 6. If you're using a browser such as Google's Chrome that supports WebGL graphics, you can explore an interactive version of the researchers' semantic brain map. And read on to see examples of the semantic space mapped onto the physical brain as well as a short video describing the research.

Read more... (0 replies)
