Recent blog entries for wildmage

Lab Work and Robotics Wiki

It's been a while since my last diary entry, but I've been working hard on my lab projects. Unfortunately I can't talk much about it until we get our NASA contract signed.

In other news, I've launched a robotics wiki to try and make a collaborative resource for robotics. It's got a lot of stuff in it already for being such a new wiki. I've also decided to make the content available under a Creative Commons license, so that it's essentially a free resource.

Most of my time is spent on IRC chatting about robots and helping people with their projects. You can find me on irc.freenode.org in #robotics. If you don't know how to use IRC, read here.

The wiki URL is: roboticswiki.org

I'm also very interested in compiling a robot zoo, or an exhaustive list of robots past and present. My vision for it is to become a kind of historical record of the robot population. I have to acknowledge that this is a highly difficult task, but I think it can be accomplished with a collaborative effort.

Jacob Everist

Robosphere 2004, Nov. 9-10 Eyewitness Account (Part 2)

I should mention that as a condition for being allowed to attend the Robosphere workshop for free, I was supposed to volunteer my time to help with the proceedings. I spent the early part of lunch getting briefed on my responsibilities. My job title for the first day was Miscellaneous. I was essentially supposed to sit near the entrance hall and take care of anything that didn't fall under the category of registration, audio/visual, or video recording. I think the only thing I did was to ask someone if they needed anything. They said "no".

So after briefing, I went to get some lunch. I think the first person I met was Greg Hornby. He is a former student of Jordan Pollack at Brandeis University and is now working at NASA Ames Research Center. I didn't know this at the time, but I did ask him a little about his work. He mentioned that his work was on evolutionary algorithms. I wasn't particularly interested at the time because I've heard that term thrown about so often that I just considered it a dead-end buzzword. I came to appreciate it later in the workshop.

His work is essentially in evolutionary algorithms for computer-automated design. The way this works is that you give the computer a bunch of parameters, a task, and a simulation environment. The computer then proceeds to make up a robot design and test it in the simulation environment. If it does well, it tries to improve on that design. If it does poorly, it discards those changes and tries another route. What's interesting is that this is done in a way analogous to natural selection in evolution. Many populations of different robots with different designs are tested in a simulation environment. The poor performers are terminated and the good performers are allowed to continue existing. In addition, the good performers are allowed to "mate" and create hybrid designs that try to combine the best features of both performers.
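
Just to make that concrete, here is a minimal sketch in Python of the select-and-mate loop as I understand it. The parameters, the stand-in fitness function, and all the numbers are invented for illustration; the real work evolves full robot designs inside a physics simulator rather than a five-number vector.

    import random

    # Toy illustration of the evolutionary design loop described above.
    # A "design" is just a vector of numeric parameters (e.g. limb lengths);
    # fitness() stands in for running the design in the simulation environment.

    POP_SIZE = 20
    N_PARAMS = 5
    GENERATIONS = 50

    def random_design():
        return [random.uniform(0.0, 1.0) for _ in range(N_PARAMS)]

    def fitness(design):
        # Placeholder score: pretend the task favors parameters near 0.7.
        return -sum((p - 0.7) ** 2 for p in design)

    def crossover(a, b):
        # "Mate" two good performers: take each parameter from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(design, rate=0.1):
        return [p + random.gauss(0, 0.05) if random.random() < rate else p
                for p in design]

    population = [random_design() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        # Poor performers are terminated; the best half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Survivors mate and mutate to refill the population with hybrid designs.
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print("best design:", max(population, key=fitness))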

I think I talked more about this here in my Oct. 22 entry last year. This is a very interesting approach to robot design that is starting to show that it has more and more useful applications.

Someone else I met at lunch was Greg Chirikjian and one of his students, Kiju Lee, from Johns Hopkins University. His work involves self-replicating robots. I saw him once at a robotics conference in New Orleans, but didn't have a chance to talk to him.

The idea of self-replicating robots is that you send a small colony of robots to some place like the Moon or Mars. Once there, they utilize the resources that they find in the soil or regolith, synthesize this into parts, and make copies of themselves. This essentially allows the robots to multiply completely on their own and saves you lots of time and money in shipping extra robots and parts to these extra-terrestrial environments.

This may sound cool or scary, but it's a really, really difficult problem. The idea has been around for quite a while, but it hasn't been taken very seriously because the technology just isn't there. Chirikjian has shown several demonstrations of an actual self-replicating robot making a copy of itself in an autonomous manner. Granted, the robot is made out of LEGO bricks and there are only four parts to assemble to make an extra robot copy. Nevertheless, the feat is amazing to anyone who has actually tried to do something like this. Go ahead and try to do it yourself. You'll be surprised how hard it is to make a robot that can build and assemble an exact copy of itself-- even a simplified assembly.

After lunch, it was time to head back into the conference hall and assume my post as Miscellaneous man.

The first speaker was Ashley Stroupe from NASA JPL in Pasadena, California. She's one of the staffers on the Mars Rover projects, so she immediately has a superstar presence. Not that people were asking her for her autograph, but people were interested in her work in the normal humdrum way of academics.

But this talk was not about the Mars rovers. This talk was about autonomous construction robots. More specifically, "Sustainable Cooperative Robotic Technologies for Human and Robotic Outpost Infrastructure Construction and Maintenance". Their work essentially consisted of two rover-like robots with manipulator arms operating in a sandbox. Their task was to go to the two sides of a beam, pick up the beam in a synchronized manner, turn around and carry it to the construction site, and set the beam down aligned on top of another beam. This was done in a completely autonomous manner with no human intervention. This was the culmination of about 9 months' worth of work.

I was deeply impressed. I've had experience trying to get mobile robots to interact with and manipulate objects in a precise way, and I realize all the difficulties. It's particularly difficult for mobile robots since you don't have a fixed reference frame to work with. The robot has to be adaptive and able to perceive the object well enough to know where it is and understand how its actions affect the object. Their only sensors in this feat were a camera and some force-torque sensors on the arms to keep the robots synchronized while carrying the beam. Some fiducial markers were placed on the beams, and the cameras were used to detect them so the robots could more easily position themselves and the beams.

The project at NASA is currently in very preliminary stages and it's not clear whether it will continue to be funded, but it seems to align very well with NASA's new space objectives, outlined by President Bush a year ago, which include robotic and manned missions to the Moon and Mars.

More later.

Jacob Everist

Robosphere 2004, Nov. 9-10 Eyewitness Account

Robosphere is a bi-annual robotics workshop that focuses on deploying robots into space for long-term operations. This includes adding robustness to existing robot platforms and making robots more and more self-sufficient: able to self-repair, construct habitats, utilize in-situ resources, and operate more autonomously.

I wasn't around for last year's workshop, but this year it was at NASA Ames in Moffett Field, CA, near San Jose. I live in Los Angeles, so we decided to drive up, which takes about 5.5 hours.

So three of my colleagues and I, plus Prof. Wei-Min Shen, gathered together at 5:30am and took a rental van up to San Jose. We stopped once for breakfast at a fast-food place in some town, but we spent a lot of the time napping to make up for the sleep we didn't get during the night. I'm not sure what we talked about, but I think we mostly did some joking around.

Finally, we arrived at NASA Ames at 11:15am. Only 3 hours and 15 minutes late! Sadly, we had already missed some of the speakers, who talked about habitat construction for extra-terrestrial environments. According to the program, I missed the following presentations:

  • "Mobile lunar and planetary base architectures", Marc Cohen, NASA Ames Research Center
  • "Mobitat: Mobile Modular Habitat", A. Scott Howe, Plug-in Creations Architecture
  • "LB1 - A Case Study: A Lunar Habitat as a Self-sustaining Intelligent Robotic System", Susmita Mohanty, Moonfront LLC
  • "Radiation and Micro-meteorite Shielded Lunar Habitat Formation by Semi-autonomous Robotic Excavation", Dr. Lyman Hazelton, KinetX Inc.
Notice that a lot of the speakers are from companies. I think that's particularly interesting since, in my experience at academic conferences, most people have been from universities or research institutions. There were quite a few people from companies at this workshop.

So the next section of talks was about "Robotic Colonies/Ecologies". These talks essentially boiled down to how to control many robots over a long period of time and have them adapt to the environment and needed tasks.

Anthony Enguirda of Griffith University in Australia started off by describing the concept of the robot colony and how it differs from the conventional paradigm. I arrived right in the middle of this talk, so I didn't learn very much. I'm looking at the accompanying paper in the proceedings, and it looks interesting, but it doesn't seem like there's a lot of substance. Of course a lot of this workshop was focused on wild speculation and the introduction of new ideas, so I think this is acceptable. I'll have to focus on this paper more closely when I have time.

Hamid Berenji of Intelligent Inference Systems Corp. gave a talk about Dynamic Case-Based Reasoning (DCBR). This was a method for robots to ascertain their state and recover from faults and error conditions. It looked a lot like an expert system with the capability to generalize and adapt to the situation. This seems effective, but the drawback to any system like this is that it requires a heck of a lot of pre-programming of all the fault cases you can think of.
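
For my own notes, here is a toy sketch in Python of the flavor I took away from the talk: a library of pre-programmed fault cases, where the "generalization" is nothing more than picking the nearest stored case. This is only my caricature of the idea, not the actual DCBR method, and all the symptoms and recovery actions are made up.

    # Toy case-based fault recovery: match observed symptoms to the nearest
    # pre-programmed case and return its recovery action.

    FAULT_CASES = [
        # (symptom vector, recovery action) -- entirely invented examples
        ({"wheel_current": 2.5, "velocity": 0.0}, "stall: reverse and retry path"),
        ({"wheel_current": 0.0, "velocity": 0.0}, "dead motor: power-cycle driver"),
        ({"wheel_current": 1.0, "velocity": 0.5}, "nominal: no action"),
    ]

    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)

    def diagnose(observed):
        # "Generalize" by choosing the closest stored case instead of
        # requiring an exact match on the symptoms.
        best_case = min(FAULT_CASES, key=lambda case: distance(observed, case[0]))
        return best_case[1]

    print(diagnose({"wheel_current": 2.3, "velocity": 0.05}))

The hard-coded FAULT_CASES list is exactly the drawback I mean: someone has to anticipate and pre-program every fault the robot might run into.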

Finally, Zhihua Qu of the University of Central Florida gave a talk about a control-theoretic approach to controlling a large population of robots. It basically takes a huge matrix representing the state of every robot in the population, and you add an extra row/column that represents the human controller. Then you drive this matrix with your control inputs. It's a very interesting approach and I wonder how you could apply it. It seems that in order for this control system to actually work, you need to know the state of all the robots and they need to receive your appropriate inputs. How you do this in a physical space with poor communication, I don't know. Maybe there's an assumption in here that I didn't get.
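
Here is my rough reading of that setup as a few lines of Python with NumPy. This is not Qu's actual formulation, just the picture I took away: every robot contributes one entry to a big state vector, one extra entry is appended for the human controller, and a single coupling matrix drives the whole population toward the human's commanded value. Note that it assumes exactly what I questioned above, namely that every robot's state is globally known at every step.

    import numpy as np

    N_ROBOTS = 5
    # One scalar state per robot, plus one extra entry for the human controller.
    x = np.zeros(N_ROBOTS + 1)
    x[-1] = 10.0                 # the human's commanded value (made up)

    # Coupling matrix: each robot is pulled toward a neighboring robot and toward
    # the human entry; the diagonal balances them so the update vanishes at consensus.
    A = np.zeros((N_ROBOTS + 1, N_ROBOTS + 1))
    for i in range(N_ROBOTS):
        A[i, (i + 1) % N_ROBOTS] = 0.3    # coupling to a neighboring robot
        A[i, -1] = 0.2                    # coupling to the human controller
        A[i, i] = -(0.3 + 0.2)

    dt = 0.1
    for step in range(200):
        # Only the robot entries evolve; the human's entry is held fixed.
        x[:N_ROBOTS] += dt * (A @ x)[:N_ROBOTS]

    print("robot states after 200 steps:", np.round(x[:N_ROBOTS], 2))

Run it and the five robot states settle near the human's value of 10, which is the whole point: one matrix, one set of control inputs, an arbitrary number of robots.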

After this, we went to lunch. More later.

I suppose I should introduce myself.

My name is Jacob Everist and I am a PhD student at the University of Southern California. My life-long goal is to get robots to do automated construction and assembly in all kinds of environments. This includes indoors, in the lab, in the wild, and in extra-terrestrial environments. I'm also interested in exploring various methods of robot excavation in different types of soil.

Finally, I'd like all these activities to scale up to very large projects, such as building solar arrays in the Sahara, erecting bases on the Moon, and harvesting near-Earth asteroid resources for transport to Earth.

I am interested in self-reconfigurable robots, self-assembling robots, and automated robot fabrication.

I have done some work on in-space assembly, and you can see a video of last year's work here. Scroll down to "SOLAR" and the first six videos are my work. I also published a paper on it at IROS 2004 in Sendai, Japan.

I am currently working on making a smart room that can make observations and decisions using models of the brain. This work is in a very early stage and can be found here.

Finally, I founded the IRC channel #robotics on irc.freenode.net. If anyone is interested in discussing robots, design, and ideas please join us! We have a lot of experts and we're eager to share ideas.

My homepage.

Jacob
