Older blog entries for jwp9447 (starting at number 60)

Robotic Surrogate Takes Your Place at Work

Having one of those days where even a hearty bowl of Froot Loops and Jack Daniel's can't get you out of bed? A telepresence robot can come into the office for you, elevating telecommuting to a decidedly new level. The somewhat humanoid 'bots, produced by Mountain View, California-based Anybots, are controlled via video-game-like controls from your laptop, allowing you to be "present" without actually being in the office.

The robots are equipped with a screen displaying your smiling mug to your co-workers, as well as a camera that beams a feed straight to your computer screen. They're also mobile, so you can still drop by co-workers' offices unannounced, presumably to let them know that a robot will be filling in for you today. There are a few variations that Anybots is toying with, including one with long arms and hands, and another with a laser pointer that, while useful for pointing things out in a physical environment, will prove slightly less cool than Johnny 5's shoulder-mounted laser cannon.

In all seriousness, Anybots plans to have a telepresence robot to market in the second half of next year. While it may seem to be little more than an expensive videoconferencing link (the product is expected to retail between $10,000 and $15,000), it could be a valuable tool in workplace environments where a more physical presence is handy, such as for U.S. businesses trying to keep an eye on manufacturing floors overseas. After all, you never know when a surrogate might come in handy.

All right, all right. See the QA robot operating from both sides of the link in the (real) videos below.

[Anybots via Technology Review]

Syndicated 2009-11-17 21:16:26 from Popular Science - robots

Doctors Equip Yorkshire Man With Cyborg Sphincter

Meet Ged Galvin, the Steve Austin of colorectal surgery. After a car crash in which Galvin almost died, surgeons at Royal London Hospital realized they could rebuild his crushed organs. Stronger. Faster. They had the technology to give him a cyborg colon.

"The operation changed my life and gave me back my pride and confidence," Galvin told the Daily Telegraph.

After he spent a short time with a colostomy bag, the doctors surgically removed muscle from Galvin's leg, fashioned it into an ersatz sphincter, and surgically implanted electrodes into the new muscle ring. Doctors then installed the device in Galvin.

Now, Galvin is free from the indignity of a colostomy bag. He controls his functions with a small remote control, about the size of a cell phone, that operates the electrodes. Now he can do his business at his leisure, although the muscles and electrodes will need replacement every five years. "Because of the remote control I can lead a normal life again," Galvin said.

Armed with his new sense of confidence, all Galvin has to do now to steal Steve Austin's crown is fight Sasquatch.

[Daily Telegraph, via Geekologie]

Syndicated 2009-11-17 16:47:08 from Popular Science - robots

Robo-Negotiator Talks Down Armed Lunatic

Hostage situations are often described like explosive devices, as ticking time bombs waiting to go off. And just as bomb disposal units have robots to help with their job, now police negotiators have a bot of their own for defusing a different kind of explosive situation.

In Colorado, a heavily armed 61-year-old man had barricaded himself in his home. Afraid that the volatile suspect would shoot anyone sent in to negotiate his surrender, the police found themselves at an impasse. Then, they sent in the robot.

The police equipped a bomb disposal robot with a microphone, a camera, and speakers, and maneuvered it into the house. An operator navigated the droid to the suspect, and then began to talk him down. Eventually, the man surrendered to the robot, and was taken into police custody.

When Murphy said "dead or alive, you're coming with me," the "alive" option rarely panned out. This robocop, however, already seems to be getting better results.

[Glenwood Springs Post Independent, via The Register]

Syndicated 2009-11-12 16:33:14 from Popular Science - robots

11 Nov 2009 (updated 15 Nov 2009 at 11:28 UTC)

Soccer-Ball-Sized Submersible Robots Will Track Ocean Currents and Disasters at Sea

The National Science Foundation has awarded almost $1 million to develop a swarm of underwater robotic explorers

Hundreds of soccer-ball-sized robot drones could soon ply the friendly waves to help scientists track ocean currents and harmful algae blooms, or even swarm to disaster sites such as oil spills and airplane crashes. That's no mere flight of fancy, now that the National Science Foundation has provided almost $1 million in funding to researchers at the Scripps Institution of Oceanography in San Diego.
The underwater swarm would coordinate with larger mothership drones as they move around and gauge the physics of ocean currents. Such information might allow researchers to scout out critical nursery habitats in protected marine areas, and might likewise lead salvage teams to recover the black boxes from airplane crash sites.


"You put 100 of these AUEs [Autonomous Underwater Explorers] in the ocean and let 'er rip," said Peter Franks, an oceanographer at Scripps. "We'll be able to look at how they spread apart and how they move to get a sense of the physics driving the flow."

More data gathered over time could also feed into better ocean models that try to capture the ocean weather and climate.

Scripps researchers first plan to build five or six prototypes the size of soccer balls, along with 20 smaller versions. They would join a growing fleet of underwater robots ranging from U.S. Navy submarine drones to ring-wing robots designed for oil exploration.

[via PhysOrg]

Syndicated 2009-11-11 19:02:03 from Popular Science - robots

Neuron-Like Computer Chips Could Portably Digitize Human Brain

Simulating the brain with traditional chips would require impractical megawatts of power. One scientist has an alternative

According to Kwabena Boahen, a computer scientist at Stanford University, a robot with a processor as smart as the human brain would require at least 10 megawatts to operate. That's the output of a small hydroelectric plant. But a small group of computer scientists may have hit on a new neural supercomputer that could someday emulate the human brain's low power requirements of just 20 watts -- barely enough to run a dim light bulb.

Discover Magazine has the story on how the Neurogrid computer could completely overhaul the traditional approach to computers. It trades the extreme precision of digital transistors for the brain's chaotic approach, in which individual neurons misfire 30 to 90 percent of the time. Yet the brain works with this messy system by relying on crowds of neurons to shout over the noise of misfires and competing signals.
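
Why does shouting over the noise work? A quick back-of-the-envelope simulation (my own illustration, not anything from Boahen's group) shows that a majority vote across many unreliable "neurons" recovers a signal that any single one would frequently get wrong, so long as each neuron is right more often than not.

```python
import numpy as np

rng = np.random.default_rng(42)

def population_vote(signal, n_neurons, misfire_rate):
    """Each neuron should report `signal` (0 or 1) but flips its answer with
    probability `misfire_rate`; the population's answer is a majority vote."""
    correct = rng.random(n_neurons) > misfire_rate
    spikes = np.where(correct, signal, 1 - signal)
    return int(spikes.mean() > 0.5)

signal = 1  # the "true" answer the population is trying to report
for misfire in (0.30, 0.45):
    for n in (1, 10, 100, 1000):
        hits = sum(population_vote(signal, n, misfire) == signal for _ in range(2000))
        print(f"misfire={misfire:.2f}  neurons={n:4d}  accuracy={hits / 2000:.3f}")
```

With a 45 percent misfire rate a lone neuron is barely better than a coin flip, but a thousand of them voting together are almost never wrong.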

That willingness to give up precision for chaos could lead to a new era of creative computing that simulates the unpredictable patterns of brain activity. It could also represent a far more energy-efficient era -- the Neurogrid fits in a briefcase and runs on what amounts to a few D batteries, or less than a watt. Rather than digital transistors, it uses capacitors that reach the same voltages as neurons.

Boahen has so far managed to squeeze a million neurons onto his new supercomputer, compared to just 45,000 silicon neurons on previous neural machines. A next-generation Neurogrid may host as many as 64 million silicon neurons by 2011, or approximately the brain of a mouse.

This new type of supercomputer will not replace the precise calculations of current machines. But its energy efficiency could provide the necessary breakthrough to continue upholding Moore's Law, which suggests that the number of transistors on a silicon chip can double about every two years. Perhaps equally exciting, that embrace of creative chaos could ultimately lay the foundation for the processing power necessary to raise artificial intelligence to human levels.

[Discover Magazine]

Syndicated 2009-11-06 20:43:33 from Popular Science - robots

Wearable Artificial Intelligence Could Help Astronauts Troll Mars for Signs of Life

Not since RoboCop has being a cyborg seemed so very cool. University of Chicago geoscientists are developing an artificial intelligence system that future Mars explorers could incorporate into their spacesuits to help them recognize signs of life on Mars' barren surface.

The suits would incorporate an AI technique known as a Hopfield neural network, which weighs evidence and makes decisions based on previously learned facts and patterns, using processes that loosely mimic human thought. Through digital eyes built into the astronauts' suits, the system would collect data from the environment and analyze it in Hopfield networks mounted on the suits' hips.

Preloaded data, along with data collected as the astronauts go about their Martian surface walks, would be turned over in the AI system much the same way a human brain would crunch it. For instance, the Hopfield algorithm can learn a color from a single image, then relate it to previously observed instances of that color, making connections between the two. Recent tests of a complete, wearable prototype suit at the Mars Desert Research Station in Utah found that the AI could tell the difference between lichen and the rock surrounding it.
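
For the curious, here is a minimal sketch of what a Hopfield network does (illustrative only, not the Chicago group's code): patterns are stored with a simple Hebbian rule, and a noisy cue is iteratively pulled toward the closest stored pattern. The "lichen" and "rock" feature vectors below are invented stand-ins for the color signatures the article describes.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: build a weight matrix from a stack of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Repeatedly update the state until it settles on a stored pattern."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                 # break ties toward +1
    return s

# Invented +/-1 "color signatures" for two surface types.
lichen = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
rock   = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
W = train_hopfield(np.stack([lichen, rock]))

noisy = lichen.astype(float)
noisy[:2] *= -1                       # corrupt two features, as a sensor glitch might
print("recovered lichen?", np.array_equal(recall(W, noisy), lichen))   # True
```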

But that's just scratching the surface; next, researchers plan to teach the Hopfield to differentiate between textures, and ultimately to engineer the system to work at scales ranging from wide landscapes to the minuscule. They have plenty of time to do so, as no one plans to send a manned mission to Mars any time soon. But the data the algorithm is already learning on Earth could ride with robotic missions to Mars in the more immediate future.

[PhysOrg]

Syndicated 2009-11-05 14:39:18 from Popular Science - robots

MIT Introduces a Friendly Robot Companion For Your Dashboard

Kitt? KITT? Is that you?

With all the sensors, computerized gadgetry and even Internet connectivity being built into cars these days, it's a wonder our automobiles aren't more like Optimus Prime. Our cars will now email us when they need to have their oil changed, and recognize our facial expressions to determine whether we're enjoying ourselves, but for all the information available to us when we're driving, it's often not possible to organize it all in real-time and package it in a way that we can digest while behind the wheel. Researchers at MIT and Audi created the Affective Intelligent Driving Agent to address exactly that problem.

AIDA communicates with the driver via a small, sociable robot built into the dashboard. The idea is to develop an informed and friendly passenger, the buddy perpetually riding shotgun who, aside from reading the map and helping with navigation, acts as a companion. As such, AIDA is being developed to read drivers' moods via their facial expressions and other cues (hand gestures?) and respond to them in the proper social context. It communicates back in very human ways as well: with a smile, the blink of an eye, the drooping of its head.

Prompting memories of KITT from the (most excellent) television series Knight Rider, the idea is for AIDA to have personality and establish a relationship with the driver in which both parties learn from one another and help each other out. AIDA analyzes the driver's mobility patterns, common routes and destinations, and driving habits. It then merges its knowledge of the driver with its knowledge of the city around it, mashing up the driver's priorities and needs with real-time information on everything from tourist attractions to environmental conditions to commercial activity to help the driver make better decisions.

If, for instance, there's a parade route between you and the grocery store, AIDA will tell you about it and help you find your way around it. Or it might simply remind you that your gas tank is low, knowing that given the time of day you must be on your way to work several miles away. AIDA will even give you feedback on your driving, helping you increase your fuel efficiency or suggesting that your policy of making rolling illegal lefts through stop signs while in school zones may be ill-advised.
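
MIT hasn't published AIDA's internals, so here is only a rough sketch of the mobility-pattern idea (hypothetical data and function names, not the real system): log where the car parks and when, then treat the most frequent weekday-morning arrival spot as "work" and the most frequent evening arrival spot as "home."

```python
from collections import Counter
from datetime import datetime

# Toy trip log: (arrival time, rounded lat/lon of where the car parked).
trip_log = [
    (datetime(2009, 10, 26,  8, 55), (42.3601, -71.0942)),  # weekday morning
    (datetime(2009, 10, 26, 18, 40), (42.3736, -71.1097)),  # weekday evening
    (datetime(2009, 10, 27,  9,  2), (42.3601, -71.0942)),
    (datetime(2009, 10, 27, 19, 15), (42.3736, -71.1097)),
    (datetime(2009, 10, 31, 11, 30), (42.3770, -71.1167)),  # weekend errand
]

def guess_home_and_work(trips):
    """Crude heuristic: work = most common weekday arrival between 7 and 11 am,
    home = most common arrival after 6 pm."""
    work = Counter(loc for t, loc in trips if t.weekday() < 5 and 7 <= t.hour < 11)
    home = Counter(loc for t, loc in trips if t.hour >= 18)
    return (home.most_common(1)[0][0] if home else None,
            work.most_common(1)[0][0] if work else None)

home, work = guess_home_and_work(trip_log)
print("home:", home, "work:", work)
```

From there, the street-fair-traffic-jam style of advice is a matter of joining those inferred destinations against live city data before suggesting a route.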

Unlike the sci-fi cars of our childhood fancy -- KITT, the Batmobile, Herbie -- AIDA is still a ways from pulling off daring rescue maneuvers or other heroic acts of derring-do. But it can make the road a safer, more informed place, and if the MIT robotics researchers have their way, one that's not quite so lonely.

[MIT]

Syndicated 2009-10-29 19:11:54 from Popular Science - robots

MIT researchers and designers are developing the Affective Intelligent Driving Agent (AIDA) - a new in-car personal robot that aims to change the way we interact with our car. The project is a collaboration between the Personal Robots Group at the MIT Media Lab, MIT’s SENSEable City Lab and the Volkswagen Group of America’s Electronics Research Lab.

“With the ubiquity of sensors and mobile computers, information about our surroundings is ever abundant. AIDA embodies a new effort to make sense of these great amounts of data, harnessing our personal electronic devices as tools for behavioral support,” comments professor Carlo Ratti, director of the SENSEable City Lab. “In developing AIDA we asked ourselves how we could design a system that would offer the same kind of guidance as an informed and friendly companion.”

AIDA communicates with the driver through a small robot embedded in the dashboard. "AIDA builds on our long experience in building sociable robots,” explains professor Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Lab. “We are developing AIDA to read the driver's mood from facial expression and other cues and respond in a socially appropriate and informative way."

AIDA communicates in a very immediate way: with the seamlessness of a smile or the blink of an eye. Over time, the project envisions that a kind of symbiotic relationship develops between the driver and AIDA, whereby both parties learn from each other and establish an affective bond.

To identify the set of goals the driver would like to achieve, AIDA analyses the driver’s mobility patterns, keeping track of common routes and destinations. AIDA draws on an understanding of the city beyond what can be seen through the windshield, incorporating real-time event information and knowledge of environmental conditions, as well as commercial activity, tourist attractions, and residential areas.

“When it merges knowledge about the city with an understanding of the driver’s priorities and needs, AIDA can make important inferences,” explains Assaf Biderman, associate director of the SENSEable City Lab. “Within a week AIDA will have figured out your home and work location. Soon afterwards the system will be able to direct you to your preferred grocery store, suggesting a route that avoids a street fair-induced traffic jam. On the way AIDA might recommend a stop to fill up your tank, upon noticing that you are getting low on gas," says Biderman. “AIDA can also give you feedback on your driving, helping you achieve more energy efficiency and safer behavior.”

AIDA was developed in partnership with Audi, a premium brand of the Volkswagen Group, and the Volkswagen Group of America's Electronics Research Lab. The AIDA team is directed by Professor Cynthia Breazeal, Carlo Ratti, and Assaf Biderman. The SENSEable City Lab team includes team leader Giusy di Lorenzo and includes Francisco Pereira, Fabio Pinelli, Pedro Correia, E Roon Kang, Jennifer Dunnam, and Shaocong Zhou. The Personal Robots Group's technical and aesthetic team includes Mikey Siegel, Fardad Faridi and Ryan Wistort as well as videographers Paula Aguilera and Jonathan Williams. Chuhee Lee and Charles Lee represent the Volkswagen Group of America’s Electronics Research Lab.

Robotic Pathologist Performs Precise, Clean Autopsies on Humans

Autopsies, for all the useful information they provide, have significant downsides. They are often upsetting to the deceased's family, they prevent people from receiving certain kinds of religious burials, and they leave a bit of a mess. To correct for those problems and more, a team at the University of Bern, Switzerland, has developed a robot that can perform virtual autopsies.

The robot uses stereo cameras to record a 3-D image of the body's exterior, and a CT scanner to record the body's internal condition. This results in a complete, 3-D, computerized model of the entire body. Doctors can control the robot to perform micro-biopsies for tissue examination, doing away with any serious deformation of the body. The medical examiner can then analyze the image, perform virtual biopsies, and store the data for future use. The process leaves no pile of used organs, and no jars filled with alcohol and tissue.

In addition to making the whole process much easier, the robot-conducted virtual autopsy also makes it easier for medical examiners to compare the current corpse with previous cases, and build a database for future reference.

Additionally, the robot used for the autopsy is much cheaper than the robots usually used for surgery. Since there's no chance of hurting someone who's already dead, the robot that does the job can be a less precise, industrial model, the type designed to assemble a car, not remove an appendix. For similar reasons, surgical robots need a doctor monitoring them at all times, but the robo-medical examiner can operate autonomously.

Welcome to the future: if you're not killed by a robot, then at the very least, a robot will figure out how you died.

[via New Scientist]

Syndicated 2009-10-27 16:59:11 from Popular Science - robotics

A Look At Japan's Retro-Future

As much as we love the actual future here at Popular Science, we love the past's vision of the future almost as much. So we basically freaked out when our good friends over at Pink Tentacle discovered this spread from a 1969 issue of the Japanese magazine Shonen Sunday.

These pictures show a predicted 1989 where computers have changed how we live. The above photo depicts a classroom full of children learning on computers, watching a video of a teacher, and receiving beatings from enforcement robots. Considering that most students today actually use their computers for sexting, downloading music, and online poker, maybe those robo-thugs weren't such a bad idea.

This pic shows the home of the future. The personal computer and Roomba-like vacuum robot are pretty spot on, but why is Mom's computer using a punch card? And why does the robot need arms to clean the dishes? If they have video phones and flying cars, you'd think they'd at least have a dishwasher.

This third image is probably the most accurate. Sure, it was off by about 20 years, but that machine is the spitting image of the da Vinci surgical robot. Listed as one of our top surgical advances of the last 20 years, remote surgery via robots was a bold prediction in 1969, but one that our present technology has vindicated.

This article leaves me with one question, though: Why is everyone in the future always wearing a jumpsuit?

[via Pink Tentacle]

Syndicated 2009-10-23 20:01:54 from Popular Science - robotics

