Sensors

Tiny Eye Movements Prove Necessary

Posted 16 Jun 2007 at 13:59 UTC by Rog-a-matic

The EyeRIS platform at the Active Perception Laboratory of Boston University is being used to understand and model the tiny, seemingly simple eye movements we unknowingly use to aid vision. Small movements of the eyes, along with movements of our head and body, keep the image on the retina from ever being static. Since stable stimuli fade on the retina, these movements are thought to improve the signal the brain receives and to be an important part of visual processing. A proper understanding of this could lead to improvements in robotic vision sensors and processing algorithms.
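Below is a minimal toy sketch of the "stable stimuli fade" point, assuming an idealized sensor that only signals frame-to-frame change (nothing here is the EyeRIS model itself): a stationary image produces no response, while an image jittered by a pixel or two keeps signaling its edges.

```python
# Toy illustration only: an adapting, change-detecting "retina" applied to a
# stationary stimulus vs. one jittered by a pixel or two.
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0            # a bright square as the stimulus

def mean_response(jitter, steps=50):
    """Average absolute frame-to-frame change seen by a change-detecting sensor."""
    prev, total = None, 0.0
    for _ in range(steps):
        dy, dx = rng.integers(-jitter, jitter + 1, size=2) if jitter else (0, 0)
        frame = np.roll(image, (dy, dx), axis=(0, 1))
        if prev is not None:
            total += np.abs(frame - prev).mean()   # only changes are signaled
        prev = frame
    return total / (steps - 1)

print("static stimulus  :", mean_response(jitter=0))   # ~0: the image "fades"
print("jittered stimulus:", mean_response(jitter=2))   # nonzero: edges keep signaling
```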


mandala, posted 16 Jun 2007 at 14:23 UTC by dpa » (Master)

Practitioners of meditation have known this for centuries. By purposely focusing on the center of a mandala, the eye movements slowly come to a stop and the visual imagery fades away. Try it!

Resolution, posted 16 Jun 2007 at 15:53 UTC by steve » (Master)

I believe there's also been research showing that eye saccades significantly increase the effective resolution of the eyes over the physical resolution of the retina. I'm fairly certain there was some research done in making digital image sensors that "shake" to achieve the same resolution increase.
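As a rough sketch of that shake-to-gain-resolution trick (my own toy version, not any particular sensor's algorithm): if the same scene is sampled several times at known sub-pixel offsets, the low-resolution frames can be interleaved on a finer grid (plain shift-and-add).

```python
# Shift-and-add super-resolution sketch: several low-res captures taken at known
# sub-pixel offsets are interleaved onto a finer grid. Idealized: offsets are
# exact, there's no noise, and shifts wrap around at the border.
import numpy as np

def downsample(hi, factor):
    """Average-pool a high-res scene down to the sensor's native resolution."""
    h, w = hi.shape
    return hi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def shift_and_add(scene, factor=2):
    """Capture factor*factor shifted low-res frames and interleave them."""
    recon = np.zeros_like(scene)
    for dy in range(factor):
        for dx in range(factor):
            frame = downsample(np.roll(scene, (-dy, -dx), axis=(0, 1)), factor)
            recon[dy::factor, dx::factor] = frame  # each capture fills one phase of the fine grid
    return recon

scene = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # smooth stand-in scene
error = np.abs(shift_and_add(scene) - scene)[:-2, :-2].mean()   # ignore the wrapped border
print(error)   # small residual: finer detail than any single low-res frame alone
```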

There have also been several studies showing that we briefly cease to perceive anything during the saccades. One experiment had people reading a document on a monitor while a sensor detected the timing of their eye saccades. During a saccade, the computer would switch words in the sentences the person was reading. The changes were never perceived by the reader, even though people looking over their shoulder (whose saccades were not synchronized with the changes) immediately detected the changes and realized the content and meaning of the document were changing in real time.

Increase resolution, posted 16 Jun 2007 at 17:29 UTC by Rog-a-matic » (Master)

It makes sense that apparent resolution could be increased by moving the sensor or the objects in view, because transitions could be detected.

A good analogy would be optical encoders. You can either count the full pulses or you can use software and/or hardware to detect the edges to multiply the resolution.
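To push the analogy a bit further, here's a small sketch of the edge-counting (quadrature) trick: tracking every transition on both channels gives four counts per cycle instead of one pulse. The waveforms are idealized; a real decoder also has to deal with bounce and noise.

```python
# Quadrature decoder sketch: counting every edge on both channels yields 4x the
# resolution of counting whole pulses on one channel.

# Direction lookup keyed by (previous AB state, current AB state); transitions
# where both bits change at once are treated as invalid and counted as 0.
QUAD_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_edges(samples):
    """samples: iterable of (A, B) bit pairs read from the encoder."""
    position, prev = 0, None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None:
            position += QUAD_STEP.get((prev, state), 0)
        prev = state
    return position

# One full forward A/B cycle gives 4 counts instead of a single pulse.
forward_cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(count_edges(forward_cycle))   # -> 4
```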

Hmmm, wonder what mutation is responsible for this in humans :)

mutations, posted 17 Jun 2007 at 01:13 UTC by steve » (Master)

Why do I suspect that's not a serious question. :-) My guess is that no mutation was necessary. I've read that eye saccades match waveforms that inherently occur in the biological neural nets that control the eye. So it's probably far more likely that a control system taking advantage of such movements would occur than the other way around. Just a guess though. Eye movements have been heavily studied for years, so I wouldn't be surprised if your answer is out there. The eyes in general are wonderfully useful examples of evolution because they're apparently so trivial to evolve and yet so handy to have. A quick google turned up a couple of essays on the subject that might point in the right direction:

Where d'you get those peepers?, Richard Dawkins

Darwin will rest easier thanks to flies with eyes on their wings, Elizabeth Finkel

The incredible trivial eye, posted 17 Jun 2007 at 15:57 UTC by Rog-a-matic » (Master)

I find it difficult to accept the term 'trivial' in any discussion about the complexity of a cell, much less the incredible subsystems of the eye, and the eye itself.

I'm glad we can agree on 'handy' though :)

Eye Aye Sir, posted 19 Jun 2007 at 16:19 UTC by The Swirling Brain » (Master)

I had a girlfriend who worked for a lady whose eyes constantly jiggled! It was really freaky to talk to this boss lady because you couldn't help but notice her eyes going back and forth as she looked at you. Silly me, young as I was, I asked her how she did that. To which she replied, "Do what?" "Uh, nevermind," I said. Perhaps she was high, or at least had a higher resolution of vision? Or maybe she's really a mutant?

helping stereo correlation?, posted 28 Aug 2007 at 18:31 UTC by Pontifier » (Apprentice)

When thinking about human stereo vision, I reasoned that these small involuntary movements could help us to recognize the curvature of surfaces. As the eye moves, waves of correlation could pass through part of the brain. Comparison of the speed of these correlation waves would allow regions of the brain to calculate the curvature of a surface directly.

That's just my own rambling train of thought though... I came up with it when I noticed how good my eyes were at picking up small depth changes in a sidewalk's texture and tried to come up with an explanation.
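For what it's worth, here's a crude sketch of just the correlation piece of that idea: estimating local disparity by sliding a left-image patch along the right image and keeping the best normalized correlation. How (or whether) the brain turns the way such correlations shift during eye movements into surface curvature is pure speculation on my part; the function and test data below are made up for illustration.

```python
# Toy block-matching disparity: slide a small left-image window along the right
# image and keep the offset with the highest normalized correlation.
import numpy as np

def patch_disparity(left, right, x, half=4, search=8):
    """Best horizontal offset of right-image data matching left[x-half : x+half+1]."""
    t = left[x - half : x + half + 1]
    t = t - t.mean()
    best_d, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        w = right[x - half + d : x + half + 1 + d]
        w = w - w.mean()
        score = np.dot(t, w) / (np.linalg.norm(t) * np.linalg.norm(w) + 1e-12)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

rng = np.random.default_rng(2)
left = rng.random(64)
right = np.roll(left, 3)                   # "right eye" view shifted by a known 3 pixels
print(patch_disparity(left, right, x=32))  # -> 3
```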
