Evolutionary Subsumption Neurocontrollers

Posted 8 May 2004 at 00:03 UTC by steve

A new research paper (PDF format) by COGS researcher Julian Togelius combines the ideas of evolutionary computing, neural networks, and subsumption architecture. A simulated robot running software based on Julian's ideas was able to learn a series of behaviours through a multi-layer evolutionary process with a separate fitness function for each layer. The paper suggests that this layered evolution approach may solve the chief problem of evolutionary robotics: scaling the software to the point where it can solve complicated, real-world problems. Julian explains layered evolution and differentiates it from incremental evolution and modularised evolution.
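To make the idea concrete, here is a toy sketch of layered evolution, not taken from the paper: each behavioural layer (obstacle avoidance, then phototaxis) is evolved separately against its own fitness function, the lower layer is frozen, and at run time the lower layer subsumes (overrides) the higher one when it is triggered. The fitness functions, network sizes, and sensor conventions are all illustrative assumptions.

```python
import random

random.seed(0)  # reproducible toy run

def evolve(fitness, n_weights, pop_size=20, generations=40):
    """Toy neuroevolution: truncation selection plus Gaussian mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]          # keep the fitter half
        pop = elite + [[w + random.gauss(0, 0.1) for w in p] for p in elite]
    return max(pop, key=fitness)

def neuron(weights, inputs):
    """A single linear neuron stands in for each tiny network layer."""
    return sum(w * x for w, x in zip(weights, inputs))

# --- Layer 1: obstacle avoidance, evolved first with its own fitness ---
# Toy convention: two proximity sensors; desired steering sign points
# away from the side with the closer obstacle.
AVOID_CASES = [((1.0, 0.0), -1), ((0.0, 1.0), 1), ((0.8, 0.2), -1)]

def avoid_fitness(w):
    # reward steering outputs whose sign matches the desired turn
    return sum(d * neuron(w, s) for s, d in AVOID_CASES)

avoid_w = evolve(avoid_fitness, n_weights=2)

# --- Layer 2: phototaxis, evolved with layer 1 frozen underneath ---
LIGHT_CASES = [((1.0, 0.0), 1), ((0.0, 1.0), -1)]

def photo_fitness(w):
    return sum(d * neuron(w, s) for s, d in LIGHT_CASES)

photo_w = evolve(photo_fitness, n_weights=2)

def controller(prox, light):
    """Layered controller: avoidance subsumes phototaxis when triggered."""
    if max(prox) > 0.5:                      # obstacle close: lower layer wins
        return neuron(avoid_w, prox)
    return neuron(photo_w, light)
```

The key point the sketch illustrates is that each layer sees only its own, simpler fitness function, rather than one monolithic fitness for the whole task, which is what is claimed to help scaling.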

Evidence of scalability?, posted 8 May 2004 at 08:24 UTC by motters » (Master)

Despite the initial claims, there doesn't seem to be any evidence in the paper that this method can be scaled up to much bigger problems. Obstacle avoidance and phototaxis are the typical easy problems that many robotics researchers like to go for, and in this case it looks like only a few simulated neurons were used.

Good point, posted 9 May 2004 at 02:57 UTC by roschler » (Master)

I agree, motters.

There are a lot of AI techniques that work well with small data sets. But in almost every case, once you get beyond a certain quantity of data, the need to intelligently partition the data into workable subsets breaks the technique. The "collection of experts" multiple neural network technique is one example.
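For readers unfamiliar with the term, the "collection of experts" idea (better known today as a mixture of experts) splits the work among several small networks, with a gating function deciding how much each expert contributes to a given input. A minimal sketch, with all weights chosen purely for illustration:

```python
import math

def softmax(zs):
    """Normalise gate scores into mixing proportions."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def linear(weights, x):
    """Each expert (and the gate) is just a tiny linear model here."""
    return sum(w * xi for w, xi in zip(weights, x))

def mixture(experts, gate_weights, x):
    """Blend expert outputs using the gate's softmax weighting."""
    gate = softmax([linear(gw, x) for gw in gate_weights])
    return sum(g * linear(ew, x) for g, ew in zip(gate, experts))

# Two experts, each specialised to one half of a 2-D input space;
# the gate routes inputs to whichever expert "owns" that region.
experts = [[2.0, 0.0], [0.0, 2.0]]
gates = [[5.0, -5.0], [-5.0, 5.0]]
```

The scaling problem motters and I are pointing at lives in choosing that partition: with a handful of experts the gate is easy to get right, but as the data grows the partitioning itself becomes the hard problem.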
