K Core Processors
Posted 2 Jul 2008 at 14:43 UTC (updated 2 Jul 2008 at 16:32 UTC) by Rog-a-matic
Intel is hinting at the possibility of a future with
processors containing hundreds or even thousands of cores.
While graphics applications seem to be driving much of this trend,
I see the possibilities for robotics as truly monumental.
Software development for real-time response and control of
actuators and sensors has always been a bit awkward, with
asynchronous interrupts bumping into each other and cumbersome
state-machine loops. A possible solution to this mess, and
a path to a much-needed advance in robot performance, is
a multiple-processor system with additional processors providing
centralized supervisory control. When that trend takes hold, a programmer
will be able to instruct a mobility routine to approach the
refrigerator using a group of processors dedicated
to that task, and not risk crashing the battery monitoring
routines in the process. It seems to me that the trend is more towards
a parallel array of much simpler processors - something between
neural networks of biological systems and the overburdened,
highly complex, single-processor systems of today.
Processors in the giant array can be simpler and slower, and can die without
bringing down the whole system. The result will be a faster and more
fault-tolerant system that is easier to program, and even cheaper to build.
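The isolation described above can be sketched in miniature. This is a minimal, hypothetical example (the routine names and fault are made up for illustration): each subsystem gets its own worker, so a crash in the mobility routine cannot take down the battery monitoring routine.

```python
# Sketch: one worker per subsystem, so a fault in one cannot crash the rest.
# Routine names and the injected fault are hypothetical.
import threading
import queue
import time

def battery_monitor(out):
    # Keeps sampling regardless of what the mobility routine does.
    for _ in range(3):
        out.put("battery OK")
        time.sleep(0.01)

def mobility(goal):
    # Pretend navigation routine that hits a bug and dies with a traceback.
    raise RuntimeError("navigation fault near the " + goal)

readings = queue.Queue()
monitor = threading.Thread(target=battery_monitor, args=(readings,))
mover = threading.Thread(target=mobility, args=("refrigerator",))
monitor.start()
mover.start()
mover.join()     # the mobility worker died...
monitor.join()   # ...but battery monitoring finished all its samples
samples = [readings.get() for _ in range(readings.qsize())]
print(samples)
```

On real multi-core hardware each worker would be pinned to its own core; the point here is only the fault boundary between routines.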
Parallel programming can be very difficult to do correctly, and I wonder
if most hobby robots are even at the point of utilizing the processor
power we have now.
It's probably better software we need in order to make progress, rather than more
processor power. Not that I'm complaining; bad software also benefits
from faster CPUs! :)
A 1k-core chip might be great for neural net software. ANNs seem like a
natural fit for a massively multi-core chip.
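The fit is easy to see in a sketch: every neuron in a layer is an independent weighted sum, so with enough cores a whole layer evaluates in one parallel step. This toy example (weights and pool size are illustrative, not from any real network) hands each neuron to its own worker.

```python
# Sketch: each neuron in a layer is independent, so one task per core.
# Weights, inputs, and worker count are made-up illustrative values.
from concurrent.futures import ThreadPoolExecutor
import math

def neuron(args):
    weights, inputs = args
    # One "core" computes one neuron: weighted sum through a sigmoid.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def layer(weight_matrix, inputs, pool):
    # One task per neuron; with n cores, the layer runs in a single step.
    return list(pool.map(neuron, [(w, inputs) for w in weight_matrix]))

weights = [[0.5, -0.25], [1.0, 1.0], [-0.5, 0.75]]  # 3 neurons, 2 inputs
with ThreadPoolExecutor(max_workers=3) as pool:
    out = layer(weights, [1.0, 2.0], pool)
print([round(v, 3) for v in out])
```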
Intel's Cell, posted 2 Jul 2008 at 19:24 UTC by cjang »
A CPU with lots of small cores sounds a lot like the STI Cell processor in the Playstation 3 and some supercomputers. It's an alternative to the GPU approach. Besides graphics, the obvious big application area will be signal processing.
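The signal-processing fit comes from data parallelism: a long sample stream splits into chunks that independent cores can filter concurrently. A rough sketch, with a deliberately simple 3-tap moving average and made-up sample values (real DSP code would handle chunk boundaries more carefully than the per-chunk padding used here):

```python
# Sketch: split a sample stream into chunks, filter each on its own core.
# The filter and data are illustrative; per-chunk edge padding is a
# simplification a real DSP pipeline would avoid.
from concurrent.futures import ThreadPoolExecutor

def smooth(chunk):
    # 3-tap moving average over one chunk, edges padded by repetition.
    padded = [chunk[0]] + chunk + [chunk[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(chunk))]

signal = [0.0, 3.0, 0.0, 3.0, 0.0, 3.0, 0.0, 3.0]
chunks = [signal[i:i + 4] for i in range(0, len(signal), 4)]
with ThreadPoolExecutor() as pool:
    filtered = [v for part in pool.map(smooth, chunks) for v in part]
print(filtered)
```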
Trying to use a processor to its maximum capacity is exactly the problem, and results in overly complex code that is very delicate. This was necessary in 1982 when processors were expensive and power hungry, but this old philosophy is now working against us and we must abandon it to make more progress.
I'm not proposing throwing multiple CPUs at bad software.
In fact, I'm an opponent of bad software :)
I've been a proponent of multiple processors in mobile robots for many years. My Trilobot uses several 8051s - one for the executive control, one for DC motor servoing, one for reading sensors. It's not the same problem as attempting to split up an otherwise single sequential task on a PC, and quite a bit easier to do with great returns at minimal costs. My ARobot also uses a coprocessor for motor control which frees the Stamp for high level stuff. This greatly increases its performance and ease of programming.
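The division of labour described above can be sketched as a command protocol: the executive queues terse high-level commands and stays free for other work, while a motor coprocessor owns the tight servo loop. The command names and the crude position-stepping loop here are hypothetical stand-ins for what an 8051 or Stamp coprocessor actually does.

```python
# Sketch: executive sends high-level commands; a "coprocessor" worker
# runs the low-level servo loop on its own. Commands are hypothetical.
import threading
import queue

def motor_coprocessor(commands, position):
    # Stands in for the motor-control 8051: steps toward each target.
    while True:
        cmd, arg = commands.get()
        if cmd == "STOP":
            break
        if cmd == "MOVE":
            while position[0] != arg:          # crude servo loop
                position[0] += 1 if arg > position[0] else -1

commands = queue.Queue()
position = [0]
coproc = threading.Thread(target=motor_coprocessor,
                          args=(commands, position))
coproc.start()
# The executive just queues commands and is free for high-level work.
commands.put(("MOVE", 5))
commands.put(("MOVE", 2))
commands.put(("STOP", None))
coproc.join()
print(position[0])
```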
I too see the great benefits of multi-core processing. I've made a few main boards that have coprocessor PICs that talk via I2C, and they work great for offloading some tasks. Having all the cores in one package, though, would seem like a weird thread-management problem. In some ways it would be cool and in other ways a big headache. I'm sure, though, that with whatever platform you are stuck with, you can optimize it for your needs and get it managed.

With these truths in mind: no realistically sized platform is bug-free (whether the bug is in software or hardware), so every platform has a bug. It is inevitable that your program will hit said bug, and your program will suffer adverse effects because of it. The deal is: how well does your platform cope with bugs? Self-healing or some sort of crash management is a nice feature to have on your robot. Robots that go to Mars have such things: they can go into safe mode and wait for fixes, or manage to get around problems that are detected, even when poked by cosmic particles. Future robots will need these features to be serious contenders and to keep from becoming worthless piles of junk at the first brown-out or hiccup. Having said that, seeing a robot with flailing arms swirling around the room is always good for a chuckle.
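The safe-mode idea above can be sketched as a supervisor that wraps every task: any unhandled fault stops the actuators and parks the robot to wait for fixes, instead of letting it flail. The task names and the injected fault are made up for illustration.

```python
# Sketch of crash management: a supervisor catches any task fault and
# drops into safe mode. Task names and the fault are hypothetical.
def drive_motors():
    raise RuntimeError("encoder glitch")   # the inevitable bug

def safe_mode(log):
    # Stop actuators, record the fault, wait for help.
    log.append("motors stopped, awaiting fixes")

def supervisor(task, log):
    try:
        task()
        return "ok"
    except Exception as exc:
        log.append(f"fault: {exc}")
        safe_mode(log)
        return "safe-mode"

log = []
state = supervisor(drive_motors, log)
print(state, log)
```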
Yep, those are other good uses for multi processors. The approach taken
with multiple 8051s on the Trilobot might work too depending on how much
independence there is in the cores. Can they operate completely
asynchronously, I wonder? It'll definitely be interesting to see what
sort of hardware we get to play with over the next decade.
Having previously worked for Intel for many years, I suspect that their approach will be to assemble an array of x86 cores, some of which will be augmented with instruction set extensions and hardware for graphics or signal processing.
This will present a market opportunity for other companies to pursue arrays with much larger numbers of smaller RISC cores. Such systems could be significantly more powerful than the Intel offering for high-bandwidth computation.
Where the transistor count growth curve once provided a temporary market opportunity for RISC, multi-core processor architecture can now provide a significant justification for very large numbers of simpler RISC cores.