I found this story about a new DARPA request for proposals via Digg. The challenge is to build a computer system that can perform 10^18 floating point operations per second. Here's the back of the envelope calculation I posted in response:
I think it could be done in 3 years if it didn't have to fit in one rack. In fact, it could have already been done, had there not been such a heavy emphasis on the Von Neumann architecture for the past 40 years.
Imagine a bit slice processor... with perhaps 1000 transistors. Put a 1000x1000 grid of them on a single die; that would require 10^9 transistors, which fits on a die the same size as a current generation CPU. You could clock them at the nice sane speed of 1 GHz. That's 10^6 slices times 10^9 cycles/second, or 10^15 bit computes per second, on a practical-size die, with current technology.
Even if you lost 99.9% of that compute capacity shuffling bits around to do a floating point operation, you could still do 10^12 floating point operations per second, on a prototype chip... today.
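The per-chip arithmetic above can be checked with a few lines of Python (all the figures are my assumptions from the estimate, not measured values):

```python
# Back-of-envelope check of the per-chip numbers.
# All inputs are the post's assumptions, not measured values.

transistors_per_slice = 1_000
grid = 1_000 * 1_000            # 1000x1000 bit slices per die
clock_hz = 1e9                  # 1 GHz

transistors_per_die = transistors_per_slice * grid
bit_ops_per_sec = grid * clock_hz           # raw bit computes per second
flops_per_chip = bit_ops_per_sec * 0.001    # keep 0.1% after FP overhead

print(f"transistors per die: {transistors_per_die:.0e}")   # 1e+09
print(f"bit computes/s:      {bit_ops_per_sec:.0e}")       # 1e+15
print(f"FLOPS per chip:      {flops_per_chip:.0e}")        # 1e+12
```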
The chip would be easy to test: because all of the bit slices are identical, they could all be tested in parallel... perhaps 1 second of test time per die. (Testing is a big part of the cost when it comes to chips.) Each chip would cost somewhere around $10.
If you allow me to continue with my estimate of 10^12 FLOPS per chip, and it were possible to build a 1000x1000 grid of them... that takes you to the magic 10^18 FLOPS that DARPA wants, for a cost of about $10,000,000.
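Scaling the per-chip estimate up to the full machine is the same kind of arithmetic (again, all inputs are the estimate's assumptions):

```python
# Scaling the per-chip estimate to DARPA's 10^18 FLOPS target.
# Assumes the 10^12 FLOPS/chip, 10^9 transistors/chip, and $10/chip
# figures from the estimate above.

flops_per_chip = 1e12
transistors_per_chip = 1e9
cost_per_chip = 10          # dollars
chips = 1_000 * 1_000       # a 1000x1000 grid of chips

total_flops = flops_per_chip * chips
total_transistors = transistors_per_chip * chips
total_cost = cost_per_chip * chips

print(f"total FLOPS:       {total_flops:.0e}")        # 1e+18
print(f"total transistors: {total_transistors:.0e}")  # 1e+15
print(f"total cost:        ${total_cost:,}")          # $10,000,000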
10^18 operations per second, with 10^15 transistors, clocked at 1 GHz. Feasible... yes... but it does require you to give up sequential programming and think in terms of graph flow.
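To make "graph flow" a little more concrete, here is a toy model of grid-style computation. This is my own simplified illustration, not the actual bitgrid cell design: a one-dimensional ring of cells, each of which computes a small truth-table function of its neighbors' outputs, with the whole grid advancing in lockstep each clock.

```python
# Toy illustration of grid-style "graph flow" computation: a ring of
# cells, each holding a 4-entry truth table over its two neighbors'
# outputs. A simplified model for illustration, NOT the bitgrid design.

def step(cells, tables):
    """Advance every cell one clock: each cell's new output is a
    lookup-table function of its left and right neighbors' outputs."""
    n = len(cells)
    new = []
    for i in range(n):
        left = cells[(i - 1) % n]
        right = cells[(i + 1) % n]
        new.append(tables[i][(left << 1) | right])
    return new

# Example: every cell XORs its neighbors (table indexed by left,right).
xor_table = [0, 1, 1, 0]
cells = [0, 0, 1, 0, 0, 0, 0, 0]   # a single 1 bit in the grid
for _ in range(3):
    cells = step(cells, [xor_table] * len(cells))
print(cells)
```

There is no program counter here: computation happens because bits flow through the grid of cells every clock, which is the mental shift away from sequential programming.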
It's called bitgrid; I thought of it around 1981... and I've written some of this up at http://bitgrid.blogspot.com