I was benchmarking PCs to try to determine where to put some
code and I started to wonder once more where we are in the unintentional
technological race to build a cheap computer that can outperform the human
brain. I've done this exercise a half-dozen times over the years. Here's
what I found this time.
Calculating how long it's going to take for PCs to become as powerful as
the human brain is easy if you can make a few assumptions. Here are mine:
- Processing power of the human brain: 100 teramips.
- Processing power of an inexpensive PC today: 0.025 teramips.
- Rate of increase of computer processing power: doubling every year.
With those assumptions and a tool like a simple spreadsheet, a small
computer program, or a pencil and some paper, anyone can calculate more or
less when an inexpensive PC will have the processing power of the
human brain. I chose the computer program path. Here's an example in Common
Lisp:
(defun forecast (&key (today-teramips 0.025) (target-teramips 100))
  (let ((power (- (log target-teramips 2)
                  (log today-teramips 2))))
    (format nil "~a~a~a~%~a~3$~a~%~a~$"
            "Human brain throughput: " target-teramips " teramips"
            "PC throughput: " today-teramips " teramips"
            "Years until inexpensive PC outperforms human brain: " power)))
If you run that program like this
(forecast :today-teramips 0.025 :target-teramips 100)
the program will spit out the following text:
Human brain throughput: 100 teramips
PC throughput: 0.025 teramips
Years until inexpensive PC outperforms human brain: 11.97
If your assumption about the processing power of the human mind is off by
an order of magnitude--say the human mind actually runs at 1,000 teramips
rather than 100--the result differs little (about 15 years instead of 12,
in this example).
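That insensitivity is just a property of the logarithm: each extra order of
magnitude in the brain estimate costs only log2(10), about 3.3, additional
years. A minimal standalone sketch (the name years-until is mine, not part
of the program above):

```lisp
;; Years of doubling needed to grow from today's throughput to the target.
(defun years-until (today-teramips target-teramips)
  (log (/ target-teramips today-teramips) 2))

(years-until 0.025 100)    ; about 12 years
(years-until 0.025 1000)   ; about 15 years -- a 10x error costs ~3.3 years
```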
If you make the same assumptions that I made, then the calculation itself
is easy and probably not disputable. However, the assumptions themselves
are quite disputable, so I will explain how I arrived at them.
- Processing power of the human brain
This number is the hard one to pin down. Picking the right value for this
variable is the key to this exercise. Here are a few starting points:
- Processing power of an inexpensive PC today
To figure this out, I started by downloading the BYTEmark benchmarking
software from here: http://www.tux.org/~mayer/linux/bmark.html
I then extracted, compiled, and ran the software on a quad-core Intel
Core 2 machine with 6GB of RAM, a machine that is worth about US$800.00
today. One number that I used from the results of that test was 81.153,
the original BYTEmark results integer index. This number means that the
computer is roughly 81 times faster than the baseline for this test, a
Pentium 90. I found information on the Internet that indicated that the
Pentium 90 ran at approximately 150 mips.
That information allowed me to calculate a mips rating for my computer:
81.153 * 150 mips = about 12,173 mips
But my computer has 4 processors and this test measures the throughput of
a single processor. I multiplied the result by 2 rather than by 4 to
obtain a conservative 25,000 mips (or 0.025 teramips) rating for my
machine.
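As a quick sanity check of that arithmetic (the 2x factor stands in for
the four cores, per the conservative choice above):

```lisp
;; BYTEmark integer index x Pentium 90 mips x conservative core multiplier.
(let* ((bytemark-index 81.153)
       (pentium-90-mips 150)
       (core-multiplier 2))   ; 2 rather than 4, to stay conservative
  (* bytemark-index pentium-90-mips core-multiplier))
;; roughly 24,346 mips, i.e. about 0.025 teramips
```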
- Rate of increase of computer processing power:
This was the easiest assumption to make. I simply checked the processing
power of the fastest computer you can buy for US$1000 today, and it
turned out to be about double that of the fastest computer you could buy
a year ago for the same amount of money. However, I believe that this
assumption of computer power doubling every year is extremely
conservative, because there are now massively parallel machines (still
expensive--in the ten-thousand-dollar range) that, for some tasks, are
hundreds of times faster than a PC. This suggests that in the near
future we're going to see inexpensive computers with hundreds of
processors, which are likely to make the conservative estimate we're
using here (computer power doubles every year) seem far too conservative.
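To see the doubling assumption play out, projecting today's 0.025-teramips
PC forward takes one line of arithmetic (a sketch; projected-teramips is
my name for it):

```lisp
;; Throughput after n years, assuming it doubles every year.
(defun projected-teramips (today-teramips years)
  (* today-teramips (expt 2 years)))

(projected-teramips 0.025 12)   ; 102.4 teramips -- past the 100-teramips brain
```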