Limits of Superintelligence

2016-05-07

In 2012 Dean et al. published a paper on unsupervised learning using 16,000 CPU cores. Seeing machine learning successfully scaled across a modern data center convinced me that human-equivalent and better-than-human AIs were coming soon. In the past few months I've revised my estimate of what "soon" means, from years to decades. This is a quick attempt to jot down my reasoning.

There have not been any theoretical breakthroughs in AI research in decades. All the progress you have seen has come from the hard work of harnessing ever greater numbers of transistors. (The programming techniques behind even the most sophisticated neural networks used to win at Go would have seemed a natural continuation of the Perceptron to Rosenblatt in the 1950s.)

Machine intelligence is proportional to the number of transistors. (More accurately, the number of switch events, but transistors is a reasonable approximation.)

The number of transistors continues to increase, so we should expect machine intelligence to increase.

Here's the wrinkle: the last couple of years have seen the slowdown of Moore's law. At this point I believe the safe money is on assuming it is over. (We won't know when it ended until long after the fact.) We still have at least another century of optimizing our hardware and software layouts, and there remain strong economic reasons to keep making computers. So this doesn't mean the end of increasing transistor counts or machine intelligence.

The end of Moore's law does slow down AI progress significantly. While the law is stated in terms of die area, an important corollary is that the number of transistors you get for a Joule of energy also increased each generation. Under Moore's law, the amount of machine intelligence you got for a Joule increased every year. Now (modulo some large constant factors from improved software engineering), the amount of machine intelligence you get for a Joule is fixed.
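The consequence can be made concrete with a toy model (all numbers are illustrative assumptions, not measurements): take machine intelligence as proportional to switch events per second, which is transistors-per-Joule times the power budget. Under Moore's law, transistors-per-Joule doubles on some cadence; post-Moore, it is fixed and only the power budget can grow.

```python
# Toy model, not a measurement: machine intelligence ~ switch events
# per second ~ transistors_per_joule * power budget in Watts.
# The doubling cadence and power figures are illustrative assumptions.

def intelligence(years, power_watts, tpj0=1.0, doubling_years=None):
    """Relative machine intelligence after `years`.

    tpj0: starting transistors-per-Joule (arbitrary units).
    doubling_years: if set, transistors-per-Joule doubles on this
    cadence (Moore's-law regime); if None, it is fixed (post-Moore).
    """
    tpj = tpj0 * (2 ** (years / doubling_years)) if doubling_years else tpj0
    return tpj * power_watts

# Moore's-law regime: same power budget, efficiency compounds.
moore = intelligence(years=10, power_watts=100, doubling_years=2.0)

# Post-Moore regime: efficiency is flat, so for more intelligence
# you must grow the power budget, i.e. build more industry.
post_moore = intelligence(years=10, power_watts=100)

print(moore / post_moore)  # 2**5 = 32x from efficiency gains alone
```

In the post-Moore branch the only lever left is `power_watts`, which is the point of the argument: growth becomes a question of industrial output, not better chips.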

Unlike five years ago, progress in AI is now limited by our industrial output.

The requirement for a superintelligence is not only that it is as smart as a human, but that it can grow its own intelligence independently of us. That means a superintelligence now needs complete control over industry: mining, refining, and manufacturing. Until we have roboticized industry and opened up effectively limitless resources (the solar system), machine intelligence is stuck in a box.

So I expect to see superintelligence, but now I expect it to be a huge facility behind terawatts of solar panels, built on Ceres and launched into a heliocentric orbit.
research.google.com/pubs/pub40565.html

