Jun 02, 2016 11:33
I'm reading Nick Bostrom's "Superintelligence," which overall I've been appreciating (though much of the material so far is a condensation of common wisdom in the LW-sphere). I've noticed my first major issue with the book in chapter 4, in box 4 regarding the kinetics of an intelligence explosion. Here Bostrom sets up an equation for the rate of growth of an AI's general intelligence.
There are a number of assumptions made here, and he does a reasonable job of addressing them... except for one assumption which slides in unnoticed. In the middle of page 92, he suggests that once an AI overtakes human levels of intelligence, it will be the primary contributor to its own progress toward increased intelligence. This couples the rate of intelligence increase to the level of intelligence itself, which is what gives the intelligence explosion its exponential character.
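For reference, the equation in Box 4 is (as I recall) roughly

\[
\frac{dI}{dt} \;=\; \frac{\text{optimization power}}{\text{recalcitrance}} \;=\; \frac{O(t)}{R(t)},
\]

where I'm writing I for the system's intelligence (my notation, not necessarily Bostrom's exact symbols). The assumption above amounts to taking O(t) proportional to I(t) once the AI passes human level; with recalcitrance held roughly constant, that gives dI/dt = cI and hence exponential growth, I(t) = I(0)e^{ct}.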
This is not an unreasonable assumption, and throwing out his point just because it rests on an assumption is probably not justified. However, one can easily imagine a situation in which an AI is effectively human-level without being good at computer science at all.
For example, consider an AI built from a number of highly specialized modules such as computer vision processes, language parsing, etc., plus a decision-making process that connects them together. The decision-making process constructs short-term goals (give that cat a hug), collects information from the specialized modules (vision module: I recognize a cat there), and uses that information to act via other specialized modules (movement module: move so the cat is closer; arms module: hug the cat). These sorts of hacked-together processes seem pretty similar to how humans interact with the world. And while the sort of decision-making module that would be necessary doesn't exist yet, once one did, attaching the right modules would give you an AI that can drive cars, answer Jeopardy questions, summarize books, plan trips, and generally do a wide enough range of tasks to constitute a general intelligence that is "near the level of human", through a combination of being more or less advanced in different areas. One can further imagine that the decision-making module has some ability to add specialized submodules in order to learn new things; this part of decision-making may not even be that hard (current NLP programs often have markers for "other" or "unknown", which could be used to trigger acquiring new databases of information about whatever shows up as uncertain). In this way, an AI might approach human levels of intelligence in enough domains to be considered an AGI, without having ANY ability whatsoever to do computer science.
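To make this concrete, here's a rough sketch in Python of the kind of decision-making glue I have in mind. All of the module names and behaviors are made up for illustration; the point is just that nothing in the glue layer (or in any of the modules) requires the system to know anything about programming.

# Toy sketch of the "hacked-together modules" picture: a decision-making
# layer that routes between specialized modules. Everything here is a
# stand-in; no part of it knows anything about computer science.

class VisionModule:
    def detect(self, scene):
        # Stand-in for a real computer-vision system.
        return [obj for obj in scene if obj in ("cat", "car", "person")]

class MovementModule:
    def approach(self, target):
        print(f"moving so the {target} is closer")

class ArmsModule:
    def hug(self, target):
        print(f"hugging the {target}")

class DecisionMaker:
    """Glue layer: turns a short-term goal into calls to specialized modules."""

    def __init__(self):
        self.modules = {
            "vision": VisionModule(),
            "movement": MovementModule(),
            "arms": ArmsModule(),
        }

    def add_module(self, name, module):
        # Crude version of "acquiring a new specialized submodule"
        # when something keeps showing up as unknown.
        self.modules[name] = module

    def pursue(self, goal, scene):
        # Example goal: ("hug", "cat")
        verb, target = goal
        seen = self.modules["vision"].detect(scene)
        if target not in seen:
            print(f"don't see a {target}; would need more info or a new module")
            return
        self.modules["movement"].approach(target)
        if verb == "hug":
            self.modules["arms"].hug(target)

agent = DecisionMaker()
agent.pursue(("hug", "cat"), scene=["tree", "cat", "mailbox"])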
Of course, it would be able to gain the ability to do computer science, but it would be limited by its ability to be taught--its ability to translate its intelligence into optimization power would be limited.
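One crude way to express this in terms of Bostrom's equation (my own formalization, not anything from the book): the AI's contribution to optimization power isn't its intelligence I(t) directly, but something more like

\[
\frac{dI}{dt} \;=\; \frac{w(t)\,I(t) + O_{\text{world}}(t)}{R(t)}, \qquad 0 \le w(t) \le 1,
\]

where w(t) is how much of its intelligence the AI can actually bring to bear on improving itself--near zero at first, and growing only as fast as it can be taught (or teach itself) computer science. Until w(t) gets large, growth is driven mostly by the outside world's optimization power rather than by the feedback loop.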
In a situation like this, where an AI needs to teach itself computer science, a fast takeoff is still certainly conceivable: it could, for example, run through dozens of video lectures in parallel and work through programming assignments in internal compilers, which would give it much easier access to its own shortcomings as a programmer.
However, it is also easily conceivable that a moderate takeoff would happen instead, with the AI taking a more traditional educational route to becoming a computer scientist. In this circumstance it might exhibit some symptoms of weak superintelligence for a while before an intelligence explosion occurred, without ever truly being even weakly superintelligent (since it might be barely human-level in most domains, including the strategic pursuit of goals, while superhuman in domains already solved by computers).
While less likely, a slow takeoff would also be possible in this scenario. For example, if the decision-making processes were not goal-oriented and the recalcitrance of teaching them to make original scientific discoveries were high, one could imagine a significant population of such agents existing before any of them decided to learn computer science at all. It is also much more likely in this scenario that there would be no fall in recalcitrance around the human level, since this kind of AI is so far from being neuromorphic.
Of course, even this slow takeoff scenario could rapidly transition into the fast takeoff scenario, if it took some time for AIs to seriously attempt to improve themselves and doing so turned out to be less difficult (past some threshold) than expected.
And all of this focuses on a somewhat specific scenario: an AI which is far from being neuromorphic and which uses something similar to deep learning, training on large datasets over the course of days to reach performance anywhere from slightly subhuman to significantly superhuman in a given domain. It's not entirely clear to me how many of the specifics of this alternate scenario are necessary for a slowdown of the intelligence explosion, or whether even all of them together would necessarily cause a slowdown.
I include it primarily for completeness, because I do believe that the underlying point--that human-like general intelligence may not translate directly into optimization power--is importantly true and may substantially affect the takeoff curves.
Tags: ai, takeoff, machine ethics, industrious