I read a post over on Daylight Atheism about the Singularity, and I couldn't refrain from comment. I have quite a bit to say on the subject, though, and I didn't want to hijack the thread, even though it was posted a long time ago.
I'll try to keep this concise. I have three main points to make: first, that a technological singularity, as standardly conceived, could be understood to have already happened several times; second, that the invention/discovery of "true AI" typically denoted by "the Singularity" (note the capital S) could easily revolutionize our understanding of the Universe for synergistic reasons; third, that if it is true that self-awareness is the key to consciousness, then the Singularity could easily happen within the next 20 years.
First to get out of the way, though: total agreement. I agree with Ebonmuse that "Singularitarians" who are certain that the Singularity will happen in [whoever's] lifetime express a commitment to the idea analogous to that of the religious who believe in the imminence of the apocalypse/rapture/whatever. I also agree wholeheartedly that faster clock speeds do not equate to higher intelligence - with but a single caveat. "Intelligence" is typically used to refer to "understanding of the world" (called "knowledge" if you go with the "data - information - knowledge" distinction model), and in this sense he is absolutely correct to reinforce his position with the GIGO problem (in context, I believe this is exactly what he intends). However, one popular and respected definition of "intelligence" is "the capacity to learn," which is unequivocally improved by faster clock speeds as long as the programming architecture remains constant (though he acknowledges this point in the comments on his site).
My second pseudo-preamble is on credentials. I wrote a paper on technology as it relates to the human condition in PHI 199 during the fall semester of 2005, and included a lot of talk on the Singularity therein. It's still one of my favorite papers I've written, though it's in need of a substantial re-write at this point in my life. My point is simply that I've done my homework on the topic, looking not only into Vinge and the Singularity itself, but also philosophy of technology in general and information theory in particular. So while I wouldn't call myself an authority on the subject, I think I'm a rather competent speaker on it.
The basic idea of a technological singularity is a point after which previous models of predicting future technological development break down: those who came before could not possibly predict the changes with any hope of accuracy. In this sense, several singularities could be seen to have already occurred, for example: the discovery of fire, the invention of agriculture, the Industrial Revolution, and the advent of the microcomputer (or the invention of the transistor, as these are distinct events). Each one of these ushered in a whole host of developments that we could not have predicted until we were in the thick of it. (When I say "could not have predicted," I'm aware that anyone can "predict" anything at any time; I simply mean that they would have lacked credible reasons for doing so, and that many similar claims which happened to be false would have been equally plausible at the time of prediction - in other words, similar-but-false predictions would have been epistemologically indistinguishable until after the development in question.) This is not a major point, so I won't belabor it; I'm simply trying to show that a technological singularity - while an important and rare event - is defined in a way that makes it at once more plausible as a development and less magical as a concept.
My second point is more substantial and requires more explanation: "true AI" could easily accelerate the scientific process. As a side note, I prefer the term "synthetic intelligence," as there's nothing at all artificial about a genuine machine-based intelligence, which makes the term misleading - however, insisting on "SI" would probably cause more confusion than it's worth, so I'll continue to use "AI" with my objection understood. At any rate, this acceleration would not occur through simple escalation of computational power - faster clock speeds do not equate to more knowledge, for exactly the reasons Ebonmuse points out. Rather, there would be several synergistic effects that a true AI would cause by virtue of exactly what it is.
Before I get into this, I want to take a bit of time to distinguish data, information, and knowledge. As Clifford Stoll writes in Silicon Snake Oil:

"Data isn't information, any more than fifty tons of cement is a skyscraper. ... Data is just bits and bytes... grains of sand without a concrete aggregate. Information has utility. It has meaning. Most important, information is not knowledge. Back to my central thesis: my computer can access the Swiss molecular-biology archive, yet I still know squat about DNA transcription."
Here, knowledge is our goal - but before we can get that knowledge, we must first accumulate data, and give that data context with good experimental design. That done, we can extract information, and compare it to other information to see how it fits in. Once the information has been processed, it "turns into" knowledge which can be integrated with our other knowledge.
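To make that pipeline a little more concrete, here is a toy sketch in Python. Every name and number in it is my own invention, purely for illustration; the only point is that each stage adds something the previous one lacked - context first, then integration.

# Toy illustration of the data -> information -> knowledge distinction.
# All names and numbers below are invented for the example.
raw_readings = [20.1, 20.3, 35.9, 20.2, 20.4]   # data: context-free measurements

# Information: the same numbers, given context by the experimental design.
experiment = {
    "quantity": "reaction temperature (deg C)",
    "control_group": raw_readings[:2] + raw_readings[3:],
    "treatment_group": [raw_readings[2]],
}

# Knowledge: information compared against, and folded into, what we already hold.
def integrate(info, prior_beliefs):
    control_mean = sum(info["control_group"]) / len(info["control_group"])
    treatment_mean = sum(info["treatment_group"]) / len(info["treatment_group"])
    finding = "treatment shifts %s by about %.1f" % (
        info["quantity"], treatment_mean - control_mean)
    return prior_beliefs + [finding]

knowledge = integrate(experiment, prior_beliefs=["reactions are temperature-sensitive"])
print(knowledge)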
That said, AI would have a distinct advantage over us in this capacity: genuine self-directed intelligence, working with mechanical precision, would be able to almost entirely overcome several obstacles imposed on Homo sapiens by our biology ("almost" is key: these inhibiting factors would be mitigated, not "solved"). Examples include the need for sleep, the need for food, the need for socialization, the impediments of distraction, the time it takes to interpret data, human error, the need to correlate information, and so on. Again - and I can't stress this enough - AI would be able to seriously mitigate many of these obstacles, perhaps reducing their effect by an order of magnitude (perhaps several!). An AI could have its own lab, perhaps even with slave AIs to carry out the actual experiments (if the master AI itself wasn't big enough to handle them with mere slave processes, that is), and could conduct them 24/7. Communication with other AIs (I'm assuming that there would be more than one, which I don't think is unreasonable) would be a snap, as each one could present exactly what it perceived in exactly the format in which it was perceived - this data-sharing would be analogous to peer review, but with the speed of software installation. With AIs directing, performing, analyzing, verifying, and interpreting the work, as well as presenting the finished product to humans for peer review - all in tandem, at several locations at once - I suspect that the most limiting factor would be the time it takes for the actual experiments to be carried out. This cannot be overcome - but again, it can be mitigated by improved theoretical calculations and experimental design, and these improved experiments could also be carried out with greater precision and in greater numbers than humans can presently manage due to human error and space constraints, respectively. On top of this, I'm not even taking into account the doubly-brain-breaking possibility of the emergence of aggregate intelligence (a better use for the AI initials, if you ask me...) - though this is mainly because I think that accelerating returns on data storage, data transmission, and computational power may make more of a difference, and aggregate intelligence could quite possibly be incoherent (or a result of fallacious thinking).
My point cannot be shown by calculation, so unfortunately there's no data to dispute or refer to; my argument is precisely this: AI could improve nearly all aspects of the experimental process, resulting in synergistic effects that we are currently powerless to predict (for a number of reasons, not the least of which is that we don't know whether these AIs would think at about the same speed as we do, faster, or slower). Assuming that an AI could think at a rate comparable to a human, I find it plausible that a dozen or so AI-directed laboratories could churn out discoveries at a rate that outstrips humanity's ability to integrate them into our knowledge base, to say nothing of our educational texts (which already suffer from this problem to a lesser degree - a great reason to transfer the load to the internet). In fact, the human "scientists" of such an era would probably be devoted not to experiment itself, but merely to reviewing the results of our AI-based discoveries and relating them to the public. Humans might not even be able to integrate these discoveries quickly enough to base inventions on them, but the machines could, and by similarly improved processes.
My third and final point is even farther out from empirical verification at this time, by virtue of the very antecedent upon which it rests: if it is true that self-awareness is the key to consciousness, then the Singularity could easily happen within the next 20 years. On this point, we'll probably have to agree to disagree on quite a bit. I do think that self-awareness is the key to consciousness - moreover, I think it works as a definition. Brains model the world, and at some point in complexity, it becomes possible for a brain to model not just the external world for basic stimulus-response reactions, but also itself, for purposes of self-analysis and self-direction beyond the basic programming embedded in the hardware - learning, in other words. There are basic architectural requirements for this development, of course, but this introspective quality of a brain would be singularly responsible for the emergence of an ego (the sense of "I"), the ability to learn in a self-directed manner, the capacity for introspection, and all sorts of things which you can't do unless you think that there's a you to begin with ("think" and "you" being the operative terms there). I want to stress here that I classify this as merely an exciting conjecture in my epistemological scheme of things - my mind perceives agreement with the proposition, though I can think neither of a way to falsify it, nor even of what such an experiment might look like, so I refuse to label it a "belief."
However, were this the case, then a creative programmer who got a genetic algorithm started on trying to model and improve itself just might have the key that unlocks Pandora's box here. How this would happen, I don't know - but I strongly suspect that it would be the kind of thing that would quickly "snowball" from ponderously slow progress to ever-increasing returns (within hardware constraints, of course). For clarity, what we're talking about here is self-aware software, which would be able to both use itself and improve itself. This could possibly be done at a crude level with current technology - though I'm not sure what the aforementioned architectural requirements would be. If it could be done crudely with existing technology, it could quickly scale up in ways that we can't currently predict (making the Singularity an actual singularity). The key here, though, is that the Singularity is the AI itself - not its products or progeny. Implementing the products of a Singularity could take decades or centuries, but the Singularity is the development that makes those things possible.
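To make "a genetic algorithm started on trying to model itself" a little less hand-wavy, here is a deliberately crude sketch in Python. Everything about it is my own assumption for illustration - the genome layout, the fitness function, the parameters - and it is nowhere near the snowballing self-improver described above. It only shows the bare skeleton of the idea: each genome carries both a "behavior" and a "self-model," and selection rewards genomes whose self-model accurately predicts their own behavior.

# A minimal genetic-algorithm sketch of "selection for self-modeling."
# All names and parameters are arbitrary choices for illustration only.
import random

GENOME_LEN = 8                                        # 4 behavior weights + 4 self-model weights
STIMULI = [[1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0]]  # toy inputs from the "world"

def respond(weights, stimulus):
    # A genome's response is just a weighted sum of the stimulus.
    return sum(w * s for w, s in zip(weights, stimulus))

def fitness(genome):
    behavior, self_model = genome[:4], genome[4:]
    # Reward self-models that predict the behavior half's own responses.
    error = sum(abs(respond(behavior, s) - respond(self_model, s)) for s in STIMULI)
    return -error                                     # less self-prediction error = fitter

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]         # keep the best self-modelers
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
print("best self-prediction error:", -fitness(best))

The obvious missing piece is the feedback loop: in this toy, the self-model never gets to use its predictions to change the behavior it models, and that is exactly the step I can't specify - and exactly the step that would matter.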
The main take-away points here, and what fed the above, are 1) the Singularity is a technological milestone, not the products or progeny of that milestone; and 2) the hallmark of a technological singularity is that all models for predicting our progress become unreliable, not that progress becomes unlimited. There will always be a plateau, but a singularity has occurred when we can't say with any degree of certainty just where that plateau will be. I agree that the Singularitarians who expect "unlimited" progress are thinking magically and don't properly understand what they're talking about, and that a more conservative outlook is called for. However, I also think that the most extravagant predictions (except the totally impossible ones) are plausible to a degree (albeit a small one), for the same reason that Socrates could not have conceived of the LHC: it is so far beyond his experience that he would have had no way to think of it; it would have been utterly alien to him. So, too, may things become to us as a result of a technological development - maybe even in our lifetime. Some such developments will doubtless occur in the wake of the next technological singularity, AI or otherwise.