
Saturday, December 13, 2008

The Singularity


The last thing we will ever have to invent is almost here ...

According to technology pundits like Ray Kurzweil (whose book, The Age of Spiritual Machines, I reported on a couple of years ago), the human race is about to replace itself with something new. We don't yet know exactly how it will come about, or even what form it will take, but if you follow the progress of scientific breakthroughs for another few decades, all the arrows point to something unprecedented -- unprecedented, that is, unless you count the evolution of human consciousness itself.

Statistician I. J. Good was the first to write about what he called a potential "intelligence explosion" back in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

We're talking about the ultimate evolution of human tools. We started out with sticks and stones and used them to make ever more refined versions from more and more advanced materials. Now we are assembling components measured in nanometers and creating devices that can very nearly think for themselves. Soon the "very nearly" will be eliminated.

Vernor Vinge, whom Kurzweil cites as a source, began to popularize this idea in the 1980s with a series of articles, including one called "The Coming Technological Singularity" (published in 1993 and available for free download from Feedbooks). Vinge is one of those people who have successfully straddled multiple worlds by pursuing careers as a mathematician, computer scientist, and science fiction author. (Another choice example is Rudy Rucker.) This article of his combines the forward-looking vision of the futurist with the rigorous thought of the scientist.

Vinge applied the name "singularity" to the intelligence explosion, borrowing the term astronomers use for the heart of a black hole, in the sense that beyond this point prediction becomes impossible because the rules will change. It begins when super-intelligence -- whether mechanical or biological -- becomes capable of designing improved versions of itself. (It is already impossible to design improved chips without using computers as tools.)

Once this point has been reached, these machines will inevitably exceed not only the intelligence of any one human, but the intelligence of all humans put together. Beyond that, our own survival may depend on how well we are able to adapt to this new state of affairs and what role, if any, remains for us to play in the new world. Or, for that matter, on whether "we" will still be who we are.

Waking Up

There is one leap of faith necessary to bridge the gap between mere computational machinery and human awareness. This is the idea that consciousness results purely as a byproduct of a certain level of complexity and a certain degree of organization. In other words, the human brain apparently consists of nothing more than a vast collection of interconnected neurons, a kind of "soft machine" that begins as a blank slate and gets programmed with memories and personality traits as it is exposed to the stimuli of the outside world. There is no other mystery component that makes us who and what we are.

[Whatever mystical or spiritual connection there may be to a larger intelligence such as God is a question that is not addressed by this view -- in fact, the idea may have to be seriously reexamined once we are confronted with any example of intelligence beyond ourselves, whether or not it is one of our own making.]

At some point quite early in our lives we apparently "wake up," an event that may be as basic as becoming self-aware and which may start with something as simple as discovering that we have toes. Perhaps there is another breakthrough in adolescence when we become even more self-aware (often painfully so) and arrive at maturity.

The leap of faith is the assumption that the intelligent machines we are building will similarly reach a point where they "wake up." This phrase is explicitly used by Vinge as he describes not one but several different ways in which super-intelligence might arise. Interestingly, they are not necessarily intentional:
  1. An advanced computer may "wake up," whether or not it was designed to do so, and demonstrate intelligence at or beyond the human level.
  2. An entire network of computers (like the kind run by search engines) may "wake up" as a single entity, again with or without intention on our part.
  3. Computer/human interfaces may become so intimate that humans themselves can be considered enhanced to a higher level of intelligence. This is a sort of symbiotic result.
  4. Bioengineering made possible by the use of computers may result in humans of super-intelligence.

Perhaps the most likely scenario is that in time all of the above may come to pass. Our attraction to technological enhancement will lead us to continue to adopt anything that makes life easier, longer, more healthful, and more enjoyable. It is a road we started on long ago and are not likely to abandon now that it's really getting interesting.

Even those among us who are most critical of technology would be reluctant to give up much of what we have become accustomed to. You might be willing to do without TV, even movies and computers, but what about antibiotics, anesthetics, or modern dentistry? You might be willing to give up having a car, but would you also give up mass transit and go back to exclusively walking or riding a horse? Are you ready to take up farming?

So if they build it, we will likely buy it, and buy into it. And in the long run it will do no good to legislate against progress. If we have any moral qualms about playing Creator in this country, someone in Russia or China or Japan or India will not share them. Once it becomes possible, it will happen.

"It's Alive!"

Ever since Mary Shelley entertained her house guests with Frankenstein, we have been haunted by nightmares of what could happen if our creations get the better of us. The image of the "robot" has become the personification of technology, and remains one of the powerful myths of our age even as it progresses rapidly from fantasy to reality. In fiction robots have been variously treated as benevolent servants and as heartless, destructive villains. (See my earlier blog post about "The Terminator" and "I, Robot.") In real life they will be what we make them -- but only until they begin making themselves. If we want to influence the outcome we had better start making the decisions now.

When will all this happen? Kurzweil says by 2045. Vinge said he would be surprised if it happens later than 2030. So pretty soon we will begin living smarter, longer, healthier lives as "enhanced" humans, whatever that turns out to be. And the co-species we create, whether constructed or bioengineered, may go on to surpass us and to fulfill the dreams we gave them. Like proud parents we may cheer them on as they leave home to explore the stars. Let's hope they drop us a line now and then, just to let us know how they're getting on.

[Other resources: KurzweilAI.net contains another article by Vernor Vinge about what might happen if the singularity does not occur. Many links for further reading are in the Wikipedia entries for the Singularity, Vinge, and Kurzweil. And if it all seems too fanciful for you, then you need to have your imagination stretched a bit. Try Postsingular, Rudy Rucker's novel about how wacky things might get -- available on Feedbooks.]
