Friday, December 07, 2007
Science fiction is rife with great stories of artificial intelligence (AI), both rogue and controllable, cold as a machine or parental in affection. Are these stories naive fantasies, or possible realities?
Fast computing machines and the software that runs on them are both still in their infancy. At best, the science of programming is about 150 years old (starting with Lady Ada Lovelace, the first programmer). What will it be like when the science has been around as long as iron smelting (about 3,000 years)? With time will come greater complexity and computational power.
As we near the dawn of an age where a machine may pass a Turing test, I sometimes wonder how we humans might be treated after a machine surpasses us in intelligence. How can we judge how an AI might treat humans?
We can make guesses about how we would be treated by looking at the computational/ecological niche in which an AI could arise. How does the host organization see us and deal with us? How it deals with us now will strongly influence how an AI arising from it will deal with us in the future, paralleling the biological founder effect. Take a glance at the niches of some computing organizations that could be first to have the massively parallel computing power an AI would require.
Scenario 1: Grassroots Distributed Efforts
Distributed computing efforts like SETI@home, the Great Internet Mersenne Prime Search, and Folding@home could host a massively parallel intelligence. These efforts benefit both the users (us) and the organization. The users feel satisfaction for contributing otherwise idle processing time to the betterment of knowledge for us all, and are at least passingly interested in the answers that result from the effort. The organization benefits directly from contributions of computational processing time that further its goals. This sort of organization would give rise to an AI that sees advantage in working closely with humans for common benefit. We've got knowledge to uncover; let's do it together. Six billion brains and a massively parallel AI think better than six billion brains or a massively parallel AI alone.
Scenario 2: Google.
Google sells ad space in the flickers of our attention at the right side of the screen. The better targeted the ads, the greater the clickthrough rate. Google ads are often win-win-win scenarios. If you click on an ad, you are usually clicking because it interests you and you want more information. Win. When your interest has been piqued, Google makes money. Win. With your click, the advertiser introduces you to their product, possibly making a sale. Win. It is in Google's best interest to sort patterns (contained in web page text) as well as it can, so that it can serve content and searches intelligently. Money paid to Google for information passed from the advertiser to the user is the food of the corporation.
Its fitness function is proportional to its profit (a portion of which pays to increase its size, computational ability, redundancy, and robustness), which is in turn proportional to how optimally it "milks" the market by serving ads to human eyes. Google's search technology and infrastructure have significantly improved how humans share and access knowledge. Any improvement in knowledge-search technology makes things better for both us and Google.
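As a toy sketch of that selection pressure (every name and number here is made up for illustration; this is not anything Google actually computes): profit scales with clickthrough rate, clickthrough rate improves with targeting quality, and fitness tracks profit, so selection favors better pattern-sorting.

```python
# Toy model: fitness ~ profit ~ clickthrough_rate * ads_served * value_per_click.
# All functions, parameters, and numbers are hypothetical illustrations.

def clickthrough_rate(targeting_quality):
    """Fraction of served ads that get clicked, rising with targeting
    quality (0.0 to 1.0): a 1% baseline up to a 10% ceiling."""
    return 0.01 + 0.09 * targeting_quality

def profit(targeting_quality, ads_served=1_000_000, value_per_click=0.50):
    """Dollars earned; a portion would be reinvested in more computation."""
    return clickthrough_rate(targeting_quality) * ads_served * value_per_click

def fitness(targeting_quality):
    """Fitness proportional to profit, so better targeting is selected for."""
    return profit(targeting_quality)

# Better pattern-sorting (targeting) means higher fitness:
assert fitness(0.9) > fitness(0.1)
```

The content-indifference discussed below falls out of this model too: nothing in the profit function cares *what* the ad or query is about, only that targeting matches interest.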
Google's AI niche will be that of a powerful servant to its users. This AI will exceed our abilities in thought, but the user interface will ultimately evolve to mimic a personal servant, with the servant's characteristics on a sliding scale of servitude tailored to the individual. This spectrum ranges from a low-level interface (i.e., the "classic" Google) to a soft-spoken and patient avatar who can answer a range of questions but simultaneously whispers in your ear about a product somebody is selling that might be related to your question. The more communication humans have with the AI, the more the humans learn, and the more money the AI makes (to reinvest in improving itself). Thus, the AI is selected toward whatever makes an optimal amount of money, which is a function of increasing the information going into human brains. A further beauty of this situation is that Google should optimally be content-indifferent. It doesn't matter what you want to know, whether your interest is irrational numbers or Jessica Alba's chest.
Humans still go to work, produce the initial wealth, and carry on as always, but now a portion of the human-produced wealth will channel into the AI, which ultimately still benefits us as a whole by increasing and enhancing communication between individuals. Google will be a farmer AI, which harvests wealth from us as we harvest eggs from chickens, except we'll all get our own personal farmer that does a lot of stuff for us and really has a lot of good ideas.
Scenario 3: Microsoft.
Microsoft wants control over your computer. Microsoft AI will want control over your life for the benefit of Microsoft. Microsoft will stop you from playing music or video it doesn't think you're allowed to listen to. Microsoft will condescendingly patronize you with a dancing paperclip. (e.g. "Are you SURE you don't want to buy the next upgrade? I think you really should. If you don't, I'll cut you off in two years.") Pay Microsoft its money or rot in the stone age, peon. Microsoft AI despises Google AI and will frequently try to undermine and destroy it. Have no other gods before Microsoft! A Microsoft AI will rule us with an iron fist and make us conform to its whims. Doubleplusungood.
You make the choice.
Before this earthquake of humanity arrives, there's still time for the pebbles to vote. What kind of AI would you want for the future of humanity? Now's the time to figure it out.
And I, for one, welcome our new insect overlords. I'd like to remind them that as a trusted TV personality I can be helpful in rounding up others to toil in their underground sugar caves.
Burton MacKenZie www.burtonmackenzie.com
More or less on this topic, here's a teaser for a good-looking rogue-AI movie: Yellow, by Neill Blomkamp.