Beyond AI: Creating the Conscience of the Machine

by J. Storrs Hall

And if machine intelligence advances beyond human intelligence, will we need to start talking about a computer's intentions? These are some of the questions discussed by computer scientist J. Storrs Hall.

Drawing on a thirty-year career in artificial intelligence and computer science, Hall reviews the history of AI, discussing some of the major roadblocks that the field has recently overcome, and predicting the probable achievements in the near future.

There is new excitement in the field over the amazing capabilities of the latest robots and renewed optimism that achieving human-level intelligence is a reachable goal. But what will this mean for society and for the relations between technology and human beings?

Soon ethical concerns will arise, and programmers will need to begin thinking about the computer counterparts of moral codes and about how ethical interactions between humans and their machines will eventually affect society as a whole. Weaving together disparate threads from cybernetics, computer science, psychology, philosophy of mind, neurophysiology, game theory, and economics, Hall provides an intriguing glimpse into the astonishing possibilities and dilemmas on the horizon.

  • Language: English
  • Category: Science
  • Rating: 3.64
  • Pages: 368
  • Publish Date: May 1, 2007, by Prometheus Books
  • ISBN-10: 1591025117
  • ISBN-13: 9781591025115

Hall makes a convincing case that human-level AI is a virtual certainty, but the majority of his wonderfully written book focuses on where AI comes from, in terms of both the field's history and the technologies and theories that form the basis of AI research. The ideal targeting system, Wiener realized, was one that performed like a human brain: it would see the object, recognize it for what it is, consider what to do about it, and then instruct the limbs to react. "It became clear to Wiener and Rosenblueth that there were some compelling parallels between the mechanisms of prediction, communication, and control in the mechanical gun-steering systems and the ones in the human body," writes Hall.

"Answering a question like 'when will AI arrive?' with a numerical date makes about as much sense as answering the ultimate question of life, the universe, and everything with '42,'" he writes, meaning that the question, in its broadness, is hopelessly inadequate to address the issue it seeks to explore. If we already use AI to land planes, play chess, and drive cars, then what does it mean to produce an intelligence that performs on a par with humanity? To address this question, Hall advances a framework of six stages for understanding how AI is currently developing and where it might go in the years ahead. During this stage, AI crosses over into human territory in terms of capability. "It's already apparent that some AI abilities (chess-playing) are beyond the human scale while others (reading and writing) haven't reached it yet," says Hall. Parahuman may also come to refer to humans who use computer devices such as implants to improve biological performance. The epihuman artificial intelligence would possess what Hall calls "weakly godlike" powers and the ability to outperform humans in virtually every way, but it would not be an unfathomably powerful being.
"We can straightforwardly predict, from Moore's Law, that 10 years after the advent of a learning (but not radically self-improving) human-level AI, the same software running on machinery of the same cost would do the same human-level tasks 1000 times as fast as we do," writes Hall. "Much the same thing will be true of hyperhuman AI," says Hall, "except where it has to interact with other AIs. The really interesting question, then, will be: what will it want?" In the end, the gorilla metaphor may be a more useful one for understanding AI than that of the beautiful statue springing to life.
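The arithmetic behind Hall's 1000× figure is just Moore's Law compounding. Assuming roughly one doubling of compute per dollar per year (the doubling interval is an assumption here, not a number from the book), ten years of doublings gives a factor of about a thousand:

```python
# A minimal sketch of the Moore's Law compounding behind Hall's estimate.
# Assumption: compute per dollar doubles about once per year.
doublings_per_year = 1
years = 10

speedup = 2 ** (doublings_per_year * years)
print(speedup)  # 1024, i.e. roughly the 1000x Hall cites
```

With a slower doubling interval (say 18 months) the same window yields about 2^6.7 ≈ 100×, so the exact factor depends entirely on the assumed cadence.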

The book starts with a letter to a future AGI in which Hall beseeches it to keep whatever conscience people have programmed into it. At the top, he suggests, there are homunculus SIGMAs -- little men that control the whole process, but only in terms of all the lower-level SIGMAs. He also postulates a micro-economic model of mind, in which agents compete with each other to perform tasks, and those with the best price/performance are selected. But for all that, he misses the essential conclusion of my book, namely that natural selection will also drive an AGI's morality. The book finishes with some analysis and predictions about the road to AGI, whether the future needs us, and the impossibility of predicting the future beyond the Singularity.
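The micro-economic model of mind described above can be sketched as a simple task auction: each agent bids a cost and an expected quality, and the one offering the best performance per unit price wins the task. This is an illustrative toy, not Hall's actual architecture; the agent names and numbers are invented.

```python
# Toy sketch of a market-style agent selection, as the review describes it:
# agents compete on price/performance, and the best ratio wins the task.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    price: float        # cost the agent bids to perform the task
    performance: float  # expected quality of its result


def select_agent(agents):
    """Pick the agent offering the best performance per unit price."""
    return max(agents, key=lambda a: a.performance / a.price)


bidders = [
    Agent("planner", price=4.0, performance=8.0),      # ratio 2.0
    Agent("heuristic", price=1.0, performance=3.0),    # ratio 3.0
    Agent("brute-force", price=9.0, performance=9.5),  # ratio ~1.06
]

winner = select_agent(bidders)
print(winner.name)  # heuristic
```

A cheap, merely adequate agent can outbid a stronger but costlier one, which is the selective pressure the market metaphor is meant to capture.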