Caging the Demon

As computers become more powerful and solve more problems, there is growing concern that they could evolve into something capable of rising up against us and posing an existential threat. After reading a recent book on artificial intelligence (AI), Superintelligence, Elon Musk said:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

This is a tough claim to evaluate because we have little understanding of how the brain works, and even less understanding of how current artificial intelligence could ever lead to a machine that develops any sense of self-awareness, or an original thought for that matter. Our very human minds use imagination to fill in the gaps in our understanding and insert certainty where it doesn’t belong. While “dangerous” AI is a future hypothetical that no one understands, there is no shortage of experts talking about it. Nick Bostrom, the author of Superintelligence, is a professor in the Faculty of Philosophy at the University of Oxford, director of the Future of Humanity Institute, and director of the Programme on the Impacts of Future Technology at the Oxford Martin School. Musk, one of the most admired futurists and businessmen today, is joined by other thought leaders such as Ray Kurzweil and Stephen Hawking in making statements such as: “Artificial intelligence could be a real danger in the not-too-distant future. It could design improvements to itself and outsmart us all.”

Bostrom gives us a name for this hypothetical goblin: Superintelligence. He defines it as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” While we can all expect that the capability and interconnectedness of computers will continue to increase, it is Bostrom’s use of the word intellect that causes the most controversy.

Can an intellect arise from basic materials and electricity? While this question has theological implications, it seems a real possibility to many today and is in some sense a consequence of using evolution to form a complete worldview. When our current fascination with monism and Darwinism is combined with a growing awareness that both our reliance on machines and their capability are growing geometrically, we are primed to accept Bostrom’s reductionist and materialist statement:

Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).*
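
Taken at face value, the arithmetic behind that comparison is easy to check against the figures in the quote:

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2\times10^{9}}{2\times10^{2}} = 10^{7},$$

which is the seven orders of magnitude Bostrom cites.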

How could we mere humans ever compete? If we accept that the brain and consciousness are merely the result of chemical, electrical and mechanical processes, then they ought to be emulable in synthetic materials. While this would require breakthroughs in 3D printing, AI and chemistry, the question for a modern materialist is not if this is possible, but when it will occur. The argument might go: while we don’t have AI like this now, super-intelligent machines could evolve much faster than us and may consequently find little use for the lordship of an inferior species.

If some experts think this way, when do they think human intelligence and capability are likely to be surpassed? The short answer is that they don’t agree on a timeline, but there is a slight consensus that computers will eventually be able to match human intelligence. A survey conducted at the AI@50 conference in 2006 showed that 18% of attendees believed machines could “simulate learning and every other aspect of human intelligence” by 2056. The rest were split down the middle: 41% expected this to happen later and 41% expected machines never to reach that milestone. Another survey, by Bostrom, polled the 100 most cited authors in AI to find the median year by which experts expected machines to “carry out most human professions at least as well as a typical human”. In his survey, 10% said 2024, 50% said 2050, and 90% expected human-like intelligence by 2070. His summary is that leading AI researchers place a 90% probability on the development of human-level machine intelligence between 2075 and 2090. While his question shapes the results, and the results say nothing about a general or self-aware machine, he clearly has some experts agreeing with him.

But many others don’t agree. The strongest argument against self-awareness is that the intelligence of machines cannot be compared to human intelligence, because of a difference in purpose and environment. We process information (and exist) for different reasons. Currently, AI is custom-built to accomplish (optimize) a specific set of tasks, and there is no reason to assume an algorithm would automatically transition to excel at another task. Machines are not dissatisfied, and they harbor no resentment. Freedom is not an objective ideal for machines. Computers can only recognize patterns and run optimization algorithms. No current technology has shown any potential to develop into self-aware thought.
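
To make the “custom-built” point concrete, here is a minimal sketch of what a narrow optimizer actually is; the objective function is hypothetical and chosen purely for illustration, not taken from any real system:

```python
# A minimal sketch of a narrow "AI": gradient descent on one fixed objective.
# The objective below is hypothetical and exists only to illustrate the point
# that nothing in the loop can redirect the system toward a different goal.

def objective(x):
    # The single task this system was built for: minimize (x - 3)^2.
    return (x - 3.0) ** 2

def gradient(x):
    # Hand-derived gradient of the objective above.
    return 2.0 * (x - 3.0)

x = 0.0                        # starting point
for step in range(100):
    x -= 0.1 * gradient(x)     # move downhill on the one objective it was given

print(round(x, 4))             # converges near 3.0; it excels at this and nothing else
```

The system “wants” nothing beyond the objective it was handed; change the task and the code is simply useless until a human rewrites it.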

In the nineteenth century, Ada Lovelace speculated that future machines, no matter how powerful, would never truly be “thinking” machines. Alan Turing called this “Lady Lovelace’s objection” and responded with his basic Turing test (can a human distinguish between human and computer-generated answers?), predicting that computers would pass it within a few decades. Sixty years later, we are not even close, and we still haven’t seen anything like an original thought from a computer. John von Neumann was fascinated by artificial intelligence and realized that the architecture of the human brain was fundamentally different from any machine. Unlike a digital computer, the brain is an analog system that processes data simultaneously in mysterious ways. Von Neumann writes:

A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse.

That still hasn’t happened. Current technology isn’t even moving in the direction of original thought. Chess-winning Deep Blue and Jeopardy!-winning Watson won by quickly processing huge sets of data. Kasparov wrote after his loss to Deep Blue: “Deep Blue was only intelligent the way your programmable alarm clock is intelligent.”* The IBM research team that built Watson agrees that Watson had no degree of understanding of the questions it answered:

Computers today are brilliant idiots. They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans; there are many situations where computers can’t do a lot to help us. *

In fact, current technology might be moving in the opposite direction from self-awareness. According to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT:

These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.*

Because we don’t understand how self-aware thought develops, all we have is a fleeting mirage in the future telling us that superintelligence might be right around the corner. Without real science, the only data about this future come from our imagination and science fiction.

However, this might change. Betting against technology’s ability to accomplish any given task is a bad idea. Tim Berners-Lee makes a reasonable argument when he says, “We are continually looking at the list of things machines cannot do – play chess, drive a car, translate language – and then checking them off the list when machines become capable of these things. Someday we will get to the end of the list.”*

IBM and Qualcomm are currently building chips patterned after neurological processes, and they are developing new software tools that simulate brain activity. By modeling the way individual neurons convey information, developers are writing and compiling biologically inspired software. The European Union’s Neuromorphic Computing Platform currently incorporates 50×10⁶ plastic synapses and 200,000 biologically realistic neuron models on a single 8-inch silicon wafer. Like a natural system, the platform is not pre-programmed; it uses only logic that “evolves according to the physical properties of the electronic devices”.*
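
As a rough illustration of what “biologically inspired software” means at the level of a single neuron, here is a minimal leaky integrate-and-fire model, one of the standard simplified neuron models used in neuromorphic work. The parameters are generic textbook values chosen for this sketch, not figures from IBM, Qualcomm, or the EU platform:

```python
# A minimal leaky integrate-and-fire neuron. The membrane voltage integrates an
# input current, leaks back toward rest, and emits a spike when it crosses a
# threshold -- the "program" is the dynamics themselves, not a list of instructions.
# All parameter values here are generic illustrative defaults.

def simulate_lif(input_current=1.8, dt=1.0, steps=200,
                 tau=20.0, v_rest=0.0, v_reset=0.0, v_threshold=1.0):
    """Return the times (in ms) at which the neuron spikes."""
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Leak toward the resting potential while integrating the input current.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:          # threshold crossing: emit a spike
            spike_times.append(t * dt)
            v = v_reset               # reset the membrane after the spike
    return spike_times

print(simulate_lif())  # a regular spike train for a constant input current
```

A wafer-scale platform like the one described above wires hundreds of thousands of such units together with plastic (adaptive) synapses, so behavior emerges from the physics of the circuit rather than from hand-written rules.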

Should such a project produce self-awareness, how dangerous would it be compared to other existential threats? The future will clearly bring higher interconnectivity and greater dependence on machines, and machines will continue to become more capable. I agree with Erik Brynjolfsson and Andrew McAfee when they write in The Second Machine Age:

Digital technologies—with hardware, software, and networks at their core—will in the near future diagnose diseases more accurately than doctors can, apply enormous data sets to transform retailing, and accomplish many tasks once considered uniquely human.

Any time there is a great deal of interdependency, there is also a great deal of systemic risk. This will apply to our transportation networks, healthcare, and military systems, and it is a particular problem if we can’t build much more secure software. However, the threat here is malicious use combined with vulnerable software, not rogue AI. In this context, AI is most dangerous in its ability to empower a malicious actor. If, in the future, our computers are defended automatically by computers, then a very powerful AI will be best equipped to find vulnerabilities, build exploits and conduct attacks. AI will also be critical to innovation and discovery as both humans and computers collaborate on society’s hardest problems. To be most ready for this capability, the best strategy is to have the best AI, which is only possible with a well-funded, diverse and active research base.

However, what if science develops a superior artificial intellect? Waiting to pull the power cord is not a wise strategy. Isaac Asimov provided us with three laws to follow to ensure benevolent interactions between humanity and machines:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Clearly, military systems will be developed which don’t follow these laws. While they have pervaded science fiction and are referred to in many books, films, and other media, they do little to guide a national strategy for protecting us from rogue AI. Bostrom proposes regulatory approaches such as pre-programming a solution to the “control problem” of how to prevent a superintelligence from wiping out humanity, or instilling the superintelligence with goals that are compatible with human survival and well-being. He also proposes that research be guided and managed within a strict ethical framework. Stephen M. Omohundro writes that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways, and he proposes developing a universal set of values in order to establish a “friendly AI”.

The Machine Intelligence Research Institute has the mission of ensuring that the creation of smarter-than-human intelligence has a positive impact. Its researchers are working to ensure computers can reason coherently about their own behavior and remain consistent under reflection, to formally specify an AI’s goals so that the formalism matches its designers’ intentions, and to ensure those intended goals are preserved even as an AI modifies itself. These are worthy and interesting research goals, but if this science is formally developed, it will only ensure that benevolent designers produce safe systems.

These approaches fail to consider the difficulty of accounting for unintended consequences that occur when goals are translated into machine-implementable code. A strategy that relies on compliance also fails to account for malicious actors. We should develop the best and most diverse AI possible to protect us from malicious activity, whether it is human-directed or not. Such a strategy accounts for Bostrom’s main point that the first superintelligence to be created will have a decisive first-mover advantage and, in a world where there is no other system remotely comparable, it will be very powerful. Only a diverse array of AI could counter such a threat.

Fortunately, since we are still dealing with a hypothetical, there is time to explore mitigation options as AI develops. I also agree with Bostrom’s Oxford University colleagues who suggest that nuclear war and the weaponization of biotechnology and nanotechnology present greater threats to humanity than superintelligence. For most of us, there is a much greater danger of losing a job to a robot than losing a life. Perhaps the greatest threat is an over-reaction to AI development that prevents us from developing the AI needed to solve our hardest problems.

Responses to “Caging the Demon”

  1. Mooch

    Tim, Very well written. You’ve captured the three points of view extremely well. And I totally agree that a weaponized SI cyborg in the hands of a self-aware adversary will have to be defeated by a counter-SI cyborg in the hands of a self-aware defender. We see this already. Perhaps not in cyber but in other areas where high-speed processing and algorithms are used. If an air-to-air weapon is using a Bayesian algorithm to calculate its impact with its target, the missile flying out to intercept that threat had better be calculating its intercept trajectory with a similar algorithm. The best algorithm wins, right?

    Since we’ve had so much discussion on this topic already, I don’t want to rehash the same old ground. Suffice to say for me that the missile that completed the intercept will never turn around and high-five with his buddy. The operators will. My arguments about rocks, etc. still hold up and I’m still firmly in that camp (i.e., no self-aware machines). I will say however that what separates the theologian from the materialist is that the materialist is painted into a corner. The materialist cannot be wrong with regard to their belief concerning not “if” but “when” machines will become self-aware. To suggest anything different contradicts their belief that there is no God. Theologians, on the other hand, are not painted into the same corner. Should it occur, those of us who believe will be standing by to cast out the demon, with a bit of Holy water and some prayer. That is because self-aware super computers will still exist within our framework of belief. God still works in an age of self-aware machines. Therefore, just like in the case of the air-to-air intercept, it will simply take the equivalent of a cyborg Billy Graham to reach them…always asking the same question, “H@V3 U C0NC3D3R3D J3$U$?”
