Should “Prevention” Be a Core Principle of AI?


Intraspexion trains a Deep Learning algorithm, a form of artificial intelligence (AI), to learn about certain types of legal threats and then provide in-house attorneys with an early warning of those risks. This AI functionality has been developed to avoid or prevent litigation.
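
To make the early-warning idea concrete, here is a minimal, hypothetical sketch in Python. It is not Intraspexion’s system: a simple TF-IDF and logistic-regression pipeline (using scikit-learn) stands in for the deep-learning model described above, and the documents, labels, and alert threshold are invented for illustration.

```python
# Hypothetical sketch of a litigation early-warning classifier.
# Not Intraspexion's system; a bag-of-words model stands in for deep learning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: internal documents labeled 1 (litigation risk) or 0 (benign).
docs = [
    "employee complains about unpaid overtime and threatens to sue",
    "supervisor ignored repeated harassment reports from staff",
    "reminder: the quarterly budget meeting moves to thursday",
    "please review the attached marketing plan for next quarter",
]
labels = [1, 1, 0, 0]

# Train the stand-in classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# Score a new document and decide whether to alert in-house counsel.
ALERT_THRESHOLD = 0.5  # invented for illustration
incoming = "several workers say they were never paid for overtime hours"
risk = model.predict_proba([incoming])[0][1]
print(f"predicted litigation risk: {risk:.2f}")
if risk >= ALERT_THRESHOLD:
    print("Early warning: route this document to in-house counsel")
```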

However, there is a risk much greater than litigation: AI itself. A number of well-respected thought leaders have worried that superintelligent AI poses an existential threat to humanity. See, e.g., Gerd Leonhard, Technology vs. Humanity: The Coming Clash Between Man and Machine (Fast Future Publishing Ltd 2016).

We’d like to address that risk here and propose at least one way to reduce it.

In a recent TED talk published on September 29, 2016, Sam Harris asked: “Can we build AI without losing control over it?”

http://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it.

Harris made three assumptions.

Harris’s first assumption is that intelligence is simply a matter of processing information in physical systems, e.g., our brains and our computers: in our brains, atoms doing what atoms naturally do; in our computers, operations carried out as programmed, for example in neural networks.

Second, Harris said, we will continue to improve our technology as we are able. In other words, the AI train has already left the station; there are no stop signs and no brakes, except, perhaps, those applied by the people developing AI.

Third, Harris maintained that we most likely do not currently stand at the high-water mark of intelligence. There is no evidence of a ceiling, a peak, or any level of utmost intelligence.

Given just these assumptions, Harris suggests that super AI is a looming existential threat, in part because the vast majority of us have failed even to notice the risk.

We think the reason for this failure is that the stories about the risk are fictional, while real tragedies have been few and far between.

The poster child for one risky scenario is The Terminator franchise (created by James Cameron and Gale Anne Hurd in 1984). Will computers, given algorithms that can learn, program themselves, advance at an exponential rate, and develop so autonomously that they go to war with humanity?

A second risk, which Harris describes, is the Ant scenario. Will super AI computers advance so quickly and so powerfully that they outgrow any concern for humans? Will they come to treat us with indifference, as if we were ants to them, generally not worth their time or attention?

Finally, there’s the scenario of The Matrix franchise (by the Wachowski Brothers in 1999). In The Matrix, the world is shared by three competing camps that go to war with one another: the Humans, who use software and machines but are abused by the Software world (the Matrix); the Software world itself; and the Machine world, which also uses software but which Neo finally convinces to join forces with the Humans because the Software world has become too powerful.

Each of these dystopic visions has become increasingly familiar to us as AI develops at an ever-quickening rate.

But they are fictional and they have a history. Depictions of techno-disasters are at least as old as The Time Machine by H. G. Wells. In 1895, Wells imagined that class warfare had split humanity into two groups living in a symbiotic relationship.

However, in The Time Machine, a Time Traveler discovers that by the year 802,701 this human struggle has grown deeply dysfunctional. One group of humans, the Morlocks, is ugly, lives underground, and operates machinery. The Eloi are beautiful and enjoy the surface of our planet, but have no technology. The symbiosis between the two rests on an arrangement through which the Morlocks feed and clothe the Eloi, who have come to depend on them totally. Horrifically, the Morlocks have come to use the Eloi as food.

Yet despite these disastrous “human vs. machine” images, there are at least two hopeful scenarios.

The first hopeful approach comes from the I, Robot series of short stories by Isaac Asimov (1920-1992). Asimov postulated the Three Laws of Robotics and later added a “Zeroth” Law that generalizes them.

Asimov’s First Law is: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Zeroth Law, the most foundational law of Robotics, generalizes the First: A robot may not harm humanity or, by inaction, allow humanity to come to harm.

As a fundamental operating principle of AI, can humanity ensure that all computers and algorithms are programmed to obey laws of thought and development that are only beneficial to humans?

In a second scenario (WarGames, a 1983 movie), humans realize that they can teach a computer to learn the ethical concept of “futility.”

In WarGames, a teenager asked a computer to play a game called “Global Thermonuclear War,” not realizing that the computer was housed at NORAD and thought the game was “for real.” As the movie neared its conclusion, the computer had control of the silos and the missiles, and was striving to access the launch codes.

However, when the computer (“Joshua”) was directed to play Tic-Tac-Toe against itself, it learned futility, and in the last second before launching, shut down the “game.” When the computer “stood down,” it voiced this memorable line:

“A strange game. The only winning move is not to play.”
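
The “futility” Joshua learns has a precise, checkable form: under perfect play, Tic-Tac-Toe always ends in a draw. The short minimax sketch below (our own illustration in Python, not anything from the film) verifies that neither player can force a win from the empty board.

```python
# Minimax proof of "futility": perfectly played Tic-Tac-Toe is always a draw.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value under perfect play: +1 if X can force a win, -1 if O can, 0 if a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    nxt = "O" if player == "X" else "X"
    children = [value(board[:i] + player + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell == "."]
    return max(children) if player == "X" else min(children)

result = value("." * 9, "X")  # start from the empty board, X to move
print({1: "X wins", -1: "O wins", 0: "draw"}[result])  # prints "draw"
```

Running the sketch prints “draw”: neither side can force a win, which is precisely the lesson Joshua learns before standing down.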

Harris is correct in assuming that we are not going to stop using computers or improving on the algorithms which enable computers to learn.

But why not use AI in the form of Deep Learning to teach futility and other ethical concepts?

Why not teach our computers to provide humans with an early warning of some threat or other maleficent circumstance?

Doing so would enable us to avoid potential harms from external forces, e.g., asteroids, which could push us past adverse environmental tipping points. And imbuing our AI algorithms with this constraint could also help us avoid risks arising from within the computers themselves.

In other words, extrapolating from Intraspexion’s litigation early warning system, we need to build the concept of prevention into the super AI that is on the horizon.

At that point, prevention would be a constraint on every computer system: whenever the system recognized an outcome that would be harmful to humanity, it would provide us with an early warning. Or, if the harm to humanity were imminent, the system could take action by entering a “shut down” state.
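
In software terms, such a constraint could be a reviewing layer that scores each proposed action for predicted harm and then proceeds, issues an early warning, or shuts the system down. The Python sketch below is purely illustrative: the class, thresholds, and toy harm estimator are hypothetical names of our own, and in a real system the estimator would be a trained model rather than a keyword rule.

```python
# Hypothetical sketch of a "prevention constraint" reviewing layer.
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    PROCEED = "proceed"
    EARLY_WARNING = "early_warning"  # alert humans before acting
    SHUT_DOWN = "shut_down"          # harm imminent: enter a shut-down state

WARN_THRESHOLD = 0.3      # invented threshold for a human alert
SHUTDOWN_THRESHOLD = 0.9  # invented threshold for imminent harm

@dataclass
class PreventionConstraint:
    # Maps a proposed action to a predicted-harm score in [0, 1].
    harm_estimator: Callable[[str], float]

    def review(self, proposed_action: str) -> Decision:
        score = self.harm_estimator(proposed_action)
        if score >= SHUTDOWN_THRESHOLD:
            return Decision.SHUT_DOWN
        if score >= WARN_THRESHOLD:
            return Decision.EARLY_WARNING
        return Decision.PROCEED

# Toy harm estimator, for illustration only.
constraint = PreventionConstraint(lambda action: 0.95 if "launch" in action else 0.05)
print(constraint.review("generate quarterly report"))  # Decision.PROCEED
print(constraint.review("launch the missiles"))        # Decision.SHUT_DOWN
```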

After all, we already think about prevention in the beneficial contexts of preventive maintenance and preventive medicine.

And clearly, a “shut down” state might have saved about $1 billion had a computer shut down the reactor at Three Mile Island. Better still, a “shut down” state might have saved 11 lives and $42 billion had a computer safely shut down BP’s Deepwater Horizon drilling rig.

But these tragedies preceded AI as we know it now. Given the rapid growth of AI’s power, we can foresee even greater capabilities. But rather than merely worry that AI, like any powerful technology, can be used for good or for evil, we need a stop sign or a brake for the AI train.

These considerations have led us to propose a prevention constraint as a core teaching for AI as it matures and spreads throughout our civilizations.

In ethical terms, this prevention constraint means that AI should be developed and used not only to achieve good outcomes for humanity, but also to prevent bad or harmful ones.

In business terms, the same principle is just as easy to understand. A focus on prevention means that AI should be deployed not only to increase revenues, but also to prevent unnecessary and avoidable costs.

If we choose not to control the developmental possibilities of AI, we will have chosen by default to turn our destiny over to the one AI chooses for us.

# # #

by Nick Brestoff and Larry Bridgesmith – © 2016 Intraspexion Inc.

Nick Brestoff was educated in engineering at UCLA and Caltech and in law at USC. He practiced litigation for 38 years before retiring in 2014. He is the founder and CEO of Intraspexion.

Larry W. Bridgesmith is an Adjunct Professor of Law at the Vanderbilt School of Law and is associated with Vanderbilt’s Program on Law & Innovation. He is a co-founder of Intraspexion. Prior to joining Intraspexion, he organized a conference at Vanderbilt on AI and the Law.