We Need AIs That Proactively Do Good, Not Just Avoid Doing Egregious Harm


Guest Editorial by Dr. Ben Goertzel

As AI technology rapidly transitions from research and science fiction into industry, commerce and everyday life, the ethics of AI is understandably attracting more and more attention. There are futuristic ethical questions, such as the potential of artificial superintelligence to annihilate humanity and take over the world. And there are immediate-term practical ethical questions, like the use of AI face recognition for government surveillance, or the use of AI analytics for advertising and other types of “corporate mind control.”


To their credit, the megacorporations that control most of the world’s deployed AI these days are taking these ethical issues seriously. Google, for instance, has articulated a set of serious ethical principles for its use of AI, and has clearly stated it will not partake in the application of AI in particular ways it deems unethical, e.g. in weapons technology or in “technologies that gather or use information for surveillance violating internationally accepted norms.”

In the end, however, I think there is a strong argument that such ethical principles are not going to be enough. Principles like Google’s “avoid creating or reinforcing unfair bias” or “be socially beneficial” are certainly much better than their absence or opposites. But they also direct attention away from more fundamental questions, such as: What are the core goals that motivate this AI’s behaviors and choices? What is the basic nature of this AI’s relationship with the humans it interacts with?

The ethics principles posed by corporate AI giants tend to focus on promising not to create AI that does egregiously bad things in the course of carrying out the main corporate mission of maximizing shareholder value. But what humanity really needs may be AIs that are explicitly oriented toward being good and helpful, rather than just AIs that avoid clear-cut forms of harm.

For instance: If the core goal of an AI is to observe what people do and report it to the government, and the basic nature of the AI’s relationship with people is to watch them and report on them, then even if the AI is ethically guided to obey “internationally accepted norms” for surveillance, it is not going to be a true friend to the people it is watching. In the end its goal is not to help the people it watches, and its relationship is not one of deep mutual empathy, but rather one of watching and reporting.   

There may be an argument that the surveillance in question is socially beneficial, in the sense that someone calculates the crimes it prevents to be worse than the loss of privacy it entails. But as the surveillance AI grows more and more intelligent, if it ends up growing beyond the specific programming provided by its creators, it will likely grow in a direction guided by its goals and relationships — i.e. it will likely grow into a superintelligent, superpowerful spy system. And where will it grow from there?

Or, suppose the core goal of an AI is to sell people things (the core goal of most of Google’s commercial AI systems), and its basic relationship with people is to convince them to buy things. Such an AI may do social good via directing people to buy things they will enjoy and be glad they bought. Of course, it will also end up encouraging people to buy more and more stuff, which will have indirect social consequences of debatable desirability — like requiring people to somehow keep obtaining more and more money, and directing people away from pursuits that provide satisfaction without requiring ongoing significant monetary expenditure.

It is of course preferable if an advertising-oriented AI system is not racist or otherwise unpleasantly biased in its actions, and displays accountability in its use of customer data, etc. But still — one is creating an AI whose goal is to sell, and whose basic relationship with people is one of salesman and mark.

On the other hand, envision an AI whose basic goal is to help people achieve greater and greater levels of satisfaction in their minds and bodies, by whatever means happen to work. And suppose this AI works toward this goal, in part, via establishing relationships of love and friendship with the people it interacts with. This would be something qualitatively different than current commercial AI systems. An AI that avoids outright bad behavior while spying or selling is one thing; an AI that fundamentally aims to understand and help is another.

It is realistic to create an AI ecosystem in which a few top-level goals like spying and advertising don’t dominate the scene. Why not have a vast teeming network of AIs, each with its own goals — including AIs whose goals are more purely beneficial, and AIs that seek out “I-You” relationships with people rather than “I-It” relationships like watcher-watched and salesman-mark? This overall network of AIs will not have a unitary goal to the extent that big tech company AIs do now, and will not have simplistic “I-It” relationships with the humans that interact with it. It will interact with people in a complex and nuanced and ever-shifting way, embodying a novel form of “I-You” interaction that sensitively depends on the nature of both parties in the interaction.

To sum this point of view up in a few principles, one might say:

  • Create at least some AIs whose explicit goals are to help people, and whose human relationships are fundamentally founded on empathy.
  • Foster the flourishing of a diverse network of AIs with various different goals and relationship types.
  • Craft this network so that human/AI-network interactions are richly interactive and, at least a reasonable percentage of the time, “I-You” in nature.
  • And yes, also avoid creating AIs that actively do nasty things like kill people, make racist or other similarly biased judgments, or perform oppressive surveillance.

Via this approach, one aims to foster a global AI network that is proactively beneficial to humans, rather than just avoiding egregious harm while achieving its other goals.

A superintelligence that emerges from a decentralized AI network is by no means guaranteed to be beneficial to humans — but it would seem to have better odds than a superintelligence that emerges from an AI oriented mainly toward, e.g., selling or spying.  And en route to the emergence of superintelligence, a world with this sort of AI ecosystem will be a lot more fun to live in, with a greater diversity of AI tools carrying out various functions including the spreading of joy and fulfillment.

Ben Goertzel is the founder and CEO of SingularityNET, a blockchain-based AI marketplace. He is also chief AI scientist for the Loving AI project, which aims to help people achieve “blissful states of consciousness and positive transformation.”

For more information, visit Loving AI.