How to Avoid the Potential Dangers of AI, Robots and Big Tech Companies


If you plan to live another 10 years, you should expect to live in a world where machines do things you don’t like doing today. Shooting for another 20? Even more will be done without your lifting the proverbial finger. It’s not only menial tasks such as cleaning, laundry and dishes; high-end services previously out of your reach will come within your economic grasp. Your personal robot will know you better than you know yourself. This almost unimaginable lifestyle could become routine for the masses, given the tangible achievements of artificial intelligence (AI) and robotics to date and the low-latency, high-bandwidth connectivity that 5G is on track to provide.

Despite the excitement of this likely new reality, however, AI, robots and big companies are three things a lot of people fear. Big companies have been around for a long time; AI and robots are newcomers we’ll have to learn to live with. The imminent rollout of 5G infrastructure could usher in a technology revolution greater than any that has preceded it. The new networks will be a thousand times faster than 4G, which means an entire HD film, for example, could be downloaded in seconds. High-bandwidth uploads will also be possible, so that what a robot sees, a massive amount of data, can be sent to and processed in real time by a brain in the cloud. Robots will also be able to communicate with one another at high speed, and network delays will be so tiny as to be comparable to the unnoticed delays within our bodies between nerve cells and our brains. Major network operators are beginning 5G rollouts in select cities by the end of 2018, just around the corner.

Fears of AI and robotics are well-founded; reasonable people should be concerned about what their introduction may unleash. AI can compete with our brains and robots can compete with our bodies, and in many cases they can already beat us handily. And the more time passes, the better these emerging technologies will become, while our own capabilities are expected to remain more or less the same.

The fact that big tech companies are among the leading implementers of AI and robotics further contributes to the wariness many now feel. Individuals have had a love-hate relationship with big companies since long before the present era of big tech, but when big tech, AI and robots work together, the prospect becomes scarier still.

To be clear, big tech companies have become successful by solving problems in our lives, particularly in a way that is economical and sustainable. The private sector is the key driver of the most prosperous economies. Nevertheless, concerns regarding how powerful companies may choose to design new technologies are justified, given that their primary interest is to maximize profits for their shareholders. Many of them thrive on not-so-transparent business models that collect and then leverage data associated with users. Tomorrow’s big tech companies will leverage intelligence (via AI) and control (via robots) associated with the lives of their users. In such a world, third-party entities may know more about us than we know about ourselves. Decisions will be made on our behalf and increasingly without our awareness, and those decisions won’t necessarily be in our best interests.

These issues are foreseeable, and the stakes are enormous. So, what can be done? In fact, a path exists: develop policies that protect consumer interests and allow robots to be trusted across all aspects of our lives, including our homes. To achieve the highest possible public trust in the benefits of AI and robotics, there are three imperatives for action.

First, the industry’s AI and robotics leaders must start to integrate consumers’ interests in safety, privacy and personalization into their technology offerings, something we did not see in the development of the internet and mobile markets. Governments are too slow and too far behind the technology curve to create effective solutions, and regulation is burdensome, which leaves companies uniquely positioned to get ahead of the developing situation.

By getting out ahead of these issues, companies can not only avoid consumer and government backlash; more importantly, they can remove barriers to better return on investment before those barriers arise. This foundation of trust will be invaluable going forward, allowing companies to navigate unforeseen complications and emerging issues.

Second, big tech companies and their smaller start-up siblings should support common agreements, voluntary best practices, principles, protocols, standards and other policies, and yes, even regulation as the very last resort. Entities need to be able to anticipate the behavior of other entities, whether machines, humans, companies, governments or other stakeholders. The more policy structure in place that protects consumer interests, the more trust AI and robots will deserve. Adherence to such a policy structure should also protect companies from unreasonable liability expectations, since no technology will be perfect.

Finally, a trusted entity should design and implement a reliable and verifiable means of creating and protecting the new AI and robotics policy structure from inappropriate influence. Given the vast economic growth anticipated for AI and robotics businesses, economic factors will exert continuous pressure on any policy structure.

Read the source article in Scientific American.