Should we fear Artificial Intelligence?

If you watch films and TV shows in which AI has been exploited to create any number of apocalyptic scenarios, the answer might be yes. After watching Blade Runner, The Matrix or, as a more recent example, Ex Machina, it's easy to understand why AI touches off visceral reactions in the layman.

It’s no secret that automation has posed a real threat to lower-skilled workers in blue collar industries, and that has grown into a fear of all forms of artificial intelligence. But a lot of complexities stand between where we are today and production AI, particularly the struggle to bridge the AI chasm. In other words, the type of AI Hollywood suggests we should fear, taking our jobs and possibly more, is a long way off.

At the other end of the pop culture spectrum, we have people who have embraced AI as the future of mankind. Google’s chief futurist Ray Kurzweil is a great example of thinkers who have championed AI as the next step in the evolution of human intelligence. So which version is our AI future?

The truth likely lies somewhere in the middle. Artificial intelligence won't compete against humans with extinction-level stakes à la Terminator, at least not in the coming years; nor will it transcend us as Kurzweil suggests. The likeliest near-term outcome is that we carve out symbiotic roles for humans and AI, each compensating for the other's shortcomings.

While many people expect any AI they interact with to pass the Turing test, the human brain remains the most advanced machine we know of. Thanks to emotional intelligence, humans can interpret and adapt to changing circumstances in real time, and react differently to the same stimuli. That emotional intelligence sets a benchmark AI will struggle to match.

We are all talking about Amazon Go, Amazon's attempt to bring its website to life in fully automated 3D retail centers. But who will customers talk to when an item is missing or a billing mistake is made? We want human interactions, like a conversation with the neighborhood baker (if you're French like me) or a salesperson's opinion on the fit of a jacket. We also want efficiency, but not to the exclusion of adaptable and sympathetic emotional intelligence.

In some situations, efficiency and safety are preferred over empathy or creativity. For instance, many favor delegating hazardous tasks in factories or oilfields to machines, letting humans handle higher-level strategic work like managing employees or drawing on both the left and right brain to flesh out designs.

The world is becoming a more complex place, and we can welcome more AI to help us navigate it. Consider the accelerating pace of research in many scientific fields, which makes staying an expert even in a well-defined field a real challenge. The issue is not just that your own field is growing, but that it touches on and draws from many other fields that are growing as well. As a result, knowledge bases are expanding exponentially.

A heart surgeon faced with a tough choice may consult a few books or a couple of experts, then identify patterns and weigh different outcomes to make a decision. Instead, they could draw on an AI that assimilates the entire knowledge base and reaches a logical conclusion from a truly holistic standpoint. This does not guarantee the right answer. Machine learning can help the surgeon weigh thousands of similar cases, consider every medical angle, and even cross-reference the patient's family history, covering all this ground in less time than it would take to page through books or call advisors. But the purely logical conclusion should not be the final decision. Doing the right thing is different from choosing the option with the highest probability of success, so the surgeon must still weigh empathy for the family, the patient's quality of life, and many other emotional factors.

For now, machine learning is the most straightforward AI component to implement, and the one most immediately useful for improving the human condition. ML limits the AI's output to assimilating large quantities of data and identifying patterns; it acknowledges that AI cannot evaluate complex, novel, or emotional variables, and leaves multidimensional decision making to humans.
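To make that division of labor concrete, here is a minimal sketch of the pattern-surfacing role described above, using a simple nearest-neighbor comparison. Everything in it is a hypothetical illustration: the case records, the feature names, and the numbers are invented, not real medical data. The model reports an outcome rate among similar past cases; the decision itself stays with the human.

```python
# Minimal sketch: the model surfaces a pattern from (hypothetical)
# historical data, and the human makes the final call.
from math import sqrt

# Invented past cases: (age, ejection_fraction, comorbidities, good_outcome)
past_cases = [
    (54, 0.55, 1, True),
    (71, 0.35, 3, False),
    (63, 0.50, 2, True),
    (68, 0.30, 2, False),
    (58, 0.45, 1, True),
    (75, 0.25, 4, False),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_case_outcome_rate(patient, cases, k=3):
    """Share of good outcomes among the k past cases most similar to the patient."""
    ranked = sorted(cases, key=lambda c: distance(patient, c[:3]))
    return sum(1 for c in ranked[:k] if c[3]) / k

patient = (65, 0.40, 2)
rate = similar_case_outcome_rate(patient, past_cases)
print(f"Good outcomes in similar past cases: {rate:.0%}")
# The model stops here: weighing empathy, quality of life, and the
# family's wishes remains the surgeon's job.
```

A real system would normalize feature scales and draw on far richer data, but the boundary is the point: the code produces a pattern, not a decision.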

As researchers and futurists struggle to bring true AI to the masses, the transition will be gradual. What interests me is whether a rapid transition could trigger a generational clash.

Just as with the pre-Internet and post-Internet generations, will we see pre-AI and post-AI ones? If so, as with many technologies, the last generation to fear it may raise the first generation to embrace it.

Author: Isabelle Guis