More than 20 years after the release of the first "Matrix" film, people are still divided about what AI can and cannot do. Some see AI as the last hope to save the planet and humankind; others envision our species as doomed, at the mercy of machines, à la The Matrix. While we simply don't know which prediction, if any, will come true, there is much confusion about how AI works and for which tasks we should, or should not, use it. In two recent publications, in the Academy of Management Learning & Education and MIT Sloan Management Review, co-authored by Frank den Hond and Dirk Lindebaum, we tackle this problem.
What’s behind the smokescreen?
In current scientific as well as practitioner articles and news items, AI is being smoke-screened. The term is used, and abused, for all kinds of things, alluding to algorithms, calculating computers, big data, digital networks, and all sorts of vaguely related phenomena and technologies. Yet not everything is AI. Essentially, the concept refers to abilities that were previously the privilege of humans, including cognition and sensing. AI typically relies on algorithms, which are simply sets of instructions or rules. In that sense, a cookbook is nothing but a collection of algorithms, each specifying the steps for preparing and combining ingredients. Since the 1960s, we have witnessed several waves of AI development: the technology has evolved from symbolic representations of knowledge, via statistical crunching of data, to what is now called machine learning. Yet the dream of an "artificial general intelligence," in which machines are autonomous as in The Matrix, has not come true.
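The cookbook analogy can be made concrete. The sketch below, with an invented recipe purely for illustration, shows that an algorithm is nothing more than an explicit, ordered set of rules that a machine follows mechanically:

```python
# Toy illustration: a "recipe" written as an algorithm. The steps and
# thresholds are invented for this example; the point is only that an
# algorithm is a fixed, ordered set of rules applied without judgment.

def brew_tea(water_temp_c: float, steep_minutes: float) -> str:
    """Apply fixed rules, in order, and return the outcome."""
    if water_temp_c < 90:
        return "water too cold: reheat"      # rule 1: check temperature
    if steep_minutes < 3:
        return "under-steeped: wait longer"  # rule 2: check steeping time
    return "tea ready"                       # all rules satisfied

print(brew_tea(95, 4))  # -> tea ready
```

Like any cookbook recipe, the machine executes each rule literally; nothing outside the listed instructions enters the decision.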
Human judgment to the rescue
What has come true, however, is the creeping dominance of so-called AI systems in all dimensions of our lives. Machine-learning technology controls our food supply chains, plans travel, steers airplanes, drives cars, sets hotel prices, nudges us into spending hours on Instagram, screens job applicants' CVs, informs parole decisions, grades exams, discharges patients from intensive care units, determines the quality ratings of our schools and organizations, and so forth; the list is endless. Almost without noticing, we have happily handed over many decisions to machines that rely on calculation to make them. Oftentimes this works well: a machine is better at sorting and calculating, especially where large amounts of data are involved.
But as soon as morality and ethics are involved, machines cannot compete: they are not built to take into account morality, judgment, or the context and history of human values when making decisions. Machines search for the optimal decision; but in real life, there is no single optimal solution to many of the decisions we encounter day to day. Moreover, the majority of our decisions depend on where we are, who we are, and what our environment allows us to decide. Any decision procedure that disregards context and history and looks only for an optimal solution will never do justice to the richness of human life.
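What this "search for the optimal" means can be shown in a few lines. In the hypothetical sketch below (the options and scores are invented), the machine simply picks whichever option maximizes a single numeric score; everything the score leaves out, such as context, history, and values, plays no role:

```python
# Hypothetical sketch of algorithmic "reckoning": reduce each option to one
# number, then pick the maximum. The options and scores are invented.
candidates = {"option_a": 0.91, "option_b": 0.89, "option_c": 0.40}

# The machine's whole "decision": take the argmax of the scores.
best = max(candidates, key=candidates.get)
print(best)  # -> option_a
```

Note that option_b loses by a hair on a number that may encode none of what actually matters in the situation; the calculation is indifferent to that.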
What comes next?
If we go on as we have for the past decades, developing ever more advanced algorithms and AI just because we can, we will soon face situations that are extremely difficult or even impossible to reverse. For example, we already know that it will soon be impossible to detect a deepfake video, meaning that we won't be able to tell whether a human or a machine created it. Instead of continuing on our path like lemmings, we should carefully think about when, and especially why, to use AI, and when and why not.
In our two articles, we observe that our decision-making is already intertwined with that of machines. This leads us to a call for inaction: we should stop replacing human judgment with algorithmic "reckoning," or calculating. We also issue a call to action: in both practice and business education, we need to reconsider which decisions are fit to be delegated to machines and which will have to be made by human beings, in order to safeguard our human values. We agree with Neo, who in the first Matrix movie says, "I don't like the idea that I'm not in control of my life." We should take back control and use AI technology for what it was built to do.