AI and the Black Box Dilemma

November 6, 2018

Artificial intelligence has become a staple of modern life. With increasingly complex programs enabling scarily accurate image recognition and even self-driving cars, the topic has been in the spotlight a lot lately. It is no wonder that the life sciences community has also taken notice. This year, even BioWin Day is devoted to the subject: “Artificial Intelligence for Health: Between Dream and Reality”. BioVox interviewed one of the event’s keynote speakers, Hugues Bersini, about some of the trickier questions associated with the rise of AI.

By Amy LeBlanc

Hugues Bersini is a pioneer in exploiting biological metaphors, such as the immune system, for engineering and cognitive sciences. He currently heads the IRIDIA laboratory at the Université Libre de Bruxelles (ULB) together with Marco Dorigo. Having started working with artificial intelligence (AI) as early as the 1980s, Bersini is one of the most knowledgeable experts in the field. He became fascinated by AI, he says, while trying to understand how accidents like Three Mile Island and Chernobyl could occur:

“Both disasters were the direct results of human mistakes; I wanted to know how we could prevent these types of catastrophes by helping human operators make better decisions. So, I started working with AI and cognitive modelling.”

Why the excitement?

I believe that beating the human threshold has been a very symbolic milestone for AI development. For the first time, computers are more consistently accurate than humans. – Hugues Bersini, ULB

Although AI has been around since the 1950s, the field has seen a remarkable resurgence of excitement in the past few years. We asked Bersini why he thinks AI has become such a hot topic again:

“I believe three things are responsible for this renewed focus on AI: Firstly, the emergence of GAFA (Google, Apple, Facebook and Amazon) has established AI software as an omnipresent and influential part of our everyday lives. Secondly, computers have finally started outperforming people on certain tasks. Thirdly, and perhaps most importantly, we’ve seen a bifurcation of conscious and subconscious AI systems.”

Beating the human brain

In reference to his second point, Bersini offered several examples from the past five years of AI outdoing its human competition. One of the most striking is AlphaGo: a computer program developed by DeepMind to play the ancient Chinese board game Go. In 2016, the program beat the professional Go champion Lee Sedol. Though it may seem a mere quirky feat, this example illustrates just how powerful deep learning and neural networks have become. Bersini explains:

“AlphaGo learned to play Go by playing against itself. There was no need for any human expertise and no need for an actual understanding of the game. The only thing it needed to know was the definition of a win or a loss. Then, through random trial and error, AlphaGo taught itself the best way of playing the game.”
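Bersini’s description corresponds to reinforcement learning through self-play. As a rough sketch only (AlphaGo itself combines deep neural networks with Monte Carlo tree search, neither of which appears here): the toy Python program below learns tic-tac-toe rather than Go, and every name and parameter in it is an assumption made for illustration. The one thing it is given, as in Bersini’s account, is the definition of a win or a loss.

```python
# Toy self-play learner: a tabular stand-in for the idea Bersini
# describes, NOT AlphaGo's actual method. The only feedback is the
# final result of each game: win (+1), loss (-1) or draw (0).
import random
from collections import defaultdict

Q = defaultdict(float)      # learned value of each (state, player, move)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate and learning rate (assumed)

def moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board, player):
    # Epsilon-greedy: usually take the best-valued move, sometimes explore.
    legal = moves(board)
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, player, m)])

def play_one_game():
    board, player, history = " " * 9, "X", []
    while True:
        m = choose(board, player)
        history.append((board, player, m))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):
            # Credit every move of the game with the final outcome.
            for state, p, move in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, p, move)] += ALPHA * (reward - Q[(state, p, move)])
            return
        player = "O" if player == "X" else "X"

for _ in range(20_000):     # random trial and error, game after game
    play_one_game()
```

No human expertise or game strategy is encoded anywhere above; the program improves only by attributing wins and losses back to the moves that produced them.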

Interestingly, when people questioned Sedol after he was defeated by AlphaGo, he said he couldn’t understand the way the machine played. It was a “new way of playing Go” that he had never seen before.

A subconscious computer can’t explain what it’s doing: it’s a black box, the process is hidden. – Hugues Bersini, ULB

Although Go is just a game, this ability to outdo humans has started occurring in many other sectors too. From autonomous cars to medical imaging, the fact that AI is consistently outperforming people is a really big deal. Bersini elaborates:

“Even though the AI performance is only 1-2% better, those numbers can make a huge difference depending on the task. Take, for example, a cancer diagnosis: if AI is outperforming a doctor by even 1-2%, that may mean the difference between life and death for a patient. Because of this, I believe that beating the human threshold has been a very symbolic milestone for AI development. For the first time, computers are more consistently accurate than humans.”

A fork in the artificial road

As humans, we deal with some problems through conscious cognitive processes: sequential, logical thinking and problem solving that we are actively aware of. We also have an underlying subconscious system, in which automatic information processing guides rote tasks and makes instinctive decisions for us. When Bersini speaks of artificial intelligence, he likes to classify AI programs into these same two categories: conscious and subconscious AI.

Conscious AI systems are those in which explicitly programmed rules and equations produce a machine that can perform specific tasks. Subconscious systems are those AI programs created when neural networks, processing enormous amounts of data, teach themselves to perform a task such as image recognition, playing Go or even driving a car.
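To make the distinction concrete, here is a minimal, invented contrast in Python (the spam-filter scenario and every name in it are assumptions for illustration, and “conscious”/“subconscious” are Bersini’s metaphors rather than standard terminology): the first filter follows rules a human wrote and can be read line by line; the second induces its behaviour from labelled examples.

```python
# "Conscious" system: every decision step is explicit and inspectable.
def conscious_spam_filter(subject: str) -> bool:
    s = subject.lower()
    return "winner" in s or "free money" in s

# "Subconscious" counterpart: one learned weight per word, trained from
# labelled examples rather than written by hand.
def train_subconscious_filter(examples):
    weights = {}
    for subject, is_spam in examples:
        for word in subject.lower().split():
            # Nudge each word's weight toward the label seen in the data.
            weights[word] = weights.get(word, 0.0) + (1.0 if is_spam else -1.0)
    return lambda subject: sum(
        weights.get(w, 0.0) for w in subject.lower().split()) > 0

data = [("free money inside", True), ("meeting agenda", False),
        ("winner winner", True), ("project update", False)]
learned_filter = train_subconscious_filter(data)
print(learned_filter("free winner"))  # True -- but the "why" is buried in weights
```

The first function can always justify its verdict by pointing to a rule; the second can only point to numbers it accumulated from data.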

People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots. What we show here is that there are no universal rules. – Iyad Rahwan, MIT

One of the issues with subconscious AI programs is that their decision-making process is a closed system. Because these programs are automated and deal with such enormous amounts of data, there is no way to backtrack and figure out why a particular choice was made. This is what has given rise to the “black box” moniker: a subconscious AI program can never be asked to explain itself.
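A small sketch of what that closed system looks like from the inside (the weights below are random stand-ins for a trained network, not any real program): once training is finished, the only thing left to inspect is arrays of numbers.

```python
import numpy as np

# Stand-in "learned" weights; in a real system these would come from
# training, and there would be millions of them rather than a few dozen.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=8)

def decide(x):
    hidden = np.tanh(x @ W1)    # intermediate activations: just numbers,
    return float(hidden @ W2)   # with no human-readable rationale attached

score = decide(np.array([0.2, -1.3, 0.7, 0.0]))
print(score)  # a single score comes out; asking "why?" only yields W1 and W2
```

There is no stored chain of reasoning to backtrack through: the decision is the arithmetic, and the arithmetic is opaque.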

The Black Box Dilemma

The black box isn’t much of a problem when AI is given a simple task such as a game of Go. But when the task is something like driving a car, the stakes suddenly become much higher. If an autonomous car hits a pedestrian crossing the street, we are suddenly faced with an important question: did the car make the right choice?

Unfortunately, in this scenario it may be impossible to determine whether the autonomous car made a correct choice. The decision could have been made due to a flaw in the data, a fault in the machine, or it may be that hitting the pedestrian was legitimately the best option given all other possible outcomes. Bersini foresees some serious moral questions arising from these sorts of scenarios:

“The issue is one of accountability. A subconscious computer can’t explain what it’s doing: it’s a black box, the process is hidden. So, can we still use this technology: a machine that cannot justify its own decisions? To me, this is one of the most challenging questions we are facing in AI today.”

Sexist bots and selfish cars

To complicate AI matters even further, recent real-life examples have presented us with even more conundrums. Just last month, news broke that Amazon had scrapped an AI recruitment program, designed to review resumes, because the software had inadvertently started making sexist decisions. Although the resumes listed no names or gender, the software picked up on small in-text clues indicative of an applicant’s gender and downgraded women’s CVs. Of course, the bias only arose because a hiring bias was already in place. As this example clearly demonstrates: AI programs are only as good as the data we feed them.
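The failure mode is easy to reproduce in miniature (Amazon’s actual tool and data are not public, so the toy CVs and model below are entirely invented): gender is never an input field, yet a proxy word in biased historical decisions ends up carrying the bias.

```python
# Invented historical hiring data: no gender field, but a proxy word
# ("women's") correlates with past rejections.
historical = [
    ("led software team", True), ("captain of chess club", True),
    ("women's coding society", False), ("women's chess champion", False),
]

weights = {}
for cv, hired in historical:    # learn from past (biased) decisions
    for word in cv.split():
        weights[word] = weights.get(word, 0.0) + (1.0 if hired else -1.0)

print(weights["women's"])  # -2.0: the model has absorbed the old bias
```

Nothing in the code mentions gender; the bias lives entirely in the data the program was given.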

What AI is still lacking is embodiment: a connection between the data processing and the way the computer then interacts with the world. – Hugues Bersini, ULB

Furthermore, a recent Nature publication showed that the moral decisions people make while driving are not universally consistent. The study, based on a survey of over 2.3 million people from around the world, found that the moral principles guiding a driver’s decisions vary with culture. For example, when made to choose between swerving towards a child or an elderly citizen crossing the road, people from “Western” regions, such as Europe or North America, were far more likely to spare the child, while people from “Eastern” countries, such as China and Japan, showed little preference between the two pedestrians.

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots. What we show here is that there are no universal rules,” said Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study.

Becoming human

Although there are many difficult questions yet to be tackled, Bersini has full faith that we are only just seeing the first indications of what AI is capable of doing. He believes AI is a useful tool that just needs to be developed further:

“What AI is still lacking is embodiment: a connection between the data processing and the way the computer then interacts with the world. What we currently have are interesting subsystems that work well for specific tasks, but they cannot currently work in an adaptive way with the environment. Unlike a human, they are not multidimensional. However, I see no reason why this shouldn’t become possible.

I believe AI will continue to outperform humans and become an invaluable tool for difficult decision-making in the future.”

