Current challenges in AI

On September 25, 2018, Ikram Chraibi Kaadoud, AI researcher and consultant at onepoint’s office in Bordeaux, took part in the Bordeaux ELLE Active Forum and seized the opportunity to discuss artificial intelligence and its challenges with us, as well as the place of women in the field.

Movies have been shaping our vision of AI, short for artificial intelligence, for several decades: Terminator, Skynet, Wall-E, Jarvis… But is this grim future really lurking around the corner? The scientific community is unanimous: no, we’re still very far from the completely autonomous AI systems that might one day enslave us.

By deconstructing these myths together, we can shift our focus from an uncertain future to current challenges facing AI.

Demystifying AI

Let’s start from the beginning: according to one of its forefathers, Marvin Lee Minsky, AI is “the building of computer programs which perform tasks which are, for the moment, performed in a more satisfactory way by humans because they require high level mental processes such as: perceptual learning, memory organization and critical reasoning.” In other words, it’s a science that imagines and designs IT tools that mimic human cognitive processes: intelligent machines capable of reasoning, analyzing and interacting, just like us.

This definition dates back to 1956. So where do we stand now, in 2018?

After the so-called AI winter, a period of dashed hopes and reduced funding, AI has regained momentum over the past few years. Why? On the one hand, processors, the computing units at the heart of our machines, are becoming increasingly powerful; on the other hand, large volumes of data, the raw material that neural networks need in order to learn, are now readily available. These advances are behind the rise of Deep Learning: computer programs whose predictions can be more accurate than those made by human beings.

Specialized vs. general AI, or why Skynet isn’t around the corner

If there is one blooming area within AI, it is machine learning.

Machine learning involves creating algorithms (such as neural networks) that are capable of learning and inferring rules when exposed to a number of sample data sets. This way, when they analyze a new scenario, the algorithms can compare it to what they have already learned in order to make a prediction.

For instance, we can use AI to help choose a medical treatment, particularly when dealing with cancer cases. By learning from data gathered from former cancer patients, it’s possible to forecast the probability that a patient with the same type of cancer will respond favorably to a specific treatment. With the proper data and algorithms, we can build AI systems whose predictions are remarkably accurate.
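As an illustration, here is a minimal sketch of this learn-then-predict cycle in Python with scikit-learn; the patients, features and labels below are entirely invented.

```python
# A minimal sketch (hypothetical data) of the learn-then-predict cycle
# described above, using a simple logistic regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is a former patient
# [age, tumor_size_mm, biomarker_level]; label 1 = responded to treatment.
X_train = np.array([
    [54, 12.0, 0.8],
    [61, 30.5, 0.3],
    [47,  8.2, 0.9],
    [70, 25.1, 0.2],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)  # the "learning" phase

# A new patient the model has never seen:
new_patient = np.array([[58, 15.0, 0.7]])
proba = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of a favorable response: {proba:.2f}")
```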

Paradoxically, that’s precisely one of the weaknesses of such systems: an AI designed specifically for medical diagnosis will do that and only that.

As human beings, we are able to multitask: we have a number of cognitive functions that enable us to simultaneously talk, listen and devise a strategy. Everything happens instantly within our brain at any given moment. For instance, you’re reading these lines and at the same time you’re recognizing characters and words, analyzing, processing and endowing them with meaning. While this set of functions is available to each and every one of us, it’s really hard for a machine to replicate it.

Some current research focuses on developing artificial general intelligences (capable of mimicking more than one cognitive function), but no applications for the general public have been produced so far.

Transparency and interpretability: a technical challenge

Interpretability is AI’s main challenge: if we can interpret what’s happening within an algorithm, we can explain it and justify its decisions. In this scenario, the decision-making process of algorithms, and in turn that of the structures and people who use them, becomes transparent.

Importantly, the General Data Protection Regulation (GDPR) requires transparency when using algorithms. Mounir Mahjoubi, the French Secretary of State in charge of digital technologies, announced at the French National Assembly that “if an algorithm cannot be explained, it cannot be used in public administrations.” In other words, an insurance company, a doctor, a banker or any institution using AI should be capable of explaining not the algorithm per se or the code behind it, but the logic followed by the machines relying on it to reach a conclusion.

Because we created neural networks, the algorithms behind much of today’s machine learning, we know the math that makes them work. However, we’re far from understanding their “reasoning” process: once the algorithm has learned from the data, it produces a result or prediction that is hard to explain.
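To give a sense of what “explaining the logic” can look like, here is a minimal sketch (with invented data) of an interpretable model: unlike a deep neural network, a small decision tree can print the exact rules behind each of its predictions.

```python
# A minimal sketch of an interpretable model, with invented data:
# a small decision tree can display the exact rules it learned.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patients: [age, tumor_size_mm]; label 1 = favorable response.
X = np.array([[54, 12.0], [61, 30.5], [47, 8.2], [70, 25.1], [58, 15.0]])
y = np.array([1, 0, 1, 0, 1])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned decision rules in plain language,
# so the logic behind every prediction can be read and justified.
print(export_text(tree, feature_names=["age", "tumor_size_mm"]))
```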

Would you agree to entrust your medical diagnosis to a black box that only responds “yes” or “no”? Most likely not!

Many research projects and specialized companies have been launched to tackle this problem.

The issue of the transparency and interpretability of AI falls within the scope of individual responsibility in the face of AI. Satya Nadella, CEO of Microsoft, has stressed the need for every developer to take responsibility for the algorithms they code. This call follows the same trend as the petitions against autonomous drones shared and signed by IT students, professors and researchers. Many debates are also being organized to raise awareness of these issues.

In 2017, the European Union voted for the implementation of ethical principles for the use of AI and robots. It also organized the “AI Convention Europe” in Brussels on October 4, 2018, to discuss the latest developments and innovations as well as the disruptions they cause. At a global level, the “AI for Good Global Summit” has taken place yearly since 2017 under the auspices of the UN. Participants gather to discuss the benefits brought about by AI, aiming to improve the quality and sustainability of life on our planet and devising strategies to ensure not only the safe, inclusive and reliable development of these technologies, but also equal access to their benefits.

Equality in AI

In the social sphere, one of the challenges is equality: equal access to AI for everyone. We have already explained that AIs need to learn from data before they can make predictions; data are the pillars of their reasoning. So what if, intentionally or not, we fed an AI data containing only men’s physiological responses to cancer treatments?

If the AI then predicted the outcome of the same treatments for the same type of cancer in women or children, its results would be incorrect, because the machine was never trained on those populations. In statistics, this is called selection bias: the population covered by the data set isn’t representative of the general population.
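Here is a minimal, fully synthetic sketch of selection bias: a model trained on one subpopulation loses accuracy on another subpopulation whose statistics differ. All numbers and group definitions below are invented for illustration.

```python
# A minimal, synthetic demonstration of selection bias: the model is
# trained only on one group, then evaluated on a group it never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """Response depends on a biomarker, but the cutoff differs by group."""
    x = rng.uniform(0, 1, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

X_men, y_men = make_group(500, threshold=0.5)      # training population
X_women, y_women = make_group(500, threshold=0.3)  # unseen population

model = LogisticRegression().fit(X_men, y_men)

print("Accuracy on men:  ", accuracy_score(y_men, model.predict(X_men)))
print("Accuracy on women:", accuracy_score(y_women, model.predict(X_women)))
# The second score is noticeably lower: the training set
# wasn't representative of the population being served.
```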

Google Photos captioning a photo of two African American people as gorillas in 2015 is a concrete example of the impact of this kind of bias. The AI behind the system didn’t have a data set sufficiently diverse and representative of the general population, so it matched the picture to the closest thing it had learned (gorillas, in this case). Google has since apologized for the error and worked to fix it, but this event points to the importance of both the data being fed to our AI systems and the people coding them.

Any system conceived, developed and used only by people from a single socio-professional category puts the rest of society on the sidelines.

Many ongoing research projects seek to limit the impact of such biases in AI. However, this is a complex issue that takes time: we need to act on data collection and analysis processes, discuss what counts as good data for the chosen goal, and find the proper algorithm to reach it.

The place of women in AI

There aren’t many women in science, and even fewer in IT. According to one of the opinions collected from women researchers in a 2017 article published on lemonde.fr, “IT is the only field where the proportion of women, which used to be fairly good, is in sharp decline, whereas in all the other scientific and technical branches it has increased from 5% in 1972 to 26% in 2010.”

As a result, the majority of studies are conducted by men and addressed to panels made up of men.

That is why all of us must make this issue our own and avoid letting our gender determine our career choices. This does not mean that we all need to become researchers or IT developers, but we need to realize that tomorrow’s tools will be built with the contributions of today’s women and men.

There are a number of grassroots initiatives seeking to promote discussion about the place of women in science, such as interviews with women scientists and “ELLE Active” talks. Members of the French association Femmes & Sciences as well as teams of researchers visit primary and secondary schools to explain scientific professions to students in accessible terms. “Girls and Math” (Filles et Math) meetings and high school visits to scientific labs are organized all year round as part of different events, like the Circuit scientifique bordelais, held in October 2018 in Bordeaux in the context of the Fête de la science. And, for an older audience, there’s also Pint of Science. In the end, all these actions aim to introduce people to science through the women and men working in its different branches.