Can AI be fair? | The Star


Teach tomorrow’s leaders how to create safe technology that will work for humanity

WELCOME to the future, where artificial intelligence (AI) systems augment, automate or replace human decision-making. Imagine applying for a bank loan via an online system; you enter all relevant information and almost instantly the system notifies you that you are not eligible for a loan.

Coincidentally, you know that a friend with a very similar profile to yours had their loan approved by the same system.

Let’s look at a second scenario: you decide to look for a new job and send your CV to an online hiring system, which immediately tells you that you are not the right fit. A peer whom you believe is less qualified has better luck.

The questions that arise in your mind are: have I been treated fairly? How can I be sure that the AI system did not discriminate against me?

Human decision-making can sometimes be perceived as unfair, but shouldn’t a computer without human intervention produce fair decisions?

AI is about building machines – hardware or software – that can perform tasks that normally require human intelligence. AI has grown rapidly in recent years due to the proliferation of computing power.

Today, AI is deployed for critical applications, from employment to disease diagnosis. Almost all AI applications we see today are some form of narrow AI: systems designed for a specific task. For example, a system designed to drive an autonomous vehicle will not be able to play a game of chess. The holy grail of AI research is to create artificial general intelligence. Such systems would be self-aware, conscious and sentient. Think of the character Data in the Star Trek: The Next Generation series.

While artificial general intelligence has yet to arrive, significant progress has been made in an area of AI called machine learning (ML). The voice-activated personal assistants available in today’s computing devices use some form of ML technology – take Siri, Cortana and Google Assistant, for example.

While we have read about the awesome achievements of AI, such as DeepMind’s AlphaGo software beating the best human players at the game of Go, we have also seen reports raising serious concerns about social fairness and equity. Here are some examples to illustrate these concerns:

• In Machine Bias, an online article dated May 23, 2016, ProPublica reported that software used in the United States to predict future criminals is biased against a particular segment of society.

• The New York Times published the article Facial Recognition Is Accurate, if You’re a White Guy on February 9, 2018. It cited an MIT Media Lab study which found that certain facial recognition software is 99% accurate when the input photo is of a white man, while error rates of up to 35% were observed for darker-skinned women.

• More recently, an article published in the research journal Nature Machine Intelligence dated June 17, 2021 reported that texts generated by the GPT-3 language model tend to associate people of a particular religion with violence.

So it appears that ML suffers from biased decisions. We might have expected machines with no emotional or cultural attachments to do a better job. How did this happen? An ML system must learn from data to perform its task.

Data is generated from human activities and workflows. Our biases and cultural nuances are inherent in that data. When we train AI systems to perform tasks the way we do, the biases encoded in the data will influence the models that are built.

Once deployed, the decisions made by these models will likely reflect the bias in the data used for training. We then enter a self-reinforcing loop in which biases are amplified in the new data the systems help generate.
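To make this concrete, here is a minimal, hypothetical sketch – the data, feature names and approval rule are all invented for illustration – showing how a model trained on historically biased loan decisions reproduces that bias in its own predictions:

```python
# A hypothetical illustration: training on biased historical loan decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(5.0, 1.5, n)   # income (in thousands); same distribution for both groups
group = rng.integers(0, 2, n)      # a sensitive attribute, coded 0 or 1

# Historical approvals depended on income AND, unfairly, on group membership.
approved = (income + 1.5 * group + rng.normal(0.0, 0.5, n)) > 5.5

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical incomes now receive very different scores.
print(model.predict_proba([[5.5, 0], [5.5, 1]])[:, 1])
```

The model has learned the historical preference for one group even though the two groups earn the same, and its future decisions would feed that pattern back into the data.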

Making ML systems work with the concept of fairness is difficult for several reasons.

First, the concept of fairness must be represented in a mathematical form that computers can understand.

This is hard because the notion of fairness depends very much on our human value systems. Even when we can translate fairness requirements into mathematical terms, research has shown that some of these requirements are mutually incompatible.
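A small worked example, with made-up numbers, illustrates one such conflict between two widely used fairness definitions:

```python
# Hypothetical base rates: the share of each group that would actually repay a loan.
base_rate = {"Group A": 0.50, "Group B": 0.25}

# Imagine a perfectly accurate model that approves exactly the people who repay.
# Then "equal opportunity" holds: every repayer, in both groups, is approved.
approval_rate = dict(base_rate)
print(approval_rate)   # {'Group A': 0.5, 'Group B': 0.25}

# "Demographic parity" instead demands equal approval rates across the groups.
# With unequal base rates, meeting it means approving some non-repayers in
# Group B or rejecting some repayers in Group A, so the two fairness
# requirements (and perfect accuracy) cannot all be satisfied at once.
```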

One common technique engineers use to make ML systems “fairer” is to debias the data. A relatively simple way to do this is to omit sensitive attributes such as race and gender.
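A minimal sketch of this approach, assuming a pandas DataFrame of applicant records with hypothetical column names:

```python
import pandas as pd

# Hypothetical applicant records.
applicants = pd.DataFrame({
    "income":   [5200, 4800, 6100],
    "postcode": ["50000", "43000", "50480"],
    "gender":   ["F", "M", "F"],
    "race":     ["A", "B", "C"],
})

SENSITIVE = ["gender", "race"]
features = applicants.drop(columns=SENSITIVE)
print(features.columns.tolist())   # ['income', 'postcode']

# Caveat: remaining columns such as postcode can act as proxies for the
# dropped attributes, so omission alone rarely removes the bias.
```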

Another method is to modify the outputs of the ML system so that they align with the fairness objective. Some will see such interventions as a form of affirmative action, which provokes debate.
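One simple form of such post-processing is to apply group-specific decision thresholds to the model’s scores; here is a sketch with purely illustrative numbers:

```python
def approve(score: float, group: str) -> bool:
    """Approve a loan if the model's score clears the group's threshold.

    The thresholds here are hypothetical; in practice they would be chosen
    to equalise approval rates or true-positive rates across groups.
    """
    thresholds = {"Group A": 0.60, "Group B": 0.50}
    return score >= thresholds[group]

print(approve(0.55, "Group A"))   # False
print(approve(0.55, "Group B"))   # True
```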

So, can machines be fair? In a broad sense, we need to humanize AI technology. An excellent starting point is higher education. There is no dearth of technology-focused programs in our universities today.

We must teach our students, the leaders and shapers of tomorrow, to create technology that will work for humanity without harming or sacrificing the values we hold dear.

Achieving this requires greater collaboration between the technical disciplines and the social sciences. We need to break down disciplinary silos.

For example, AI-related courses should be taught alongside social justice concepts; call it trustworthy AI. Let’s look to a future where AI seamlessly and securely augments our human capabilities.

Professor Ho Chin Kuan is the Vice Chancellor of the Asia-Pacific University of Technology and Innovation (APU). He is also a member of the Overseas Chinese Development Research Center of the Yangtze Delta Region Institute of Tsinghua University, China. As a passionate educator and researcher, his interests include data science, artificial intelligence, machine learning, and complex systems. Professor Ho works with leading educators to co-build the future of EdTech. The opinions expressed here are those of the author.
