Carnegie Mellon University has been studying AI for some 65 years, and its researchers have long recognized the technology’s ability to perform complex tasks, such as making decisions after analyzing vast amounts of data. They have also found, however, that AI can exhibit “bias” in the decisions it makes. Some of these biases can be justified by business requirements, but in some social contexts they are not.
For instance, when a company receives hundreds of job applications, who is right for the positions? Or, if a prisoner is released early, will he or she reoffend? With its decision-making ability, AI can answer such questions by analyzing the available data. But how do we manage the biases embedded in the data sets that AI relies on?
Sanghamitra Dutta, a doctoral candidate in electrical and computer engineering (ECE) at Carnegie Mellon University, says, “AI decisions are tailored to the data that is available around us, and there have always been biases in data, with regards to race, gender, nationality, and other protected attributes. When AI makes decisions, it inherently acquires or reinforces those biases.”
“For instance, zip codes have been found to propagate racial bias. Similarly, an automated hiring tool might learn to downgrade women’s resumes if they contain phrases like ‘women’s rugby team,’” says Dutta. To address this, a large body of research has emerged over the past decade focusing on fairness in machine learning and on removing bias from AI models.
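The proxy effect Dutta describes is easy to reproduce in a toy setting. The sketch below, which uses synthetic data and hypothetical feature names rather than anything from the CMU study, trains a classifier that never sees the protected attribute, yet still produces different selection rates across groups because a correlated proxy (a coarse zip-code region) is among its features.

```python
# Toy illustration with synthetic data (not from the CMU study): dropping the
# protected attribute does not remove bias when a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., race); never shown to the model.
group = rng.integers(0, 2, n)

# Zip-code region correlates strongly with group (residential segregation).
zip_region = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical hiring labels that were themselves skewed by region.
skill = rng.normal(0, 1, n)
label = (skill + 0.8 * zip_region + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Train only on "neutral" features: skill and zip-code region.
X = np.column_stack([skill, zip_region])
pred = LogisticRegression().fit(X, label).predict(X)

# Selection rates still differ by group, even though group was never a feature.
for g in (0, 1):
    print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")
```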
“However, some biases in AI might need to be exempted to satisfy critical business requirements,” says Pulkit Grover, a professor in ECE who is working with Dutta to understand how to apply AI to fairly screen job applicants, among other applications.
“At first, it may seem strange, even politically incorrect, to say that some biases are okay, but there are situations where common sense dictates that allowing some bias might be acceptable. For instance, firefighters need to lift victims and carry them out of burning buildings. The ability to lift weight is a critical job requirement,” says Grover.
Grover notes that if we are looking for someone who can lift heavy weights, men will usually be the first choice, and that is a bias. “This is an example where you may have bias, but it is explainable by a safety-critical business necessity,” says Grover.
“The question then becomes how do you check if an AI tool is giving a recommendation that is biased purely due to business necessities and not other reasons.”
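One rough way to probe Grover’s question is to compare selection rates across groups before and after conditioning on the business-critical feature: disparity that disappears within strata of that feature is at least consistent with a business-necessity explanation, while disparity that remains is not. The sketch below is a generic diagnostic with hypothetical column names, not the measure developed by the CMU team.

```python
# Generic conditional-disparity check (hypothetical column names; not the
# CMU team's measure). Compares group selection rates overall and within
# strata of a business-critical feature.
import pandas as pd

def disparity(df, group_col, decision_col):
    """Gap in selection rates between the most- and least-favored groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates.max() - rates.min()

def conditional_disparity(df, group_col, decision_col, critical_col, bins=4):
    """Average within-stratum gap after binning the business-critical feature."""
    strata = pd.qcut(df[critical_col], q=bins, duplicates="drop")
    gaps = [disparity(part, group_col, decision_col)
            for _, part in df.groupby(strata, observed=True)
            if part[group_col].nunique() > 1]
    return sum(gaps) / len(gaps) if gaps else 0.0

# Hypothetical usage with an applicant table containing these columns:
# df = pd.read_csv("applicants.csv")   # gender, hired, lift_test_score
# print("raw disparity:        ", disparity(df, "gender", "hired"))
# print("conditional disparity:", conditional_disparity(df, "gender", "hired", "lift_test_score"))
```

A large raw disparity paired with a near-zero conditional disparity suggests the gap is carried by the business-critical feature; a gap that survives conditioning points to bias that the stated necessity cannot explain.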
Undeniably, since AI entered our lives it has done much to advance society, and it has proven remarkably good at identifying patterns in data. But bias in AI is a serious concern, because its decisions can deeply affect people’s lives; AI should therefore be able to explain and defend its results. With their novel measure, the team can train AI models to eliminate biases tied to certain social contexts while keeping those biases that are justified by business necessity.
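The article does not detail how the team’s measure is built into training, so the following is only a sketch of the general idea under an assumed decorrelation-style penalty: the model is discouraged from letting its score track the protected attribute beyond what the business-critical feature explains. All tensor names are hypothetical.

```python
# Hedged sketch (not the CMU team's measure): fairness-regularized training
# that penalizes only the "non-exempt" part of the score's dependence on the
# protected attribute, i.e., the part not explained by the critical feature.
import torch
import torch.nn.functional as F

def fit(X, y, a, critical, lam=5.0, epochs=500, lr=0.1):
    """X: float feature matrix; y: float 0/1 labels; a: protected attribute;
    critical: business-critical feature (all 1-D float tensors except X)."""
    w = torch.zeros(X.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)

    # Residual of the protected attribute after regressing out the critical
    # feature: dependence on this residual is treated as non-exempt bias.
    Z = torch.stack([torch.ones_like(critical), critical], dim=1)
    coef = torch.linalg.lstsq(Z, a.unsqueeze(1)).solution
    a_resid = a - (Z @ coef).squeeze(1)

    for _ in range(epochs):
        opt.zero_grad()
        score = X @ w + b
        loss = F.binary_cross_entropy_with_logits(score, y)
        # Penalize covariance between the score and the non-exempt residual.
        penalty = ((score - score.mean()) * a_resid).mean() ** 2
        (loss + lam * penalty).backward()
        opt.step()
    return w.detach(), b.detach()
```

Because the penalty only sees the residual of the protected attribute after the critical feature is regressed out, score differences that the critical feature explains are left alone; the team’s actual measure would draw that exempt/non-exempt line more rigorously.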
Dutta also notes that, despite some technical challenges, the measure has demonstrated its effectiveness. One question remains for the team to consider, however: how can their model automatically determine which features are business critical? “Defining the critical features for a particular application is not a mere math problem, which is why computer scientists and social scientists need to collaborate to expand the role of AI in ethical employment practices,” Dutta explains.
In addition to Dutta and Grover, the research team consists of Anupam Datta, professor of ECE; Piotr Mardziel, systems scientist in ECE; and Ph.D. candidate Praveen Venkatesh.
Retrieved from: https://engineering.cmu.edu/news-events/news/2020/04/14-necessary-bias-ai.html