AI is Flawed — Here’s Why

Fairness in AI and AI bias

Arun
Towards Data Science



Artificial Intelligence has become an integral part of everyone's lives. From simple tasks like YouTube recommendations to complex, life-saving tasks like generating drugs to cure illnesses, it has become omnipresent. It influences our lives in more ways than we realize.

But is AI fair? No, it definitely isn't.

It is hard to define fair AI. Here is the best explanation I could come up with: a given AI model is fair if its outputs are independent of sensitive attributes (e.g., gender, race, sexuality, religious faith, disability) for a task that is already affected by social discrimination.
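
One common way to make this definition concrete is to check for demographic parity: the rate of favorable outcomes should be roughly equal across the groups defined by a sensitive attribute. Here is a minimal sketch in Python; the arrays, the 0/1 group encoding, and the loan scenario are hypothetical, not taken from any specific system.

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive):
        # Difference in positive-prediction rates between the two groups
        # defined by a binary sensitive attribute (1 = privileged, 0 = not).
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_privileged = y_pred[sensitive == 1].mean()
        rate_unprivileged = y_pred[sensitive == 0].mean()
        return rate_privileged - rate_unprivileged

    # Hypothetical predictions: 1 = loan approved, 0 = rejected
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
    sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
    print(demographic_parity_difference(y_pred, sensitive))  # 0.75 - 0.25 = 0.5

A value close to zero suggests the outputs are, at least in this narrow sense, independent of the sensitive attribute; a large gap is a red flag.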

In this blog, I will be writing about AI bias, real-life examples of AI bias, and ways to fix it.

The Problem

AI bias is caused by inherent prejudice in the data used to train the model, and it leads to social discrimination and a lack of equal opportunities.

For instance, let's say I was tasked with creating a model to calculate a person's credit score with location as one of the features. Certain ethnic groups are concentrated in certain locations, so the model would end up racially biased against those groups, hurting their access to credit cards and bank loans.
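
One quick sanity check is to measure how strongly a seemingly neutral feature such as location is associated with a protected attribute, because the model can learn the attribute indirectly through such a proxy. Below is a minimal sketch with pandas; the column names and values are hypothetical.

    import pandas as pd

    # Hypothetical applicant data: zip_code is a model feature,
    # ethnicity is a protected attribute the model never sees directly.
    df = pd.DataFrame({
        "zip_code": ["10001", "10001", "10002", "10002", "10002", "10003"],
        "ethnicity": ["A", "A", "B", "B", "B", "A"],
    })

    # If some zip codes are dominated by one group, location acts as a
    # proxy for ethnicity and can leak that information into the model.
    print(pd.crosstab(df["zip_code"], df["ethnicity"], normalize="index"))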

Biased AI models exacerbate existing social discrimination and pave the way for oppression.

Real-life examples of AI bias

Here are some real-life examples of AI bias:

  1. COMPAS: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is software used by US courts to estimate the probability that a defendant (the person accused of a crime) will become a recidivist (someone who reoffends). Because the training data was heavily biased, the model produced twice as many false positives for recidivism for Black offenders as for white offenders.
  2. Amazon's Hiring: In 2014, Amazon developed an AI recruiting system to streamline its hiring process. It turned out to discriminate against women because the training data came from the previous 10 years of applications, during which most selected applicants were men, reflecting male dominance in the tech industry. Amazon scrapped the system in 2018.
  3. US Healthcare: A widely used US healthcare algorithm assigned lower risk scores to Black patients than to white patients who were equally sick. The model was optimized for healthcare cost rather than health need, and because less money had historically been spent on the care of Black patients, it ranked their health risk lower than that of their white counterparts. This resulted in a lower standard of care for Black patients.
  4. Twitter Image Cropping: In September 2020, Twitter users found that the image-cropping algorithm favored white faces over Black faces. When an image with a different aspect ratio than the preview window is posted, the algorithm crops the image and shows only a portion of it as the preview. In pictures containing both white and Black faces, the preview often showed the white face.
  5. Facebook's Advertisement Algorithm: In 2019, Facebook allowed advertisers to target people based on their race, gender, and religion. As a result, jobs like nursing and secretarial work were advertised mostly to women, while jobs like janitor and taxi driver were targeted at men, especially men of color. The model also learned that real-estate ads had a better click-through rate when shown to white people, so minority users saw fewer real-estate advertisements.

These are just a few common examples of AI bias. There are many more instances of unfair AI practices, whether or not the developer is aware of them.

How Can You Fix It?


The first step towards a fair AI is admitting the problem. AI is imperfect. Data is imperfect. Our algorithms are imperfect. Our technology is imperfect. It is impossible to find a solution when we pretend there is no problem.

Second, ask yourself whether the problem actually requires AI at all.

Don’t be afraid to launch a product without Machine Learning — Google

Some problems should not be decided by data alone. Tasks like estimating the probability of recidivism for a defendant depend more on human judgment and context than on data.

Third, follow responsible AI practices. I have added the points from Google’s responsible AI practices guide below.

Responsible AI practices:

  1. Use a human-centered design approach: Design models with appropriate disclosure built-in and incorporate feedback from the testers before deployment.
  2. Identify multiple metrics to assess training and monitoring: Use several metrics appropriate to the task to understand the tradeoffs between different kinds of errors and experiences. These metrics can include feedback from consumers, false positive and false negative rates, and so on (see the sketch after this list).
  3. Examine your raw data if possible: An AI model reflects the data being used to train the model. If the data is faulty, the model will be faulty as well. Try to have balanced data.
  4. Understand your model's limitations: A model trained to detect correlations does not necessarily support causal inferences. For example, a model might learn that people buying basketball shoes are taller on average, but this does not mean that a user who buys basketball shoes will become taller as a result.
  5. Test: Conduct rigorous unit tests to determine faults in the model.
  6. Continue monitoring and updating your model after deployment: Consider user feedback and update your model based on that regularly after deployment.
  7. Design your model with concrete goals for fairness and inclusion: Engage with experts from ethics and the social sciences to understand and account for various perspectives. Try to make your model as fair as possible.
  8. Use representative datasets to train and test your model: Try to assess the fairness of your data, that is, look for prejudicial or discriminatory correlations between features and labels.
  9. Check for unfair biases: Gather unit-test inputs from a diverse pool of testers. This can help identify which groups of people might be affected by the model.
  10. Analyze performance: Take different metrics into account. An improvement in one metric might hurt another metric’s performance.
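
As an illustration of point 2, here is a minimal sketch that compares false positive and false negative rates across two groups; the labels, predictions, and group names are hypothetical.

    import numpy as np

    def group_error_rates(y_true, y_pred, sensitive, group):
        # False positive rate and false negative rate for one group.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        mask = np.asarray(sensitive) == group
        yt, yp = y_true[mask], y_pred[mask]
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        return fpr, fnr

    # Hypothetical ground truth and predictions for groups "a" and "b"
    y_true = [1, 0, 1, 0, 1, 0, 1, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
    sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
    for g in ("a", "b"):
        print(g, group_error_rates(y_true, y_pred, sensitive, g))

A large gap between the groups on either rate, like the false positive gap COMPAS showed, is a sign the model treats the groups differently.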

Tools for developing a Fair AI

  1. FATE: Fairness, Accountability, Transparency, and Ethics in AI (FATE) is a Microsoft research initiative that offers assessment tools such as visualization dashboards and bias-mitigation algorithms. It is mainly used to compare trade-offs between the fairness and the performance of a system.
  2. AI Fairness 360: AI Fairness 360 is an open-source toolkit from IBM that helps you examine, report, and mitigate discrimination and bias in machine learning models (see the sketch after this list).
  3. ML Fairness Gym: ML Fairness Gym is a tool from Google for exploring the long-term impacts of machine learning systems with respect to AI bias.
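
To give a flavor of how such a toolkit is used, here is a minimal sketch with AI Fairness 360, assuming it is installed with pip install aif360 and following the toolkit's documented usage; the DataFrame, column names, and group encodings are hypothetical.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical data: 'approved' is the label, 'race' the protected attribute
    df = pd.DataFrame({
        "approved": [1, 0, 1, 1, 0, 0, 1, 0],
        "race":     [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = privileged, 0 = unprivileged
        "income":   [60, 40, 75, 55, 45, 30, 50, 35],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["approved"],
        protected_attribute_names=["race"],
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"race": 1}],
        unprivileged_groups=[{"race": 0}],
    )

    # Ratio of favorable-outcome rates between groups (1.0 means parity)
    print(metric.disparate_impact())
    # Difference in favorable-outcome rates (0.0 means parity)
    print(metric.statistical_parity_difference())

The toolkit also ships pre-, in-, and post-processing algorithms (such as reweighing) to mitigate the bias these metrics surface.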

Conclusion

Over the past few years, companies and governments have started taking AI bias seriously. Many companies have developed tools to assess AI fairness and are doing their best to fight AI bias. While AI has huge potential, it is more important now than ever to keep the potential discriminatory dangers of AI systems in mind and to help develop fair AI models.

About Me

I like to write about lesser-discussed topics in AI, such as Federated Learning, Graph Neural Networks, Fairness in AI, Quantum Machine Learning, TinyML, etc. Follow me to stay updated on all my future blogs.

You can find me on Twitter, LinkedIn, and Github.
