Photo: franki chamak for Unsplash

A Chance to Get it Right: Embracing Automated Decision Making

Diminishing Bias with ADM

Melissa Maldonado
Towards Data Science
3 min read · Apr 15, 2019


It is with a sort of helpless resignation that we have spent so many decades bearing witness as imbalances of power, wealth, race, and gender have skewed access to our most fundamental rights: freedom, liberty, the pursuit of happiness, jobs, and even a roof over our heads. And with economic disparity, racial profiling, and gender discrimination so deeply entrenched, we often forget to see them for what they are: the result of systematically biased decision-making stemming from centuries of inequality, racism, and misogyny.

So… ages of faulty data accumulated in the ‘data sets’ that have trained our human thinking.

This past week at the AI Summit in Berlin, ethics was a hot topic. And now that automated decision-making (ADM), or at least a partial awareness of it, is omnipresent, concern about biased data and discriminatory decisions is rampant.

It’s about freaking time!

Politicians are questioning the fairness of predictive policing. Ethics institutions are demanding accountability for hiring algorithms. Data entered into training sets for calculating credit scores is being dissected for traces of prejudice.

Essentially, now that machines are making the bad decisions, we’re up in arms.

The end game here — impartial decisions for the benefit of all — is the noblest of causes. But it would seem as if our expectations for ADM exceed the expectations we have of ourselves and our own human ability, or rather lack thereof, to make unbiased decisions.

There is obviously nothing wrong with that! No one wants machines to perpetuate our human failings. Especially when double-, triple-, and quadruple-questioning the accuracy and fairness of data and decisions could mean the difference between correct and incorrect medical diagnoses, false and justified imprisonment, and the best and worst pair of jeans for our specific body shapes. Fine… scratch that last one.

And, if our high hopes for flawless algorithms are the result of a newfound unwillingness to turn a blind eye to false imprisonment, hiring discrimination, favoritism, and nepotism, even better! Healthy skepticism is a welcome form of criticism and policing if it will indeed prevent us from straying from the righteous path of non-discriminatory (automated) decisions.

But lately, the often irrational anxiety about ADM reeks of a guileless belief in Hollywood harbingers of artificial superintelligence and the singularity. The increasingly loud, uninformed, and incendiary rhetoric against ADM seems to be spurred on by the fear of relinquishing our most unique and defining human capacities, the abilities to think and judge, to machines. This is slightly absurd given our relatively poor track record when it comes to exercising exactly those skills…

If you take away only one thing from this article, please let it be this: any biased and discriminatory automated decisions made by algorithms are direct reflections of our own poor decision-making abilities.

Don’t shoot the messenger!

There are countless institutions, foundations, and working papers addressing the topic of ethical AI and ADM. They expound on the importance of accountability, traceability, redress, and overall fairness. They clarify how diverse teams and expansive (error-free) data sets will help us arrive at impartial decisions. And that makes perfect sense! The more points of view we have, the more representative the data, the less likely a judgment or assessment is to be unfair. Automated decisions are of course only as flawed as we are.
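To make that last point a little more concrete, here is a minimal, purely illustrative sketch (the data, feature names, and numbers are all hypothetical, not drawn from any real system) of how a model trained on historically biased hiring decisions simply reproduces that bias, and how a cleaner, more representative training set softens it:

```python
# Purely hypothetical illustration: a classifier trained on biased historical
# hiring decisions reproduces that bias; retraining on cleaner data reduces it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: one protected attribute (group) and one skill score.
group = rng.integers(0, 2, size=n)   # 0 or 1, e.g. a gender proxy
skill = rng.normal(0, 1, size=n)

# Historical "hired" labels: skill matters, but group 1 was penalized by
# past human decision-makers -- the bias baked into the training data.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

def hire_rate(m, g):
    """Average predicted hire probability for a group, holding skill fixed."""
    test = np.column_stack([np.full(1000, g), np.zeros(1000)])
    return m.predict_proba(test)[:, 1].mean()

print("Trained on biased history:")
print("  group 0:", round(hire_rate(model, 0), 2))
print("  group 1:", round(hire_rate(model, 1), 2))  # noticeably lower

# Crude mitigation sketch: drop the protected attribute and relabel on
# skill alone (a stand-in for a more representative, error-free data set).
fair_labels = (skill + rng.normal(0, 0.5, size=n)) > 0
fair_model = LogisticRegression().fit(skill.reshape(-1, 1), fair_labels)
print("Trained on rebalanced data:",
      round(fair_model.predict_proba(np.zeros((1, 1)))[:, 1].mean(), 2))
```

The point of the sketch is not the mitigation step itself, which is deliberately crude, but the mechanism: the model has no prejudice of its own; it faithfully learns whatever prejudice the historical labels contain.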

So rather than insist that they are the problem, it might behoove us to acknowledge that we are. Owning up to our historically unreliable decision-making performance might be the only way to get things right with automated decisions. History repeats itself. At least in the analog world. Not so in the digital world. Machines have a distinct advantage over us in that they are not doomed to keep repeating mistakes. They learn and retain better than we will ever be able to.

And if trained correctly, automated decision-making might just save us from our inherently biased selves!

To follow the discussion on ethical ADM and machine learning, check out the World Economic Forum (WEF) white paper How to Prevent Discriminatory Outcomes in Machine Learning.
