Ethics in Data Science or: How I Learned to Start Worrying and Question the Process

A review—and a discussion—of two recent books on data and bias

Ethan Feldman
Towards Data Science

--

So, you’ve created a model. Maybe it’s highly predictive, you’re excited about how well it scores on your target metrics, and you’ve been rigorous in your process throughout. Is your model good or is it doing good?

Will we do good with our new knowledge? — Photo by Gabriele Lasser on Unsplash

While working through the immersive data science program at Metis, I completed a short side presentation on ethics in technology, particularly as it relates to our work in data science. I focused on two books: Weapons of Math Destruction by Cathy O’Neil and Algorithms of Oppression by Safiya Umoja Noble. My intent here is both to briefly summarize these works and to offer a few takeaways I hope to carry into my career in data science.

O’Neil’s work focuses on the real-world implications of models that are opaque, operate at enormous scale, and ultimately damage the people they are applied to. She terms these models “weapons of math destruction” for the way they disproportionately impact the less powerful while presenting their outputs as rarefied and supposedly beyond reproach. Too often, the models that power the data economy claim validity by virtue of their mathematical nature, belying the fallibility of the humans who create them.

Similarly, Noble digs into the history of Google (especially its search results, which are recommendation systems at heart) and demonstrates repeated racial biases that codify the systemic issues of our society into new technology. While the internet and new tech tend to make grand claims about leveling the playing field and building a better world, hindsight often shows it would have been better to move slower and break fewer things along the way. Too often, the things broken in the name of progress and the bottom line are the backs of the people already most disenfranchised by society.

Is your model good or is it doing good?

Let’s return to that model you were working on. I’ll use an example from Weapons of Math Destruction for the discussion: prison sentencing in courtrooms. Maybe you have been tasked with creating a model that reduces racial profiling and estimates the likelihood of recidivism, helping a judge determine the severity of a sentence. This is a model that, on its face, is trying to solve a real problem in our society: prejudicial punishment and unequal sentencing. Maybe the model you create is, as mentioned previously, highly predictive of recidivism and does not use race as a feature.

O’Neil, when asked how to start a conversation about tech ethics with data scientists, points to a case like this. Knowing that most ethically motivated people would never include race as a feature, she asks, “Does your model use zip code?” Are you predicting a person’s chance of recidivism based on features about them as an individual, or based on the circumstances in which they were raised?

These models, meant to be racially blind, relied on statistics such as how old a person was when they first interacted with the police and how many of their friends or neighbors had been convicted of crimes. Perhaps they used zip code as well. Take a moment to think about these features: while potentially correlated with recidivism, they may also be highly correlated with race, socio-economic status, and other factors. As it turned out, these models did a great job of perpetuating the exact system they were intended to fix, now with an added layer of opacity and pseudo-correctness. A model may have a “good score” and still help no one’s wellbeing.
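One practical way to surface this kind of leakage is a proxy probe: hold the protected attribute out of your model, then check whether your “neutral” features can predict it anyway. Below is a minimal sketch of the idea; the file and column names (defendants.csv, zip_code_median_income, and so on) are hypothetical stand-ins, not a reference to any real dataset.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset of defendants; all column names are illustrative.
df = pd.read_csv("defendants.csv")
proxy_candidates = [
    "zip_code_median_income",
    "age_at_first_police_contact",
    "n_acquaintances_convicted",
]

# If the "race-blind" features can predict race, they are acting as proxies.
X = df[proxy_candidates]
y = (df["race"] == "Black").astype(int)  # protected attribute, excluded from the real model

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])

print(f"Proxy-probe AUC: {auc:.2f}")  # ~0.5 = little leakage; near 1.0 = strong proxies
```

If the probe scores well above chance, a model built on those same features can effectively reconstruct race without ever being handed it.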

These books go into a great deal more detail in their areas of focus, but I want to briefly touch on a few general takeaways. Reading Algorithms of Oppression, three key ideas I walked away with were:

  • Digital Redlining: Digital decisions reinforce racial profiling
  • AI will be a major human rights issue in the 21st century
  • Decision making tools and algorithms mask and deepen inequality

There are systemic and structural features of our societies that were created to uphold a certain power dynamic. Every new piece of technology will work either for or against that system; there is no neutral option. Whenever possible, think critically about the role a particular model or tool will play, what features it takes in, and whom it will impact. We will all make mistakes, but keeping a critical eye and an open mind is imperative.

Similarly, O’Neil leaves the reader with a few important takeaways at the end of Weapons of Math Destruction, namely:

  • Data ethics may conflict with a company’s focus on its bottom line
  • Many companies are literally built on these problematic models
  • Those hurt initially are by and large the poor and those with less power
  • Fixing them requires putting fairness ahead of profit
  • Tech and data are not omnipotent

It will be difficult at times to extricate the social need to be critical from the capitalistic need for profit within a business. Especially where a particular model is already in place, moving to a less destructive model that shows less profit will not win over many people. Communicate your models (their inputs, outputs, and methodology) as clearly as possible, especially any assumptions you may have made. Remember that these issues cannot be solved with technology alone; they are pervasive in society. But that will never excuse ignoring the role we can play in bringing positive change.

My own takeaways, coming early in my data science career, center on ideas that are already central to data science pedagogy.

The first is the idiom “garbage in, garbage out.” It applies to ethics because the models I build will rely on the data I choose, find, and engineer for them to train on. If the features used rely on proxies or encode systemic biases, I can expect my outputs to be problematic as well. What does that mean for me as I gather data for future projects? What kinds of questions can and should I ask at the outset of a project?
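One concrete starting point is a simple pre-training audit: before fitting anything, look at who is represented in the data and how the label is distributed across groups. This is a minimal sketch under assumed data; the file and column names (training_data.csv, neighborhood, label) are hypothetical.

```python
import pandas as pd

# Hypothetical training data; column names are illustrative only.
df = pd.read_csv("training_data.csv")

# Who is represented, and how often does the label fire for each group?
# Skewed representation or base rates here will be learned and reproduced.
audit = df.groupby("neighborhood").agg(
    n_rows=("label", "size"),
    positive_rate=("label", "mean"),
)
print(audit.sort_values("positive_rate", ascending=False))
```

Nothing here fixes a biased dataset, but it turns “garbage in, garbage out” into a question I can answer before training rather than after deployment.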

Another is to ask myself, from the outset, what it will mean if my model is “perfectly” good. What would that look like in the product, what impacts can I foresee it having, and who might it help or hurt? If my model does exactly what I intend it to do, are there unintended consequences or applications to consider? This additional step of scoping a project not only helps identify potential unintended consequences, but can also help set expectations for the final product.

Finally, is there anything I can do to make my work clearer, more transparent, and more digestible for my team, stakeholders, and the people it may eventually impact? It is crucial to be able to communicate how a model was built and how it might be applied in order to build trust within a company, and those clear communications should then be made available to the people about whom the algorithm makes predictions. If a model I build is going to be part of a decision that impacts someone’s life, they deserve the opportunity to know what went into the process. Models and data are not above reproach; they deserve to be questioned and scrutinized, and that begins with clarity about how they are built and implemented.
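A small, shareable artifact can go a long way here. As one possibility (not a prescription from either book), permutation importance gives a plain-language ranking of which features actually drive a model’s predictions; the sketch below uses synthetic stand-in data so it runs end to end.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data so the sketch runs end to end; swap in real features.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? The result is a ranking stakeholders can actually read.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Publishing a table like this alongside a model’s inputs and intended use is one step toward the kind of scrutiny argued for above.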

There is a long road ahead of all of us. These concerns may have flown under the radar in the earlier days of data science, but we must bring them to the forefront. This is a human rights issue we will continue to face, and we will only see more stories like Facebook settling a lawsuit over facial recognition and schools moving away from biased automated proctoring. As a data scientist and a person who has benefited from a number of forms of privilege, I want to make the effort and create space to prevent future harm. We must always consider the potential effects of any model or analysis, because it will ultimately operate on more than abstract numbers; it will have an impact on the lives of real people.

For more detail about me, please visit my website, and to connect, find me on LinkedIn.
