Fairness and Bias

Do algorithms have politics?

Do artifacts have politics? This question has been the subject of decades of discussion in the social sciences and remains pertinent today.

Sophie Bennani-Taylor
Towards Data Science
4 min read · Oct 5, 2021


Photo by Michael Dziedzic on Unsplash

Do artifacts have politics? This question, itself the title of Langdon Winner’s influential essay on technological politics (Winner, 1980), has generated decades of discussion and remains pertinent today. In the essay, Winner proposes that technology, as a material artifact, can embody political ideas. This was quite a radical notion at the time: conceiving of a material artifact as a political actor requires a new conceptualisation of how power relations are expressed.

Indeed, the theory has been controversial, and responses to Winner’s essay have objected to the agency it grants to technology rather than to other social actors (e.g. Joerges, 1999; Woolgar and Cooper, 1999). However, his theory highlights the value of examining the particular features and characteristics of technology design. Winner claims that technologies can embody specific forms of power and authority, rather than merely being politicised by the context in which they are situated (Winner, 1980). This matters because, instead of analysing everything through the lens of social forces, it enables us to understand the sociopolitical implications of technology design choices.

Although Winner developed his theory over 40 years ago, the concepts he describes remain relevant today. The recently proposed EU regulations on AI, for instance, reflect a growing recognition of political issues in technology, such as bias in AI systems (European Commission, 2021). Laws such as these respond to mounting concerns about discriminatory algorithms across the public and private sectors. For example, Amazon’s 2014 recruitment algorithm was found to disproportionately reject the CVs of women: it penalised CVs which contained the word “women’s” (e.g. women’s chess club) and CVs with the names of all-women’s colleges (Dastin, 2018).

Amazon’s algorithm doesn’t just support Winner’s theory through the sexist impact it had, but also through the ways in which sexist ideas were encoded into the technology itself. Firstly, the machine learning model was trained on CVs from past (primarily male) candidates, which enabled the algorithm to re-enact the history of women’s exclusion from the IT sector. Secondly, the treatment of gendered signals, such as the word “women’s”, as relevant variables allowed the algorithm to select candidates on that basis. Finally, the decision to design an algorithm which selects the top candidates from a list of CVs gave the algorithm the power to determine an individual’s career opportunities. Although these were all design decisions made by a range of human actors (who should be held responsible for their products), the algorithm’s material features and characteristics enabled it to become a political actor regardless of the intent of the Amazon team.
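
To make the first two of these mechanisms concrete, here is a minimal, illustrative sketch in Python (using scikit-learn). It is emphatically not Amazon’s actual system: the CVs and hiring labels below are invented, and the model is a toy. The point is the dynamic: train a text classifier on a biased history, and the bias resurfaces as a learned weight.

```python
# A toy sketch (not Amazon's system): a text classifier trained on
# invented "historical" hiring decisions in which CVs mentioning
# "women's" were mostly rejected. The model learns that bias.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic training data standing in for past (primarily male) hires.
cvs = [
    "captain of chess club, python developer",          # hired
    "python developer, open source contributor",        # hired
    "java engineer, hackathon winner",                  # hired
    "captain of women's chess club, python developer",  # rejected
    "graduate of a women's college, java engineer",     # rejected
    "data analyst, mentor at women's coding society",   # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The vectoriser's default tokeniser reduces "women's" to "women".
# A negative coefficient means the word lowers the "hire" score.
idx = vectoriser.vocabulary_["women"]
print(f"learned weight for 'women': {model.coef_[0][idx]:.3f}")
```

Running this prints a negative weight for “women”: the model has quietly translated a biased history into a biased scoring rule, with no explicit instruction to discriminate.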

“Instead of analysing everything through the lens of social forces, it enables us to understand the sociopolitical implications of technology design choices.”

Thus, applying Winner’s theory of technological politics to Amazon’s recruitment algorithm helps us to understand why the social study of technology remains relevant today. Amazon’s algorithm didn’t just have a sexist impact because of the socio-political context in which it was placed, but also because of the way that political ideas were inscribed into the technology. These technological features — the database used to train the machine learning algorithm, the variables considered relevant to it, and the way in which the algorithm was translated into a hire/no hire decision — have very real, and very political, implications.

By understanding how technologies embody political ideas, we can challenge the notion of technology as a ‘neutral tool’ and start to rethink how technologies can be better designed. In the case of Amazon, we might think about how recruitment algorithms can be trained on more diverse datasets, how they can be subject to auditing mechanisms which flag examples of bias, or perhaps how their use can be prevented in contexts which decide our futures. As Langdon Winner puts it: we can consider not just “how technology is constructed but […] the ways in which our technology-centred world might be reconstructed” (Winner, 1993:376).
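
To give a flavour of what such an auditing mechanism might look like, here is a small Python sketch of one widely used screen: the “four-fifths rule” for disparate impact, drawn from US employment guidelines. The group names and numbers are invented for illustration; a real audit would be considerably more thorough.

```python
# A sketch of a simple bias audit: compare selection rates between
# groups and flag ratios below 0.8 (the "four-fifths rule").
# All figures here are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    return selected / total

def disparate_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference group's."""
    return group_rate / reference_rate

# Hypothetical outcomes from a CV-screening algorithm.
women_rate = selection_rate(selected=15, total=100)
men_rate = selection_rate(selected=30, total=100)

ratio = disparate_impact_ratio(women_rate, men_rate)
print(f"selection rate (women): {women_rate:.2f}")
print(f"selection rate (men):   {men_rate:.2f}")
flag = "  -> flag for review" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```

A check like this would not fix a biased model on its own, but it shows how a design choice, building measurement into the pipeline, can make an algorithm’s politics visible.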

Bibliography:

Dastin, J., 2018. Amazon scraps secret AI recruiting tool that showed bias against women. [online] Reuters. Available at: <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G> [Accessed 1 October 2021].

European Commission, 2021. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. [online] European Commission. Available at: <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206> [Accessed 3 October 2021].

Joerges, B., 1999. Do Politics Have Artefacts? Social Studies of Science, 29(3), pp.411–431.

Winner, L., 1980. Do Artifacts Have Politics? Daedalus, 109(1), pp.121–136.

Winner, L., 1993. Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology. Science, Technology, & Human Values, 18(3), pp.362–378.

Woolgar, S. and Cooper, G., 1999. Do Artefacts Have Ambivalence? Social Studies of Science, 29(3), pp.433–449.


Sophie is a researcher interested in digital identification and the intersection of technology and migration.