AI Alignment and Safety

Does AI have Politics?

An analysis of Langdon Winner’s 1980 paper ‘Do Artifacts Have Politics?’, reflecting on current AI innovation

Mark Garvey
Towards Data Science
5 min read · Jun 29, 2021



Langdon Winner’s 1980 paper ‘Do Artifacts Have Politics?’ (Winner, 1980) is a seminal article in technology ethics, in which the author argues that technological artifacts can embody biases towards particular political structures, namely authoritarianism and democracy. The author develops two main themes, ‘Technical Arrangements as Forms of Order’ and ‘Inherently Political Technologies’. This article examines both themes and their relation to current AI innovation.

‘Technical Arrangements as Forms of Order’

Winner begins with a discussion of technical objects that serve a practical purpose but may also carry political tendencies. He gives several examples where such political biases are present, including the overpasses built on Long Island under the developer Robert Moses, which were designed so low that buses could not pass underneath, keeping the poorer classes who relied on such transport from reaching his public parks. Another example is the pneumatic molding machine, introduced at McCormick’s reaper manufacturing plant to automate several manufacturing processes. These devices eliminated many jobs, replacing the skilled molders with a few unskilled workers to operate the machines. The stated reason for introducing the machines was increased productivity; in truth, however, they were brought in to break the National Union of Iron Molders, a group then in a dispute with McCormick.

The interesting question the author raises here is whether certain technologies “can be used in ways that enhance the power, authority, and privilege of some over others” (Winner, 1980, p. 125). He describes how Robert Moses’ bridges and McCormick’s machines were each built to perform a particular task (highway transport and molding respectively), yet both “encompassed purposes far beyond their original use.” (Winner, 1980, p. 125)

Many AI systems have the potential to fall into a similar category.


Even though most AI technologies have been developed in the name of progress and the betterment of our lives, the risk arises when we consider how these technologies are actually used. Take, for example, the collection of user health data from wearable devices. If this data is used to give the wearer an accurate model of their own health, it can have a profoundly positive impact on their well-being. If, however, the data is taken beyond its original intended use and sold to third parties to create targeted advertisements of a personal nature, serious issues arise around privacy and even users’ mental well-being. It is therefore imperative that benign use cases of AI and data collection stay benign; laws governing data use, especially advertising based on sensitive user data, can help to reinforce this.

‘Inherently Political Technologies’

The second way in which Winner describes how artifacts can have political properties is that of “inherently political technologies, man-made systems that appear to require, or to be strongly compatible with, particular kinds of political relationships.” (Winner, 1980, p. 123) Here the author gives the example of nuclear power, referring to the notion that such a dangerous technology requires a strict, militaristic government to manage it and enforce rigorous regulation. Another example is the atom bomb, a technology that would not even exist but for the existence and backing of a scientifically advanced, military-backed government.

In what ways can we draw parallels with the development of modern AI systems?


Do AI systems exist that are inherently autocratic? A clear example would be China’s mass adoption and deployment of facial recognition systems throughout the nation. This mass surveillance apparatus clearly serves to maintain order in the authoritarian state, but is it possible that this was always going to be the end use case for this particular technology? Facial recognition systems were first developed for operational use by DARPA beginning in 1993, as part of the FERET program (Rauss et al., 1997). Since much of the technological advancement in face recognition was pioneered by the US military industry, perhaps the use of such systems was always going to be about enforcing order. It is possible that such systems were inherently designed to benefit autocratic states, for some of which the use of facial recognition for racial profiling is not an accident but actively encouraged (‘Reporter on China’s Treatment of Uighur Muslims’, 2021).

Conclusion

It is remarkable how relevant Winner’s paper remains today, which speaks to how well-crafted it was in 1980. In the final paragraph, one observation rings especially true in the context of the integration of AI and data collection into today’s society:

“In our times people are often willing to make drastic changes in the way they live to accord with technological innovation at the same time they would resist similar kinds of changes justified on political grounds” (Winner, 1980, p. 135).

Undoubtedly, it will be important to remain ethically vigilant as new AI innovations inevitably arise.

References

Rauss, P.J., Phillips, J., Hamilton, M.K., DePersia, A.T. (1997) ‘FERET (Face Recognition Technology) program’, 25th AIPR Workshop: Emerging Applications of Computer Vision, 2962, 253–263.

Reporter on China’s Treatment of Uighur Muslims: ‘This Is Absolute Orwellian Style Surveillance’ [online] (2021) available: https://www.cbsnews.com/news/china-puts-uighurs-uyghyrsmuslim-children-in-prison-re-education-internment-camps-vice-news/ [accessed 14 Jun 2021].

Winner, L. (1980) ‘Do Artifacts Have Politics?’, Daedalus, 109(1), 121–136.

This article was written as part of my course on Risk, Ethics and Governance in Artificial Intelligence, as part of my part-time Masters in AI at University of Limerick, Ireland.

If you liked this story, please consider following me on Medium. You can find more on https://mark-garvey.com/

