In the machine learning model lifecycle, we commonly refer to putting models into products as "deploying" them. While widely accepted, the word deployment is entrenched in power and militarization.
The language we use to speak about AI is incredibly important, including our technical terms. One of the biggest disadvantages of the discourse on machine learning and AI is the vocabulary we’ve used to describe it so far. For example, take a look at implement versus deploy: which word best describes what you’re doing with a production machine learning model? While practitioners might argue the words mean the same thing in the context of ML, in this post I will discuss why implementing and incorporating might be better terms than deployment. (Personally, I prefer implementing.) I’ll also dive into how using different words can help us build a movement for responsible AI that spans disciplines and industries.
Is Deploying the Right Word?
The military use of the word "deploy" in English dates back to 1786, meaning to extend (troops) in a line or to expand (a unit that had been formed in columns). The term has been used figuratively since 1829. This lineage is not only unhelpful when discussing AI; it also resonates with other loaded words in our modern vocabulary, such as colonize (often used when describing human travel to Mars).
In addition, the historical usage implies knowledge of what is being deployed against which group or groups. The word may sound innocent enough if you strip away this connotation; even so, we should reframe our language around implementing ML models. I challenge you to consider how using deploy, specifically, intersects with fairness harms and unjust systems within society.
This assumed knowledge about what will be deployed is a huge issue. Data Scientists and ML Engineers understand that a decision system will be deployed, but we don’t always know what kind of impact it will have on the people those models are deployed on. Given that 65% of companies can’t explain how their AI models’ decisions or predictions are made, we should be concerned that we’re deploying systems without knowing the potential consequences groups might face.
Using deploy to describe a technical action, especially given the impact and scale of Data Science and AI work, is inaccurate if nothing else. The lack of specificity in how we talk about AI does not help mitigate the harms of systems that often reinforce stereotypes, degrade users, and operate with little oversight. The term deployment reinforces the power structure between the organizations that create technology and their users. Deployment is one-sided: users are rarely consulted, and there is no rebuttal or debate over the decisions these systems make. These dynamics are out of line with the goals of ethics in AI.
ML/AI Discourse and Consequences
Our lack of specificity and attention to accurate word choice echoes the exact practices that lead to disparate outcomes from ML models. Lacking guidance from social scientists, fairness researchers, and the downstream groups of people likely to be impacted, what engineers do right now IS deploy models, with little consideration for whom the model is deployed on. This is just part of the reason why 90% of all ML models never make it to production in the first place.
As ML models are deployed to production with questionable ethics and without sufficient transparency or opportunity for input, they also open back doors into our workplaces and personal lives through privacy breaches. We need to shift away from terms like deploy and broaden these conversations, rooting them in social science research on inclusion and empathy-driven AI development.
And as AI evolves and our understanding of it changes, so too must the language we use to describe it. What does this mean for you?
A lot of Ctrl+F and Ctrl+V
Updating our language based on new information is nothing new; language commonly shifts and changes over time. It’s important for those of us working in artificial intelligence as researchers, engineers, business leaders, policymakers, and educators to understand how language shapes perspectives about this technology.
We need not wait for a revolution or a new world order before taking actionable steps to improve our modeling process; you can start today with the language you use with your team. Update slide decks and internal documents.
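In practice, that can be as simple as a quick audit script. Here is a minimal sketch (the folder path, term list, and function name are my own assumptions, not an established tool) that flags occurrences of deploy and its variants in a documentation folder and suggests a replacement for each, so a human can review every hit before changing anything:

```python
import re
from pathlib import Path

# Flagged terms and suggested replacements (adjust to your team's preferred vocabulary).
REPLACEMENTS = {
    "deploying": "implementing",
    "deployment": "implementation",
    "deployed": "implemented",
    "deploy": "implement",
}

# Match whole words only, longest variants first so "deployment" isn't caught by "deploy".
PATTERN = re.compile(
    r"\b(" + "|".join(sorted(REPLACEMENTS, key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def flag_terms(root: str = "docs") -> None:
    """Print every line in text-based files under `root` that uses a flagged term."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".txt", ".rst"}:
            continue
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            for match in PATTERN.finditer(line):
                suggestion = REPLACEMENTS[match.group(1).lower()]
                print(f"{path}:{lineno}: '{match.group(1)}' -> consider '{suggestion}'")

if __name__ == "__main__":
    flag_terms()
```

I deliberately have the sketch report occurrences rather than rewrite files in place: deploy also appears in code identifiers and vendor commands that must stay exactly as they are, so each occurrence deserves a human decision.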
The discourse on ML so far has used terms that may encode the beliefs, values, and perspectives of those using them, even unintentionally. As we look to the future of AI, it is important for us to be mindful and intentional with our words. We should use language that communicates what we are doing in a way that captures the nuance and complexity of machine intelligence while also being respectful of the etymology and connotations of the words we choose.
What do you think?
Which word would you prefer to describe your work when moving a model to production? Let me know in the comments below!