I got in a Twitter fight yesterday – without meaning to.
> "Coursework will include math, statistics, and computer science as well as machine learning, computational modeling, and how neural nets works." Where are the social sciences?
> ([tweet](https://twitter.com/claudiakincaid/status/994838390725111808))
In response to the announcement that a university would offer an undergraduate degree in AI, with coursework in "math, statistics, and computer science as well as machine learning [and] computational modeling", someone asked: "Where are the social sciences?"
For an anthropologist by training who has worked as a data scientist for over a decade, that question hit close to home. I’ve spent my career trying to strike a balance between my self-identification as a social scientist and the practical need to articulate my value in terms that my employers and customers care about. I wrote on this topic several years ago, when I had just started to call myself a data scientist ([here](http://housesofstones.github.io/2013/07/09/anthropology-and-data-science-need-each-other/) and here, for example), and I’ve recently found myself returning to it as I’ve tried to explain how ethnographic methods are a core part of my technical work.
Where are the social sciences? Where should they be? Those are questions I’ve honestly struggled to answer over the course of my career. I tried to start a conversation about them by responding to the tweet linked above, but things got ugly pretty quickly. Maybe I just communicated poorly. Maybe Twitter is just a horrible place to try to talk about anything substantive. Maybe both. Really, I just rephrased the same question over and over again.
I never got an answer, but I still want one.
This is about whole disciplines, not people. Of course I believe many individual social scientists have done laudable work, just as I believe many individual computer scientists have done really terrible work. But the question implied in the original tweet wasn’t "why don’t you have specific social scientist X teach a class?", but rather "why don’t you include courses from social science departments on the curriculum?" This same question was implied by the one person in the thread who asked a clarifying question:
> How can you be an anthropologist and data scientist and not understand how essential an understanding of human factors and human behavior is to the development of artificial intelligence? Genuine question.
> ([tweet](https://twitter.com/deborahbrian/status/995154958457581569))
I think this question raises two further questions:
- Is an understanding of human factors/behavior essential to the development of artificial intelligence?
- Do the disciplines commonly referred to as "social science" reliably provide trustworthy understanding of human factors/behavior?
In answer to question 1: I don’t know, but I doubt it. If we talk about AI in the modest terms of machine learning, not in the decidedly less modest terms of some sort of singularity-esque general intelligence, then we’re talking about an engineering problem, which means we’re talking about tinkering. I think it’s absolutely essential that we actively build ways to monitor how data systems create downstream impacts for individuals and societies. But I think we’re just as likely to learn about human behavior by building and then incrementally modifying AI as the other way around. I think it’s a mistake to assume that our ability to understand human behavior is currently robust enough that we can achieve understanding first and then build systems based on that understanding second. In fact, most of the history of science has consisted of tinkering with systems first and building understanding second. Nassim Taleb has written about this (and here is a more user-friendly summary of some of his arguments).
In answer to question 2: it has been a primary regret of my career that I have to answer no, when I would so very much like to be able to answer yes. I’ve spent my career in industry, so I judge any tool by its ability to achieve practical results. For example, I judge medicine by its track record of making sick people well and keeping well people well. If a part of medicine has a bad track record, or no track record, I judge that the level of understanding in that part of the field is poor. I don’t care how many studies have been done or how many people have written on the subject: if you can’t get results, then you can’t claim understanding.
It is a mistake to ever count words alone as results. If you can do something (change behavior, make money, etc.) in the real world using a tool – and "tool" can mean a method or theoretical perspective as well as a piece of code or machinery – then that is necessary, though not sufficient, evidence of understanding. That’s my minimum bar for claiming understanding.
With the exception of the ethnographic method, I can’t think of any tool originating in the social sciences that can clearly claim to have met that bar. Even in the case of ethnography, I think my assertion of demonstrated value could be disputed. I don’t mind imperfect tools – those can always be improved. I do mind tools that have never achieved practical results in the real world. I’d like to think there are more examples of tools from the social sciences that have met that bar, and that I’m just unaware of them. But even if that’s the case, the ratio of noise to signal in the social sciences is simply too high. For the purposes of building AI, that ratio is much, much higher than it is in disciplines like computer science. That doesn’t mean computer science is proven valuable and social science proven valueless. It means that, in a world where my ability to keep my job depends on my ability to deliver value to my employers and customers, searching the tools of computer science pays off at a much, much higher rate than searching the tools of social science.
I wish that weren’t the case. I’m proud of being an anthropologist. Throughout my career, I’ve so often had to downplay the fact because the people paying for my skills didn’t recognize any value in that designation, even though my anthropological skill set was a large part of what allowed me to deliver the results they wanted. That saddens me, sometimes frustrates me, but it has long since ceased to surprise me.
So where are the social sciences? For the most part, it seems they’re busy talking to themselves. Inclusion in an undergraduate curriculum, a research agenda, or a business plan is a sign of respect – a sign of recognized value. The fact that the social sciences are so often excluded is an indication that they aren’t widely valued. From my perspective as a practitioner, I wish I could say that they currently deserve to be valued more, but I can’t. I hope that will change. In the meantime, I have things I need to build. I’ll use whatever tools will help me do that. But I’m not taking any tool, especially not a packaged set of tools, on faith.