PODCAST

Should all AI research be published?

Rosie Campbell on responsible research and publication norms in AI

Jeremie Harris
Towards Data Science
3 min read · May 12, 2021

--

APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds.

When OpenAI developed its GPT-2 language model in early 2019, they initially chose not to release the full model, citing concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems.

This decision was controversial, and remains so to some extent even today: AI researchers have historically enjoyed a culture of open publication and have defaulted to sharing their results and algorithms. But whatever your position on models like GPT-2, it’s clear that if AI systems eventually become arbitrarily flexible and powerful, there will be contexts in which limits on publication matter for public safety.

The issue of publication norms in AI is complex, which is why it’s a topic worth exploring with people who have experience both as researchers and as policy specialists — people like today’s Towards Data Science podcast guest, Rosie Campbell. Rosie is the Head of Safety Critical AI at Partnership on AI (PAI), a nonprofit that brings together civil society organizations, academic institutions, startups, and big tech companies like Google, Facebook, Microsoft, and Amazon to shape best practices, research, and public dialogue about AI’s benefits for people and society. Along with colleagues at PAI, Rosie recently finished a white paper exploring the ongoing debate over publication norms in AI research and making recommendations for researchers, journals, and institutions involved in AI research.

Here were some of my favourite take-homes from the conversation:

  • Best practices and recommended publication norms are only useful if researchers are willing and able to implement them. As a result, Rosie emphasizes the importance of recommending publication norms that are minimally intrusive on the research process and that require as little overhead as possible from researchers. For example, she recommends that researchers include a statement about the potential impacts and use cases of their contributions in published work, but that the amount of time they invest in this be proportional to the magnitude of each contribution. For projects that represent incremental progress (which is the vast majority of ML research), researchers can reasonably spend less time thinking about the potential harms and implications of their work.
  • One of the biggest challenges to establishing responsible publication norms is that a single organization or research team that disagrees with a given publication framework can simply refuse to implement it, thereby undermining the efforts of everyone else. In fact, this has already happened: in 2019, shortly after OpenAI announced that they would hold off on releasing their full GPT-2 model, an independent research team set about replicating it, on the grounds that restricting access to leading AI systems would prevent AI safety researchers from doing the cutting-edge work needed to keep up with AI capabilities. That’s why it’s so important to build consensus among researchers on a minimum set of responsible publication standards that everyone will willingly adhere to.
  • It’s unreasonable to expect busy AI researchers to become futurists and policy experts in order to be able to predict the impact or potential harms of their work. That’s why collaboration with social scientists and ethicists will become increasingly important as AI technology develops. Rosie advocates for more mixing between these disciplines as a means of supporting AI researchers in developing more robust impact assessments.

You can follow Rosie on Twitter here, or me on Twitter here.

Links referenced during the podcast:

Chapters:

  • 0:00 Intro
  • 2:05 Rosie’s background
  • 5:40 Risks of advanced AI
  • 8:15 Activity surrounding publication norms
  • 12:40 Arguments to shift away from the default model
  • 15:10 Harmful consequences of language modelling
  • 23:00 Coordinating as a community
  • 28:04 Norms of publication as default
  • 31:30 Responsibilities of researchers
  • 34:30 Governments’ role in this space
  • 40:20 Incentives for corporate AI research
  • 44:30 Auditing companies’ algorithms
  • 46:20 PAI and international involvement
  • 50:00 Calls to action
  • 51:55 Wrap-up

--

Co-founder of Gladstone AI 🤖 an AI safety company. Author of Quantum Mechanics Made Me Do It (preorder: shorturl.at/jtMN0).