Anticipating the emergence of AI-assisted academic misconduct

Why non-STEM disciplines need to pay attention to AI technology

Brian Perron, PhD
Towards Data Science

--

Street art in Guatemala. Photo by the author.

I am a Professor of Social Work, one of the most prominent applied social science disciplines and one that, for the most part, has not given AI technologies much attention. I suspect the same is true of the other social sciences. Whatever their interest in AI technologies or views about their relevance, every educator in the non-STEM disciplines, especially the applied social sciences, should rethink how they use written assignments and reflection papers for educational and evaluative purposes. Anti-plagiarism software has been a significant step toward protecting against academic misconduct. However, this technology simply cannot compete with the recent and surprising advances in AI.

I’m writing this Medium post to offer a single example of an off-the-shelf AI technology that foreshadows the future of academic fraud. Specifically, I demonstrate an AI model capable of generating text of such high quality that it can be indistinguishable from human writing. I don’t know whether the examples in this article will surprise or concern my colleagues and students, but I think the demonstration is a starting point for discussion. I’ll first describe what GPT-3 is and then provide three examples showing the capabilities of the model.

What is GPT-3?

GPT-3 is a language model developed by OpenAI, a San Francisco-based research company. Trained on roughly half a trillion words, GPT-3 is capable of performing a variety of natural language tasks. GPT-3 does not regurgitate, summarize, or rewrite existing text. Instead, the model leverages 175 billion machine learning parameters to make word predictions. The user supplies GPT-3 with a set of starting phrases, called seed terms, which the model uses to predict the words most likely to follow. Those word predictions are strung together into complete sentences, paragraphs, and full-text documents.
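To make that prediction step concrete, here is a minimal sketch using OpenAI's Python library (the original Completion interface). The model name, prompt, and parameter values are my own illustrative assumptions, not the configuration Jasper uses behind the scenes.

import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

# Ask GPT-3 to predict a single next token and report the five most
# likely candidates, exposing the prediction step described above.
response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative model choice
    prompt="Jane Addams founded the Hull House Settlement in",
    max_tokens=1,               # generate just one token
    logprobs=5,                 # return the top five candidate tokens
)

print(response["choices"][0]["text"])  # the most likely next token
print(response["choices"][0]["logprobs"]["top_logprobs"][0])  # its competitors

Longer generations simply repeat this step: each predicted token is appended to the prompt, and the model predicts again.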

Generating original text isn’t the only thing GPT-3 can do. Here are a few other Medium articles with more examples and details about this model.

Accessing and test-driving GPT-3

Microsoft holds exclusive rights to the underlying GPT-3 model, but OpenAI offers an API that allows anyone to use it. For convenience, I’m using an API-based GPT-3 service called Jasper. Jasper offers a no-code, low-cost, and flexible platform for carrying out various natural language tasks specific to text generation.
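Jasper is only a convenience layer, however; the same kind of request can be scripted against the OpenAI API directly. Below is a rough sketch of how the essay-generation examples that follow might be reproduced; the prompt and parameter values are illustrative assumptions rather than Jasper's actual settings.

import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI account and key

# Generate a short essay from a one-line prompt, roughly what Jasper
# does behind its no-code interface. Settings are illustrative.
response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative model choice
    prompt="Write a short essay on Jane Addams and the Hull House Settlement.",
    max_tokens=400,             # cap the length of the essay
    temperature=0.7,            # moderate randomness in word choice
)

print(response["choices"][0]["text"].strip())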

Example 1: Jane Addams and the Hull House Settlement

For this example, I am using GPT-3 to write a short essay on Jane Addams and the Hull House Settlement. The following image shows the seed terms and a blank document that receives the text output.

Interface of Jasper.ai for generating text using GPT-3. Image by the author.

After generating the text, I applied minor formatting but did not edit the content. This output was generated in less than seven seconds.

Original output of text generated using GPT-3 using Jasper.ai. Image by the author.

Example 2: Personal commitment to social justice

This second example uses GPT-3 to generate a short personal essay about social justice. I selected this example to demonstrate how GPT-3 can write a reflection in the first person, a common assignment in the applied social sciences, especially social work. Again, I did minor formatting but did not edit any text. I also note that both this example and the previous one could have been made much longer by providing more seed phrases, but I think the amount of text provided is sufficient for the discussion I hope to ignite.

Interface and output from Jasper.ai. Text generated from GPT-3. Image by author.

Example 3: Re-writing academic text

GPT-3 can perform other natural language tasks as well. Whereas the first two examples generated original text, this third example shows how GPT-3 can rewrite existing text. The paragraph I use for this demonstration is from an editorial published in the journal Qualitative Social Work. I selected this paragraph for convenience and because I share the author’s concern about academic misconduct.

Interface of Jasper.ai. Input text by Staller (2019, p. 898).

For this example, I generated three different outputs within a few seconds. These rewrites come directly from the GPT-3 model output without any editing.
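For readers curious how this might be scripted, the API can return several alternative completions in a single call. The sketch below is an assumed, illustrative way of requesting three rewrites; it is not Jasper's actual implementation, and the instruction-style prompt is my own.

import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI account and key

source_paragraph = "..."  # the Staller (2019) paragraph would go here

# Ask GPT-3 to paraphrase the paragraph and return three alternatives.
response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative model choice
    prompt="Rewrite the following paragraph in simpler language:\n\n"
           + source_paragraph,
    max_tokens=300,
    temperature=0.8,  # higher temperature yields more varied rewrites
    n=3,              # request three independent completions
)

for i, choice in enumerate(response["choices"], start=1):
    print(f"Rewrite #{i}:\n{choice['text'].strip()}\n")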

GPT-3 rewrite #1

Most people can see when a submission is not ready. This is often because it has mistakes. These mistakes might not be a big deal by themselves, but if you have a lot of them, it might mean that your manuscript isn’t good yet. As an editor, I used to think that I could help make these submissions better through the “revise and resubmit” process. But over time and through experience, I learned that this is often a sign that the manuscript has bigger problems. Usually, this journal doesn’t try to help students make their submissions better so they can be published. This should happen before you submit your work.

GPT-3 rewrite #2

Often, manuscripts that are submitted too early have mistakes. While each mistake might not be a big deal, if there are a lot of them, it can be a problem. When I was starting out as an editor, I thought I could fix these mistakes by revising and resubmitting the manuscript. But over time and through experience, I learned that this often means there are bigger and more fundamental issues with the manuscript. As a general rule, this journal doesn’t assume the responsibility of trying to mentor students so that their manuscripts are ready to be published. That should happen before submission.

GPT-3 rewrite #3

Most of the time, when someone submits their work before it is ready, there are clues that can give away that the manuscript is not finished. While these mistakes might not be a big deal on their own, if you have a lot of them, it can be a sign that there are bigger problems with the writing. I used to think that I could help fix these mistakes through the process of revising and resubmitting, but I have learned that this is often not the case. This journal does not usually take on the responsibility of helping students make their rough drafts into publishable pieces.

So what?

I firmly believe that AI technologies are essential for addressing social problems. AI technologies are not the problem. Instead, the problem is the people who use them for malicious and unethical purposes. Even if you are not impressed with the quality of the text in these examples, keep in mind that GPT-3 is already almost two years old. Whereas GPT-3 has 175 billion parameters, GPT-4 is rumored to have over 100 trillion.

I think non-STEM disciplines need to recognize that AI is no longer just an interesting storyline in futuristic science fiction novels. The future has arrived. We need to have conversations about AI in the non-STEM disciplines, thinking creatively and carefully about the potential opportunities and challenges of this technology. This article is simply a starting point for that discussion.

Article cited

Staller, K. M. (2019). From dissertation to published article: Advice from an editor. Qualitative Social Work, 18(6), 897–904.

--

I’m a Professor of Social Work at the University of Michigan. I’ve also been up since 4am drinking coffee. https://www.linkedin.com/in/brian-perron-6465507/