[Note: This is a guideline I wrote at the start of the recent semester, for the classes I teach. I’m sure there will be other people who want some concrete ideas on how to cope with tools like that (beyond open mantras such as “use it as a tool, wisely”), so here’s a start.]
In-house guideline: AI-assisted Writing (e.g. ChatGPT)
For the past month or so, AI-assisted writing has been the talk of the town. Though the topic has been around for several years, the sudden spike in interest stems from OpenAI’s GPT-3 (Generative Pretrained Transformer 3). “It is a state-of-the-art language model developed using deep learning techniques. It is trained on a massive amount of text data from the internet, making it capable of generating text in a wide range of styles and genres, from technical writing to creative writing. One of the core characteristics of GPT-3 is its ability to perform many language tasks with high accuracy and fluency, including natural language understanding tasks such as answering questions and translating text, and text generation tasks such as writing stories and composing poetry.” Yes, this quote was generated by GPT-3, in a natural language conversation. Not even edited.
Compared to what was out there, GPT-3 (served in a sandbox named ChatGPT) is disruptively good at sounding natural. Ask it to “write a five-point debate of the pros and cons of Facebook on democracy,” and it will give you a decent explainer. Feed it three arguments on that topic as an outline and ask it for two more, and it will give you a natural-sounding continuation of what you started. It reads so naturally that it freaked out professors all over the world: essay assignments could become meaningless.
How it Really Works
GPT-3 is a “deep learning” mechanism, meaning it trains on massive amounts of existing text, transforming words and their connections into tokens representing “meanings.” The model assigns probabilities to tokens, calculating which token is most likely to come next in natural language. Repeat this, token by token, until the response is complete.
Two things to note from this: first, the output is limited by the data the model was trained on. That data is said to be text from the internet, hundreds of billions of words. However, the training cutoff is 2021, meaning anything that happened after that point is not reflected at all. Second, although it is constantly called AI, this tool simulates language only. It DOES NOT weigh the strength of evidence, run social simulations on the outcomes of ideas, or pursue a cause for itself or for the larger society.
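To make the “predict the next token, then repeat” idea concrete, here is a deliberately tiny sketch in Python. It uses word-level bigram counts instead of a neural network, and every name in it is made up for illustration; real models like GPT-3 use billions of parameters and far longer context, but the generation loop is the same in spirit.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=5, seed=0):
    """Repeatedly sample a likely next word, just like a language model
    repeatedly samples a likely next token."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation: stop generating
            break
        # Pick the next word in proportion to how often it followed before.
        candidates, weights = zip(*followers.items())
        out.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Notice what the sketch does and does not do: it always produces fluent-looking sequences of words it has seen before, but nothing in it checks whether the output is true. That is the limitation described above, in miniature.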
Using AI Tools for Writing Tasks in Class
While the details may vary from class to class, the general purpose of writing in classes is to get trained on specific skills such as:
– Critical processing of ideas by others, by understanding the contexts where those ideas came from and how they would perform when applied to our circumstances.
– Transforming your understanding of an issue into logical ideas on what to do about it.
– Critical evaluation of evidence and hypothetical options by examining factuality, desirability, and feasibility (and many other criteria).
Trying to automate the writing task to simply get it done would be like bringing a Jet Ski to a swimming class. You are not here to cruise from point A to point B; you want to learn how to swim. Moreover, it isn’t even wise to operate a Jet Ski without being able to swim.
Given how language AI works and what the class writings are about, we can devise a list of when it works well and when it does not.
AI tools are great for:
1. Sentence-writing assistance, such as finding the right words at tricky junctures, or knowing how to start and end a sentence.
2. Comparison with a “default” structure. Check whether you have left out any structural parts that are normally expected.
3. Quality check. Is your writing better than an AI, given the same ingredients?
If the writing task is about summarizing an explanation or finding an optimal sentence structure, AI works wonders. Given the right cues, the output will look like a smoothly written paper in good language. That’s why ChatGPT is great at writing computer code and article abstracts.
AI tools are disasters for:
1. Developing ideas. By design, the results converge into widely acceptable mediocrity. The mechanism uses likelihoods learned from the vast body of existing texts, so it is quite good (better than many earlier AI systems) at avoiding horrible extremism. But it also avoids brilliant takes. Additionally, most questions involving political decisions are deflected by default.
2. Fact-checking your evidence. An AI writing tool simulates language, not evidence-finding. In fact, it sometimes even generates (read: fabricates) evidence to make its case sound natural.
3. Contextualizing where the ideas and evidence came from. AI is supposed to paraphrase, to put ingredients into new language. It does not recall where those ingredients came from.
4. Your uniqueness. Sure, ChatGPT is great at emulating styles, but only when it has had lots of references to learn from. It probably did not learn much from YOUR style of logic, narrative, and aesthetics.
In short, if the writing task is about critically analyzing a phenomenon and proposing your own ideas to improve it, ChatGPT will give you a piece filled with mediocre, shallow ethical rhetoric, plenty of bogus evidence, and borderline plagiarism due to non-attribution, all in a plain and uninteresting style. Worse yet, it will look like it was written by a human. You.
Using Tools Wisely
There is no best practice yet, since these tools are so new. But let’s go with a hypothetical workflow: on a given topic, you look up what can be and has been conventionally discussed (AI useful). Then you analyze what is left out, what deserves better focus, and what needs to be examined much more deeply (please, no AI). You research the key facts and verify them (no AI). You devise your main arguments with their implications in mind (no AI), and polish your sentence structure (AI useful). Then ask the AI to summarize your writing, to see whether you have written logically enough for it to sort out your main ideas.
Some people are hyping that, at this pace, we will reach the singularity sooner rather than later. However, as long as the tools we see are still language models, their limitations will remain largely the same. So keep honing your critical thinking skills with good writing for the foreseeable future.