Okay, so I did the research and thought about it.
This has been the leading exhortation for faculty dealing with the likelihood that students will use generative AI tools to write papers in their classes: try it out, consider how it might be useful, and write a very nuanced policy.
As designed, large language models (LLMs) like ChatGPT produce intelligent-sounding responses to a wide variety of queries. To do so, they are trained on billions of pieces of writing and develop a predictive model of the relationships between words. Because of this underlying corpus, the feedback provided by millions of complete examples, and extensive rating by paid human assessors, they generate pleasing content that often uncannily resembles comprehension.
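To make that "predictive model of the relationships between words" concrete, here is a toy sketch of my own (not anything from OpenAI): a bigram counter that, given a word, predicts the most likely next word from a tiny made-up corpus. Real LLMs use neural networks over subword tokens and vastly more data, but the underlying task, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny "corpus" standing in for the billions of documents
# real models are trained on.
corpus = (
    "the students write essays . the students read papers . "
    "the professor reads essays . the professor writes feedback ."
)
tokens = corpus.split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else "."

# Generate text by repeatedly predicting the next word: fluent-looking
# output with no model of meaning behind it.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

The output looks grammatical and on-topic, which is exactly the point: plausible word-by-word continuation is not the same thing as understanding.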
I’ve maintained an open session to experiment with ChatGPT, poked and prodded at its limitations, explored how it remixes and regurgitates material I’ve written, taken (most of) an online prompt-engineering course by a colleague on Coursera, and entered my writing assignment prompts to see what it comes up with.
And my considered answer is basically, “No.”
No, they shouldn’t use LLMs to replace search engines, library databases, or Google Scholar. No, they shouldn’t treat LLM output as a summary of the field of human knowledge. And no, students shouldn’t be submitting large language model-generated essays to my class.
In the end, the two main things I’m looking for in class essays are self-reflection and research. And while I can get the appearance of both from a large language model, the first is a lie and the second an uncertain and fragile illusion. Allow me to illustrate…
