Travis Pickell
Assistant Professor of Theology & Ethics
Director of the Character Virtue Initiative
PhD. University of Virginia
MDiv. Princeton Theological Seminary
BA. College of William & Mary
Supervisors: Charles T. Mathewes, James F. Childress, Paul Dafydd Jones, Margaret Mohrmann, and James Davison Hunter
For instructors heading back into the classroom and fearing the worst for the fate of their essay assignments, we offer these five ideas:
(1) Take a deep breath, find the GPT-3 system online, sign up, and run some of your essay assignments through the system. You’ll learn a lot about what it can and cannot do, and you’ll begin to discover how “bot-proof” your assignments currently are. The exercise may even calm your nerves considerably, as you’ll find that on some topics the bot is completely terrible. On others…you may start to worry. One of the most obvious takeaways is that very generic essay questions are extremely susceptible to cheating through GPT. That is not a bad thing to learn, however: many of us know it already, and all of us can use a little kick in the pants to consider whether we are asking students merely to report basic information back to us at the lowest level of Bloom’s taxonomy (recall facts and concepts) or whether we can move our students toward better projects that connect, analyze, judge, design, and present new ideas. In fact, overly generic and common essay topics were already susceptible to various forms of plagiarism and cheating. As Andy Crouch has observed, GPT is quite adept at delivering writing that is basically correct but also quite often cliché. The problem is that this is exactly the sort of writing we too often expect from undergraduates in, say, an introductory humanities course.
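For the more technically inclined, the same test-your-own-prompts exercise can be automated. The sketch below is a minimal, hypothetical example using OpenAI’s Python library to run a list of essay questions through a GPT-3 model; the prompt wording, the sample questions, and the particular model name are our own assumptions, and you will need your own API key.

```python
# Hypothetical sketch: batch-run your own essay prompts through GPT-3
# via the OpenAI API. Requires an OpenAI account and API key; the model
# name below ("text-davinci-003") is one GPT-3 model available as of
# this writing and may change.
import os

# Example essay questions -- substitute your own assignments here.
ESSAY_PROMPTS = [
    "Explain Kant's categorical imperative.",
    "Compare utilitarian and deontological approaches to truth-telling.",
]

def build_request(question, max_words=500):
    """Frame an essay question roughly the way a student might."""
    return (
        f"Write a {max_words}-word undergraduate essay answering the "
        f"following question:\n\n{question}"
    )

def run_prompts(prompts):
    """Send each framed prompt to GPT-3 and collect the completions."""
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    results = {}
    for question in prompts:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=build_request(question),
            max_tokens=800,
            temperature=0.7,
        )
        results[question] = response.choices[0].text.strip()
    return results

if __name__ == "__main__":
    for question, essay in run_prompts(ESSAY_PROMPTS).items():
        print("PROMPT:", question)
        print(essay)
        print("-" * 60)
```

Reading the resulting “essays” side by side with your prompts is often the quickest way to see which assignments are generic enough to be bot-written and which resist it.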
(2) As of this writing, GPT has only “limited knowledge of world and events” after 2021 (see the screenshot from the program’s home page above). For now, this opens an opportunity for instructors to require students, as part of their writing assignments, to integrate citation and discussion of current events or written sources published in 2022. For example, instead of merely asking students to “Explain Kant’s categorical imperative,” we could ask them to “Briefly explain Kant’s categorical imperative, and evaluate it in light of the case study in Smith 2022,” where “Smith 2022” refers to a recently published think-piece on the topic that you have asked students to read as part of the class or specifically for the assignment at hand. Moreover, GPT has trouble producing accurate citations and following a particular citation style, so specifying requirements along these lines will discourage the most egregious copying and pasting from the bots.
(3) Move toward writing assignments that not only ask students to show mastery of objective facts and to cite specific, recent sources (see 2 above), but also to integrate their own detailed personal experiences in light of the topic. To expand upon the example given above: “Briefly explain Kant’s categorical imperative, and evaluate it in light of the case study in Smith 2022. As part of this explanation and evaluation, discuss a situation in your own life where you have either failed or succeeded in acting according to the imperative, with some reflection on your success or failure in this respect.” Yes, GPT-3 can write fake first-hand experiences. But knowing our students (when that is possible) will help us discern genuine engagement, and requiring the experiential piece will push some students away from trotting out the bot’s experiences as their own.
(4) For many years, expert writing instructors have touted a “scaffolding” approach to writing: teaching students to write through a series of drafts, steps, and an editing process that mirrors the way real knowledge is created. Yes, the bots can write drafts of assignments. But requiring multiple writing steps, especially combined with face-to-face meetings with peers or the instructor en route to the final draft, will help create a better barrier against the most routine forms of cheating. This was good advice for teaching writing even before GPT-3!
(5) A probably endless cat-and-mouse game will emerge as developers try to build user-friendly technologies that can detect AI-generated text. Instructors may not be able to rely on such tools fully at this point, but once they are reliable, they could provide an obvious deterrent, much like existing plagiarism checkers (e.g., Turnitin, Grammarly, etc.). For now, it’s probably best to experiment a bit with the available options (see, e.g., “Streamlit”) and share what works with your colleagues. Again, as GPT technology improves, detection tools will likely be playing catch-up. But it is also possible that we may one day reach an equilibrium in which those who create tools for generating AI text tolerate a certain level of detectability, so long as the text still appears natural to human readers.