Simulating History with ChatGPT

While users can never know when an AI chatbot ‘confabulates’ – that is, generates a completion without any basis in fact – sometimes that’s actually a good thing, as history professor Benjamin Breen notes…

This is a trial run for a scenario I’m developing for a world history class I’ll be teaching this fall. I’m envisioning an assignment in which my students will simulate the experience of being sold flawed copper by Ea-nāṣir, a real-life shady copper merchant in Mesopotamia circa 1750 BCE (one who, in recent years, has unexpectedly become a meme online).

Crucially, this is not just about role-playing as an angry customer of Ea-nāṣir — or as the man himself, which is also an option. As illuminating as the simulations can be, the real benefit of the assignment is in what follows. First, students will print out and annotate the transcript of their simulation (which runs for twenty “turns,” or conversational beats) and carefully read through it with red pens to spot potential factual errors. They will then conduct their own research to correct those errors, write their findings up as bullet points, and feed them back into ChatGPT in a new, individualized and hopefully improved version of the prompt that they develop themselves. This doesn’t just teach them historical research and fact-checking — it also helps them develop skills for working directly with generative AI that I suspect will be valuable in future job markets.

It’s a glimpse of how AI chatbots could be used to help us learn – and understand. Read Breen’s entire post here.