RE Log - Fall 2023
technology services – and the author of this piece. The group met weekly in the spring semester, both to discuss the rapidly evolving state of AI in education and to work on a presentation to the faculty that we eventually gave during end-of-year meetings, a presentation that highlighted ChatGPT’s fearsome capabilities as well as its great potential as a “thinking partner” for both students and teachers.

In the wake of that presentation, RE’s response to AI is now entering its second phase: widespread, bottom-up, course-by-course change. Knowing that they cannot afford to be stagnant, and empowered by RE’s independence, teachers in every discipline are adapting their curricula to a new reality in which we can expect AI tools to be not just ubiquitous, but embedded into nearly every piece of software that students once used to produce work on their own.

The technology presents daunting challenges. But it’s also, in its own way, liberating. In forcing teachers to adapt, AI is also asking them to pinpoint what forms of thinking truly matter in their classes – and how to elicit those forms of thinking more purely, creatively and visibly. In some cases, teachers are going “super retro,” in Nero’s words: taking tech out of the equation and returning to “the classical model of the Socratic classroom.” But in other cases, they’re embracing AI in ways that wouldn’t have been possible in 2022.

Academic integrity and AI

Over the past few months, RE’s Chief Technology Officer Linda Lawrence has found herself thinking about “big, uncomfortable questions” of her own: “What is the world where these tools are ubiquitous?
How do we assist students in preparing for their future?”

When ChatGPT burst onto the world stage, the first salvos of public discourse among educators focused, understandably, on the question of “cheating.” How could we ensure academic integrity in a world where AI, always increasing in sophistication, could simply write papers or complete problems for our students? Software developers (and even OpenAI itself) released tools that could determine the likelihood that a piece of text was AI-authored. But such tools could only ever provide a probability – never definitive proof.

It didn’t take long for our task force to determine that dreaming up ever more elaborate ways to defend against ChatGPT, or to catch students using it, wasn’t a productive course of action. On the one hand, it would go against the culture of honor and accountability that we aspire to create in our classrooms, and that students create with us. On the other hand, it would be a perpetually losing battle. We realized, however, that we could gain ground against a significant threat raised by the increasing prevalence of AI by working to ensure that RE students continued to possess a sense of agency over their learning.

What happens to our sense of agency when we allow machines to do our thinking for us? In March, Matthew G. Kirschenbaum, a Professor of English and Digital Studies at the University of Maryland, College Park, asked that question on a grand scale in an essay for The Atlantic titled “Prepare for the Textpocalypse.” Kirschenbaum imagined a dark future in which we have allowed AI to take on so much of the world’s communication that the internet itself becomes a cesspool of AIs talking to other AIs, “flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.” As in the garbage-filled and sun-scorched Earth in the movie Wall-E, the world of knowledge becomes a barren wasteland.
Meanwhile, humans, having outsourced all thinking to machines long ago, become pure consumers of content, their intellectual muscles atrophying to the point that they can no longer think at all. Kirschenbaum’s vision of the AI-powered future is, admittedly,