Last week’s New York Magazine story “Everyone Is Cheating Their Way Through College” chronicled the myriad ways undergraduates are abusing ChatGPT. This past Tuesday, The New York Times shared its own shocking reveal about AI malfeasance in the classroom: It turns out that professors are abusing generative AI chatbots too!
The Times piece focused on a complaint made by a senior at Northeastern University. The student, whose moxie I totally respect, discovered that one of her instructors was using ChatGPT to supplement his course materials, which concerned her for two valid reasons. First, the professor's syllabus explicitly forbade "the unauthorized use of artificial intelligence or chatbots." Second, tuition is absurdly expensive; why should someone shell out $8,000 for a college class partly generated by a program that any nonscholar could access?
These revelations about professors reportedly behaving badly are emerging at the same time that faith in American higher education is sinking to its lowest point in decades. They also coincide with the Trump administration’s unprecedented attempts to punish ideologically noncompliant schools by withholding federal funds. Narratives about professors using AI to fashion their lectures or, distressingly, to grade students’ work, aren’t doing anything to boost our approval ratings.
Then again, it’s awfully easy to pounce on profs without understanding the intricacies — apparently, these AI programs love to use the word “intricacies,” as well as em dashes — of professorial labor today. So let’s delve (another AI favorite) into some of those intricacies, shall we? (I spared you the telltale invisible spaces in student essays that tip professors off that ChatGPT, not Suzie Sophomore, has pulled an all-nighter.)
The first intricacy to consider is that the American professoriat, as we know it, is under threat of extinction. The institution of tenure (in which scholars are guaranteed lifetime employment in return for proven accomplishments as researchers and, ideally, as teachers) has come undone. In 1976, 56% of professors nationally rode the tenure line. It’s now down to about 24% and sinking steadily. Naysayers like me — precisely the types of tweedy, leather-elbowed old heads who reflexively chafe at AI in the classroom — predict tenure will cease to exist at most schools within a few decades.
Tenure’s demise bears a causal relationship to what is referred to as “the casualization of academic labor.” The vast majority of professors in the United States have become harried, overworked cogs in a brainy gig economy. They are teaching larger and larger classes for lower and lower wages, with less and less job security and no academic freedom protections. It’s under these circumstances, and these circumstances alone, that I would suggest we extend some imaginative sympathy to academics; if they use AI to mark term papers and stack their slide decks, it’s only because their classes are crammed and their wages are meager.
Now that professors have been publicly accused of behaving badly, the question becomes: What will college administrators do about it? These would be the same administrators who were responsible for the aforementioned casualization of the scholarly labor force (not surprisingly, professors’ trust in our overlords is also at an all-time low).
These would also be the same administrators who display a downright schizophrenic approach to AI innovations. On the one hand, they swoon for AI’s “efficiency propositions”: these innovations will let them cut costs by firing lots of human beings whose skill sets are, allegedly, becoming digitally replaceable (librarians, grant writers, curriculum builders and the like). Then there are the “synergies.” Universities everywhere are entering into lucrative partnerships with AI companies and generally vibing with this awesome new redundancy-reducing technology.
On the other hand, institutions of higher education are wed to age-old academic integrity protocols. That would be the quaint idea that students (and professors) should learn how to think for themselves. Some might say that teaching young people how to do precisely that is the core democracy-enhancing function of higher education. Other than that, college is just frats and football.
This professor has never used generative AI chatbots (are they sold in pharmacies?). That’s because I’m bound to the conceit that if you profess in the humanities or softer social sciences you strive to develop students into critical, analytical and ultimately thoughtful souls. To get them there, they have to learn how to think. The thinking process — all that framing, failing, flailing, reflecting, lurching into dead ends — can’t be left to some faceless research sommelier named Claude.
Which is why my colleagues’ use of these programs in the classroom is so alarming. Nearly every scholar working today was born before these programs existed. We’ve had the privilege of developing functioning analytical brains, because our teachers forced us to do that. Why would we want to deprive our students of the ability to do the same by letting them outsource their thought processes to lines of dreary code?
By the same token, nearly every scholar working today was also born in a moment when it was unthinkable that a technology made by the few, enriching the few, could wipe out entire guilds within the span of years. So while I personally wish professors wouldn’t succumb to the cheap lure of AI, I can’t say I blame them — at least those who have been victimized by an economy that so devalues their unique skills, if not their humanity.