Alan Jacobs


academic bullshit


My estimable friend Dan Cohen:

> Maybe AI tools can help to combat their unethical counterparts? SciScore seeks to improve the reliability of scientific papers by analyzing their methods and sources, producing a set of reports for editors, peer reviewers, and other scientists who want to reproduce an experiment. Ripeta uses AI trained on over 30 million articles to identify “trust markers” within a paper’s dense text. Using new AI computer vision tools, Proofig takes aim at falsified images within academic work.
>
> But fighting AI with AI assumes a level of care and attention that are increasingly scarce resources in academia. As scholarly publishers will admit, peer reviewers are harder and harder to come by, as journals proliferate and there are greater pressures on the time of every professor. It’s more productive to crank out your own work than to correct the work of others. Professors who are concerned about their students using ChatGPT to create plausible-sounding essays might not look over their shoulders at their own colleagues using more sophisticated tools to do the same thing.
>
> If they — and we — fail to stem the tide of AI-generated academic work, that very work will come into question, and one of the last wells of careful writing, of deep thought, of debate supported by evidence, might be fatally poisoned.

All of Dan’s concerns here are legitimate and serious … but I also think there’s another side to this, at least potentially. I’ve written before about the ways that ChatGPT and the like are revealing the unimaginative, mechanical nature of so many assignments we college teachers create and administer. In that post I wrote, “If an AI can write it, and an AI can read it and respond to it, then does it need to be done at all?” Might we not ask the same question about our research, so much of which is produced simply because publish-or-perish demands it, not because of any value it has either to its authors or its readers (if it has any readers)?

Countless times in my career I have heard people talk about their need to publish research — to get tenure or promotion — in an AI-like pattern-matching mode: What sort of thing is getting published these days? What terms and concepts are predominantly featured? What previous scholarship is most often cited? And once they answer those questions, they generate the appropriate “content” and then fit it into one of the very few predetermined structures of academic writing. And isn’t all this a perfect illustration of what David Graeber called a bullshit job?

Yes, I’m worried about what AI will do to academic life — but I also see the possibility that it will force us to face the ways in which our work, as students, teachers, and researchers, has become mechanistic and dehumanizing. And if we can honestly acknowledge those conditions, then maybe we can do something better.