On Writing for Perplexity

In the current/future of AI writing, how do we avoid producing stochastic students or becoming language models ourselves? Bender and her coauthors argue for more critical engagement with language models and implicitly offer us paths forward. They note that language models work statistically, predicting next words without reference to meaning. Drawing on the fact that writing from language models tends to be predictable, AI writing detectors use perplexity to discriminate between AI and human writing. But a student, if we’re not careful, can also predict words without reference to meaning. If a student is taught and rewarded for commonplaces and stock genres, they will reproduce their training data: the boring commonplaces no teacher relishes reading and no writer learns from reproducing. Instead, we should teach and write for perplexity — not so much to avoid plagiarism detectors but to avoid the commonplaces that block critical thinking. We should all write for critical inquiry.
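
For readers unfamiliar with the measure, here is a minimal sketch of how perplexity scoring works, using a toy bigram model with invented probabilities. Nothing below comes from Vee's piece, and real detectors rely on large neural language models rather than a hand-written table, but the arithmetic is the same idea: the model assigns each next word a probability given the previous word, and perplexity is the exponentiated average negative log-probability, so commonplace sequences score low and surprising ones score high.

```python
import math

# Toy bigram model: invented probabilities for P(next word | previous word).
# Any pair the model has not seen falls back to a small constant.
bigram_probs = {
    ("the", "cat"): 0.10, ("cat", "sat"): 0.30, ("sat", "on"): 0.40,
    ("on", "the"): 0.50, ("the", "mat"): 0.05,
    ("the", "ontology"): 0.0005, ("ontology", "sat"): 0.0002,
}
FALLBACK = 0.0001  # probability for any next word the toy model does not expect

def perplexity(words):
    """Return exp of the average negative log P(word | previous word)."""
    pairs = zip(words, words[1:])
    log_probs = [math.log(bigram_probs.get(pair, FALLBACK)) for pair in pairs]
    return math.exp(-sum(log_probs) / len(log_probs))

commonplace = "the cat sat on the mat".split()
unexpected = "the ontology sat on the cat".split()

# The commonplace sentence scores low because every next word is predictable;
# the unexpected one scores high because the model is repeatedly "surprised."
print(round(perplexity(commonplace), 1))
print(round(perplexity(unexpected), 1))
```
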

Annette Vee, “Against Output,” In the Moment, 28 June 2023, https://critinq.wordpress.com/2023/06/28/against-output/. She is referring to Bender, Gebru, McMillan-Major, and Shmitchell’s 2021 paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”