On Truth Machines

Humans exist at an uneasy threshold. We have a dizzying ability to make meaning from the world, braid language into stories to construct understanding, and search for patterns that might reveal a larger, steadier truth. Yet we also recognize our mental efforts are often flawed, arbitrary, incomplete. Woven throughout the centuries is a burning obsession with accessing truth beyond human fallibility—a utopian dream of automated certainty... It might be said that ‘ChatGPT is bullshit,’ as the title of one recent paper, coauthored by philosopher of science Michael Townsen Hicks, asserts—citing the philosopher Harry Frankfurt’s definition of bullshit as ‘speech intended to persuade without regard for truth.’ Instead of an externally calculated, more pure truth, these machines are distilling and reflecting back to us the chaos of human beliefs and chatter. The hope remains that we might scale these models indefinitely to reach the point of general intelligence. That hope, however, is more extrapolation than certainty—the models may already be reaching a point of diminishing returns, requiring immense amounts of data for minimal improvements in performance. In fact, the LLMs of today are missing something even Llull and Leibniz believed was essential to their machines: reason.

Kelly Clancy, “The Perpetual Quest for a Truth Machine,” https://nautil.us/the-perpetual-quest-for-a-truth-machine-702659/. This is an interesting intellectual history tracing the connections from Llull’s thirteenth-century Ars Magna to Leibniz’s 1666 Dissertatio de Arte Combinatoria to George Boole’s 1854 Laws of Thought to Joseph Weizenbaum’s 1960s ELIZA and ChatGPT today.

timothywstanley@me.com

I am a Senior Lecturer in the School of Humanities, Creative Industries and Social Sciences at the University of Newcastle, Australia, where I teach and research topics in philosophy of religion and the history of ideas.

www.timothywstanley.com