
On Habermas Machines

Researchers from Google recently issued a paper describing what they call a ‘Habermas machine,’ an LLM meant to help ‘small groups find common ground while discussing divisive political issues’ by iterating ‘group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings.’ Participants in their study ‘consistently preferred’ the machine-generated statements to those produced by humans in the group, and this helped reduce the diversity of opinion within the group, which the researchers interpret as ‘AI … finding common ground among discussants with diverse views.’ So much for the ‘lifeworld’ and ‘intersubjective recognition.’ It appears that people are more likely to agree with a position when it appears that no one really holds it than to agree with a position articulated by another person.

Rob Horning, “Habermas Machines” - https://robhorning.substack.com/p/habermas-machines.

Interesting that Google engineers would think the ends justify the means in this case. It’s almost as if they asked an AI bot how to create a machine to achieve consensus, and it replied by gaming a solution to achieve that end. Horning rightly identifies the problem: consensus is achieved, but in a way that leaves people isolated in networks of systemic surveillance. “In other words, tech companies can posit a world where all political discourse occurs between isolated individuals and LLMs, and the data produced could be used to facilitate social control while everyone gets to feel heard. The automated production and summarization and summation of political opinion doesn’t help people engage in collective action; it produces an illusion of collective action for people increasingly isolated by media technology.”

In contrast, Habermas’s view of democratic decision-making inherently includes a process of mutual recognition. It’s a point even more crucial to Arendt’s view of human plurality in political spaces of appearance. Recognition, or Anerkennung in German, includes a notion of cognitive empathy whereby people learn to see each other’s perspectives. Intersubjective habits develop between people in and through deliberative practices. For instance, what’s been documented in jury forums is not simply that just decisions can be made; people also leave the experience with stronger ties to their fellow citizens. They come to believe that justice is possible through collaborative relationships. As I noted in my book Religion after Deliberative Democracy (p. 70), “one case study ‘discovered that each aspect of jury service has a different kind of impact on jurors, with the final jury deliberation not always providing the most important civic lesson.’ In a summative table, they outlined such positive impacts upon participation in voting, confidence in legal institutions, emotional connection to political action, local community groups, and political and civic faith” (Gastil, John, E. Pierre Deess, Philip Weiser, and Cindy Simmons. 2010. The Jury and Democracy: How Jury Deliberation Promotes Civic Engagement and Political Participation. Oxford: Oxford University Press, 174–75).

This is not to say that AI cannot become an aid to deliberative democratic practices. Rather, any measure of success for “Habermas Machines” must ensure that the means are more substantially included in the ends.




On Divine Discontent

The most fulfilled people I know tend to have two traits. They’re insatiably curious—about new ideas, experiences, information and people. And they seem to exist in a state of perpetual, self-inflicted unhappiness... But it’s this restless pursuit of greatness, even when they feel demoralized and inadequate, that shapes their lives and makes things interesting. So let’s not call it dissatisfaction. Let’s call it a divine discontent... To me, divine discontent is about cheerfully seeking out dissatisfaction. It’s choosing to ask, What could be better? What can I improve? It’s a feeling that practitioners across many fields—in literature, art, music, performance, film; but also the sciences, engineering, and mathematics—can relate to.

Celine Nguyen, “The Divine Discontent” - https://www.personalcanon.com/p/the-divine-discontent.


On The French Dispatch

There are times when writing deadlines loom, and the only hedge against other nagging to-dos is to put on a quiet movie seen several times before. The difficult task at hand is comforted by something repeating itself in the background. Repetition being impossible, the hermeneutic spiral kicks in and a scene inevitably jumps out (I’m thinking of Kierkegaard and Ricoeur at this point). Here’s one such example from Wes Anderson’s typically idiosyncratic The French Dispatch (2021). The movie revolves around a menagerie of dislocated journalists. It’s “set in an outpost of an American newspaper in a fictional twentieth century French city that brings to life a collection of stories published in ‘The French Dispatch Magazine’.” At one point near the end of the film, the managing editor, Arthur [Bill Murray], comments upon something missing in one of Roebuck’s [Jeffrey Wright] essays for the Tastes and Smells section about a chef named Nescaffier [Steve Park]. There’s an awkward tension in the air that will be familiar to anyone who has ever had a critic look over their carefully crafted work.

Arthur: Nescaffier only gets one line of dialogue.

Roebuck: Well, I did cut something he told me. It made me too sad. I could stick it back in, if you like.

Arthur: What did he say?
— The French Dispatch

The film cuts to Nescaffier, lying on a medical recovery bed, after having eaten a poisoned radish in a scheme to save the police chief’s son.

Nescaffier: They had a flavour.

Roebuck: I beg your pardon?

Nescaffier: The toxic salts in the radishes. They had a flavour. Something unfamiliar to me. Like a bitter, moldy, peppery, spicy, oily kind of earth. I never tasted that taste in my life. Not entirely pleasant, extremely poisonous, but still a new flavour. That’s a rare thing at my age.

Roebuck: I admire your bravery, lieutenant.

Nescaffier: I’m not brave. I just wasn’t in the mood to be a disappointment to everybody. I’m a foreigner, you know.

Roebuck: The city is full of us, isn’t it? I’m one myself.

Nescaffier: Seeking something missing. Missing something left behind.

Roebuck: Maybe with good luck we’ll find what eluded us in the places we once called home.
— The French Dispatch

The film returns to Arthur and Roebuck’s editorial tête-à-tête, which now seems to be intimating an underlying theme. The movie wanders through several quite different and equally eccentric stories from the magazine. At times, you’re left wondering whether there would have been any actual paying subscribers in an era when print was the primary medium of distribution. However, this scene impresses upon the viewer a profound feeling of nostalgia, the pain that arises when you miss something that can never return. It seemed to me that on this day, The French Dispatch was about an often unspoken feature of cosmopolitan life.

Arthur: That’s the best part of the whole thing. That’s the reason for it to be written.

Roebuck: I couldn’t agree less.

Arthur: Well, anyway, don’t cut it.
— The French Dispatch

On Minimal Cognition

Cognition isn’t reserved only to vertebrates with language, reason, or self-awareness. There are more primitive cognitive subsystems within us, around us, and all along the ladder of evolutionary time. Studying them is the purview of an emerging interdisciplinary field in biology: ‘minimal cognition’ or ‘basal cognition’... That is to say, in order to get useful answers, it helps to meet the slime mold where it’s at. Other organisms may help us to answer different questions. And if we align our questions with the inherent capabilities of the organisms we employ in our computational experiments, we can yoke together our interests, too. Maybe that’s why I’m so interested in minimal cognition. Not only because it opens up the definition of what a brain can be, but because it binds us to the world, drawing our brains into a broader phenomenon that touches life at every level. We’re nothing special. As Reid said to me, laughing, ‘in some ways, it’s information processing all the way down.’

Claire L. Evans, “What’s a Brain? On Bacterial, Cellular, and Other Minimal Minds” - https://clairelevans.substack.com/p/whats-a-brain. An interesting brief summary of minimal cognition. My view is increasingly similar, but if this is the case, it means there are hermeneutic and semiotic interests at stake here as well.


On the Indosphere

But Dalrymple does give full measure to the last and greatest achievement of the Indosphere: the spread of much of culture that we recognise as distinctively modern... No less crucial in the formation of ‘the West’ as we know it was the evolution of the university system from the early Buddhist monasteries in Northern India into the madrasas and thence into Oxford, Cambridge and the Sorbonne. The lineage of those secluded quads with their communities of dedicated scholars is clear. There was no greater example than the university of Nalanda in Bihar, with its endless courtyards and temples and its ten thousand monks and scholars. Dalrymple describes in alluring detail the three-thousand-mile pilgrimage in AD 629 of the Buddhist monk Xuanzang from the Chinese capital to visit this amazing place. No freshman from the sticks can ever have had his mind more thoroughly blown by the uni experience. This is perhaps the most brilliant example of the traffic running predominantly one way from China to India. It was India that was so often the destination and the hub.

Ferdinand Mount, “One-way Traffic” - https://www.lrb.co.uk/the-paper/v46/n17/ferdinand-mount/one-way-traffic. I uncovered this ancient connection between Hellenistic and South Asian cultures while writing a chapter on the early eighth-century thinker Shankara for my forthcoming book on Religion through the Eyes of Others. His birthplace in Kerala was part of the trade network Dalrymple outlines in The Golden Road. It is also thought that he adopted the Buddhist model of monastic training, noted here as the inspiration for later Islamic madrasas and European universities. It’s a forgotten legacy and one that disrupts any strict binary between East and West. Nonetheless, Shankara is a unique thinker who takes significant time and consideration to understand as just one voice amongst many in the Indosphere.


On Bayle’s Footnotes

‘Once the historian writes with footnotes, historical narrative becomes a distinctly modern’ practice, Grafton explains. History is no longer a matter of rumor, unsubstantiated opinion, or whim. ‘The text persuades, the note proves,’ he avers. Footnotes do double duty, for they also ‘persuade as well as prove’ and open up the work to a multitude of voices... Pierre Bayle’s enormously influential Historical and Critical Dictionary (1697) is the thing to cite here. The Dictionary ‘consisted in large part of footnotes (and even footnotes to footnotes).’ Within a few decades scholars emulating Bayle ‘were producing footnotes by the bushel—and satirists were making fun of them for doing so.’

Matthew Wills, “History’s Footnotes,” https://daily.jstor.org/historys-footnotes/. One of the key contrasts between the scientific and Enlightenment interest in evidence and recent web design and AI models is the lack of provenance in the latter, which I noted here. Bayle’s work is one of the key points at which the concept of critique enters the English language. He was a Huguenot refugee; ‘refugee’ is another French word that arrived around this time.


On Truth Machines

Humans exist at an uneasy threshold. We have a dizzying ability to make meaning from the world, braid language into stories to construct understanding, and search for patterns that might reveal larger, more steady truth. Yet we also recognize our mental efforts are often flawed, arbitrary, incomplete. Woven throughout the centuries is a burning obsession with accessing truth beyond human fallibility—a utopian dream of automated certainty... It might be said that ‘ChatGPT is bullshit,’ as the title of one recent paper, coauthored by philosopher of science Michael Townsen Hicks, asserts—citing the philosopher Harry Frankfurt’s definition of bullshit as ‘speech intended to persuade without regard for truth.’ Instead of an externally calculated, more pure truth, these machines are distilling and reflecting back to us the chaos of human beliefs and chatter. The hope remains that we might scale these models indefinitely to reach the point of general intelligence. That is, more extrapolation than certainty—the models may already be coming to a point of diminishing returns given they require an immense amount of data for minimal improvements in their performance. In fact, the LLMs of today are missing something even Llull and Leibniz believed was essential to their machines: reason.

Kelly Clancy, “The Perpetual Quest for a Truth Machine,” https://nautil.us/the-perpetual-quest-for-a-truth-machine-702659/. This is an interesting intellectual history tracing the connections from Llull’s thirteenth-century Ars Magna to Leibniz’s 1666 Dissertatio de Arte Combinatoria to George Boole’s 1854 Laws of Thought to Joseph Weizenbaum’s 1960s ELIZA and ChatGPT today.


On Language Equity

It’s clear from the research, however, that children do not have an adult-like understanding of artificial agents in general. Some evidence suggests that young children are especially likely to anthropomorphize robots and attribute mental states to them. Perceiving them to be human-like (thinking that the robot can see or can be tickled) in fact enhances learning—as does the agent’s responding to the child’s conversational moves in ways that a human might. This leads us into an ethical thicket. Children are likely to learn from an AI if they can form a bond of trust with it, but at the same time, they need to be protected from its unreliability and its lack of caring instincts. They may need to learn—perhaps through intensive AI-literacy training in schools—to treat a bot as if it were a helpful human, while retaining awareness that it is not, a mind-splitting feat that is hard enough for many adults, let alone preschoolers. This paradox suggests there’s no easy fix to the language equity problem in the child’s younger years.

Julie Sedivy, “When Kids Talk to Machines” - https://nautil.us/when-kids-talk-to-machines-655610/.


On Scientific Judgment

No one should doubt for a second that natural scientists take evidence from observation and experiment very, very seriously. But evidence, regardless of its form, cannot by itself determine what one ought to believe. Two individuals faced with all the same data can nevertheless rationally disagree with one another. This trivial point, easy to appreciate in the abstract, is for some reason treated as a scandal when applied to the domain of scientific inquiry. In the minds of many of my students, the difference between science and whatever the hell it is I do is that scientists can come to consensus because their individual use of scientific evidence guarantees that each one of them will arrive at the same logically unavoidable conclusion about nature. For them, human judgment is simply a contingency by which these logically unavoidable conclusions are reached. The hard truth, which it can take several semesters for them to come to terms with, is that scientists who agree on all the facts nevertheless routinely disagree about how those facts ought to be interpreted — and that, no matter how many more facts they acquire, rational disagreement will always be possible. Anyone who says otherwise is promoting an epistemological fantasy world that, while undeniably comforting, erects more hurdles than it clears when it comes to understanding the production of knowledge... Scientific knowledge is not what we come to believe about nature after making sure that we’ve subtracted the influence of human thought. It is a product of human thought.

Chris Haufe, “Do Humanists Know Anything,” https://www.chronicle.com/article/do-humanists-know-anything.


On AI after Avicenna

Similarly, if an artificial neural network were presented with the task of the sheep, it would not reason as the human does, from a general concept of wolf-ness to features of the particular wolf such as dangerousness. Instead, it would reason as the sheep does, constrained to the realm of particulars... Ibn Sina’s core criterion for personhood—reasoning from universals—closely resembles systematic compositional generalizability. This criterion could provide a potentially testable standard for personhood. In fact, so far, AI has failed this test in numerous studies. Whether or not one adopts it as a solution, Ibn Sina’s account provides a new lens on the problem of personhood that challenges the assumptions of consciousness-centered accounts. Scientific ethics is so often concerned with the cutting edge—the latest research, the newest technology, a constant influx of data. But sometimes the questions of the future require careful consideration of the past. Looking to history allows us to look beyond the preoccupations and assumptions of our time and may just provide refreshing approaches toward current stalemates.

Abigail Tulenko, “What Philosopher Ibn Sina Can Teach Us about AI” - https://www.scientificamerican.com/article/what-philosopher-ibn-sina-can-teach-us-about-ai.
