On Habermas Machines

Researchers from Google recently issued a paper describing what they call a ‘Habermas machine,’ an LLM meant to help ‘small groups find common ground while discussing divisive political issues’ by iterating ‘group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings.’ Participants in their study ‘consistently preferred’ the machine-generated statements to those produced by humans in the group, and the machine helped reduce the diversity of opinion within the group, which the researchers interpret as ‘AI … finding common ground among discussants with diverse views.’ So much for the ‘lifeworld’ and ‘intersubjective recognition.’ It appears that people are more likely to agree with a position when it seems that no one really holds it than with a position articulated by another person.

Rob Horning, “Habermas Machines” - https://robhorning.substack.com/p/habermas-machines.

Interesting that Google engineers would think the ends justify the means in this case. It’s almost as if they asked an AI bot how to create a machine to achieve consensus, and it replied by gaming a solution to that end. Horning rightly identifies the problem: consensus is achieved, but in a way that leaves people isolated in networks of systemic surveillance. “In other words, tech companies can posit a world where all political discourse occurs between isolated individuals and LLMs, and the data produced could be used to facilitate social control while everyone gets to feel heard. The automated production and summarization and summation of political opinion doesn’t help people engage in collective action; it produces an illusion of collective action for people increasingly isolated by media technology.”

In contrast, Habermas’s view of democratic decision-making inherently includes a process of mutual recognition. It’s a point even more crucial to Arendt’s view of human plurality in political spaces of appearance. Recognition, or Anerkennung in German, includes a notion of cognitive empathy whereby people learn to see each other’s perspectives. Intersubjective habits develop between people in and through deliberative practices. For instance, what has been documented in jury forums is not simply that just decisions can be made; people also leave the experience with stronger ties to their fellow citizens. They come to believe that justice is possible through collaborative relationships. As I noted in my book Religion after Deliberative Democracy (p. 70), “one case study ‘discovered that each aspect of jury service has a different kind of impact on jurors, with the final jury deliberation not always providing the most important civic lesson.’ In a summative table, they outlined such positive impacts upon participation in voting, confidence in legal institutions, emotional connection to political action, local community groups, and political and civic faith” (Gastil, John, E. Pierre Deess, Philip Weiser, and Cindy Simmons. 2010. The Jury and Democracy: How Jury Deliberation Promotes Civic Engagement and Political Participation. Oxford: Oxford University Press, 174–75).

This is not to say that AI cannot become an aid to deliberative democratic practices. Rather, the measure of success for “Habermas Machines” must ensure that the means are more substantially included in the ends.



timothywstanley@me.com

I am a Senior Lecturer in the School of Humanities, Creative Industries and Social Sciences at the University of Newcastle, Australia, where I teach and research topics in philosophy of religion and the history of ideas.

www.timothywstanley.com