On AI Mirrors

After all, having AIs that can beat us at chess is one thing—but now we have algorithms that write convincing prose, have engaging chats, make music that fools some into thinking it was made by humans. Sure, these systems can be rather limited and bland—but aren’t they encroaching ever more on tasks we might view as uniquely human? ‘That’s where the mirror metaphor becomes helpful,’ [Vallor] says. ‘A mirror image can dance. A good enough mirror can show you the aspects of yourself that are deeply human, but not the inner experience of them—just the performance.’ With AI art, she adds, ‘The important thing is to realize there’s nothing on the other side participating in this communication.’ What confuses us is we can feel emotions in response to an AI-generated ‘work of art.’ But this isn’t surprising because the machine is reflecting back permutations of the patterns that humans have made: Chopin-like music, Shakespeare-like prose. And the emotional response isn’t somehow encoded in the stimulus but is constructed in our own minds: Engagement with art is far less passive than we tend to imagine.

Philip Ball, “AI Is the Black Mirror” - https://nautil.us/ai-is-the-black-mirror-1169121/. An interesting interview and review of Shannon Vallor’s 2024 The AI Mirror. It echoes concerns raised by Joseph Weizenbaum’s 1976 Computer Power and Human Reason, which I noted earlier here. Weizenbaum likewise recognized the confusion between human reason and computer calculation. This latest interview is at times less hopeful than I remain about the degree to which we can build ethical capacity into computer science studies. “Vallor tells me she once tried to explain to an AGI leader that there’s no mathematical solution to the problem of justice. 'I told him the nature of justice is we have conflicting values and interests that cannot be made commensurable on a single scale, and that the work of human deliberation and negotiation and appeal is essential. And he told me, "I think that just means you’re bad at math." What do you say to that? It becomes two worldviews that don’t intersect. You’re speaking to two very different conceptions of reality.'” What we’ll be aiming to do in our new ethics of emerging technology course is to start with mathematical principles and build AI ethical engagement in response.

I am a Senior Lecturer in the School of Humanities, Creative Industries and Social Sciences at the University of Newcastle, Australia, where I teach and research topics in philosophy of religion and the history of ideas.

www.timothywstanley.com