On Language Equity

It’s clear from the research, however, that children do not have an adult-like understanding of artificial agents in general. Some evidence suggests that young children are especially likely to anthropomorphize robots and attribute mental states to them. Perceiving them to be human-like (thinking that the robot can see or can be tickled) in fact enhances learning—as does the agent’s responding to the child’s conversational moves in ways that a human might. This leads us into an ethical thicket. Children are likely to learn from an AI if they can form a bond of trust with it, but at the same time, they need to be protected from its unreliability and its lack of caring instincts. They may need to learn—perhaps through intensive AI-literacy training in schools—to treat a bot as if it were a helpful human, while retaining awareness that it is not, a mind-splitting feat that is hard enough for many adults, let alone preschoolers. This paradox suggests there’s no easy fix to the language equity problem in the child’s younger years.

Julie Sedivy, “When Kids Talk to Machines” - https://nautil.us/when-kids-talk-to-machines-655610/
