On Language-Centrism
If future AI systems are anything like current AI systems, they will not have neurons, but they will closely resemble us in terms of linguistic behaviour. Today, even as scientists approach the question of consciousness by examining neural correlates, we are wondering about nonbiological consciousness in AI systems. The question of AI consciousness sits uneasily next to the neurocentrism of current science. It may be that anthropocentrism drives opinions about what is conscious more than neurocentrism does: neurocentrism is a consequence of the anthropocentric reasoning that drives consciousness research, with mammalian-like nervous systems being identified as the key feature. If ChatGPT encourages researchers to move away from neurocentrism, we may end up back with the language-centrism that Griffin worked to undermine. That would not be productive science.

Kristin Andrews, “What It’s Like to Be a Crab,” - https://aeon.co/essays/are-we-ready-to-study-consciousness-in-crabs-and-the-like. Interesting summary of recent consciousness studies of humans and animals. Begins with the twenty-five-year bet about neural correlates lost by the neuroscientist Christof Koch to the philosopher David Chalmers. The broader issue concerns how consciousness studies should proceed and the degree to which anthropomorphic assumptions about neural complexity and language use should dominate. What goes unnoticed, it seems to me, is that simple organisms can have linguistic capacities. Biosemiotics and bio-deconstruction aim to lean into this aspect of biology. How should we understand a single-celled organism that ‘remembers’ being poked and avoids it in future? Does its interpretation of the stimulus amount to a system of signals and, therefore, a kind of writing? These questions are pursued by others, but I hope to explore them further in the coming years. In any case, the essay helpfully highlights how presumptions about linguistic or neural capacity inform scientific testing of consciousness in crabs and AI.

On Medieval Time
The thinking man’s timepiece was the astrolabe, first developed in Greece but significantly improved by Arab astronomers and mathematicians in the tenth and eleventh centuries. The instrument comprised a stack of concentric brass plates, carved with the celestial sphere. By rotating the top plate, simulating the motion of the heavens, it was possible to take readings that could reveal the positions of stars, the distances between astral bodies and the phase of the moon. It could also be used to tell the time of day at a certain latitude, based on the altitude of the sun and the calendar date... The scholastic philosopher Nicholas Oresme, at the end of the 14th century, was the first writer to imagine the universe as a vast mechanical clock, in which ‘all the wheels move as harmoniously as possible.’ But the metaphor could be turned inside out: earthly clocks were made by fallible humans. The writer of ‘Dives and Pauper’, a 15th-century devotional treatise, was keen to point out that the apparent neutrality of mechanical movement was a façade: ‘in citees & townes men rule them[selves] by the clock, and yet properly to speke the clock ruleth not them but a man ruleth the clock.’

Tom Johnson, “Take That, Astrolabe,” - https://www.lrb.co.uk/the-paper/v45/n20/tom-johnson/take-that-astrolabe. Interesting summary of timepieces and their implications for ways of thinking and being.
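
The time-telling use Johnson describes reduces to a small spherical-astronomy calculation: given the sun's measured altitude, the observer's latitude, and the solar declination implied by the calendar date, the hour angle from local noon follows from the standard relation sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H). A minimal sketch of what the brass plates mechanise, my own illustration rather than anything from the essay:

```python
import math

def solar_hour_angle(altitude_deg, latitude_deg, declination_deg):
    """Hour angle of the sun in degrees from local noon, from its measured
    altitude, the observer's latitude, and the solar declination for the date.
    Uses the standard relation sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H)."""
    alt, lat, dec = (math.radians(x) for x in (altitude_deg, latitude_deg, declination_deg))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp small rounding errors
    return math.degrees(math.acos(cos_h))

def local_solar_time(altitude_deg, latitude_deg, declination_deg, afternoon=False):
    """Approximate local solar time in hours: 15 degrees of hour angle per hour.
    Altitude alone cannot tell morning from afternoon, so the observer supplies
    that, just as an astrolabe user would."""
    offset = solar_hour_angle(altitude_deg, latitude_deg, declination_deg) / 15.0
    return 12.0 + offset if afternoon else 12.0 - offset

# Example: sun at 30 degrees altitude, latitude 48 N, near an equinox (declination ~0)
print(round(local_solar_time(30, 48, 0), 2))  # about 9.22, i.e. roughly 9:13 in the morning
```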

On Citizens' Assemblies
Around the world, democracies are struggling with angry populations who are fed up with politicians who don’t seem to represent them effectively. Fortunately, there’s an alternative. Hugh Pope—a veteran reporter on the Middle East who also spent 15 years working for International Crisis Group—introduces us to the growing movement for ‘citizens’ assemblies’, where ordinary people get together to decide what’s best for the community. He argues that these assemblies have already been used effectively on important issues that are difficult for politicians to tackle and reveals how the French president, Emmanuel Macron, came to find out about them.

“The Best Books on Citizens’ Assemblies Recommended by Hugh Pope,” - https://fivebooks.com/best-books/citizen-assemblies-hugh-pope/. A collection of works that resonate with my work on religion in deliberative democratic practices.

On Early Counting
Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity’s first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago... the ancient Mesopotamians must have been counting in base 60 on their fingers long before they, or, indeed, anyone else on the planet, could set out numbers in writing. The Mesopotamians’ unique counting method is thought to come from a mix of a duodecimal system that used the twelve finger joints of one hand and a quinary system that used the five fingers of the other. By pointing at one of the left hand’s twelve joints with one of the right hand’s five digits, or, perhaps, by counting to twelve with the thumb of one hand and recording multiples of twelve with the digits of the other, it is possible to represent any number from 1 to 60. However it worked, the Mesopotamians’ anatomical calculator was a thing of exceptional elegance, and the numbers they counted with it echo through history. It is no coincidence that a clock has twelve hours, an hour has sixty minutes, and a minute has sixty seconds.

Keith Houston, “The Early History of Counting,” - https://www.laphamsquarterly.org/roundtable/early-history-counting.
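
The reconstruction Houston describes is easy to make concrete: the twelve finger joints of one hand count 1 to 12, and the five digits of the other hand record completed twelves, so every value from 1 to 60 gets a distinct pair of hand positions. A small sketch of that encoding, my own illustration rather than anything from the essay:

```python
def finger_positions(n):
    """Encode n (1 to 60) as hand positions in the reconstructed Mesopotamian method:
    count 1 to 12 on the twelve finger joints of one hand, and record each completed
    twelve by raising one of the five digits of the other hand."""
    if not 1 <= n <= 60:
        raise ValueError("the two-hand method covers 1 to 60")
    completed_twelves, remainder = divmod(n - 1, 12)
    joint = remainder + 1              # which joint (1-12) is being pointed at
    return completed_twelves, joint    # 0-4 raised digits, plus the current joint

# Every value from 1 to 60 maps to a distinct pair of hand positions:
assert len({finger_positions(n) for n in range(1, 61)}) == 60
print(finger_positions(37))  # (3, 1): three twelves recorded, pointing at the first joint
```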

On AI Religion
AI is slowly becoming part of the religious sphere. In an era marked by rapid technological advancement, we are seeing everything from artificial intelligence to robots slowly seep into our everyday lives. But now, this technology is increasingly making inroads into a realm that has long been uniquely human: religion. From the creation of ChatGPT sermons to robots performing sacred Hindu rituals, the once-clearer boundaries between faith and technology are blurring... AI Jesus provides insight on both spiritual and personal questions users ask on his channel... A unique intersection of religion and robotic technology has emerged with the introduction of robots performing Hindu rituals in South Asia... In June 2023, hundreds of Lutherans gathered in Bavaria, Germany, for a service designed and delivered by ChatGPT.

“Navigating the Intersection between AI, Automation and Religion – 3 Essential Reads,” - https://theconversation.com/navigating-the-intersection-between-ai-automation-and-religion-3-essential-reads-211587. I’ll be teaching a new course on Virtual Religion next year that addresses some of these matters.

On AI Aliens
Humans and computers belong to separate and incommensurable realms... For Weizenbaum, we cannot humanise AI because AI is irreducibly non-human. What you can do, however, is not make computers do (or mean) too much. We should never ‘substitute a computer system for a human function that involves interpersonal respect, understanding and love,’ he wrote in Computer Power and Human Reason. Living well with computers would mean putting them in their proper place: as aides to calculation, never judgment. Weizenbaum never ruled out the possibility that intelligence could someday develop in a computer. But if it did, he told the writer Daniel Crevier in 1991, it would ‘be at least as different as the intelligence of a dolphin is to that of a human being.’ There is a possible future hiding here that is neither an echo chamber filled with racist parrots nor the Hollywood dystopia of Skynet. It is a future in which we form a relationship with AI as we would with another species: awkwardly, across great distances, but with the potential for some rewarding moments. Dolphins would make bad judges and terrible shrinks. But they might make for interesting friends.

Ben Tarnoff, “‘A Certain Danger Lurks There’: How the Inventor of the First Chatbot Turned against AI,” - https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai.

On Writing for Perplexity
In the current/future of AI writing, how do we avoid producing stochastic students or becoming language models ourselves? Bender and her coauthors argue for more critical engagement with language models and implicitly offer us paths forward. They note that language models work statistically, predicting next words without reference to meaning. Drawing on the fact that writing from language models tends to be predictable, AI writing detectors use perplexity to discriminate between AI and human writing. But a student, if we’re not careful, can also predict words without reference to meaning. If a student is taught and rewarded for commonplaces and stock genres, they will reproduce their training data: the boring commonplaces no teacher relishes reading and no writer learns from reproducing. Instead, we should teach and write for perplexity — not so much to avoid plagiarism detectors but to avoid the commonplaces that block critical thinking. We should all write for critical inquiry.

Annette Vee, “Against Output,” In the Moment - https://critinq.wordpress.com/2023/06/28/against-output/. She is referring to Bender, Gebru, McMillan-Major and Shmitchell’s recent paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
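
Perplexity, as the detectors Vee mentions use it, has a concrete definition: the exponential of the average negative log-probability a model assigns to each token of a text, so predictable prose scores low and surprising prose scores high. A minimal sketch, with made-up per-token log-probabilities standing in for whatever model a detector would actually use:

```python
import math

def perplexity(token_logprobs):
    """Perplexity: exp of the average negative log-probability per token.
    Text a model finds predictable scores low; text it finds surprising scores high."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Made-up per-token log-probabilities under some language model (illustrative only):
stock_essay      = [-0.3, -0.5, -0.2, -0.4, -0.3]  # commonplace phrasing the model expects
surprising_essay = [-2.1, -3.0, -1.8, -2.6, -2.4]  # phrasing the model did not predict

print(round(perplexity(stock_essay), 1))       # 1.4: low perplexity, flagged as machine-like
print(round(perplexity(surprising_essay), 1))  # 10.8: high perplexity, less predictable
```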

On Neural Correlates of Consciousness
Two friends — David Chalmers, a philosopher, and Christof Koch, a neuroscientist — took the stage to recall an old bet. In June 1998, they had gone to a conference in Bremen, Germany, and ended up talking late one night at a local bar about the nature of consciousness. For years, Dr. Koch had collaborated with Francis Crick, a biologist who shared a Nobel Prize for uncovering the structure of DNA, on a quest for what they called the ‘neural correlate of consciousness.’ They believed that every conscious experience we have — gazing at a painting, for example — is associated with the activity of certain neurons essential for the awareness that comes with it. Dr. Chalmers liked the concept, but he was skeptical that they could find such a neural marker any time soon. Scientists still had too much to learn about consciousness and the brain, he figured, before they could have a reasonable hope of finding it... But the 25-year bet, at least, has been resolved: No one has found a clear neural correlate of consciousness. Dr. Koch ended the evening by carrying to the stage a wooden box full of wine. He pulled out a 1978 bottle of Madeira and gave it to Dr. Chalmers.

Carl Zimmer, “2 Leading Theories of Consciousness Square Off,” - https://www.nytimes.com/2023/07/01/science/consciousness-theories.html.

On Extending Collective Intelligence
Extending the collective intelligence of others was a practical solution, not an idealistic one. Atherton’s group observed that, because the new computer terminal locations at Syracuse were ‘remote from a reference librarian or any other human specialists in the user’s interest area’, they would need an additional source of help, which could be found in ‘the human intelligence of all other users of the system’... Atherton’s group saw that we would lose expert intermediaries; they designed for this cost. In 2022 and 2023, as the first generative AI search engines, including academic search engines such as Elicit and Consensus, were introduced to a wide set of users to both great excitement and scepticism, it is similarly useful to analyse what will be lost when researchers come to rely on these tools.

Monica Westin, “Ingenious Librarians,” - https://aeon.co/essays/the-1970s-librarians-who-revolutionised-the-challenge-of-search. Extended mind theory has much to say about the theoretical modeling of our reliance upon others in search technologies.

On the Unwritten World
The comparison between the world and a book has had a long history starting in the Middle Ages and the Renaissance. What language is the book of the world written in? According to Galileo, it’s the language of mathematics and geometry, a language of absolute exactitude. Can we read the world of today in this way? Maybe, if we’re talking about the extremely distant: galaxies, quasars, supernovas. But as for our daily world, it seems to us written, rather, as in a mosaic of languages, like a wall covered with graffiti, writings traced one on top of the other, a palimpsest whose parchment has been scratched and rewritten many times, a collage by Schwitters, a layering of alphabets, of diverse citations, of slang terms, of flickering characters like those which appear on a computer screen... In a certain sense, I believe that we always write about something we don’t know: we write to make it possible for the unwritten world to express itself through us. At the moment my attention shifts from the regular order of the written lines and follows the mobile complexity that no sentence can contain or use up, I feel close to understanding that from the other side of the words, from the silent side, something is trying to emerge, to signify through language, like tapping on a prison wall.

Italo Calvino, “The Written World and the Unwritten World,” - https://www.theparisreview.org/blog/2023/01/05/the-written-world-and-the-unwritten-world/. An interesting comment from a lecture Calvino gave in 1983. It evinces a kind of mysticism about the unwritten world. One wonders what Calvino might have made of the more recent biodeconstruction of Francesco Vitale. As Calvino noted in this same lecture, “I started from the irreconcilable difference between the written world and the unwritten world; if their two languages merge, my argument crumbles.”