
Human way

On brains and minds – Melanie Mitchell in TLS:

‘In November 2022, OpenAI released ChatGPT, a large language model (LLM) that can converse in flawless natural language, answer questions, solve maths problems and logic puzzles, write essays and poetry, and explain complex ideas. Similar models have since been released by other AI companies. Shortly after ChatGPT’s launch, the neuroscientist Terrence Sejnowski wrote: “Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way … Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?”’

(…)

‘All this started to change around 2010, with what is called the “deep learning revolution”. Deep learning refers to a machine learning approach in which large amounts of data are used to train “deep” neural networks. Neural networks are AI systems with structures loosely inspired by biological brains: simulated “neurons” are linked to one another via weighted connections and are arranged in hierarchical layers. The more layers, the “deeper” the network. Typically, the network is trained to map an input (eg a word or an image) to a “correct” output (eg the sound of the word or the name of an object in the image). Such training usually requires a large dataset of examples, which are used to tune the network’s weighted connections to values that will produce correct outputs.’
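A minimal sketch of what that training loop looks like in practice (my own illustration, not something from the review): a tiny two-layer network whose weighted connections are tuned, example by example, until it maps inputs to the correct outputs. Here it learns the XOR function with plain gradient descent.

```python
# A toy "deep" network: simulated neurons in layers, linked by weighted
# connections, trained on examples of correct input/output pairs.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and their "correct" outputs (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weighted connections; "deeper" networks simply add more layers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: how should each weight change to reduce the error?
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Tune the weighted connections toward values that produce correct outputs.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]] after training
```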

(…)

‘In These Strange New Minds: How AI learned to talk and what it means, the cognitive neuroscientist Chris Summerfield surveys the philosophical landscape, social impacts and potential dangers of LLMs. In the first parts of the book, he describes the intellectual history of neural networks and provides readers with an intuitive account of what LLMs are and how they are trained.
Summerfield explains how the language abilities of LLMs serve as strong evidence for a linguistic theory called “distributional semantics”, which proposes that the meaning of language can be derived from the statistics of how words occur together in text. For example, we understand the meaning of the word “bill” from its context: if “bird” and “beak” are mentioned nearby, “bill” probably has one meaning; if “plumber” and “cost” are mentioned nearby, it probably has another meaning. Distributional semantics posits that words and phrases can be mapped to a high-dimensional “semantic space”; the position of a word or phrase in that space, and its distance to other words or phrases, are what define its meaning.’
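To make the distributional idea concrete, here is a toy sketch of my own (not from the book or the review): made-up three-dimensional vectors standing in for a “semantic space”, with cosine similarity as the distance that separates the two senses of “bill”. Real embeddings are learned from co-occurrence statistics over large corpora and have hundreds or thousands of dimensions.

```python
# A word's "meaning" as its position in a vector space: nearby words have
# related meanings. The vectors below are invented purely for illustration.
import math

vectors = {
    "beak":         [0.9, 0.1, 0.0],  # bird-related dimension high
    "plumber":      [0.0, 0.2, 0.9],  # trade/money-related dimension high
    "bill_duck":    [0.8, 0.2, 0.1],  # "bill" as in a bird's beak
    "bill_invoice": [0.1, 0.3, 0.8],  # "bill" as in an invoice
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The duck sense of "bill" sits near "beak"; the invoice sense sits near "plumber".
print(round(cosine(vectors["bill_duck"], vectors["beak"]), 2))        # high (~0.98)
print(round(cosine(vectors["bill_invoice"], vectors["plumber"]), 2))  # high (~0.98)
print(round(cosine(vectors["bill_duck"], vectors["plumber"]), 2))     # low  (~0.17)
```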

(…)

‘We should subject LLMs to the so-called Duck Test: if something swims like a duck and quacks like a duck, then we should assume that it probably is a duck, rather than inventing abstruse arguments to otherwise explain its behaviour.
The problem with this, as the author acknowledges, is that humans have a strong, sometimes misleading tendency to project mental qualities onto anything that communicates with us in fluent natural language. This has been dubbed the “Eliza effect”, named after the 1960s chatbot Eliza, which imitated a psychotherapist. Even though Eliza had zero intelligence or understanding – it used templates such as “Tell me more about X”, where X was something a user just mentioned – people who chatted with it often believed that it understood them deeply. While today’s LLMs are far more sophisticated language users than Eliza, to what extent are we humans similarly falling for an imitation of understanding and intelligence rather than recognizing the “real thing”, especially when we don’t have consensus about what “the real thing” is?’
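For a sense of how little machinery produced that effect, here is a rough sketch of the Eliza-style trick (my reconstruction of the general technique, not Weizenbaum’s actual program): a handful of canned templates with a slot filled by whatever the user just said, and no understanding anywhere underneath.

```python
# Eliza in miniature: pattern-match the user's sentence, echo part of it back
# inside a template. The rules below are illustrative, not the 1960s originals.
import re

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(?:about|because) (.+)", re.I), "Tell me more about {0}."),
]

def eliza_reply(user_input: str) -> str:
    text = user_input.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # X is simply whatever the user just mentioned.
            return template.format(match.group(1))
    return "Please, go on."  # fallback when no template matches

print(eliza_reply("I am worried about my exams"))
# -> "How long have you been worried about my exams?"  (no understanding involved)
```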

(…)

‘As the philosopher Shannon Vallor put it, “they can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside”.’

(…)

‘How can we prevent a self-improving AI system from developing a self-preservation instinct, in service of which the system will inevitably try to amass as much power and as many resources as possible, dispatching humans that get in its way and manipulating other humans to do its bidding? Such scenarios are the focus for a community of “AI safety” researchers concerned about “existential risk”. On the other side of the coin are the “effective accelerationists” – extreme techno-optimists who believe that the benefits of AI will be so great that society should amplify its development and avoid slowing down progress with, say, government regulations. OpenAI’s Sam Altman, for example, wrote that AI investments will eventually lead to a utopia in which “astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace”.’

(…)

‘As this book illustrates, in the age of AI, defining the concept of mind will matter enormously.’

Read the article here.

Utopia or dystopia? I guess neither.

AI is a tool that is dark inside. One can encounter human beings who seem to be dark inside as well.

They might need an upgrade. Maybe a better large language model.
