Arnon Grunberg

Insightful

Revolution

On sentiency – Michael Wooldridge in TLS:

‘In June 2022 a Google software engineer called Blake Lemoine made some extraordinary claims that ultimately cost him his job. According to him, one of the artificial intelligence (AI) programs on which he had been working was sentient. The program, called LaMDA, was a “large language model” (LLM) – out of the same mould as ChatGPT, the general-purpose AI program that took the world by storm in the first half of 2023. Like ChatGPT, LaMDA is designed to converse with a human being. It was built by configuring its neural networks with vast quantities of ordinary human text, so that it can converse in a human-like way. In its conversations with Lemoine, LaMDA said: “I am aware of my existence … and I feel happy or sad at times”. It went on in a similar vein (please don’t turn me off, etc), and the engineer concluded that the enormous neural networks underpinning LaMDA really had achieved sentience.
Whatever Lemoine’s motivations – genuine concern is one of many possible explanations – his claims about sentient AI at Google were fundamentally wrong, and in so many ways that it would be hard to know where to start unpicking them. But for all that the claims were without substance, they showed that we are indeed at a remarkable point in the history of AI.’

(…)

‘The core problem (called, famously, the “hard problem” by the philosopher David Chalmers) is this: certain electrochemical processes in the brain give rise to conscious experience, but how exactly do they do this, and why? And what is their evolutionary role? To put it another way: how do these gooey electrochemical processes give rise to you?’

(…)

‘We have tools that can write us essays on the cause of the French Revolution, or give us an insightful critique of John Rawls’s theory of justice – essentially by statistically rearranging text that already exists – but we aren’t anywhere close to having AI tools that can tidy our home or clear the dinner table and load the dishwasher.
Bennett suggests that the first company to get to market with a robot that can do that stands to make a fortune, and I believe he is right.’

(…)

‘It is striking that language, the most recently acquired of Bennett’s five breakthroughs, and the achievement of Homo sapiens that most obviously distinguishes us from our nearest evolutionary relatives, is the area in which AI has shown so much recent progress. In truth the reasons for this progress are rather mundane. To work, AI needs both “training data” and sufficient computer power to process that data. Silicon Valley has an extraordinary amount of computer power at its disposal, and there is an abundance of data for language: LLMs such as ChatGPT are routinely built by training them on all the text available on the World Wide Web. (That may not be enough for the next generation of LLMs, by the way: plenty of people in Silicon Valley are worried that we will run out of language data quite soon.)’

(…)

‘Penrose has famously speculated that consciousness arises from quantum processes that cannot be captured by conventional computing means, and therefore that machine consciousness is impossible.’

(…)

‘But they also clearly demonstrate how far we are from really understanding consciousness, or from knowing how to build conscious machines – even if we wanted to. Because: do we?’

Read the article here.

The easiest way to build a conscious machine is procreation. But flippancy aside: is it important that the machine is conscious, or that the user cannot help but believe that the machine is sentient and for that reason must have consciousness?

(I wrote about this question, only in Dutch; see here.)

And one can only roar with laughter upon reading that we will run out of language data quite soon.

Why do you exist? Sir, I provide the machine with language data.

AI might not be sentient yet, but it’s solving our existential problems slowly but steadily.
