Arnon Grunberg

Descartes

War

On intelligence - Niall Ferguson in TLS:

‘That doctrine was never put to the test. Indeed, even Kissinger later repudiated parts of his own argument. Yet, if one looks back on the way NATO strategy evolved in the three decades after 1957, limited nuclear war was at its heart. What else were all those short-range and intermediate-range nuclear missiles for? Had war broken out with the Soviet Union in Europe, at least on the Western side there would have been an attempt to fight it without the intercontinental ballistic missiles whose launch would have heralded Armageddon.
Sixty-four years have passed since the publication of Nuclear Weapons and Foreign Policy. Yet at ninety-eight, Henry Kissinger has not lost his knack for identifying doctrinal deficits in US national security strategy. “The age of AI”, he and his coauthors write, “has yet to define its organizing principles, its moral concepts, or its sense of aspirations and limitations … The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity.”’

(…)

‘As Kissinger, Schmidt and Huttenlocher point out, we are already growing reliant on AI. “As the individual interacts with the AI”, they write, “and as the AI adapts to the individual’s preferences … a kind of tacit partnership begins to form. The individual comes to rely on such platforms to perform a combination of functions … of postal service, department store, concierge, confessor, and friend.” But these are only the early days of this partnership and it is not hard to imagine some of the other functions AI may soon be performing for us. A generative language artificial intelligence program such as GPT-3 has one of the largest quasi-neural networks built today, but the Chinese Academy of Sciences recently announced an even larger generative language model with ten times as many weights – still “10⁴ times fewer than estimates of the human brain’s synapses”, but catching up.
The authors identify three problems attendant on the rise of AI. The first is a general philosophical one: “When a human-designed software program … learns and applies a model that no human recognizes or could understand, are we advancing towards knowledge? Or is knowledge receding from us?” In an earlier essay on the subject, Kissinger postulated a reversal of Max Weber’s “demystification of the world”. Here he foresees a retreat from the self-confident reasoning of the Enlightenment back to the “prognostications of the Gnostic philosophers, of an inner reality beyond human experience”.’

(…)

‘What the authors do not quite make clear is the broader danger – most obvious in China, but quite possible elsewhere – that AI-enabled network platforms will be able to monitor the behaviour of everyone, not only silencing anyone detected using “hate speech”, but also penalizing malefactors by curtailing their civil and economic rights. What we are now expected to call the “Metaverse” has the potential to evolve into the automated Panopticon of Yevgeny Zamyatin’s dystopian science-fiction masterpiece We (1921).
However, the biggest danger in the eyes of Kissinger, Schmidt and Huttenlocher is military. The US Air Force’s ARTU programme can already fly aircraft and operate radar systems – and is capable of making “final calls” without human override. “If such machines are authorized to engage in autonomous targeting decisions”, the authors warn, “traditional conceptions of defense and deterrence … may deteriorate.” Journalists writing about AI are for ever looking for the “Sputnik moment” of our time, as revealing of a new technological epoch as the launch of Sputnik was in 1957. One candidate is December 5, 2017, when Google’s DeepMind announced that, after just hours of training itself, their AlphaZero program had defeated Stockfish 8, until then the world’s most powerful chess program. “The tactics AlphaZero deployed were unorthodox”, the authors note, in a sentence that chills the blood. “It sacrificed pieces human players consider vital, including its queen.” That raises a crucial question: “What if AI recommended that a commander-in-chief sacrifice a significant number of citizens or their interests in order to save, according to AI’s calculation and valuation, an even greater number?”’

(…)

‘The idea of an AI program waging war, rather than playing chess, with the same ruthlessness and speed is deeply frightening. No doubt DeepMind is already working on AlphaHero. One imagines with a shudder the programme sacrificing entire armies or armadas as readily as its chess-playing predecessor sacrificed its queen. No doubt the reader should feel reassured that the United States has committed itself to develop only “AI-enabled weapons”, as opposed to “AI weapons … that make lethal decisions autonomously from human operators”. “Created by humans, AI should be overseen by humans”, the authors declare. But why should America’s undemocratic adversaries exercise the same restraint? Inhuman intelligence sounds like the natural ally of regimes that are openly contemptuous of human rights.’

(…)

‘In the same way, The Age of AI does not fully clarify how the United States should prepare for the war of the future. What it does – and does brilliantly – is illuminate the new problem we have created for ourselves. The world is changed by the development of AI as profoundly as it was changed by the advent of the atomic bomb or the satellite. Perhaps just renaming AI as “II” – inhuman intelligence – would be a simple way to raise awareness of the perils of this new age.’

Read the article here.

The 20th century (and that century was not really an exception) showed us the inhuman intelligence of humans: the willingness, in the end, to sacrifice even one's own people, as A.H. envisioned. Some of the people around him were a bit more interested in their own survival than he was, for obvious reasons; he knew very well that there was no way out.

Perhaps inhuman intelligence is more brutal than human intelligence, perhaps it would be more willing to start a nuclear war, but the reasons for believing this are not made very clear in this essay.

Why would human intelligence be more trustworthy than inhuman intelligence? Nevertheless, it is undoubtedly true that our world will change because of inhuman intelligence.

discuss on facebook