Arnon Grunberg

Hidden System

On AI, people and names – Jaron Lanier in The New Yorker:

‘The term “artificial intelligence” has a long history—it was coined in the nineteen-fifties, in the early days of computers. More recently, computer scientists have grown up on movies like “The Terminator” and “The Matrix,” and on characters like Commander Data, from “Star Trek: The Next Generation.” These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.’

(…)

‘A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.’

(…)

‘These efforts are well intentioned, but they seem hopeless to me. For years, I worked on the E.U.’s privacy policies, and I came to realize that we don’t know what privacy is. It’s a term we use every day, and it can make sense in context, but we can’t nail it down well enough to generalize. The closest we have come to a definition of privacy is probably “the right to be left alone,” but that seems quaint in an age when we are constantly dependent on digital services. In the context of A.I., “the right to not be manipulated by computation” seems almost correct, but doesn’t quite say everything we’d like it to.
A.I.-policy conversations are dominated by terms like “alignment” (is what an A.I. “wants” aligned with what humans want?), “safety” (can we foresee guardrails that will foil a bad A.I.?), and “fairness” (can we forestall all the ways a program might treat certain people with disfavor?). The community has certainly accomplished much good by pursuing these ideas, but that hasn’t quelled our fears. We end up motivating people to try to circumvent the vague protections we set up. Even though the protections do help, the whole thing becomes a game—like trying to outwit a sneaky genie. The result is that the A.I.-research community communicates the warning that their creations might still kill all of humanity soon, while proposing ever more urgent, but turgid, deliberative processes.’

(…)

‘This concept, which I’ve contributed to developing, is usually called “data dignity.” It appeared, long before the rise of big-model “A.I.,” as an alternative to the familiar arrangement in which people give their data for free in exchange for free services, such as internet searches or social networking. Data dignity is sometimes known as “data as labor” or “plurality research.” The familiar arrangement has turned out to have a dark side: because of “network effects,” a few platforms take over, eliminating smaller players, like local newspapers. Worse, since the immediate online experience is supposed to be free, the only remaining business is the hawking of influence. Users experience what seems to be a communitarian paradise, but they are targeted by stealthy and addictive algorithms that make people vain, irritable, and paranoid.’

(…)

‘Many people in Silicon Valley see universal basic income as a solution to potential economic problems created by A.I. But U.B.I. amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence. This is a scary idea, I think, in part because bad actors will want to seize the centers of power in a universal welfare system, as in every communist experiment. I doubt that data dignity could ever grow enough to sustain all of society, but I doubt that any social or economic principle will ever be complete. Whenever possible, the goal should be to at least establish a new creative class instead of a new dependent class.’

(…)

‘Let’s consider the apocalyptic scenario in which A.I. drives our society off the rails. One way this could happen is through deepfakes. Suppose that an evil person, perhaps working in an opposing government on a war footing, decides to stoke mass panic by sending all of us convincing videos of our loved ones being tortured or abducted from our homes. (The data necessary to create such videos are, in many cases, easy to obtain through social media or other channels.) Chaos would ensue, even if it soon became clear that the videos were faked. How could we prevent such a scenario? The answer is obvious: digital information must have context. Any collection of bits needs a history. When you lose context, you lose control.
Why don’t bits come attached to the stories of their origins? There are many reasons. The original design of the Web didn’t keep track of where bits came from, likely to make it easier for the network to grow quickly. (Computers and bandwidth were poor in the beginning.) Why didn’t we start remembering where bits came from when it became more feasible to at least approximate digital provenance? It always felt to me that we wanted the Web to be more mysterious than it needed to be. Whatever the reason, the Web was made to remember everything while forgetting its context.’

(…)

‘The technical challenges of data dignity are real and must inspire serious scientific ambition. The policy challenges would also be substantial—a sign, perhaps, that they are meaningful and concrete. But we need to change the way we think, and to embrace the hard work of renovation. By persisting with the ideas of the past—among them, a fascination with the possibility of an A.I. that lives independently of the people who contribute to it—we risk using our new technologies in ways that make the world worse. If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.
This is my plea to all my colleagues. Think of people. People are the answer to the problems of bits.’

Read the article here.

We don’t know what privacy is, and we don’t know exactly what AI is. Fair enough.

One of the better movies about AI is ‘Ex Machina’ (2014) by Alex Garland. Desire feeds fear, and vice versa. The whole attraction of AI is that we can, and want to, create a human – see Frankenstein – without our bodies and our bodily fluids. AI is mankind’s desire to be God, or to be like God.

It’s not necessary to know anything about the technology itself; all you need to know is something about humans. Think of people, yes.
