Arnon Grunberg

Falsehoods

Background

On setbacks - Tiffany Hsu in NYT:

‘Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments.
Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true.
While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.’

(…)

‘Artificial intelligence’s struggles with accuracy are now well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions that disrupted a court case, a pseudo-historical image of a 20-foot-tall monster standing next to two humans, even sham scientific papers. In its first public demonstration, Google’s Bard chatbot flubbed a question about the James Webb Space Telescope.
The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.’

(…)

‘Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist. She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world, such as Iran.’

(…)

‘Legal precedent involving artificial intelligence is slim to nonexistent. The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.
An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.
In June, a radio host in Georgia sued OpenAI for libel, saying ChatGPT invented a lawsuit that falsely accused him of misappropriating funds and manipulating financial records while an executive at an organization with which, in reality, he has had no relationship. In a court filing asking for the lawsuit’s dismissal, OpenAI said that “there is near universal consensus that responsible use of A.I. includes fact-checking prompted outputs before using or sharing them.” OpenAI declined to comment on specific cases.’

(…)

‘The technology’s reliance on statistical pattern prediction also means that most chatbots join words and phrases that they recognize from training data as often being correlated. That is likely how ChatGPT awarded Ellie Pavlick, an assistant professor of computer science at Brown University, a number of awards in her field that she did not win.
“What allows it to appear so intelligent is that it can make connections that aren’t explicitly written down,” she said. “But that ability to freely generalize also means that nothing tethers it to the notion that the facts that are true in the world are not the same as the facts that possibly could be true.” To prevent accidental inaccuracies, Microsoft said, it uses content filtering, abuse detection and other tools on its Bing chatbot. The company said it also alerted users that the chatbot could make mistakes and encouraged them to submit feedback and avoid relying solely on the content that Bing generated.’
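That passage describes next-word prediction only in general terms; a toy sketch may make it concrete. What follows is a minimal bigram model in Python, nothing resembling BlenderBot's or ChatGPT's actual architecture, and the tiny training text and the name it mangles are invented for illustration:

# A toy bigram model: picks each next word by how often it followed
# the previous word in the training text. A deliberately crude
# illustration of "statistical pattern prediction"; real chatbots
# use vastly larger neural models, not lookup tables.
import random
from collections import defaultdict

# Hypothetical miniature corpus, invented for this sketch.
training_text = (
    "schaake is a politician . schaake is a policy director . "
    "according to some governments x is a terrorist ."
)

# Count which words follow which.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sampling from the raw list weights choices by observed frequency.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("schaake"))
# One possible output: "schaake is a terrorist ." -- the words
# co-occur often enough, even though the claim is false.

The point of the sketch is the article's point: the model is tethered to correlations between words, not to whether the sentence it assembles is true.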

(…)

‘Artificial intelligence can also be purposefully abused to attack real people. Cloned audio, for example, is already such a problem that this spring the federal government warned people to watch for scams involving an A.I.-generated voice mimicking a family member in distress.
The limited protection is especially upsetting for the subjects of nonconsensual deepfake pornography, where A.I. is used to insert a person’s likeness into a sexual situation. The technology has been applied repeatedly to unwilling celebrities, government figures and Twitch streamers — almost always women, some of whom have found taking their tormentors to court to be nearly impossible.’

(…)

‘“Part of the challenge is that a lot of these systems, like ChatGPT and LLaMA, are being promoted as good sources of information,” Dr. Cambo said. “But the underlying technology was not designed to be that.”’

Read the article here.

If these systems and their underlying technology were not designed to be sources of information, what were they designed for?

Recently, several news outlets ran articles claiming that ChatGPT is getting dumber.

More people are needed to train AI, that's clear. I still think that AI might end in the Second Coming, i.e. that the true Messiah is artificial intelligence; in the meantime, AI is not making humans superfluous, it's making humans needed.
