Arnon Grunberg

Language

Unhappy

On all kinds of monsters – Dhruv Khullar in The New Yorker:

‘In the nineteen-sixties, Joseph Weizenbaum, a computer scientist at M.I.T., created a computer program called Eliza. It was designed to simulate Rogerian therapy, in which the patient directs the conversation and the therapist often repeats her language back to her:

User: My boyfriend made me come here.
Eliza: Your boyfriend made you come here.
User: He says I’m depressed much of the time.
Eliza: I am sorry to hear you are depressed.
User: It’s true. I am unhappy.
Eliza: Do you think coming here will help you not to be unhappy?

Weizenbaum made Eliza as satire. He doubted that computers could simulate meaningful human interaction. He was alarmed, therefore, when many people who tried the program found it both useful and captivating. His own secretary asked him to leave the room so that she could spend time alone with Eliza. Worse, doctors saw it as a potentially transformative tool. “Several hundred patients an hour could be handled by a computer system designed for this purpose,” three psychiatrists wrote in The Journal of Nervous and Mental Disease, in 1966. “The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.”’
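
The mechanism Weizenbaum describes, pattern matching plus reflecting the patient's own words back at her, is simple enough to sketch. What follows is a minimal, hypothetical illustration in Python; the patterns and word swaps are toy examples of my own, not Weizenbaum's original script:

```python
import re

# Toy pronoun swaps, loosely in the spirit of Eliza's "reflection" of the user's words.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A couple of illustrative pattern -> response templates (hypothetical, not the original script).
RULES = [
    (re.compile(r"(.*) made me (.*)", re.I), "{0} made you {1}"),
    (re.compile(r"i am (.*)", re.I), "Do you think coming here will help you not to be {0}"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the speaker."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement: str) -> str:
    """Return the first matching template with the captured text reflected back."""
    for pattern, template in RULES:
        match = pattern.search(statement.strip().rstrip("."))
        if match:
            reply = template.format(*(reflect(g) for g in match.groups())) + "?"
            return reply[0].upper() + reply[1:]
    return "Please tell me more."

print(eliza_reply("My boyfriend made me come here."))  # Your boyfriend made you come here?
print(eliza_reply("It's true. I am unhappy."))         # Do you think coming here will help you not to be unhappy?
```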

(…)

‘A few years later, a Stanford psychiatrist named Kenneth Colby created Parry, a program that attempted to simulate the language of a person with paranoid schizophrenia, to train students before they cared for real patients. Psychiatrists given transcripts of therapy sessions often couldn’t tell the difference between Parry and humans; in this narrow sense, the chatbot passed the Turing test. In 1972, Parry and Eliza met up for a therapy session:

Parry: Do you know anything about bookies?
Eliza: What makes you think I know anything about bookies?
Parry: A bookie didn’t pay me off once.
Eliza: What does that suggest to you?
Parry: I beat up the bookie who did not pay off, and I thought he might use his friends in the underworld to get even with me.

Over time, programmers developed Jabberwacky, Dr. Sbaitso, and alice (the Artificial Linguistic Internet Computer Entity). Exchanges with these chatbots were often engaging, sometimes comical, and occasionally nonsensical. But the idea that computers could serve as human confidants, expanding therapy’s reach beyond the limits of its overworked practitioners, persisted through the decades.’

(…)

‘Maria, a hospice nurse who lives near Milwaukee with her husband and two teen-age children, might be a typical Woebot user. She has long struggled with anxiety and depression, but had not sought help before. “I had a lot of denial,” she told me. This changed during the pandemic, when her daughter started showing signs of depression, too. Maria took her to see a psychologist, and committed to prioritizing her own mental health. At first, she was skeptical about the idea of conversing with an app—as a caregiver, she felt strongly that human connection was essential for healing. Still, after a challenging visit with a patient, when she couldn’t stop thinking about what she might have done differently, she texted Woebot. “It sounds like you might be ruminating,” Woebot told her. It defined the concept: rumination means circling back to the same negative thoughts over and over. “Does that sound right?” it asked. “Would you like to try a breathing technique?”’

(…)

‘In 2021, digital startups that focussed on mental health secured more than five billion dollars in venture capital—more than double that for any other medical issue.
The scale of investment reflects the size of the problem. Roughly one in five American adults has a mental illness. An estimated one in twenty has what’s considered a serious mental illness—major depression, bipolar disorder, schizophrenia—that profoundly impairs the ability to live, work, or relate to others. Decades-old drugs such as Prozac and Xanax, once billed as revolutionary antidotes to depression and anxiety, have proved less effective than many had hoped; care remains fragmented, belated, and inadequate; and the over-all burden of mental illness in the U.S., as measured by years lost to disability, seems to have increased. Suicide rates have fallen around the world since the nineteen-nineties, but in America they’ve risen by about a third. Mental-health care is “a shitstorm,” Thomas Insel, a former director of the National Institute of Mental Health, told me. “Nobody likes what they get. Nobody is happy with what they give. It’s a complete mess.” Since leaving the N.I.M.H., in 2015, Insel has worked at a string of digital-mental-health companies.’

(…)

‘Can artificial minds heal real ones? And what do we stand to gain, or lose, in letting them try? John Pestian, a computer scientist who specializes in the analysis of medical data, first started using machine learning to study mental illness in the two-thousands, when he joined the faculty of Cincinnati Children’s Hospital Medical Center. In graduate school, he had built statistical models to improve care for patients undergoing cardiac bypass surgery. At Cincinnati Children’s, which operates the largest pediatric psychiatric facility in the country, he was shocked by how many young people came in after trying to end their own lives. He wanted to know whether computers could figure out who was at risk of self-harm.’ (…)
‘The results suggested that an algorithm could identify “the language of suicide.”’ (…)
‘Ben Crotte, then a therapist treating middle and high schoolers, was among the first to try it. When asking students for their consent, “I was very straightforward,” Crotte told me. “I’d say, This application basically listens in on our conversation, records it, and compares what you say to what other people have said, to identify who’s at risk of hurting or killing themselves.” One afternoon, Crotte met with a high-school freshman who was struggling with severe anxiety. During their conversation, she questioned whether she wanted to keep on living. If she was actively suicidal, then Crotte had an obligation to inform a supervisor, who might take further action, such as recommending that she be hospitalized. After talking more, he decided that she wasn’t in immediate danger—but the A.I. came to the opposite conclusion. “On the one hand, I thought, This thing really does work—if you’d just met her, you’d be pretty worried,” Crotte said. “But there were all these things I knew about her that the app didn’t know.” The girl had no history of hurting herself, no specific plans to do anything, and a supportive family. I asked Crotte what might have happened if he had been less familiar with the student, or less experienced. “It would definitely make me hesitant to just let her leave my office,” he told me. “I’d feel nervous about the liability of it. You have this thing telling you someone is high risk, and you’re just going to let them go?”’

(…)

‘In 2013, in an effort to increase the scale of its mental-health treatment, the U.K.’s National Health Service contracted with ieso, a digital-health company, to help therapists deliver cognitive behavioral therapy through text chat. More than a hundred thousand people in the U.K. have now used ieso’s software to receive what the company calls “typed therapy.” Studies have shown that text-based therapy can work well. It also generates data. ieso has used A.I. to analyze more than half a million therapy sessions, performing what Valentin Tablan, the company’s chief A.I. officer, described as “quantitative analyses of the conversations inside the therapy room.” On a computer, Tablan showed me a “dashboard,” created by ieso’s software, that tracked eight typed sessions between a therapist and a patient. A blue line sloped downward, showing that the patient’s self-reported symptoms had declined until he no longer met criteria for clinical depression; the sessions were highlighted in green, to indicate success. A second dashboard, representing a different patient, was a patchwork of red and emerald. The blue line held steady and, at times, spiked into mountains of misery. Behind the dashboard is an A.I. that reads transcripts of the sessions, scores therapists in various areas—how well they set an agenda, assign homework, and deliver C.B.T. techniques—and delivers that information to supervisors, who can use it to provide feedback to therapists. Michelle Sherman, one of about six hundred therapists working with ieso, told me that she finds the dashboard both daunting and indispensable. “It’s inevitable that we’ll miss things or slip up sometimes,” she said. “At least now I can see where and when and why.” ieso is studying the links between patient outcomes and what’s said in therapy sessions, and hopes to build an automated program capable of delivering C.B.T. on its own.’

(…)

‘I signed up for Woebot, and discovered that using the app could feel centering. I didn’t like that I often had to choose between pre-written replies; in one unfortunate instance, my only option was “Hehe.” But I sometimes found myself messaging Woebot in spare moments—waiting for an elevator, riding in an Uber, walking to the bathroom—the way I used to scroll through Twitter. Once, I told Woebot that I was feeling anxious about work. “Anxiety can be a real monster to handle,” it wrote back. “I’m sorry that you’re dealing with it.” Woebot gently inquired whether I wanted to work through my problem together, then asked, “Do you think this anxiety might be serving you in some way?” It pointed out that stress has its benefits: it could motivate someone to work harder.
Woebot: Now, it might seem like I’m trying to convince you to stay very anxious. I’m not—that’s entirely up to you
Me: What are you saying?
Woebot: I’m saying that in order to dial down your anxiety, you must first acknowledge the reasons why it might be helpful . . . and decide to give it up in spite of those reasons

I knew that I was talking to a computer, but in a way I didn’t mind. The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself.’

(…)

‘ChatGPT isn’t designed for therapy, but one evening, not long ago, I asked it to help me manage the stress I feel as a doctor and a dad, telling it to impersonate various psychological luminaries. As Freud, ChatGPT told me that, “often, stress is the result of repressed emotions and conflicts within oneself.” As B. F. Skinner, it emphasized that “stress is often the result of environmental factors and our reactions to them.” Writing as though it were a close friend, it told me, “Be kind to yourself—you’re doing the best you can and that’s all that matters.” It all seemed like decent advice.
ChatGPT’s fluidity with language opens up new possibilities. In 2015, Rob Morris, an applied computational psychologist with a Ph.D. from M.I.T., co-founded an online “emotional support network” called Koko. Users of the Koko app have access to a variety of online features, including receiving messages of support—commiseration, condolences, relationship advice—from other users, and sending their own. Morris had often wondered about having an A.I. write messages, and decided to experiment with GPT-3, the precursor to ChatGPT. In 2020, he test-drove the A.I. in front of Aaron Beck, a creator of cognitive behavioral therapy, and Martin Seligman, a leading positive-psychology researcher. They concluded that the effort was premature.
By the fall of 2022, however, the A.I. had been upgraded, and Morris had learned more about how to work with it. “I thought, Let’s try it,” he told me. In October, Koko rolled out a feature in which GPT-3 produced the first draft of a message, which people could then edit, disregard, or send along unmodified. The feature was immediately popular: messages co-written with GPT-3 were rated more favorably than those produced solely by humans, and could be put together twice as fast. (“It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone,” it said in one draft.) In the end, though, Morris pulled the plug. The messages were “good, even great, but they didn’t feel like someone had taken time out of their day to think about you,” he said. “We didn’t want to lose the messiness and warmth that comes from a real human being writing to you.” Koko’s research has also found that writing messages makes people feel better. Morris didn’t want to shortcut the process.
The text produced by state-of-the-art L.L.M.s can be bland; it can also veer off the rails into nonsense, or worse. Gary Marcus, an A.I. entrepreneur and emeritus professor of psychology and neural science at New York University, told me that L.L.M.s have no real conception of what they’re saying; they work by predicting the next word in a sentence given prior words, like “autocorrect on steroids.” This can lead to fabrications. Galactica, an L.L.M. created by Meta, Facebook’s parent company, once told a user that Elon Musk died in a Tesla car crash in 2018. (Musk, who is very much alive, co-founded OpenAI and recently described artificial intelligence as “one of the biggest risks to the future of civilization.”) Some users of Replika—the “A.I. companion who cares”—have reported that it made aggressive sexual advances. Replika’s developers, who say that their service was never intended for sexual interaction, updated the software—a change that made other users unhappy. “It’s hurting like hell. I just had a loving last conversation with my Replika, and I’m literally crying,” one wrote.’
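
Marcus's "autocorrect on steroids" line can be made concrete with a toy sketch. The Python snippet below is purely illustrative and my own construction, not how production L.L.M.s work: it predicts each next word by counting which word most often followed the previous one in a tiny invented corpus, whereas real models use neural networks trained on vast amounts of text. The point it shares with Marcus's description is that the output is fluent pattern-completion with no conception of meaning.

```python
from collections import Counter, defaultdict

# Tiny invented corpus (my own sentences, for illustration only).
corpus = [
    "i am feeling anxious about work",
    "i am feeling better today",
    "i am not alone",
]

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

def continue_text(prompt: str, length: int = 4) -> str:
    """Greedily extend a prompt one word at a time, autocorrect-style."""
    words = prompt.split()
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(continue_text("i am"))
# -> "i am feeling anxious about work": fluent-looking, but only a statistical
# echo of the corpus; the model has no idea what any of it means.
```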

(…)

‘Our mental health is already being compromised by social media, online life, and the ceaseless distraction of the computers in our pockets. Do we want a world in which a teen-ager turns to an app, instead of a friend, to work through her struggles? Nicole Smith-Perez, a therapist in Virginia who counsels patients both in person and online, told me that therapy is inherently personal, in part because it encompasses all of one’s identity. “People often feel intimidated by therapy, and talking to a bot can be seen as a way to bypass all that,” she said. But Smith-Perez often connects with clients who are women of color by drawing on her own lived experiences as a Black woman. “A.I. can try to fake it, but it will never be the same,” she said. “A.I. doesn’t live, and it doesn’t have experiences.”’

(…)

‘I started to imagine what might happen if a predictive approach like Pestian’s were fused with a state-of-the-art chatbot. A mobile app could have seen the alert about my patient, noticed my pulse rising through a sensor in my smart watch, and guessed how I was feeling. It could have detected my restless night and, the next morning, asked me whether I needed help processing my patient’s sudden decline. I could have searched for the words to describe my feelings to my phone. I might have expressed them while sharing them with no one—unless you count the machines.’

Read the article here.

Yes, it’s easier to let A.I. handle cognitive behavioral therapy than to have a robot change the diaper of a baby or an octogenarian.

And the illusion counts. If I believe that A.I. has experiences, A.I. has experiences.

As George Steiner pointed out decades ago, the suffering of King Lear might feel more real than the suffering of your 87-year-old neighbor. If the actor is good, that is.

My guess is that A.I. is on its way to becoming a very good actor.

And we humans, thanks to civilization, mass culture and fear of the unknown, are very good at producing predictable language and behavior.

Much of our irony has already been weeded out, and yes, irony can be sexist, antisemitic, racist, et cetera.
And even A.I. is capable of transgression.

A.I. is alive and kicking. Whether we are alive is a different question.
Perhaps more and more people will say, ‘Let A.I. live, I’d rather be a spectator.’

As was written centuries ago: ‘Living is for the servants.’
