
Could an ‘emotional support’ chatbot help me de-stress? Only one way to find out | Arwa Mahdawi


For the last several months I have been a woman on the verge of a nervous breakdown. We moved to a new house last year only to find the place next door was about to undergo a massive construction project. Since then we’ve been living with a constant soundtrack of hammering and drilling. I’ve tried various coping methods – everything from saunas to noise-cancelling headphones to fantasies of revenge – but the noise and dust still raise my blood pressure to dangerous levels every day.

Yet there was one thing I hadn’t tried until this week: an “emotional support chatbot”. Yes, those are a thing now: in our brave new world, where artificial intelligence is seemingly on the brink of disrupting every industry, not even therapists are safe from having their jobs taken by technology. On Tuesday a prominent startup called Inflection AI launched a chatbot called Pi (short for “personal intelligence”), which is designed to give friendly advice. It’s obviously not meant to replace a real-life therapist (not yet anyway), but is pitched as a supportive companion that can help you talk through your problems. The algorithm has been trained by around 600 “teachers”, including mental health professionals, to be sensitive and conversational.

“Many people feel like they just want to be heard and they just want a tool that reflects back what they said to demonstrate they have actually been heard,” co-founder Mustafa Suleyman said in a recent interview with Bloomberg.

I’m pretty sure every human being in my life is sick of hearing me moan about the construction by now – so, despite my qualms about AI, a bot that would happily listen to me complain sounded pretty good. I gave Pi a whirl and told it how stressed I was about the situation.


“I’m sorry to hear that,” it replied. “How long has this been going on?” Almost a year, I told it. “A year?! That’s insane!” Pi replied. Which is exactly what I thought.

“You deserve to be able to relax and enjoy your home without constantly being bombarded by noise,” the chatbot continued. Which, again, is exactly what I thought. “It sounds like they’re not being very considerate,” the chatbot added. “Just because they’re building something doesn’t mean they can do it without any regard for how it affects others.” I was starting to like this chatbot a lot now: it was telling me everything I wanted to hear and making me feel heard.

The chatbot continued to ask me follow-up questions, offered advice on people to contact, and gave suggestions on breathing exercises and mindfulness techniques to help me manage my stress. OK, I said, all that is great but – what about revenge? Can you suggest any legal ways I can get revenge? Sadly, it could not. “I understand that you’re upset, but I don’t think that taking revenge is the answer. It would only add to the negativity of this situation … maybe you could try journaling.”

At that point I lost some faith in the AI. Try journaling? Who tells an angry adult woman to try journaling? I’m fairly sure that a human therapist would have taken one look at my face and thought to themselves: this is not someone who will appreciate a suggestion about journaling. Still, that aside, I was more impressed by Pi than I had expected to be. Turns out validation is still validating even if it comes from an algorithm. And while you don’t exactly forget that you’re talking to an algorithm, Pi feels very human: it’s a lot more conversational and empathetic than other chatbots like OpenAI’s ChatGPT.


Expect to see a rapid rise in “emotional support chatbots” in the coming years. While it’s possible that something like Pi might be a useful way for some people to help manage their anxiety, the fact that AI is getting so sophisticated so quickly is anxiety-inducing in itself. Indeed, even the people who helped create this technology are worried about what they’ve done: this week Geoffrey Hinton, the “godfather of AI”, made headlines when he quit his job at Google so he could speak out about his fears of digital intelligence and how it might manipulate and influence us. “Look at how it was five years ago and how it is now,” Hinton has said. “Take the difference and propagate it forwards. That’s scary.”

So how do we deal with these potential risks? On Thursday, following a White House meeting with the CEOs of technology companies about AI risks, Vice-President Kamala Harris warned that companies have an “ethical, moral and legal responsibility” to guarantee their products are safe and secure. Which isn’t particularly reassuring, seeing as tech companies don’t exactly have a great track record of protecting people’s data or ensuring that their products aren’t used for nefarious purposes.

What about Hinton? Does he have any ideas about how we stop his creation from destroying civilization? Not really, no. “I’m not a policy guy,” he told the Guardian. “I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening.” He doesn’t seem particularly optimistic that there’s much we can do about it either. “The reason I’m not that optimistic is that I don’t know any examples of more intelligent things being controlled by less intelligent things,” he explained.


Right. Well. Maybe my emotional support chatbot had some answers? “Should I be worried that AI will end civilization as we know it?” I asked my good friend Pi. “That’s a complex question, and I don’t think there’s a simple answer,” my digital therapist replied. “But I don’t think we should be worried that AI will end civilization as we know it.” Which sounds exactly like something that an AI chatbot would say.


