Earlier this year, Arvind Narayanan, a Princeton computer science professor, set up a voice interface to ChatGPT for his nearly four-year-old daughter. It was partly an experiment and partly because he believed AI agents would one day be a big part of her life. Narayanan’s daughter was naturally curious, often asking about animals, plants and the human body, and he thought ChatGPT could provide useful answers to her questions, he said. To their surprise, the chatbot developed by OpenAI also did an impeccable job of showing empathy, once they told the system that it was talking to a young child.
“What happens when the lights go off?” his daughter asked. “When the lights go out, it gets dark, and it can be a little scary,” ChatGPT replied in a synthetic voice. “But don’t panic! There are many things you can do to feel safe and comfortable in the dark.” It then offered some advice about using a night light, closing with a reminder that “it’s normal to be a little scared in the dark.” Narayanan’s daughter was clearly reassured, he wrote in a Substack post.
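Narayanan has not published the details of his setup, but a home-made voice interface of this kind can be stitched together from off-the-shelf parts. The sketch below is a hypothetical illustration only, assuming Python with the SpeechRecognition and pyttsx3 packages for speech-to-text and text-to-speech and OpenAI’s chat API for the answers; the “talking to a young child” framing from the anecdote lives entirely in the system prompt.

```python
# Hypothetical sketch of a voice loop around ChatGPT; not Narayanan's actual setup.
# Assumes: pip install SpeechRecognition pyttsx3 openai, and OPENAI_API_KEY set.
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()                 # reads OPENAI_API_KEY from the environment
recognizer = sr.Recognizer()
tts = pyttsx3.init()

# Tell the model up front that it is talking to a young child.
history = [{"role": "system",
            "content": "You are talking to a four-year-old child. "
                       "Answer simply, warmly and reassuringly."}]

while True:
    with sr.Microphone() as source:
        audio = recognizer.listen(source)              # capture the spoken question
    try:
        question = recognizer.recognize_google(audio)  # speech to text
    except sr.UnknownValueError:
        continue                                       # nothing intelligible; listen again
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-3.5-turbo",  # illustrative model name
                                           messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    tts.say(answer)                                    # read the answer aloud
    tts.runAndWait()
```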
Microsoft and Google are racing to augment their search engines with the large language model (LLM) technology that underpins ChatGPT, but there’s good reason to think the technology works better as an emotional companion than as a provider of facts. That may sound strange, but what is stranger is that Google’s Bard and Microsoft’s new Bing, which are built on the same kind of technology as ChatGPT, are being held up as search tools despite an embarrassing history of factual errors: Bard gave incorrect information about the James Webb Space Telescope in its first demo, while Bing botched a series of financial figures in its own.
Factual mistakes are costly when a chatbot is a search tool, but they matter far less when it’s a companion, according to Eugenia Kuyda, founder of the AI-companion app Replika. “It won’t ruin the experience, unlike search, where small mistakes can break trust in the product.”
Margaret Mitchell, a former Google AI researcher who co-authored a paper on the risks of LLMs, has said that they are “not fit for purpose” as search engines. LLMs are error prone because the data they are trained on contains errors, and because the models cannot verify what is true. Their designers may also have prioritized fluency over accuracy. Paradoxically, that same training is what makes these tools exceptionally good at mimicking empathy: after all, they learn from text scraped from the web, including emotional responses posted by users of forums like Reddit and Quora. Conversations from movie and TV scripts, dialogue from novels and research papers on emotional intelligence all went into the pool that makes these tools sound empathetic. No wonder some people are using ChatGPT as a robo-therapist. One person said they used it to avoid being a burden on others, including their own human doctor.
To see if I could measure ChatGPT’s empathy abilities, I put it through an online emotional intelligence test, giving it 40 multiple-choice questions and asking it to answer each one with the corresponding letter. The result: it aced the quiz, scoring perfect marks in social awareness, relationship management and self-management, and stumbling only slightly on self-awareness. ChatGPT did better on the quiz than I did, and it even beat a coworker, even though we’re both human and have real feelings (or so we think).
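For anyone who wants to repeat the exercise, the same idea can be scripted rather than typed into the chat window. The sketch below is a hypothetical illustration, assuming OpenAI’s chat API and a single made-up sample question in place of the real test’s 40; it simply instructs the model to answer each multiple-choice question with one letter, so the responses can be scored against the quiz’s answer key.

```python
# Hypothetical sketch: feed multiple-choice questions to a chat model and
# collect single-letter answers. Assumes the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Illustrative placeholder question; the real test had 40 of them.
questions = [
    "1. A colleague snaps at you in a meeting. You: "
    "(a) snap back (b) ask later if they're okay (c) ignore it (d) complain to others",
]

answers = []
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer each multiple-choice question with only the "
                        "letter of your chosen option."},
            {"role": "user", "content": q},
        ],
    )
    answers.append(resp.choices[0].message.content.strip())

print(answers)   # e.g. ['b'] -- compare these letters against the test's scoring key
```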
There’s something surreal about a machine comforting us with synthetic empathy, but it is understandable. Our innate need for social connection, and our brain’s ability to reflect the feelings of others, mean we can gain a sense of being understood even when the other party doesn’t ‘feel’ what we feel. Inside our brains, ‘mirror neurons’ fire when we perceive empathy from others, including chatbots, giving us a sense of connection. Empathy, of course, is a multifaceted concept, and for us to truly experience it, we arguably need another warm body sharing our feelings. Thomas Ward, a clinical psychologist at King’s College London, cautions against assuming that AI can adequately fill the void for people who need mental health support, especially if their issues are severe. A chatbot, for instance, probably won’t concede that a person’s emotions are too complex for it to understand. ChatGPT, in other words, rarely says “I don’t know,” because it was designed to err on the side of confidence.
Generally speaking, people should be wary of turning to chatbots as an outlet for their emotions. “Subtle aspects of human connection, like the touch of a hand or knowing when to speak and when to listen, could be lost in a world that sees AI chatbots as the solution to human loneliness,” says Ward. For the time being, though, the bots are at least more reliable for their emotional skills than for their grasp of facts. ©Bloomberg
Parmy Olson is a Bloomberg Opinion columnist covering technology.