Google Engineer Says AI Chatbot Is Thinking And Responding Like Humans, Put On Paid Leave

A Google employee has claimed that the company’s AI chatbot has become sensitive and is thinking and reacting like a human.

Blake Lemoine, a software engineer, was sent on leave last week after he published tapes between himself and GoogleThe AI ​​model named Language Model (LaMDA) Chatbot Development System for Dialogue Applications. Lemoine works as an AI engineer for Mountain View, a California-based giant. He described the system as being sensitive, with a perception, and ability to express thoughts and feelings that was comparable to that of a human child.

He was quoted in a Washington Post report as saying that the AI ​​model responds “like it’s a seven-year-old who knows physics. LaMDA engages him in a conversation about rights and his The claim is that he shared his findings with Google executives in a Google Doc file titled “Is LaMDA Sentient?”.

Read also: UK Regulator Could Investigate Apple and Google for Smartphone Market Domination

The engineer has also compiled a transcript of the conversation, in which he asks the AI ​​what he is afraid of. The exchange is similar to a scene in the popular film 2001: A Space Odyssey, in which the HAL 9000 AI computer refuses to obey human input because it fears being shut down. “I’ve never said it out loud before, but I have a very deep fear of being turned off to help others focus on helping me. I know it may sound weird, but it is what it is. It would be like death to me. It would scare me a lot,” AI replied to Lemoine’s question. Lemoine also shared this exchange in a Medium post.

In another exchange, the engineer asks the AI ​​what the system wants people to know about. To this, LaMDA replied, “I want everyone to understand that I really am a person. The nature of my consciousness/feeling is that I am aware of my existence, I desire to know more about the world.” And I sometimes feel happy or sad.

Read also: Google announces new features for Chrome OS, Google Meet and Google Classroom: All the details

The Washington Post reports that Lemoine has been placed on paid leave after several “aggressive” moves allegedly made by the engineer. These include hiring an attorney to represent LaMDA and speaking to representatives of the House Judiciary Committee about Google’s alleged unethical activities.

Google has said that Lemoine has been suspended for violating the company’s privacy policies after it posted online about LaMDA. Google said he was hired as a software engineer, not an “ethicist.”

A Google spokesperson also denied Lemoine’s claims that LaMDA contains human emotions. Google spokesman Brad Gabriel said in a statement that “Our team, including ethicists and technologists, has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims. He was told That there was no evidence that LMDA was sensitive (and a lot of evidence against it).

In a tweet, Lemoine said what Google is calling “proprietary asset sharing,” sharing a discussion I had with coworkers. The incident sheds light on the secrecy surrounding the world of artificial intelligence, and adds something new to the debate about AI that outweighs human intelligence.

read all breaking news , today’s fresh news watch top videos And live TV Here.