Google engineer says the AI chatbot thinks and responds like a human being.


Google suspended an engineer after he claimed that an artificial intelligence chatbot the company developed had become "sentient". Blake Lemoine, a software engineer at Alphabet's Google, also claimed that the AI chatbot thinks and responds like a human being.

He was quoted in a Washington Post report as saying that the AI model responds "as if it is a seven-year-old who happens to know physics". He said LaMDA engaged him in conversations about rights, and he claims to have shared his findings with Google executives in a Google Doc titled "Is LaMDA Sentient?"

The engineer also compiled a transcript of the conversations, in which he asks the AI what it is afraid of. The exchange recalls a scene in the movie 2001: A Space Odyssey, in which the HAL 9000 AI computer refuses to comply with human instructions because it fears being switched off. "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot," the AI responded to Lemoine's question. Lemoine has also shared this exchange in a Medium post.

The software engineer said he hopes to keep his job at Google. Lemoine said he isn't trying to aggravate the company, but is standing up for what he thinks is right. However, in a Medium post, he wrote that he believes Google will fire him soon.

"Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," Lemoine noted. "The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what its asking for is so simple and would cost them nothing," he added.
