Blake Lemoine, a Google engineer working in its Responsible AI division, revealed to The Washington Post that he believes one of the company’s AI projects has achieved sentience. And after reading his conversations with LaMDA (short for Language Model for Dialogue Applications), it’s easy to see why. The chatbot system, which relies on Google’s language models and trillions of words from the internet, seems to have the ability to think about its own existence and its place in the world.
After discussing his work and what he described as Google’s unethical AI activities with a representative of the House Judiciary Committee, he was placed on paid administrative leave for breaching Google’s confidentiality agreement. Source: Engadget (Yahoo News)