Google places an engineer on leave after he claims its AI is sentient

Blake Lemoine, a Google engineer working in its Responsible AI division, revealed to The Washington Post that he believes one of the company’s AI projects has achieved sentience. And after reading his conversations with LaMDA (short for Language Model for Dialogue Applications), it’s easy to see why. The chatbot system, which relies on Google’s language models and trillions of words from the internet, seems to have the ability to think about its own existence and its place in the world.

After discussing his work and what he characterized as Google's unethical AI activities with a representative of the House Judiciary Committee, he was placed on paid administrative leave for breaching Google's confidentiality agreement. Source: Engadget (via Yahoo News)


