每日英語跟讀 Ep.K389: Google employee suspended after claiming AI is sentient: Google has more pressing AI problems than 'sentient' bots

A Google software engineer was suspended after going public with his claims of encountering “sentient” artificial intelligence on the company’s servers — spurring a debate about how and whether AI can achieve consciousness. Researchers say it’s an unfortunate distraction from more pressing issues in the industry.


The engineer, Blake Lemoine, said he believed that Google’s AI chatbot was capable of expressing human emotion, and that the company would need to address the resulting ethical ramifications. Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.


Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm.


Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs.


The debate over sentience in robots has played out alongside science fiction portrayals in popular culture, in stories and movies featuring AI romantic partners or AI villains, so it had an easy path into the mainstream. "Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."
Putting an emphasis on AI sentience would have given Google the leeway to blame the issue on the intelligent AI making such a decision, Bender said. “The company could say, ‘Oh, the software made a mistake,’” she said. “Well no, your company created that software. You are accountable for that mistake. And the discourse about sentience muddies that in bad ways.”