每日英語跟讀 (Daily English Shadowing) Ep.K389: Google engineer suspended after claiming AI is sentient; Google has more pressing AI problems than ‘sentient’ bots

June 27, 2022

A Google software engineer was suspended after going public with his claims of encountering “sentient” artificial intelligence on the company’s servers — spurring a debate about how and whether AI can achieve consciousness. Researchers say it’s an unfortunate distraction from more pressing issues in the industry.

The engineer, Blake Lemoine, said he believed that Google’s AI chatbot was capable of expressing human emotion, and that the company would need to address the resulting ethical ramifications. Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.

Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm.

Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs.

The debate over sentience in robots has been carried out alongside science fiction portrayal in popular culture, in stories and movies with AI romantic partners or AI villains. So the debate had an easy path to the mainstream. “Instead of discussing the harms of these companies,” such as sexism, racism and centralization of power created by these AI systems, everyone “spent the whole weekend discussing sentience,” Timnit Gebru, formerly co-lead of Google’s ethical AI group, said on Twitter. “Derailing mission accomplished.”

Putting an emphasis on AI sentience would have given Google the leeway to blame the issue on the intelligent AI making such a decision, Bender said. “The company could say, ‘Oh, the software made a mistake,’” she said. “Well no, your company created that software. You are accountable for that mistake. And the discourse about sentience muddies that in bad ways.”
