Daily English Shadowing Ep.K276: Can a Machine Learn Morality?

· Daily English (daily shadowing), Daily Shadowing (international news)

The last live stream of 2021! Come join me, Peddy, and our community of commuters on Clubhouse to ring in the New Year early and practice English together! Join the 15Mins Commute English live room!

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.

Morality, it seems, is as knotty for a machine as it is for humans.

Delphi, which has received more than 3 million visits over the past few weeks, is an effort to address what some see as a major problem in modern AI systems: They can be as flawed as the people who create them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite wide deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are working to address those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.

“It’s a first step toward making AI systems more ethically informed, socially aware and culturally inclusive,” said Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project.

Delphi is by turns fascinating, frustrating and disturbing. It is also a reminder that the morality of any technological creation is a product of those who have built it. The question is: Who gets to teach ethics to the world’s machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?

While some technologists applauded Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.

Source article: https://udn.com/news/story/6904/5953372