Daily Shadowing #508: Good News: AI Is Getting Cheaper. That’s Bad News, Too.

Cheaper artificial intelligence is good news — and bad news, too

A Silicon Valley startup recently unveiled a drone that can set a course entirely on its own. A smartphone app allows the user to tell the drone to follow someone. Once the drone starts tracking, its subject will find it remarkably hard to shake.

The drone is meant to be a fun gadget. But it is not unreasonable to find this automated bloodhound a little unnerving.

A group of artificial intelligence researchers and policymakers last month released a report that described how rapidly evolving and increasingly affordable AI technologies could be used for malicious purposes.

The tracking drone helps explain their concerns. Made by a company called Skydio, the drone costs $2,499. It was made with technological building blocks available to anyone: ordinary cameras, open-source software and low-cost computer chips.

In time, putting these pieces together will become increasingly easy and inexpensive.

“This stuff is getting more available in every sense,” said one of Skydio’s founders, Adam Bry. These same technologies are bringing a new level of autonomy to cars, warehouse robots, security cameras and a wide range of internet services.

But at times, new AI systems also exhibit strange and unexpected behavior because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation; today’s computer-vision algorithms, for example, can be fooled into seeing things that are not there.

By exploiting such weaknesses, miscreants could circumvent security cameras or compromise a driverless car.

Researchers are also developing AI systems that can find and exploit security holes in all sorts of other systems, said Paul Scharre, an author of the report. These systems can be used for both defense and offense.

Automated techniques will make it easier to carry out attacks that now require extensive human labor, including “spear phishing,” which involves gathering and exploiting personal data of victims. In coming years, the report said, machines will be more adept at collecting and deploying this data on their own.

AI systems are also increasingly adept at generating believable audio and video on their own. This will make it easier for bad actors to spread misinformation online, the report said.