DARPA head on AI dangers: 'It's not one of those things that keeps me up at night'

Artificial intelligence remains predictable and will have to become far more sophisticated before it poses a serious threat to humans, according to the head of the Defense Advanced Research Projects Agency.

During a Q&A with Washington Post columnist David Ignatius on Thursday, DARPA Director Steven H. Walker said AI is still “a very fragile capability,” one that has little capacity for acting independently.

“At least in the Defense Department today, we don’t see machines doing anything by themselves,” he said, noting that agency researchers are intensely focused on building “human-machine” partnerships. “I think we’re a long way off from a generalized AI, even in the third wave in what we’re pursuing.”

“It’s not one of those things that keeps me up at night,” he added, referring to dangers posed by AI.

Walker’s comments come against a backdrop of bitter controversy over the military’s use of AI. In June, thousands of Google employees signed a petition protesting the company’s role in a Defense Department project using machine intelligence.

Google eventually pulled out of the program, known as Project Maven, an initiative that uses AI to automatically tag cars, buildings and other objects in video recorded by drones flying over conflict zones. Google employees accused the military of harnessing AI to kill with greater efficiency, but military leaders claimed the technology would be used to keep military personnel away from unnecessary danger, ultimately saving lives.

“Without a doubt, this has caused a lot of consternation inside the DOD,” Robert O. Work, the former deputy secretary of defense who helped launch Project Maven last year, told The Washington Post’s Tony Romm and Drew Harwell in October. “Google created a big moral hazard for itself by saying it doesn’t want to use any of its AI technology to take human life. But they didn’t say anything about the lives that could be saved.”

Several months after Google’s exit from Project Maven, DARPA announced a multiyear investment of more than $2 billion in programs focused on developing AI.

Adding to the consternation surrounding AI are big-name technologists such as Elon Musk and Bill Gates, who have argued — alongside British inventor Clive Sinclair and the late theoretical physicist Stephen Hawking — that humanity is wandering into dangerous territory in its seemingly blind pursuit of AI.

Musk has compared AI to “an immortal dictator” and “the devil,” and Hawking said it “could spell the end of the human race.”

In his remarks Thursday, Walker struck a calming tone, arguing that DARPA researchers have found their machines perform “pretty badly” when asked to reason flexibly, beyond the large data sets on which they’ve been trained.

The goal, he said, is not just to give machines the ability to understand what they’re looking at in their environment, but to give them the ability to adapt to that environment the way a human might.

For example, AI might be able to identify an image of a cat sitting on a suitcase, but the machine still can’t understand that the cat could be placed inside the suitcase — and that you probably don’t want to do such a thing. Humans, on the other hand, instinctively understand both scenarios.

“How do you give machines that sort of common sense is the next place DARPA is headed,” Walker said. “It’s going to be critical if we really want machines to be partners to the humans and not just tools.”


This article was written by Peter Holley from The Washington Post and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.
