92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built
The Terminator is not real. Yet.
Most AI-using organizations are working to keep it that way, according to a recent study by SAS, Accenture, and Intel. Almost three quarters of large businesses are now using AI in one way or another, and 92% of the most successful ones are working to ensure their uses of artificial intelligence are pro-social.
Most of them, of course, are not developing weapons systems.
Instead, they’re trying to ensure that their AI systems don’t discriminate against minorities, the disadvantaged, or, frankly, anyone who doesn’t fit the profile of the training data their neural networks are ingesting.
Microsoft’s Tay, of course, is the prototypical example of a bot gone bad.
But we have tens of thousands of bots now. And how they treat people is increasingly important.
“Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.”
One example that maybe didn’t work out quite as planned just hit the news.
A customer service bot for WestJet, a Canadian airline, directed a customer to a suicide prevention line after a glowing review somehow triggered a flag for depression.
But this is increasingly important for military and defense industries as well.
The U.S. military recently confirmed that a Reaper drone shot down an aerial target in the first-ever drone air-to-air “kill.” And while drones are currently remotely controlled, the “Air Force wants to leverage artificial intelligence, automation and algorithmic data models to streamline opportunities for airmen watching drone feeds.”
You can bet the AI used here is soon going to go beyond watching.
The U.S. was one of three countries vocally in favor of “killer robots” at a recent United Nations meeting. One of the stated reasons: international law could be programmed directly into the drones.
Others are not so sure that this would be successful.
But even outside the military, AI is moving into enough consequential areas, hiring, for instance, that bias could do significant harm. That’s something we need to watch out for, but it’s challenging because in many cases the reasoning behind an AI system’s decision is opaque. There’s little explanation for why a decision was made.
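To make “explanation” concrete: for simple linear models, each feature’s contribution to a decision can be read off directly, which is precisely the property deep neural networks lack. Here’s a minimal sketch of that idea; the hiring-screen model, feature names, and weights are all hypothetical illustrations, not any real system.

```python
# Minimal sketch of decision explainability for a linear scoring model.
# All weights and feature names below are hypothetical examples.

def explain_decision(weights, features, threshold=0.0):
    """Return the decision and each feature's contribution to the score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "advance" if score >= threshold else "reject"
    return decision, contributions

# Hypothetical hiring-screen weights, as if learned from training data.
weights = {"years_experience": 0.8, "referral": 0.5, "resume_gap": -1.2}
candidate = {"years_experience": 3, "referral": 1, "resume_gap": 1}

decision, why = explain_decision(weights, candidate)
# 'why' itemizes the score: 2.4 from experience, 0.5 from the referral,
# -1.2 from the resume gap. A negative weight on something like a resume
# gap is exactly where hidden bias can creep in, and a deep network
# offers no equivalent itemization out of the box.
```

For a linear model the per-feature contributions fully account for the output; making something analogous available for complex models is the goal of explainability research.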
That could soon change.
“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” said Yinyin Liu, head of data science for Intel AI Products Group. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”
One effort to make that happen is the Explainable AI (XAI) project, which aims to make the reasons behind an AI system’s judgments clearer.
Sponsoring the project?
DARPA … the U.S. Defense Advanced Research Projects Agency.