92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built

The Terminator is not real. Yet.

Most AI-using organizations are working to keep it that way, according to a recent study by SAS, Accenture, and Intel. Almost three quarters of large businesses are now using AI in one way or another, and 92% of the most successful ones are working to ensure their uses of artificial intelligence are pro-social.

Most of them, of course, are not developing weapons systems.

Instead, they’re trying to ensure that their AI systems don’t discriminate against minorities, the disadvantaged, or, frankly, anyone who doesn’t fit the profile of the training data their neural networks are ingesting.

Microsoft’s Tay, of course, is the prototypical example of a bot gone bad.

But we have tens of thousands of bots now. And how they treat people is increasingly important.

“Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.”

One example that maybe didn’t work out quite as planned just hit the news.

A customer-service bot for WestJet, a Canadian airline, sent a customer to a suicide-prevention line after a glowing review somehow triggered the bot’s flag for depression.

But this is increasingly important for military and defense industries as well.

The U.S. military recently confirmed that a reaper drone took down an aerial target in the first-ever air-to-air “kill.” And while drones are currently remotely controlled, the “Air Force wants to leverage artificial intelligence, automation and algorithmic data models to streamline opportunities for airmen watching drone feeds.”

You can bet the AI used here is soon going to go beyond watching.

The U.S. was one of three countries to speak vocally in favor of “killer robots” at a recent United Nations meeting. One reason offered: international law could be programmed into the drones.

Others are not so sure that this would be successful.

But even in the non-military world, there are plenty of areas where AI will get involved — hiring, for instance — where bias could do significant harm. That’s something we need to watch out for, but it’s challenging because in many cases the reasons an AI system makes a given decision are opaque: there’s little explanation for why a decision was made.
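To make the hiring concern concrete, here is a toy sketch (not from the article; the data and model are invented for illustration) of how a model trained on skewed historical hiring data simply reproduces that skew — an equally qualified candidate gets a different prediction depending on group membership.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, qualified, hired).
# Group "A" dominates the data; the lone qualified "B" applicant was rejected.
training = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("A", True, True), ("B", True, False),
]

def naive_model(group: str, qualified: bool) -> bool:
    """Predict by majority vote among training rows from the same group."""
    votes = Counter(hired for g, q, hired in training if g == group)
    return votes.most_common(1)[0][0]

print(naive_model("A", True))  # True  -> favored by the biased history
print(naive_model("B", True))  # False -> penalized, mirroring the skew
```

Nothing in the model is explicitly prejudiced; the bias lives entirely in the training data — which is exactly why organizations are auditing what their neural networks ingest.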

That could soon change.

“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” said Yinyin Liu, head of data science for Intel AI Products Group. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”

One effort to make that happen is the Explainable AI (XAI) project, which aims to make the reasons behind an AI system’s judgments clearer.

Sponsoring the project?

DARPA … the U.S. Defense Advanced Research Projects Agency.


This article was written by John Koetsier from Forbes and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.


