AI Podcast: Bias in AI

It’s easy to think of AI as cold, unbiased, and objective. Not quite, suggests Narrative Science Chief Scientist Kris Hammond in our latest AI Podcast, because we never know when AI will repeat our own biases back to us.

“Just as our biases creep into how we talk to, we train, we teach our children, they creep into the way we talk to, train and teach our AI systems,” says Hammond, also a professor of computer science at Northwestern University and founder of the University of Chicago’s Artificial Intelligence Laboratory.

Narrative Science uses machine learning to turn data into stories that help people better understand the world around them. Its natural language generation platform, Quill, has generated headlines by literally generating headlines: automating the production of earnings reports and sports stories, among other tasks.

Bias in AI: Examples Proliferating

That makes questions of bias more than just a matter of academic interest for Hammond, who is also director of Northwestern’s Medill/McCormick Center for Innovation in Technology, Media and Journalism. It’s a challenge not only in training AI on tasks that are hard to quantify to begin with, like judging beauty, but also in tasks that would seem, to some, less influenced by our biases, such as assessing creditworthiness.

“We would like the artificial intelligence systems that we build to be cold, emotionless, oddly enough so that we can make fun of them, because they’re not as clever and good and creative as we are,” Hammond says, during a wide-ranging conversation with our podcast’s host, Michael Copeland. “The reality is we build them, we train them, we sometimes give them the reasoning rules we’re going to use, and there is absolutely no way to avoid having all of our notions about how the world works creep into these systems.”

Can AI Free Us From Our Prejudices?

The solution isn’t just to look for our own biases when training AIs, but to understand our own limitations, and train AIs to help us all see past them.

“As human beings we’re a collection of vaguely serviceable heuristics and a complete misunderstanding of statistics,” Hammond says. “And having machines help us — because there are people who understand who we are and how we are and how we think — and actually design those machines to really cater to the best of us, that actually is absolutely doable.”

To hear the whole conversation, tune into this week’s AI Podcast by subscribing via iTunes, Google Play Music or SoundCloud.

Fast, Furious and Frugal

And if you missed episode 6 of the AI Podcast, it’s worth a listen: Jim Burke, a graphic artist and founder of the Power Racing Series, spoke about how hackers are combining brains, a few hundred bucks and a pink Barbie Jeep to create an autonomous racing league that’s fast, furious and frugal.

To get the AI Podcast delivered to your iPhone or Android, subscribe via iTunes or Google Play Music.

Featured image: Valerie Everett, via Flickr

The post AI Podcast: Bias in AI appeared first on The Official NVIDIA Blog.


by Brian Caulfield via The Official NVIDIA Blog