YANSS 115 – How we transferred our biases into our machines and what we can do about it

Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kinds of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”

Download – iTunes – Stitcher – RSS – Soundcloud

This episode is sponsored by The Great Courses Plus. Get unlimited access to a huge library of The Great Courses lecture series on many fascinating subjects. Start FOR FREE with Your Deceptive Mind taught by neurologist Steven Novella. Learn about how your mind makes sense of the world by lying to itself and others. Click here for a FREE TRIAL.

This episode is also sponsored by HelloFresh. Step outside of your cooking comfort zone with the best meal kit service on the planet. Get fresh food delivered straight to your door in recyclable packaging, and make meals using easy-to-follow recipe cards. For a limited time, you can get $30 off your first shipment by heading to HelloFresh.com and entering the offer code YANSS30.

Support the show directly by becoming a patron! Get episodes one day early and ad-free. Head over to the YANSS Patreon Page for more details.

Like all learning systems, our algorithms must make sense of the present based on a database of old experiences. The problem is that looking backward we see a bevy of norms, ideas, and associations we’d like to leave in the past. Machines can’t tell if a bias from a generation ago was morally good or neutral, nor can they tell if it was unjust, based on arbitrary social norms that led to exclusion. So how do we teach our machines which inferences they should consider useful and which they should consider harmful?
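To make that mechanism concrete, here is a minimal sketch in Python, using invented data and a hypothetical counting-based resume ranker (not any real hiring system), of how a model that only summarizes the past reproduces a past exclusion when scoring the present:

```python
# A toy sketch (hypothetical data, not any real hiring system) of how a
# learner absorbs a historical bias. It scores candidates by how often
# resumes with each feature were hired in the past.
from collections import defaultdict

# Historical decisions: (features, hired). The old record skews toward
# one group for reasons that had nothing to do with ability.
history = [
    ({"degree": "engineering", "group": "A"}, True),
    ({"degree": "engineering", "group": "A"}, True),
    ({"degree": "engineering", "group": "B"}, False),  # qualified, passed over
    ({"degree": "arts", "group": "A"}, False),
]

# Learn P(hired | feature=value) by simple counting.
counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [hires, total]
for features, hired in history:
    for item in features.items():
        counts[item][0] += hired
        counts[item][1] += 1

def score(candidate):
    """Average historical hire rate across the candidate's features."""
    rates = [counts[item][0] / counts[item][1] for item in candidate.items()]
    return sum(rates) / len(rates)

# Two candidates identical except for group membership:
a = {"degree": "engineering", "group": "A"}
b = {"degree": "engineering", "group": "B"}
print(score(a), score(b))  # group B scores lower purely from past exclusion
```

The two candidates are identical on every merit-relevant feature, yet the ranker separates them anyway, because group membership happened to correlate with hiring in the historical record it learned from.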

In this episode of the You Are Not So Smart Podcast, three experts on artificial intelligence help us understand how we accidentally transferred our prejudices and biases into our infant artificial intelligences. We will also explore who gets to say what is right and what is wrong as we try to fix all this. And you’ll hear examples of how some of our early machine minds are creating the very futures they predict, because when their predictions influence the systems they monitor, our actions fold the results back into their next predictions.
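That feedback loop is easy to simulate. The sketch below uses invented numbers and a deliberately crude rule (send observers wherever the record is largest) to show how a predictor that controls its own data collection amplifies an initial skew, even when the two districts it watches are identical:

```python
# A minimal feedback-loop sketch (invented numbers, not any real system):
# a predictor sends observers to wherever the most incidents were recorded,
# but incidents are only recorded where observers go, so an early skew in
# the data compounds with every round.
import random

random.seed(0)
true_rate = {"north": 0.5, "south": 0.5}   # the districts are identical
recorded  = {"north": 3,   "south": 1}     # historical data starts skewed

for round_ in range(10):
    # "Prediction": patrol the district with the larger record.
    target = max(recorded, key=recorded.get)
    # Incidents are only observed where we patrol...
    if random.random() < true_rate[target]:
        recorded[target] += 1
    # ...and the new record feeds the next round's prediction.

print(recorded)  # e.g. {'north': 8, 'south': 1}: the gap grew, the rates didn't
```

Nothing about the underlying world changes across the ten rounds; only the records do, and the records are all the predictor can see.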

Those experts are:

Shannon Vallor — a professor of philosophy at Santa Clara University. “My research explores the philosophical territory defined by three intersecting domains: the philosophy and ethics of emerging technologies, the philosophy of science, and phenomenology. My current research project focuses on the impact of emerging technologies, particularly those involving automation and artificial intelligence, on the moral and intellectual habits, skills and virtues of human beings – our character.”

Alistair Croll — who teaches about technology and business at Harvard Business School. He is an entrepreneur, author, and event organizer. “I spend a lot of time understanding how organizations of all sizes can use data to make better decisions, and on startup acceleration. I’m also fascinated by what happens when the rubber of technology meets the road.”

Damien Williams — an artificial intelligence expert who writes about how technology intersects with human society. “For the past nine years, I’ve been writing, talking, thinking, teaching, and learning about philosophy, comparative religion, magic, artificial intelligence, human physical and mental augmentation, pop culture, and how they all relate.”


Links and Sources

Download – iTunes – Stitcher – RSS – Soundcloud

Previous Episodes

Boing Boing Podcasts

Cookie Recipes

ProPublica’s report on machine bias

The Affirmative Action of Vocabulary

Joanna Bryson on A.I.

Jana Eggers on A.I.

Shannon Vallor’s Website

Shannon Vallor’s Twitter

Damien Williams’ Website

Damien Williams’ Twitter

Alistair Croll’s Website

Alistair Croll’s Twitter

Machines taught by photos learn a sexist view of women

Semantics derived automatically from language corpora necessarily contain human biases

Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints

How Vector Space Mathematics Reveals the Hidden Sexism in Language

Content analysis of 150 years of British periodicals

IMAGE: The DNA Machine from Blade Runner 2049