In this episode of the You Are Not So Smart Podcast we sit down with one of the original cyberpunks, the famed journalist, documentarian, media theorist, all-around technology superstar and weirdo, Douglas Rushkoff.
MIT considers Rushkoff one of the “world’s ten most influential thinkers,” and in the episode we talk about his latest (and 20th) book, Team Human.
The book is a bit of a manifesto in which he imagines a new counterculture that would revolt against the algorithms that are slowly altering our collective behavior for the benefit of shareholders. Instead, he implores us to curate a digital, psychedelic substrate that embraces the messiness of human beings: our unpredictability, our pursuit of novelty and innovation, and our primate/animal/social connectedness.
In 2017, You Are Not So Smart produced three episodes about the backfire effect, and by far, those episodes were the most popular the show has ever done.
In fact, the famous web comic The Oatmeal adapted those episodes into a sort of special feature, and that comic was shared on Facebook a gazillion times, which led to stories about the comic in the popular media, which brought more listeners to the shows, and on and on it went. You can go see it at The Oatmeal right now, at the top of their page. It's titled "You Are Not Going to Believe What I Am About to Tell You."
The popularity of the backfire effect extends into academia. The original paper has been cited hundreds of times, and there have been more than 300 articles written about it since it first came out.
The backfire effect has a special allure to it because, on the surface, it seems to explain something we've all experienced: when we argue with people who believe differently than us, who see the world through a different ideological lens, they often resist our views and refuse to accept our way of seeing things, and it often seems like we do more harm than good, because they walk away seemingly more entrenched in their beliefs than before the argument began.
But…since those shows, researchers have produced a series of new studies into the backfire effect that complicate things. Yes, we are observing something here, and yes we are calling it the backfire effect, but everything is not exactly as it seems, and so I thought we should invite these new researchers on the show and add a fourth episode to the backfire effect series based on what they’ve found. And this is that episode.
In this episode, we sit down with negotiation expert Misha Glouberman who explains how to talk to people about things — that is, how to avoid the pitfalls associated with debate when two or more people attempt to come to an agreement that will be mutually beneficial.
Parker Wiseman ran for student office in high school with photocopied flyers. He debated the public school system in social studies class. In college he took the courses and shook the hands that would help him join that peculiar Southern subculture of the embattled Mississippi Democrat, a pugnacious sort who plays darts and drinks whiskey while wearing penny loafers and forces smiles meant to fool no one. People close to Parker Wiseman were not surprised when, at the age of 28, he became the youngest mayor in Starkville history.
When I met him, he was deep into his second term, 34 years old, with bright blue eyes neatly obscured by thin-framed spectacles hugging a cleanly shaved head. I had to wait for the person before me to finish a meeting before I could take up time in his schedule, but when the door opened he traded off quickly and was all laughs and smirks as I unpacked my bag. In conversation, he moved between two poses: leaning forward with shoulders high and elbows planted wide so he could clasp his hands and focus when I was talking, and reclining in an unwound ease when he was answering, one arm propping him up so he could lean into the back of the chair with his rear scooted to the forward edge of the seat and his feet as far apart as could be achieved with manners in dress slacks.
In this episode, science journalist Dave Levitan talks about his new book: Not a Scientist: How Politicians Mistake, Misrepresent, and Utterly Mangle Science.
In the book, Levitan takes us through 12 repeating patterns that politicians fall into when they talk about scientific research. Some are nefarious and intentional, some are based on ignorance, and some are just part of the normal business of politicians managing their public image or trying to appeal to their base. Not only do they often get the science wrong, they sometimes fail to communicate the nature of scientific inquiry and the goals of the scientific process itself.
Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?
“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kinds of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”
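The resume example Vallor describes can be sketched in a few lines. A model that does nothing more than learn historical hiring frequencies will faithfully replay the skew in its training data when it ranks new candidates. This is a minimal, hypothetical illustration; the data, the schools, and the scoring rule below are invented for the sketch, not drawn from any real hiring system.

```python
from collections import Counter

# Hypothetical historical hiring data: (school, hired) pairs.
# Past decisions favored graduates of school "A", so the data is skewed.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 20 + [("B", False)] * 80

# "Train": estimate P(hired | school) from past outcomes.
hired = Counter(school for school, was_hired in history if was_hired)
total = Counter(school for school, _ in history)
score = {school: hired[school] / total[school] for school in total}

# "Rank" two new, otherwise identical applicants.
applicants = ["B", "A"]
ranked = sorted(applicants, key=lambda school: score[school], reverse=True)
print(ranked)  # the "A" applicant outranks "B" purely because of past bias
```

This is exactly the bias Vallor wants in the tumor case (past examples should shape future detection) and does not want in the hiring case, where the historical pattern reflects past discrimination rather than anything about the candidates.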