Back in the early 1900s, the German biologist Jakob Johann Baron von Uexküll couldn’t escape the implication that the inner lives of animals like jellyfish and sea urchins must be radically different from those of humans.

Uexküll was fascinated by how meaty, squishy nervous systems gave rise to perception. Noting that the sense organs of sea creatures and arachnids could perceive things that ours could not, he realized that giant portions of reality must therefore be missing from their subjective experiences, which suggested that the same was true of us. In other words, most ticks can’t enjoy an Andrew Lloyd Webber musical because, among other reasons, they don’t have eyes. On the other hand, unlike ticks, most humans can’t smell butyric acid wafting on the breeze, and so no matter where you sit in the audience, smell isn’t an essential (or intended) element of a Broadway performance of Cats.

Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it OK to predict the future based on the past? When is it OK to be biased?

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kinds of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”

One of the most effective ways to change people’s minds is to put your argument into a narrative format — a story — but not just any story. The most persuasive narratives are those that transport us. Once we depart from normal reality and enter the imagined world of a story, we become highly susceptible to belief and attitude change.

In this episode, you’ll learn from psychologist Melanie C. Green the four secrets to creating the most persuasive narratives possible.

When it comes to group activities — projects that require teams of people to work on a series of concrete tasks to reach a tangible goal — what do you think is the most important quality that group members should possess? Should they be smart? Should they be assertive? Should they nominate a leader or divide into pairs?

This is the question that psychologist Christopher Chabris has been pondering for several years now. He believes the answer is collective intelligence.

Fearing that new technology will lead to lazy thinking is an old concern, one that goes back at least as far as Socrates, who was certain that scrolls would make people dumb because they would grow to depend on “external written characters” instead of memorization. Just about every new technology and medium has been vilified at some point by that era’s Luddites as finally being the end of deep thinking and the beginning of idiocracy. It never happens, of course, and I doubt it ever will.