In 1835, at a tavern in Bavaria, a group of 120 people met to drink from a randomized assortment of glass vials.

Before shuffling them, they divided the vials into two sets. One contained distilled water from a recent snowfall; the other held a solution made by dissolving a grain of salt in 100 drops of that water, then diluting one drop of the result into another 100 drops, and repeating that dilution 30 times in all.
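
To appreciate why the design was so damning, consider the arithmetic (a back-of-the-envelope note): thirty successive 1-in-100 dilutions shrink the salt's concentration by a factor of

\[
\left(\frac{1}{100}\right)^{30} = 10^{-60}.
\]

A grain of salt contains roughly \(10^{21}\) molecules, so the expected number of salt molecules left in the final vial is about \(10^{21} \times 10^{-60} = 10^{-39}\), which is to say none. Chemically, both sets of vials were just water, and any difference the drinkers reported could only have come from their minds.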

They did this to test out a new idea in medicine called homeopathy, but it was the way they did it that changed things forever. By testing options A and B at the same time, but without telling the participants which option they were getting, they not only debunked a questionable medical practice, they also helped invent modern science and medicine.

Nearly 180 years later, a company in California tried something similar. About 700,000 people gathered inside a virtual tavern to share news and photos and stories both happy and sad. The company then used some trickery so that some people randomly encountered more happy things and others more sad things.

They did this to test out a new idea in social networking called emotional contagion, but it was the way they did it that changed how many people felt about gathering online. By testing options A and B at the same time, but without telling people which option they were getting, they not only learned whether a computer program could make its users happier or sadder, they also created a backlash that resulted in a large-scale, worldwide panic.

Though we always learn something new when we perform an A/B test, we don't always support the pursuit of that knowledge. That is strange, because without A/B testing we have to live with whatever option the world delivers to us, whether by chance or by design. Should we use cancer drug A or B? Should we try gun control policy A or B? Should we try education technique A or B? It seems like the natural response would be to test A on half the people and B on the other half, see which one works best, and go with that from then on. But as you will learn in this episode of the You Are Not So Smart Podcast, new research shows that a significant portion of the public does not feel this way, enough to cause doctors and lawmakers and educators to avoid A/B testing altogether.


Have you ever been in a classroom or a business meeting or a conference and had a question or been confused by the presentation, and when the person running the show asked, “Does anyone have any questions?” or, “Does anyone not understand?” or, “Is anyone confused?” you looked around, saw no one else raising their hands, and then chose to pass on the opportunity to clear up your confusion?

If so, then, first of all, you are a normal, fully functioning human being with a normal, fully functioning brain, because not only is this common and predictable, there’s a psychological term for why most people don’t speak up in situations like these. It’s called pluralistic ignorance.

In this episode of the You Are Not So Smart Podcast we sit down with my friend and one of my favorite journalists, author Will Storr, whose new book just hit the shelves here in the United States. It’s called Selfie: How We Became So Self-Obsessed and What It’s Doing to Us.

The book explores what he calls “the age of perfectionism”: our struggle with the many modern pressures to meet newly emerging ideals and standards of the person we ought to be, and how trying to become that person is an impossible task. As he says in the book, “perfectionism is the idea that kills,” and you’ll hear him explain what he means by that in the interview.

For the 155th episode of the You Are Not So Smart Podcast, David McRaney, four experts, and a bunch of YANSS fans got together for a deep dive into how we turn perception into reality, how that reality can differ from brain to brain, and what happens when we dangerously disagree on the truth.

In 1990, psychologist Walter Mischel and his team released a landmark study into delayed gratification.

They offered kids a single marshmallow now, or two marshmallows later if they could resist temptation for 20 minutes. They found that the children who could wait were more likely to be successful later in life. They had higher SAT scores, lower divorce rates, higher incomes, lower body mass indexes, and fewer behavioral problems as adults.

Today, if you go to YouTube and search for “The Marshmallow Test,” you will find thousands of videos in which parents test their children to see if they can wait for the marshmallow. It’s understandable, because throughout the early 2000s a slew of TED talks, popular books, and viral articles suggested that you could use the test to predict your child’s chances of reaching their life goals. Plus, it’s fun, it’s easy, and you can eat all the extra marshmallows.

The marshmallow test is now one of the most well-known studies in all of psychology, right up there with the Milgram shock experiments and the Stanford prison experiment, but a new replication suggests we’ve been learning the wrong lesson from its findings for decades.

What makes you happy? As in, what generates happiness in the squishy bits inside your skull?

That’s what author and neuroscientist Dean Burnett set out to answer in his new book, Happy Brain, which explores the environmental and situational factors that lead us toward and away from happiness, as well as the neurological underpinnings of joy, bliss, comfort, love, and connection.

In the episode you’ll hear all that and more as we talk about what we know so far about the biological nature of happiness itself.