
Do life hacks work? The truth is, we’ll never know


“Want to lose weight? Buy smaller plates.” “Mindfulness at work: a superpower to boost productivity.” “Leaving Facebook can make you happier.” That’s what the headlines and TED talks would have you believe. But are any of these psychological tricks – or life hacks, as they are often called these days – actually true? The truth is, we don’t know; and, in a very real sense, we can’t ever know, because of limitations that are inherent in the design of the relevant experiments – not just those on weight loss, mindfulness or social media, but just about all experiments in what we might call “lifestyle science”. That, at least, is the implication of a new study by a pair of Stanford psychologists, Nicholas Coles and Michael Frank. We’ll get to their work in a minute, but first I’d like to take you back to the German city of Mannheim in 1988.

It was here that psychologist Fritz Strack conducted a study that has since been cited almost 3,000 times and become a staple of psychology textbooks and New York Times bestsellers, including Daniel Kahneman’s Thinking, Fast and Slow. In the experiment, participants were given a cover story: that previous research using questionnaires had excluded participants who were unable to use their hands to fill in the form, and that this study would explore the feasibility of instead holding the pen in your mouth. Half the participants were asked to hold the pen between their teeth (which forced their mouth into a smile) and half between their lips (which forced their mouth into a neutral pout) while they viewed a selection of cartoon strips. Sure enough, the participants who were smiling when they saw the cartoons rated themselves as more amused than the participants who were pulling a neutral (if slightly odd) expression. Importantly, when they were asked afterwards whether they’d suspected anything fishy was going on, none of the participants showed any sign of realising that the pen-in-mouth cover story was simply a way to get them to smile. Strack seemed to have shown that – at least sometimes – our facial expressions determine our moods, rather than vice versa.

For the next couple of decades, Strack’s findings stood unchallenged. That is, until 2011, when psychology unearthed its “replication crisis” and the Scheiße really hit the fan. The crisis started not with Strack’s work, but with a series of studies by Daryl J Bem showing that – among other things – participants showed better-than-chance performance when asked to guess which of two curtains was hiding a pornographic picture. Bem’s findings confronted researchers with a stark choice: either (a) seeing into the future is possible or (b) our criteria for evaluating psychology experiments are too lax. Unsurprisingly, psychologists settled on the second option, and started worrying about how many of the field’s long-established findings would hold up to the scrutiny of replication – that is, if we did the same experiment again, would we get the same result? Of course, being able to replicate experiments and obtain the same result is one of the fundamental tenets of science, but one to which – to our shame – we psychologists have long turned a blind eye.

The replication crisis wasn’t kind to Strack’s study. Two large-scale international replications – one in 2016 and one in 2022, with altogether about 6,000 participants – found that the mood boost from holding a pen between your teeth was, at best, infinitesimally small. How small? Well, when happiness is measured on a seven-point scale, the increase shown by the pen-in-teeth participants worked out at 0.04 points – effectively zero. (When, last month, I put it to Strack that these findings undermined his conclusions, he simply pointed to a “considerable number of studies in which the effect was demonstrated”, though it wasn’t clear which particular studies he had in mind, or why their findings should override those of the two recent mega-studies.)
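To get a feel for just how tiny that is, here’s a back-of-the-envelope sketch in Python. The 0.04 figure is from the replication studies; the standard deviation is an assumed, plausible spread for mood ratings, not a reported one.

```python
# Illustrative only: putting a 0.04-point shift on a 7-point scale in context.
# The mean difference comes from the replication studies; the standard
# deviation below is a hypothetical, typical spread for happiness ratings.
mean_difference = 0.04   # pen-in-teeth minus pen-in-lips, 7-point scale
assumed_sd = 1.5         # assumed spread of individual happiness ratings

cohens_d = mean_difference / assumed_sd
print(f"Standardised effect size (Cohen's d): {cohens_d:.3f}")  # about 0.027
# By convention, d = 0.2 counts as a "small" effect in psychology;
# this is well below even that threshold.
```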

Once we begin digging deeper, though, the findings of the 2022 study get much more interesting – and start to cast light on issues beyond mere “replication”. For starters, the research team found that if you simply ask participants to smile, rather than tricking them into smiling, the mood boost they show is 10 times greater (though, as you’d expect, it’s still relatively modest in absolute terms; smiling is never going to cheer you up as much as winning the lottery, or watching your team win the Champions League). Most intriguingly, the happiness boost from smiling was biggest for participants who had correctly figured out that this was the hypothesis that the researchers were testing.


Of course, psychology researchers have long known that participants will often guess the experimenter’s hypothesis and – whether consciously or unconsciously – behave in a way that supports it (or, more rarely, undermines it; a kind of “screw you!” effect). It was these “demand characteristics” (as with everything, researchers have a special name for the phenomenon) that Strack’s pen-in-mouth method was designed to overcome in the first place.

What has been unclear, however, is exactly why participants seem to work hand-in-glove with experimenters to produce the desired results. When a participant reports a higher happiness score after smiling rather than frowning, are they just pretending to be happier to please or help the experimenter, or are they actually happier? After all, as Nicholas Coles, lead author on the 2022 smiling mega-study, told me: “In medicine, placebo effects have been discussed for centuries in the context of things like pain relief.” It doesn’t seem particularly unlikely, he said, that “similar effects can occur in psychology, for instance, in the context of things like happiness interventions”. Much like patients whose symptoms improve after being given a placebo sugar pill, if you’re led to believe that smiling makes you feel happier, maybe it will.

This is exactly what Coles set out to investigate in his latest study. Half of the participants were told that the researchers expected to find that smiling would lead to increased happiness ratings compared with a neutral expression. The other half were told that the researchers expected to find no difference. Surprisingly, both groups gave higher happiness ratings after smiling than after adopting a neutral expression, although the effect was more pronounced for participants who’d been told to expect this pattern. This is not what you’d expect if participants were just trying to please the experimenter by producing the desired result.

To get to the bottom of these findings, Coles asked participants a couple of follow-up questions. First, he asked them to rate how motivated they were to support the experimenter’s hypothesis. He found – counter to all received wisdom about demand characteristics – that these ratings could not explain the main findings at all. That is, participants did not seem to be raising or lowering their happiness ratings to help the experimenter. Next, he asked participants to rate how much they believed in the idea that simply smiling makes you happier (which, by now, has pretty much become accepted folk wisdom). This time, he struck gold. Just as with a placebo pill, the more strongly you believe that smiling makes you happier, the more it does.

This got Coles wondering: how many “demand characteristics” findings in psychology research are the result not of participants giving experimenters a helping hand – which is what we’ve always thought – but of participants acting to confirm their own prior beliefs, the ones they brought with them to the experiment?

The answer, it turns out, is a lot. Poring through the literature, Coles unearthed almost 200 studies in which – just like in his own smiling study – researchers explored the effect of simply telling participants what the study hoped to find. It wasn’t possible for Coles to talk to the participants of these original studies – many of which are decades old – so instead he did the next best thing. First, he gave new participants potted descriptions of the studies and of the results the researchers hoped to find: “Listening to happy or sad music will make you feel the opposite emotion”, “Your aggression will fall after watching an aggressive film”, “Describing a country as ‘democratic’ will make you less likely to support military action against it”. He then asked them: “Supposing you were taking part in this study, how motivated would you be to help the experimenter get the expected result? And how much do you personally believe in the claim that the study is testing?”

The findings were clear: the results of the original experiments were retrospectively predicted by how much today’s participants personally believed in the claim being tested, but not by their motivation to help (or hinder) the experimenter.


In retrospect, this seems obvious. Of course, if you think listening to sad music will cheer you up (perhaps because it has done so in the past), it’s likely that it will. But, in terms of the psychology literature, it’s difficult to overstate the extent to which this turns conventional wisdom on its head: every experimental psychologist is taught from day one that it’s vital not to let participants know what the experimenter is hoping to find, lest they oblige. But that was sheer vanity. It turns out that participants’ behaviour in experiments is shaped not by our hopes but by their beliefs.

Coles’s study hasn’t yet been peer-reviewed or published, just posted on PsyArXiv, a website where psychology researchers share their work in progress. That said, the idea that peer-reviewed and published equals true is exactly what caused the replication crisis in the first place. For example, the finding that participants can see into the future has been peer-reviewed and published (in Bem’s 2011 paper), but – unless our fundamental understanding of the physical universe is wrong – it’s not just untrue, but impossible.

But assuming Coles’s findings hold water, the implications for psychology research – and the “life hacks” that we’ve all been sold on the back of them – are catastrophic. The beliefs that participants bring to the experiment affect the results, and not just a little bit: Coles found that the average placebo effect (eg feeling amused because that’s what you expect the experiment to do) is just as powerful as the average real effect (eg feeling amused because the experimenter has told you a genuinely funny joke). It’s as if, in medicine, sugar-pill placebos – on average – worked just as well as the real thing.

Let’s backtrack for just a moment here to make sure we’re clear what we mean by an “experiment”. In medical research, the gold standard for experiments is the randomised, double-anonymised, placebo-controlled trial. Half of the participants (chosen at random) get the new would-be wonder drug, half get a placebo; and, crucially, neither the experimenters nor the participants themselves know who got what. In this way, medical experiments factor in placebo effects from the start. Indeed, in normal circumstances, no drug will be licensed for prescription until it has been shown to beat a placebo.
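To see why this design isolates a drug’s genuine effect, here’s a minimal simulation sketch in Python. All the numbers are invented for illustration; the point is only the logic: the expectation (“placebo”) boost appears in both groups, so it cancels out when we compare them.

```python
# Minimal sketch of the logic of a placebo-controlled trial.
# Everything here is hypothetical: we assume every participant who takes
# a pill gets an expectation ("placebo") boost, and that only the drug
# group gets a genuine pharmacological effect on top.
import random

random.seed(42)

PLACEBO_BOOST = 1.0  # hypothetical improvement from expecting to improve
DRUG_EFFECT = 0.8    # hypothetical genuine effect of the drug
N = 1000             # participants per group

def improvement(gets_drug: bool) -> float:
    """Simulated symptom improvement for one participant."""
    noise = random.gauss(0, 1)  # individual variation
    return PLACEBO_BOOST + (DRUG_EFFECT if gets_drug else 0.0) + noise

# Random assignment; neither side knows who got what, so expectations
# (and hence the placebo boost) are identical in both groups.
drug_group = [improvement(True) for _ in range(N)]
placebo_group = [improvement(False) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Drug group mean improvement:    {mean(drug_group):.2f}")
print(f"Placebo group mean improvement: {mean(placebo_group):.2f}")
# The between-group difference estimates DRUG_EFFECT alone, because the
# placebo boost is common to both groups and cancels out.
print(f"Estimated genuine drug effect:  {mean(drug_group) - mean(placebo_group):.2f}")
```

Run it and the estimated difference comes out close to the 0.8 we built in, even though both groups improve. This is exactly the control that, as we’ll see, most lifestyle-science experiments lack.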

The majority of psychology experiments, however, don’t get anywhere near this standard. The problem is that for many of the phenomena psychologists are interested in – particularly those with “real world” applications – no convincing placebo is possible.

Let’s say, for example, we want to know whether mindfulness meditation really makes you more productive at work. We run a study, and sure enough, our participants show higher productivity after doing the mindfulness course than before. But was it the mindfulness course that helped, or simply the participants’ expectation that the mindfulness course would help that made them work harder afterwards? The only way to tell would be to have a control group who – analogous to the placebo group in a drug trial – think they’re doing a mindfulness course, but aren’t. But how would that work? If you removed the mindfulness elements, it would be obvious to those participants that they were in the control group. The same is true for the supposed benefits of deleting your social media apps or using a smaller plate: unless we can come up with a convincing sham intervention (like sham acupuncture, where they put the needles in the wrong place), we can never hope to separate out the effect of the intervention itself from the effect of participants’ beliefs about it.

Don’t get me wrong: none of this means that all psychology-based life hacks are mere bunkum. For all we know, some of them might work. The problem unearthed by Coles’s study is that – without proper control conditions – we will never know.

There is a silver lining, though you won’t hear it from the TED talkers with a book to sell. Given the power of our prior beliefs, the usefulness of any particular life hack in and of itself is often irrelevant: whether it’s using a smaller plate, deleting social media or practising mindfulness meditation, if you truly believe that something works then, for you at least, it probably will.



