Clinician's Digest
Can We Trust Studies?
It took years for psychology to move from being perceived primarily as a soft science to being viewed as one built upon the bedrock of empiricism. Now an article by Jonah Lehrer in the December 13, 2010, New Yorker hearkens back to that earlier view by suggesting that psychology findings may be more ephemeral than we thought. In “The Truth Wears Off,” provocatively subtitled, “Is There Something Wrong with the Scientific Method?” Lehrer examines the well-known phenomenon of diminishing effect sizes in psychology research: studies that attempt to replicate original studies yield smaller and smaller results over time.
Lehrer’s article focuses on social psychologist Jonathan Schooler, whose landmark 1990 study introduced the concept of verbal overshadowing. Schooler found that people who are asked to describe an event immediately after it occurs have poorer recall of it later. The study overturned the prevailing notion that talking about something immediately would enhance recall. But each time he tried to replicate his original study, he found that the effect size kept shrinking, eventually by about 60 percent. “It was,” says Schooler, “as if nature gave me this great result, and then tried to take it back.”
Lehrer examines several reasons for the phenomenon of diminishing effects. Confirmation bias frequently plays a role, influencing researchers to find what they expect or want to find. For instance, numerous studies showing that optimism protects against cancer are slowly melting in the face of more hard-eyed research. Exaggerated results are another problem: some studies draw conclusions their small sample sizes can't support, which makes replication less likely. In fact, in the July 13, 2005, Journal of the American Medical Association, epidemiologist John Ioannidis concluded that 25 percent of the most frequently cited clinical trials in the scientific literature had findings that were exaggerated because of small sample groups.
However, Lehrer attributes much of the diminishing effect phenomenon to randomness. Case in point: researchers simultaneously conducted tests of the effects of cocaine on mice under rigorously controlled, identical conditions in three cities, and yet one city's results varied significantly. That's because in any experiment, Lehrer says, no matter how rigorously controlled, the possibility of outliers exists: people or other organisms who, for some indiscernible reason, end up at the far ends of the bell curve. Only large or multiply replicated studies can correct for this.
So can we discern incontrovertible facts from psychology research? Lehrer doesn’t seem very positive about that question. “The declining effect,” he concludes, “is troubling. . . . We like to pretend that our experiments define the truth for us. [But] when the experiments are done, we still have to choose what to believe.”
Although the phenomenon of diminishing effect is well-known to researchers, Northwestern University psychologist Jay Lebow, a frequent contributor to the Networker on psychotherapy research, points out that Lehrer ignored the many studies that don't show diminishing effect sizes. “There are endless examples of well-established findings in the social sciences where something is found and the effect is regularly replicated,” Lebow says. Numerous replicated studies show a strong association between individual distress and relationship problems, and a steady, beneficial effect of psychotherapy.
Lehrer’s article, which created a huge buzz among researchers and the public, is a valuable reminder to read studies more carefully and not to rely on abstracts or media accounts of the findings.