Why is psychology not a science?

More than half of the results are not reproducible

Researchers are under pressure to constantly produce new and surprising results. Yet this pressure rarely yields solid findings, as a large-scale analysis shows.

“You should clear up this mess,” wrote the Nobel laureate and psychologist Daniel Kahneman in an open email to colleagues in social psychology in 2012. The mess? A large number of study results that other researchers could not confirm when the experiments were repeated. Ever since, not only social psychology but also other areas of psychological research have struggled with doubts about their credibility. A large-scale analysis has now attempted to quantify the problem. Its results were published in the journal “Science”: of 100 studies that appeared in three psychology journals in 2008, only 39 could be confirmed. Studies reporting particularly surprising or weak effects were especially difficult to reproduce. “Of course, such a track record also unsettles students, who have to ask themselves how much of the basic knowledge they are learning actually holds up,” says the psychologist Fred Mast of the University of Bern.

The noise in the data

That weak effects in particular cannot be reproduced could be down to noise in the data, Mast explains. Such noise arises, for example, from individual differences between test subjects, their form on a given day, or small deviations in the experimental set-up. These fluctuations can create apparent effects, or small but real effects can vanish in them.
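
A minimal simulation makes this concrete. The sketch below is an illustration of the general point, not part of the analysis itself; the sample size and effect size are arbitrary choices. With small groups, pure chance occasionally produces an apparent effect, while a small but real effect is usually swallowed by the noise:

    # Illustration only: how sampling noise fakes effects and hides small real ones.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments = 10_000
    n_per_group = 20          # small groups, as in many lab studies

    false_positives = 0       # "effect" found although the true effect is zero
    missed_small_effects = 0  # real but small effect (d = 0.2) not detected

    for _ in range(n_experiments):
        # No true effect: both groups are drawn from the same distribution.
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

        # Small true effect of 0.2 standard deviations.
        c = rng.normal(0.0, 1.0, n_per_group)
        d = rng.normal(0.2, 1.0, n_per_group)
        if stats.ttest_ind(c, d).pvalue >= 0.05:
            missed_small_effects += 1

    print(f"apparent effects where none exist: {false_positives / n_experiments:.1%}")
    print(f"small real effects lost in the noise: {missed_small_effects / n_experiments:.1%}")

Run repeatedly, the first share hovers around the nominal 5 percent, while the second regularly exceeds 80 percent, which is one way of reading Mast's point about weak effects.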

Klaus Fiedler of Ruprecht-Karls-Universität in Heidelberg, however, is skeptical about how the analysis was carried out and about its results. He was one of the lucky ones whose studies held up under replication. “The analysis pursued the illusion of exact repetition, but omitted important controls,” says Fiedler. The experiments were supposed to run under the same conditions as in the original studies. “But the surrounding conditions cannot be reproduced exactly six to seven years later.” The world has changed; there are new perspectives and new everyday habits. And computer-based experiments cannot be repeated in exactly the same way, since hardware and software have developed rapidly since 2008.

Bad practices

For the psychologist Martin Kleinmann of the University of Zurich, however, there is no question that psychological research has a problem. One reason is that confirming known results earns little recognition; essentially only new findings get published. “That can lead to bad practices, for example repeating an experiment until you see a surprising effect,” says Kleinmann. Or omitting inconvenient results and reporting only selected findings. Some specialist journals are already responding to these problems by requesting all raw data, or by accepting studies for publication in advance, on the basis of their research question, before any data are collected.
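
How quickly such repetition produces spurious findings can be illustrated with a small simulation (a sketch for this article's point, not data from the analysis): a researcher who reruns an experiment with no real effect up to ten times, stopping as soon as a significance test drops below the usual 5 percent threshold, ends up with a reportable “effect” in roughly four out of ten cases.

    # Illustration only: repeating a null experiment until "significance" appears.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_researchers = 5_000
    max_attempts = 10      # the experiment is repeated at most ten times
    n_per_group = 20

    reported_spurious = 0
    for _ in range(n_researchers):
        for _ in range(max_attempts):
            # There is no true effect in either group.
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(0.0, 1.0, n_per_group)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                reported_spurious += 1   # "surprising effect" found and reported
                break

    print(f"researchers ending up with a spurious effect: "
          f"{reported_spurious / n_researchers:.1%}")
    # With ten attempts at alpha = 0.05, about 1 - 0.95**10, roughly 40%, report a
    # false positive, even though nothing real is going on.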

“Psychological Science”, one of the three examined journals, is also taking measures, as editor-in-chief Steve Lindsay writes in response to an inquiry. In the future, more evidence of reproducibility will be demanded, and surprising effects will be viewed with greater skepticism.

Further replication studies will have to show whether these efforts can clear up the mess in psychology. But, as Fiedler notes: “Research inevitably also includes results that later turn out to be wrong.”