Replication studies, ceiling effects, and the psychology of science

On May 25, I found Erika Salomon’s tweet:

The story started when the journal Social Psychology decided to publish replication attempts, successful or failed, instead of conventional papers for a Replications Special Issue (Volume 45, Number 3 / 2014). The journal accepted proposals from scientists stating which studies they wanted to replicate, and registered the accepted proposals in advance. This way, the issue's editors, Brian Nosek and Daniel Lakens, could ensure that a study was published no matter its outcome, successful or not.

All the replications were direct replications, which means they used the same experimental procedure and the same statistical methods as the original studies. Before each replication attempt began, the original data, procedure and analysis methods were scrutinized, and the data was shared with the replicating group. Moreover, an author of the original paper was invited to review the corresponding proposal and have a say in whether it was accepted. All of this happened before the replication studies were run.

Finally, the replication studies were performed and their results published.


The consequences of failing to replicate a study

Now comes the problem: What if the second group failed to replicate the findings of the first group? There are different ways of looking at this from here on out. The first person such a negative outcome affects is the original study’s author, whose reputation is at stake. Given the gravity of the situation, is the original author allowed to ask for a replication of the replication?

Second, during the replication study itself (and given the eventual negative outcome), how much of a role is the original author allowed to play in performing the experiment, analyzing the results and interpreting them? This could swing both ways. If the original author is fully involved in the analysis, there is a conflict of interest. If the original author is excluded from the analysis, the replicating group could be biased toward a negative outcome for various reasons.

Simone Schnall, a psychology researcher at the University of Cambridge, writes on the SPSP blog (linked to in the tweet above) that, as an author of a paper whose results have been unsuccessfully replicated and reported in the Special Issue, she feels “like a criminal suspect who has no right to a defense and there is no way to win: The accusations that come with a ‘failed’ replication can do great damage to my reputation, but if I challenge the findings I come across as a ‘sore loser.’”

People on both sides of this issue recognize the importance of replication studies; there’s no debate there. But these problems show that replication studies need to be designed, reviewed and published within a support structure just as firm as the one that exists for original research, or they risk becoming personalized. Forget who replicates the replicators; it could just as well become who bullies the bullies. And in the absence of such rules, replication studies are being actively disincentivized. Simone Schnall acceded to a request to replicate her study, but the fallout could set a bad example.

In her commentary, Schnall links to a short essay by Princeton University psychologist Daniel Kahneman titled ‘A New Etiquette for Replication’. In it, Kahneman writes, “… tension is inevitable when the replicator does not believe the original findings and intends to show that a reported effect does not exist. The relationship between replicator and author is then, at best, politely adversarial. The relationship is also radically asymmetric: the replicator is in the offense, the author plays defense.”

In this blog post by one of the replicators, the phrase “epic fail” is an example of how things can get personalized. (Note: the author of the post has since struck out the words and apologized.)

Various stakeholders have suggested ways to eliminate these issues. One is to ask replicators to keep things specific: address the questions and answers raised in the original study rather than the author and her/his credentials. Another is to regularly publish reports of replication results and make them a routine part of the scientific literature, instead of confining them to a special issue.

This is one concern that Schnall raises in her answers (in response to question #13): “I doubt anybody would have widely shared the news had the replication been considered ‘successful.’” So there’s a bias to address here: are journals likelier to publish replication studies that fail to replicate previous results? Erasing this bias requires publishers to actively incentivize replication studies regardless of their outcome.

A paper published in Perspectives on Psychological Science in 2012 paints a slightly different picture. It looks at the number of replication studies published in the field and pegs the replication rate at 1.07%. Despite the low rate, one of the paper’s conclusions was that most published replication studies reported successful, not unsuccessful, replications. It also notes that, among replication studies published since 2000, the fraction reporting successful outcomes stands at 69.4% and the fraction reporting unsuccessful outcomes at 11.8%.


At the same time, Nosek and Lakens concede in this editorial: “In the present scientific culture, novel and positive results are considered more publishable than replications and negative results.”


The ceiling effect

Schnall does raise many questions about the replication, including an allegation that its data show a ceiling effect. As she describes it (in response to question #8):

“Imagine two people are speaking into a microphone and you can clearly understand and distinguish their voices. Now you crank up the volume to the maximum. All you hear is this high-pitched sound (“eeeeee”) and you can no longer tell whether the two people are saying the same thing or something different. Thus, in the presence of such a ceiling effect it would seem that both speakers were saying the same thing, namely “eeeeee”.

The same thing applies to the ceiling effect in the replication studies. Once a majority of the participants are giving extreme scores, all differences between two conditions are abolished. Thus, a ceiling effect means that all predicted differences will be wiped out: It will look like there is no difference between the two people (or the two experimental conditions).”

She states this as an important reason to get the replicators’ results replicated.
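To make the mechanics concrete, here is a minimal simulation sketch in Python. The numbers are entirely hypothetical and not drawn from either study: two conditions differ by a full point on some underlying judgment, but because the response scale tops out below both means, most participants give the maximum score and the observed difference largely disappears.

```python
# A toy simulation of a ceiling effect. All numbers are hypothetical and
# chosen only for illustration; they are not taken from either study.
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical participants per condition

# Underlying judgments: condition B is genuinely one point higher than A.
latent_a = rng.normal(loc=8.0, scale=1.5, size=n)
latent_b = rng.normal(loc=9.0, scale=1.5, size=n)

# But responses are recorded on a 1-to-7 scale, so anything above the
# maximum is simply recorded as 7 (the "volume cranked to the maximum").
observed_a = np.clip(latent_a, 1, 7)
observed_b = np.clip(latent_b, 1, 7)

print(f"Latent difference in means:   {latent_b.mean() - latent_a.mean():.2f}")
print(f"Observed difference in means: {observed_b.mean() - observed_a.mean():.2f}")
print(f"Responses at ceiling: {np.mean(observed_a == 7):.0%} (A), "
      f"{np.mean(observed_b == 7):.0%} (B)")
```

Whether the replication data actually suffer from this problem is precisely what is in dispute; the sketch only shows why a saturated scale can make two genuinely different conditions look the same.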


My opinions

// Schnall’s argument that the presence of a ceiling effect is a reason to have the replicators’ results replicated implies that there could be a problem with the method used to evaluate the original hypothesis. Both the original and the replication studies used the same method, and the emergence of an effect in one but not the other implies the “fault”, if any, could lie with the replicators, for improperly performing the experiment, or with the original author, for choosing an inadequate set-up to verify the hypothesis. Therefore, the thing Schnall feels strongly about, scrutiny of her methods, should also have been formally outlined: a replication study is not just about the replication of results but about the replication of methods as well.

// Because both papers have passed scrutiny and have been judged worthy of publication, it makes sense to treat them as individual studies in their own right rather than one being a follow-up to the other (even though technically that’s what they are), and to consider them together instead of selecting one over the other, especially in terms of the method. This sort of debate gives Simone Schnall room to publish an official commentary in response to the replication effort and makes the process inclusive. In some sense, I think this is also the sort of debate that Ivan Oransky and Adam Marcus think scientific publishing should engender.

// Daniel Lakens explains in a comment on the SPSP blog that the introduction, method, and analysis plan were peer-reviewed by the original authors, not by an independent group of experts. This was termed “pre-data peer review”: a review of the methods, not the numbers. It is unclear to what extent this was sufficient, because it is only on scrutinizing the numbers that any ceiling effect becomes apparent. While post-publication peer review can check for this, it is not formalized (at least in this case) and does little to mitigate Schnall’s situation.

// Schnall’s paper was peer-reviewed. The replicators’ paper was peer-reviewed by Schnall et al. Even if both passed the same level of scrutiny, they did not pass the same type of it. On this basis, there might be reason for Schnall to be involved with the replication study. Ideally, however, the replication would have been better formulated, with conventional peer review, so that Schnall’s involvement would not have been necessary. Apart from the conflict of interest that could arise, a replication study needs to be fully independent to be credible, just as the peer-review process is trusted because it is independent. So while it is commendable that Schnall shared all the details of her study, it should have been possible for her participation to end there.

// While I’ve disagreed with Kahneman over the previous point, I do agree with point #3 in his essay that describes the new etiquette: “The replicator is not obliged to accept the author’s suggestions [about the replicators’ M.O.], but is required to provide a full description of the final plan. The reasons for rejecting any of the author’s suggestions must be explained in detail.” [Emphasis mine]

I’m still learning about this fascinating topic, so if I’ve made mistakes in interpretations, please point them out.


Featured image: shutterstock/(c)Sunny Forest