IPL is building a platform and culture to encourage researchers to share their null results.

When social scientists design experiments to test certain theories or interventions, a key piece of the landscape is often invisible to them: null results from others who explored the same research questions. How is the research community hampered by the publication bias toward significant results? IPL’s Ala’ Alrababa’h and Scott Williamson have proposed a new effort to promote widespread reporting of null results. Here they share the inspiration behind their project and their hopes for its future.

Q: Why did you decide to tackle the problem of null results reporting? Did it ever affect your own work?

Scott Williamson: The problem really hit home for us during a survey project we ran in Jordan. We were trying to find ways to encourage more welcoming and inclusive attitudes toward Syrian refugees in the country. We showed videos that presented different messages about refugees and measured their effect on people’s attitudes toward refugees themselves and toward refugee policies. When the results came back, we found that attitudes were similar regardless of which video participants saw, or whether they saw a video at all.

We were disappointed, but we still thought the results were important, and we wanted to find a way to share them with other academics, as well as policymakers working in Jordan and elsewhere. But there wasn’t a good outlet to publish a paper like this—it wouldn’t be a fit for most academic journals. So we started thinking about how to make this kind of reporting easier.

Ala’ Alrababa’h: Part of the reason we wanted to share our null results was that many other people are studying ways to improve attitudes toward certain groups. Other papers have found that the types of messages we tested generally work. So it seemed problematic for our field that we tend to see only the studies that find this effect. We couldn’t know whether our study was an outlier or whether others had tried something similar and also found no effect. If we only observe studies with significant results, a consensus may form that an intervention works well when it actually doesn’t.

SW: Apart from testing interventions, there’s a lot of conventional wisdom about how the world works, both in academia and in what the general public thinks about all sorts of issues. So if your study verifies one of these theories, it may be more likely to be published, based on what we know about publication bias. But it’s often just as interesting, and perhaps more so, to find that some relationship you believe to exist just isn’t there. That can be really important in helping people rethink the topic or problem in question.

Q: Are academic researchers already concerned about the invisibility of null results, or is it a goal of your project to raise awareness about this?

AA: People are definitely aware of the problem, and many other solutions have been proposed. There was the idea of “results-blind review,” which would have reviewers decide whether to accept or reject a paper based on the study design and pre-analysis plan. That way, if the study ends with a null result, the journal would still have to publish the paper. A major political science journal tried this, but after publishing one issue it decided there were too many problems with the approach.

Another reason journals don’t want to publish null results is that the interpretation is not always clear. A null result doesn’t necessarily mean that the intervention doesn’t work; the study may have been underpowered, or something may have gone wrong in its design or implementation. You can only speculate.

Q: So if journals are the “demand” side of the problem, how does it look from the “supply” side, the researchers? You mention that they’re unlikely to share their null results if journals won’t publish them. What other incentives are at play?

SW: So the lack of interest on the journals’ part is the biggest disincentive, because it makes people reluctant to invest the time it takes to write up the results in a formal paper. In addition to that, I think there’s a sense that null results mean a study failed in some way. Study authors may worry that people will assume it wasn’t designed correctly or something was implemented poorly.

So there’s a concern that these results may reflect badly on the researchers, though most of the time this probably isn’t the case. We’re trying to push back against that idea. Everyone who does research is going to produce findings like these, and it’s useful to all of us if you put them out there.

AA: One way we’re trying to increase the “supply” of null results reports is to address that problem of time. Most people don’t want to write a thirty-page paper that has only a slim chance of being published. Our solution is a five-page template for reports that should be relatively easy to fill out. And we’re creating a repository that will provide a citation key for each report so that others can easily cite the work, which we hope will give people another incentive to contribute.

Q: In your paper you talk about a “credibility revolution” in the social sciences. How would you characterize this trend, and how is null results reporting connected to it?

SW: I think in part because of how the publication process works and how people get credit for their research, there often have been incentives to do bad research to validate appealing ideas. In a sense, people will essentially fish for the findings they want, to reinforce an idea they’re invested in establishing or simply to boost their chances of being published.

In the past several years, there has been a big push to limit the incentives for that behavior. For example, people are encouraged to pre-register their research designs and to invite replication of the analysis. We see our proposal as fitting into this broader project, by making it more transparent when we fail to find effects—results that might otherwise never be published.

AA: And IPL is an ideal group of people to start normalizing this trend of publishing null results. If our project is successful, researchers embarking on a project will be able to see the whole universe of interventions people have tried in a given area, including the ones that did not work. They can observe some of the differences between the interventions that worked and the ones that didn’t. They’ll know not to do exactly the same thing as a previous researcher, and can instead tweak certain aspects and see if they get a different result.

SW: If null results reporting becomes the norm, there will be some ideas that don’t hold up with this new trove of evidence, but maybe there are others that seem to be on shaky ground but will actually be reinforced. Just having more results will change the way we’re able to evaluate what is true—or seems to be true—and what is not.

To learn more about the project and see examples of IPL’s template for reporting null results, go here.