Researchers replicated a classic paper on unsuccessful treatment of writer’s block. Then they tried to write it up.

Matt Brodhead

In 1974, Dennis Upper published a paper — well, to be precise, a blank page — entitled “The Unsuccessful Self-Treatment of a Case of ‘Writer’s Block.’” There have been several attempts to replicate the work, which has become a classic among a certain cohort of academics.

Until late last month, however, there was no multidisciplinary attempt to replicate the study. (As best we can tell, anyway. Who has time to do a proper literature review these days?) Now there is, along with an editor’s note that calls it “an exceptionally fine piece of scholarship.” We felt the best way to celebrate this auspicious occasion — coming about as far on the calendar from April 1 as one can — would be to interview the corresponding author of the new paper, Matt Brodhead, of Michigan State University. Lucky for us, he did not suffer from writer’s block, so he could respond to our questions by email.

Q: We can’t find the actual paper, even though the editor’s note refers to “the article below.” Did you bury your results in the supplemental information?

A: Believe it or not, you’re actually looking at the complete dataset. In the paper, we clearly failed to inform the reader that non-traditional data-analysis techniques may be necessary (e.g., binoculars, or the “zoom” feature on your computer). There’s been discussion about submitting an erratum, but nothing has materialized. And to be honest, every way we looked at these data, the results were overwhelming (you’ll see what we mean once you actually find them). We tried about 175 different ways to analyze our data to find the boundaries of this effect and present an honest picture of the limitations of our study. Every way we visualized and analyzed our data passed the intraocular assault test with flying colors. In the end, we didn’t feel right publishing only one analysis of the results. So, we figured we would just let our data speak for themselves.

Q: How, exactly, did you attempt to replicate Upper’s study? In particular, how did you perform the research without involving any human participants, as the “ethical approval” section reports?

A: It really was a team effort that followed the standard order of operations for academic work. We started dozens of Qualtrics surveys, IRB protocols, and Doodle polls, and scheduled Zoom meetings. But nothing ever quite came together. It was all talk and no action. And when our grant application never quite got submitted and the draft emails started to pile up, we knew we had something promising. At that point we scheduled a meeting with the chair of the IRB committee to see about expedited review. But neither the chair nor we ever quite got around to actually scheduling the meeting. So we pushed ahead with data collection and publication in the name of scientific breakthrough. These data just had to get out there.

Q: You mentioned on Twitter that you had to shop this paper around. How many journals rejected it? And why? Did you consider publishing the reviewers’ comments, as the journal that Upper published in did?

A: Good questions. Someone started to create a list of all the journals that rejected it, and why. But we’re not sure who was supposed to do that, or what the final count was. Upper’s original paper was superb, as evidenced by the reviewer comments on that paper, and it has come in handy on multiple occasions (e.g., every April 1st). We were only trying to replicate his findings across a broader group of disciplines (which is important), but we didn’t really offer anything novel. As many readers know, replications often aren’t reviewed with the same enthusiasm as provocative, novel findings. We were just happy to get ours published. After all, two of us are going up for reappointment and one of us is going up for tenure (the real challenge is remembering to list this publication on our vitae).

Q: Some might argue that the evidence against replication is overwhelming. After all, some two million papers are published every year. How do you respond?

A: Two million papers a year? That’s a lot of Doodle polls that go unanswered! But to get back to your question, we believe replication really is an important topic in science right now, as further evidenced by the editor’s note in our paper. Also, we enthusiastically support the mission of the many scientists around the globe who champion the use of only rigorous scientific methods. After scheduling an emergency meeting of all of the authors of this paper (and creating yet another unanswered Doodle poll), we have proposed a solution to this debate between the following parentheses ( ). We believe our answer will provide the information necessary for scientists on all sides of this debate to come together in the spirit of scientific inquiry.


6 thoughts on “Researchers replicated a classic paper on unsuccessful treatment of writer’s block. Then they tried to write it up.”

  1. I don’t believe their findings for one second! Everybody knows how ridiculously easy it is to fake the data, fudge the stats, and filibuster the Senate (the latter has no real meaning here, except to complete the alliteration series I carefully crafted). And did they preregister their study and publish it along with their paper? No, of course they didn’t! We all know what that means: they just published those aspects of their findings that conform to some a posteriori hypothesis. This all rings rather hollow to me…
