Several years ago, a group of four chemists believed they had stumbled upon evidence that contradicted a fairly well-established model in fluid dynamics.
Between 2013 and 2015, the researchers published a series of four papers detailing their results — two in ACS Macro Letters and two in Macromolecules. Timothy P. Lodge, the journals’ editor and a distinguished professor at the University of Minnesota in Minneapolis, explained that the results were “somewhat controversial,” because they appeared to contradict the generally accepted model for how some polymer fluids move.
Indeed, the papers sparked debate between the authors and other experts who questioned the new data, arguing it didn’t upend the previous model.
Then, in 2015, the authors realized their critics might be correct.
The authors began to suspect something wasn’t quite right after a 2015 paper in ACS Macro Letters reported significantly different results. Last author Zhen-Gang Wang, a professor of chemical engineering at California Institute of Technology, explained:
We first got alarmed by the large discrepancy with our results reported in a paper by Cao and Likhtman (ref. 6 in our retractions) for a very similar system.
Wang and his team wanted to understand the source of this discrepancy, so they went back to check their data and protocols:
To resolve the discrepancy, we did a lot of independent checking using different protocols, including using a new program written from scratch by another student in one of the PIs’ group.
These independent tests “convinced us that our published results were wrong,” Wang said.
Further digging revealed the source of the problem: a glitch in the computer code used to derive their simulation results. Wang explained:
As we have described in our retractions, the source of errors was a coding glitch in the treatment of the thermal bath.
Once Wang and his team understood what had happened, they contacted Lodge and other experts in the field to let them know:
We immediately informed the editor in chief for Macromolecules and ACS Macro Letters, Prof. Timothy Lodge, and sent out an e-mail message to dozens of people in the field, acknowledging that our previous results were wrong and telling them about the errors.
Wang said this was an “unfortunate but inadvertent mistake.” And Lodge agreed:
There is no indication that the mistakes were deliberate in any way.
Even so, because the results were based on unreliable code, Lodge said "a simple correction wasn't enough." He reviewed the matter with the American Chemical Society (ACS), which publishes both journals, and with other experts in science publishing, and determined that "a retraction was the right thing":
This case is unfortunate, but it happens. Science is self-correcting.
We commend Wang and his co-authors for their transparency and efforts to uncover and rectify the problem.
The retraction notices also provide an extensive account of what happened. Here’s the retraction notice for the first paper in the series, “Evolution of Chain Conformation and Entanglements during Startup Shear,” published in ACS Macro Letters in 2013 and cited 12 times:
In recent years, we published a series of four papers in ACS Macro Letters and Macromolecules (1–4) reporting Brownian Dynamics simulation results on startup shear of entangled polymers for shear rates γ̇ in the regime γ̇τd > 1 but γ̇τR < 1, where τR and τd are respectively the Rouse time and reptation time. Our results showed significant chain stretching (measured by the contour length of the primitive chain) and suggested, based on analysis of the different components of stress, that the origin of the shear stress overshoot was due to chain stretching followed by retraction instead of chain orientation, in contradiction to the predictions of the reptation/tube theory. Our results also implied violation of the empirical stress-optical rule generally believed to hold in this regime, as pointed out by Masubuchi and Watanabe (5). Subsequently, Cao and Likhtman (6) published their simulation results on a very similar system and found results in strong disagreement with ours — their results showed little chain stretching and conformed to the stress-optical rule.
In order to resolve these discrepancies, we performed many tests, including using a new code written from scratch. We are now convinced that our previous results were wrong. Both the new code and independent runs (on LAMMPS with the “fix deform” protocol) at Akron by Yexin Zheng, a joint student between Shi-Qing Wang and Mesfin Tsige using the equilibrated copies of systems from three different sources (one of our previous copies, a copy provided by Dr. Robert Hoy, and a new copy generated at Akron), produced results similar to those reported by Cao and Likhtman.
The source of errors has been identified to be in the treatment of the heat bath under shear, which resulted in much lower temperatures than T = 1 (in scaled units) for the sheared samples. The same errors were introduced in both the Langevin heat bath and the DPD heat bath. The reduced temperatures resulted in longer relaxation times. The chain stretching reported in our earlier work was thus a result of this artifact. These errors invalidate all the data at finite shear rates reported in our published papers, and render our conclusions baseless. The authors therefore request retraction of the Article “Evolution of Chain Conformation and Entanglements during Startup Shear” and the other three affected articles.
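The notices don't show the offending code, but the failure mode they describe is a classic one in stochastic simulation: if the friction and random forces of a Langevin-type heat bath don't satisfy fluctuation-dissipation balance, the system quietly equilibrates to the wrong temperature while everything else looks normal. The Python sketch below is purely illustrative (it is not the authors' code, and the mis-scaled noise term is a hypothetical stand-in for such a glitch): a single thermostatted degree of freedom held correctly at the target temperature T = 1 in scaled units, versus a version whose random kicks are scaled by dt instead of √dt and which therefore runs far too cold.

```python
import numpy as np

# Illustrative only: a hypothetical thermostat glitch, not the retracted papers' code.
rng = np.random.default_rng(0)
kT, gamma, dt, steps = 1.0, 1.0, 0.01, 200_000

def kinetic_temperature(noise_scale):
    """Evolve one Langevin-thermostatted degree of freedom; return measured <v^2> (= T in scaled units)."""
    v, v2_sum = 0.0, 0.0
    for _ in range(steps):
        # Velocity update: friction drag plus a random kick from the heat bath
        v += -gamma * v * dt + noise_scale * rng.standard_normal()
        v2_sum += v * v
    return v2_sum / steps

# Correct fluctuation-dissipation balance: noise amplitude sqrt(2*gamma*kT*dt)
print("correct thermostat:", kinetic_temperature(np.sqrt(2 * gamma * kT * dt)))  # ~1.0
# Hypothetical glitch: noise scaled by dt instead of sqrt(dt)
print("buggy thermostat:  ", kinetic_temperature(np.sqrt(2 * gamma * kT) * dt))  # ~kT*dt = 0.01, far too cold
```

Both runs are perfectly stable, which is what makes this class of bug so insidious: as the notice explains, the too-cold samples simply relax more slowly, and the artifact surfaces only when the measured temperature, or an independently written code, is checked against the target.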
The other three retractions contain almost identical wording. Here are links to the notices and papers:
- Retraction notice for “Origin of Stress Overshoot during Startup Shear of Entangled Polymer Melts,” published in ACS Macro Letters in 2014 and cited 25 times.
- Retraction notice for “Coupled Effect of Orientation, Stretching and Retraction on the Dimension of Entangled Polymer Chains during Startup Shear,” published in Macromolecules in 2014 and cited 12 times.
- Retraction notice for “Molecular Mechanisms for Conformational and Rheological Responses of Entangled Polymer Melts to Startup Shear,” published in Macromolecules in 2015 and cited 11 times.
Hat tip: Rolf Degen
How refreshing!
“If an honest man is wrong, after it is demonstrated that he is wrong he either stops being wrong or he stops being honest.”
– unknown (related by Andre Bijkerk)
Thankfully, these authors chose to stop being wrong.
This is how science should work . . . transparently and honestly. It shows that retracting wrong findings serves the best interests of science and scientists. What a contrast this case provides to the defensive maneuvering in so many others.
I seriously wonder how many papers are flawed because of (in practice unavoidable) coding errors. Retractions like this one are rare, but there are probably many undiscovered false results in the literature.
An attempt to answer your question is Soergel's "Rampant software errors may undermine scientific results" at https://f1000research.com/articles/3-303/v1
The number he gets is between 5% and 100%. In my field, in my experience, it's about 50%.
Coding errors may not be completely unavoidable, but people should be doing a much better job of recognizing that they can arise, and of double- and triple-checking. I worked as a programmer before I went to grad school, and I was astonished in grad school (and still am sometimes) to see how cavalierly people treated data and code. There are lots of ways to test code: try a simple example with a known answer; cross-check by using two different methods and seeing whether they agree; or have two different people write code using different approaches (see the sketch below). Particularly if your results seem novel and unexpected, you need to check!
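As a toy illustration of that kind of cross-check (ours, not the commenter's; the free-diffusion setup is just a convenient example with a known answer), one can pit an independent simulation against a closed-form result and fail loudly when they disagree:

```python
import numpy as np

rng = np.random.default_rng(1)

# Method 1: closed-form result for 1-D free diffusion, MSD(t) = 2*D*t
def msd_analytic(D, t):
    return 2.0 * D * t

# Method 2: independent Monte Carlo estimate from simulated Brownian trajectories
def msd_simulated(D, t, n_paths=50_000, n_steps=100):
    dt = t / n_steps
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_paths, n_steps))
    return float(np.mean(steps.sum(axis=1) ** 2))

D, t = 0.5, 4.0
exact, estimated = msd_analytic(D, t), msd_simulated(D, t)
# Agreement within a few percent is expected; a large gap flags a bug in one of the methods
assert abs(exact - estimated) / exact < 0.05, (exact, estimated)
print(f"analytic MSD = {exact:.3f}, simulated MSD = {estimated:.3f}")
```

The same pattern scales up: in the retractions above, it was exactly this kind of independent reimplementation (a new code written from scratch, plus LAMMPS runs at Akron) that exposed the original error.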
If the code is nontrivial, this is actually a big problem. New models, newer problem domains, and the low complexity of simple test examples can make comparison against other codes or the literature unreasonable, particularly when the problem the code solves has no analytical solution. And since most code is written by PhD students, who on earth could be paid as little as they are and still have enough domain knowledge to reproduce my code a second time?
Dr. Wang and team are true scientists. They not only retracted their own erroneous papers, they first did the work that invalidated them. Such integrity in the pursuit of truth is all too rare.