Ever wanted to hone your skills as a scientific sleuth? Now’s your chance.
Thanks to the American Society for Biochemistry and Molecular Biology (ASBMB), which is committed to educating authors on best practices in publishing, figure preparation, and reproducibility, we’re presenting the second in a series, Forensics Friday.
Take a look at the image below, and then take our poll. After that, click on the link below to find out the right answer.
Retraction Watch readers may have noticed what seems like a growing trend: co-first authorships. While the move might seem like a way to promote equality, some researchers are worried that it’s having the opposite effect. In response, the Journal of Clinical Investigation (JCI) recently created additional requirements for shared first authorship. We asked Arturo Casadevall, the first author of an editorial describing those changes, to answer a few questions.
Retraction Watch (RW): The title of your editorial, as well as the editorial itself, refers to bias. What kind of bias is of concern when it comes to co-first authors?
A pediatrics journal has retracted a 2016 article purporting to be the first randomized controlled trial on the effects of vitamin D supplements on autism over concerns about the reliability of the findings.
The paper, “Randomized controlled trial of vitamin D supplementation in children with autism spectrum disorder,” appeared in the Journal of Child Psychology and Psychiatry and has been cited 27 times, according to Clarivate Analytics’ Web of Science, earning it a “highly cited paper” designation relative to other papers of a similar age.
The authors came from Egypt, Saudi Arabia, China, Chile, the UK and Norway. According to the abstract, the researchers looked at the effects of vitamin D supplements on 109 boys and girls with autism:
Jon Stewart is a powerful figure in American media. How powerful is he? So powerful that his departure in 2015 as host of The Daily Show on Comedy Central may have tipped the 2016 presidential election to Donald Trump.
While the presence of publication bias – the selective publishing of positive studies – in science is well known, debate continues about how extensive such bias truly is and the best way to identify it.
The most recent entrant in the debate is a paper by Robbie van Aert and co-authors, who have published a study titled “Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis” in PLoS ONE. Van Aert, a postdoc at the Meta-Research Center in the Department of Methodology and Statistics at Tilburg University in the Netherlands, has been involved in the Open Science Collaboration’s psychology reproducibility project but has now turned his attention to understanding the extent of publication bias in the literature.
Using a sample of studies from psychology and medicine, the new “meta-meta-analysis” diverges from “previous research showing rather strong indications for publication bias” and instead suggests “only weak evidence for the prevalence of publication bias.” The analysis also found that the mild publication bias it did detect affects psychology and medicine to a similar degree.
Retraction Watch asked van Aert about his study’s findings. His answers have been lightly edited for clarity and length.
RW: How much are empirical analyses of publication bias influenced by the methods used? Based on your work, do you believe there is a preferred method for assessing bias?
Ever wanted to hone your skills as a scientific sleuth? Now’s your chance.
Thanks to the American Society for Biochemistry and Molecular Biology (ASBMB), which is committed to educating authors on best practices in publishing, figure preparation, and reproducibility, we’re presenting the first of a new series, Forensics Friday.
Take a look at the image below, and then take our poll. After that, click on the link below to find out the right answer.
“Dissatisfied.” That’s how Nick Brown and James Heathers describe their reaction to the progress — or lack thereof — in the case of Nicholas Guéguen, a psychology researcher whose work the two data sleuths have questioned.
Brown and Heathers first wrote about the case in 2017. In a new blog post, they write that the science integrity office at the University of Rennes-2, where Guéguen works, pulled its punches both in its investigation of its faculty member and in two reports it issued last year about the case. (Brown and Heathers, the latter of whom has called himself a “data thug,” had hoped to make a preliminary report about the case available last year, but said the university discouraged them from doing so, a stance that, if true, would not surprise us, given that many institutions prefer to sit on reports of such investigations.)
Retraction Watch (RW): You “undertook a survey of publication rates, for authors with multiple retractions in the biomedical literature, to determine whether they changed after authors’ first retractions.” What did you find?
Retraction Watch readers may know the name Elisabeth Bik, whose painstaking work inspecting tens of thousands of Western blot images has led to dozens of retractions in journals including PLOS ONE. Today in The Scientist, we profile Bik, a microbiologist who calls herself a “super-introvert.”