Dear RW readers, can you spare $25?
The week at Retraction Watch featured:
- Former Harvard researcher, now at Moderna, loses paper following postdoc’s report
- Dean in Bulgaria accused of plagiarism
- 19 months and counting: Former Hindawi journal still hasn’t marked paper flagged by sleuths
- Bribery offers from China rattle journal editors. Are they being scammed?
- Exclusive: Researcher who received settlement to leave University of Iowa won’t be starting new job
Our list of retracted or withdrawn COVID-19 papers is up past 450. There are more than 50,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 300 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our list of nearly 100 papers with evidence they were written by ChatGPT?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “Publisher reviews national IQ research by British ‘race scientist.'”
- “Columbia, Penn State cancer researchers face retractions, one blames LGBTQ discrimination.”
- “Her Thesis on the ‘Politics of Smell’ Stirred the Online Masses. Here’s What She Thinks About It.”
- “NEJM Editor-in-Chief Eric Rubin told The BMJ that ‘for older manuscripts, correction is not necessarily appropriate unless there would be an effect on clinical practice.'”
- “Robust debate and discussion are crucial ingredients in the advancement of science, but should always be conducted with respect and civility.”
- “Accessibility worsens for blind and low-vision readers of academic PDFs.”
- “No one really knows how many scientific publications are published each year or how many journals exist.”
- “NIH launches ‘science of science’ pilot programme.”
- Researchers present “WithdrarXiv,” a dataset containing over 14,000 retracted papers.
- “How Bad Is Fraud in Alzheimer’s Research?”: a Q&A.
- “Retraction handling by potential predatory journals.”
- Researchers look at “Trends in Retraction of Orthopaedic Research Articles.”
- “Industry blanks Arab universities’ ‘sausage machine’ research.”
- “Unanswered questions in research assessment”: “Can global reform efforts be diverse and aligned?”, “Can we apply the scientific method to research assessment?” and “Whose values lead value-led approaches?”
- “Three years in, MIT’s OpenAg whistleblower lawsuit mired in delays.”
- “This fearless science sleuth risked her career to expose publication fraud” — and created our Hijacked Journal Checker.
- “Enhancing the trustworthiness of pain research: A call to action.”
- “Free-riding in academic co-authorship: The marginalization of research students.”
- “A Paper Mill Target Reflects”: a view “from the front lines of the fight for publishing integrity.”
- Researchers look into the “phenomenon of scientists monopolizing authorship in academic journals.”
- “Publishers are selling papers to train AIs — and making millions of dollars.”
- “Latin America is a leader in nonprofit open-access journals. But it struggles to give them global visibility.”
- The “state of predatory publishing, 15 years since the term was first coined.”
- “The rise of predatory publishing.”
- “Felonies” and “misdemeanours” in research: author discusses their adverse effects.
- “Incorrect pKa values have slipped into chemical databases and could distort drug design.”
- South Africa’s National Research Foundation “‘looking deeper’ at inequalities in research.”
- “Bad bar charts distort data — and pervade biology.”
- “Extracting treasure from trash: The corpus of scientific literature needs a drastic clean-up.”
- “‘Getting paid to review is justice’: journal pays peer reviewers in cryptocurrency.”
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
Rebecca Sear is free to administer testing to any population she wishes. It would be very inexpensive. Publishing new data is always better than screaming at the sky.
She has every right as a scientist to criticize another scientist’s work. Why do you feel the need to characterize her in such a demeaning way for doing so?
Re: “Race Science”
“Screaming at the sky” is better than exploiting a database that published articles have shown to be inaccurate, while blithely ignoring those articles.
The problem with the “hard-nosed race-realist” scientists is the dismal quality of their scientific work (regardless of their politics). The broader problem has been the willingness of reviewers and editors to dismiss criticism as “nitpicking” (or perhaps the new term is “screaming at the sky”).
“Nitpicking” statistics when someone recycles available data is a valid criticism, but the only way to disprove IQ theory is more data. Weak statistics alone only prove that the sample size was small, not that the data are wrong. It’s curious that, for the last 30 years, the nitpickers haven’t produced better results.
1. Weak statistics in this case concern the representativeness of samples. Whether that is “nitpicking” depends on whether you are interested in valid statistical inference about populations. One may also wish to dwell on the issue of measurement invariance in comparing populations with respect to statistics such as averages of IQ tests. Differences in averages are a simple given; their true meaning is not.
2. Sample size is not necessarily an issue. A small sample can be representative of the population of interest. Small samples come with low power. Unrepresentative samples come with incorrect or inaccurate inference to the population(s) of interest (see the simulation sketched after this list).
3. Valid criticism does not create the obligation to collect new data. If that were the case, it would not be possible to review a manuscript, and PubPeer would cease to exist.
4. Finally, it is not “very inexpensive” to collect representative data using a psychometric instrument that is measurement invariant. It is inexpensive to use potentially questionable convenience samples and to ignore the issue of measurement invariance.
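To make item 2 concrete, here is a minimal simulation sketch with entirely made-up numbers (the subgroup means, sizes, and sample sizes are illustrative assumptions, not data from any study under discussion). It contrasts a small simple random sample, which is noisy but unbiased, with a much larger convenience sample drawn from only one subgroup, which is precise but biased.

```python
# Hypothetical illustration of item 2: low power vs. invalid inference.
import numpy as np

rng = np.random.default_rng(seed=1)

# Made-up population: two subgroups with different mean scores on some test.
group_a = rng.normal(loc=100, scale=15, size=600_000)
group_b = rng.normal(loc=90, scale=15, size=400_000)
population = np.concatenate([group_a, group_b])
true_mean = population.mean()  # about 96 with these invented numbers

# (a) Small but representative: a simple random sample of 50 people.
small_random = rng.choice(population, size=50, replace=False)

# (b) Large but unrepresentative: 5,000 people, all drawn from subgroup A.
large_biased = rng.choice(group_a, size=5_000, replace=False)

for label, sample in [("small random sample", small_random),
                      ("large biased sample", large_biased)]:
    estimate = sample.mean()
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(len(sample))  # rough 95% CI
    print(f"{label}: {estimate:6.2f} ± {half_width:5.2f}  (true mean {true_mean:.2f})")
```

With these invented numbers, the small random sample typically yields a wide interval that still covers the true mean, while the large convenience sample yields a narrow interval centered on the wrong value: low power versus incorrect inference, which is the distinction drawn above.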
“Psychometric instrument” is a fancy way of saying “test printed on paper”. However, speaking fancily doesn’t change the fact that paper is inexpensive.
The term “psychometric instrument” recognizes the fact that the “test printed on paper” is designed to measure one or more substantive latent variables. The term “latent variable” recognizes that the variable of interest cannot be observed directly. In comparing groups, this raises the issue of “measurement invariance,” which concerns the question of whether the relationships between the items in the test and the latent variables that we want to measure are comparable over groups. It is complicated: that is why there are many scientific journals dedicated to such matters (Psychometrika, Multivariate Behavioral Research, Journal of Educational Measurement, Applied Psychological Measurement).
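One standard way to formalize measurement invariance (added here for readers unfamiliar with the term; it is not part of the comment above) is the multi-group linear factor model:

\[
x_{ijg} = \tau_{jg} + \lambda_{jg}\,\eta_{ig} + \varepsilon_{ijg},
\]

where \(x_{ijg}\) is person \(i\)’s score on item \(j\) in group \(g\), \(\eta_{ig}\) is the latent variable the test is meant to measure, and \(\tau_{jg}\) and \(\lambda_{jg}\) are the item’s intercept and loading in that group. Scalar measurement invariance requires \(\tau_{jg} = \tau_j\) and \(\lambda_{jg} = \lambda_j\) across groups; only then do differences in observed averages reflect differences in the latent means rather than differences in how the items function in each group.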
All this does not change the fact that paper is inexpensive, but developing a psychometric test that is measurement invariant in representative samples is not.
Tailoring a test to get the answer you want is not science. The criticism of the work was not that tests needed to be rewritten, but that the testing results were statistically “unrepresentative or too small to be meaningful”.
Thousands of years of divergent evolution should not be obscured through manipulated testing; it should instead be studied to better understand how genetic expression can advance intelligence.