Weekend reads: ‘Race science’ under review; researcher blames critiques on anti-LGBTQ discrimination; the politics of the politics of smell

Dear RW readers, can you spare $25?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up past 450. There are more than 50,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 300 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our list of nearly 100 papers with evidence they were written by ChatGPT?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].


9 thoughts on “Weekend reads: ‘Race science’ under review; researcher blames critiques on anti-LGBTQ discrimination; the politics of the politics of smell”

  1. Rebecca Sear is free to administer testing to any population she wishes. It would be very inexpensive. Publishing new data is always better than screaming at the sky.

    1. She has every right as a scientist to criticize another scientist’s work. Why do you feel the need to characterize her in such a demeaning way for doing so?

  2. Re: “Race Science”

    “Screaming at the sky” is better than exploiting a database, which has been shown in published articles to be inaccurate, while blithely ignoring those articles.

    The problem with the “hard-nosed race-realist” scientists is the dismal quality of their scientific work (regardless of their politics). The broader problem has been the willingness of reviewers and editors to dismiss criticism as “nitpicking” (or perhaps the new term is “screaming at the sky”).

    1. “Nitpicking” statistics when someone recycles available data is a valid criticism, but the only way to disprove IQ theory is more data. Weak statistics alone only prove that the sample size was small, not that the data are wrong. It’s curious that, for the last 30 years, the nitpickers haven’t produced better results.

  3. 1. Weak statistics in this case concern the representativeness of samples. Whether that is “nitpicking” depends on whether you are interested in valid statistical inference about populations. One may also wish to dwell on the issue of measurement invariance in comparing populations with respect to statistics such as averages of IQ test scores. Differences in averages are a simple given; their true meaning is not.
    2. Sample size is not necessarily an issue. A small sample can be representative of the population of interest. Small samples come with low power; unrepresentative samples come with incorrect or inaccurate inference to the population(s) of interest (a sketch after this list illustrates the difference).
    3. Valid criticism does not create an obligation to collect new data. If it did, it would not be possible to review a manuscript, and PubPeer would cease to exist.
    4. Finally, it is not “very inexpensive” to collect representative data using a psychometric instrument that is measurement invariant. It is inexpensive to use potentially questionable convenience samples and to ignore the issue of measurement invariance.
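A minimal simulation makes the representativeness point concrete. This is an editorial sketch, not the commenter’s: the IQ-style population (mean 100, SD 15) and the convenience sample drawn only from the upper half of the distribution are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population on an IQ-style scale: mean 100, SD 15.
population = rng.normal(100, 15, size=1_000_000)

# Small but representative sample: unbiased, just noisy.
small_representative = rng.choice(population, size=50, replace=False)

# Large convenience sample: drawn only from the upper half of the
# distribution (e.g., volunteers recruited in a selective setting).
upper_half = population[population > np.median(population)]
large_convenience = rng.choice(upper_half, size=50_000, replace=False)

print(f"true population mean:          {population.mean():6.2f}")
print(f"representative sample (n=50):  {small_representative.mean():6.2f}")
print(f"convenience sample (n=50,000): {large_convenience.mean():6.2f}")
# The small sample lands near 100, give or take a couple of points;
# the convenience sample sits near 112 no matter how large it grows.
```

The small sample is noisy but centered on the truth; the large convenience sample is precisely wrong. More data of the same biased kind does not repair the inference.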

    1. “Psychometric instrument” is a fancy way of saying “test printed on paper”. However, speaking fancily doesn’t change the fact that paper is inexpensive.

  4. The term “psychometric instrument” recognizes that the “test printed on paper” is designed to measure one or more substantive latent variables. The term “latent variable” recognizes that the variable of interest cannot be observed directly. In comparing groups, this raises the issue of “measurement invariance”: whether the relationships between the items in the test and the latent variables we want to measure are the same across groups. It is complicated; that is why entire scientific journals are dedicated to such matters (Psychometrika, Multivariate Behavioral Research, Journal of Educational Measurement, Applied Psychological Measurement).

    All this does not change the fact that paper is inexpensive, but developing a psychometric test that is measurement invariant in representative samples is not.
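A minimal simulation can also make the invariance point concrete. The setup below is purely illustrative and not drawn from any study (five items, a common loading of 0.7, and two item intercepts shifted in one group are all assumptions): two groups with identical latent ability distributions still produce different observed mean scores when item intercepts differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Both groups share the same latent ability distribution.
theta_a = rng.normal(0.0, 1.0, n)
theta_b = rng.normal(0.0, 1.0, n)

# Five items with a common loading; in group B two intercepts are
# shifted down (e.g., unfamiliar item content), violating invariance.
loadings = np.full(5, 0.7)
intercepts_a = np.zeros(5)
intercepts_b = np.array([0.0, 0.0, 0.0, -0.5, -0.5])

def sum_score(theta, intercepts):
    noise = rng.normal(0.0, 0.5, (theta.size, 5))
    items = np.outer(theta, loadings) + intercepts + noise
    return items.sum(axis=1)

print(f"group A observed mean: {sum_score(theta_a, intercepts_a).mean():+.2f}")
print(f"group B observed mean: {sum_score(theta_b, intercepts_b).mean():+.2f}")
# Latent means are identical (both zero), yet group B scores about
# one point lower: the gap comes from the items, not from ability.
```

The observed gap here is an artifact of the instrument, which is exactly what a test of measurement invariance is meant to rule out before group averages are compared.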

    1. Tailoring a test to get the answer you want is not science. The criticism of the work was not that tests needed to be rewritten, but that the testing results were statistically “unrepresentative or too small to be meaningful”.

    2. Thousands of years of divergent evolution should not be obscured through manipulated testing; it should instead be studied to better understand how genetic expression can advance intelligence.
