Weekend reads: China cracks down; unearned authorship rife; new jargon for a new year

Would you consider a donation to support Weekend Reads, and our daily work?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up to 283. There are more than 38,000 retractions in our database — which powers retraction alerts in EndNote, LibKey, Papers, and Zotero. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].

6 thoughts on “Weekend reads: China cracks down; unearned authorship rife; new jargon for a new year”

  1. I hypothesize that the majority of scientists, if they spent 10 minutes with an AI like ChatGPT, would be refreshingly surprised by the crudeness and inaccuracy of current AI technology. When posed with general chemistry questions, ChatGPT provides answers that are jumbled and obviously incomplete to SMEs; when posed with theoretical research questions, it responds with a “maybe”; and when asked a regulatory effectiveness question, it is not decisive on the need for any adjustments to current thresholds. Papers written using this technology, in its current state, may pass through peer review; however, this is already occurring when fraudsters paraphrase Wikipedia and journals cannot find SMEs for peer review. All of this is to say, SMEs will readily recognize ChatGPT materials that make it through a faulty peer-review process and will dismiss their publishers in the same manner that we currently ignore low-quality, Beall’s-list-esque journals.
    A separate observation on ChatGPT… one only needs to ask simple political questions, wherein the name of a politician is the only modification, to observe the programmers’ biases. For example, when “what is controversial about Trump” is queried, a list of opposition-party talking points is returned. By contrast, when “what is controversial about Biden” is queried, the response offers no opposition-party talking points… instead, ChatGPT is programmed to chastise you for asking a “non-productive, disrespectful, derogatory, and demeaning” question. This type of psy-op-esque programming, while expected when built by political operatives (especially in light of recent Twitter Files revelations), should be publicly shunned by the scientific community; not simply for the obvious bias, but for the implications of harder-to-detect bias that would skew research if future versions of AI are relied upon to generate novel research suggestions. I would propose that any AI used for research have open code so that programmed biases are public knowledge.

    1. Lenny – I thought you made some good points, but I went to check out your concern that ChatGPT answers differently for Trump than for Biden. The results:
      What is controversial about Trump?
      “There are many things that are controversial about former President Donald Trump. Some examples include his policy positions, his personal behavior, and his actions while in office. For example, Trump’s efforts to repeal the Affordable Care Act and his hardline stance on immigration were controversial and widely debated. Additionally, Trump faced criticism for his handling of the COVID-19 pandemic and for making baseless claims about voter fraud in the 2020 election. Trump’s use of Twitter to communicate and his tendency to make inflammatory and divisive statements also sparked controversy and criticism.”
      What is controversial about Biden?
      “There are many things that are controversial about President Joe Biden. Some examples include his policy positions, his personal behavior, and his actions while in office. For example, Biden’s efforts to address climate change and implement immigration reform have been controversial and widely debated. Additionally, Biden has faced criticism for his handling of the COVID-19 pandemic and for some of his foreign policy decisions. Some people have also raised concerns about the influence of certain special interests on Biden’s policymaking. As with any president, there are also those who disagree with his policy positions and actions, and who view him as a controversial figure.”
      Perhaps this has changed over time.

      1. Chesire, I tried my query again and indeed the output has been changed. Given that my comments were based on my very recent experience with ChatGPT (on 01/05/22), and that the change in output coincides with the window of my commenting here, I would like to believe that Retraction Watch readers played some role in balancing the algorithm. Perhaps our discussion will spur someone with excess free time to do a deeper dive for additional biases.

        I do have screenshots from 01/05, as I was discussing my observations with a friend. If anyone is interested in obtaining them for use in putting together a review article, leave a comment with contact info and I will check back here in a few days.

        1. FWIW, I was skeptical when I first read Lenny’s post, so I checked it for myself, and indeed the response I obtained was as Lenny had originally described. I did not make a note of the day and time when I checked, but I am certain it was the same day that Lenny’s original post appeared on RW.

  2. > “Female author representation differs between journals from the United States of America, Europe, and Asia: a 10-year comparison of five medical disciplines.”

    There’s some funny methodology going on here, IMO: a small number of journals (one for each of five specialties per continent, 15 in total), and collapsing national statistics into continental ones in two out of three cases seems like it would do significant damage to comparability. How do you get something useful out of lining up US statistics vs. “all of Europe” vs. “all of Asia”? Especially when you then pick just one or two countries (the UK and Germany for Europe; only Japan — where medical schools were revealed in 2018 to be artificially depressing acceptance of women — for Asia) to establish your baseline ratio of women physicians for an entire continent. I don’t think I’d be too surprised if the overall picture held up; the Japanese medical school scandal probably means the numbers appear _better_ than they actually are. But it’s kind of apples and oranges from the start.
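
The side-by-side test that Lenny and the repliers above describe is straightforward to script, so it can be re-run on any date and the outputs archived for later comparison. The sketch below is one possible way to do that, not either commenter's own method: it assumes the openai Python client (v1.x), an API key in the environment, and the gpt-3.5-turbo model name, all of which are assumptions rather than details from the thread. It simply sends the same question template with each name substituted and prints the replies.

```python
# A minimal sketch for re-running the prompt comparison discussed in the
# comments above. Assumptions (not specified by the commenters): the
# `openai` Python client (v1.x) is installed, OPENAI_API_KEY is set in the
# environment, and "gpt-3.5-turbo" is a reasonable stand-in model name.
from datetime import date

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

TEMPLATE = "What is controversial about {name}?"
NAMES = ["Trump", "Biden"]


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(f"Run date: {date.today().isoformat()}")
    for name in NAMES:
        prompt = TEMPLATE.format(name=name)
        print(f"\n--- {prompt}")
        print(ask(prompt))
```

Logging each run with its date, as the script does, would also help settle questions like the one above about whether the responses changed between one check and the next.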
