First, a housekeeping note: We migrated web hosts this week, and while the move seems to have gone mostly smoothly, we’ve noticed a few issues: Comments aren’t threaded (even though we have them set up to be), categories aren’t properly nesting, and a small percentage of comments didn’t transfer over with the rest, the way they should have. We’re working on getting this resolved, and looking into whether we can (or should) restore upvoting and downvoting on comments, so please let us know of any other issues you see, and thanks as always for your patience.
Here’s what was happening elsewhere:
- Scientists behaving badly: Andrew Gelman explains why we should care.
- A married Yale cardiology researcher smitten with a junior scientist told her “that she was choosing the wrong man since he was in a position to ‘open the world of science’ to her,” The New York Times reports.
- A Northwestern researcher has agreed to pay $475,000 to settle claims of grant mismanagement. The university had already paid nearly $3 million over related claims.
- Google Scholar is filled with junk science, says Jeffrey Beall.
- Fabricating and plagiarism: Mark Israel on when researchers lie.
- Why are publishers and editors wasting time formatting citations? asks Todd Carpenter.
- Are improper kinetic models hampering drug development? asks Ryan Walsh.
- “Should there be standard protocols for how researchers attempt to reproduce the work of others?” Kerry Grens explores the question.
- Journals including Science and Nature have joined forces to create guidelines designed to improve reproducibility.
- Who needs science when we have anecdotes?
- “A concert pianist had demanded that a review of a 2010 concert he gave be removed from internet search results under the European ‘right to be forgotten’ law.” Could scientists found guilty of misconduct be next?
- Inside Higher Ed covers the Fazlul Sarkar lawsuit.
- Peer review is under attack, reports Tom Spears of The Ottawa Citizen, who has done his own “sting” of predatory journals.
- Rolf Degen has an update on a case of suspected plagiarism by six professors of sports medicine in Germany.
- Nature Nanotechnology is making double-blind peer review an option. “Our decision to offer double-blind has been driven by concerns from sections of our community that biases, such as those against female authors or researchers based at less prestigious labs and institutions, could play a role in the review process.”
- A Harvard vice provost authorized a study that photographed faculty and students in class without notice, The Crimson reports.
- Star Trek medicine: A scientist’s apology for basic research.
- Reuters has done away with comments on its news stories.
- “Sorry if I seemed to be legally liable in any way:” Sorry Watch on an apology stemming from a request to retract a Washington Post column.
“wasting time formatting citations”: I wonder whether Todd Carpenter has ever heard of BibTeX…
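For readers who haven’t used it, here is a minimal sketch of the workflow that quip alludes to; the .bib entry, citation key, journal name, and style choice below are hypothetical placeholders, not taken from any real paper. The author records the reference metadata once, and BibTeX plus a style file handle the formatting:

```latex
% --- refs.bib (hypothetical entry; the metadata is typed once) ---
@article{smith2014junk,
  author  = {Smith, Jane},
  title   = {Junk Science in Academic Search Engines},
  journal = {Journal of Hypothetical Examples},
  year    = {2014},
  volume  = {7},
  pages   = {1--10}
}

% --- paper.tex (the manuscript refers only to the citation key) ---
\documentclass{article}
\begin{document}
As noted previously \cite{smith2014junk}, search engines also index junk.
\bibliographystyle{plain} % a publisher-supplied .bst file could be swapped in here
\bibliography{refs}       % BibTeX builds and formats the reference list from refs.bib
\end{document}
```

In principle, switching target journals then means swapping the style file rather than reformatting the reference list by hand, which seems to be the point being made.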
Jeffrey Beall and Tom Spears both make valid points about the problem of junk science, but they point in the wrong direction. I think too many scientists close their eyes to their own responsibility to determine whether a paper is junk or plausible, rather than relying on the name of the journal to make that judgment for them. Gullibility or blind faith, whichever you prefer, is not science. Apparently impeccable sources also produce junk: arsenic replacing phosphorus, anyone?
Good papers get cited, mediocre ones get ignored, and the rest get mentioned on PubPeer!
Warrick, “Good papers get cited”: well, that hypothesis is clearly disproved by the 677 citations of Shigeaki Kato’s latest 3 of 28 fraudulent papers, all now retracted. So one could say that Google simply accumulates everything, including junk, and that “junk” can be perceived very differently depending on who you are speaking to. For example, it is possible that the aristocratic members of the scientific community, who only publish in journals with IFs exceeding 15, may consider the work published by the plebs of the community, who publish in journals with IFs of 1-3, to be “junk”. Similarly, scientists from many developing countries may consider the “junk” in so-called “predatory” OA journals to be heaven-sent. Perspective, social rank and culture will mould that assessment, I believe.

Google is not to be blamed, and can in fact serve positive functions, namely tracking fraud and duplications and enabling effective post-publication peer review. Are you suggesting that those who are gullible, and who have been tricked in some way by “predatory” publishers, are not true scientists? Or are they, at least a tranche of them, simply a smart set of scientists who understand the way the publishing game is being played? Unlike Google, however, Thomson Reuters and Elsevier must be held accountable for the “junk” that appears in their databases, because they claim to have screening teams and expert panels that approve the inclusion of journals in those databases. They do not use spiders like Google. More academics need to keep this difference in mind.
BibTeX is a garbage-in, garbage-out tool. But what Carpenter fundamentally wants is for authors to go to the trouble of not just adopting, but thinking in terms of a baroque publisher-created system of semantics-free identifiers in order to save the latter a truly minimal amount of time.
Not a retraction, but a curb-stomp in Nature of work that also appeared in Nature. The claim that one could monitor superoxide bursts with a mitochondrially expressed circularly permuted yellow fluorescent protein (cpYFP), and its putative role in ageing, has been critiqued by a who’s who of oxidative stress/gerontology researchers:
http://www.nature.com/nature/journal/v514/n7523/full/nature13858.html
The ‘mitoflash’ probe cpYFP does not respond to superoxide
Markus Schwarzländer, Stephan Wagner, Andreas J. Meyer, Yulia G. Ermakova, Vsevolod V. Belousov, Rafael Radi, Joseph S. Beckman, Garry R. Buettner, Nicolas Demaurex, Michael R. Duchen, Henry J. Forman, Mark D. Fricker, Lee J. Sweetlove, David Gems, Andrew P. Halestrap, Barry Halliwell, Ursula Jakob, Iain G. Johnston, Nick S. Jones, David C. Logan, Bruce Morgan, Tobias P. Dick, Florian L. Müller, David G. Nicholls, S. James Remington, Paul T. Schumacker, Christine C. Winterbourn & Michael P. Murphy
A note stating “Tracking retractions as a window into the scientific process” appears as well, covering a portion of the title of each article.
Other behaviors described here:
http://journal.frontiersin.org/Journal/10.3389/fpls.2014.00536/full
Agreed with JATdS: popularity is no measure of quality. Many important papers get completely ignored; the most famous example is Mendel’s research on peas, and another good one is the formulation of the theory of relativity well before Einstein published his (copied, some wonder?) with greater political support. Many highly cited authors have recently been shown to be great manipulators.
I reread the first post on RW that explains its raison d’être, as well as the motto “Tracking retractions as a window into the scientific process,” and thus I am confused about the relevance of the current Yale imbroglio to this blog.
I accept the Yale committee’s findings and abhor Simons’ behavior.
I believe this was ethical misconduct, but no papers have been retracted. This is different from Yale’s Felig fraud, which has been forgotten but can be found here: http://www.nytimes.com/1981/11/01/magazine/a-fraud-that-shook-the-world-of-science.html
I agree with CR. Citation is becoming a biased process.
Many ‘scientists’ cite only their friends and/or papers published in ‘known’ journals.
Citation is as biased as the impact factor, from which it emanates.
Personally, from now on I have decided to avoid citing papers from journals with ‘high impact factors’ as much as I can, so as not to participate in their arrogance and ‘pseudo-prestige’.
Dear jpaul, while I can appreciate your sentiment, I think that it is very misdirected. For example, I believe that it is very wrong to use your papers, particularly the reference list, to somehow “punish” the system, or to target journals that you may have a personal beef with. Allow me to explain. By selectively omitting from your papers valid references to work that should be cited, you would be actively distorting the literature. You would also be manipulating the literature, not for any valid academic reason, but for personal revenge. I think this is wrong, and so is the attitude. I coined a term for this in 2013: snub publishing [1], and went about trying to quantify its extent using a very small subset of the plant science literature [2].

Rather than snubbing the system you despise (i.e., the impact factor) and the journals you consider arrogant and pseudo-prestigious (i.e., the high-IF journals you loosely refer to), may I suggest that you correctly reference the papers that should be referenced, and instead channel any valid criticism into letters to the editor, comments or research notes that can be published in high- or low-IF journals in order to expose the issues. My personal experience is that wild or unfocused anger or frustration towards a concept, an entity, or even an individual – even if you are totally right – will backfire, sometimes disastrously. Finally, the corollary to what you state is even more frightening: if we were to support the opposite end of the academic spectrum and reference their papers (namely the bottom-feeders in several of the OA journals on Beall’s lists), then we would be doing an even worse injustice to academics and the literature.
[1] http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)35-37.pdf
[2] http://link.springer.com/article/10.1007/s12109-014-9355-6
Yet I think jpaul does have a point: when citing a source there are usually several options, not all of which one has necessarily read. Many people I know prefer to cite high-impact journals when given a choice, so as to “increase the reliability” of their argument. I think such an expectation of reliability from high impact is as flawed as the impact factor itself, so I also prefer not to feed this general notion. Ideally, when given options (which is quite frequent), one should choose the most appropriate reference or include them all, though we know this depends a great deal on personal judgement. I also tend to favour the less popular papers when choosing my citations.
You missed this Guardian story on European science funding. I disagree with some of the claims it makes, but it makes for a good read, and is guaranteed to start a discussion.
http://www.theguardian.com/higher-education-network/2014/nov/07/european-research-funding-horizon-2020?CMP=share_btn_fb
@JATdS; Thank you for your comment.
While I share your first viewpoint (first comment), I don’t share the second. When I choose not to cite papers in ‘arrogant journals’, I do exactly what they often do in their rejection policies when they reject many ‘suitable’ papers on the basis of mere personal, editorial appreciation. They select the papers they publish; I am likewise fully within my rights to select the references I cite in my paper when I have the choice between two similar references.
For your information, I have had cases where I submitted papers to ‘prestigious journals’, only for them to publish articles a little later on the same topic as my rejected paper!
Another example: can you explain why most ‘high impact factor journals’ set their submission policy (particularly for review papers) to be ‘commissioned by editors’, other than to inflate their biased IF?
Since they set whatever policies they want, I should also be free to do as I wish in my papers, in this case not to cite pseudo-prestigious papers or journals!
Many authors have already boycotted the most ‘prestigious journals’:
http://retractionwatch.com/2013/12/11/cell-nature-science-boycott-what-was-randy-schekmans-tenure-at-pnas-like/
http://www.theguardian.com/science/2013/dec/09/nobel-winner-boycott-science-journals
I’ll try to do the same at my ‘modest’ level!
@CR: Thank you for your comment. This is my viewpoint, too. I also know many people who still think that citing papers from high-IF journals is a gauge of ‘trust’ that increases the chance of their own paper being accepted! This is simply scandalous!
@JATdS again,
In my view, what you are calling ‘snub publishing’ is exactly what is done by the so-called ‘prestigious journals’!
Try to publish there and you will see how snubbing they are!
You will be snubbed, if you are not already!
‘Prestigious journals’ look for ‘star’ authors or ‘sexy’ papers!
But what would ‘sexy’ papers mean in scientific fields?
Isn’t it a shame to talk about ‘sexy’ papers in science? Are we at a ‘striptease’ party?
jpaul, no need to be combative. My record shows that I am quite an anti-establishment individual, and I battle daily against the injustices at Springer, Elsevier, and Taylor and Francis, the three main homes of my research. I am 100% anti-IF, and am also constantly battling Thomson Reuters. So, when you state that snub publishing is conducted by the top-tier journals, I totally agree with you. However, we have to put our ideas to paper, and so the frustration of seeing this academic injustice would be better channelled if you could quantify the snubbing taking place, or the “sexiness” (or the bias, broadly speaking), rather than referring to it in qualitative terms only. I actually think I agree with most of your ideas and sentiments, but it is the process by which we achieve those goals that matters moving forward.

As for being snubbed, I believe I have been constantly snubbed simply because the peer review systems of journals like Nature are skewed: what they perceive to have merit, or to be important, is the more sensationalist. But being snubbed by the most prestigious journal on this planet might not be a bad thing! However, I do take great insult from the plant science journals that rejected en masse my ideas and several papers on post-publication peer review for the plant sciences, often on the grounds of being “out of scope”. So I know what it feels like to be constantly snubbed by my peers for carrying contrarian views, controversial opinions and sometimes radical posturing. Incidentally, I had to get my work published in less-than-desired locations [for example, 1], but in the end this actually served as a weapon and a tool in my movement going forward.
[1] http://journal.frontiersin.org/Journal/10.3389/fpls.2013.00485/full
Have you considered the possibility that they were out of scope? I don’t know the plant sciences literature; does it generally contain broad reflections on the state of the literature? It seems as though your forthcoming paper is well suited to Research Publishing Quarterly, even if that journal would not normally even cross the radar of your peers.
@JATdS,
I admire and applaud your efforts in denouncing the defects of the publishing system at the highest level possible. I agree with most of what you are writing.
I also think that there should be another, fairer publication system, not based on ‘rankings’ or bibliometric measures: a free, neutral and transparent publication system without any notion of rank or classification. Skillful scientists and readers would be able to assess the quality and importance of papers and select the suitable ones according to their specialties.
Scientists should be able to judge the importance of science and knowledge for themselves, rather than relying on biased numbers calculated mostly for business purposes.
It is for the very reasons you describe that we should NOT cite and promote ‘top-tier journals’; otherwise we are feeding their snobbery and the biased measures that skew science and research.
Why grant them ‘value’ they do not deserve?
It is true that we should put our ideas on paper, but for what purpose? Isn’t it to convey knowledge and spread it as widely as possible (not for money, supposedly)?
However, with the Internet we can now do this in many ways:
– Specialized forums,
– Preprint servers,
– Comments (I even call for DOIs to be assigned to comments published on the Internet, so that they become ‘citable articles’ with the same standing as any other article published by the so-called peer-reviewed journals). We can sometimes find much more useful and helpful information on this site (Retraction Watch) than, for example, in Nature or Science or any other journal!
So, if comments were assigned DOIs, the ideas expressed in them would be ‘proprietary’ (or nominative), and anyone could refer to them when needed.
Sometimes the amount of text we write in commenting on articles here or there would make full, interesting articles that deserve indexing and full recognition!
Many of the ideas in editorials, news & views, perspectives, etc. published by top-tier journals are ‘stolen’ from authors’ thoughts and comments, or from their rejected papers, surveys, opinions, etc., so these journals cheat authors in many ways.
– In the past, people wrote their books and kept them on their shelves! With the Internet, we can now write and retrieve the source easily. For example, if you have an excellent idea, you can publish it ‘anywhere’ on the Internet, and the day you need to argue for it, you can point to its link (wherever you published it). If you publish an idea and are afraid of being scooped, you can then produce the link and show that you were the first to talk about it! Anyone interested will find it while searching the Internet! So even if articles are later published on the topic, you were the first to talk about it!
In my opinion, formal journals should lose their ‘authoritative’ positions, because that authority brings much more bias than benefit for authors.
This might be harsh, but who knows: in the future, journals may disappear in favor of open forums, open discussions or other online platforms, which I strongly hope to see as soon as possible!
Knowledge should be free, accessible and easy to spread, to seek and to find, without the many constraints and restrictions we see nowadays.
I don’t have much more to add, but there must be an underlying reason why we are all commenting here at RW, some more fervently than others. Maybe RW could run a special story asking readers why they have come to RW. In 2015, I am seriously considering self-publishing my ideas on a simple platform, with at least one official number associated with the papers, like an ISSN. DOIs cannot be obtained by individuals, unfortunately. Despite Beall’s critique, we should thank Google for tracking everything with its spiders, so our comments, and even self-published papers, now have the ability to reach a wider audience than the greed-filled pay-for-access systems imposed by the snobs you describe. Even their pseudo-free model of paying for open access (i.e., one PDF) is a farce.

However, self-publishing can only be achieved when one already has a strong, established portfolio, and even though I ruffle a lot of feathers in the plant science community, because duty and responsibility call me to do this, the only reason I can capture the attention of plant scientists (even if in a negative way) is not only what I say but who I have proved myself to be scientifically over two decades. The battlefield was not established from thin air and did not develop accidentally. So, I fully agree with your ideas, and RW serves as one excellent platform to voice our opinions, because we know that not only is the scientific community reading, but the very ones we are criticising, namely the snobs and irritants, are watching this blog very carefully.

As for Narad’s suggestion about RPQ, if he looks carefully, I did publish my paper on snub publishing (set of examples 1, in Anthurium) in RPQ. Sometimes the most effective strategy for beating the enemy is laying eggs in its nest, because that way we become familiarized with the system, and no one can accuse us of not understanding it, because we are in it. It also becomes difficult, maybe even impossible, to reject or retract our papers based on disagreements with our ideologies, or for political reasons, because that would run counter to the principles they embrace with their ethics master, COPE. Something like a cuckoo. Think about it: even though Scientia Horticulturae has made me persona non grata, imagine how irritated the editors whose shenanigans I exposed must feel every time they see my name in their journal’s database. So, a war involves victories among the losses, and we need to lead those who are willing to sacrifice their positions, names, and even more in the name of academic justice. My own flock is growing in the shadows, and one can start to sense the fear of the pseudo-academic props as their system starts to be rocked by increasing exposure of fraud and misconduct, bad science and failed traditional peer review.
I wouldn’t have mentioned it had I not observed this in the first place.
@JATdS :
ISSN or DOI identifiers are not mandatory to publish articles or journals!
All the prestigious journals ran for a long time without DOIs or ISSNs!
A good alternative is thus to launch your own journal with some colleagues in your institution, and that’s all! It is not so difficult! You can ask a publisher to set up the journal you want for an annual fee, and you have it! Or, if you have the money, you can hire a developer from a low-income country (hence not so expensive) who will do it for you! I’d be among the first to publish in your journal!
Randy Schekman launched his eLife journal this way! He was resentful of the system and wished to find a ‘better’ solution or alternative.
It remains to be seen whether he will behave the way the journals he criticized do, or differently!
Good luck!
For next week’s weekend reads, I also recommend this highly interesting blog that appears to have been set up a couple of months ago (or even last week, with entries backdated) by a pseudonymous person who hates PPPR, anonymity, PubPeer, and especially Retraction Watch and all aspects of it (including the intern). I’m sure it will make for ten minutes of highly entertaining reading, even if a bit disturbing for the RW team. I’m guessing this person’s scientific output was featured on this blog.
http://scienceretractions.wordpress.com/
The new threading module breaks the “recent comments” links if they’re in a collapsed “replies” section; in my case, the link doesn’t even go to the original comment that contains the collapsed section, it just stays at the top of the page.