Another busy week at Retraction Watch, which we kicked off by asking for your support. Have you contributed yet? Here’s what was happening elsewhere on the web:
- Why does “impact factor mania” persist?
- Women cite their own work far less often than men do, according to a new study
- “Universities trade on our hopes, and on the fact that we have spent many years developing skills so specialized that few really want them, to offer increasingly insecure careers to young scholars.”
- University College London has blasted the Daily Mail for insinuating that two scientists were chosen to comment on new findings on the origin of the universe because of their race and gender. Perhaps Mail columnist Ephraim Hardcastle could publish his next essay in the Journal of Proteomics.
- “Will a few hubs such as ResearchGate or Pubpeer.com dominate post-publication peer review?” asks Richard van Noorden. And at GigaOm, Jim Woodgett wonders whether ResearchGate played much of a role in the RIKEN stem cell scandal.
- “…regarding correcting the literature, [editorial and publishing consultant Irene Hames] said editors’ awareness of the issue had been raised by websites such as Retraction Watch, as well as from being contacted by increasing numbers of whistleblowers, who can now carry out analyses of large numbers of papers as a result of the digitisation of journals.”
- The Wit and Wisdom of Psychology Abstracts: A brilliant sendup by Neuroskeptic
- Technophilic Magazine asks Ivan why we launched Retraction Watch. And Business Insider says Ivan is one of 40 science experts who “will completely revamp your social media feed.”
- The NSF “has seen its budget stagnate in recent years and is now facing attacks on its peer-review system and social-science division from conservative members of Congress,” writes Jessica Morrison in a profile of the agency’s incoming director.
- “Although the peer review research community is aware of the consequences of nonpublication of research, 39% of studies presented at [Peer Review Congresses] have not been fully published,” reports a paper in JAMA.
- “Ethics in the Production and Dissemination of Management Research: Institutional Failure or Individual Fallibility?”
- Here’s how academics learn to write badly
- The March issue of the COPE Digest features a number of items about retractions, fake papers, and fake conferences
- Why coming clean in retractions is important, by Virginia Gewin in an issue of Nature that happened to include one
- “According to one study, which was presumably read by more than three people, half of all academic papers are read by no more than three people.”
- “Lawyers scuttle hopes of international court for fraudulent science:” A spoof from the European Heart Journal (see page 3 of the PDF)
- “As graft targets go, China’s R&D spending offers rich pickings.”
Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.
Also worth reading this weekend:
Spurred by the Stapel affair, the Dutch association of universities will drop the criterion “productivity” from its performance evaluation protocol because it creates a perverse incentive. In this way it hopes to counter the criticism that the “pressure to publish” has lately gone too far.
Certainly a move in the right direction! Although I have yet to see that the new criteria, in particular “scientific excellence”, truly alleviate the problem, given that “citations of research literature” appears as before.
Here is the (Google-translated) press release:
http://translate.google.com/translate?sl=nl&tl=en&u=http%3A%2F%2Fwww.vsnu.nl%2Fnieuws%2Fnieuwsbericht%2F133-wetenschapsorganisaties-presenteren-nieuw-evaluatieprotocol-voor-onderzoek.html
Whoa – this could be huge. Thanks for sharing this!
I disagree somewhat with Casadevall and Fang’s position that much of the blame for the IF mania lies on the shoulders of scientists. Scientists in many countries are embedded in a fairly rigid system, institutional and then ministerial, that imposes often unrealistic rules, created by bureaucrats and technocrats who have to show society, in some crude way, that their institute and its staff are being “productive”, and who try to induce higher “productivity” by forcing their scientists to publish XYZ papers a year. If scientists don’t follow those rules, the outcome is quite simple: they lose their jobs, salaries, and/or grants. So what happens when a scientist is cornered into a position they would rather not be in? The inevitable: a flood of irrelevant papers, more than half of which no one reads, or at least, as we now know, no more than three people do. Casadevall and Fang’s one-paragraph characterization of the national incentive schemes that reward scientists for papers in IF journals is a gross cultural under- and misrepresentation (page 2 of the PDF, under the section “National endorsements”). Although I can appreciate that it is a perspectives paper, I am of the opinion that this factor in itself is the greatest driver, and Thomson Reuters knows it well.
The “productivity” criterion will be replaced by the “popularity” criterion which, of course, does not create a perverse incentive. The only way out of this mess is for societies to realize that you have to fund all scientists in the system, regardless of the results they get. Many studies lead to nothing, but work still went into doing them. This has to be recognized as part of what scientists do. What is perverse is the idea that scientists can produce new knowledge on demand. That does not happen; it is not how knowledge discovery works. Knowledge discovery is serendipitous, with long stretches of nothing and sudden bursts of breakthrough discoveries. The celebrity and business models being applied to science create the perverse incentives that we see in academia these days.
I wonder if “popularity” cannot have similar effects; the “of course” in your first sentence is not at all obvious to me.
“Popularity” may not tempt you to submit one paper every week, but it may still make it attractive to “sensationalise” your data a bit, sweeping the pieces that will be less popularly received under the carpet. In the Stapel case in particular, which seems to have motivated this policy change to some degree, I believe his desire to be a popular scientist mattered more than the desire to have the longest publication list.
I completely agree with you that interesting science cannot be fully planned ahead, though.
I think Jerry was being ironic. Imagine a world in which grants are given according to the number of “Likes” in a social network…
Ah yes, I see…
:-S
Yes! We should fund scientists who decide to do absolutely no work at all the same as those who work 80 hours a week. We should also fund all grant applications. If a truck driver wants an R01, give him one! This makes sense.
Science has to be evaluated in some way in order to allocate resources; otherwise, EVERY lazy person would be a “scientist.”
The world has limited resources. If everyone was a plumber, we’d have no food. If there was a job that didn’t require you to ever produce any result, everyone would want it. If everyone wants the same job, either society crumbles or we only let a certain number of people have that job. Should we just choose scientists purely randomly, offer them tons of money to do science, but never hold them accountable to actually go to work?
With limited resources, one needs either a meritocracy or a random-ocracy. I highly doubt randomly picking scientists would work better than the current methods. As for this recent swing of Nobel prize winners saying “Oh, in the current climate, so-and-so would never have discovered ____,” I call total BS. Yes, that person who was relatively unproductive but put out a big discovery after 20 years might not have discovered ____ and might have lost their job… but someone else would have discovered ____.
While we like to laud individuals who “discover” big things, I would argue that most things will be discovered by some person within a given time window. Sure, we might inappropriately punish some people who work well and hard but have bad luck and favor too much the lucky, but the collective pressure seems to produce results. Yes, it probably produces some bad apples too… (but I don’t think it’s publish or perish per se that drives them, it’s wanting glory, not just survival).
I guess the point is this: the current system DOES give us all of the tools to appropriately judge people’s work, so long as they don’t cheat… we just have to, collectively, use them. We have to take impact factors with a grain of salt. We have to actually read grants and papers and judge their contribution, etc. No change to the system is ever going to affect cheating. People cheat for glory alone all the freaking time. The only way that we will ever get rid of cheating, most likely, is to make all science completely anonymous and pay all scientists only a welfare equivalent while requiring them to work full time. But even if ALL incentives were removed, I’d bet that some people would still cheat. If the internet has taught us anything, it’s that people love to troll/grief.
Right now, if your experiments fail, you are in trouble. No papers, no grant, no nothing. Then you wonder why there is cheating? Nobody is saying to hire truck drivers in academia. There is plenty of selection during the PhD and postdoc years. You pick people who know what they are doing for academic jobs. After that, however, it does not matter if your experiment works out or not. That is my point.
I might argue that that is a bit backwards, and in fact, more often than not it’s the grad students/post docs who are the driving forces behind fraud. If you’re a student/post-doc, you’re very limited in your ability to try different things. You can only take on so many projects, and if your experiments fail, no matter how well thought out they were or how much work you put in, you get nothing. As a PI, you (should) have resources and multiple underlings. If your post doc finds nothing interesting, your grad student can save the day.
“You pick people who know what they are doing for academic jobs.” The central question is how? Based on their publications? Then we’re back at square one. Further, the skillset required to be a good post doc does not completely overlap with that of a good PI. Even further, if we decide that PIs now have no incentive to publish, they can just do experiments until they find something they want to publish… how are we going to deal with the problem that post-docs need to be selected for faculty jobs? Say professor X decides she’s going to focus her efforts on a 12-year, very high risk, very high reward project with pretty much nothing to report in the meantime. She’s dooming her post doc to not getting an academic job. Because, as you just admitted, there is selection during the post doc phase.
At my institution, there are plenty of professors getting paid full salaries and not even making an effort to be productive. Few careers allow people the freedom that academia does… does any industry, especially a publicly funded industry, say that it would be OK to get paid for, literally, doing nothing? And yes, there are professors getting paid full salaries to spend half their year on vacation.
QAQ:
“She’s dooming her post doc to not getting an academic job. ”
The solution to that problem is for the PI to have skilled technicians under her. People “destined” for academia should not be doing their PI’s work for them. That way postdocs have the chance to prove their worth doing something *they* direct, maybe on a less ambitious project than the PI’s 12-year one. Once the postdoc has proven her potential, she can be recruited on a permanent basis as a PI for a longer term, more ambitious, project. This is as things were meant to be, and often were, not so long ago.
I feel that there is a force at work that goes beyond “productivity” and “popularity”. There is almost this desperate grab at blind “legacy”. There is no doubt in my mind that a lot of laboratories conduct exploratory science the way science was meant to be explored, driven by the null hypothesis and not driven by the need to fortify an existing fact in a biased way, nor by the need to be cited, or become popular. The complexities of the lab are unfortunately not restricted to the lab, and the world of science publishing has its dark forces at play, too, often unknown to many scientists, who actively wish to live on the moon (figuratively speaking). Although there is no doubt that one of scientists’ key objectives has shifted from search-for-truth to search-for-recognition, we cannot blame scientists entirely for this. The current socio-political, economic, and religious landscape in which science (and thus scientists) finds itself lends itself to being manipulated, and thus corrupted. At the end of the day, you are most likely to find very few scientists who are willing to do research for FREE (unless they see benefit down the line) in the name of science. At the end of the day, Dr. X in lab Y is seeking his monthly salary to pay the bills, the mortgage and the loan on his car, the education fees for his kids and perhaps, once a year, that nice trip to an exotic beach with his family. Scientists are no different, in a basal human sense, from the baker, the CEO, or the psychologist. They just want to make ends meet using science as their motivational tool. However, they have now got caught up in this power struggle that involves strata upon strata of institutional bureaucrats who probably wouldn’t understand the difference between a hypothesis and a banana, even if you explained it to them on a tablet watching YouTube.
QAQ, I am not sure that I agree entirely with your characterization of what is “original” and what is not. One could in fact say that a negative result is also original. Just because it’s not positive, or great, doesn’t remove its originality. That is why those who defend the bad quality of science in a lot of bad, low-quality, often non-IF journals actually have a case, because who in fact can state what is “quality” and what level constitutes “good” or “bad” quality, or a “high” or a “low” level journal? Since these adjectives are exactly what they are, adjectives, they are subject to manipulation. Quality variables with limited boundaries, in the hands of a brilliant subjective manipulator such as a public relations officer, a marketing manager, or a lawyer, can be made to have unlimited borders of interpretation, and thus use. A five-peer peer review in an IF = 20 journal cannot claim to be superior in quality to a one-peer peer review in an IF = 0.5 journal based on the number of peers or the IF alone. This is because all five peers might be observing the results and study from a very subjective perspective, even if the peer review is blind, or double blind, because certain pre-conditioned filters exist in our minds as to what is original, or good. This is why the war about traditional peer review, alternative forms of peer review, what is “good” peer review, and what is a good or bad paper will remain a point of contention as long as there is science and the liberty to think. My concern is that the way in which control systems are being increasingly implemented in science publishing by publishers, and the way in which laws and regulations are being imposed by politicians and bureaucrats upon scientists, is stifling original thought. If you manage to clone a perception, or instill a system that can twist perception to make it mirror itself, then you have successfully taken the reins of science.
That is the superior struggle that is now taking place, but which many squabbling scientists, who are more preoccupied by the tone and intensity of the banding on a gel, are not quite able to perceive yet (I believe).
You have hit the nail on the head. Most scientists want to do, and will do, the right thing, as do most people: if they find nothing, they will report that nothing was found. But the reward is finding something (a published paper), so what if you have to tweak the numbers a little? Nothing really bad will happen to you IF you are caught. And in our modern society, which wants instant results (TV shows that resolve all problems in 30-60 minutes), we want results NOW. Administrators (deans/chancellors) want to leave a positive legacy (and they have to do so within the time frame of their tenure) and want productivity (impact factor), hence the need for instant results as opposed to long-term results.
Just look at banking: you get a bonus based on the amount of loans you book; you won’t care if you know they will go bad in 3 to 5 years, because you get the bonus now. Lawyers win a case and get immediate feedback; what do they care if it affects future cases? TV shows are judged on instant viewers versus long-term viewership. If you can inflate your company’s stock price with short-term measures (layoffs, cutting research and development, etc.), you get a bonus or stock options even though those measures negatively affect your company; what do you care, you got the bonus and will leave the problem to the next person. Diederik Stapel had a great impact factor (he had over 1,300 citations); do we really want that to be the standard?
In my opinion, impact factors should be based on long-term impact. Who cares if a paper is cited 30 times in the first 1-2 years (hot topic, friends, politics, etc.)? What should count is the long-term impact, over 5 to 10 years. Did your paper’s negative or null findings (based on honest results) lead others not to follow a dead end? That might be more impactful than a positive finding. But sadly our modern society won’t allow this.
ResearchGate – who he?
Having followed the STAP cell denouement over the past few weeks, at times with tongue protruding slightly to the left, dangling, drooling, sliming the keyboard (necessitating periodic wipes with paper tissues), I have formed an opinion of the most important sites contributing to the dissection of the STAP cell papers. As I imperfectly remember it, here they are:
The central hub of the online discussion, first on scientific grounds, then increasingly on the figure issues:
http://pubpeer.com/
this blog for first revealing an example of blatant figure reuse.
http://blog.goo.ne.jp/lemon-stoism/e/008ac025ee1ccf4c694869f09b053ee7
this blog for revealing more and more issues with STAP-related publications
http://stapcell.blogspot.com/
this one for being THE stem cell blog, initially attempting to rationally discuss and even moderate the internet discussion, but eventually being overwhelmed by all the revelations and the total wackiness of it all
http://www.ipscell.com/
this one for also keeping an eye on developments (but being a retroactive blog, not playing a leading role here)
http://retractionwatch.com/
and even Nature, the dark side of scientific publishing, who apparently don’t know what an experimental protocol looks like, since they didn’t bother to include one in this methods paper, have helped, because, for face-saving purposes, they have had – for the first time ever – to trash their own paper a mere couple of weeks after publication :-0 Twice, actually, if you want to google these news items, but please forgive me for not linking.
Did I miss something?
Anyway, I’ve now decided never, ever to join ResearchGate after (a) being continually spammed, LinkedIn-wise, over an annoyingly long period of time by RG pretending that some of my contacts are oh-so-desperate for me to join too; and (b) this disgraceful attempt to claim the hard work of a bunch of indefatigable (mostly Japanese, presumably motivated to rectify a national embarrassment) figure sleuths, who now know that they have genuine outlets on the web for their frustration and dismay. There is no sliming your way in here, RG, because this is built on trust of the peer-review site; KNOW THIS – it is not going to be your business opportunity to sell my scientific and personal interests. And why would any other practicing scientist want you to do that?