Resveratrol researcher Dipak Das: My lab’s work was “99% correct”

Das, via UConn

Dipak Das, the UConn red wine researcher charged by his institution with rampant misconduct that will likely lead to dozens of retractions, is evidently a 99%-er — when it comes to accuracy, that is.

According to a statement purportedly from his lawyer refuting those charges, Das claims, among other things, that the output from his lab was nearly perfect. He also has a lot to say about a 60,000-page report that the statement says he may not have actually downloaded.

We might note a lot more things about the letter, which we received from Bill Sardi, president of Longevinex, a resveratrol company which has worked with Das. Sardi has been sending Das defenses since the story broke; we posted some of them and Derek Lowe has posted parts of another. But here’s the letter, in its entirety:

Note: this 9-point document was obtained from legal counsel for Dipak Das, PhD, a researcher at the University of Connecticut recently accused of scientific fraud.

Discovery of an online document involving allegations against a University of Connecticut Health Center researcher accused of scientific fraud reveals a long-standing internal battle between the accused researcher and an administrative physician at the institution that may have resulted in false allegations being generated.  That document reveals the following:

1. Dr. Dipak Das, PhD, the accused, alleges all of the original documents involving 42 years of research which includes images of tests known as western blots, were confiscated by a representative of the university and were destroyed.  These original raw western blot images, which would serve to completely exonerate Dr. Das, are no longer available for comparison with altered images that were later published in scientific journals.  This same antagonist within the university proceeded to write hundreds of letters to scientific journals and funding sources, says Dr. Das, making false allegations that “I made up all the western blot tests.”

2. Dr. Das further alleges, once the original images were destroyed and could not be used for comparison in his defense, the university chose to employ software that can detect alterations to graphic images, software that has a high rate of false-positives and is not considered reliable unless original images are available for comparison purposes.  Dr. Das says: “no one will use this software on the published paper unless originals are NOT available.”

3.  Dr. Das counter attacks the University of Connecticut’s 60,000 page damning report which accuses him of altering images in order to fraudulently gain research grant money.  Dr. Das claims he is an eminent scientist who was pre-funded by the National Institutes of Health and did not have to publish to gain grant money.

4. Dr. Das indicates, in this available online document, that he never personally performed any of these western blot tests that are now in question and that the person who performed most of these tests is retired and surprisingly not on the list of researchers accused of submitting fraudulent data to scientific journals.

5.  Dr. Das then says he proceeded to examine the work of others in his laboratory and found their work to be “99% correct.”  Dr. Das said he is considered an expert in reviewing research papers and had been requested to review western blot tests for various scientific journals.

6. Contrary to what the University of Connecticut report contends, Dr. Das denies he was the only person who had keys to his office and that many other students and post-doctorates had access to his computer to enter results of experiments they conducted.

7. Dr. Das categorically denies, as the university pejoratively alleges, that he “de-funded” a student because she did not produce the test results he demanded.  Dr. Das claims he only took her off of his budget because she was working exclusively for another researcher.

8. The 60,000-page report describing the alleged scientific misconduct by Dr. Das, while only recently released to the public to put him on trial in the court of public opinion, was produced sometime in 2010, but it is unclear whether Dr. Das ever had an opportunity to even view it in its totality because he could not download it onto his computer because of its large size.

9. Dr. Das claims the allegations against him and his East-Indian colleagues began with a change in the administration at the university and for unknown reasons only focuses on East-Indian researchers when researchers of other ethnic origins performed most of the tests now in question.

Because of the seriousness of the charges and the fact they involve federally funded research studies, and the possibility that tissue samples as well as test data may have been intentionally destroyed by the university, it appears federal investigators need to intervene as quickly as possible.

In fact, the Office of Research Integrity, which investigates alleged misconduct by federal grant recipients, was the one that tipped off UConn to the case.

The letter appears to be from Scott Tips, a “health freedom” lawyer in California and president of the National Health Federation. The NHF calls itself

an international nonprofit, consumer-education, health-freedom organization working to protect individuals’ rights to choose to consume healthy food, take supplements, and use alternative therapies without government restrictions.

Das, meanwhile, has apparently been lecturing in Kolkata, India.

70 thoughts on “Resveratrol researcher Dipak Das: My lab’s work was “99% correct””

  1. Most of these allegations seem pretty unlikely, even in the most toxic of academic environments. If this does turn into a long, and sad, legal fight, at least all of the unsealed court documents will be in the public record, and perhaps we’ll understand what actually happened here.

  2. He has lawyered up, hence the 9-point manifesto. In fairness, this guy has been doing science for over 40 years and the resveratrol business is but a small part of his scientific resume. Therefore, a sense of proportion is needed as most of his other work is presumably legit.

    1. Why should we presume legitimacy in a researcher accused of faking so many papers? That’s required in a court of law, but it’s not required in judging the output of a lab.

      Most fraudulent papers are published by “repeat offender” authors (Steen ’11 J Med Ethics 37: 113-117). In a sample of 788 retracted papers, roughly 53% of fraudulent papers were written by a first author who had written other retracted papers, whereas only 18% of erroneous papers were written by a repeat offender (Chi-square=88.40; p<0.0001). These results could potentially be produced if the entire output of an author was retracted as soon as a single fraudulent paper was identified, but I don't think that's what happens. Rather, I think people acquire a taste for fraud, once they figure out how easy it is to publish a paper if you don't have to do the work.
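
      For readers who want to see how such a comparison is made, here is a minimal sketch in Python using scipy. The cell counts below are hypothetical, chosen only to roughly match the proportions quoted above; they are not Steen's actual data.

        # Hypothetical 2x2 table, illustrative only (not Steen's actual counts).
        # Rows: fraudulent vs. erroneous retracted papers.
        # Columns: first author is a repeat offender vs. not.
        from scipy.stats import chi2_contingency

        table = [
            [106,  94],   # ~200 fraudulent papers, ~53% by repeat offenders
            [106, 482],   # ~588 erroneous papers, ~18% by repeat offenders
        ]
        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        print(f"chi-square = {chi2:.1f}, p = {p:.2g}")  # large chi-square, vanishingly small p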

      1. If the majority of murders are committed by serial killers, it does not mean that a majority of killers are serial killers. In particular, if 53% of murders are committed by serial killers, it surely means that a vast majority of killers are NOT serial killers.
        This guy’s older research has never been flagged, so maybe it took him almost 40 years to “acquire a taste for fraud”.

  3. Interesting statement: “pre-funded by the National Institutes of Health and did not have to publish to gain grant money.”

    That sounds like a nice arrangement to have. Not sure I’m aware of any other researchers who do “not have to publish to gain grant money.”

    1. I believe that Das had a MERIT Award (Method to Extend Research In Time, R37), which is an R01 that doesn’t have to go through peer review after 5 years. Rather, the Council of the awarding Institute can, based on a positive programmatic recommendation, decide to renew it.

      Essentially a 10 year R01.

      1. Fair enough, but the public statement that someone does “not have to publish to gain grant money” seems unwise. Regardless of the situation, I’m assuming publications had to accrue to get this type of a grant, and publications would need to continue in order to receive whatever the next grant is.

      2. @Sierra Rayne:

        I doubt the wisdom of the whole letter; its accusations run more than close to paranoia. Still, the letter reads with a bit of legalese, and was obtained through his legal counsel. If the statement is true, it could cast reasonable doubt on a financial motive for publishing fraudulent work. This could be part of a strategy to avoid criminal charges. It’s also probably a statement meant more for the public than for those in science. Most people I know outside of academia don’t really understand how central publishing is to continued funding.

  4. Curiouser and curiouser. It’s interesting to watch this degenerate into a mud-slinging match, all while ignoring the OVERWHELMING EVIDENCE OF FRAUD! C’mon Das, are you blind to the fact that the general public is in possession of the same information as your accusers, and is perfectly capable of drawing their own conclusions? A retarded monkey with cataracts could look at those western blots after downing 10 shots of absinthe and see they were faked! You lost any remaining credibility when you played the race card. From then on, it was a comedy side show. I’ll stay tuned because I have a penchant for watching slow moving train wrecks unfold, especially ones in which the driver appears to be smoking crack.

    On a serious note, while I understand the need for this type of drama to be played out in public, I do question whether a brand new post on retraction watch is called for in this case. After all, there hasn’t actually been a retraction yet, right? It may have been better to link to the unfolding story as an update in the older posts. I wonder if maybe there’s a place here for a new link/status type thing in your side-bar… ongoing cases, sorted by last name, so one can click through to the latest update on a person, without clogging the front page of RW with non-retraction stories.

    1. The original stories are a ways back because retraction activity has been hot and heavy lately. The intro to each story on RW is fairly short, so I wouldn’t say that this story is “clogging the front page of RW”. Retractions seem inevitable, so I think the story is appropriate for the site. One reason for publishing news about retractions is to warn scientists and the general public that some research is not to be relied upon. The investigation took several years. Should we wait another several years for a final adjudication before we learn that resveratrol is not all it’s cracked up to be?

    2. Thanks for the feedback, always appreciated. A few responses:

      — One journal has in fact agreed to two retractions, as we noted in a post early on: http://www.retractionwatch.com/2012/01/12/resveratrol-fraud-case-update-dipak-das-loses-editors-chair-laywer-issues-statement-refuting-all-charges/
      — We do have a function that’s basically what you’re describing. Look in the right-hand column for “Retraction posts by author, country, journal, subject, and type,” which is a drop-down menu. You’ll find an entry for “Dipak Das” under “by author.”
      — In general, additions to older posts get lost, so it’s better to get news into subscribers’ email inboxes, RSS readers, and elsewhere rather than update old posts, especially when it’s been more than a week.

  5. The current research atmosphere is actually TOXIC to humans (medical research).
    Universities reward research with a pay increase/promotion/tenure – ECONOMIC benefits.
    Research fraud/incompetency is hard to detect in medical research.
    It is a fantastic opportunity to make money with little risk.
    How many people will stay honest in such a situation?
    I am not saying that ALL researchers are dishonest, but I will be surprised if a sizeable number are not.

    Scientific misconduct is worryingly prevalent in the UK, shows BMJ survey
    http://www.bmj.com/content/344/bmj.e377

    Is there any REAL desire on the part of academia to stop research misconduct? I don’t think so.
    After all everyone is making money- and tax-payers are happy to finance the party, aren’t they?
    Tax-payers are getting more “research” aren’t they?

    The usual half-hearted (dishonest) strategy for dealing with research misconduct is to pretend it is a problem limited to “junior, poorly mentored researchers”. The solution: get the “honest” senior researchers to mentor them better.
    The only problem is (and the evidence stubbornly refuses to go away) that most mega-fraudsters uncovered thus far have been “seniors”, and there seem to be plenty of them around.

    The solution is quite straightforward.
    1. Any university which does not want to encourage criminal behaviour will have to stop financing it.
    2. Honest researchers will have to start actively educating the public about research misconduct.
    3. Any other way ?

    1. MSN, you sound like a jaded conspiracy theorist! It may be time for a bit of perspective here….

      About 2 years ago, I did a study in which I found that 788 papers had been retracted between 2000 and 2010 (Steen ’11 J Med Ethics 37: 249-253). Since I completed that study, an additional ~600 papers have been retracted from the same period. I’m certain that there are more papers that haven’t yet been found, but should be retracted. So, let’s assume that there are 10,000 retraction-worthy papers for the period of time that I investigated. During that same period, 4.8 million papers were published. Hence, 10,000 retractions would represent roughly 0.2% of the total output. Not so bad….

      To think about it another way: if 10,000 papers are eventually retracted, 480 papers were still published for every retracted paper. To me, that’s analogous to a daily weather report being incorrect once every 16 months. The actual numbers I found work out to be comparable to an incorrect weather report every 16.7 years. (A back-of-the-envelope version of this arithmetic is sketched at the end of this comment.)

      I think scientists have an enviable record of accuracy and honesty. But there’s room left for improvement.

      If we don’t reduce the rate of fraud in published research, then I do not believe that taxpayers will be “happy to finance the party.” We have entered an era of accountability and, if we don’t police ourselves, we will be policed.
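
      The back-of-the-envelope arithmetic referred to above, spelled out as a short sketch (the 10,000 figure is the generous assumption stated in the comment, not a measured number):

        # Rough arithmetic behind the estimates above; illustrative only.
        papers_published = 4_800_000   # papers published 2000-2010
        assumed_retractions = 10_000   # deliberately generous assumption
        found_retractions = 788        # retractions actually identified in the study

        print(100 * assumed_retractions / papers_published)   # ~0.2% of total output
        print(papers_published / assumed_retractions)          # ~480 papers per retraction
        # One error per 480 daily forecasts is roughly one every 16 months:
        print(papers_published / assumed_retractions / 30)     # ~16 (months)
        # Using only the 788 retractions actually found, one per ~6,100 papers,
        # i.e. one bad daily forecast every ~16.7 years:
        print(papers_published / found_retractions / 365)      # ~16.7 (years)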

      1. Dr Steen,
        Why “assume” 10,000? How did you arrive at that figure? The 4.8 million research output you refer to, does it come only from the US or the rest of the world as well? UK, Europe, South Asia, China, Africa, Arabia, Russia etc.
        All I can say is that some, in the UK at least, don’t seem to think that things are “not so bad”. In fact they seem to think that it is terrible. BMJ editor Ms Godlee thinks the public should be alerted!

        Misconduct pervades UK research
        http://www.ft.com/cms/s/2/bc6f7204-3d1f-11e1-8129-00144feabdc0.html#axzz1kXc09tbs

        That’s for deliberate, premeditated fraud.

        Now, would you be willing to regard research incompetency as a form of fraud (I had referred to both)? A deliberate misrepresentation of one’s training and ability for economic gain? What would you say about a medical doctor who tries doing, say, laparoscopic surgeries without sufficient training? No harm, he’s just having a bit of fun?
        Unfortunately, this is a regular feature in research medicine. As far as I know (in my part of the world at least) there is no requirement for anyone to have any form of formal training in peer review, ethics, research methodology, statistics etc. before being allowed to apply for a grant or publish a paper. If, like me, you routinely have to read clinical research for patient care, you will quickly come to realise that a lot of clinical research is plain unusable. Expensive, fully paid for by the public, but not of benefit to them.
        It is clearly the work of incompetent amateurs. I call this fraud.

        The scandal of poor medical research
        http://www.bmj.com/content/308/6924/283.full

        Poor-Quality Medical Research
        http://jama.ama-assn.org/content/287/21/2765.full

        I am interested in knowing whether you still stand by your figure of 10,000?

        Warm rgds

  6. If Das had raw images that would completely exonerate him, why did he publish doctored ones? (My apologies to the doctors in the audience.)

    Same old same old. The big cheese is happy to take credit and promotions when things are going well, but as soon as a project turns sour he denies having anything to do with it. Considering how little Das claims to have done, one wonders why his name was on all the papers.

    1. I believe he’s trying to say the published ones were NOT doctored… otherwise what you said is valid and would make him an idiot. Not that he isn’t one already.

  7. Hmmm….the National Health Federation. I sincerely hope Das stays away from that organisation. If not, I start to understand why he doctored his results.

  8. my 2 western blot cents: I think that WB is the key technique responsible for the vast majority of JUNK in the scientific literature. It is a crap technique (I have done close to 500 of those in my scientific existence and there is not a technique I trust less than WB) and old-fashioned – so early ’90s. What I propose is this:

    EVERY protein quantification must be confirmed at the mRNA level, or in some other additional way. I.e., peer-reviewers should always ask for further validation of any WB crap. If for some reason mRNA cannot be done, do request blots with a minimum of 3 different antibodies for the same protein. I have seen so many years of lives spent hunting nonsense based on WB that if there is a lesson to be learnt for new scientists it is this – do not EVER base projects on WB only. Never ever. Ever.

    Re fraud – when my colleagues from Prague, Czech Republic, got their paper provisionally accepted in the journal Blood in the early 2000s, the editor requested they send ALL the WB films (not copies or scans, the actual films) to him, saying he would return them once he had determined that the images they sent weren’t faked. They were outraged beyond limits, because this is something entirely unprecedented, and felt that if they were not from central Europe, nobody would dare to ask for it, but they did as they were asked, and indeed the paper was accepted.

    Well, I believe from the bottom of my heart that this should be required for EVERY SINGLE paper accepted for publication. Send the films, period. That way we can determine whether whatever you submitted is based on the truth, or on your photoshop skills. I believe that this would reduce the amount of crap data rapidly.

    1. I wouldn’t want to look at all the Western blots as a reviewer, as it would increase the review time exponentially. If somebody wants to cheat, they can do it with any type of data, not just Western blots.

    2. Many labs no longer use film, instead opting for digital “dark box” imagers. Unfortunately, mRNA quantification would not be suitable for many studies in which researchers are interested in protein levels specifically.

      What I do hate is the trend to crop out just a narrow band around the MW of whichever protein you are showing (cropping out all the “nonspecific” bands). I think, at the least, the entire uncropped images should be supplied as supplementary data.

    3. What you’re missing here is that in the days before potatoshop, people just used to doctor the film images instead. I’ve heard of various tricks, e.g. inserting a sliver of tissue paper between the membrane and the film to quench the signal strength for a specific band on the blot. In such cases, the film would appear “true” to a journal or reviewer, so what next? Ask for the archived blot membrane? The fact is, if someone wants to deceive they will find a way. It’s just that modern technology has made it easier. Let’s not be fooled into thinking that scientific fraud was absent before potatoshop.

      The only way forward as I see it, is a lot more money invested into ORI and other policing strategies, and a lot higher penalties for those who are caught, along with extensive ethics education early on in scientific careers. Every graduating PhD should emerge sh!t scared of ever getting caught doctoring their blots.

      1. So you are saying repression is better than some kind of prevention…?

        If, e.g., PIs used Photoshop, which is what Das seems to have done, sending the original film would help at least in those instances.

    4. There are a myriad of very good mechanistic reasons why protein expression differences visible on a western blot may not coincide with a similar difference in mRNA quantity. Someone with such scientific experience should be well aware of this. Also, many blots these days never make it to film and are developed by sophisticated digital imaging devices. I have advocated for a digital image standard that accepts only raw acquisitions and allows only basic whole-image manipulations for compliance, and this I think would be a better solution than sending films through the post.

      Antibody-based assays have been the standard for protein expression. With advances in proteomic techniques, perhaps this will change soon. But at some point, you have to be willing to accept an imperfect technique for what it is and acknowledge the limitations. If one is convinced that any niggling doubt about an assay or reagent can invalidate an entire line of inquiry, I can almost guarantee that if you dig deeply into the work underlying the initial production of those 3 antibodies you’re asking for (or simply acknowledge the fact they’re probably derived against some sort of non-native form of the protein) you can poke holes in those blots as well.

      1. In many instances, mRNA quantity does indeed correlate with protein levels. I never said it always does.
        As a peer-reviewer, I always ask for mRNA quantification, and if it does not in any way reflect the protein change, I expect a valid explanation. It is a legitimate request, and I am afraid that your evasive comments convince me it should always be requested (sorry, but you do sound like someone whose results are indeed not confirmed with qPCR, but you decide to trust your blots and just keep marching on, blaming it on elusive “mechanisms”… no offense).

        We have to be rigorous about the techniques we use; millions of dollars are invested based on such experiments. I am surprised how uncritical you seem to be of WB. That does not seem to be a sign of a very rigorous scientist.

      2. @rosta
        Occasionally yes, we will get a peer reviewer who obsesses over one pet issue at the expense of some objectivity, but fortunately these are rare. I’m as critical of western blots as I feel is reasonable and I recognize their limitations, but do you apply the same impossibly lofty standards to every other experiment? If you do, I don’t imagine you ever approve any manuscripts.

        Unfortunately, your preoccupation with the potential drawbacks of one assay has caused you to completely miss the point. There is practically no experiment one could perform which wouldn’t, in the mind of some stubborn reviewer somewhere, beg some sort of additional confirmatory work. Given the numerous posttranscriptional regulatory mechanisms that exist, and given that an adequate investigation of the particular one at play might likely comprise an entire paper in itself, I feel it completely unreasonable, stubborn and obtuse to robotically demand gene expression analysis every time you see a western blot.

    5. All of these western blot “manipulations” (duplicate images, mirror images, cropping, cut-and-paste of bands) could be caught by an automated program, provided that the program had access to a library of figures from all publications. I think such a system should be put in place (for images of cells and tissue sections as well), but I guess it’s just a matter of finding someone to fund/run it. (A rough sketch of what such a duplicate check could look like is given at the end of this comment.)

      WB is a very powerful technique, for which I don’t believe there is currently an adequate replacement, and I hate to see it so misused.

      The question is this: are these researchers only messing with the WBs and nothing else, or are there other shenanigans going on and it’s just that WBs are the easiest to catch? I would suspect that if someone is willing to cut and paste bands into a western blot, they wouldn’t think twice about adjusting numbers in a table or fudging their error bars in a graph.

      Maybe we should be thankful for the WB in that it allows us to catch all these frauds!
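
      A rough sketch of the kind of automated duplicate check described above, using perceptual image hashes. This assumes the third-party Pillow and imagehash packages and a hypothetical directory of figure panels extracted from papers; a real screening system would also need to handle mirrored/rotated panels and would need the cross-publication figure library mentioned above.

        # Sketch only: flag pairs of figure images that look nearly identical.
        # Assumes `pip install pillow imagehash`; the figures/ directory is hypothetical.
        import itertools
        from pathlib import Path

        import imagehash
        from PIL import Image

        def panel_hash(path):
            # Perceptual hash: tolerant of rescaling and mild compression artifacts.
            return imagehash.phash(Image.open(path).convert("L"))

        hashes = {p.name: panel_hash(p) for p in Path("figures").glob("*.png")}

        for (a, ha), (b, hb) in itertools.combinations(hashes.items(), 2):
            distance = ha - hb  # Hamming distance between the two hashes
            if distance <= 6:   # threshold would need tuning on known duplicates
                print(f"possible duplicate: {a} vs {b} (distance {distance})")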

      1. If you look at the Abnormal Science blog, you will see plenty of examples of non-western manipulation. Thus, as you correctly surmise, anyone willing to mess with a blot is probably messing with graphs and other types of data too, it’s just harder to spot the latter.

  9. albertp, this should indeed be the job of the editorial office after provisional acceptance, not of the peer reviewers. Editors do have the time for this, or can get interns. I am sure they would be surprised….

    James – why not do mRNA quantification in addition to protein levels? Can’t do any harm, other than showing your westerns are crap.

    1. I agree that mRNA quantification is pretty easy to do these days, and should be done in most cases. But there are certainly other factors in regulating protein levels besides mRNA, and researchers studying degradation pathways may in fact expect that the mRNA and protein levels will not correlate.

      Most of these western blot misconduct cases were caught by blatant (at least upon detailed inspection) cropping/duplication of images. If someone really wanted to fake westerns, or any other type of experiment, there are ways to do it that are virtually untraceable (for example loading different samples than what you say they are).

      1. Dear James,

        you are indeed right that mRNA quantification may not be necessary in all cases; nevertheless, in such “standard” experiments as knock-down, overexpression, etc., and in mechanistic links, it is a must, I believe.
        I indeed agree that there are more than enough effective ways to cheat if you wish to cheat; on the other hand, the harder it becomes the better. If it is just one person cheating, it is fairly easy to have a go with Photoshop, especially if he is the last one in the line.
        If it is a student, he can load different samples, but he should be under laboratory supervision, so this is not always possible.
        Also, there are many instances in which people conduct the experiments in genuine faith, but the antibody is faulty, their WB technique is not good, etc., when the mRNA level can serve as an Occam’s razor. Especially with difficult antibodies, not published on before, some kind of rigor must be in place, because sheer nonsense then becomes an integral part of the literature.

  10. I’m going to “doctor up” here so don’t be surprised if my point of view is “doctored”…heheh.
    Out of 4.8 million papers published (worldwide, probably) only maybe 10,000 are retracted. Sounds reasonable. But what about the junk papers referred to by another commenter? How many are there? Probably at least 100,000 and maybe 500,000. This is just a wild guess with no data to back it up, but I think those who read a lot of papers will support this or even raise it.
    The fact is, it takes a very critical reader to winnow out all those useless papers. What’s worse, even a critical reader is likely to miss a lot of papers that are fraudulent–not the stupid ones that obviously “doctor” their Western Blot images, but the ones that present bogus data points as if they are real. How is the reader supposed to discriminate a bogus data point from a real one?
    As I have opined in these pages before, the basic problem is the incredible pressure to produce papers, any papers, in order to get grants, tenure, respect, and just make a living doing research.
    The result is a lot of wasted effort and a lot of fraud. The reader/researcher/clinician as well as the funding authority are the losers.
    Until that pressure is somehow ameliorated we will continue this unsatisfactory situation of caveat emptor.
    So in the meantime, I will exercise the privilege of the retired and be a “retarded monkey with cataracts after 10 shots of absinthe”–what an experience!

    1. Dear Conrad,

      personally I’d like to think I am a fairly critical person; the last peer review of a grant I was asked to write for a funding agency was 3 A4 pages long, and it does take some effort to convince me that something works or does not, not merely publishing it in Science or Cell (not that you would get a Science paper showing something doesn’t work, but hypothetically speaking).
      From my perspective, a good 40 percent of published material is plain junk, generated through technical mistakes and errors, poor techniques and contamination, albeit in good faith by the scientists. A further 10-15 percent are blatantly fraudulent falsifications, to a greater or lesser extent, and a further 10 percent are “dead end” papers (the sort that bring nothing novel and usually confirm the obvious, like “tomatoes grow in a greenhouse both on Monday and Tuesday, checking Wednesday is our future work” kind of papers). Those are usually legitimate papers per se, though. Thus only some 35 percent of publications have some merit in reflecting upon the truth, which is the ultimate task of science: The Truth. And that is a fairly disappointing number.

      I agree about the pressure. I have worked at Harvard in the past, and the atmosphere was really intense (although I got 2 papers out of it), but I must say that at other places, like the UK, some people are really work shy, and get away with it happily because of the overall system. Nobody can be fired in the UK (almost), and many people merely care about their lunch breaks and annual leave, do not give a damn about the research, because the system will always protect them and their mortgage will not be endangered if they do jack all day in the lab. If there is not enough to write a grant, well, that is the PI’s problem, not theirs, as they can always apply for another post-doc/technician position elsewhere. At Harvard, we all had our necks on the line for the work we did… But yes, it was incredibly stressful; I sometimes stayed in the lab all night (the night guards were not even surprised) to complete time-course experiments, which would be really unthinkable in the UK. I can easily imagine that it would drive people to commit a fraud – rather than investing all the hours into getting the WB right, they would just fake it…

      So my point is that science is really HARD, HARD work, and only a handful of people are actually prepared to take it on (which is why only some 1/3 of US PhD graduates move on to a postdoc position). I think work ethic determines one’s survival in science more than anything else, and with many people, only the pressure you mention actually generates hard work… I think that many people take on PhD programmes without having the slightest idea what scientific research entails, but once they have gone down the route they find it hard to get out, and start cheating and doing all kinds of stuff to survive in a field they no longer care about… just my opinion.

      1. So ~50% of the literature is dead wrong? Many thousands of times more papers should be retracted than are? Do you have any data to back this up other than the faint buzzing coming from inside your tinfoil hat?

      2. Those are pretty astonishing opinions rosta! Happily they don’t bear much relation to reality.

        You’ve clearly had some odd experiences in the UK, assuming your opinions are based on personal experience. I’ve done biomedical research both in the US and the UK, and my experiences are completely at odds with your opinions. PhD students and postdocs work unsavoury hours in the UK just as they do in the US. As a postdoc I routinely used to work through the night to utilize time on equipment on which time was scarce. I’ve just returned from working in the lab this morning as I do every Saturday, and most of the labs in my department have got PhD students and postdocs working this morning.

        And it is straightforward to get rid of people who don’t do their jobs properly in the UK (this applies to PhD students and postdocs). Unproductive academic staff can be moved from academic research pathways to academic admin/teaching pathways. In fact quite a number of academic staff in the UK academic departments I’ve worked in have left once their research funding has dropped to unsustainable levels. The RAE/REF procedures in place for the last decade or more have made academic research-active departments very uncomfortable places to be once one becomes unproductive. A significant number of people find it intolerable and leave or switch to admin roles.

        And as reported in Nature, for example (see the issue of October 11th, 2011), the UK punches well above its weight, academic-research-wise, according to several assessments reported last year (“Compared to researchers in the United States, China, Japan and Germany, for example, UK scientists generate more articles per researcher, more citations per researcher, and more online downloads, even though the country spends less in absolute terms.”). That simply doesn’t chime with your “opinion” of UK academic scientists doing “jack all” all day. Perhaps you work in a particularly second-rate environment…

        You suggest that hard and stressful work would be “unthinkable” in the UK and “imagine” that it would “drive people to commit a fraud”. What an odd non sequitur!

        As for your “opinion” that 10-15% of published papers are “blatantly fraudulent falsifications”, why not show us some of these “blatantly falsified” papers? How about taking the most recent issues of Nature, Science and PNAS and, of the 100 or so papers published in those issues, pointing out to us the 10-15 of them that are “blatantly falsified”?

      3. I might just expand on some of the reasons why the UK academic research community punches well above its weight by most measures of research productivity, in contrast to rosta’s opinions that UK researchers do “jack all” all day, and that “Nobody can be fired in the UK (almost) and many merely care about their lunch breaks and annual leave, do not give a damn about the research, because the system will always protect them…”

        1. In the biomedical sciences the vast majority of postdocs in the UK are on short 2- to 3-year contracts funded by charities (Wellcome Trust; British Heart Foundation; Alzheimer’s Society; Cancer Research charities, etc.) or Research Council grants. If a postdoc does “jack all” s/he can be sacked. If s/he isn’t productive then the funding won’t be renewed and that’s the end of the job. Being a postdoc in the UK is just as transient as in the US.

        In fact the extension of postdoc funding beyond 3 years in a single University/department is only done after much consideration, since employment law means that redundancy is rather more expensive once a postdoc has been “on the books” for more than 3 years. This is not much fun for postdocs, but all of this means that postdocs in the UK are as productive as elsewhere, if not more so.

        2. It’s straightforward to “fire” PhD students, since these all undertake their stipends with an initial probationary year. If they don’t cut it they aren’t transferred from MSc to PhD student status.

        3. The majority of academic staff in research-active University departments are recruited from a pool of Research Fellows. These are outstanding individuals who have shown sufficient promise and dedication to their subject during PhD and post-doc research to be awarded fellowships (Wellcome Trust, Royal Society, BHF; BBRC; Lister etc.) and so they have demonstrated first class abilities to do independent research and to attract substantial funding. In other words good academic research departments in the UK are staffed by high quality researchers that are productive.

        4. As I indicated above (see http://www.retractionwatch.com/2012/01/25/reseveratrol-researcher-dipak-das-my-labs-work-was-99-correct/?replytocom=9323#comment-9395), UK academic research is done in an environment of intense external pressures supposed to promote productivity. The periodic Research Assessment Exercises (RAE, now called REF) assess individual and departmental research productivity, and this has a major influence on how government research funding is “divvied up”. There is huge pressure on academic researchers to be productive, and it is very difficult to maintain a research programme in a department if one isn’t productive (you are encouraged to move to the non-research academic progression, or to take early retirement, and quite a few individuals find the pressure sufficiently difficult to bear that they leave).

        This makes for a hugely “efficient” research structure in which everyone works pretty damn hard. Is it truly efficient in the sense of being an optimal approach to high quality science that will benefit the UK economy and public well-being? That’s another matter altogether and I don’t think the answer is necessarily yes….

    2. There’s a lot of junk, that’s true. But junk may be in the eye of the beholder, whereas fraud is less subjective.

      I see no way to ameliorate the pressure to publish. Our national commitment to funding new research has fallen since the days of Sputnik and will likely never return. In the absence of funding sufficient to support those scientists who are already in the system, it will be a zero-sum game going forward. Inflation of lab costs means that it costs more to do less. Yet those who are already in the system will be expected to do more with less, so the impetus to fabricate may grow. New scientists will not be recruited to fill the shoes of retiring scientists and scientific progress will eventually stagnate. Cheery scenario, eh?

      I worry that all this agonizing about retractions will harm an enterprise that is in admirable shape. Scientists as a group may come to be seen as no better than politicians seeking the next donation or entrepreneurs looking for the next “sure thing.” We need to remember that there will be many false starts and trails that lead nowhere. After all, if we knew what we were doing, it wouldn’t be science. It would be engineering.

      1. anonpostdoc, I think it is time you took your pills, went to watch Spongebob on the TV and left a civilized discussion to adults. Thank you for your cooperation.

        Dear R,

        I do agree to a large extent with what you say; on the other hand, I sometimes wonder if relieving the pressure is in fact even a good thing (as per my comment).

        I absolutely agree that even finding that something does not work is worth publishing, which is mostly the most probable scenario in truly experimental research. However, HOW OFTEN do you see anything like that published? 99% of experiments people do do not or cannot work, yet 99% of what is published is indeed along the lines that something “works”. How come?

        I still believe a big part of the blame should go on the heads of professional editors, who are merely after advertising money and hand-select papers which are “interesting and novel” to boost subscriptions, and then have to retract those (with less fanfare). Thus the pressure is not only from the funding agencies – get publications – it is also from the journals themselves – get “positive” stories, or you will not publish anyway. Here is where a big part of the problem lies.

      2. @rosta: Your ad hominem toward anon postdoc was uncalled for. I would also like some proof that over 40% of all published science is junk. Perhaps that’s true in your field. In my experience, most of what’s published is small and incremental, but hardly junk.

        @R. Grant Steen:
        There have been some noticeable leaps in funding since the 60s. Notably, the NIH budget ballooned in the 90s. At the moment, the US is in a funding downturn, but most faculty members I’ve talked with seem to view this as cyclical. I do think that if institutional growth continues as is, it may significantly outpace funding, to the detriment of all. Bruce Alberts had some words about this in 2010: http://www.sciencemag.org/content/329/5997/1257.full

      3. “99% of experiments people do do not or cannot work”??? That can’t be right rosta!

        Most experiments “work” in my experience, unless you mess them up, but that should be a rather minor element in the working life of a scientist unless s/he is stunningly incompetent. Of course experiments may not give you the result that you want or expect, and one might not even know what the result means, but that doesn’t mean the experiment hasn’t worked.

        One normally performs an experiment within some sort of context of expectation and often the result doesn’t conform to expectation. So then one repeats it. And then one thinks about it and likely comes up with a tentative explanation or two that can be tested by other experiments. At some point one begins to home in on a meaningful interpretation of the observations, and if this holds up to repetition and controls give supporting outcomes and so on, then one thinks about writing a paper.

        So a paper is pretty much bound to describe a positive result, even if the positive result is a demonstration that one’s working hypothesis that prompted the investigation is incorrect and another interpretation applies.

        Of course technical phenomena can bedevil the experimental process. You simply can’t get your protein to crystallize, or your His-tagged protein construct won’t express, or the protein doesn’t stick to a nickel column. But nobody else really wants to read about your lack of technical success (after a while your supervisor would probably rather not hear about it either!). The best thing is to make a careful note of the conditions you’ve used and test other conditions or explore other ways of crystallizing or expressing your protein, and to get advice, until you find something that works. Or else switch to another approach entirely.

        There may well be value in having a repository for unsuccessful outcomes in the present digital age that would be a resource for other researchers who may be trying to solve a particular technical issue (though the competitive nature of science might result in unsuccessful stuff being deposited only once a successful outcome was eventually achieved). However, in my opinion it makes sense that the scientific literature comprises successful and productive steps towards obtaining insight about particular scientific problems.

      4. Yes I agree with much of your post. I think science is working pretty well, although science funding is being heavily squeezed in the current economic climate. I also agree that one shouldn’t interpret the excellent stuff highlighted on RetractionWatch as an indication that there is a particularly large amount of dodgy or fraudulent stuff being published. Dodgy/fraudulent science is car-crash fascinating, it should be pursued vigorously (and RetractionWatch is a fantastic resource to aid that) but there isn’t really that much of it in the grand scale of things.

        And there is lots of junk being published… more so than 20 or 30 years ago and even more so than 50-60 years ago. In my opinion there are both far too many journals and too many scientists expecting to be funded and to publish papers. One could dump the entire output of certain journal publishers and the progression of science would suffer not a jot.

        If there are any easily identifiable problems, I would suggest for one the behaviour of publishers like Elsevier, who have hugely expensive institutional packages whereby Universities (like mine) are forced to buy umpteen journals of essentially zero value in order to receive the handful of important journals within which cutting-edge research is published. Without this practice some of the journals of little value might well “die” through lack of interest and we’d all be better off. A tiny handful of the hundreds of “open access” journals published by Bentham (to name a notorious example) are largely repositories for stuff that is of little interest to anyone other than the authors, but all of this stuff has to be peer-reviewed and edited and it’s a big drain on everyone’s time with close to zero contribution to the progression of science.

  11. Marco,

    so what exactly is the purpose of science, according to you, if not finding the truth? To get money? Or it does not matter?
    I don’t see anything religious about “truth”, merely philosophical, which is where my comment comes from. The search for truth is one of the most noble and most important endeavors of mankind, and science is one of its tools. One could even say it is practically the opposite of “religious” (not getting into this discussion, though).

    Looks to me like you missed your lectures on Ethics…

    1. The purpose of science is to provide a functional description of reality. One that allows us to explain observations and make predictions. That, however, is not the same as “the Truth”. And definitely not “Truth” with a capital T. The only people I have seen use “Truth” with a capital T are religious people.

      This is Philosophy 101, and has little to do with ethics. In ethics classes you learn at best the *process* of science; in other words, what is the right approach and what is the wrong (as in “not acceptable”) approach. You do not learn “the Truth”. You can’t, as ethics is determined by cultural norms and thereby automatically subjective.

      1. Very constructive rebuttal, Hans Brighter. You must have many publications accepted after you responded to the reviewer comments like you did to me…

  12. Good discussion going on here. Coming back to the issue of this posting: the last sentence is “Das, meanwhile, has apparently been lecturing in Kolkata, India”. I just googled the meeting and found this link: http://www.sfrrkolkata2012.in/sp.htm – you can see who’s who there, including some of the names discussed in this case. Look at the speakers list posted at http://www.sfrrkolkata2012.in/speakers.htm and the committee at http://www.sfrrkolkata2012.in/comm.htm – while this case was being posted on Retraction Watch he was giving a plenary lecture!!!

  13. Haven’t we had enough ad hominem attacks on this thread? This is getting as ugly as the Republican primary….

  14. 1) please no ad hominem attacks such as “postmodern deconstructionist nonsense”…Marco is not speaking postmodernism at all; if you’ve ever delved into deconstructionism, it is an abandonment of “truth” altogether.
    2) I agree with rosta that research is hard, much harder than clinical work. During my time in research (many yr ago) I spent more all nighters per week than I did in residency. Plus research doesn’t pay well at all, especially not in comparison with cutting and sewing or prescribing Vicodin.
    I think there is some data that will back up the statement that academic positions have not increased in proportion to the US population during the last forty years; there are more people trained in research than can be accommodated by the positions available. Thus the increase in part time faculty with no benefits.
    3) therefore it is reasonable to say that US government should be funding a lot more research positions and more research. We need more research to face the problems in the world that threaten our survival as a species. At the same time, ORI should be strengthened to keep up.
    4) I agree with rosta (this is a recent discovery to me, as when I was working I read mainly the free journals supported by the drug industry… all about how to diagnose the diseases that only Risperdal could “cure.”) There is a lot of junk out there; 50% seems like an exaggeration until you get into the details about whether a paper really advances the state of science or just leads down another dead end.
    5) To get back to Dipak Das, it is becoming clear that he is another very senior scientist (40 years!) who is used to bloviating for naive crowds (plenary lectures!) and having postgrads do all his actual lab work for him (as revealed in his lawyer’s screed), AND doesn’t have to publish anymore for his funding! His righteous anger at being accused of fraud just drips from that screed. He is so rigid, he may already have Alzheimer’s Dz. The field of resveratrol research has been set back by 20 years.
    6) I think I’ll have another shot of absinthe and tune in to the latest Spongebob rerun.

    1. “I agree with rosta (this is a recent discovery to me, as when I was working I read mainly the free journals supported by the drug industry…all about how to diagnose the diseases that only Risperdal could “cure.”) There is a lot of junk out there; 50 % seems like an exaggeration until you get into the details about whether a paper really advances the state of science or just leads down another dead end.”

      I think that this view is profoundly short-sighted. Perhaps it’s true in some areas of biology. I was trained in the physical sciences, and find myself working close to cell biology. In physics and physical chemistry, there are plenty of papers that appeared to be “dead-ends” in the 60s and 70s but found new life a few decades later.

      For example, if you look at what’s been published on stochastic chemical kinetics, you’ll find a few papers in the 70s written by David Gillespie, and then almost nothing until the late 90s. Google Scholar returns about 10 articles that cite the original paper between 1977 and 1990, and about 68 between 1990 and 2000. The article has almost 3500 citations today, largely in the last decade. The renewed interest in developing these techniques is largely due to the realization that a handful of biological enzymes can have profoundly interesting behavior. (A toy sketch of Gillespie’s algorithm appears after this comment.)

      We do not know what is to come, what will lead us there, or what will be important when we get there. To criticize so much work as being a “dead-end” assumes that we understand what is actually important. That’s an arrogance that science cannot afford.
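
      For readers who have not run into the method being cited: below is a toy sketch of Gillespie's direct method (the stochastic simulation algorithm from the 1977 paper) for a single reversible isomerization A <-> B. The rate constants and initial counts are arbitrary illustrative values.

        # Minimal Gillespie direct-method sketch for A <-> B (toy parameters).
        import random

        k_fwd, k_rev = 1.0, 0.5   # illustrative rate constants
        A, B = 100, 0             # initial molecule counts
        t, t_end = 0.0, 10.0

        while t < t_end:
            a1, a2 = k_fwd * A, k_rev * B   # reaction propensities
            a0 = a1 + a2
            if a0 == 0:
                break
            t += random.expovariate(a0)     # exponentially distributed waiting time
            if random.random() * a0 < a1:   # choose which reaction fires
                A, B = A - 1, B + 1
            else:
                A, B = A + 1, B - 1

        print(t, A, B)                      # state at (or just past) t_end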

      1. Agreed! It looks like this thread is full of individuals who know what is junk and what is not. I suspect that the definition of junk they use is “what the other guys are doing”. Some humility would not hurt.

      2. sfs I think your example actually illustrates something about the nature of quality in scientific publishing that is relevant to the question of what is or isn’t of value scientifically speaking.

        If the paper by David Gillespie you’re referring to is “Exact stochastic simulation of coupled chemical-reactions”, then it’s notable that this was published in the leading subject-specific journal of its day (Journal of Physical Chemistry). Papers in J. Phys. Chem. were and still are properly peer reviewed with high standards for acceptance, and good scientists have always published good papers there. That doesn’t mean that the paper will necessarily have much of an impact immediately or in the future (as Gillespie’s certainly did as a “late developer”!). But the intrinsic quality of the papers in J. Phys. Chem. is high and you get a nice warm feeling if you get your paper accepted there because you know it’s a good journal that publishes good stuff…

        But now there are dozens of physical chemistry journals. Many of these seem to be rather little noticed (impact factors below or not much above 1). Anyone with anything of value to say would likely not publish in those journals – they’d want to get their paper in J. Phys. Chem. or, if it was particularly important, in PNAS or Science or one of the Nature journals.

        Compared to the 1970s when David Gillespie published his important paper, there is a vastly greater number of scientific journals and many of these publish stuff that is largely of little value, unfortunately. Some of the vast number of “you pay, we publish” open access journals seem to be little more than vanity publishing enterprises. Might there be the occasional “gem” in the dross? Possibly, but one would expect that anyone having some insight into the nature of their work in the context of the research field would publish their “gem” in a good journal (like J. Phys. Chem.!).

        1. This reads like an advertisement for J. Phys. Chem., and I have personally exposed junk in this journal and watched the editor (Schatz) act in a most unsatisfactory manner regarding some issues. We can talk specifics if you like. Better yet, get Schatz to come online here and we can chat specifics and see how he responds. J. Phys. Chem. is no better or worse than other journals in this field. To think that there are not massive publishing politics in this journal, and others, is delusional. Often very strong papers are forced to be published in so-called ‘lesser’ journals for the simple reason that editors and editorial boards of some so-called ‘good’ journals will not allow the work to be published for purely political reasons.

      3. really Sierra? That’s not my experience. Note that we’re only discussing J. Phys. Chem. since that’s the journal the paper by David Gillespie that (I think) sfs is referring to was published in.

        I should probably qualify my point to indicate that I’m speaking generally. By that I mean that in general J. Phys. Chem. publishes pretty decent papers. No doubt there are a few papers that aren’t so good and I expect J. Phys. Chem. has had the odd paper retracted too.

        My main point, though, was that 50 and even 20-30 years ago, when David Gillespie published his highly cited paper on stochastic chemical kinetics, there were far fewer journals and far fewer papers published, and that the standard of papers (and probably the level of good faith on the part of scientists overall, since it’s difficult to see why scientists of 30-50 years ago would engage in the sort of low-grade fraud we encounter nowadays) was higher, in the sense that there wasn’t the vast number of nothing papers in nothing journals that we see nowadays…

        1. @Chris, in response to his statement that “it’s difficult to see why scientists of 30-50 years ago would engage in the sort of low grade fraud we encounter nowadays.”

          Scientists of the past were not necessarily any better than scientists of the present, and the temptation to publish prematurely or to outright fake were probably the same. So, why are we plagued with retractions now? I think that there are more retractions now because plagiarism-detection software works pretty well and because the issue of fabricated and falsified papers has finally gotten the attention it deserves, rather than being politely swept under the carpet. There may also be more pressure to publish now than in the past, but I don’t see the proliferation of weak journals as terribly relevant; they may be more open to fraud but no one reads them, so little harm is done. Where we need to focus is on clinical trials and studies in major journals.

          Here’s a quote from a recent review (Steen ’11 J Med Ethics 37:498) that describes a study of the quality of the old clinical literature:
          “The published record in hepatology was evaluated, to see how often ‘established wisdom’ stands the test of time [6]. A total of 474 papers was selected to cover the time from 1945 until 1999. Among 474 conclusions in these papers, 60% were deemed to be correct in 2002. Forty per cent of the papers were thus flawed by misinformation–wrong or incorrect information–although it was not known as misinformation at the time of publication. Overall, 19% of conclusions were obsolete (eg, immunoglobulins prevent hepatitis A infection, but an effective hepatitis vaccine is now available), and 21% of conclusions were incorrect. Surprisingly, half the papers had conclusions thought to be true 45 years post-publication [6].”

          Ref.6 is: Poynard T, Munteanu M, Ratziu V, et al. Truth survival in clinical research: an evidence-based requiem? Ann Intern Med 2002;136:888-895

      4. @chris: Indeed, I have seen some major problems (ethical and scientific) at J. Phys. Chem. That said, there are many good articles in this suite of journals. Some authors have good experiences with a journal, others have bad experiences. I’m sure that’s the case for most journals. But JPC is most certainly not perfect. There are some good Elsevier journals on this front as well, such as Chem. Phys. Lett., which occasionally has some problem articles, but very often publishes quality material. I don’t have very many general rules when it comes to what is a ‘quality’ journal. There are some very dubious so-called ‘open access’ journals – whereby as long as you pay the $1500 the paper gets published no matter what and peer-review is a joke/non-existent – but among the major reputable publishers, each journal I have experience with has some problems and some good attributes. I look at the articles individually, and place little emphasis on the journal name, or even the authors.

      5. @Sierra Rayne: You’re absolutely right about the role of politics in publication. Plenty of reviewers try to kill papers in order to hobble a competitor, and I’ve seen editors make decisions that felt clearly political.

        I’m not sure that you can single JPC out as being better or worse in that area. JPC has published some flawed papers, but it’s a really big (as in pages) journal that publishes *a lot* of papers, so you’d expect some flawed ones. Its reputation varies with the subfield of p-chem as well. Some subfields of physical chemistry prefer to publish in JPC, while others prefer JCP (or PRL). I know that the reaction dynamics people make good use of other journals (PCCP and CPL), but my area of training didn’t use them very much.

        I’ve found the open-access journals to be no more of a mixed bag than the for-profit ones. PLoS Biology has some of the problems of the top-tier journals, but its longer format alleviates some of them. The few BMC journals that I’ve encountered seem to publish work that is solid but more incremental. There have been some remarkably good papers published in PLoS One, and some that were not so good.

        1. @sfs: I entirely agree with you. I most definitely am not singling out JPC, but I do reject any notion that it contains only gems or is free of editorial problems. I probably cite JPC articles more than any other journal in my recent papers, so I clearly have respect for a lot of the work it publishes. That said, all three experiences I’ve had with the journal have been miserable gong shows. That’s fine; I don’t even consider submitting there anymore, and we pursue other avenues of publication. I suspect JPC doesn’t much care what I think, either.

          Over the past decade, including when I was a member of ACS, I’ve seen several articles in C&EN in which ACS tried to claim there are no systemic problems at its journals. This, of course, is nonsense. A more believable story is that there are systemic problems (hopefully minor in the broader context) at some ACS journals, that most of those journals are nevertheless generally of high quality, and that ACS and the broader chemical community are, we would hope, trying to fix them. Anytime someone claims perfection or a complete absence of problems, I know they are full of b.s.

          This leads me to belatedly reflect on the title of this blog post regarding Das’ “99%” comment. Anyone who is a real scientist knows a 99% accuracy rate is ridiculously good. I doubt many papers, if any, are 99% accurate. Unintended mistakes creep into all sorts of papers via all sorts of routes. The question is whether the 1% inaccuracy goes to the core of the findings. If it doesn’t, who cares? If it does, we need to look deeper. All scientists went through undergrad and grad studies – did they all achieve 100% on their course grades? No chance. So should we expect the resulting scientists to produce 100% accurate work over their entire careers? No chance. Ergo, while there are many, many problems and questions with Das’ statements above, to focus on the “99%” comment is rather petty and shows a lack of real understanding of how science actually works. Let’s focus instead on any major systematic problems that may have existed in the Das group, and avoid the unsupportable ‘gotcha’ silliness.

    2. A small correction: it’s Daniel Gillespie.

      @chris:
      One of the theorists in my grad dept used to refer to J. Phys. Chem. as “the Journal of Physical Comedy.” He is, roughly, Gillespie’s contemporary. Its reputation in some areas of theoretical chemistry is quite a distant second to J. Chem. Phys.

      Regardless of what you think of J. Phys. Chem. as a journal, or how you view George Schatz as an editor, my point was completely different. The 1977 Gillespie paper was not considered important at first, as reflected in its citations for the first 10 years, but is clearly considered important today. No one could have guessed its importance at the time, and that might have been why he published in J Phys Chem and not J Chem Phys.

      Sure, there are a lot of journals these days. The journals with lower impact aren’t uniformly poor, and I’ve found some very nice work in them. The work is usually more specialized, incremental, or not as complete. This does not qualify it as “junk.” You cannot judge the importance of the work, or even its quality, by the journal. And, you cannot judge what set of observations or insights will be important in 10 or 20 years.

  15. Well, yes, it’s junk until it leads to the next big discovery! The history of science is full of great discoveries that were produced by “mistakes”. You need people who are able to notice the anomalies, though. I am not sure the current “big science” climate generates many of these individuals anymore.

  16. That’s why I follow this blog avidly. You guys are so well-read.
    My perspective for thirty years was 99% free junk and 1% New England Journal of Medicine. Of the journals put out by Elsevier and other mouthpieces for the drug industry, I mostly looked at the pictures; about 50% of the pages (I counted) consisted of drug ads.
    On the other hand, I had total faith in everything I read in the NEJM; they had the decency to segregate the ads to the outside of the journal. I’ll never forget reading in the letters in 1981 about a series of a dozen patients whose immune systems appeared to have shut down completely. This somehow resonated with some rather nasty clinical experience I had just had in Los Angeles a year before. A few patients had gotten sick and died of strange infections and nothing we could do was of any help. That seemed like as good a time as any to emigrate to Wyoming.
    Now that I have time and Google is indexing scientific journals, the scales are falling from my eyes. Much of the “real science” is behind paywalls, so most of the time I make do with abstracts to guide me toward experiments that advance knowledge significantly rather than confuse the issues. Where is the filter that will print out the Einstein and suppress the Das?
    The point I’m trying to agree with is that some journals are good and some are not so good. But no one can deny that even a great journal can put out a howler now and then.

    1. It would be quite nice if publishers (American Chemical Society, my glare is at *YOU*) released their work, especially after some reasonable amount of time. I’d say six months, but we could start with ten, or even thirty, years…

      It would be even nicer if some scientific societies (shame on you, American Chemical Society) hadn’t joined Elsevier and other major publishers to fight open access in the US Congress.

    2. @Conrad T. Seitz MD:

      I’ve been thinking more about your post. Abstracts are a pretty bad way to survey the literature, especially if you don’t have a sense of the field. Public universities often let citizens use their libraries. If you’re still in LA, the UCLA library is open to the public, and its journals should be accessible through the library computers.

      http://www.library.ucla.edu/service/2027.cfm

      I think many other state universities offer this as well. It’s not quite as nice as immediate online access, but it’s better than being stuck with abstracts. If you’re lucky enough to live in a place like Manhattan, the NYPL research division also has an exceptionally complete set of journals.

  17. The reversatrol (and what is with this name? It sounds like something the Simpsons would have come up with) hyping continues in the MSM: http://abcnews.go.com/blogs/health/2012/02/02/study-unlocks-secrets-of-red-wine-chemical/. In defense of the paper’s author, he tries to provide some context, but what readers will take away is “red wine contains reversatrol; reversatrol cures obesity, Alzheimer’s and chronic disease. Where can I buy some?”

  18. I agree that abstracts are inadequate in many, many ways. Next time I’m in LA…
    One further point of agreement: resveratrol (not “reversatrol,” although I like it) does have that “d’oh” effect, especially when Aggarwal is the first paper to show up in a Google search (now where’s my plagiarism software?).
