As promised, Michael LaCour, the co-author of the now-retracted Science paper on gay canvassing, has posted a detailed response to the allegations against him.
In the 23-page document — available here — LaCour claims to
introduce evidence uncovering discrepancies between the timeline of events presented in Broockman et al. (2015) and the actual timeline of events and disclosure.
He also says that the graduate students who critiqued his work failed to follow the correct sampling procedure and chose an incorrect variable in what LaCour calls “a curious and possibly intentional ‘error.’” He writes:
When the correct variable is used, the distributions between the CCAP thermometer and the LaCour and Green (2014) thermometer are statistically distinguishable.
He says that the reason he is unable to produce the data requested by his critics is that it was destroyed “in the interest of institutional requirements”:
I take full responsibility for errors in the design, implementation, and data collection regarding the field experiments and panel survey reported in LaCour and Green (2014). I also take full responsibility and apologize for misrepresenting survey incentives and funding in LaCour and Green (2014). In fact, I received a grant offer from the Williams Institute, but never accepted the funds, the LA GLBT received funding from the Evelyn and Walter Haas Jr. Fund, and the Ford Foundation grant did not exist. Instead, I raffled Apple computers, tablets, and iPods to survey respondents as incentives. I located some of the receipts, Link here. Some of the raffle prizes were purchased for a previous experiment I conducted. I take full responsibility for destroying data in the interest of institutional requirements.
As we reported last week, LaCour’s co-author Donald Green wrote in a letter to Science that LaCour
claimed he deleted the source file accidentally, but a Qualtrics service representative who examined the account and spoke with UCLA Political Science Department Chair Jeffrey Lewis reported to him that she found no evidence of such a deletion.
LaCour’s response glosses over the apparent contradiction between “accidentally” deleting the file and having done so to satisfy institutional requirements:
This assertion by Professor Green is implausible, because when LaCour spoke with Qualtrics representative Derek Johanson on May 20, 2015, he had to supply a username on the account and the userid for the survey he had conducted. LaCour never shared these identifiers with anyone.
He also finds fault with his critics’ methods:
I note that Broockman et al. (2015)’s decision to not present the lead author with the critique directly, by-pass the peer-review process, privately investigate data collection activities without knowledge or consent of the author, demand confidential identifying information from respondents in a study without grounds or standing to do so, publicize unsubstantiated allegations and hearsay prior to a formal investigation, is unprecedented, unethical, and anomalous in the relevant literature.
David Broockman and Joshua Kalla — the two graduate students who questioned the work — and Peter Aronow, of Yale, issued this statement on Twitter:
https://twitter.com/j_kalla/status/604487899967561730
LaCour spoke to The New York Times, saying, among other things, that
he lied about the funding of his study to give it more credibility. He said that some of his colleagues had doubted his work because they thought he did not have enough money to pay for such a complex study, among them David Broockman, a political scientist at Stanford and one of the authors of a critique of his work published last week. Mr. LaCour said he thought the funding sources he claimed would shore up the plausibility of the work. “I messed up in that sense, and it could be my downfall,” he said.
A former collaborator of LaCour’s offered some context on Twitter:
in April/May 2013. We really sent out mail offering an iPad. UCLA IRB approved it. We got some real data! 38 whole responses! (2/n)
— Chris Skovron (@cskovron) May 30, 2015
he stopped returning my calls and emails and kicked me off the project. Now we know why! His timeline misrepresents the pilot. (4/4)
— Chris Skovron (@cskovron) May 30, 2015
This is a fast-moving story, so this post will likely be updated frequently.
Key question, not touched in his response: what’s the evidence that the survey took place at all? He failed to identify the company he used, or to get anyone on record saying that the survey happened as he described.
While I am sympathetic to the notion of guaranteeing confidentiality to those who were surveyed, there are many ways to anonymize raw data, for example by replacing personally identifiable information with randomly generated identifiers. So I am still skeptical, especially given that he has acknowledged misrepresenting things like the source of funding. His claim that the study was successfully replicated in Missouri is similarly suspect given the small sample size of the new study. Instead of doing the right thing, he is piling up excuses.
Actually, deleting “identifiers,” or replacing them with random keys, isn’t necessarily robust against sophisticated attacks; for example, see the following paper and look up the topic of differential privacy:
http://arxiv.org/pdf/cs/0610105
This observation of course doesn’t change the rules LaCour was actually operating under.
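To make that concrete, here is a toy sketch in R (entirely invented data, nothing from this study): stripping names is not enough when a few quasi-identifiers survive, because those can be merged against an outside list such as a voter file.

# Toy example of a linkage attack on "anonymized" survey data (all values invented).
# The quasi-identifiers (zip, birth year, sex) are enough to re-attach names.
survey <- data.frame(zip = c("90024", "90210"),
                     birth_year = c(1980, 1975),
                     sex = c("F", "M"),
                     feeling_therm = c(72, 15))          # no names stored
outside_list <- data.frame(zip = c("90024", "90210"),
                           birth_year = c(1980, 1975),
                           sex = c("F", "M"),
                           name = c("Jane Roe", "John Doe"))
merge(survey, outside_list, by = c("zip", "birth_year", "sex"))  # names restored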
Hmmmm. The institutions I have worked at require data and evidence to be preserved for years.
LaCour quotes policy as
“Protocols should be designed to minimize the need to collect and maintain identifiable information about research participants. If possible, data should be collected anonymously or the identifiers should be removed and destroyed as soon as possible”
so removing identifiers was what the policy stated, not destroying all of the data. This argument of LaCour’s is disingenuous indeed.
So he got an 85% response rate with the promise of a 1/1000 chance of winning an iPod?
From This American Life
‘Green today told me if there was no survey data, what’s incredible is that LaCour produced all sorts of conclusions and evaluations of data that didn’t exist. For instance, he had “a finding comparing what people said at the door to canvassers to what they said on the survey,” according to Green. “This is the thing I want to convey somehow. There was an incredible mountain of fabrications with the most baroque and ornate ornamentation. There were stories, there were anecdotes, my dropbox is filled with graphs and charts, you’d think no one would do this except to explore a very real data set.”
“All that effort that went in to confecting the data, you could’ve gotten the data,” says Green.
…….
Green says that as of yesterday LaCour still claimed the data is real.
Green says one thing that seemed promising about working with LaCour is that LaCour seemed to be lavishly funded.’
http://www.thisamericanlife.org/blog/2015/05/canvassers-study-in-episode-555-has-been-retracted
Princeton offered him a job after all its political science professors went over his work and assessed it as worthy of a tenure-track job. What does that tell you?
Able to get a paper published in Science but not able to make a data set anonymous? Right.
He’s admitted lying about funding sources, the incentives provided to survey participants, and the name of the survey company. Yet he is still trying to convince people his study is valid, using arguments such as those on pages 12-13 of his response. He agrees that one of his graphs is statistically extremely close to one in the CCAP paper, but says that if you compare to a more appropriate graph in the CCAP paper you get substantial differences.
This whole incident fascinates me. How can he not realize that once he admits several lies, no one is going to take highly questionable excuses seriously?
This reminds me: real money and real resources were given out and are thus not available for other research. Is any of this in ORI’s and OMB’s purview? Will UCLA have to refund any government grants or support that were awarded?
LaCour’s response is extremely weak, particularly when he tries to address the clear overlap between the data he supposedly collected in two independent samples and the CCAP data. Using another variable in the CCAP data in no way renders the most damaging results by Broockman et al. less convincing. The argument is not whether all of LaCour’s data match the CCAP data, but rather that some of his data were apparently copied from the CCAP dataset. The same applies to the extremely high test-retest reliabilities. The fact that some variables in the data do not show that weird pattern does not render the weird data less weird. One cannot be excused for lying by being honest about other things. The excuse that he might have accidentally confused simulated data with real data is a last resort of someone who appears to have fabricated the data altogether. At the very least, he is a sloppy scientist who has evidently lied on several occasions.
Not only is he a sloppy scientist, but his argument that he was protecting participant information doesn’t hold up. At every step Mr. LaCour refused to furnish data in the name of protecting participant information, and at no point did he go to the IRB to ask for advice.
It is also implausible that Professor Green would make up a story involving the UCLA Department Chair as his informant; a story which can be verified or falsified simply by asking Jeffrey Lewis if the purported conversation actually took place.
According to the New York Times:
“One of the most damning facts in the critical review of Mr. LaCour’s work was that the survey company he told the Los Angeles LGBT Center he was working with did not have any knowledge of his project. He now says that, in fact, he did not end up using that survey company but another one.”
So far everything is as clear as mud!
What is clear is that Mr. LaCour is flailing, and throwing out further fabrications.
Maybe I missed that part of the response, but I can’t think of any legitimate way to obtain the correlations between the measurements he reports. The later measurements are more strongly correlated with the FIRST measurement than with the measurements closer to them in time. For example, measurements 2 and 3 correlate .93 with each other, but .97 and .96 with the first measurement.
I have made an illustration here: https://docs.google.com/drawings/d/1BYR6hXCz7LHcRpyxtXJZYQxegejF6wHnGwvSy9xnI2E/edit
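For readers who want a sense of why that ordering is odd, here is a rough R sketch with simulated panel data (a simple AR(1)-style drift process, nothing to do with the actual study): when attitudes drift over time, adjacent waves should correlate more strongly than distant ones, which is the opposite of the pattern described above.

set.seed(1)
n  <- 5000
w1 <- rnorm(n)                               # wave 1
w2 <- 0.9 * w1 + sqrt(1 - 0.9^2) * rnorm(n)  # wave 2 drifts from wave 1
w3 <- 0.9 * w2 + sqrt(1 - 0.9^2) * rnorm(n)  # wave 3 drifts from wave 2
round(cor(cbind(w1, w2, w3)), 2)
# cor(w2, w3) comes out near .90 while cor(w1, w3) is near .81: distant waves
# correlate less, not more, in this kind of simple drift model.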
Am I the only person who finds the way this whole story has come out disturbing? Mike LaCour may well have faked his data, but instead of the deliberate, reasoned way that science should be done, we have had trial by media, publication on the internet without proper review, and statements on Twitter reduced to 140 characters.
LaCour should have been given the time and opportunity to respond in detail to the very serious charges presented by Broockman et al. before matters became public. Instead Green apparently gave him a 24 or 48 hour ultimatum. Moreover, the paper by Broockman et al. should have had some sort of independent third party review before publication. After all, it’s a scientific paper, and publishing scientific papers on the internet without peer review is just wrong.
LaCour has now responded and while admitting some wrongdoing, is now offering rebuttals to the most serious accusation of faking data. These statements should be straightforward to disprove or verify and the truth will be out. Whatever happens though, LaCour’s career is ruined, and because of all the media coverage, the reputation of science in the public mind has been damaged.
My point is that it needn’t and shouldn’t have happened this way. Green could have written to Science without going public. The Journal could then have asked LaCour for his detailed response and presented this, together with Broockman et al. for urgent third party review. In parallel, UCLA could have had a full investigation into funding sources, Qualtrics data sets and so forth. This way, the first public and press statement, if any, would have been in the journal Science as an expression of concern or a retraction, which is how it should be.
Green apparently acted because of the flagrant falsification (couldn’t find ANYTHING that didn’t point to fabrication) and the high profile of the story and subject matter.
LaCour has now responded and while admitting some wrongdoing, is now offering rebuttals to the most serious accusation of faking data. These statements should be straightforward to disprove or verify and the truth will be out.
No, the statements are not “straightforward to disprove or verify”, because LaCour claims to have destroyed the evidence. See the title of the post.
His current position is that he lied about some things, like the lucrative research grants which comprised large parts of (some versions of) his CV, while on other things he was telling the truth… but he can’t prove that because of destroying the data (rather than make anonymised versions of it available to other researchers for re-analysis).
Ultimately, if the evidence no longer exists then it doesn’t matter whether LaCour is telling the truth about originally collecting evidence. It’s gone now.
Herr Doktor Bimler,
Surely the point is that there have been two massive takedowns in social psychology – Jens Foerster and Diederik Stapel – and both were done by people quietly and anonymously taking their concerns to the equivalent of their institution’s Research Integrity Officer. And perhaps you didn’t get the sugar-hit of instant gratification of Twitter outrage, but in the end a fair result was delivered.
According to David Broockman: “Part of the message that I wanted to send to potential disclosers of the future is that you have a duty to come out about this, you’ll be rewarded if you do so in a responsible way”
Then the question has to be posed: was this a responsible way?
Using the Research Integrity Office protects the person accused from an unfair or injudicious accusation; it also protects the accuser (in theory) from the negative consequences of an accusation made in good faith that turns out not to be fully substantiated.
If Broockman et al. are on the money then they will succeed, but it surely isn’t an example I would want others to follow in the belief it is “the responsible way.”
Green lost confidence in the integrity of the paper he had co-authored, and went to the journal with his concern. Should he have left the paper out there, accruing citations, knowing that the source data didn’t exist, while the university and the ORI slowly recapitulated his own investigations? That is a big ask.
I was referring more to Broockman et al. than to Green. I am not interested in the timeline, but arguably the appropriate thing for Green to do – with the benefit of hindsight – was to bring it to the attention of the UCLA RIO and notify the journal. But I believe there have to be exceptional circumstances before the issue of a dodgy paper accruing citations is sufficient reason to bypass the RIO process.
Provided the study was roughly performed as described, I think the grounds for believing the findings are flawed are still not strong. It would be fair to say that the antics of LaCour are creating the impression in some minds that he faked the entire study. I would still find that very surprising. The secondary (but still important) issue is that LaCour misrepresented the funding; that misrepresentation is now a given, and possibly he is still misrepresenting it.
Grey Rabbit:
Still not strong?—you’ve got to be joking.
Yes, and there were “exceptional circumstances” here.
When LaCour published a scientific paper, he opened up his work to public criticism. Broockman & Kalla’s criticism has been peer reviewed by all of the scientists who’ve read it. It has passed that peer review process since no-one (AFAIK) thinks the criticism was flawed.
My view on this issue is that the problem is that there is no standard way to deal with cases like this, where there is a high-profile paper where dishonesty is suspected. As a result people come up with ad hoc approaches which then can be subject to criticism. An argument against giving someone a chance to respond early is that it would just make it easier for that person to conceal his/her behavior.
In the end though, I don’t think in this case it mattered because of the extreme and blatant nature of LaCour’s lying. Maybe in the future there will be a better system in place for dealing with cases like this.
Who watches the peer reviewers? Who watches the RIO? It is still a system that can be compromised by internal conflict of interest, whether on the part of the RIO, the particular department, its chair, its powerful faculty, or the university. INTERNAL is the problem: who watches the peers? So there will always be a prospect of tension, regardless of whether there is a standard. The standard is only as fair as the integrity of the authority’s members. Thus there also needs to be protection for whistleblowers, and constraints against inadvertent rewards or other abuse of process FOR phony or otherwise conflicted whistleblowers…and then there is the problem of those conflicts which remain hidden. But surely it is possible for intelligent people to perceive the behaviors that reward honesty and those that reward dishonesty, and to address the latter so as at least not to add to the perversions. Scientists owe it to themselves not to live in a bubble or seek one. If everyone is not dedicated to being responsible, how can anyone truly be?
Such judgment and forethought IS possible, and the conclusion to go public may well be fully warranted, especially if the process you suggest can be perverted to reward and further fraud.
Another LaCour paper under fire: http://polisci.emory.edu/faculty/gjmart2/papers/lacour_2014_comment.pdf
The dog, er, computer, ate my homework defense.
Lame.
As a molecular biologist I find this whole discussion more than mildly amusing. Following John Searle I call for a constitutional separation of “sciences” (e.g. chemistry) and “musings” (e.g. political science).
Yes, because we’ve never seen scientific fraud from a molecular biologist. *rolls eyes*
Yeah, I can think of multiple papers that were just as egregiously faked in the hard sciences (especially, but not limited to, molecular biology), and many more that were the result of incompetent data handling. I enjoy making fun of the social sciences as much as any biologist, and poorly controlled and statistically invalid psychology studies are a major problem, but RW seems to have several examples of Photoshopped gel images every month. At least these particular political scientists know how to use R, which is more than most biologists can say. (I’m particularly shocked at the number of cheating biologists who can’t figure out how to use the airbrush or blur tools to clean up the obvious edges from splicing gel bands. I expect better from our graduate institutions!)
I’m envisioning an Onion article: “Graduate student debates adding ‘image manipulation’ to his CV.”
Yeah, that whole kinase cascade fiasco from a while back comes to mind. What’s eerily similar is the extent to which the authors seem to have gone beyond run-of-the-mill fabrication and constructed their own alternate reality. In ordinary cases it’s easy to see the incentives the person is weighing, like a desperate post-doc misrepresenting a gel, but something like this (if true, of course) strikes me as just bizarre. At a certain point it has to be easier to just do the study properly.
Count me among those who would vote for such an amendment: the separation of hard science (which would be called from that moment on just “science”) from soft science like social science, political science, etc…
I don’t think someone in molecular biology should be casting stones. LOL.
The amount of fraud in the “hard sciences” is huge. It’s different. There’s a lot of image fraud, study faking (cold fusion), and so forth. It’s an example of ridiculous arrogance, in addition, to hold up one’s own area as the model. No science is immune from fraud.
UCLA’s (or any institution’s) requirements for the handling of data and identifying information can usually be found easily online.
Here’s a link to the PDF from UCLA’s office of human research protection outlining, in detail, policies related to data collected from online surveys.
http://ora.research.ucla.edu/OHRPP/Documents/Policy/8/Internet_Research.pdf
Of particular interest:
• The IRB must review and approve the method and procedures for data collection and security.
• Investigators must provide information regarding the transmission and storage of the data.
• When an Investigator chooses to have a separate server for data collection or storage, the IRB must review and approve its administration.
I am especially surprised by the replication study he presents on the final page, which according to LaCour reproduces the effect. It seems to me that the massive effect of the original paper is not replicated, because (1) the sample difference is only 4.75 percentage points and (2) the sample is much too small to speak of an effect here (CI for the canvass group: [.244, .617]).
Additionally, I would like to note that the Kolmogorov-Smirnov test is very powerful and can detect differences between distributions that are highly alike. The K-S tests are therefore only informative if we can also see the separate cumulative distributions of each variable on which the tests are based. X-Y plots do not show the absolute differences that the K-S test uses (the maximum absolute difference is the test statistic, so a discrepancy at only one point of the distribution can already make the test significant).
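To make the power point concrete, here is a quick R sketch with simulated data (not LaCour’s): with samples of this size, even a trivial shift between two otherwise nearly identical distributions is flagged by ks.test, which is why seeing the separate CDFs matters.

set.seed(2)
x <- rnorm(1e5)                         # baseline distribution
y <- rnorm(1e5, mean = 0.03)            # a trivially small shift
ks.test(x, y)                           # rejects at conventional levels despite the tiny shift
plot(ecdf(x), main = "Two nearly identical CDFs")
plot(ecdf(y), add = TRUE, col = "red")  # the curves whose maximum gap is the D statistic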
In these kinds of cases I always hope the response can explain the situation or raise doubts about the accusations, but I have failed to be convinced here (similarly in the Förster case). The accusations were just too well founded and were supported by diligent documentation and methods. Whether fraud occurred in either case is a legal issue (i.e., one of intent), but it seems clear that the scientific body of evidence cannot build on these specific findings.
Kudos to LaCour for the extensiveness and speed with which he has written this response. I would have probably gone haywire in his situation and not produced a single page.
I’m confused by your first point about the replication study. I don’t think we have enough information to speculate on the statistical significance of the effect. The N’s are not people, but precincts; we need to know standard errors to determine whether the two percentages are different. We can’t get that without the number of voters in each precinct.
At any rate, the author of the replication study has tweeted that (1) his study is not a replication, as it uses different methods, and (2) he hasn’t finished analyzing his results.
https://twitter.com/bcalfano
The standard error for the reported proportions can easily be computed at the precinct level (sqrt([p * (1-p)] / total N)). His statements are at the precinct level as well.
It matters whether there are a million people in each precinct, or 1,000, or 10. Intuitively, it would be much easier to change a proportion by 5 percentage points if the total n were 20 people than if it were 1,000 people, regardless of whether those n people were grouped into 10 precincts or 2.
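A minimal R sketch of that intuition (the p and n values are hypothetical, since the actual precinct sizes are not reported): the standard error of a proportion near 0.45 shrinks quickly with n, so the same 4.75-point gap can be pure noise with a few dozen voters but not with thousands.

p <- 0.45
for (n in c(20L, 1000L, 100000L)) {
  se <- sqrt(p * (1 - p) / n)                      # standard error of the proportion
  cat(sprintf("n = %6d voters: SE = %.4f, 95%% CI half-width = %.4f\n",
              n, se, 1.96 * se))
}
# With n = 20 the half-width is about 0.22 (22 points); with n = 100,000 it is
# about 0.003, so whether 4.75 points is "an effect" depends entirely on n.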
Kudos to LaCour? Hm, I don’t know for what really. Certainly not for that response, regardless of its “extensiveness and speed”. In that situation, you probably should go haywire indeed.
Regarding the replication: you can replicate “wrong” results, too. The replication does not tell one all too much about the validity of the data.
I am not giving kudos to the content, just to the fact that he wrote it this quickly and that it is somewhat internally consistent. My disagreement with the response stands.
Of course you can replicate wrong results; the case here was that the results were not even near what he was saying they were.
Why are you surprised? I think you are bang on about the high power of the test allowing one to “show” a link when there is not one by drastically magnifying the scope of virtually any variability. But why do you find it “surprising”?
Surprised by the interpretation, because of the arguments I posed. I could have been clearer there 🙂 Thanks.
My comment pertains only to the ‘X-Y plot’ comment and the R code.
LaCour has provided the following R code on page 3 of his response; the last line of the copied code here refers to a Q-Q plot:
>>>>>>>
ks.test(lacour.therm, ccap.therm)
##
## Two-sample Kolmogorov-Smirnov test
##
## data: lacour.therm and ccap.therm
## D = 0.067086, p-value < 2.2e-16
## alternative hypothesis: two-sided
qqplot(ccap.therm, lacour.therm, ylab = "LaCour (2014), Studies 1 and 2 Therm", xlab = "CCAP Therm")
<<<<<<<<<<
I understand a Q-Q plot is a derived form of the CDF.
(See paragraph 1 of the following link)
http://www.stat.wisc.edu/~mchung/teaching/MIA/theories/quantile.feb.05.2007.pdf
So, would you comment on why applying a K-S test is a problem?
I do see that with a huge n, even a small K-S Distance would automatically make the distance significant.
Also, Broockman et al. provided the Q-Q plot, and there is no reason to expect LaCour to have done it differently.
On another issue: LaCour does not deserve any kudos, but one can learn from the savvy R code provided by both parties and the graphical displays of data. The latter would make Tufte proud.
Again, my comments only pertain to the R code.
You are correct in saying the Q-Q plot is a combination of CDFs and that one can derive the difference from them. All I am saying is that I find it difficult to assess the maximum absolute difference on which the K-S test is based, and I would like to see the individual CDFs plotted on the same axis.
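A rough sketch of what I mean, reusing the variable names from the quoted code above (so it assumes lacour.therm and ccap.therm already exist in the workspace): overlay the two empirical CDFs and recover the maximum absolute gap that ks.test reports as D.

plot(ecdf(ccap.therm), main = "Empirical CDFs", xlab = "Thermometer score")
plot(ecdf(lacour.therm), add = TRUE, col = "red")
pts <- sort(unique(c(lacour.therm, ccap.therm)))                 # all jump points
D   <- max(abs(ecdf(lacour.therm)(pts) - ecdf(ccap.therm)(pts)))
D                                                                # should match the D from ks.test()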
Again, the kudos is not for the content, just for the fact that he wrote a 23-page response that is not just lorem ipsum text. It must have cost him a tremendous amount of energy to produce this (yes, I know almost nobody cares). It is a weak response, nonetheless.
Something on LaCour’s awards listed on his CV:
Does anybody have information regarding a “Walter Winchell Fellowship in Communications & Journalism”, awarded by UCLA’s Department of Communication Studies? LaCour lists this award on his CV. I cannot find anything even remotely similar to such a fellowship, apart from LaCour mentioning that he got it in 2013/2014.
The Pi Sigma Alpha award for his presentation at the Midwest Political Science Association is now being reconsidered, after the revelations. His talk was on what came to be the Science article. See here:
http://www.mpsanet.org/Awards/2015AwardRecipients/tabid/943/Default.aspx
Also, I am unable to find supporting information for his claim of having been named “Super Reviewer” in 2013 by the American Journal of Political Science. I could not verify that anything like that is awarded by the journal. Anybody?
A Google search for “Walter Winchell Fellowship in Communications and Journalism” lists a UCLA alumnus who apparently received the Walter Winchell Scholarship in Journalism in 1994. However, my personal reading of this listing is quite negative.
After various searches using combinations of “Winchell” “Award” “Fellowship” “Winchell UCLA” “UCLA Political Science Fellowships” “UCLA Winchell Communications” among others, I can find a Winchell award for STEM researchers, a Winchell award for semantics (nothing to do with Walter Winchell though), and a Runyon-Winchell Fellowship for Cancer Research. None of these have anything to do with Political Science, Journalism or Communications.
Whilst I can find a Walter Williams Scholar Program offered by the Missouri School of Journalism, this Fellowship has nothing to do with UCLA or, indeed, Walter Winchell. The 1994 recipient of this Fellowship doesn’t mention ever attending Missouri so it is unlikely to be some sort of error in the 1994 listing.
In short, no Fellowship with this name and for this academic purpose could be found after 2 hours of Google searching.
After searching various UCLA Political Science and UCLA Communication Studies webpages, I can find no Winchell Fellowship offered by either department. I cannot look at the crucial December 2014 / Page 3 in the News section of UCLA’s Communication Studies Department, corresponding to the date of the Science paper, because some nonsense from Forbes magazine keeps getting in the way, although Page 4 and the remaining pages are available. Nothing about LaCour receiving this award is listed in 2013/14 or any other year in the News pages, although someone else was awarded a Fellowship with a totally different name in 2011, which serves as a reference for the type of information these pages do include.
The only hit I get for this Fellowship is LaCour’s CV. I can find no announcement on any other webpage about LaCour receiving the Fellowship, no announcement on any other webpage about this Fellowship being received by any other academic in any year, and no other CV listing this Fellowship or any other claimant (except for the 1994 entry).
You appear to be right. I highly doubt that this award or Fellowship actually exists.
Google “American Journal of Political Science” and “super reviewer” and you’ll find vitae of other scholars who have received that designation.
Thanks for the information Bill.
This story is getting increasingly bizarre, to the point where I don’t think amateur sleuthing is productive or healthy for me, and is probably unfair to the individual concerned.
I’ve made up my mind about the research in question. Barring a somewhat unlikely turn of events, I won’t ever be asked to make an actual decision involving this person, so I will just assume that UCLA knows what it is doing and cease caring.
Also, may I make another point: how are we to judge an entire sector, peer review and research integrity, that undergoes significant change by grace of what surely cannot be a predictable, or arguably even probable, event: the creation of a certain unique retractions blog and the enormous effort by its creators to maintain and foster it, and fill it with content? What if Oransky had turned left instead of right at the light the day he conceived it, and a bus had hit him, inducing anterograde amnesia? What is to be said of a field, peer review and research integrity, that turns so much on the wingbeats of a butterfly in Cleveland (or wherever he was that day)? Not reassuring.
A superfluous observation perhaps, but there’s something about this turn of phrase that’s trying to capture my imagination:
LaCour should team up with Michael Bellesiles. The two of them could create some interesting research (emphasis on ‘create’).
Research that could be used to influence society should be flagged early and loudly if error or fraud is suspected. We still have people believing that the MMR vaccine causes autism, DDT causes significant eggshell-thinning, and that most people in early America did not own guns. The last example was used in Second Amendment court cases before the fraud was flushed out.
It would be to the detriment of society to handle such research in a careful, quiet, reserved fashion. Post it on the internet and let others debate it before it becomes “settled science” for a new cult. If you kept up with the chocolate ‘sting’ paper, the author said that the news media didn’t bother to ask obvious questions or do any research on the paper’s authors, but that those questions were asked by the online community in comments.