Fredrickson-Losada “positivity ratio” paper partially withdrawn

In 2005, Barbara Fredrickson and Marcial Losada published a paper in American Psychologist making a bold and specific claim:

…the authors predict that a ratio of positive to negative affect at or above 2.9 will characterize individuals in flourishing mental health.

The paper made quite a splash. It has been cited 360 times, according to Thomson Scientific’s Web of Knowledge, and formed the basis of a 2009 book by Fredrickson, Positivity: Top-Notch Research Reveals the 3 to 1 Ratio That Will Change Your Life.

But something didn’t sit right with Nick Brown, a psychology grad student at the University of East London. He found the paper’s claims wanting, and contacted Alan Sokal — yes, that Alan Sokal, who published a fake paper in Social Text in 1996. Sokal agreed, and he, Brown, and Harris Friedman published a critique of the paper in July of this year in American Psychologist. Its abstract:

We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the “positivity ratio.” We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time; furthermore, we demonstrate that the purported application of these equations contains numerous fundamental conceptual and mathematical errors. The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada’s claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such as nonlinear dynamics, and in particular to verify that the elementary conditions for their valid application have been met.

Fredrickson responded in another paper in American Psychologist:

…I draw recent empirical evidence together to support the continued value of computing and seeking to elevate positivity ratios. I also underscore the necessity of modeling nonlinear effects of positivity ratios and, more generally, the value of systems science approaches within affective science and positive psychology. Even when scrubbed of Losada’s now-questioned mathematical modeling, ample evidence continues to support the conclusion that, within bounds, higher positivity ratios are predictive of flourishing mental health and other beneficial outcomes.

But in the wake of the critique, Discover blogger NeuroSkeptic called for the Fredrickson-Losada paper to be retracted. The Chronicle of Higher Education covered the story in some detail in early August.

Now, the Fredrickson-Losada paper has been partially withdrawn. Here’s the notice, which appeared on September 16:

Reports an error in “Positive Affect and the Complex Dynamics of Human Flourishing” by Barbara L. Fredrickson and Marcial F. Losada (American Psychologist, 2005[Oct], Vol 60[7], 678-686). The hypothesis tested in this article was motivated, in part, by the nonlinear dynamic model introduced in Losada (1999) and advanced in Losada and Heaphy (2004) and herein (Fredrickson & Losada, 2005). This model has since been called into question (Brown, Sokal, & Friedman, 2013). Losada has chosen not to defend his nonlinear dynamic model in light of the Brown et al. critique. Fredrickson’s (2013) published response to the Brown et al. critique conveys that although she had accepted Losada’s modeling as valid, she has since come to question it. As such, the modeling element of this article is formally withdrawn as invalid and, along with it, the model-based predictions about the particular positivity ratios of 2.9 and 11.6. Other elements of the article remain valid and are unaffected by this correction notice, notably (a) the supporting theoretical and empirical literature, (b) the data drawn from two independent samples, and (c) the finding that positivity ratios were significantly higher for individuals identified as flourishing relative to those identified as nonflourishing.

We asked Sokal what he thought of the move:

I would say that it is a positive step, but it still leaves some key issues unresolved.  That is because Fredrickson’s response to our paper fails to make clear which claims of the Fredrickson-Losada paper she is withdrawing and which ones she is reaffirming; and the brief withdrawal notice partially compounds the confusion.

In the 2005 paper and in her 2009 book, Fredrickson asserted that a discontinuous phase transition — analogous to the phase transition between liquid water and ice — occurs when the positivity ratio passes through the value 2.9013 (in her book she most often rounded this off to 3).  The only reason for entertaining such a radical claim was the nonlinear-dynamics model, which Fredrickson and Losada have now officially withdrawn.  Nevertheless, Fredrickson insists in her 2013 response that “Whether the outcomes associated with positivity ratios show discontinuity and obey one or more specific change points, however, merits further test.”

Now, one could conceivably argue that any hypothesis, no matter how implausible a priori, “merits further test”;  so this sentence may simply be an unobjectionable way of saving face.  But Fredrickson is apparently not yet prepared even to abandon the attempt to model the time evolution of human emotions using the Lorenz equations: “Whether the Lorenz equations — the nonlinear dynamic model we’d adopted — and the model estimation technique that Losada utilized can be fruitfully applied to understanding the impact of particular positivity ratios merits renewed and rigorous inquiry.”

It would therefore be valuable to know whether Fredrickson’s published response to our paper represents her current opinion, or whether she has now abandoned also the attempt to model the time evolution of human emotions using the Lorenz equations and/or the attempt to find discontinuous phase transitions.

Finally, in their 2005 paper (p. 684) Fredrickson and Losada claim that their two samples provide empirical support for a phase transition at 2.9:

“More critical to our hypothesis, however, in each sample, these mean ratios flanked the 2.9 ratio.”

“Supporting the hypothesis derived from Losada’s (1999) nonlinear dynamics model, we found in two independent samples that flourishing mental health was associated with positivity ratios above 2.9. … The relationship between positivity ratios and flourishing appears robust …”

Now, it is very easy to demonstrate that their empirical data do not show anything of the kind — and indeed that, given their experimental design and method of data analysis, no data whatsoever could possibly give any evidence of any nonlinearity in the relationship between “flourishing” and the positivity ratio — much less evidence for a sharp discontinuity.  For lack of space we omitted from our paper such a discussion, but we will be happy to provide it if the editors of American Psychologist give us the opportunity to respond to Fredrickson’s comment on our paper (thus far they have refused).

The partial withdrawal notice says that

“Other elements of the article remain valid and are unaffected by this correction notice, notably … (b) the data drawn from two independent samples, and (c) the finding that positivity ratios were significantly higher for individuals identified as flourishing relative to those identified as nonflourishing.”

But what about the claim that their data provide empirical evidence for a phase transition at or near 2.9?  Are they withdrawing this claim as well, or not?  Both Fredrickson’s response and the withdrawal notice are unclear on this point.
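
For readers who want a concrete picture of the model at the center of this dispute: the “nonlinear dynamic model” is the standard Lorenz system borrowed from fluid dynamics. The sketch below is ours, purely for illustration (it is not Losada’s code); it integrates the three Lorenz equations with the conventional parameter values sigma = 10 and b = 8/3, and shows the qualitative change in behavior as the control parameter r crosses the chaos threshold near 24.74, the number from which the 2.9013 ratio was ultimately derived.

```python
# A minimal illustrative sketch (ours, not Losada's code): integrate the
# standard Lorenz system and compare behavior below and above the chaos
# threshold r_crit = sigma*(sigma + b + 3)/(sigma - b - 1), about 24.74 for
# the conventional parameter values sigma = 10, b = 8/3.
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, B = 10.0, 8.0 / 3.0

def lorenz(t, state, r):
    x, y, z = state
    return [SIGMA * (y - x),      # dx/dt
            x * (r - z) - y,      # dy/dt
            x * y - B * z]        # dz/dt

for r in (10.0, 28.0):            # r = 10: settles to a fixed point; r = 28: chaotic
    sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], args=(r,),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    x_late = sol.sol(np.linspace(40.0, 50.0, 2000))[0]   # x(t) for t in [40, 50]
    print(f"r = {r:4.1f}: x(t) stays within [{x_late.min():7.2f}, {x_late.max():7.2f}]")
```

Nothing in this sketch involves emotions or data, which is precisely the critics’ point: the 2.9013 threshold is a property of these equations and their conventionally chosen parameters, not of any psychological measurement.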

Brown also gave us some thoughts about “some very basic issues with the empirical part of the Fredrickson and Losada (2005) paper”:

  • The datasets analysed were, by Fredrickson’s own admission, taken from studies that had been conducted for some other purpose and were analysed post hoc; see http://www.youtube.com/watch?v=jvPHF3u5zL8 from about 29:30 for about four minutes (although the whole video is worth watching to get an insight into Fredrickson’s understanding of the math here).  Now, this doesn’t have to be a problem on its own, but it would have been good practice for the authors to declare it; if you read the relevant passage from the paper, you will see that it is very carefully worded.
  • Of the two datasets, one (Study 2) does not, in fact, achieve significance; the authors state a result of “t(99) = 1.62, p=.05”, but in fact the (one-tailed) p-value of that t-test is .0542, and you don’t get to round down and then state that your number doesn’t exceed .05!
  • Continuing the above, the authors give no justification for using a one-tailed test. It seems to me to be rather unusual to use a one-tailed test to determine whether the means of two groups are different, unless there are extremely powerful theoretical or logical reasons for believing that one mean must be greater than the other (for example, if you’re measuring a population that starts off fixed and declines over time).  In this case, however, it appears that the only possible justification for such a belief, and hence for using a one-tailed test, is the assumption that the theory of the critical positivity ratio is correct, which is the very theory being tested by the experiment.

We will be addressing these issues in our reply to Fredrickson, whether that appears in AP or another journal.  Even if the empirical work were completely spotless, however, there would remain the question of just how a paper can be allowed to stand when approximately 60% of it, by word count, has been marked by the first author as “invalid”.  I’m sure you’ve seen plenty of articles completely retracted for erroneous content occupying one-tenth as much of the whole.  Clearly the 340+ scholarly citations of this article are not because of two small empirical studies of undergraduates that produced the unsurprising result that people who have flourishing lives also express more positive emotions; its popularity results from its substantial (not to say grandiose) claims to general truth.  Fredrickson’s correction does little to address this, unless AP is planning to re-issue the PDF of the article with “Corrected” stamped across every page.
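
A quick numerical check of the second point on Brown’s list: the 2005 paper reports “t(99) = 1.62, p=.05” for Study 2, and the one-tailed tail probability of t = 1.62 on 99 degrees of freedom is indeed about .054, not .05. The sketch below is ours and assumes only the test statistic and degrees of freedom quoted above.

```python
# Recompute the Study 2 p-value from the reported statistic, t(99) = 1.62.
# This is our check of the arithmetic Brown describes; only the reported
# t value and degrees of freedom are taken from the paper.
from scipy import stats

t_stat, df = 1.62, 99
p_one_tailed = stats.t.sf(t_stat, df)   # P(T > 1.62), T ~ Student-t with 99 df
p_two_tailed = 2 * p_one_tailed         # the more conventional two-tailed test

print(f"one-tailed p = {p_one_tailed:.4f}")   # about 0.054, i.e. above .05
print(f"two-tailed p = {p_two_tailed:.4f}")   # about 0.108
```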

Sokal told us he thinks the case raises other issues:

Last but not least, there is a huge open question, which concerns not Fredrickson and Losada but the entire psychology community, and particularly those people working in “positive psychology”. How could such a loony paper have passed muster with the reviewers at the most prestigious American journal of psychology, netted 350 scholarly citations, and been repeatedly hyped by the “father of positive psychology” (and past president of the APA), without anyone calling it into question before a first-term part-time Masters’ student in Applied Positive Psychology at the University of East London came along and expressed his doubts?  Where were all the leaders in the field of positive psychology?  The leaders in the application of nonlinear-dynamics models to psychology?  Was everyone really so credulous?  Or were some people less credulous but politely silent, for reasons of internal politics?

Hat tip: Dale Barr

30 thoughts on “Fredrickson-Losada “positivity ratio” paper partially withdrawn”

  1. Partially withdrawn? So is this a “R”, “Re”, “Ret”, “Retr”… …or a “Retractio”?
    At least we can have faith in the young, who have far better critical faculties than many of the Faculty!

  2. “Last but not least, there is a huge open question, which concerns not Fredrickson and Losada but the entire psychology community, and particularly those people working in “positive psychology”. How could such a loony paper have passed muster with the reviewers at the most prestigious American journal of psychology, netted 350 scholarly citations, and been repeatedly hyped by the “father of positive psychology” (and past president of the APA), without anyone calling it into question before a first-term part-time Masters’ student in Applied Positive Psychology at the University of East London came along and expressed his doubts? Where were all the leaders in the field of positive psychology? The leaders in the application of nonlinear-dynamics models to psychology? Was everyone really so credulous? Or were some people less credulous but politely silent, for reasons of internal politics?”

    Yes !! Yes, and yes !!

    Where were all the scientists in psychology?

    Where are all the scientists in psychology?

    Maybe there should be an inverse retraction watch: for psych researchers whose work is found to be replicable, for projects that are scientifically sound, and for journals that have high standards.

  3. This is a total abuse of the retraction process. Retraction should be done when there is fraud, plagiarism, or other errors. It is NOT appropriate for scientific disagreements. Scientific disagreements should be settled in the literature, by a rebuttal paper which demonstrates the errors in the original. This is a dreadful development.

    1. Our (Brown, Sokal, and Friedman) article *is*, for all practical purposes, a rebuttal, which indeed demonstrates the (many) errors in the original; we submitted it to the journal that published the original article, and it was accepted. So we have, I believe, done entirely the right thing, both in our terms and yours (@Statistical Observer). What the authors of the Fredrickson and Losada article choose to do in reply to our article is entirely up to them. So I’m not sure I understand what point you are making here (i.e., whom you are criticising, or for what). Personally, I don’t see how it’s an abuse of the retraction process for Fredrickson to issue a correction that accepts the main points of our rebuttal, although of course the wisdom of doing that (as opposed to, say, withdrawing the paper completely) can be discussed.

      That said, the errors in the three papers that we examined (Fredrickson and Losada’s model is based on two earlier papers that are arguably considerably worse than their 2005 article) are so egregious that we are arguably not in the realm of “scientific disagreement” here. Losada’s claims are a priori astonishing (that’s a euphemism), and we are amazed that they passed peer review in three journals in different scientific fields. In our article we carefully avoid going into questions of motivation, such as whether or not this constitutes fraud, but the net effect — namely, that something which cannot possibly be true was touted as fact and cited by hundreds of people — is perhaps not too dissimilar from what might have happened if one of the authors had indeed set out to pull the wool over the eyes of their co-authors and/or the scientific community.

    2. ‘We will be addressing these issues in our reply to Fredrickson, whether that appears in AP or another journal. Even if the empirical work were completely spotless, however, there would remain the question of just how a paper can be allowed to stand when approximately 60% of it, by word count, has been marked by the first author as “invalid”. I’m sure you’ve seen plenty of articles completely retracted for erroneous content occupying one-tenth as much of the whole. Clearly the 340+ scholarly citations of this article are not because of two small empirical studies of undergraduates that produced the unsurprising result that people who have flourishing lives also express more positive emotions; its popularity results from its substantial (not to say grandiose) claims to general truth. Fredrickson’s correction does little to address this, unless AP is planning to re-issue the PDF of the article with “Corrected” stamped across every page.’

      I agree that scientific disagreements are no reason for withdrawal, but I think this might be a special case: ‘Even if the empirical work were completely spotless, however, there would remain the question of just how a paper can be allowed to stand when approximately 60% of it, by word count, has been marked by the first author as “invalid”’

    3. I would say this comes under the heading of “other errors”. A technical process (mathematics) was applied wrongly, and the results based on that process were therefore wrong: the authors, having now realized the error, don’t stand by the results.

      It’s as if they have just found out (or just admitted) that their equipment was calibrated wrong or their software had a bug that affected the results. Such retractions happen all the time.

      This is not a new ‘development’. Except perhaps in the history of scientific scandals.

    4. Everyone will agree that a retraction is not the way to resolve a scientific disagreement. But it is important not to miss the fact that (1) there has been, as yet, no retraction, only a correction, and (2) there is no scientific disagreement about whether or not the math in the original article is fundamentally flawed. Brown et al. have convinced everyone, even the original authors (see Fredrickson’s reply, and the correction notice).

    1. This brings up a question for me…. How does a journal handle a situation like this when the author whose work is being questioned is an associate editor of the journal? How does this influence things?

  4. Well, as a soft science we have to endure this nonsensical work. Articles in major psychology journals have promoted false memories (with dire consequences), facilitated communication (again, with serious consequences for families), broadening of mental disorder criteria (to the delight of Big Pharma), and silly psi to name just a few. It is sometimes embarrassing to mention that I am a psychologist!

    1. I agree with ferniglab: how do you partially retract a paper? By leaving in half the PDF file? This is nonsense. A clear case of editors sitting on the fence. If it is fraud, or malpractice, or serious error, then it is clear-cut retraction. But, if it is an estimate made on research conducted almost 10 years ago that has been disproved by new data or evidence, then this is not fraud, or reason for retraction. This is science. Science evolves and so do theories as new evidence is produced. I don’t see why this paper has been retracted. The authors made a one-sentence claim based on their data set. It seemed perfectly valid at the time. I agree with Statistical Observer that this is an abuse of the “retraction process” and a “dreadful development”. The evolution of science needs to be expanded upon in discussions and publication of letters that dispute older data sets. Duplications, false data sets, manipulation. This is pure fraud and should be met with a strong response: retraction. Too much psychoanalysis, perhaps, has led to over-judgement?

      1. @JATdS

        I am totally with you that science evolves and new findings constantly falsify old theories.

        But here a mathematical model was applied nonsensically. And it was known, or could have been known, to be nonsensical at the time of the original publication.

        The fact that apparently the entire (social) psychology community was unable to detect the nonsense does not make it any better.

        The error in determining the statistical power of at least part of the empirical research in the paper makes it completely dispensable.

        The main point of the paper, that magical 2.9-something factor, has been erased. If this paper were 50 years old or older, it would be an interesting object of science-history research. But it’s from the 21st century, and a lot of research work is still being built on these unfounded findings. It’s a clear case for retraction.

  5. ” something which cannot possibly be true was touted as fact and cited by hundreds of people”

    Such a phenomenon has been puzzling me: who will verify the validity of those hundreds of papers whose arguments hinge entirely on the cited erroneous, implausible hypotheses?

    Some had their work rejected because their data disagreed with previously published fake findings which had been repeatedly verified in the literature!

    And this raises questions about the number of people who claim to agree with implausible findings, and about those who manage to publish replications of faulty research!

  6. A one-sided p-value? Seriously? That one should have raised massive red flags in the initial review. The number of situations where that is appropriate is so vanishingly small that it’s easier just to say ‘never.’

    1. Wrong.

      One-tailed t tests are reasonable when there is (a) a clear theory being tested with (somewhat redundantly) (b) a clear (set of) predictions.

      In this case (as in so many within psychological science) neither “a” nor “b” was satisfied.

  7. I dislike the fact that Fredrickson distances herself from the nonsense by calling it “Losada’s now-questioned mathematical modeling”. If she was not in a position to evaluate it, she should not have been an author, and should not have reaped the benefits of the nonsense for all these years.

  8. Psychologists and sociologists must be told to lose their taste for physics, mathematics, and statistics. The deeper they go into these sciences, the more opportunities they get for telling their nonsense to the public. And the more invulnerable to criticism they become: not many psychologists would dare to denounce the application of these smart equations. Meanwhile, the number of specialised “niches” grows. It’s probably the first time they have met real scientists. Well, actually, the first time they were shown the value of their science-shaped research was in the old paper by A. Sokal, but they apparently did not understand what it was about.

    The best way to cool their predilection for science would be to kick the whole “research” enterprise out of universities. The “research” would best be funded by political parties and “groups”, i.e. by its first line of consumers.

  9. “namely, that something which cannot possibly be true was touted as fact and cited by hundreds of people”

    and

    “Such a phenomenon has been puzzling me: Who would verify the validity of those hundreds of papers which totally hinges on cited erroneous incredible hypotheses?”

    Just to point out that citations have no inherent “approval connotation”.

    Without checking the context of each citation, you cannot draw any conclusion as to whether the citations were positive or negative in context, just that the paper was referred to.

  10. I have read the paper. What is the “complex order of chaos”? Or the “limit cycle of languishing”? Or the “complex dynamics of flourishing”? I can agree that “Appropriate negativity is a critical ingredient within human flourishing”, but I doubt that it “serves to maintain a grounded, negentropic system.”

    First they talked about “mental health”; then it changed to “human flourishing”. They started with this definition: “To flourish means to live within an optimal range of human functioning, one that connotes goodness, generativity, growth, and resilience.” I expected the paper to prove that “positivity” gives all this “goodness, generativity, growth, and resilience”. But no, they just played a trick and concluded that their research shows that positivity gives flourishing.

    I would say that mental health is simply an adequate reaction. I think that “flourishing mental health” is a nonsense that gave them the opportunity to publish one nonsense after another, many times. I am sure that positivity and negativity both show mental health when they are an adequate reaction. I don’t call this whole thing a fraud, as I don’t believe anybody would intentionally bring such disgrace upon herself. I call it a profanation of science. Previously, they used to go around your cranium with a compass; similarly, these authors have introduced tons of new terms.

    For those who didn’t see the paper, here are more pearls: “Our discovery of the critical 2.9 positivity ratio may represent a breakthrough.” “The positivity ratio that bifurcates phase space between the limit cycle of languishing and the complex dynamics of flourishing is 2.9.” “Flourishing is associated with dynamics that are nonrepetitive, innovative, highly flexible, and dynamically stable; that is, they represent the complex order of chaos, not the rigidity of limit cycles and point attractors.”

  11. (Sorry, my comment above had paragraph structure.)

    But there are more important things here. I would like to know how much influence this “positive psychology” has on medical practice and on our culture, government, policies, television production, employment policies, etc., etc. Here, the definition of mental health has been perverted. We are told the absolute nonsense that below a 3:1 prevalence of positive reactions, you have poor mental health. Humans are being treated inappropriately, to say the least. By that standard, the Retraction Watch site itself now appears to be lacking in mental health!

    Does anyone want to set things straight right now?
    I shall also note that all scientific discoveries were based on a NEGATIVE view of previous concepts, on a feeling of deep dissatisfaction, etc. What’s going on with psychology today?

    1. Very good point here.
      Everyone with a ratio of less than 3 is now mentally sick. All based on top-notch, high-impact research, of course… And they all need an SSRI…

      If anyone finds irony, he or she may keep it….. 😉

  12. An interesting point here is the question of what a correction or partial retraction should look like.

    In the end, you should be able to read a corrected or partially retracted version of the paper in full, so that you know what the current state of the paper is. There must be a final version of the paper representing it the way the authors want it to be.

    If you correct a single figure, I guess it is OK to just publish the corrected figure. Although even in that case, in this era of electronic publishing, there should be a corrected PDF of the entire document as well (plus, for the record, the original PDF too!).

    But here, in this case, if they want to call it a partial retraction, they have to rewrite the paper. They need to make clear what is left of the paper and what is not. Any attempt to do so would most likely make it obvious that there is not much left in the paper…

  13. Good on you, Nick Brown, Alan Sokal and NeuroSkeptic et al., for pushing hard for that spectacularly faulty paper’s retraction. Success must have tasted sweet. And hat tip to Tom Bartlett at The Chronicle of Higher Education for highlighting the episode, serious publicity that hopefully will “encourage” others not to publish faulty nonsense in academic journals.

    Nick Brown, I write because I too have been seeking the correction or retraction of a high-profile “peer reviewed” study that is complete nonsense, a paper confusing up with down in simple charts and embracing falsified data as fact: http://www.australianparadox.com/pdf/GraphicEvidence.pdf

    What a pity that, unlike you, I was not smart enough 18 months ago to enlist the help of a heavy-hitter like Alan Sokal: http://www.smh.com.au/national/health/research-causes-stir-over-sugars-role-in-obesity-20120330-1w3e5.html

    Not only have I been completely unsuccessful in securing the correction or retraction of the University of Sydney’s clownish Australian Paradox paper – self-published by a famous lead author (3.5 million low-GI diet books sold) acting as the “Guest Editor” of the MDPI journal that published it – I have not even been able to convince the Dietitians Association of Australia to stop promoting the paper’s reckless false claim that there is “an inverse relationship” between sugar consumption and obesity: http://daa.asn.au/for-the-media/hot-topics-in-nutrition/sugar-and-obesity/

    The lead author – University of Sydney’s highest-profile obesity expert – this week appeared in a discussion about softdrinks and obesity on Australian national radio:

    “JENNIE-BRAND-MILLER: It irritates me, frankly, to see that soft drinks are getting special mention yet again. Soft drinks are clearly a problem in US. American children drink about 10 times as much soft drink as our children do here in Australia. America has a problem. We don’t. And there is very, very little support for the idea that Australian children are putting on weight because of soft drink.” http://www.abc.net.au/worldtoday/content/2013/s3868327.htm

    Ten times? Are Australian kids on average really drinking only about 10% of the sugary softdrinks and sugary energy drinks that American kids drink? No, of course not.

    My best guess is that Professor Brand-Miller’s “about 10 times” estimate is wrong by a factor of five or more. That’s based on Coca-Cola’s published data: http://assets.coca-colacompany.com/ba/22/39fae0564dcda20c694be368b8cf/TCCC_2010_Annual_Review_Per_Capita_Consumption.pdf

    Readers, my working assumption remains that, pound for pound, Australia’s at-risk fat kids are drinking and eating as much sugary junk as US kids. Assuming Coca-Cola’s data provide a rough sense of orders of magnitude, the University of Sydney’s highest-profile sugar-and-obesity expert is wrong by multiples in her assessment that (sugary) softdrinks are not a problem for Australian kids.

    Did I mention that the University of Sydney authors operate a low-GI business that exists in part to charge food companies up to $6000 a pop to stamp particular brands of sugar and sugary products as Healthy?

    In my opinion, the University of Sydney should immediately correct or retract on national radio its befuddled “10 times” estimate. It’s hard enough to convince people to cut back on harmful sugary softdrink consumption without the University of Sydney recklessly claiming that we already are down by 90% versus US levels.

    So too, the University of Sydney should immediately correct or retract its clownish Australian Paradox paper.

    Are there any heavy hitters out there who would like to help me tackle the University of Sydney about the serious problems in its “nutrition science” area?

  14. From the original Brown paper:

    Unfortunately, there is one final, yet crucial, flaw lurking here: the values of sigma, b, and (especially) i plugged into Equation 6 are totally arbitrary, at least within wide limits; so the predicted critical positivity ratio is totally arbitrary as well. Choose different values of the parameters sigma, b, i and one gets a completely different prediction for (P/N)crit. Recall that Saltzman (1962) chose sigma = 10 for illustrative purposes and purely for convenience; then Lorenz (1963) and Losada (1999) followed him. Were humans to have eight fingers on each hand instead of five, Saltzman, and in turn presumably Lorenz and Losada, might well have chosen sigma = 16 instead of sigma = 10 — which (with b = 8/3) produces a very similar Lorenz attractor, except that the borderline of chaos is now rcrit = 1040/37 = 28.108, and the predicted critical positivity ratio (with i = 16) is (P/N)crit = 1233/296 = 4.1655405. Yet other values of sigma, b, i would yield still different predictions for (P/N)crit. Thus, even if one were to accept for the sake of argument that every single claim made in Losada (1999) and Losada and Heaphy (2004) is correct, and even if one were to further accept that the Lorenz equations provide a valid and universal way of modeling human emotions, then the ideal minimum positivity ratio that Fredrickson and Losada (2005) claimed to have derived from Losada’s “empirically validated” nonlinear-dynamics model would still be nothing more than an artifact of the arbitrary choice of an illustratively convenient value made by a geophysicist in Hartford in 1962.
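
The arithmetic quoted in that passage is easy to check: the “borderline of chaos” values come from the standard stability threshold of the Lorenz system, r_crit = sigma * (sigma + b + 3) / (sigma - b - 1). A minimal sketch, assuming only that formula and the fractions quoted above:

```python
# Verify the chaos-onset arithmetic quoted from Brown, Sokal & Friedman,
# using the standard Lorenz stability threshold
#   r_crit = sigma * (sigma + b + 3) / (sigma - b - 1).
from fractions import Fraction

def r_crit(sigma, b):
    return sigma * (sigma + b + 3) / (sigma - b - 1)

b = Fraction(8, 3)
print(r_crit(Fraction(10), b), float(r_crit(Fraction(10), b)))   # 470/19  ~ 24.737 (the classic value)
print(r_crit(Fraction(16), b), float(r_crit(Fraction(16), b)))   # 1040/37 ~ 28.108 (the sigma = 16 case above)
print(float(Fraction(1233, 296)))   # 4.1655405..., the alternative (P/N)crit quoted above
```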

  15. Retraction should be done when there is fraud, plagiarism, or other errors. My opinion is that this was fraud. An arbitrary number was picked out of the air and presented as an empirical finding.

  16. I’ve just started my final year for undergrad Psych. The fact that this can happen, and that such a fuss has to be made to demonstrate obvious errors, makes me seriously question whether Psychology is a field I wish to enter.

    1. Jennifer – don’t be discouraged. This is how science works when it works well. Person A publishes research, Person B critiques it or does additional research, and the truth comes out in the end. It’s the scientific method working well – an approach that non-science fields don’t have.

  17. A study should be replicable; that’s what matters most. This study is, and therefore it is not fraud. We all make mistakes, but that doesn’t mean we make them on purpose.
