Rapid mood swing: PNAS issues Expression of Concern for controversial Facebook study

The Proceedings of the National Academy of Sciences (PNAS) has issued an Expression of Concern for a much-criticized Facebook study it published just two weeks ago.

From the abstract of the original study:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and non-verbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.

In other words, the researchers manipulated hundreds of thousands of Facebook feeds to see what effect it would have.

Critics — and there were many online — said the study violated ethical norms because it did not alert participants that they were taking part. The standard for studies involving human subjects is that they be approved by an institutional review board (IRB). In a story titled “Even the Editor of Facebook’s Mood Study Thought It Was Creepy,” referring to Susan Fiske, who edited the paper for PNAS, The Atlantic reports:

…there seems to be a question of whether Facebook actually went through an IRB. In a Facebook post on Sunday, study author Adam Kramer referenced “internal review practices.” A Forbes report, citing an unnamed source, said that Facebook only used an internal review. When I asked Fiske to clarify, she told me the researchers’ “revision letter said they had Cornell IRB approval as a ‘pre-existing dataset’ presumably from FB, who seems to have reviewed it as well in some unspecified way… Under IRB regulations, pre-existing dataset would have been approved previously and someone is just analyzing data already collected, often by someone else.”

The mention of a “pre-existing dataset” here matters because, as Fiske explained in a follow-up email, “presumably the data already existed when they applied to Cornell IRB.” (She also notes: “I am not second-guessing the decision.”)

Here’s the Expression of Concern, signed by editor-in-chief Inder Verma:

PNAS is publishing an Editorial Expression of Concern regarding the following article: “Experimental evidence of massive-scale emotional contagion through social networks,” by Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock, which appeared in issue 24, June 17, 2014, of Proc Natl Acad Sci USA (111:8788–8790; first published June 2, 2014; 10.1073/pnas.1320040111). This paper represents an important and emerging area of social science research that needs to be approached with sensitivity and with vigilance regarding personal privacy issues. Questions have been raised about the principles of informed consent and opportunity to opt out in connection with the research in this paper. The authors noted in their paper, “[The work] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

When the authors prepared their paper for publication in PNAS, they stated that: “Because this experiment was conducted by Facebook, Inc. for internal purposes, the Cornell University IRB [Institutional Review Board] determined that the project did not fall under Cornell’s Human Research Protection Program.” This statement has since been confirmed by Cornell University.

Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the “Common Rule”). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.

In press materials, the journal adds:

Please note that PNAS is also preparing a correction to change co-author Jamie Guillory’s affiliation on the original research paper from her current affiliation of University of California, San Francisco to Cornell University, where she was a graduate student at the time of the study.

The Expression of Concern is a bit difficult to parse. It seems to suggest that the authors didn’t do everything they were supposed to in order to publish in PNAS, but that the journal allowed them to publish anyway. Now, after a lot of criticism, the journal is somehow suggesting that the authors should have given them more details. But it’s not clear how those details would have changed the fact that the researchers didn’t get IRB approval, which seems to be a requirement for publishing in PNAS.

There are apparently several buses under which various people are being thrown. Or PNAS has just raided Rick’s Café Américain and is “shocked, shocked!” that there’s something amiss.

58 thoughts on “Rapid mood swing: PNAS issues Expression of Concern for controversial Facebook study”

  1. It is radically unreasonable to:

    1) Have performed this study without specific, individual, documented informed consent.

    2) Claim that terms-of-use for an internet service can substitute for informed consent.

    3) Have accepted and published such a paper without verification of informed consent.

    But it should be noted that businesses are not bound by scientific or medical ethics. So-called social media companies are particularly aggressive in claiming rights over everything you say, do, write, photograph, etc. It seems that most people are quite happy with this situation. But you should remember the old adage: If you are not paying for it, _you_ are the product.

    1. Social science journals don’t require verification of informed consent. I’ve been on an IRB committee for more than a decade, and I’ve been following this situation with amazement. If I had seen an ethics protocol with this design, I would have required far more information and wouldn’t have shrugged it off as ‘secondary use of data’. If I had been a peer reviewer, I would have asked the editor to obtain verification of IRB approval.

    2. It is not at all unreasonable that Facebook might have fed two sets of users different news feeds for purposes of finding out which style of feed generates more user interaction. This would be a pre-existing dataset obtained for internal purposes.

      Having that dataset, and being able to connect the two different feed styles to specific users, it is not unreasonable that Cornell psychologists might mine that dataset to look more closely at subsequent user behavior. That is, examine the content, as well as the clicks.

      1. In this case, they were not gauging interaction. They were purposely attempting to cause harm by eliciting negative emotions–this was, in fact, one goal of the study. That is not passive data collection/observation. It is the active introduction of a study variable into a human subjects research study.

        I am not bothered by the potential ‘harm’ here because it probably isn’t a major issue. I am very alarmed that corporate terms of service and privacy policies are being equated with informed consent, though. If you sign the privacy policy at your physician’s office, you are not authorizing him/her to secretly alter your medications–without your permission or knowledge–and then publish the results as research. Even if the medication change was not harmful, no one would argue that this was an ethical study.

        In terms of whether this was just corporate marketing practice or a human subjects study, Facebook and Cornell clearly thought of this as social science research. They published it in PNAS, not a business or marketing journal. They don’t get to have it both ways.

        1. Even if it had been published in a business or marketing journal, it would still be subject to IRB review as social science research. Publishing in business journals also requires IRB approval.

      2. That’s not what they did, according to the study itself. For selected participants, FB actively manipulated the algorithm that determined content presented to the participant. It wasn’t a pre-existing dataset.

        However, they could have conducted a very similar study using pre-existing data by searching for posted content with specific emotional tone and then examining the subsequent posts for the readers of those FB comments. That would have required far more work on the part of the researchers.

  2. It appears to me that the PNAS quasi-retraction notice is **intentionally** difficult to parse. Facebook is right that it has no obligation to secure consent before data mining; its (non)privacy policy gives it carte blanche. However, researchers commit fraud when they submit to journals like PNAS without complying with journal policies, such as IRB approval. And PNAS appears to have acted rather cravenly in discarding its policy in order to publish an attention-getting article.

    I think the PNAS EIC is at risk of being terminated.

  3. PNAS’ policies obviously weren’t written with this scenario in mind. For example, human research “must have been approved by the author’s institutional review board” is difficult to apply if the authors do not work at a research institution. The paper’s claim that the terms of use provide informed consent seems questionable, but is it a journal editor’s job to decide this, or an IRB’s job?

    1. IRB employee here. I don’t work at a research institution–this is a private company. As a consent form editor, I work with a team of lawyers and the company’s Board to determine whether the documents provided are legally appropriate/complete in terms of informed consent. Any researcher can submit materials (protocol, consent form, drug information, medical device information, etc.) to us and have their study reviewed by the Board. Our company then provides oversight for the duration of the study. There are a number of other private IRBs out there–if Facebook truly wanted this oversight, the powers that be could have found one and paid for the review.

    2. Any study that is based on Facebook has to be a laugh. That is because only one type of personality trait joins Facebook. The same applies to Twitter. Admittedly, I did not see the sample size, but it is most likely far better than the blitz polls that are always being published in the mainstream media, e.g. Gallup polls, which rely on a few hundred or just over a thousand respondents and are somehow meant to be extrapolated to several million individuals. When we start to rely on Facebook or Twitter or any other social sites simply because they have the numbers and data mining is simple (at least way easier – and cheaper – than going directly to individuals), then we start to make science look like academic papers that cite Wikipedia in their reference lists.

      1. The laugh here is the idea that “only one type of personality trait joins Facebook.” Assume that’s true (for fun) – this study then can be *perfectly* applied to over 1 billion people – the number of people using Facebook. Not too shabby. Let’s just admit the obvious – social media sites afford a fairly diverse sample for studies like this, and the results are probably often quite generalizable.

        1. But the issue is still the same: the omitted variables. No matter how large the sample size is, if there are systematic biases that influence who gets into the pool, we won’t be able to generalize the estimates from this particular sample to the general population; a toy simulation below illustrates the point. Anyone who has studied probability sampling will know this. :p
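           A minimal sketch of that point, with invented numbers (the attribute, rates, and selection rule are illustrative assumptions, not anything from the study): if holding some attribute makes a person twice as likely to join the platform, the sampled share of attribute-holders converges to the wrong value no matter how large the sample grows.

           ```python
           import random

           random.seed(0)
           # Hypothetical population: 50% hold some attribute (1), 50% do not (0).
           POPULATION = [1, 0]

           def biased_draw():
               """Draw one platform user; attribute-holders are twice as likely to join."""
               while True:
                   person = random.choice(POPULATION)
                   if person == 1 or random.random() < 0.5:
                       return person

           for n in (1_000, 100_000):
               sample = [biased_draw() for _ in range(n)]
               # Converges to ~2/3 at every n, never to the true population share of 1/2.
               print(n, sum(sample) / n)
           ```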

  4. I have yet to find a social science journal that requires proof of IRB approval from authors. Journals generally ask authors to check a box stating that the research complied with all required research ethics protocols, and that’s it. The journals do it to cover their legal bases and don’t want to start demanding proof of IRB approval. Social science authors complain about bureaucratic ‘ethics creep’ (Haggerty, 2004), and providing proof of approval would be one more hurdle.

    1. The “ethics creep” (Haggerty, 2004) paper is truly terrifying. Here’s a snip from the introduction:

      “Concerns about the ethical quality of research are characteristic of a society where anxieties about the unintended consequences of science and technology are increasingly common (Beck 1992)….As a regulatory system, however, the research ethics process now poses dangers to the ability to conduct university based research.”

      Actually, concerns about the ethical quality of research are characteristic of a society where human subjects have repeatedly been subjected to gross mistreatment in various scandals in which researchers, at best, thought they were acting for “the greater good” in the pursuit of their research. What we’ve learned is that a single ethical violation by one researcher can put off entire generations of a community from involvement in all forms of research, meaning that the consequences of an ethical lapse go far deeper than a single researcher and set of subjects.

      Are there cases in which ethics review processes have become overly bureaucratic and inflexible? Absolutely. But Haggerty seems to think that anything a journalist does is fair game for an academic researcher, a comparison that flatly ignores the high position of trust and respect the academy aims to foster. He’d be far better served focusing on specific instances where the IRB process has failed (as the second half of his paper does in more depth) rather than taking such a negative tone toward the idea that such review serves a valuable function.

  5. The PNAS Expression of Concern states that the study authors indicated that “[The work] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research”. One can debate whether merely accepting a ‘terms of use’, which may legally protect the company for using FB users’ data, constitutes appropriate consent for purposes of research as defined by the US government. However, an article from the Washington Post (http://www.washingtonpost.com/news/morning-mix/wp/2014/07/01/facebooks-emotional-manipulation-study-was-even-worse-than-you-thought/?tid=hp_mm) raises questions about whether the terms of use agreed to by users at the time the study was carried out even covered ‘research’.

    A contributor to another forum provided a link to what appears to be the old terms of use vs. the newer policy: https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-xpa1/t39.2178-6/851577_359286377517112_2039494561_n.pdf.

  6. PNAS is becoming a journal that many will not publish in, because of this article and others like it. Why was this published in PNAS in the first place? Yes, I know: politics. But we are talking about science.

    1. PNAS isn’t science. They don’t have ethics, and they allow National Academy members to publish unreviewed hogwash. Several of them conveniently forget the conflict of interest statements on their papers. I’ve never published there (I publish in better journals) and never will.

      1. I thought they changed this unique policy (euphemistically termed “communicated submission”) after the controversial paper by Lynn Margulis back in 2009.

  7. I don’t understand the problem here. In the early days of Facebook all updates were present on your news feed. Once it grew to the point that that was no longer feasible, they added filters, which they have undoubtedly experimented with. They have been collecting user data and using it to enhance their targeted adverts for a very long time, which is every bit as manipulative as the actions reported in the PNAS paper, if not more so. When I go to a supermarket, the data they extract from me, such as co-purchases, is integrated with data such as where the products are placed in the store, and used for research that is clearly intended to manipulate consumer behavior in the future. Consent is implicit, and I don’t have to give permission to be used as a human subject in research. The store is free to move their products to different shelves to evaluate the consequences, just as Facebook is free to alter the filters on their news feed. To believe that your data isn’t being used in research seems very naive.

    1. The distinction here is that the scary Facebook study is openly being called an experiment, whereas the manipulations secretly imposed by your supermarket are for commercial gain. The threshold for outrage, in the context of free-market profit, is much higher than the threshold for outrage in the context of improving society. And maybe it should be – if you’re going to claim some kind of altruism in your motivation, then it follows that you should be ethical or altruistic in your methods. If your motivation is greed, then people already expect you to lie, cheat, and steal.

      As I said above, I think the first question should be whether this feed manipulation was instigated by Facebook for business purposes and afterwards mined for social contagion, or whether it was a social science experiment from the start.

      1. Worried, brilliantly said. You’ve hit the nail squarely on the head. The day science became commercialized, and then disguised under a “marketing” coating, it started to lose its integrity. This is because science and scientists could be bought out, persuaded by the highest bidder, or by non-academic incentives. This is why the impact factor is gamed and why Thomson Reuters turns a blind eye; this is why Springer became Springer Science + Business Media; and this, I believe, is why so many other problems that lead to retractions are taking place. Because science – and with it, by association, scientists – have to some extent become corrupted. No longer is there an incentive to fight for knowledge unless there is a financial base, and while there is this financial base, there will always be incentives for greed and corruption. Even if there exist honest, non-profit-minded scientists out there, ultimately the objective is to gain, get ahead, beat the competition and be one step ahead. If not, nothing can be published. Originality has a price, and a premium. I have no doubt that this paper in PNAS was strongly motivated by the price and had a premium. However, it was the Facebookers who paid that price. If I were an FB subject of that study, I would definitely complain, the same way that, if I were a subject of a medical study conducted without verbal or written consent, I would demand a retraction of the paper that took advantage of my health. As long as corporations think of humans as dispensable and give us this legal bla-bla-bla about free markets, data use policies and all the other crap that accompanies marketing-driven capitalism, science will just be driven deeper into the ground. The problem is that 99.99% of scientists are mining science financially for their own personal gain (probably before the good of science). Until they wake up and see what they are supporting, get ready for a total collapse of science publishing, as all of the vultures swarm in to take their piece. If you don’t know what I’m talking about, don’t worry, this message was most likely not for you.

      2. Read the PNAS paper: it was a research experiment from the very start. They deliberately manipulated the emotional tone of user newsfeeds for research purposes.

  8. The original paper states: “Author contributions: A.D.I.K., J.E.G., and J.T.H. designed research.”

    If the academic researchers were involved in the design of the study, presumably that occurred before the data were collected. But their IRB approved their involvement only as the analysis of an archival data set. If they designed the study, and Facebook collected the data, they were still involved in the collection of data from human participants. It seems that the authors need to resolve this apparent discrepancy. Posting the IRB documents from Cornell would be a good start in trying to figure out whether all ethical obligations were met.

    1. Even if authors are not involved in designing the study, data obtained using unethical methods should not be used for publishing research in journals. As Martin Kollner correctly points out, researchers could otherwise avoid IRB oversight by outsourcing data collection to third-party organizations. A key element of informed consent is voluntariness, and a key point in voluntary participation is the ability to withdraw at any time. The subjects in the Facebook study did not have the ability to withdraw from participation, as they were not fully informed that they were participating in the research. The TOS is implied consent at most, not informed consent. The research also does not explain if or how vulnerable populations were excluded.

  9. Hopefully there will be a retraction of the paper in the end – otherwise this would set a terrible precedent: psychologists would no longer have to bother with ethical standards if they acquire their data from external organisations/companies. The worst-case consequence would be that scientists could even actively assign their data collection to such companies if they want to conduct unethical research that would stand no chance of acquiring IRB approval.

    1. I do hope that we either see a retraction, or at least that the authors make a serious attempt to address these issues. This attempt might include:

      1) A timeline of the academic researchers’ involvement in the study, and whether they were involved in its design before it became an archival data set.

      2) A posting of all IRB documents from all research institutions involved.

      3) An attempt by the authors to contact research participants to allow them to withdraw their consent from the study, and a reanalysis involving only individuals who were willing to participate.

      and perhaps most importantly:

      4) A guarantee by researchers involved that before any similar manipulations of negative affect are implemented, participants are screened with standard measures for clinical depression to make sure clinically depressed individuals are not included in an experiment that might harm them.

      If the authors do not make an attempt to address these issues, PNAS should retract.

  10. It’s been mentioned repeatedly that Facebook, as a private self-funded company, is not bound by the Common Rule. Does this mean that private companies are free to perform human subject experimentation for their own purposes without informed consent, provided they don’t violate other statutes? Please tell me there are laws out there preventing human experimentation without consent.

    1. The article is co-authored with university researchers. Those researchers are obligated to follow the Common Rule. By publishing in a peer-reviewed journal that states that it requires ethics approval for the research it publishes, there is a strong and reasonable expectation that they would follow the Common Rule. Because the research includes participants from vulnerable populations, there is an expectation that they would follow the Common Rule. In the USA, the relevant laws are found in http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html#46.102

      1. I understand that this paper had university researchers on it. I am asking about the larger issue: are private companies (who may have no interest in publishing the outcome of their research) that privately finance their research (no federal support) legally permitted to perform experimentation on human subjects without informed consent? The link you posted appears to concern only entities getting federal support.

        1. Douglas, I have been on my institution’s IRB for over 15 years, and I am not aware of any United States laws against corporate behavioral research for internal purposes. I assume, for example, that having supermarket personnel systematically manipulate product placement to discern which types of placements best increase sales goes on all the time. That said, I also suspect there are still small pockets (more like individual instances), far and wide, of certain types of applied social science research in the US, even in academic settings, that are carried out and perhaps even published without proper ethics oversight. The Flynn case at Columbia University, though somewhat dated now (2001), represents an example of this type of situation: http://www.nytimes.com/2001/09/08/nyregion/scholar-sets-off-gastronomic-false-alarm.html. Of course, that study was never published.

          One has to wonder what levels of ethical oversight of this type of research, or even of medical research occur in different regions across the world.

        2. It depends on the type of research. It depends on how experimentation is defined.

          §46.101(2) notes that research that is neither conducted nor supported by a federal department or agency but is subject to regulation as defined in §46.102(e) must be reviewed and approved, in compliance with §46.101, §46.102, and §46.107 through §46.117 of this policy, by an institutional review board (IRB) that operates in accordance with the pertinent requirements of this policy.

          Private companies engaged in pharmaceutical research must follow regulations.

          I expect the lawsuits that are being filed will determine whether or not Facebook’s research falls within the scope of §46.102(e). There are privacy issues involved as well as consent issues related to minors.

          Facebook, and other social media companies, are constantly revising the programming of their software in an effort to evaluate the responses of their users. That’s not generally considered research.

          1. Good point, Martin. The same must apply to the medical device industry and other similar entities. When I submitted my post I was thinking about possible experimentation by non biomedical, non health-related entities.

  11. Some posts here (and on other blogs) suggest that a protocol for this study was scrutinized by Cornell’s Human Research Protection Program as an ‘exempt’ protocol – that is, a protocol that, while ‘exempt from full board review’, is reviewed by an individual knowledgeable about the risks and ethics of human subjects research.

    However, the 30 June media statement from Cornell states, “Cornell University’s Institutional Review Board concluded that he [Professor Hancock] was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.” This suggests that the data analysis done by Cornell personnel was not reviewed by a Cornell IRB.

    It is not uncommon for an IRB to conclude that work with or ‘mining’ of ‘de-identified’ data sets (i.e., data sets collected by others and for which the current researcher has no ‘identifying’ information about persons who contributed the data) does not require submission or review of an IRB protocol.

    It may be important to note that if an academic researcher (whose institution has ‘checked the box’ to include all human subjects research under the institution’s FWA) is going to work with such a data set without IRB review, s/he does not get the opportunity to design the data collection.

    Participation in the design would mandate IRB review (perhaps exempt from full board review, but review) of the research. For work planned as research, this review is needed before work starts. If you conduct work for a non-research reason (common examples might be evaluating a new curriculum or course design, or making a business decision) and then decide to ask a research question about the data, you seek IRB review at that point.

    It seems to be an open question whether or not the academic researchers participated in the design of the feed alterations and data collection. If they did, it appears IRB review (which seems not to have occurred) would have been needed.

    1. According to the author contributions section of the paper: “A.D.I.K., J.E.G., and J.T.H. designed research.”

      This seems to fairly unambiguously suggest that the involvement of academic researchers went beyond analysis. I suppose there could be other interpretations, but this seems to be the critical issue that needs to be resolved first in order to understand whether ethical guidelines were followed.

    2. PNAS should never have published this. What happened to this journal? It is supposed to be a top-tier journal.

      1. I strongly support the criticisms made by “Jph” and “failuretoreplicant”. The shoddy details about the exact responsibilities of the authors remind me of that joke: how many XYZ does it take to screw in a light bulb? Four: three to hold the roof and one to screw in the bulb. (XYZ corresponds to your culture of choice for humorous critique.) Also, I would personally like the FB users who were “used” in this study to come forward and comment about how they feel.

        1. There is nothing I can find in the study that indicates the FB users were aware they were participants in the study. The authors note “People who viewed Facebook in English were qualified for selection into the experiment.” and “The experiments took place for 1 wk (January 11–18, 2012). Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.”

          So anyone who viewed Facebook in English during the week of January 11–18, 2012 and posted at least one status update during that week was a possible, if not likely, participant. Those criteria would have included me as a possible participant.

          1. Unfortunately, that would also include minors as participants, as well as people experiencing serious mental health issues. Can you imagine someone with severe clinical depression being bombarded with mostly negative content from their friends and family for an entire week? Chilling.

          2. And how many of those are fake accounts? Is it also possible that there is more than one Facebook account for one person?

          3. From this discussion, it seems that PNAS, the authors and Facebook have equally many questions to answer. It also allows us to rethink how freely (willingly or not) people place their faith in “social” sites whose main objective is market-driven. I have often claimed that the local populace is nothing but a giant pool of commodities that can be used or abused at will, provided that those who abuse it have the power and the resources. Unfortunately, science is being dragged into the same cesspool. When someone creates an account at FB, thinking that they are simply creating an account so they can stay in touch with their friends, or follow their idols (aka Twitter), then think again. Every possible personal detail of theirs is being monitored, and mined. And it’s not for charitable, or honorable, reasons. It’s ultimately to see how such mining can lead to profits, higher share value, and thus greater smiles on FB executives’ faces. Those who blindly defend FB, with its chameleon-like adaptation to the landscape, constantly adjusting its privacy policies, are as brainwashed as FB wants them to be. And, if I may be permitted to make a bold claim: the same is happening with science. Flashy websites and functionality that elevates the super-ego are exactly what is behind sites like ResearchGate, BioMedLibrary, LinkedIn, Skill Pages, etc. It is the local populace (in the case of social sites like Twitter, FB, etc.) and scientists who actually think that they are benefitting (in the case of the sites I list above) who are responsible. If they hadn’t created a profile, naively thinking that it would serve some personal purpose, then they wouldn’t have been open to abuse. Not that I am saying that those who create such a system that could be open to abuse, or those who actually do abuse the system, are right. Just that the fodder to be abused wouldn’t exist in the first place. I guess that’s why I have actively avoided creating an account on any of these sites, social or science. If you avoid the first step of the risk, then you can avoid the downstream consequences too, or at least minimize them.

    1. Thanks Miguel, for providing that link. What an incredibly pathetic explanation by that principal author. Goodness gracious, that study should have been published in The Onion, not in PNAS. PNAS has really reached new lows. What were the editors at PNAS actually thinking? How did they actually “peer review” this study? I would love to see a public posting of the peer reports. Was this some popularity stunt meant to elevate their intellect to the level of contributions in Nature? If so, then they succeeded, and excelled in their mission. For the record, related to FB, I can state that thousands have contacted four individuals in Brazil and Portugal with the same/similar name as mine on FB, and they are probably sick and tired of being contacted by scientists who want to befriend them. Worse yet, someone created a false account using my name on FB in late 2012 or early 2013. After formal complaints to FB, I notice that that account was erased (I checked today). This, however, indicates to me that false accounts can easily be created on FB. If ghosts exist on FB, then how confident are these authors that the emotions they were trying to study and monitor were not also from ghosts, or ghostly themselves? I reiterate, I believe that this is NOT a scientific study. It is a social assessment, made sloppily, with no control groups, no understanding of the cultural influence (or possible origins) of the users, and conducted within a tiny window, one week in early 2012. Were most of the users from the US? I wish that my CV could also be padded with studies that just use databases created by websites, without the users’ explicit permission, based on one week’s work, and then published in one of the world’s top science journals. The more I reflect on this case, the more circus-like it becomes. For example, did the authors check whether globally important events may have played a role in the emotions of FB users? I tried to find (a very crude search) some events that happened in late 2011 or early 2012, the time when FB user emotions were mined, and found some select events on the local (US) and global stage*:
      a) Protests Intensify in Syria
      b) European Union Agrees to Impose Oil Embargo on Iran
      c) The Costa Concordia Capsizes off Italian Coast
      d) Thousands Flee Nevada Wildfire
      e) Protests Turn Violent over Austerity Measures in Greece
      f) Report Exposes Assassination Plot against Putin (Feb) vs Putin Wins Presidential Election in Russia (March)
      g) Fire Kills Hundreds at Prison in Honduras
      h) U.S. Soldier Kills 16 Afghan Civilians
      i) Assad Agrees to Cease-Fire
      j) Goldman Sachs Executive Resigns and Writes Scathing Editorial

      What I am trying to say is that local or international events can also likely have a strong influence on FB users’ emotions. How can the authors guarantee that there was no “noise” from these extraneous variables?

      * http://www.infoplease.com/news/2012/current-events/

      1. Hi JATdS
        The authors did use a control group to account for extraneous factors. Your criticism is invalid.

        1. Iain, thanks for correcting me. Could you be so kind as to provide as much information as you can about the control group? That might actually resolve a few dangling factors.

          1. The paper is freely available. If my memory serves me well, the sample was split randomly into control and non-control.
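             A toy simulation of why that matters (all numbers invented for illustration; this is not the paper’s data or method): a common shock, such as a grim news week, shifts both arms equally, so the difference between arms still recovers the manipulation’s effect, answering the “noise from world events” worry above.

             ```python
             import random

             random.seed(1)
             N = 100_000
             WORLD_SHOCK = -0.3   # a grim news week depresses everyone's mood
             TRUE_EFFECT = -0.1   # hypothetical effect of the feed manipulation

             def mood(treated: bool) -> float:
                 # individual variation + common shock + treatment effect (if treated)
                 return random.gauss(0, 1) + WORLD_SHOCK + (TRUE_EFFECT if treated else 0.0)

             control = [mood(False) for _ in range(N)]
             treated = [mood(True) for _ in range(N)]

             def mean(xs):
                 return sum(xs) / len(xs)

             # Both group means absorb the shock; their difference is ~ -0.1.
             print(mean(treated) - mean(control))
             ```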

  12. I have a couple of additional issues with this study.
    Firstly: am I wrong to say that, besides the human experimentation problem, there is a conflict of interest problem? How can authors working at Facebook declare that they have no conflict of interest for research concerning the psychological effect of their own product? Does anybody else find this misleading?
    Secondly, the premise of the paper is that there was growing concern that looking at what other people do on Facebook would make people sad about their own lives. That is to say, seeing people be happy would make one miserable.
    According to the authors, this research proves the opposite, showing that an increase in the happy content of the feed makes people post more happy things. They say this proves a contagion of positive emotions. Why is this the only explanation? Isn’t it still possible that seeing positive feeds makes one feel negatively anxious about one’s own life, resulting in a “pretend” over-expression of one’s positive news? Somewhat of a competition effect.
    This second point ties together with the first one. If there is a controversy regarding the emotional effect of social media on people, should we trust Facebook to answer this question with a “pre-existing data-set”? Without informed consent? Without declaring a conflict of interest?

    1. This is a very interesting interpretation. Could one use bloggers’ posts here at RW, which are open and publicly available, to do analyses, provided that the source is indicated? The expressive power of blogs, which very often supersedes the information-transmission power of a regular scientific paper, has brought us into an era of journalism and publishing with overlapping frontiers, where situations like this FB-PNAS study are new territory, with new challenges and problems. Advice, please.

  13. To sum it up: the research was flawed (lack of subject consent, conflict of interest). The lack of consent might even be illegal. The public outcry could possibly lead to loss of business opportunities (users, authors) for both PNAS and Facebook (yes, that is a rather theoretical possibility in the second case).

    The question I can’t resist asking is: why have they really done this experiment? For the sake of science? Business? Publicity?

    1. What is a “positive” or a “negative” word? This is a highly subjective view, so in that sense, I would have to support your characterization of this study as being “absolute garbage”. Or at least not good enough for PNAS, but rather maybe tossed to one of Beall’s listed OA publishers.

      1. It’s even worse. They used the Linguistic Inquiry and Word Count software (LIWC2007) for the study. That software was designed to analyze large amounts of text, not short status updates; using it for short text messages leads to coding errors. For example, if a participant wrote “I am happy to receive an A in my course paper today”, LIWC2007 would code that as a positive message. If the participant wrote “I am upset to receive a D in my course paper today”, it would be coded as a negative message. More importantly, however, if the participant wrote “I am not happy to receive an F in my course paper today”, it would have been coded as a positive message. LIWC2007 was not designed to deal with negations such as “not”.
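         A minimal sketch of the failure mode described above, assuming a naive dictionary word-count scorer in the style of LIWC (the word lists and scoring rule here are invented stand-ins, not LIWC’s actual dictionaries):

         ```python
         # Toy dictionary-based sentiment scorer.
         POSITIVE = {"happy", "glad", "great"}
         NEGATIVE = {"upset", "sad", "angry"}

         def score(status: str) -> str:
             words = status.lower().split()
             pos = sum(w in POSITIVE for w in words)
             neg = sum(w in NEGATIVE for w in words)
             if pos > neg:
                 return "positive"
             if neg > pos:
                 return "negative"
             return "neutral"

         print(score("I am happy to receive an A today"))   # positive
         print(score("I am upset to receive a D today"))    # negative
         # Plain word counting never sees the negation, so this is "positive":
         print(score("I am not happy to receive an F today"))
         ```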
