Facebook’s “Emotional Contagion” Experiment: Was it Ethical?


Update, June 29: Co-author Adam D. I. Kramer posts a response here.

By now, you’ve probably heard that data scientists at Facebook recently published a study in the Proceedings of the National Academy of Sciences revealing that, in their words, “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” Or, to be blunt, seeing more negative stories on Facebook can make you sad.

Multiple news outlets covered the results, which broke a couple of weeks ago, but in the last day the focus has shifted to the methodology of the study, revealing that:

  • 689,003 Facebook accounts were used for the study
  • The researchers “manipulated the extent to which people…were exposed to emotional expressions in their News Feed.”
  • According to the study, “No text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”
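
For context on how posts were classified, the paper relies on automated word counting (the LIWC tool) rather than anyone reading individual posts. What follows is a minimal, purely hypothetical sketch of that style of scoring; the word lists, function name, and threshold idea are my own illustrations, not the researchers’ code.

    # Purely illustrative sketch, not the study's actual code: a LIWC-style
    # word-count approach to scoring a post as positive or negative.
    # These tiny word lists are hypothetical stand-ins for a real lexicon.
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
    NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}

    def emotion_shares(post):
        """Return the fraction of positive and negative words in a post."""
        words = [w.strip(".,!?").lower() for w in post.split()]
        total = len(words) or 1  # avoid division by zero on empty posts
        pos = sum(w in POSITIVE_WORDS for w in words)
        neg = sum(w in NEGATIVE_WORDS for w in words)
        return {"positive": pos / total, "negative": neg / total}

    # A feed-manipulation experiment could then, for example, omit some posts
    # whose negative (or positive) share exceeds a chosen threshold.
    print(emotion_shares("So sad and lonely tonight, this is awful"))

This is what makes it possible to say that “no text was seen by the researchers”: the classification and the feed filtering are mechanical, which is exactly why the debate centers on consent and user expectations rather than on anyone reading a particular post.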

I’m not going to focus too much on the ethics and quality of the science here–others are ably doing that (see links below)–but I do want to speak to the way in which user data was used, and the problematic precedent that sets for the ethical use of social data in general.

In the proposed Code of Ethics that the Big Boulder Initiative has drafted (still open to feedback before we finalize), we laid out four specific mandates for social data use: Privacy; Transparency and Methodology; Education; and Accountability.

Privacy

While the experiment aggregated data such that researchers could not identify individual posts, it breaches users’ expectations about how such data will be used. The Big Boulder Initiative draft Code of Ethics states that, “in addition to honoring explicit privacy settings, organizations should do their best to honor implicit privacy preferences where possible.”

In the section of the Facebook privacy page entitled “Information We Receive and How it is Used,” however, Facebook focuses primarily on the advertising uses of social data, with the exception of a brief bullet point at the end, which states that Facebook may use data:

“…for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

While the word “research” is there in black and white, there is no description of the nature of any potential research, which raises an important point about privacy: ethical use should anticipate not only the implicit (downstream) implications of an explicit privacy setting, but also a reasonable user’s expectations.

Says Sherry Emery, Principal Investigator at the Health Media Collaboratory, who works regularly with social data, “the fact that the researchers justified their use of the data by saying that it complied with the ‘terms of use’ highlights how ineffective–useless, even–the ‘terms of use’ agreement is.”

Transparency and Methodology

The experiment relies on Facebook’s Data Use Policy to argue for transparency, but, says Emery, “It’s one thing to observe and make inferences about human behavior in a ‘naturalistic setting’.  It’s another to manipulate subjects without their knowledge.”

Part of the challenge is that the research study raises questions about the proper use of social data within the social and behavioral sciences. But, while social data is relatively new, social science is not. The National Science Foundation commentary on informed consent provides a clear guideline:

“IRBs [Institutional Review Boards] and researchers should not defeat the purpose of informed consent by substituting a legalistic consent form for an effective communication process.” (Informed Consent in Social and Behavioral Science)

And the study has wider implications. We have to ask how, ultimately, these findings may be used. Does this set a precedent to use Facebook or other data to manipulate individuals’ emotional states for commercial or other purposes via “contagion”? (That term is really not helping, btw).

Whatever our personal standards for ethical use of data in general, the fact remains that social data is new and complex, and it carries with it a slew of implications that we are only just beginning to understand. “If this doesn’t spark a huge debate about data ethics,” Emery says, “I’ll be surprised.  I’ve been waiting, a little bit worried, for public outcry about data science if we didn’t get out ahead of the curve and establish guidelines for ethics in data research.  I think this might be the thing that starts the debate–with a bang.”

Please leave your thoughts here, and contribute to the Code of Ethics; the more specific and evidence-based, the better. I will link to substantive related posts below.

Last updated 8:50 PM June 24

About susanetlinger

Industry Analyst at Altimeter Group

8 Responses to Facebook’s “Emotional Contagion” Experiment: Was it Ethical?

  1. Pingback: Facebook's "Emotional Contagion" Expe...

  2. James Myers says:

    You’re lacking rigor here. I’d bet FB believed it was taking a high (and rigorous) road when it published in a hotsy-totsy, JSTOR-sorta forum. We all know how this works: net companies, especially social media companies, believe that social science operates at a glacial pace, while a FB can change its policy and its feed and drop a few billion while a paper goes not even halfway through peer review. This almost certainly isn’t a new or unique experiment, but knowing that requires a lit review, and lit reviews aren’t popular with net geeks who have a distaste for any history not related to Star Trek or its ilk. Rather than berating FB for its ethics, we ought to attack their inherent insecurity, criticizing them for what a half-assed bit of undergraduate-level work this is. Then they might try harder next time, and there might be a next time. Wouldn’t it be a good thing if The Elect in the valley took social science as seriously as math and the physical sciences? No danger of it happening, but we can dream on.

    • We can dream, James, and my point is that we should dream. And, more than dream, define what exactly constitutes ethical use of social data so that real scientists–who are trying to solve problems with immense human impact–don’t pay the price. As I said, I’m not going to comment on the science; it’s not my field of expertise, though I’m tempted to take a closer look at the LIWC methodology.

  3. Amy Charles says:

    All wrong, James. All wrong. Human-subject use is a serious issue and not to be dealt with in a libertarian-utopian manner.

    The thing I wonder about is who paid for this work, and where the handles are to make anything happen. Facebook can fund its own research, doesn’t need NSF or NIH, though if this study was indeed publicly funded, that’s some chutzpah and probably actionable. At that point it’s up to the publications to ensure that human-subject rules were followed, and refuse to publish if they haven’t been.

    Facebook is, of course, perfectly capable of publishing its own research. And they don’t give a rat’s ass about earnest Eileen-Fisher-shift-wearing codes of conduct. Which means that the only way of dealing with this effectively is via legislation. And that would be a very interesting can of worms.

    • Amy, I just added a link to Cornell Chronicle, which states that:

      “The study was funded in part by the James S. McDonnell Foundation and the Army Research Office. Other investigators included Jamie Guillory, a Cornell postdoctoral associate when the project began who now works at the UCSF Center for Tobacco Control Research and Education, and Adam D.I. Kramer of Facebook.”

      I have not independently confirmed that, but it should be easy enough to do.

      As for the “Eileen Fisher shift-wearing code of conduct”…the point of the code is not that it will ever have the power to enforce anything; it’s to begin to establish a cultural norm so that anyone who is working with this data has to think twice before using it in a way that violates an individual’s trust or privacy. People who want to exploit it will still do that. But at least we, as members of an emerging industry, will have publicly articulated what is and isn’t acceptable, and that will spark debate and, we hope, educate people to ask more probing questions and hold the users of this data to a higher standard.

      Sure it’s idealistic, but I’d argue that it’s a critical step in educating the market–individuals, the public and private sectors–to have a specific handle on the issues so they can also advocate for themselves. It’s not just that “it’s creepy.” It’s creepy because it breaches informed consent, for example.

  4. opinions12 says:

    I read about this today here: http://askmepc-webdesign.com/hub/feel-facebook-secret-psychology-experiment-users-emotions/.
    I was shocked and upset to think that my friends and I could have our feeds and information manipulated like this. That is so wrong. Talk about breaking trust with your users. This was in 2012, so what have they been doing since then? This is wrong.

  5. greg says:

    There was no informed consent, THUS it was not ethical. PNAS is at fault for publishing the results in any form.

  6. greg says:

    Let’s ask for an opinion from the authority, then: HHS: http://wh.gov/lFYEs
