Even the Editor of Facebook's Mood Study Thought It Was Creepy

"It's ethically okay from the regulations perspective, but ethics are kind of social decisions."

Updated, Sunday, 6/29, 9:54 p.m. Eastern.

Catching a glimpse of the puppet masters who play with the data trails we leave online is always disorienting. And yet there's something newly creepy about a recent study showing that Facebook manipulated what nearly 700,000 users saw when they logged into the site in order to study how the changes affected their moods.

But why? Psychologists do all kinds of mood research and behavior studies. What made this study, which quickly stirred outrage, feel so wrong?

Even Susan Fiske, the professor of psychology at Princeton University who edited the study for Proceedings of the National Academy of Sciences, had doubts when the research first crossed her desk.

"I was concerned," she told me in a phone interview, "until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research."

Institutional review boards, or IRBs, are the bodies that review and approve research involving human subjects before it is conducted.

[Update, Sunday, 9:54 p.m.: But there seems to be a question of whether Facebook actually went through an IRB. In a Facebook post on Sunday, study author Adam Kramer referenced "internal review practices." A Forbes report, citing an unnamed source, said that Facebook only used an internal review. When I asked Fiske to clarify, she told me the researchers' "revision letter said they had Cornell IRB approval as a 'pre-existing dataset' presumably from FB, who seems to have reviewed it as well in some unspecified way... Under IRB regulations, pre-existing dataset would have been approved previously and someone is just analyzing data already collected, often by someone else."

The mention of a "pre-existing dataset" here matters because, as Fiske explained in a follow-up email, "presumably the data already existed when they applied to Cornell IRB." (She also notes: "I am not second-guessing the decision.")]

Universities and other institutions that get federal funding are required to have IRBs, which often rely on standards like the Common Rule, the federal policy holding that research subjects must give informed consent before they're included in an experiment. "People are supposed to be, under most circumstances, told that they're going to be participants in research and then agree to it and have the option not to agree to it without penalty," Fiske said.

I emailed the study's authors on Saturday afternoon to request interviews. Author Jamie Guillory responded but declined to talk, citing Facebook's request to handle reporters' questions directly. Early Sunday morning, a Facebook spokesman emailed me with this statement: "We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people's data in connection with these research initiatives and all data is stored securely."

Later on Sunday, study author Adam Kramer published a response on his Facebook page, saying he and his fellow researchers didn't clearly explain the reasons for the study and that they "care about the emotional impact of Facebook."

Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

("Hope it’s sufficient!" Kramer told me in an email about the Facebook post. "I have had about 200 media and personal requests in the last day so I don’t have time to carefully respond to everything.")

But Facebook, as a private company, isn't bound by the same ethical standards as federal agencies and universities, Fiske said.

"A lot of the regulation of research ethics hinges on government supported research, and of course Facebook's research is not government supported, so they're not obligated by any laws or regulations to abide by the standards," she said. "But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It's voluntary. You could imagine if you were a drug company, you'd want to be able to say you'd done the research ethically because the backlash would be just huge otherwise."

The backlash, in this case, seems tied directly to the sense that Facebook manipulated people—used them as guinea pigs—without their knowledge, and in a setting where that kind of manipulation feels intimate. There's also a contextual question. People may understand by now that their News Feed appears differently based on what they click—this is how targeted advertising works—but the idea that Facebook is altering what you see to find out if it can make you feel happy or sad seems in some ways cruel.

Mood researchers have been toying with human emotion since long before the Internet age, but it's hard to think of an offline experiment quite like this one. It might feel different, Fiske suggests, if a person were to find a dime in a public phone booth, then later learn that a researcher had left the money there to see how the discovery affected the finder's mood.

"But if you find money on the street and it makes you feel cheerful, the idea that someone placed it there, it's not as personal," she said. "I think part of what's disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people... Who knows what other research they're doing."

Fiske still isn't sure whether the research, which she calls "inventive and useful," crossed a line. "I don't think the originality of the research should be lost," she said. "So, I think it's an open ethical question. It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done... I'm still thinking about it and I'm a little creeped out, too."

Adrienne LaFrance is the executive editor of The Atlantic. She was previously a senior editor and staff writer at The Atlantic, and the editor of TheAtlantic.com.