Sunday, June 29, 2014

What's Your Take?

You've probably seen the tweets, etc., about the paper "Experimental evidence of massive-scale emotional contagion through social networks." It was a study conducted by researchers from the:


Core Data Science Team, Facebook, Inc., Menlo Park, CA 94025;
Center for Tobacco Control Research and Education, University of California, San Francisco, CA 94143;
and Departments of Communication and Information Science, Cornell University, Ithaca, NY 14853

From January 11-18, 2012, Facebook adjusted its news feed algorithm so that 689,003 users received either less positive or less negative emotional content from their friends. Then they looked at what people posted in response.

They found that less positive feed content yielded less positive posts, and less negative content yielded less negative posts. If that sounds confusing, think of it this way: they tried to make some people sadder and others happier. The ones they tried to make sadder, by giving them less happy stuff to read, got sadder. And the ones they tried to make happier, by giving them less sad stuff to read, got happier.

They used linguistic software to determine the happiness or sadness of an individual post.
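
That software, per the paper, was Linguistic Inquiry and Word Count (LIWC2007), and the rule was simple: a post counted as positive or negative if it contained at least one word from the corresponding dictionary. Here's a minimal Python sketch of that kind of word-counting approach; the tiny word lists are my own toy placeholders, not the real LIWC dictionaries:

    import re

    # Toy placeholder word lists; the real LIWC2007 dictionaries are far larger.
    POSITIVE_WORDS = {"happy", "love", "nice", "great", "sweet"}
    NEGATIVE_WORDS = {"sad", "hate", "hurt", "ugly", "nasty"}

    def classify_post(text):
        # A post is "positive" ("negative") if it contains at least one
        # positive (negative) dictionary word: presence, not degree.
        words = set(re.findall(r"[a-z']+", text.lower()))
        return {
            "positive": bool(words & POSITIVE_WORDS),
            "negative": bool(words & NEGATIVE_WORDS),
        }

    print(classify_post("Had a great day, love you all!"))
    # -> {'positive': True, 'negative': False}

Note that this is presence-or-absence, not intensity: "I'm a little sad" and "I'm hopelessly, unbearably sad" each just count as one negative post.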

The paper was reviewed and edited by Susan T. Fiske, a professor of Psychology and Public Affairs at Princeton. It was published in the Proceedings of the National Academy of Sciences of the United States of America.

Facebook always filters the news feed; they claim there would otherwise just be too much stuff to read. For the study, they simply adapted that filtering system to randomize users to happy or sad arms, and they claim that this is "...consistent with Facebook's Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research."
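
For the mechanically curious, the paper describes the manipulation roughly like this: for a user in the positivity-reduced arm, each friend's post containing at least one positive word had between a 10% and 90% chance (keyed to the user's ID) of being omitted from a given viewing of the feed, and symmetrically for the negativity-reduced arm. Here's a rough Python sketch of that scheme; the hashing, function names, and arm assignment are my own guesses at an implementation, not Facebook's actual code:

    import hashlib
    import random

    def _stable_hash(s):
        # Stable hash so a given user always gets the same assignment.
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def arm_for_user(user_id):
        # Deterministically assign each user to one experimental arm.
        return ("reduce_positive" if _stable_hash(f"arm:{user_id}") % 2 == 0
                else "reduce_negative")

    def omission_rate(user_id):
        # Each user gets a fixed omission probability between 10% and 90%.
        return 0.10 + (_stable_hash(f"rate:{user_id}") % 81) / 100.0

    def filter_feed(user_id, posts, rng=random):
        # posts: list of (text, is_positive, is_negative) tuples,
        # e.g. pre-labeled by something like classify_post() above.
        arm = arm_for_user(user_id)
        kept = []
        for text, is_pos, is_neg in posts:
            targeted = is_pos if arm == "reduce_positive" else is_neg
            if targeted and rng.random() < omission_rate(user_id):
                continue  # hidden from this viewing only; the post still exists
            kept.append(text)
        return kept

Per the paper, omitted posts weren't deleted; they were just left out of that particular rendering of the feed and remained visible on the friend's own page, which is part of Facebook's argument that this was business as usual.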

Their goals seem to have included demonstrating that online interactions can have significant emotional impact, and debunking the theory that people get sad when they see how happy all their friends are on Facebook.

Well, I guess this is a powerful use of big data, but I just wish they had asked first. They claim they did: that informed consent was implicit in the Facebook user agreement. I think that's pushing it.

I happen to know a depressed adolescent (not my patient), a big user of Facebook, who was hospitalized a couple weeks after this experiment. Am I claiming this is why he was hospitalized? No. Could it have influenced his need for hospitalization? Maybe. Does an adolescent need a parent to sign informed consent? Could be.

It's a complicated issue. Every advertiser on the planet tries to manipulate people's emotions, so how is this different? Because Facebook wasn't trying to sell anything, it was just manipulating emotions to see what would happen? I don't know if that makes it better or worse.

An inquiry was made to Dr. Fiske about IRB approval. Her response:

[image: Dr. Fiske's response]
Would it have adversely affected their data if they had done a mass posting, something like this?

We interrupt this waste of time to bring you an important news bulletin! You may be randomized to participate in a study of your emotions. If you experience prolonged and unremitting sadness for more than 4 hours, please seek emergency medical assistance. If you wish to opt out of this experiment, click HERE.


I don't know. What do you think?