[I have a hunch that this will be the first in a series of posts...]
As with the Facebook study, the details of the specific OkCupid experiments are less of an issue than the sheer fact that they are being conducted in the first place. I decided not to weigh in on the Facebook controversy last month (Tarleton Gillespie has a nice roundup if you’re interested), but one of the things that struck me at the time was that the study itself was oversold.* Pretty much everyone participated in the overselling. The authors wanted us to think they’d found something extraordinary (as every author always does). Critics of all stripes wanted to agree, so they could focus on the broader implications of this extraordinary study. And those critiques fell into roughly three camps:
#1. Facebook has too much power! They shouldn’t be able to manipulate people like this without any check or oversight!
#2. People should become better informed! Companies do this all the time, and no one realizes it. If we want a better internet, we have to demand a better internet!
#3. Academia shouldn’t be involved! The Institutional Review Board (IRB) messed up or wasn’t properly consulted here!
Each of these three perspectives takes us toward a different ethical question. Personally, I’d rank them #2 > #1 > #3 in terms of importance.
Regarding #2, one of the great things about the Facebook Study is that it spawned the Facebook Controversy. Everyone vaguely knows that Facebook manipulates its algorithm for learning, fun, and profit. Nearly everyone chooses to go about their merry way, blithely ignoring the implications of this manipulation. Facebook partnered with academics, and that led to a public conversation about those implications. Let’s file it all under “positive-but-unintended consequences.” Ethically, if we want the public to be more aware, then we should also hope that these companies keep publishing their experimental findings.
Regarding #1… Yes, Facebook is crazy-powerful. It is a quasi-monopoly and a quasi-utility. It ought to be at least somewhat regulated as such. But I have difficulty getting too incensed about this for two reasons. First, the FCC currently isn’t even willing to treat Internet access as a public utility. Comcast is the most blatant monopolistic empire of this century. Let’s get around to regulating Facebook after we convince the government to follow basic common sense and expert consensus on regulating ISPs. My pitchfork and torch are already spoken for.
And second, A/B testing (experimentally manipulating the user experience to track results) is indeed standard business practice for large websites and digital organizations. Daniel Kreiss has written about this in the Obama campaign as “computational management.” I’ve described it in civil society organizations as “passive democratic feedback.” In “We Experiment on Human Beings!”, a post about OkC’s recent experiments, OkCupid president Christian Rudder writes:
We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.
I think Rudder is basically right. These experiments are a core source of data on what users want. Organizations that don’t run tests aren’t listening to their users/customers/members. Ethically, I think organizations should learn to listen better, and listen responsibly. But I don’t think we should be angry that they’re listening at all.
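For readers unfamiliar with the mechanics, here is a minimal sketch of what the A/B testing described above typically looks like under the hood. Everything in it is illustrative: the function names, the hashing scheme, and the toy outcome data are my own assumptions, not anything OkCupid or Facebook has published.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant by hashing the
    user id together with the experiment name, so the same user always
    sees the same version of the site. (Illustrative, not any site's
    actual implementation.)"""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rates(outcomes):
    """Given (variant, converted) pairs, return the conversion rate
    per variant -- the signal an organization 'listens' to."""
    totals, hits = {}, {}
    for variant, converted in outcomes:
        totals[variant] = totals.get(variant, 0) + 1
        hits[variant] = hits.get(variant, 0) + (1 if converted else 0)
    return {v: hits[v] / totals[v] for v in totals}

# A user lands in the same bucket on every visit:
v = assign_variant("user42", "new_match_page")
assert v == assign_variant("user42", "new_match_page")

# Hypothetical logged outcomes: which variant each user saw,
# and whether they took the measured action (e.g., sent a message).
log = [("A", True), ("A", False), ("B", True), ("B", True)]
print(conversion_rates(log))
```

The point of the sketch is how mundane the machinery is: hash users into buckets, vary one thing, count what they do. That counting is the “listening” I mean above.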
Regarding #3, academia’s role in all this, I have decidedly mixed feelings. IRBs were designed as a check on the power of researchers to unnecessarily harm subjects in the name of science. IRBs are important because academic research has a specific type of power and authority. But we probably have less power and authority than we’d like to think… particularly in the digital arena.
Facebook and Google and OkCupid hire research scientists. They conduct experiments all the time. If university-based academics keep this research at arm’s length, the research still happens. It just doesn’t get presented at our conferences or published in the journals we happen to read. And that leaves academic social scientists further adrift from the lived experience of actual human beings in modern society.
So yeah, there should probably be some improvements to the IRB process for online experiments of this type. But I’m not sure if that’s the right ethical line-in-the-sand to draw. It’s easy to argue over IRB reforms, because academics have the ability to directly affect the state of Institutional Review Boards. We can decide what sorts of studies we’ll participate in, even though we can’t decide what sorts of studies will be performed beyond our cloistered halls.
Publicly-engaged/publicly-oriented scholarship is always messy. As experimental research finds a welcome home outside of academia, it becomes even messier. Christian Rudder tells us, “if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” The more we acknowledge and engage with this reality, the better off we’ll be.
*The authors claim to have found evidence of “emotional contagion”: by modifying the Facebook newsfeed to contain slightly more positive or more negative postings, they were able to observe an impact on users’ posting habits. Put plainly, if you see lots of positive posts on Facebook, you become marginally more likely to post something positive. If you see lots of negative posts, you become marginally more likely to post something negative.
That’s a legitimate finding, but it isn’t actually proof that Facebook affects our emotions. It’s proof that Facebook affects how we express emotions within Facebook. And that might just be evidence of commonplace social norms, applied to a social network site: If I’m having a bad day, and everyone on Facebook is sharing happy news, I’m a bit less likely to pipe up and spoil the mood. That doesn’t necessarily mean I’ve adopted my peers’ emotions (as reported via the Facebook algorithm). It just means I’m adopting phraseology similar to what I see around me.