Analytics versus Slacktivism

A recent study by researchers at the University of British Columbia's Sauder School of Business has brought "slacktivism" back into the headlines.  As usual, this has more to do with angling for media attention than it does with the substantive findings.

The authors have conducted an interesting series of experiments, aimed at comparing "public tokens of support" (such as 'liking' on Facebook) with "private tokens of support" (such as signing a petition).  They demonstrate that public tokens of support satisfy the psychological need for "impression management," and thus reduce the urge to donate under experimental settings.  Displaying a pin or some other low-effort public token of affiliation can grant individuals "moral license" to slack off and not take further actions.  The study, published in the Journal of Consumer Research, seems well-executed to me.  But it doesn't quite show what they'd like it to show.

In a press release earlier this month, the university press office announced: "Slacktivism: 'Liking' on Facebook May Mean Less Giving."

Well, sure. …Maybe.

They go on to proclaim: “Would-be donors skip giving when offered the chance to show public support for charities in social media.”

Hmm… no. Not quite.  You’ve got an external validity problem.

Under their experimental design, the researchers make the exact same donation request, regardless of whether participants took a public action, a private action, or no action.  (It wouldn’t be much of an experiment if they didn’t.)

But in the real world, social change organizations routinely optimize their donation requests to account for different levels of participation.  Dan Kreiss offers an example in his book, Taking Our Country Back.  When you visited the 2008 Obama campaign website, they altered the splash page based on whether you had visited the site before, signed up, ordered a t-shirt, and created a MyBO account (pages 150-151).  These various characteristics led to different donation requests and alternate donation language — all rigorously tested to maximize participation.
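The logic of tailoring the ask to the visitor is simple to sketch in code.  Here's a minimal illustration in Python — the engagement tiers, dollar amounts, and field names are all invented for illustration, not the Obama campaign's actual implementation:

```python
# Hypothetical sketch of tiered donation asks keyed to prior engagement.
# The tiers, field names, and ask language are invented for illustration.

def donation_ask(visitor):
    """Return a donation request tailored to the visitor's engagement level."""
    if visitor.get("has_account") and visitor.get("has_donated"):
        # Known donors get the largest, most direct ask.
        return "Welcome back! Can you chip in $50 to keep us going?"
    if visitor.get("has_signed_up"):
        # Signed-up supporters get a smaller first-donation ask.
        return "Thanks for signing up. Will you make your first donation of $15?"
    if visitor.get("has_visited_before"):
        # Repeat visitors get a low-commitment ask.
        return "Good to see you again. Will you join us with a $10 contribution?"
    # First-time visitors aren't asked for money at all.
    return "Learn more about the campaign and sign up for updates."

print(donation_ask({"has_signed_up": True}))
```

The point is that the ask itself is a variable — which is exactly what a one-size-fits-all experimental donation request holds constant.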

All that testing requires a LOT of traffic (h/t Kyle Rush).  And one of the benefits of "public tokens" like Facebook likes/shares is that they can generate increased traffic.  One of the secrets to Upworthy's phenomenal growth has been optimizing their content for Facebook sharing (slide 21 in their slide deck).  Companies like ShareProgress and CrowdTangle specialize in helping make these public tokens of support even more public.  Doing so brings in more potential supporters, which in turn leads to more engagement.
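To see why testing requires so much traffic, consider a back-of-the-envelope power calculation using the standard normal approximation for comparing two proportions.  The baseline donation rate and lift below are illustrative numbers, not figures from any real campaign:

```python
from math import ceil, sqrt

def required_sample_size(p_base, lift, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variant to detect a relative lift in a
    conversion rate (two-sided alpha = 0.05, power = 0.80, normal approximation)."""
    p_alt = p_base * (1 + lift)
    pooled = (p_base + p_alt) / 2
    numerator = (alpha_z * sqrt(2 * pooled * (1 - pooled))
                 + power_z * sqrt(p_base * (1 - p_base) + p_alt * (1 - p_alt))) ** 2
    return ceil(numerator / (p_base - p_alt) ** 2)

# Detecting a 10% relative lift on a hypothetical 2% donation rate takes
# tens of thousands of visitors per variant:
print(required_sample_size(0.02, 0.10))
```

Small improvements on low baseline rates demand enormous samples, which is why the extra traffic that public sharing generates is itself an input to the optimization process.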

----

I’ve written about this before.  A lot.  The problem with calling this experimental design a study of “slacktivism” is that it completely ignores the feedback loop that occurs between individual acts of participation and a larger organizational context.  Advocacy groups are using sophisticated analytics tools to listen to their supporters in novel ways, and to reach new supporters that they otherwise wouldn’t encounter.  If you ignore all that real-world activity, then you can’t effectively measure whether the net impact of digital participation is positive or negative.

I'm not trying to trash the authors' work.  They've produced a nice experimental study.  And they've packaged that study to attract media attention.  "Slacktivism" works in headlines a lot better than "public vs private tokens of engagement."  But the end result is that a lot of advocacy professionals are going to see the headline and think, "Ah hah.  Research has shown that Facebook is bad for giving.  I knew it!"  Something gets lost in translation when you start packaging research for media soundbites.

The solution to decreased digital participation isn’t to stop asking supporters to engage online; it’s to embrace a culture of testing that leads you to start asking them better.


2 thoughts on "Analytics versus Slacktivism"

  1. sounds like “moral balancing” (flawed logic of: if I do this good thing I can do a bad thing later, or vice-versa)

  2. Also shows how the long-standing defense of experimental psychology methodology – “it’s the best we’ve got” – doesn’t really wash any more.
