Hashtag Activism Isn’t Activism (A comment on #cancelColbert)

“If I can’t dance, I don’t want to be part of your revolution.” – Emma Goldman

That was one of my favorite slogans, back in my organizing days.  I met plenty of campus activists who were permanently serious.  The stakes were dire, and nothing was ever a laughing matter.  I couldn’t stand those activists.  I always felt their personal severity made them a lot less effective in their work.  They existed in an echo chamber of constant agreement, and drove away anyone who failed to toe the party line.  And their tactics always adopted the form of “let’s make our peers feel uncomfortable!  Then they’ll all realize…”

I was reminded of all this last night, when I briefly logged on to twitter and saw the #CancelColbert trending hashtag.

Here’s what happened: Dan Snyder (owner of the Washington Redskins) has faced increasing pressure over the racist name of his team.  He decided to defuse that pressure through a PR maneuver, launching the “Washington Redskins Original Americans Foundation.”  He’ll give a little money to Native American communities, so long as they’ll agree to be photographed in Redskins gear.  (If he’s polite, maybe he’ll leave the money on the bedside table…)

Colbert ran a segment on Snyder, pointing out the absurdity of it.  He ended by announcing that, in the spirit of Snyder, he’d be launching the “Ching Chong Ding Dong Foundation for Sensitivity to Orientals or Whatever.”  It was, in my opinion, an appropriate skewering of a desperate and offensive PR move.


Comedy Central’s @ColbertReport account tweeted the punchline to the joke.  Losing the context made the joke completely unfunny.  As Erin Gloria Ryan points out at Jezebel, “The bit only works as a whole; it doesn’t work in parts. Colbert’s character is saying here that naming a charity ‘Washington Redskins Original Americans Foundation’ is just as offensive as naming a charity the ‘Ching Chong Ding Dong Foundation for Sensitivity to Orientals or Whatever.’ That’s the joke.”

From there, it appears the professional “twitter activists” took over.  Tweeter Suey Park announced her outrage at Colbert’s “racist joke” and launched a #CancelColbert hashtag.

Now, Colbert isn’t in any actual danger of cancellation.  And Park explained on Huffington Post Live that she used this language because “unfortunately people don’t usually listen to us when we’re being reasonable.” So that’s fine, make an unreasonable demand, start a conversation.  Park will gain some more twitter followers out of the exchange, Colbert will tape his next segment, and we’ll all move on to another outrage in time for dinner.

But I can’t help being reminded of those far-too-severe environmental activists.  The #CancelColbert “conversation” hasn’t been much of a conversation.  When invited onto Huffington Post Live to explain “why Cancel Colbert,” Park’s immediate response was “well that’s a loaded question.”  She then went on to accuse the host (who was giving her airtime) of “silencing” her.

Episodes like this one don’t build your movement. They concentrate your movement.  They foster an umbrage mentality and more-serious-than-thou sensibility.  It isn’t fun for anyone, and it isn’t appealing to anyone.

This hashtag activism is the digital version of an old, severe strain of activism.  Unfortunately, it’s a strain that gives activists, as a whole, a bad name.

If I can’t dance, I don’t want to be part of your revolution.

News Coverage of Economic Immobility: Free of Historical Context

A recent Harvard study has found that economic mobility has not changed substantially over the last couple of decades.

This has been framed repeatedly in the media as “mobility has not declined.” The Times headline is literally, “Upward Mobility Has Not Declined, Study Says”.

The NPR headline, “Study: Upward Mobility No Tougher In U.S. Than Two Decades Ago”, captures that story’s spin. Over at the New Yorker? “Social Mobility Hasn’t Fallen: What It Means and Doesn’t Mean”.

The reason for this framing is surely that political leaders of several stripes have contended that mobility actually is going down. Remarkably, this group includes not only Obama and other Democrats, but also visible Republicans like Paul Ryan.

Still, just because political leaders are wrong does not justify using their claims as a starting point. A more accurate headline would be, “Study Finds Economic Mobility Remains Low”. Economic mobility has been remarkably low in the US since the middle of the 20th century. The new Harvard study is a valuable addition to the literature, but it is consistent with years-old studies suggesting that we’ve plateaued near the bottom of the scale.

Here’s a graph from a 2007 study using Social Security data, showing how mobility dropped sharply in the 1940s and ’50s, and has stayed low since then.

Graph: Decrease in Economic Mobility

Even the 1960s and ’70s showed slightly more variability and, on average, somewhat higher mobility. The Harvard study, however, covers the working years of those born in the 1970s and later — that is, roughly the last twenty years.

Look again at the graph. There is about a 3% chance that somebody in the bottom 40% will climb to the top 40%, and vice-versa, in a given year. Through 1950, the odds of moving up from the bottom to the top 40% were at least 6%, and as high as 12%, depending on the year. Compared to that range especially, the Reagan years basically saw everyone cemented in place.

When mobility is already so very low, and has been for decades, the key finding of this study is not that it has failed to drop further. This is akin to a sports section headline of “Cubs Fail to Win World Series”. Nobody would write that headline. “Cubs Wrap Another Miserable Year” is more like it.

This would likely be true even if the GM had promised a title at the start of the year — though the New York Post would probably go with throwing that promise back in his face. Sadly, the reporters who cover economics research know far, far less about that subject than sports reporters do about the games.

These headlines are a good example of political coverage only taking place within the boundaries set by policy leaders, even when the facts should militate otherwise. Political reporters and editors don’t know whether economic mobility has gone up or down over the 20th Century; they only know what Paul Ryan and Barack Obama say about it. That’s shameful, of course, when good information is publicly available — much of which is readable to the outsider.

Shouldn’t reporters be fact checking whether mobility really has gone down? Asking politicians where they got their data? Reading enough books and scholarly articles (or at least the darned abstracts) to have at least a semblance of an idea where to start looking for such an answer? Regardless, they are not doing so, and it takes the PR flacks at Harvard (who have apparently done their job very well this week) to put such research on their desks.

Thankfully, both the paper and the coverage have put this finding in the broader context of growing concentration of wealth. On this question there is widespread agreement that inequality is (a) worse in the US than in any other industrial country, and (b) growing. Here’s the relevant chart from the 2007 study linked above that shows the growth of inequality:

Graph: Rising Economic Inequality

This graph depicts the “Gini Coefficient,” which is a measure of economic inequality. Inequality dipped after the war, and it has climbed steadily since then. This graph stops in 2004, but it has continued unabated in the decade since as well.
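For readers who haven’t met the measure, the Gini coefficient can be computed directly from a list of incomes. Here is a minimal sketch with made-up incomes (0 means perfect equality; 1 means one person holds everything):

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula.
    0 = perfect equality, 1 = one person holds everything."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Equivalent to summing |x_i - x_j| over all pairs, but done in
    # O(n log n) using the sorted order (i is zero-based here).
    cum = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return cum / (n * total)

perfectly_equal = [50_000] * 5
very_unequal = [0, 0, 0, 0, 250_000]
print(round(gini(perfectly_equal), 3))  # 0.0
print(round(gini(very_unequal), 3))     # 0.8
```

The incomes above are invented for illustration; the 2007 study computes its Gini series from Social Security earnings records.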

The study and the coverage are also right to highlight important geographic differences in mobility. A kid who grows up in the bottom fifth in San Francisco or New York City is over twice as likely to reach the top fifth as a similarly positioned kid growing up in Atlanta or Charlotte. (Could it possibly be that collective investment leads to greater mobility?) Check out the Times’ really cool interactive map of economic mobility.

This wealth of great detail notwithstanding, the new Harvard study’s framing in the news headlines and leads is disappointing. “Cubs Not Champions” is not the right frame; this is a lot closer to “Cubs Continue Futility”.

P.S. Thankfully, economic inequality is now being treated as an economic problem. In that vein, we should be looking at the political explanation for inequality — which brings me, for the umpteenth time, to Winner-Take-All Politics by Jacob Hacker and Paul Pierson. If you have not read this book and give a gram of care about inequality, go read it now. Even for those with no training in economics or political science, it’s a very accessible — and persuasive — read.

My Trouble with VictoryKit

We lost Aaron Swartz a year ago today.

I’ve been thinking a lot recently about VictoryKit, Aaron’s final unfinished project. He told me just a little bit about it last year, when we were both at the OPEN Summit.  The overlaps between his tech product and my emerging research puzzle (on analytics and activism) were uncanny, and the last conversation we had ended with a promise that we’d discuss it further soon.

As far as I can tell, VictoryKit is a growth engine for netroots advocacy groups.  It automates A/B testing, and draws signal from a wider range of inputs (open-rates, click-rates, social shares, etc) than usual.

The thing is, as I’ve conducted my early book research and learned more about VictoryKit, I think I’ve identified a real problem in the design.  I’m worried that VictoryKit automates too much.  It puts too much faith in revealed supporter opinion, at least as it is constructed through online activity.  And in the long term, that’s dangerous.

VictoryKit is designed to “send trickles, not blasts.”  The idea is to be constantly testing, constantly learning.

I heard Jon Carson from OFA give a talk last summer where he remarked, “if you get our email before 8AM, you’re in our testing pool.”  OFA is basically the industry standard for email testing.  They test their messaging in the morning, sending variant appeals out to random subsets of their list*. They refine their language a few hours later, based on the test results, and then they can send a full-list blast in the afternoon.  That’s one of the basic roles of A/B testing in computational management.
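The decision rule behind that morning test boils down to a standard significance check on the two variants’ response rates. Here is a minimal sketch; the subject lines, send counts, and open counts are all invented, and this is not OFA’s actual tooling:

```python
import math

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """z-statistic for the difference between two observed open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    p = (opens_a + opens_b) / (sends_a + sends_b)  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Morning test: two subject lines, 5,000 recipients each (made-up numbers)
z = two_proportion_z(opens_a=900, sends_a=5000, opens_b=1020, sends_b=5000)
if abs(z) > 1.96:  # significant at the 5% level
    winner = "B" if z > 0 else "A"
    print(f"Afternoon blast goes out with variant {winner} (z = {z:.2f})")
else:
    print(f"No clear winner (z = {z:.2f}); decide on other grounds")
```

With these invented numbers, variant B’s 20.4% open rate beats A’s 18% decisively, so B gets the afternoon blast.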

VictoryKit gets rid of the full-list blast.  Instead, you keep feeding petitions into the magical unicorn box**, it judges which petition is more appealing, and it then sends that petition to another incremental segment of the list.  I haven’t looked into the exact math yet, but the basic logic is clear: analytics represent member opinion.  Automate more decisions by entrusting the analytics, and you’ll be both more representative and more successful.
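One common way to implement that kind of incremental send-and-learn loop is an epsilon-greedy bandit. The sketch below is a toy illustration of the general idea, not VictoryKit’s actual code; the petitions, conversion rates, batch sizes, and `simulate_batch` stand-in are all invented:

```python
import random

random.seed(1)  # reproducible sketch

def simulate_batch(petition, batch):
    # Hypothetical stand-in for real open/click tracking:
    # petition "A" converts at 12%, "B" at 8% (invented rates)
    rate = {"A": 0.12, "B": 0.08}[petition]
    return sum(random.random() < rate for _ in range(batch))

def trickle(petitions, list_size, batch=1000, epsilon=0.1):
    """Epsilon-greedy sketch of "send trickles, not blasts": seed each
    petition with one test batch, then send each subsequent batch to the
    current best performer, reserving a small fraction of sends
    (epsilon) to keep testing the alternatives."""
    stats = {p: {"sends": 0, "actions": 0} for p in petitions}
    for p in petitions:  # initial test batch for every petition
        stats[p]["sends"] += batch
        stats[p]["actions"] += simulate_batch(p, batch)
    sent = batch * len(petitions)
    while sent < list_size:
        if random.random() < epsilon:
            pick = random.choice(petitions)  # keep exploring
        else:  # exploit: best observed action rate so far
            pick = max(petitions,
                       key=lambda p: stats[p]["actions"] / stats[p]["sends"])
        stats[pick]["sends"] += batch
        stats[pick]["actions"] += simulate_batch(pick, batch)
        sent += batch
    return stats

stats = trickle(["A", "B"], list_size=50_000)
print({p: s["sends"] for p, s in stats.items()})
# With these conversion rates, petition A receives most of the sends
```

Note what this loop optimizes for: whichever petition generates the most clicks, which is exactly the “revealed preference” problem discussed below.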

The problem here is that our revealed preferences are not the entirety of our preferences.

A.O. Hirschman wrote about this in “Against Parsimony.”  Essentially, we have two types of preferences: revealed preferences and meta-preferences.  Revealed preferences are what we do, what we buy, what we click.  But Hirschman points out that we also have systematic preferences for what kind of options we are presented with.

I always think of this as the Huffington Post’s “Sideboob” problem.  Huffpo has a sideboob vertical because celebrity pics generate a lot of clicks.  That’s a revealed preference: if Huffpo gives us a story about inequality and a story about Jennifer Lawrence at juuuuust the right camera angle, JLawr will be far more popular.  So Huffpo provides a ton of sideboob and a medium amount of hard-nosed journalism.

But!

If the Huffington Post gauged reader preferences through different inputs (by asking them to take online surveys, for instance), then they’d get a different view of reader preferences.  More people click on celebrity pics than will say “yes, that’s what I want from the Huffington Post.”

There’s a narrow version of economic thought that rejects meta-preferences as unreal.  If people say they want hard news, but they click on the celeb pics, then they must really want the celeb pics.  But that’s unsupportable upon deeper reflection.  People are complex entities.  We can simultaneously watch junk TV and wish there were higher-quality programming.  New gym memberships peak around New Year’s and late spring, as people who generally don’t reveal a preference for regular exercise act on their meta-preference for healthier living.

In online political advocacy, the signals from revealed preferences are even weaker.  We click on the petitions that are salient, or engaging, or heart-rending.  But we want our organizations to work on campaigns that are the most important and powerful.  Some of those campaigns won’t be very “growthy.”  But that doesn’t mean they’re unimportant.

Take a look, for instance, at question #6 in Avaaz’s 2013 member survey.  Avaaz asked global members their opinion on a wide range of issues.  It also asked them “how should Avaaz use this poll.”  Only 5% thought their opinions should be binding on the organization.  The other 95% felt it should be as minor input or as a loose guide.  When asked, Avaaz members announce a meta-preference that the staff reserve a lot of room to trust their own judgment.

The problem with analytics-based activism is that it can lead us to prioritize the most clickable issues instead of the most important ones.  That’s what can happen if you equate revealed preferences, as evidenced by analytics signals, with the totality of member preferences.

There’s a simple solution to that problem: maintain a mix of other signals.  Keep running member surveys.  Make phone calls to your most active volunteers to hear how they think things are going.  Hire and empower the right people, then trust their judgment.  Treat analytics as one input, but don’t put your system on autopilot.
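One crude way to picture “analytics as one input” is to rank campaigns on a weighted blend of signals rather than on clicks alone. The campaigns, signal values, and weights below are all invented for illustration:

```python
# Hypothetical campaigns scored on several signals, each normalized 0-1
campaigns = {
    "celebrity petition": {"click_rate": 0.9, "survey_rank": 0.2, "staff_rank": 0.1},
    "policy campaign":    {"click_rate": 0.3, "survey_rank": 0.8, "staff_rank": 0.9},
}

# Analytics gets a vote, but so do member surveys and staff judgment
WEIGHTS = {"click_rate": 0.3, "survey_rank": 0.4, "staff_rank": 0.3}

def score(signals):
    """Weighted blend: analytics is one input, not the whole decision."""
    return sum(WEIGHTS[k] * v for k, v in signals.items())

ranked = sorted(campaigns, key=lambda c: score(campaigns[c]), reverse=True)
print(ranked[0])  # the policy campaign outranks the more clickable option
```

The weights themselves are a value judgment, of course; the point is simply that they are a judgment someone makes deliberately, rather than a side effect of a click-maximizing loop.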

If I understand it right, VictoryKit promotes exactly the type of autopilot that I’m worried about.

Maybe Aaron would have had a good rebuttal to this concern.  He was incredibly thoughtful, and it’s entirely possible that he envisioned a solution that I haven’t thought of.

But today, one year later, as we reflect on his legacy, I want to offer this up as a conversation topic:

Does VictoryKit automate too much?  And if so, how do we improve it?

*I have a hunch that they also test during the day; otherwise their response pool would be biased toward early birds.

**Adam Mordecai refers to Upworthy’s analytics engine as a “magical unicorn box.” Adam Mordecai is funnier than I am.  Ergo, I’m going to start stealing language from him.

Frank Luntz as a Man Out of His Time

Molly Ball has a typically excellent article at The Atlantic, profiling Republican spin guru Frank Luntz.  In the 1990s, Luntz was the guy who told Republicans that they should rename the estate tax “the death tax.”  Since then, he’s become a fixture of political media, synonymous with spin.  He is a one-man confirmation of all your most cynical fears about congressional politics.

The premise of Ball’s article is that Luntz has grown depressed and disheartened about the American public.  I think the more surprising thing is that the man truly seems to believe that his techniques still work just fine.  Consider:

“I spend more time with voters than anybody else,” Luntz says. “I do more focus groups than anybody else. I do more dial sessions than anybody else. I don’t know shit about anything, with the exception of what the American people think.”

Focus groups and dial sessions were the cutting edge of 1994.  They’re laughably antiquated today.  And what’s more, they were never a perfect approximation of public opinion.  They’re useful-but-limited tools that reveal an imperfect artifact, which in turn can serve as a stand-in for public opinion.

Focus groups and dial sessions are technologies that can help you pick out particularly resonant phrases and images.  They were excellent tools back when the 30-second attack ad was virtually the only messaging vehicle in town: (1) Run a focus group.  (2) Find resonant language.  (3) Produce a commercial.  (4) Test it with some people.  (5) Run the commercial.  (6) Get paid crazy money.  Sounds like a pretty sweet gig.

The problem for Frank Luntz isn’t that people have gotten “more contentious and argumentative”*.  The problem is that his two nifty tools aren’t the only game in town anymore.  We’ve realized that campaign ads are pretty weak persuasion tools.  We’ve developed plenty of other outreach mechanisms (*cough* Analyst Institute *cough*) that don’t rely solely on Luntz’s preferred form of crafted talk.  And we’re developing new techniques for gauging activated public opinion through social media and analytics.**

Luntz is a lot like the old school scouts in Moneyball. He “knows baseball,” and he knows it based on the same old techniques that he pioneered 20 years ago.

If he seems sad, it’s probably because he’s in denial about how the game has changed.

 

*I’ve just started reading Berry and Sobieraj’s new book, The Outrage Industry.  I’m pretty sure they would argue that we have gotten more contentious and argumentative.  I’m inclined to agree.  But I find it hard to believe that’s the real problem Frank Luntz is facing.

**Which is the subject of the book manuscript that I’ll go back to working on as soon as I’m finished with this blog post.

On Change.org’s 50 Million Milestone and the Importance-Meter

This weekend, Change.org hit a big milestone: 50 million people worldwide have now taken action on their site*.

That’s huge.  By way of comparison, Avaaz.org has just over 31 million people.  It seems that Change.org’s controversial decision to stay politically neutral is paying off**.

For the past month, I’ve been visiting the homepages of Change.org and SignOn.org every day.  I record the top 10 petitions promoted by each site.  I’ll be doing this for another five months to create a dataset that I can use to draw some firm comparisons. Despite the milestone, I have to admit that the more time I spend studying Change.org, the more ambivalent I feel about the company.

The thing that bugs me about the top Change.org petitions is what we might call the lack of an “importance-meter.”

The #1 petition at Change.org today is titled “Justice for Andra Grace — Tougher Animal Abuse Laws Are Necessary!”  The petition tells the heart-rending story of a South Carolina man who tried to kill a dog by dragging it behind his pickup truck.  The maximum penalty for his crime in South Carolina is only $1,100 and/or 30 days in jail.  The author concludes by calling for tougher animal abuse laws.

Now, that can be a worthy cause.  People love their pets, and if pet-lovers get organized through Change.org and start taking on the government, I think that’s a Good Thing.  But this petition isn’t addressed to the South Carolina legislature.  It’s addressed to “animal lovers of the world.”  Signing this petition is an act of social solidarity, not an act of political pressure.

By comparison, SignOn/MoveOn’s #1 petition today is titled “Breaking News: House Republicans to Torpedo President Obama’s Iran Agreement.”  It tells the story of congressional maneuvering by Eric Cantor’s office that could undermine tense international diplomatic negotiations with Iran.  The author explains the interim deal with Iran, and the ways that Cantor’s bill could destroy our negotiating ability.  The petition is directed to members of the House of Representatives.

Let’s set aside for a moment whether one of these issues is innately more important than the other.  The real problem is in how each is constructed.

Three years ago, I wrote a long ShoutingLoudly post titled “In Praise of Petitions (Sort of).”  The TL;DR version is that the best high-volume tactics like petitions (online or off) have layers to them.  An online petition acts as a springboard for offline tactics like solidarity rallies, marches, and citizen lobbying.  The easy first step of signing your name leads into a “ladder of engagement” that promotes more intense participation.

The Andra Grace petition is directionless.  The Iran petition is focused.  The Andra Grace petition calls on no one in particular to promote tougher animal abuse laws.  The Iran petition calls on members of Congress to oppose a specific bill, currently under debate.

But the Andra Grace petition has a clickable image and a heart-rending story.  The Iran petition has no image and six footnotes.

Let me be clear: we should not expect every petition on either site to be professionally produced.  One of the benefits of distributed petition platforms is that anyone can launch these campaigns.  I don’t mean to insult the author for being new to online campaigning.  But the top of the homepage is valuable digital real estate, and algorithms can automate value-judgments.  The campaigns that you promote and highlight say something about your identity as an organization.

Promoting the Andra Grace petition (or, two weeks ago, the petition to Family Guy creator Seth MacFarlane to bring back the cartoon dog he’d killed off) represents an algorithmic value-judgment.  It says that the most clickable campaigns — the ones that will bring in the widest audiences — are the best campaigns.  And I doubt that anyone at Change.org entirely believes that.

50 million people is a hell of a milestone.  No other social change organization comes close to that reach.  I wonder, though, whether they are optimizing for the right things.

 

 

*(via PD+ First Post, which ShoutingLoudly readers should really subscribe to.)

**Note: those are all self-hyperlinks.  I maybe write too much about Change.org.

Analytics versus Slacktivism

A recent study by researchers at the University of British Columbia’s Sauder School of Business has brought “slacktivism” back into the headlines.  As usual, this has more to do with gaming for media attention than it does with the substantive findings.

The authors have conducted an interesting series of experiments, aimed at comparing “public tokens of support” (such as ‘liking’ on Facebook) with “private tokens of support” (such as signing a petition).  They demonstrate that public tokens of support satisfy the psychological need for “impression management,” and thus reduce the urge to donate in experimental settings.  Displaying a pin or some other low-effort public token of affiliation can grant individuals “moral license” to slack off and not take further actions.  The study, published in the Journal of Consumer Research, seems well-executed to me.  But it doesn’t quite show what they’d like it to show.

In a press release earlier this month, the university press office announces “Slacktivism: ‘Liking’ on Facebook May Mean Less Giving.”

Well, sure. …Maybe.

They go on to proclaim: “Would-be donors skip giving when offered the chance to show public support for charities in social media.”

Hmm… no. Not quite.  You’ve got an external validity problem.

Under their experimental design, the researchers make the exact same donation request, regardless of whether participants took a public action, a private action, or no action.  (It wouldn’t be much of an experiment if they didn’t.)

But in the real world, social change organizations routinely optimize their donation requests to account for different levels of participation.  Dan Kreiss offers an example in his book, Taking Our Country Back.  When you visited the 2008 Obama campaign website, they altered the splash page based on whether you had visited the site before, signed up, ordered a t-shirt, or created a MyBO account (pages 150-151).  These various characteristics led to different donation requests and alternate donation language — all rigorously tested to maximize participation.

All that testing requires a LOT of traffic (h/t Kyle Rush).  And one of the benefits of “public tokens” like Facebook likes and shares is that they can generate increased traffic.  One of the secrets to Upworthy’s phenomenal growth has been optimizing their content for Facebook sharing (slide 21 in their slidedeck).  Companies like ShareProgress and CrowdTangle specialize in helping make these public tokens of support even more public.  Doing so brings in more potential supporters, which in turn leads to more engagement.
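How much is “a LOT”? A standard sample-size calculation makes the point: detecting small lifts on small conversion rates takes enormous traffic. The numbers below are illustrative, not anyone’s real conversion data:

```python
import math

def sample_size_per_variant(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a given relative
    lift over a base conversion rate, at 5% significance (two-sided) and
    80% power. Textbook two-proportion formula."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 10% relative lift on a 2% donation rate
print(sample_size_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

That’s per variant, per test, which is why high-traffic channels like Facebook sharing matter so much to a culture of testing.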

—-

I’ve written about this before.  A lot.  The problem with calling this experimental design a study of “slacktivism” is that it completely ignores the feedback loop that occurs between individual acts of participation and a larger organizational context.  Advocacy groups are using sophisticated analytics tools to listen to their supporters in novel ways, and to reach new supporters that they otherwise wouldn’t encounter.  If you ignore all that real-world activity, then you can’t effectively measure whether the net impact of digital participation is positive or negative.

I’m not trying to trash the authors’ work.  They’ve produced a nice experimental study.  And they’ve packaged that study to attract media attention.  “Slacktivism” works in headlines a lot better than “public vs. private tokens of engagement.”  But the end result is that a lot of advocacy professionals are going to see the headline and think, “Ah hah.  Research has shown that Facebook is bad for giving.  I knew it!”  Something gets lost in translation when you start packaging research for media soundbites.

The solution to decreased digital participation isn’t to stop asking supporters to engage online; it’s to embrace a culture of testing that leads you to start asking them better.

 

 

On the Limits of “Big Data”: hidden structures and network backchannels

Fellow Internet researchers, we need to have a little talk.  It’s about “big data,” and what it isn’t.  

Consider the following case:

Over the summer, David Corn at Mother Jones published an investigative piece about a conservative insider group named Groundswell.  Groundswell included in-person meetings and a Google Group that tea party activists, think tankers, conservative media journalist/activists, and government staffers used to discuss strategy and coordinate messaging.  In essence, it was yet another “journolist” for the right (and, as such, it received basically zero public outrage… as David Weigel puts it, “conservative news outlets talking to conservatives on background?  Who didn’t figure this was happening anyway?”).

Weigel calls out the following passage from Corn’s reporting:

At the March 27 meeting, Groundswell participants discussed one multipurpose theme they had been deploying for weeks to bash the president on a variety of fronts, including immigration reform and the sequester: Obama places “politics over public safety.” In a display of Groundswell’s message-syncing, members of the group repeatedly flogged this phrase in public. Frank Gaffney penned a Washington Times op-ed titled “Putting Politics Over Public Safety.” Tom Fitton headlined a Judicial Watch weekly update “Politics over Public Safety: More Illegal Alien Criminals Released by Obama Administration.” Peter List, editor of LaborUnionReport.com, authored a RedState.com post called “Obama’s Machiavellian Sequestration Pain Game: Putting Politics Over Public Safety.” Matthew Boyle used the phrase in an immigration-related article for Breitbart. And Dan Bongino promoted Boyle’s story on Twitter by tweeting, “Politics over public safety?” In a message to Groundswellers, Ginni Thomas awarded “brownie points” to Fitton, Gaffney, and other members for promoting the “politics over public safety” riff.

The reason this passage is noteworthy is that it reveals an underlying flaw in virtually every academic study of online information diffusion.

Imagine if you were conducting a study of how the “politics over public safety” meme diffused through the blogosphere.  You’d likely combine data from Google Trends, Lexis-Nexis, and the Twitter firehose to identify instances of the phrase.  You’d rely on the digital traces from social network ties and hyperlinks to identify where the phrase started and how it spread.  You’d probably produce some fancy network graphs.  If it’s part of a larger study, you might combine this case with several others to assess Granger causality.  In the end, the data would tell a sophisticated story about what sorts of news outlets, pieces of content, or individuals in a network drive meme diffusion.

But you’d be wrong.  You’d be wrong because, according to public data, it looks like the phrase diffused online from Frank Gaffney to Tom Fitton, then to Peter List, Matt Boyle, and Dan Bongino.  But it actually diffused through an in-person meeting and a backchannel Google Group.  The public data can’t account for the hidden structure provided by offline and online-but-private communication systems.
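The pitfall is easy to illustrate. A naive timestamp-based inference (a crude stand-in for the fancier network methods) would reconstruct a diffusion chain that simply doesn’t exist. The timestamps below are invented; only the outlet names come from Corn’s reporting:

```python
from datetime import datetime

# Public traces only: when each outlet used the phrase (times are invented)
public_mentions = [
    ("Washington Times op-ed (Gaffney)", datetime(2013, 4, 1, 9, 0)),
    ("Judicial Watch update (Fitton)",   datetime(2013, 4, 2, 11, 30)),
    ("RedState post (List)",             datetime(2013, 4, 3, 8, 15)),
    ("Breitbart article (Boyle)",        datetime(2013, 4, 3, 14, 0)),
    ("Tweet (Bongino)",                  datetime(2013, 4, 3, 14, 5)),
]

def infer_cascade(mentions):
    """Naive inference: assume each mention 'caught' the phrase from the
    most recent earlier mention. This is all that timestamps support."""
    ordered = sorted(mentions, key=lambda m: m[1])
    return [(ordered[i - 1][0], ordered[i][0]) for i in range(1, len(ordered))]

edges = infer_cascade(public_mentions)
for src, dst in edges:
    print(f"{src} -> {dst}")
# Every inferred edge is wrong: the true common source was a private
# Google Group thread that appears nowhere in the public data.
```

More sophisticated methods (hyperlink graphs, follower networks, Granger causality) are better at this, but they share the same blind spot: they can only infer structure from traces the coordination actually left in public.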

This is a simple point, but it’s also a point that I inevitably make at every academic panel on “big data.”  We, as a research community, are repeatedly deriving incorrect conclusions.  We’re able to draw upon more and more data, and we’re confusing that with comprehensive data.

Big data isn’t comprehensive data.  It is systematically incomplete.

 

 

The Latest Change at Change.org

Five months ago, Change.org received $15 million in venture capital from Pierre Omidyar. This week, we’re getting an initial look at what they’ve invested the money in.  I’m a bit skeptical.

The big new feature is called Decision Makers (screenshot below).  It’s a portal for members of Congress, corporate CEOs, and other common targets of Change.org petitions to engage in a dialogue with petition creators. Jake Brewer (formerly of PopVox) is the development lead on the project.  In an interview with Issie Lapowski at Inc magazine, Brewer said, “With this product, we’re bringing the government out to where the people are, versus bringing the people into where the government is.”  He added, “We totally expect that users won’t always like the responses, because they’ll be press release-y, inauthentic, or might not address the problem. But what I’m most excited about is the ability of users to respond to the response. That’s a conversation.”

Screenshot: Change.org Decision Makers

 

I applaud Jake’s enthusiasm, but have my doubts about just how effective this new feature will be.  Here are four things to keep an eye on as the new product launches:

1. Total elite buy-in.  Elizabeth Warren and Paul Ryan headline the decision makers who have signed up so far.  I imagine plenty of members of Congress will follow suit (PopVox has been heavily adopted in Congress; Jake Brewer is the right person to be launching this new feature).  But what about statehouses and corporations?  The top 5 petitions featured at Change.org right now are targeted at Yahoo, Chuck E. Cheese, CraigsList, Oklahoma Child Welfare Services, and Mars Incorporated.  Unless Change.org starts promoting Congressionally-targeted petitions, or immediately starts attracting Fortune 500 CEOs, there’s going to be a disconnect between the new tool and the core product.

2. Actual elite participation.  Getting decision makers to sign up is only the first hurdle.  When I click on Elizabeth Warren’s or Paul Ryan’s pages, it tells me how many open petitions with more than 10 signatures are addressed to each (74 for Warren, 96 for Ryan) and how many responses each has written (0 for Warren, 0 for Ryan).  This tool has only been around for two days so far, so it’s far too early to declare this a failure.  But CEOs and congresspeople lead pretty busy lives. Asking them to engage in deliberative conversations with digital publics (or even asking them to delegate staff time to this purpose) is a heavy lift. Decision Makers could easily become a ghost town.

3. Change.org petitioners’ behavior.  I’m presenting a conference paper this Tuesday that compares change.org and petitions.moveon.org as distributed petition platforms.*  It’s part of my new book project, on analytics and activism.  One of the major differences between the platforms is the character of their users.  Last week, about half of MoveOn’s top petitions were focused on the government shutdown or the ACA rollout.  That’s to be expected — those were the two issues dominating the national political agenda and media agenda.  Only one of Change.org’s top petitions concerned either of these issues: a petition framed around cancer treatment that called for an end to the government shutdown.  Change.org has cultivated a public that mostly focuses on non-traditional political issues.  Today’s top petition airs frustration over the new version of Yahoo! mail.  Last week the top petition asked a high school to revoke a student’s alcohol-related suspension.  These are social issues, not traditional political issues.  If Change.org petition-creators don’t target individual congresspeople, then a feature initially aimed at cultivating congressional response is going to face a steep climb.

4. Neutrality in a moment of overt partisanship.  Change.org prides itself on being a neutral platform.  They want to cater to Democrats and Republicans, teachers unions and school reformers.  That neutrality is one reason why they are well-situated to launch the Decision Makers feature.  Paul Ryan isn’t going to start a dialogue through MoveOn’s website anytime soon.  But that neutrality also is at odds with the reality of our political moment.  We just had a government shutdown because a small enclave within one half of one branch of government didn’t like the rest of our government.  That isn’t gridlock.  The Republican party network is moving towards an internal civil war, between the extreme ideologues and the much-more-extreme ideologues.  It’s unreasonable to feel “neutral” about these events.  What sort of “dialogue” are we supposed to foster with Paul Ryan or Elizabeth Warren, exactly?  Depending on which side you’re on, you think one of them is a hero and the other is a villain.  There isn’t a lot of room in between.

—–

A few years ago, I wrote a long post at ShoutingLoudly called “In Praise of Petitions (Sort of).”  My point was that petitions are an excellent initial “low bar” action. They lay the groundwork for later, “higher bar” actions.  We have to view petitions through the lens of a broader campaign.  My lingering concern with Change.org is that they are treating the petition as the sole tactic in a campaign.  (Citizen starts petition –> citizens sign petition –> media takes note –> decision-maker gives in to the pressure.)  That’s like painting in only one color.  Even if it’s a bold color, it’s still monotonous.

Among Change.org’s proclaimed victories is last week’s petition to “Help me fight cancer and stop the shutdown.”  It is true that 150,000 people signed that petition.  It is true that the shutdown ended.  But I sure hope no one believes that the former caused the latter.

Decision Makers is an innovation at Change.org.  But I’m not convinced quite yet that it’s an innovation that really improves our democracy or empowers citizens.


*So, y’know, the rollout of this new feature and accompanying website overhaul was just EPIC timing.  Thanks a lot, folks! [/snark]


The Analytics Floor, part II

Brian Fung at the Washington Post has a sharp new piece about analytics and social media.  It’s the first reporting I’ve seen on Jim Pugh’s work at ShareProgress or Milan de Vries’s work at MoveOn. The key quote in the piece comes from de Vries:

“We’ve gotten really good over the last few years at how to broadcast through e-mail with one big megaphone,” he said. “But here, if we can harness the dynamics of the networks our members are a part of, we can broadcast with hundreds of megaphones at once.”

This is a theme I’m trying to develop for my next book.  I got into it a bit this summer in a piece titled “a web of persuasion or a web of mobilization?”  MoveOn’s first innovation came through using the Internet to mobilize its engaged issue public.  Their second innovation is coming through using the Internet to persuade people who aren’t already engaged.

There’s one comment in the article that I disagree with, though.  It brings up a concept I’ve talked about previously, the “Analytics Floor.”  Here’s the quote:

“Testing is critical, especially for smaller clients,” said Serenety Hanley, a former Republican National Committee technology director who now runs a boutique social media consulting firm. “The smaller the client, the more vital it is to maximize their dollars.”

This seems… completely backwards to me.  The trick with A/B testing is that it becomes more valuable the larger your organization is.  If you have an e-mail list/Twitter follower count/Facebook fan list of 1000 people, then you will almost never be able to reach statistically significant conclusions based on A/B tests.  The effect size (the difference between options A and B) would have to be HUGE in order for you to be confident that it wasn’t just random variance.  But if you have an e-mail list of 10,000,000, then you can follow the Obama campaign and run twelve-way simultaneous A/B tests, netting extra millions in donations along the way.

That IS the analytics floor — the threshold below which organizations cannot reap the benefits of computational management.
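To make the analytics floor concrete, here's a rough back-of-the-envelope sketch. The numbers (a 5% baseline action rate, 80% power, 5% significance) are my own illustrative assumptions, not figures from the article, but the standard minimum-detectable-effect formula for a two-proportion test shows how dramatically list size changes what a test can see:

```python
# Sketch: why A/B testing rewards scale. All rates and thresholds here are
# hypothetical illustrations, not numbers from any real campaign.
import math

Z_ALPHA = 1.96    # critical value for a two-sided test at 5% significance
Z_BETA = 0.8416   # critical value for 80% statistical power

def min_detectable_effect(n_per_arm, baseline_rate):
    """Smallest absolute lift a two-arm test of this size can reliably detect."""
    se = math.sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_arm)
    return (Z_ALPHA + Z_BETA) * se

# A 1,000-person e-mail list split into two arms of 500, 5% baseline rate:
small = min_detectable_effect(500, 0.05)
# A 10,000,000-person list split into two arms of 5,000,000:
large = min_detectable_effect(5_000_000, 0.05)

print(f"small list: {small:.4f}")   # ~0.039 -> only a near-doubling of the
                                    #   5% baseline would register as real
print(f"large list: {large:.6f}")   # ~0.0004 -> tiny lifts are measurable
```

On the small list, only an effect of roughly four percentage points on a five-point baseline clears the noise; on the large list, a fraction of a tenth of a point does. That gap is the floor.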

Jim Pugh at ShareProgress has found one work-around for this problem.  As I understand it, by working with lots of clients — small and large — ShareProgress can test individual website components (thank-you pages, landing pages, etc.) across an extended user base, develop best practices, and then spread those best practices across several organizations.  It mimics the scale of a MoveOn or OFA.

The ShareProgress approach works well for the social components of web design, but it doesn’t allow for day-to-day passive democratic feedback through analytics.  For large-scale computational listening, you really need to start with a large member list.

This might be a small critique, but I think it’s an important one.  Fung’s article is noteworthy because it’s the first to really grapple with this stuff.  Social media analytics is a growing field, and we’re going to see some interesting innovations over the next few years.  But those innovations are probably going to come from the large orgs, or from third-party infrastructure providers.

Analytics benefits from scale.  The more we rely on analytics, the more we advantage already large-scale organizations.


(h/t Bob Boynton, who pointed the article out to me)


My Twitter Spambot and Me

There are two @Davekarpf’s on Twitter.  The first one is me, @davekarpf.  The second is a spambot, named @davekarpf_.  The spambot seems pretty benign.  It has taken my name and my avatar photo, but otherwise neither impersonates me nor spreads noxious links through the web.

[Screenshot: spambot bio]

I learned about my spambot last month, while I was in London for the International Communication Association Annual Meeting.  I had just finished reading Finn Brunton’s excellent book, Spam: A Shadow History of the Internet, so I found the experience particularly intriguing.  Someone mentioned “hey Dave, did you know that you have a fake account?”  I tried contacting Twitter to have the account shut down, but couldn’t jump through all of the required hoops while I was out of the country.  I tried again two weeks ago, but no luck.

The interesting thing about this spam account is that it does so little actual spamming.  It has 34 tweets, 6 followers, and follows only 90 people.  Only one tweet includes a shortlink, and that one is a retweet.  The poster sounds like a high school or college kid, and isn’t going out of their way to either impersonate me or damage my reputation.  They’ve simply appropriated my likeness.

[Screenshot: spambot profile]

What’s more interesting about the spambot is that its 6 followers are ALSO probably spambots.  Four of those followers are @Bradleywi_, @Joshuadav_, @BETV_Rockitweb_, and @JustinJMarcus_.  Notice the underscores at the end of each name.  Each of these accounts appropriates a real person’s avatar, name, and account details, then adds an underscore at the end.  Each includes similar, benign tweets.

So what’s the big deal?  What’s going on here?

I’ve actually written about this phenomenon before, in my 2012 article, “Social Science Research Methods in Internet Time“:

When financial value or public attention is determined by an online metric, an incentive is created for two industries of code-writers: spammers/distorters, who falsely inflate the measure, and analytics professionals, who algorithmically separate out the spam/noise to provide a proprietary value-added. …Any metric of digital influence that becomes financially valuable, or is used to determine newsworthiness, will become increasingly unreliable over time.

Twitter has a well-known spambot problem.  Analytics professionals have gotten good at identifying the obvious spambots.  Gibberish names, zero tweets, no picture, following-thousands-with-zero-followers… All of these serve as flags for spam-detecting code-writers.  So the spammers have to get more sophisticated.  They appropriate profiles, seed them with harmless tweets, and keep the follow counts manageable.  That can all be accomplished through a pretty simple script.  Then, voila, you’ve got yourself a botnet, which you can use to goose metrics like Klout rankings, follower counts, and trending topics.  Tweetspam is evolving.
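The first-pass screen described above can be sketched in a few lines. The field names and thresholds here are my own illustrative guesses at what such a heuristic might look like, not Twitter's or any analytics vendor's actual rules:

```python
# Hypothetical sketch of the crude spam tells described above.
# Field names and thresholds are illustrative assumptions only.
def looks_like_obvious_spam(account):
    """Flag accounts matching the first-generation heuristics."""
    flags = 0
    if account.get("tweets", 0) == 0:
        flags += 1                       # never tweets
    if not account.get("has_photo", False):
        flags += 1                       # default avatar, no picture
    if account.get("following", 0) > 1000 and account.get("followers", 0) == 0:
        flags += 1                       # follows thousands with zero followers
    if not any(c.isalpha() for c in account.get("name", "")):
        flags += 1                       # gibberish, non-alphabetic name
    return flags >= 2

# The evolved bot described in this post slips past every one of these checks:
evolved_bot = {"name": "davekarpf_", "tweets": 34,
               "has_photo": True, "following": 90, "followers": 6}
print(looks_like_obvious_spam(evolved_bot))  # False -- it passes the crude screen
```

A real name, a real photo, 34 harmless tweets, and a modest follow count: the account trips none of the flags, which is exactly why this generation of bots is harder to filter.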

My spambot doesn’t appear to mean me any harm, so I won’t try all that hard to get it deleted.  I’ll devote another half hour of effort next week.  But if Twitter Central makes it too difficult, then I’ll have little reason to bother.  The bot is aimed at the broader Twitter ecology, not at me personally.

…And that, ladies and gentlemen, is how tweetspam got a little bit trickier.