A Tale of Two Analytics Programs: It Really Matters What You Optimize For

Steve Olson wrote a real barnburner at Medium last week, “DCCC, I’m pleading with you.” [h/t Personal Democracy Forum FirstPost]

He wasn’t the only one.  Jenny Lawson wrote a nice post about the over-the-top fundraising language, “Nancy Pelosi Is extremely disappointed in me for destroying the Democratic Party.” And, of course, there’s also the ubiquitous Emails from the DCCC Tumblr site.  For anyone who spends professional time reading or writing fundraising emails, the DCCC is an unavoidable topic of conversation.

Part of the reason, as Olson notes, is that the DCCC emails are as effective as they are annoying.  According to Shane Goldmacher at the National Journal, the DCCC is outraising its Republican counterpart by $41,000,000 this cycle. “Hey, it works, whatareyougonnado?” is a pretty effective conversation-stopper.

But Olson points out that the fundraising haul is based on some dirty, unethical, that-can’t-really-be-legal-ugh-why-is-it-still-legal techniques.  Like (probably) lying about small donations being matched.  Like making it near-impossible to unsubscribe.  Like auto-selecting the “make it monthly” checkbox, so that unsuspecting donors accidentally give a lot more than they intended to.*

And the broader lesson here is about analytics and testing. It really matters what you optimize for.  The DCCC email program is run with a single goal in mind: generate as much money as possible. Period.  That’s a reasonable goal.  But it can lead you into perverse habits.  Habits that turn your strongest supporters and allies into vocal critics.  Habits that degrade your image while you open that (digital) bank vault time and time again.

What would happen if the DCCC optimized for two goals?  What if they were trying (1) to raise a ton of money and (2) to improve the standing of the Democratic Party brand amongst supporters?

They would have to measure more results.  They would need to develop more sophisticated listening tools, which could measure how email recipients view the party organization, and monitor changes over time.  They would have to run more complicated email tests, but the DCCC has a huge list and talented staffers. They could pull it off.  Their emails would start looking different.  Over time, the Democratic Party could potentially become more likable.
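
To make that concrete, here is a minimal sketch of what dual-goal evaluation could look like in code.  Everything in it is hypothetical — the variants, the numbers, and the field names are mine for illustration, not DCCC data — but it shows how adding a second metric can change which email “wins” a test:

    # Hypothetical email test results: each variant scored on two goals.
    # All figures are invented for illustration.
    variants = {
        "ALL-CAPS doom email (matched gift!)": {"dollars_per_1k_sends": 310.0, "brand_favorability": 0.41},
        "Plain-spoken update plus ask":        {"dollars_per_1k_sends": 240.0, "brand_favorability": 0.63},
    }

    def score(v, weight_on_brand=0.5):
        # Blend the two goals into one number; weight_on_brand=0 reproduces
        # the "maximize money, period" approach.
        max_dollars = max(x["dollars_per_1k_sends"] for x in variants.values())
        max_brand = max(x["brand_favorability"] for x in variants.values())
        money_term = v["dollars_per_1k_sends"] / max_dollars
        brand_term = v["brand_favorability"] / max_brand
        return (1 - weight_on_brand) * money_term + weight_on_brand * brand_term

    for name, v in variants.items():
        print(name, "| money only:", round(score(v, 0), 2), "| blended:", round(score(v, 0.5), 2))

Score the doom email on money alone and it wins; give the brand metric real weight and the ranking can flip.  That flip is the whole argument.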

If all of this sounds like a pipe dream, take a look at this new slide deck from SumOfUs.org.  SumOfUs announced a new metric last week: MeRA (Members Returning for Action).  Rather than focusing on the easy measurables like list growth, petition signatures, or donation totals, SumOfUs is going to track success internally based on “the number of unique members who have taken an action other than their first one.”
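
Part of what I like about MeRA is how simple it is to compute.  Here’s a back-of-the-envelope sketch, assuming nothing more than a log of (member, date) action records — the schema and the data are mine, not SumOfUs’s:

    # MeRA sketch: count unique members with at least one action beyond their first.
    # The action-log format here is an assumption, not SumOfUs's actual schema.
    from collections import Counter

    action_log = [
        ("member_01", "2014-06-01"), ("member_01", "2014-07-15"),
        ("member_02", "2014-06-03"),
        ("member_03", "2014-06-05"), ("member_03", "2014-08-02"), ("member_03", "2014-09-11"),
    ]

    actions_per_member = Counter(member for member, _ in action_log)
    mera = sum(1 for count in actions_per_member.values() if count >= 2)

    print("Unique members:", len(actions_per_member))       # 3
    print("MeRA (returned for another action):", mera)      # 2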

Why is SumOfUs rejecting donations, signatures, and list size as its main metrics?  Because list size =/= movement power.  And money raised =/= movement power.  And if you perform digital optimization on those easy measurables, you encourage and reward bad habits.

Upworthy.com made a similar metrics adjustment in February, switching from pageviews and unique visitors to “attention minutes.”

Ever since the 2008 Obama campaign, digital politics professionals have been talking about the value of analytics and the “culture of testing.”  Rigorous testing programs can help you optimize tactics, compare the impact of competing programs, experiment with new strategies, and unearth member/supporter preferences.

But if many of the largest political organizations have learned lesson 1 (“you should test things”), they still haven’t begun to grapple with lesson 2: You should think hard about how your metrics match your goals.  It really matters what you optimize for.

—–

*Holy hell, that last one is just inexcusable.  I’m heading to Home Depot later this week, ready to buy pitchforks and torches in bulk.

Bill Simmons and ESPN’s Ombudsman: Is Goodell enough of a “certified liar”?

In his column on Bill Simmons’ suspension, ESPN Ombudsman Robert Lipsyte comes off as blissfully unaware of how ESPN’s action looks — parroting and even sanitizing the company line.

For those who missed the details, Judd Legum nicely sums up the silliness of the suspension: “ESPN Suspends Bill Simmons For Calling [NFL Commissioner] Roger Goodell A Liar, After ESPN Reported Roger Goodell Is A Liar.”

What really happened is the network suspended him primarily for taunting and thereby implicitly criticizing his superiors, but more on that in a bit.

As for whether Simmons should be allowed to call Goodell a liar, Lipsyte insists that, until there’s “a smoking gun that proves when the NFL viewed the Ray Rice video” (emphasis added), Simmons is off base. Until and unless such a smoking gun emerges, Roger Goodell is not a “certified liar”, Lipsyte argues.

Contrast this with what Simmons actually said on his podcast: “Goodell, if he [says he] didn’t know what was on that tape, he’s a liar.” (Emphasis added.)

There is a major difference between seeing a video and knowing what is on the video, and conflating the two is exceptionally sloppy for an award-winning journalist.

To help illustrate: Thanks to several young children, I know a great deal about “Frozen”, despite not having seen the film.

If I watch Frozen this weekend and say, “Wow, I had no idea it would have so much singing!”, I would be a liar. If I were to claim that I had desperately wanted to see the film earlier, but before that point, I had had no way to see the film — you know, as opposed to deliberately having avoided some pretty clear opportunities — I would be a liar. Just like Roger Goodell is a liar. A lying liar who lies.

(Also, I dare Roger Goodell to sue me for libel.)

Simmons’ actual claim — that Goodell knew what was on the video and is lying when he says otherwise — was already well documented in the fine investigative piece by Don Van Natta Jr. and Kevin Van Valkenburg published on Sep. 19 — that is, days before Simmons’ Sep. 23 podcast for which he was suspended.

Goodell fibbing about whether he knew what was on the tape is only part of what Van Natta and Van Valkenburg identify as “a pattern of misinformation and misdirection employed by the Ravens and the NFL since that February night.”

Of course, to accuse someone of a “pattern of misdirection and misinformation” is to call them a liar, albeit using five-dollar words.

In a now-infamous CBS interview, Goodell says explicitly that he had no idea what was on the video. Not only has ESPN reported that several insiders say otherwise, but, as Simmons himself pointed out in a Sep. 11 column, “back in July, two well-connected reporters (Chris Mortensen and Peter King) reported what NFL sources had told them happened in that second elevator video … and they got the details correct.”

Follow those Mortensen and King links (reproduced from Simmons’ column). For those of you who couldn’t stand to watch the video but wanted to know what was on it, Mortensen’s account is startlingly accurate. Again, this is from July and based on his insider access to league sources.

What Peter King wrote should, in hindsight, be viewed as an even bigger deal than what Simmons implies:

There is one other thing I did not write or refer to, and that is the other videotape the NFL and some Ravens officials have seen, from the security camera inside the elevator at the time of the physical altercation between Rice and his fiancée. I have heard reports of what is on the video… (emphasis added)

King walked back this claim on Sep. 8, after the video was leaked and the league denied that anyone had seen it earlier:

Earlier this summer a source I trusted told me he assumed the NFL had seen the damaging video… The source said league officials had to have seen it. This source has been impeccable, and I believed the information. So I wrote that the league had seen the tape. I should have called the NFL for a comment, a lapse in reporting on my part. The league says it has not seen the tape, and I cannot refute that with certainty. No one from the league has ever knocked down my report to me, and so I was surprised to see the claim today that league officials have not seen the tape.

Again, he wrote in July that the league and team had seen the inside-the-elevator tape. Then, over a month elapsed without anybody pulling him aside and correcting him.

To understand how significant this is, you have to know Peter King’s place in the NFL universe: one of the least critical, best-connected reporters whose rolodex of sources is a close approximation of “everyone”. King regularly takes calls from, and casually calls, league sources all year. He’s widely known as a friendly mouthpiece. (This is mostly true of Mortensen as well.)

If Peter King says something that the league doesn’t think is accurate, or even something they would like to add to or clarify, to any degree, King is essentially guaranteed to receive — and take, and respond to — a call from an insider.

The last sentence from King’s Sep. 8 correction is as close to damnation as we are likely to see from him on this point. It rightfully implies that, especially coming from him, “No one from the league has ever knocked down my report to me” pretty much speaks for itself.

Thus, Roger Goodell is a liar, on this and many other counts. Simmons says as much. Then, alluding to his past troubles with ESPN, he dares them to discipline him, and they take the bait.

Little wonder the network is being excoriated all around the web. Deadspin points out that Simmons was merely “restating conventional wisdom.”

Business Insider fairly characterizes it as a hint “at the idea of corruption and censorship” at the network.

As if on cue for their entry as the protagonist in a Greek tragedy, management has enacted a suspension that proves Simmons’ implicit point splendidly. They’ve provided pretty good evidence that certain people (management) cannot be criticized, and that others (NFL leadership) should generally be criticized only in the most high-brow language — five-dollar words only, please, and only when the evidence is incredibly overwhelming.

The suspension is feeding already-extant skepticism about the network’s ability to consistently (as opposed to intermittently) allow their talent to reach their own conclusions and share these publicly.

It is reminding many fans and writers of the network’s 2013 decision to pull out of its partnership in the “Frontline” documentary about concussions in football. Right now, Google News shows 788 results for [Simmons suspended Frontline documentary].

The message to Simmons was, undoubtedly, “You can’t criticize us publicly like this.” That is chilling enough. A substantial portion of the population, though, hears (at least in part), “You can’t criticize our content partner like this.” Even if that’s not the real motivation, the optics are (to quote Charles Barkley) just turrible.

This is where an Ombudsman is supposed to provide an outsider’s corrective — a reassurance to the reader that well-founded outside criticism will always have at least one ally in the building.

The more defensible (and, in reality, motivating) reason Simmons was suspended was for dissing management. While Lipsyte alludes to this (implying that the suspension is also due to management’s “thin skin”), he opens and closes by insisting that this story is really about whether Simmons had the goods for his claim — and he concludes that Simmons didn’t have the goods.

That takes real chutzpah from somebody who substantially misrepresents the claim in question.

Even as the hordes crash at the gates in Bristol, the Ombudsman — the Ombudsman — writes to reassure us that management basically got this one right, without even deigning to rebut claims that this sure looks like a result of the network’s conflict of interest. “Obviously I disagree” with such critics is all we get. When the very integrity of the network is being questioned, blowing off those questions is tone deaf indeed.

Goodell is a liar. Simmons was correct in calling him a liar. And ESPN was some combination of corrupt and petulant to discipline him for it.

If even the Ombudsman is this tone deaf, ESPN still has a lot of tuning up to do.

#FreeSimmons

Some Birthday Wishes for the White House Petition Site

The White House petition site, WeThePeople, turned three years old yesterday.


In a post at the White House blog, Ezra Mechaber shares the good news, both in text and infographic form.

I’ve played the cranky academic critic role for WeThePeople before. And the core of my criticism hasn’t changed.  But… I also like birthdays.  And Mechaber’s post highlights a few things that are worthy of comment.  So with that said, here are a couple of birthday compliments and one birthday critique for the folks running WeThePeople:

1. I’m really excited about the Write API. Mechaber writes “Beginning in October, third-party websites can submit signatures to We the People on behalf of their own signers, using our soon-to-be-released Write API (which is currently in beta). It’s the result of months of hard work, and we can’t wait to share it with the public.”

This looks like something genuinely new and different.  One of the structural weaknesses of WeThePeople is that it doesn’t let petition-creators capture signup data and engage supporters in further actions.  That creates a stumbling block.  The government is both the venue for and target of these petitions, and limiting the ability of creators to build further connections with signers can short-circuit long-term efforts at political change. Write API could be a very powerful work-around.  If it works right, it could be a bit like the ActBlue fundraising widget.  Organizations can gather signatures, capture momentum, and then digitally deliver them to the government.  The government gets citizen input without being on the hook for enabling follow-up citizen mobilization.

The big question will be whether Write API actually gets used.  And it’s impossible to tell right now.  I could imagine organized issue publics seizing the opportunity; I could imagine them yawning at the opportunity.  But it’s definitely a worthwhile idea, and I’ll be watching with hope and interest.

2. The in-person summit is a lovely touch.  Mechaber writes “To celebrate We the People’s third birthday, the White House will host the first-ever social meetup for We the People users and petition creators right here at 1600 Pennsylvania Avenue. It will be an exciting chance for users to meet with policy experts and connect with each other in person.”

I think the future of distributed petition campaigns lies in a move towards distributed organizing.  Petitions are a nice, simple, flexible tool.  But they’re one-dimensional if you don’t build something out of them.  The first step to deepening member/supporter engagement is building new pathways for listening to them.  And in-person listening, rewarding the most active participants, is an important step.

I’d be thrilled to see MoveOn.org or Change.org or Avaaz host a meetup where they connect in-person with some of their frequent participants and petition-creators.  I would see it as a step towards building a deeper civic infrastructure.

Of course, I would then hope that one of those groups would treat these members as active stakeholders.  The relationship between the White House and its petitioners is fundamentally different from the relationship between MoveOn and its petitioners (again, because the White House is playing dual roles as target and venue), so this social meetup has less long-term potential.  But kudos for taking this step; I hope others choose to emulate it.

3. But now here’s the critique.  Egads, that user survey… Mechaber reports the results from a 2014 user survey.  He writes “…over the course of 2014, an average of response surveys showed a majority of signers thought it was ‘helpful to hear the Administration’s response,’ even if they didn’t agree. Nearly 80 percent said they would use We the People again.” (emphasis added)

80% sounds promising.  But some quick arithmetic makes it look abysmal.

15,559,272 people have created accounts at WeThePeople.  There have been 21,882,419 total signatures. That’s… an average of 1.4 signatures per person.  By the most generous possible estimate, that would be around 9 million people who signed only once, and around 6 million people who signed two times.*  At the very most, only 40 percent of users have actually used WeThePeople twice in its first three years.  And the actual percentage (which they can calculate, but have never made public) is probably dramatically lower than that.
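
(For the arithmetic-inclined, here’s that most-generous estimate worked out — it simply assumes nobody signed more than twice:)

    # Upper bound on repeat users: assume every account signed either once or twice.
    # accounts = once + twice, signatures = once + 2 * twice  ->  twice = signatures - accounts
    accounts = 15_559_272
    signatures = 21_882_419

    twice = signatures - accounts      # ~6.3 million signed twice
    once = accounts - twice            # ~9.2 million signed only once

    print("Average signatures per account:", round(signatures / accounts, 2))                   # ~1.41
    print("Share who ever returned (upper bound):", round(twice / accounts * 100, 1), "%")      # ~40.6%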

So here’s the friendly birthday critique.  80% of the users who took your survey have indicated that they would use WeThePeople again**.  Let’s call that the potential participatory energy in the system.  Let’s call the actual percentage of users who returned a second time the kinetic participatory energy in the system.  …It’s currently somewhere between 1% and 40%.

Next September, when WeThePeople celebrates its fourth birthday, I hope the kinetic participatory energy has moved closer to the potential participatory energy.

That would make it a very good year indeed.

—-

*And if we have a power law or other fat-tailed distribution of signatures (which we almost certainly do), then it’s more likely to be 12 million single signatures to 3 million multiple signatures, or 15 million to 1 million.

**Survey bias issue: Depending on response rate, this represents a much smaller portion of the user base.  The people who would never use WeThePeople again are more likely to delete the survey invite than the people who love the thing.


The Blurring Boundaries of “the Blogosphere” (Or, Research in Internet Time, Exhibit #8571)

I used to study the political blogosphere.  My first three research papers were on the blogosphere.  First I put together a ranked tracking system for comparing elite political blogs.  Then I designed a typology of “blogspace” that separated individual blogs from community blogs, and institutionally-based blogs from personal blogs.  Then I researched the role of community blogs like DailyKos in turning Republican political gaffes into substantial political mobilization.

Then I became convinced that there isn’t any such thing as the blogosphere anymore.  Blogging is just a format for typing things and putting them online.  In the early days of blogging (1999-2006ish), the subset of Internet-writers that used this format was small and relatively well networked.  It made sense to talk about “the blogosphere,” because there were identifiable clusters of people using this digital tool, and they had distinct goals, priorities, and values.

But as blogging proved useful, it was adopted by more people, and adapted to a wider set of aims.  Talking about “bloggers versus journalists” stopped making much sense once the New York Times and Washington Post started hosting blogs on their sites.  Talking Points Memo used to be the blog of just-some-guy named Joshua Micah Marshall.  Then he developed a business model and started hiring journalists.  Then his site won the Polk Award for investigative journalism.

And then, of course, we started getting alternate digital formats that better supported some of the purposes that blogs used to be aimed at.  Atrios (Duncan Black) and Instapundit (Glenn Reynolds) were two early influential bloggers who both stylistically chose to write 20 or so brief posts per day.  Their posts were usually a sentence or two, with a link to something interesting.  Today, most bloggers write longer posts.  A couple sentences plus a link has become a tweet.

Andrew Chadwick calls this rapid dissolution of media genres “hybridity.”  One of the major points he makes in The Hybrid Media System is that our newer, hybrid media system encourages nimble organizations that experiment with a wide assortment of tools and technologies.

The latest reminder of this trend comes from DailyKos.  I’ve been thinking a lot recently about Markos Moulitsas’s post from earlier this month, on traffic surges at the site.  Here’s a key point:

Email action list. We’re no longer just a website, or a mobile site. Our email action list has grown so large, it’s now one of the largest in the (non-campaign) progressive movement. As of the end of August, the list is 1.6 million strong, which means it has literally doubled in size every year for the last four years. That list gives us the ability to create massive pressure when necessary. For example, check out this report from the Sunlight Foundation on the 800,000 public comments the FCC received on its Net Neutrality plan. Of those comments that Sunlight could directly source to their sponsorship organization, fully 10 percent of them came from Daily Kos, making us the fourth largest source of pro-Net-Neutrality energy (behind CREDO, Battle for the Net, and EFF).

DailyKos.com has 1.6 million members on its email list.  Those members receive daily updates on breaking stories and popular diaries at DailyKos.  They also receive calls-to-action, urging them to participate in online activism.  I’ve heard that DailyKos is building a field program as well, with a goal of supporting offline organizing.

There’s still blogging at DailyKos.  There will always be blogging at DailyKos.  And there’s still a community of diarists who use DailyKos to publish thoughts, opinions, comments, and reportage.  But it no longer makes sense to talk about DailyKos as a part of “the blogosphere.”  The blogosphere is a concept from ten years ago that seems to have already gone past its expiration date.  DailyKos has succeeded because it has morphed from a community blog into a more complex digitally-mediated political organization.

Just when we researchers get comfortable talking about a digital phenomenon, the phenomenon itself morphs and changes into something new.

The Win-Loss Gap in Civic and Partisan Technology

What is Civic Technology?

I’ve been reading a lot of smart pieces about civic tech recently.  Two weeks ago, Mike Connery wrote a piece titled “Better Listening through Technology,” which built on Anthea Watson Strong’s article/Personal Democracy Forum talk, “The Three Levers of Civic Engagement,” and also drew from the Knight Foundation’s interactive report, “What Does the Civic Tech Landscape Look Like?”  Last week, Micah Sifry added a piece titled “Civic Tech and Engagement: In Search of a Common Language,” which built off of a Google Hangout-based panel on “Designing for Online Civic Engagement.”  This is all really interesting stuff.  It seems like there’s an important conversation brewing here.

Micah points out that one problem weighing down the conversation is that we don’t have a shared, clear language for describing civic technology.  What are the boundaries?  What are the shared goals?  Connery describes civic tech as “the intersection of technology and government/politics.”  Sifry describes it as “any tool or process that people as individuals or groups may use to affect the public arena, be it to gain power, influence power, disrupt power or change the processes by which power is used.”

That’s a little too broad for me.  I think it glosses over an important distinction:

Civic technology presumes a positive-sum game.  But many areas of politics are zero-sum games.

Let’s take SeeClickFix.com as an example.  SeeClickFix is an app that lets people report problems in their neighborhood.  It uses the logic of crowdsourcing  to improve the lines of communication between everyday citizens and government officials.  SeeClickFix lets people report potholes and busted streetlamps without spending an hour on hold, waiting to talk with an overworked, overstressed, underpaid, and underappreciated government bureaucrat.  You can watch Ben Berkowitz’s keynote talk about SeeClickFix below:

It’s easy to get excited about civic tech like this, because SeeClickFix is good for everyone involved.  To use some basic game theory, it is what’s known as a positive-sum game.  The more people who use the app, the more rewarding SeeClickFix becomes for everyone involved.  It’s very difficult to come up with a list of people who lose as a result of SeeClickFix usage.  Most civic technologies follow this same positive-sum logic.

But politics is often a zero-sum game.  Elections are the most obvious case: you have two candidates from opposing parties fighting for one Senate seat.  One candidate will win, the other candidate will lose.  That’s zero-sum.  Every additional gain for you is a loss for me.
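
If the game-theory shorthand is unfamiliar, here’s the distinction in miniature — toy payoffs I made up, nothing more:

    # Toy contrast between the two payoff structures.  All numbers are arbitrary.

    def pothole_app_payoffs(users):
        # Positive-sum: everyone's payoff grows as more neighbors report problems.
        return {"each_citizen": 1 + 0.01 * users, "city_hall": 0.5 + 0.02 * users}

    def election_payoffs(votes_for_me, votes_for_you):
        # Zero-sum: one seat, so my win is exactly your loss.
        return {"me": 1, "you": -1} if votes_for_me > votes_for_you else {"me": -1, "you": 1}

    print(pothole_app_payoffs(1_000))            # everyone does better as adoption grows
    print(election_payoffs(50_001, 49_999))      # someone has to lose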

Zero-sum games foster more competitive dynamics than positive-sum games.  If I’m working on a campaign that has a great database, it would be really nice if my opponent was stuck using shoeboxes full of index cards.

Theoretically, both sides in an election should also be rooting for the (positive-sum) outcome of a healthier democracy.  Wins will be more legitimate if there is high voter knowledge and high voter turnout.  You won’t find a lot of people out there arguing that distracted, disengaged voters are good for America.

But where theory meets practice, we also know that the lofty goals of a healthy citizenry are a distant second to the immediate goal of winning.  Hence the annual GOP proposals to make voter registration harder, and the drive to limit online voting, and the attempts to reduce early voting.  When Republicans try to combat the (nonexistent) threat of voter fraud, they’re acting strategically within the confines of a zero-sum scenario.  Republicans are more habitual voters than Democrats.  Throw up barriers to likely-Democrats voting, and you increase your chance of winning.

Why Does This Matter?

Take a look at the Knight Foundation’s breakdown of Civic Tech Growth Trends by Cluster (h/t Mike Connery. …Really, go read his Medium piece).  Voting is one of the areas with the slowest growth.

[Knight Foundation chart: Civic Tech Growth Trends by Cluster]

That probably shouldn’t surprise any of us, because the dynamics of voting technology are so different than the dynamics of peer-to-peer sharing or online community-building.  Most areas of civic tech are positive-sum, and foster cooperation.  Voting is zero-sum, and fosters harsh competition.*

Likewise, companies such as NationBuilder and Change.org have faced intense criticism and threats of boycotts for working across party lines.  Here’s Raven Brooks, describing his outrage over NationBuilder signing a contract with the Republican State Leadership Committee in 2012 (excerpted from Sarah Lai Stirland’s reporting):

“This is like Blue State Digital saying: ‘Here Mitt Romney, you can have Obama’s technology,’” Brooks said. “It’s an advantage for Democratic campaigns — we’ve had a technology advantage that we’ve built up over the years, and to just hand that off to the Republican party — it could be the difference-maker in some elections. If it allows even one of these candidates to win over someone else, then you’ve chosen a side there.”

Jim Gilliam’s counterargument was, in essence, that NationBuilder is civic technology. Everyone ought to have it, because improving campaigns will improve democracy!  Many progressives disagreed, and have taken their business elsewhere as a result.  Whether you side with Brooks or side with Gilliam, we can all probably agree that this debate wouldn’t happen over potholes.

The partisan dynamics of voting technology and campaign technology represent a distinct category within the broader civic tech space.  I’m calling it the win-loss gap, at least until someone comes up with a better name for it.

Most civic tech is meant for positive-sum social problems.  Most political tech is meant for zero-sum social problems.  And that fundamental difference results in distinctly different challenges for each space.

(I’ll write another post soon on what some of those distinct challenges seem to be.)

———–

*Basically.  You’ll also find competition in positive-sum games, particularly where multiple sites are seeking to benefit from the same network effects.  And you’ll find various pockets of collaboration in voting.  But I don’t want to go full-wonk in this blog post, so I’m speaking in generalities.

Lessons about Digital Government from the Cell Phone Unlocking Victory

Six weeks ago, I wrote a piece for TechPresident that labeled the White House’s We The People petition site a “virtual ghost town.”

Last week, Congress passed a cell phone unlocking bill.  President Obama signed it into law today.  That’s noteworthy, since this particular Congress never passes anything.  But it’s also noteworthy because the campaign to introduce this bill originated with a We The People e-petition.  If you’re looking for evidence that the White House e-petition site is a big deal, this legislation has become Exhibit A.

But if we pause and listen to the originators of the petition itself, evidence of the very limitations I described in the article becomes quickly apparent:

Here’s Derek Khanna, quoted in an article by Alex Howard (which is a really smart and well-reported piece. It’s worth reading the whole thing):

“…One reason why the unlocking petition was more successful than others was because it was only a tool in the toolkit. While it was ongoing, I was arguing our cause in the media, writing op-eds, meeting with Congress, giving speeches, and working with think-tanks. We basically saw the petition as energy to reinforce our message and channel our support, not the entire ballgame. Some petition campaigns fail because they assume that the petition is it: you get it to 100,000 signatures and you win or lose. Some fail because they don’t have a ground presence in Washington, DC, trying to influence the actual channels that Members of Congress and their staff follow.”

The hardest part, according to Khanna, was  keeping the momentum going after the e-petition succeeded and the White House responded, agreeing with the petitioners.

“We had no list-serve of our signatories, no organization, and no money,” he said. “It was extremely difficult. In fact, some of us were pushing for a more unified organization at the time. Others were more reluctant to go in that direction. A unified organization will be critical to future battles. Special interests were actively working against us and even derailed the original House bill after it passed Committee; having a unified organization would have helped move this process more quickly.” (emphasis added)

No listserv, no organization, no money.  Those are three critical ingredients that online petitions are usually supposed to help you develop.

And here’s Kyle Wiens and Sina Khanifar, writing at Wired.com:

We teamed up with smart people who cared about the all-too-fragile intersection between technology and freedom: the Electronic Frontier Foundation, Public Knowledge, and Derek Khanna.


Fueled by Reddit, Hacker News, and others, the Internet rallied around a common theme: If you bought it, you should own it. We got noticed. The White House issued a formal response calling on Congress to fix unlocking.

Their We The People petition took off because key elements of the Internet’s “attention backbone” helped amplify it.  That’s smart campaigning by Wiens, Khanifar, and Khanna.  But it also points us toward a major limitation: if this had been a non-tech issue, then the sites that drove all those signatures probably wouldn’t have taken part.

Launching the online petition at We The People created the conditions for a formal response from the White House.  That was a plus.  We The People provided no help in amplifying the petitions through email and social media.  That was neutral in this case, since Reddit, EFF, Public Knowledge, and others were helping to amplify instead.  But the site left the petition-creators with no residual list for follow-up actions.  That’s a huge minus.

If the petition had been launched through a different site (like Change.org), then it would have been less likely to get a formal White House response, but more likely to facilitate the follow-up actions that Khanna/Howard, Wiens and Khanifar say are vital to eventual success.

So maybe “ghost town” isn’t the right metaphor for We The People.  Instead, maybe we should think of We The People as Nevada’s Black Rock Desert.  It seems deserted 98% of the time. But once in a while, a well-organized community shows up and uses it to organize a massive event.  (…I suppose in this case, they burned a cell phone contract instead of a giant stick-figure-man.)

The cell phone unlocking bill is rare good news out of the U.S. Congress.  Congratulations are due to the organizers who petitioned, rallied, cajoled and lobbied to make it possible.

I’m not sure how big of a win it is for digital government writ large, though.  Wiens, Khanifar and Khanna effectively navigated the limitations of the petition site.  They didn’t disprove those limitations.

On the Ethics of A/B Testing

[I have a hunch that this will be the first in a series of posts...]

The ethics of A/B testing are back in the news this week.  First it was Facebook, fiddling with our emotions.  Now it’s OkCupid, meddling with love.

As with the Facebook study, the details of the specific OkCupid experiments are less of an issue than the sheer fact that they are being conducted in the first place.  I decided not to weigh in on the Facebook controversy last month (Tarleton Gillespie has a nice roundup if you’re interested), but one of the things that struck me at the time was that the study itself was oversold.*  Pretty much everyone participated in the overselling.  The authors wanted us to think they’d found something extraordinary (as every author always does).  Critics of all stripes wanted to agree, so they could focus on the broader implications of this extraordinary study.  And those critiques fell into roughly three camps:

#1. Facebook has too much power!  They shouldn’t be able to manipulate people like this without any check or oversight!

#2. People should become better informed!  Companies do this all the time, and no one realizes it.  If we want a better internet, we have to demand a better internet!

#3. Academia shouldn’t be involved! The Institutional Review Board (IRB) messed up or wasn’t properly consulted here!

Each of these three perspectives takes us toward a different ethical question.  Personally, I’d rank them #2>#1>#3 in terms of importance.

Regarding #2, one of the great things about the Facebook Study is that it spawned the Facebook Controversy.  Everyone vaguely knows that Facebook manipulates its algorithm for learning, fun, and profit.  Nearly everyone chooses to go about their merry way, blithely ignoring the implications of this manipulation.  Facebook partnered with academics, and that led to a public conversation about those implications.  Let’s file it all under “positive-but-unintended consequences.”  Ethically, if we want the public to be more aware, then we should also hope that these companies keep publishing their experimental findings.

Regarding #1… Yes, Facebook is crazy-powerful.  It is a quasi-monopoly and a quasi-utility.  It ought to be at least somewhat regulated as such.  But I have difficulty getting too incensed about this for two reasons.  First, the FCC currently isn’t even willing to treat Internet access as a public utility.  Comcast is the most blatant monopolistic empire of this century.  Let’s get around to regulating Facebook after we convince the government to follow basic common sense and expert consensus on regulating ISPs.  My pitchfork and torch are already spoken for.

And second, A/B testing (experimentally manipulating the user experience to track results) is indeed standard business practice for large websites and digital organizations.  Daniel Kreiss has written about this in the Obama campaign as “computational management.”  I’ve described it in civil society organizations as “passive democratic feedback.”  In his post about OkCupid’s recent experiments, titled “We Experiment on Human Beings!”, OkCupid president Christian Rudder writes:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

I think Rudder is basically right.  These experiments are a core source of data on what users want.  Organizations that don’t run tests aren’t listening to their users/customers/members.  Ethically, I think organizations should learn to listen better, and listen responsibly.  But I don’t think we should be angry that they’re listening at all.
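
For readers who have never run one, the mechanics are mundane.  Here’s a minimal sketch of what an A/B test amounts to — not any particular company’s system, just the bare idea of stable random assignment plus a comparison of outcomes:

    # Minimal A/B test sketch: stable bucketing, simulated outcomes, then compare.
    # Illustrative only; real systems add logging, guardrails, and significance tests.
    import hashlib
    import random

    def assign_variant(user_id, variants=("A", "B")):
        # Hash the user id so the same user always lands in the same bucket.
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    random.seed(42)
    outcomes = {"A": [], "B": []}
    for user_id in range(10_000):
        variant = assign_variant(user_id)
        true_rate = 0.10 if variant == "A" else 0.12   # pretend B nudges behavior slightly
        outcomes[variant].append(random.random() < true_rate)

    for variant, results in outcomes.items():
        print(variant, "users:", len(results), "response rate:", round(sum(results) / len(results), 3))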

Regarding #3, academia’s role in all this, I have decidedly mixed feelings.  IRBs were designed as a check on the power of researchers to unnecessarily harm subjects in the name of science.  IRBs are important because academic research has a specific type of power and authority.  But we probably have less power and authority than we’d like to think… particularly in the digital arena.

Facebook and Google and OkCupid hire research scientists.  They conduct experiments all the time.  If university-based academics keep this research at arm’s length, the research still happens.  It just doesn’t get presented at our conferences or published in the journals we happen to read.  And that leaves academic social scientists further adrift from the lived experience of actual human beings in modern society.

So yeah, there should probably be some improvements to the IRB process for online experiments of this type.  But I’m not sure if that’s the right ethical line-in-the-sand to draw.  It’s easy to argue over IRB reforms, because academics have the ability to directly affect the state of Institutional Review Boards.  We can decide what sorts of studies we’ll participate in, even though we can’t decide what sorts of studies will be performed beyond our cloistered halls.

Publicly-engaged/publicly-oriented scholarship is always messy.  As experimental research finds a welcome home outside of academia, it becomes even messier.  Christian Rudder tells us, “if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”  The more we acknowledge and engage with this reality, the better off we’ll be.


*The authors claim to have found evidence of “emotional contagion”: by modifying the Facebook newsfeed to contain slightly more positive or more negative postings, they were able to observe an impact on users’ posting habits.  Put plainly, if you see lots of positive posts on Facebook, you become marginally more likely to post something positive.  Seeing lots of negative posts makes you marginally more likely to post something negative.

That’s a legitimate finding, but it isn’t actually proof that Facebook affects our emotions.  It’s proof that Facebook affects how we express emotions within Facebook.  And that might just be evidence of commonplace social norms, applied to a social network site: If I’m having a bad day, and everyone on Facebook is sharing happy news, I’m a bit less likely to pipe up and spoil the mood.  That doesn’t necessarily mean I’ve adopted my peers’ emotions (as reported via Facebook algorithm).  It just means that I’m adopting similar phraseology to what I’m seeing around me.


The Red Queen’s Race, and Media Entropy in Campaigns

We live in the best of times for political persuasion: campaigns have more data than ever before.  They use that data to target, target, and refine.

We live in the worst of times for political persuasion: the old pipelines for reaching a persuadable audience — television, landline phones, and mail — are growing rusty from disuse.

This is the big takeaway from journalist Andrew Rice in his feature article, “How Far Can Political Technology Reach?”  It’s excellent writing, I strongly recommend it to you.

Here’s the key passage in Rice’s article:

THE INNOVATORS ARE always working around a central irony: The very advances that make it possible to know so much about voters also make them more difficult to reach. A DVR records your viewing habits, but it also allows you to fast-forward through the standard 30-second campaign spot. Spam filters are rising; network audience numbers are falling. It takes plenty of invention just to counteract the relentless force of media entropy.

I remember noticing this same trend when I was interviewing nonprofit direct mail professionals for The MoveOn Effect.  I heard two countervailing reports, often from the same individuals: (1) Prospect Direct Mail has become more efficient than ever.  With more data and better modeling, organizations could do a much better job of building an initial mailing list.  The days of blind list swaps are over; now we can fine-tune and microtarget.  (2) Prospect Direct Mail is in its death throes.  People under 55 don’t pay bills through the mail anymore.  Response rates are so low that mail will inevitably switch from profit center to resource drain.

We see this same Dickensian pattern in polling: we’re living through a veritable revolution in modeling and aggregation techniques, all while response rates dip into the single digits.

And, as Rice reports, we see it in television and online ads. He quotes NationBuilder founder Jim Gilliam:

“The stuff I am extremely skeptical of is this idea that we can turn data into ad campaigns and magically turn people into voters,” says Jim Gilliam. “That’s not real.”

Now, all of this isn’t to say that technology in campaigns doesn’t matter.  It matters a great deal!  It matters for how campaigns are run.  It matters for who gets rich off of them.  It matters for how they engage (or don’t engage) citizens.*

And it also matters for who gets elected.  The sum total of all the testing, targeting, and refinement may only be a couple percentage points at the polls, but in the deeply polarized country we live in, those couple percentage points decide the balance of power.

Still, the point of this blog post is that (a) Andrew Rice’s article is really good, you should read it, and (b) he captures this balancing act better than most.

Viewed in isolation, the rise of testing and microtargeting can seem all-powerful, even ominous.  Much of the journalism on the subject has a tendency towards alchemy or mysticism: “these new campaign pros have math!  All bow before them…”

Viewed as a whole, the evolving industry looks much more like the Red Queen’s Race. “It takes all the running you can do, to keep in the same place.”


*necessary plug: you should read, at a minimum, Rasmus Kleis Nielsen and Daniel Kreiss and Jennifer Stromer-Galley.  They all make excellent beach-reading, I promise.

I Won’t Attend Netroots Nation Next Year in Phoenix (and you shouldn’t either)

I’ve spent the past 48 hours stewing over Netroots Nation ’15.

The Netroots Nation convention will be in Phoenix next summer.  Markos Moulitsas has announced that DailyKos will not be participating in or supporting the convention.  So long as SB 1070 is still law in Arizona, so long as Latinos are routinely harassed and threatened by agents of the state, Moulitsas has pledged not to spend a dime in the state.  He writes:

As a Latino, I do not feel safe in Arizona, a state that continues to profile and harass Latinos because of the way they look. So I’m not going to go, nor am I going to put my family or my staff at risk.

This whole controversy calls to mind the 2012 American Political Science Association (APSA) annual meeting boycott.  The APSA convention was scheduled for New Orleans that year.  Louisiana had a “super-DOMA” statute on the books.  If an LGBT political scientist got sick while attending the convention, his/her partner would be denied hospital visitation rights.  Many APSA members felt that it was wrong for the association to hold our annual meeting in a state which puts members in this sort of jeopardy.  They organized through petitions and joint letters to the APSA leadership.  They pointed out that the association changed the location of the 2011 APSA meeting from San Francisco to Seattle because of labor disputes in San Francisco.  APSA wouldn’t cross a picket line (good!).  But it didn’t accord the same respect to the rights of LGBT members.  The APSA leadership ignored these protests, and the New Orleans meeting proceeded on schedule.*

I’m particularly reminded of a conversation I had with my former undergraduate mentor a few months before the APSA boycott.  He told me that he would be boycotting the annual meeting.  His longtime friends and colleagues in the discipline were boycotting as well.  But he also told me that he expected me to attend.  “You’re still pre-tenure and building your career,” he said, “this meeting is important for your job, you should be there. No one will think less of you for it.”

So I signed the petitions and the joint letters, but I also booked my reservations for the damn conference.  “Looks like I’ll take a stand next time,” I told myself.

Well, this sure seems like next time.

Here’s the argument in favor of selecting this conference location:

We are going there because that’s where our voices and presence are needed right now. We’re going there because that’s where organizing power is needed right now. We’re going there because that’s where we can have the greatest impact and affect the greatest change. We as a community need to go there because we need to join those on the ground who are fighting this fight everyday.

That sounds nice and all, but it rests on a misdiagnosis of what a national convention siting decision can accomplish.  National conventions don’t build lasting local activist infrastructure or organizing power.  If you go back to San Jose or Minneapolis or Providence, you won’t find concrete examples of progressive power building that emerged because the Netroots Nation convention was held there in years past.  That isn’t how it works. We fly in, we enrich the economy, we shine a brief spotlight, we fly out.  That’s all.

But national conventions are a real boon to the local Chamber of Commerce and elected officials.  Conventions are a concentrated form of economic power.  Cities compete for them.  You can use that power to reward your allies.  You can use it to demand concessions from your wavering targets.  You can use it to impose an opportunity cost on your enemies.

To the organizing committee’s credit, they are right that placing the conference in Phoenix will put immigration at the top of the Netroots’ radar.  (Or, to be more precise, it will signal that immigration is already at the top of the Netroots’ radar.)  And that’s a laudable choice. I can see how they came to believe that this would be bold and empowering.  Any Presidential candidates who choose to attend the event should be ready for some tough questions.  But Arizona isn’t the only border state.  They can accomplish those goals without putting attendees in this position.

Here’s one part of Markos Moulitsas’s argument against the siting decision:

 …look to labor: Netroots Nation refuses to hold events in cities without union hotel and conference facilities. They’re not “taking the fight” to non-unionized locations because we, as a movement, stand for the right of people to organize and we don’t reward those places that deny those rights. It’s the right call. Also, would the conference have been happy to stay in Arizona had Gov. Jan Brewer signed the virulently anti-gay SB 1062 earlier this year? Hard to see that happening.

Latinos deserve that same kind of respect.

Markos is right.  Latinos are being targeted in Arizona.  Flying 4,000 people in for a weekend of workshops, keynote speeches, and a rally or two doesn’t provide lasting help.  The local Chamber of Commerce and elected officials will happily endure our presence so long as we’re all staying in their hotel chains and buying their products.  On Sunday, the conventioneers fly home, leaving their money behind.

You know what would have an even bigger impact?  Publicly dropping Phoenix because it has racist laws on the books.  Make it clear that their anti-immigrant agenda costs the city tourism dollars.  That would “shine a light” too.  That would be grist for news stories and tough questions to public officials.  That would provide more tangible long-term help to the activists on the ground than a few mainstage speeches, breakout panels, and solidarity marches.

The bottom line is this: if you are an undocumented American, or if you look a bit like an undocumented American, then attending a conference in Phoenix involves putting yourself at risk.  The Netroots Nation organizing committee shouldn’t be assigning that risk on behalf of thousands of other people.

Among the ~4,000 expected attendees next year will be plenty of individuals who are required to attend by their jobs.  Netroots Nation 2007 (which was then still called YearlyKos) played host to a televised presidential primary debate.  It’s a safe bet that Netroots Nation 2015 will be angling for another one.  That’s an awful lot of early career campaign staff who will have to attend the convention whether they feel right about it or not.  Those of us who don’t have to attend have the responsibility to speak up now and object.

Netroots Nation isn’t APSA.  Netroots Nation cares about this fight for justice. That’s why they’ve selected Phoenix in an attempted show of solidarity.  But selecting Phoenix also requires every Latino attendee to accept a type of risk that every white attendee gets to avoid.  And it does so while providing much more of a boon to local officials than it does to local activists.  I understand that they reached this decision in good faith, but it’s still the wrong choice.

I hope the organizing committee rethinks this decision.  Otherwise, I can’t in good conscience attend.


———-

*Fun Fact: The APSA meeting was eventually canceled because a hurricane hit New Orleans during the week of the annual meeting.  Some might call that cosmic retribution.  I call it bad planning.  Simple rule, folks: don’t plan a big meeting in Louisiana during hurricane season.  Or in Rochester during the winter. …Or in Phoenix during frickin’ JULY!

The Deliverability Sinkhole

File this under “Things I Got a Little Bit Wrong In My Book”:

That’s a quote from Laura Packard, who really knows her stuff.  She’s highlighting a problem that rarely gets talked about: e-mail deliverability.

In The MoveOn Effect, I talk a lot about how the shift from direct mail to email has changed organizational membership practices.  The short, oversimplified version is this: Direct mail carries a marginal cost for every additional recipient, so it incentivizes smaller lists with high response rates.  Email carries virtually* no marginal cost for additional recipients — sending an email blast to 10,000 people costs the same amount as sending it to 10,001 people.  The lack of a marginal cost per recipient incentivizes larger lists with lower response rates.  Hence, we get a lot of multi-issue progressive generalists… like MoveOn, Progressive Change Campaign Committee, Democracy for America, Demand Progress, Credo Action, Leadnow (Canada), 38 Degrees (UK), GetUp (Australia), and Campact (Germany).

*Whenever I talk about the diminished marginal cost of increasing the size of your email membership, I include a verbal asterisk.  I say that the costs “approach zero” or “approximate zero.”  And when I do that, it’s because I’m tiptoeing around the deliverability sinkhole.

There is an artificial aggregate cost to adding low-performing email addresses.  ISPs are constantly monitoring mass email traffickers, looking to identify spam algorithmically.  The cost of being algorithmically treated as spam can range from being diverted to the “spam” folder in gmail to being automatically rejected and returned to sender.  Being auto-filtered as spam is a problem.  Being undelivered is a disaster.  And one of the biggest flags for deliverability trackers is aggregate open rate.  If 98% of your recipients are not opening your message, then ISPs are going to guess that you are spamming them.

The big problem for online political organizers is that deliverability issues require a distinct skillset and knowledge base.  Spammers and scammers have poisoned the well with increasingly sophisticated tricks meant to fool the filters and land messages in your inbox.  ISPs and whitehat engineers have increased their own sophistication in response.  It is probably too much to ask nonprofit civil society organizations to keep up with all this algorithmic sophistication while also making headway on their actual political/civic goals.

The practical result is that small issues like dead email addresses in a mailing list can compound into big deliverability problems.  If your email list is too broad, vague, and unresponsive, then you may get stuck in the deliverability penalty box.  It’s a sinkhole, forcing large organizations to pay for outside technical assistance.  And while this marginal cost isn’t nearly as large as the cost of direct mail printing and postage, it’s an important element that often goes ignored.
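
To put the whole argument in one toy model (my numbers, purely illustrative): the marginal cost of another address stays near zero, but dead addresses still sit in the denominator of your aggregate open rate, and that’s the number the filters watch.

    # Toy model of the deliverability sinkhole.  All numbers are illustrative,
    # including the "red line" -- ISPs don't publish their actual thresholds.
    def aggregate_open_rate(active, dead, open_rate_active=0.15):
        # Dead addresses open nothing, but they still count in the denominator.
        return (active * open_rate_active) / (active + dead)

    SPAM_FLAG_THRESHOLD = 0.02   # hypothetical

    snapshots = [
        ("lean list", 100_000, 10_000),
        ("bloated list", 100_000, 400_000),
        ("mostly dead list", 100_000, 900_000),
    ]

    for label, active, dead in snapshots:
        rate = aggregate_open_rate(active, dead)
        flag = "DELIVERABILITY RISK" if rate < SPAM_FLAG_THRESHOLD else "ok"
        print(f"{label}: aggregate open rate {rate:.1%} -> {flag}")

The savings from never pruning the list aren’t free; they come back as a deliverability bill.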

So consider this my way of coming clean.  When I talk about the marginal costs of online communication dropping toward zero, I’m consciously talking around sinkholes like deliverability.

The practical costs of online communication are always higher than the theoretical costs of online communication.