We lost Aaron Swartz a year ago today.
I’ve been thinking a lot recently about VictoryKit, Aaron’s final unfinished project. He told me just a little bit about it last year, when we were both at the OPEN Summit. The overlaps between his tech product and my emerging research puzzle (on analytics and activism) were uncanny, and the last conversation we had ended with a promise that we’d discuss it further soon.
As far as I can tell, VictoryKit is a growth engine for netroots advocacy groups. It automates A/B testing, and draws signal from a wider range of inputs (open-rates, click-rates, social shares, etc) than usual.
The thing is, as I’ve conducted my early book research and learned more about VictoryKit, I think I’ve identified a real problem in the design. I’m worried that VictoryKit automates too much. It puts too much faith in revealed supporter opinion, at least as it is constructed through online activity. And in the long term, that’s dangerous.
VictoryKit is designed to “send trickles, not blasts.” The idea is to be constantly testing, constantly learning.
I heard Jon Carson from OFA give a talk last summer where he remarked “if you get our email before 8AM, you’re in our testing pool.” OFA basically is the industry standard for email testing. They test their messaging in the morning, sending variant appeals out to random subsets of their list*. They refine their language a few hours later, based on the test results, then they can send a full-list blast in the afternoon. That’s one of the basic roles of A/B testing in computational management.
VictoryKit gets rid of the full-list blast. Instead, you keep feeding petitions into the magical unicorn box**, it judges which petition is most appealing, and it sends that petition to another incremental segment of the list. I haven’t looked into the exact math yet, but the basic logic is clear: analytics represent member opinion. Automate more decisions by trusting the analytics, and you’ll be both more representative and more successful.
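I haven’t seen VictoryKit’s source, so take this as a hedged sketch of the general logic rather than the actual implementation: a “multi-armed bandit” (here, Thompson sampling on click-rates) that sends competing petitions out in small trickles and shifts volume toward whichever one tests better. The petition names, batch sizes, and click-rates below are all invented for illustration.

```python
import random

# Hypothetical sketch of trickle-based testing: a Beta-Bernoulli
# "multi-armed bandit" over competing petitions. This is NOT
# VictoryKit's actual code -- just the general logic as I understand it.

class Petition:
    def __init__(self, name):
        self.name = name
        self.clicks = 0    # sends that got a click
        self.ignores = 0   # sends that got no click

    def sample_appeal(self):
        # Draw a plausible click-rate from the Beta posterior.
        return random.betavariate(self.clicks + 1, self.ignores + 1)

def send_trickle(petitions, batch_size, true_rates):
    # Pick the petition whose sampled click-rate is highest...
    best = max(petitions, key=lambda p: p.sample_appeal())
    # ...and send it to the next small slice of the list.
    for _ in range(batch_size):
        if random.random() < true_rates[best.name]:  # simulated member
            best.clicks += 1
        else:
            best.ignores += 1
    return best

random.seed(1)
petitions = [Petition("net-neutrality"), Petition("celebrity-gossip")]
true_rates = {"net-neutrality": 0.05, "celebrity-gossip": 0.12}

sends = {p.name: 0 for p in petitions}
for _ in range(200):  # 200 trickles instead of one blast
    chosen = send_trickle(petitions, batch_size=50, true_rates=true_rates)
    sends[chosen.name] += 50

print(sends)
```

Run long enough, nearly all of the list volume flows to whichever petition has the higher click-rate — which is exactly the automation-of-judgment that worries me.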
The problem here is that our revealed preferences are not the entirety of our preferences.
A.O. Hirschman wrote about this in “Against Parsimony.” Essentially, we have two types of preferences: revealed preferences and meta-preferences. Revealed preferences are what we do, what we buy, what we click. But Hirschman points out that we also have systematic preferences about what kinds of options we are presented with.
I always think of this as the Huffington Post’s “Sideboob” problem. Huffpo has a sideboob vertical because celebrity pics generate a lot of clicks. That’s a revealed preference: if Huffpo gives us a story about inequality and a story about Jennifer Lawrence at juuuuust the right camera angle, JLawr will be far more popular. So Huffpo provides a ton of sideboob and a medium amount of hard-nosed journalism.
If the Huffington Post gauged reader preferences through different inputs (by asking readers to take online surveys, for instance), then they’d get a different picture. More people click on celebrity pics than will say “yes, that’s what I want from the Huffington Post.”
There’s a narrow version of economic thought that rejects meta-preferences as unreal. If people say they want hard news but click on the celeb pics, then they must really want the celeb pics. But that’s unsupportable upon deeper reflection. People are complex entities. We can simultaneously watch junk TV and wish there were higher-quality programming. New gym memberships peak around New Year’s and late spring, as people who generally don’t reveal a preference for regular exercise act on their meta-preference for healthier living.
In online political advocacy, the signals from revealed preferences are even weaker. We click on the petitions that are salient, or engaging, or heart-rending. But we want our organizations to work on campaigns that are the most important and powerful. Some of those campaigns won’t be very “growthy.” But that doesn’t mean they’re unimportant.
Take a look, for instance, at question #6 in Avaaz’s 2013 member survey. Avaaz asked global members their opinion on a wide range of issues. It also asked them “how should Avaaz use this poll?” Only 5% thought their opinions should be binding on the organization. The other 95% felt the poll should serve as a minor input or a loose guide. When asked, Avaaz members announce a meta-preference that the staff reserve plenty of room for their own judgment.
The problem with analytics-based activism is that it can lead us to prioritize the most clickable issues instead of the most important ones. That’s what can happen if you equate revealed preferences, as evidenced by analytics signals, with the totality of member preferences.
There’s a simple solution to that problem: maintain a mix of other signals. Keep running member surveys. Make phone calls to your most active volunteers to hear how they think things are going. Hire and empower the right people, then trust their judgment. Treat analytics as one input, but don’t put your system on autopilot.
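To make that concrete, here’s a toy sketch of treating analytics as one weighted input alongside survey results and staff judgment, rather than the sole ranking signal. The campaigns, scores, and weights below are all mine, invented for illustration — not drawn from any real tool.

```python
# Toy sketch: rank campaigns by a blend of signals instead of
# click-rate alone. All numbers here are invented; each signal
# is normalized to a 0-1 scale.

def blended_score(campaign, weights):
    return sum(weights[signal] * campaign[signal] for signal in weights)

campaigns = [
    {"name": "celebrity petition", "click_rate": 0.9,
     "survey_support": 0.3, "staff_judgment": 0.2},
    {"name": "inequality campaign", "click_rate": 0.4,
     "survey_support": 0.8, "staff_judgment": 0.9},
]

# Pure autopilot: analytics is the only signal.
autopilot = {"click_rate": 1.0, "survey_support": 0.0, "staff_judgment": 0.0}
# Mixed inputs: analytics still counts, but doesn't decide alone.
blended = {"click_rate": 0.4, "survey_support": 0.3, "staff_judgment": 0.3}

top_autopilot = max(campaigns, key=lambda c: blended_score(c, autopilot))["name"]
top_blended = max(campaigns, key=lambda c: blended_score(c, blended))["name"]

print(top_autopilot)  # the clickable campaign wins on autopilot
print(top_blended)    # the blend surfaces the less "growthy" campaign
```

The exact weights are a judgment call — which is the point: somebody at the organization has to make that call, rather than delegating it entirely to the click stream.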
If I understand it right, VictoryKit promotes exactly the type of autopilot that I’m worried about.
Maybe Aaron would have had a good rebuttal to this concern. He was incredibly thoughtful, and it’s entirely possible that he envisioned a solution that I haven’t thought of.
But today, one year later, as we reflect on his legacy, I want to offer this up as a conversation topic:
Does VictoryKit automate too much? And if so, how do we improve it?
*I have a hunch that they also test during the day; otherwise their response pool would be biased toward early birds.
**Adam Mordecai refers to Upworthy’s analytics engine as a “magical unicorn box.” Adam Mordecai is funnier than I am. Ergo, I’m going to start stealing language from him.