My Trouble with VictoryKit

We lost Aaron Swartz a year ago today.

I’ve been thinking a lot recently about VictoryKit, Aaron’s final unfinished project. He told me just a little bit about it last year, when we were both at the OPEN Summit.  The overlaps between his tech product and my emerging research puzzle (on analytics and activism) were uncanny, and the last conversation we had ended with a promise that we’d discuss it further soon.

As far as I can tell, VictoryKit is a growth engine for netroots advocacy groups.  It automates A/B testing, and it draws signal from a wider range of inputs (open rates, click rates, social shares, etc.) than usual.

The thing is, as I’ve conducted my early book research and learned more about VictoryKit, I think I’ve identified a real problem in the design.  I’m worried that VictoryKit automates too much.  It puts too much faith in revealed supporter opinion, at least as it is constructed through online activity.  And in the long term, that’s dangerous.

VictoryKit is designed to “send trickles, not blasts.”  The idea is to be constantly testing, constantly learning.

I heard Jon Carson from OFA give a talk last summer where he remarked, “if you get our email before 8AM, you’re in our testing pool.”  OFA is basically the industry standard for email testing.  They test their messaging in the morning, sending variant appeals out to random subsets of their list*.  A few hours later, they refine their language based on the test results, and then they send a full-list blast in the afternoon.  That’s one of the basic roles of A/B testing in computational management.
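To make the pattern concrete, here’s a minimal sketch of that morning-test / afternoon-blast workflow in Python.  The names (send_email, count_actions) are hypothetical stand-ins, not OFA’s actual tooling, and the real thing obviously waits a few hours for results before picking a winner.

```python
import random

def run_morning_test(members, variants, test_fraction, send_email, count_actions):
    """Send each variant to a small random slice of the list; return the winner and the untested remainder."""
    members = list(members)
    random.shuffle(members)
    slice_size = int(len(members) * test_fraction)
    scores = []
    for i, variant in enumerate(variants):
        test_group = members[i * slice_size:(i + 1) * slice_size]
        send_email(test_group, variant)                          # morning: send this variant to its test slice
        scores.append((count_actions(test_group), i, variant))   # a few hours later: opens, clicks, signatures
    best_score, _, winner = max(scores)
    remainder = members[len(variants) * slice_size:]
    return winner, remainder

# Afternoon: send_email(remainder, winner) is the refined full-list blast.
```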

VictoryKit gets rid of the full-list blast.  Instead, you keep feeding petitions into the magical unicorn box**, it judges which petition is most appealing, and then it sends that petition to another incremental segment of the list.  I haven’t looked into the exact math yet, but the basic logic is clear: analytics represent member opinion.  Automate more decisions by trusting the analytics, and you’ll be both more representative and more successful.
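I haven’t seen the code, so the sketch below is only my guess at the general shape of that loop: something like an epsilon-greedy bandit that mostly sends the current leader to the next slice of the list, while occasionally re-testing the others.  Every name in it is mine, not VictoryKit’s.

```python
import random

def next_trickle(petitions, stats, segment_size, epsilon=0.1):
    """Pick which petition the next list segment receives, mostly exploiting the current leader."""
    untested = [p for p in petitions if stats[p]["sent"] == 0]
    if untested:
        choice = random.choice(untested)     # every petition gets at least one trickle
    elif random.random() < epsilon:
        choice = random.choice(petitions)    # occasionally keep exploring the apparent losers
    else:
        choice = max(petitions, key=lambda p: stats[p]["actions"] / stats[p]["sent"])
    stats[choice]["sent"] += segment_size
    return choice

stats = {"petition_a": {"sent": 0, "actions": 0}, "petition_b": {"sent": 0, "actions": 0}}
choice = next_trickle(["petition_a", "petition_b"], stats, segment_size=500)
# ...later, fold the observed opens/clicks/shares back into stats[choice]["actions"]
```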

The problem here is that our revealed preferences are not the entirety of our preferences.

A.O. Hirschman wrote about this in “Against Parsimony.”  Essentially, we have two types of preferences: revealed preferences and meta-preferences.  Revealed preferences are what we do, what we buy, what we click.  But Hirschman points out that we also have systematic preferences about what kinds of options we are presented with: those are our meta-preferences.

I always think of this as the Huffington Post’s “Sideboob” problem.  Huffpo has a sideboob vertical because celebrity pics generate a lot of clicks.  That’s a revealed preference: if Huffpo gives us a story about inequality and a story about Jennifer Lawrence at juuuuust the right camera angle, JLawr will be far more popular.  So Huffpo provides a ton of sideboob and a medium amount of hard-nosed journalism.

But!

If the Huffington Post gauged reader preferences through different inputs (by asking readers to take online surveys, for instance), then they’d get a different picture.  More people click on celebrity pics than will say “yes, that’s what I want from the Huffington Post.”

There’s a narrow version of economic thought that rejects meta-preferences as being unreal.  If people say they want hard news, but they click on the celeb pics, then they must really want the celeb pics.  But that’s unsupportable upon deeper reflection.  People are complex entities.  We can simultaneously watch junk tv and wish there were higher-quality programming.  New gym memberships peak around New Year’s and late spring, as people who generally don’t reveal a preference for regular exercise act on their meta-preference for healthier living.

In online political advocacy, the signals from revealed preferences are even weaker.  We click on the petitions that are salient, or engaging, or heart-rending.  But we want our organizations to work on campaigns that are the most important and powerful.  Some of those campaigns won’t be very “growthy.”  But that doesn’t mean they’re unimportant.

Take a look, for instance, at question #6 in Avaaz’s 2013 member survey.  Avaaz asked global members their opinion on a wide range of issues.  It also asked them “how should Avaaz use this poll?”  Only 5% thought their opinions should be binding on the organization.  The other 95% felt the poll should serve as minor input or as a loose guide.  When asked, Avaaz members express a meta-preference for staff retaining plenty of room to exercise their own judgment.

The problem with analytics-based activism is that it can lead us to prioritize the most clickable issues instead of the most important ones.  That’s what can happen if you equate revealed preferences, as evidenced by analytics signals, with the totality of member preferences.

There’s a simple solution to that problem: maintain a mix of other signals.  Keep running member surveys.  Make phone calls to your most active volunteers to hear how they think things are going.  Hire and empower the right people, then trust their judgment.  Treat analytics as one input, but don’t put your system on autopilot.
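For illustration, one way to keep analytics as just one input is to score campaigns on a blend of signals rather than on click-rate alone.  The weights and field names below are invented; the point is only that a member survey or a staff judgment can outvote raw clickability.

```python
def campaign_priority(campaign, w_clicks=0.4, w_survey=0.3, w_staff=0.3):
    """Blend analytics with survey and staff-judgment signals (all scores normalized to 0-1)."""
    return (w_clicks * campaign["click_rate_percentile"]
            + w_survey * campaign["member_survey_importance"]
            + w_staff * campaign["staff_judgment_score"])

campaigns = [
    {"name": "clickable celebrity-adjacent petition", "click_rate_percentile": 0.95,
     "member_survey_importance": 0.30, "staff_judgment_score": 0.40},
    {"name": "unglamorous but important campaign", "click_rate_percentile": 0.40,
     "member_survey_importance": 0.85, "staff_judgment_score": 0.90},
]
campaigns.sort(key=campaign_priority, reverse=True)  # the unglamorous campaign now ranks first
```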

If I understand it right, VictoryKit promotes exactly the type of autopilot that I’m worried about.

Maybe Aaron would have had a good rebuttal to this concern.  He was incredibly thoughtful, and it’s entirely possible that he envisioned a solution that I haven’t thought of.

But today, one year later, as we reflect on his legacy, I want to offer this up as a conversation topic:

Does VictoryKit automate too much?  And if so, how do we improve it?

*I have a hunch that they also test during the day.  Otherwise, their response pool would be biased toward early birds.

**Adam Mordecai refers to Upworthy’s analytics engine as a “magical unicorn box.”  Adam Mordecai is funnier than I am.  Ergo, I’m going to start stealing language from him.

1 thought on “My Trouble with VictoryKit”

  1. it’s an interesting question. now, i haven’t spent much time with VictoryKit specifically, but you lay out the basic problem here well enough that i’ll take a stab.

    you’re clearly on the right track when you say that alternate feedback mechanisms are necessary to temper the tunnel-vision that VictoryKit can acquire. but to that end, i don’t think the software itself can be held overly accountable for “automating too much.” the only real problem is if it is built in such a way that it actively discourages its users from soliciting additional feedback from out-of-band mechanisms.

    being that good software is built in layers, we see this class of problem quite often: there’s a system which does quite a lot of good work, but it’s missing some vital piece of higher-level context that might allow it to meet our needs in a more sophisticated way. once this reality is recognized, a basic question must be answered:

    “is this additional issue tightly related enough to the software’s role and scope that it merits expanding the original software to account for it? or would such a change irrevocably damage the clarity of purpose the original software had?”

    many factors are relevant in the answer, but in general, building the extension in a separate layer tends to keep everything more stable and sane in the long term.

    of course, this analogy doesn’t necessarily transfer so well from things like lower-level UNIXy programs into end-user-facing applications. but there are still lessons to be drawn. if VictoryKit is a bit of a victim of its own success, then the segments of its workload that function so well should, perhaps, be kept as they are. effort should instead be invested in building more tooling further back from the automation point, tooling that helps ensure/encourage users to incorporate the broader feedback.

    of course, taking this view is an oversimplification. the relationship between VictoryKit and some wrapping application (or new additions to it that wrap its existing behaviors) is quite a bit different from the relationship between low-level tools like, say, `make` and `sed`.

    so maybe the answer is, rather than thinking along a ‘wrapping’ axis, to follow the ‘meta’ axis. if VictoryKit is so good at automating the tasks it has thus far conceived of, but there is doubt that that set of tasks sufficiently captures the domain, then basically, turn on and improve VictoryKit’s debug mode. make it spew information about how and why it’s making its decisions, build systems that construct metadata on top of the spew, and use that to cement the feedback loop to users about what is going on. that, in turn, would likely lead to the development of wrapping-type tooling. but that’s fine – in fact, these are inherently complementary. the first step towards building any good wrapper is to understand the meta layer first, in terms the wrapped layer can understand. doing that well will tell you pretty clearly what needs to be built.
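    for concreteness, that kind of decision spew could be as simple as an append-only structured log of every choice the engine makes (the schema below is invented for illustration, not anything VictoryKit actually emits):

    ```python
    import json, time

    def log_decision(logfile, chosen, candidates, signals):
        """Append one structured record explaining a single automated choice."""
        record = {
            "timestamp": time.time(),
            "chosen": chosen,          # which petition the engine promoted
            "candidates": candidates,  # what it was competing against
            "signals": signals,        # the open/click/share numbers behind the call
        }
        logfile.write(json.dumps(record) + "\n")

    with open("decisions.jsonl", "a") as f:
        log_decision(f, "petition_a", ["petition_a", "petition_b"],
                     {"petition_a": {"opens": 410, "clicks": 55},
                      "petition_b": {"opens": 390, "clicks": 31}})
    ```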
