Making Peace with Campaign Microtargeting: four principles of responsible algorithms [book blogging]

[This post is part of an irregular series where I tinker with big concepts for my book.  Comments and disagreements are extra-appreciated …and can earn you a spot in the acknowledgements section!]

I had to skip this year’s Personal Democracy Forum, and have slowly been watching archived versions of the keynote talks to see what I missed.  One talk that really stands out for me is Cathy O’Neil speaking about “Weapons of Math Destruction.”  O’Neil is writing a book about algorithms, and how social institutions cloak their decisions behind mathematical equations in order to obscure the choices that they make.  It’s an important topic, and I’m looking forward to the book.  Of the three examples she gives, though, one does not seem much like the others.

O’Neil provides three examples of algorithms as “weapons of math destruction.”  The first is the Value Added Model (VAM) in public education.  The VAM is an algorithm that is supposed to separate the good teachers from the bad teachers.  That’s a laudable goal.  We probably need a good model for grading teachers and incentivizing good teaching. But O’Neil explains that the model is a complete black box.  No teacher, no administrator, no data scientist is allowed to look at the algorithm itself and determine if it is measuring the right things.  When teachers and administrators ask to see information about the model, they are told “oh you wouldn’t want to know about it–it’s math.”  We are evaluating teachers without explaining to them what answers they got wrong or how they can improve their scores.  Let that pedagogical irony sink in for a moment.

This is a case where algorithms and Big Data take on an almost alchemical quality.  “Put your trust in the data wizards,” we are essentially told, “they know things that you cannot fathom.”  And as with all other forms of alchemy, if you dig beneath the surface you’ll quickly detect a faint scent of manure.

O’Neil’s second example is even more troubling: predictive policing and evidence-based sentencing in the criminal justice system.  Judges rely on predictive models to estimate a “recidivism score,” which factors into their sentencing decisions.  Likelihood of recidivism is, again, an important consideration.  Policing, like teaching, is a massive public good, and it seems like better data would be a good thing.  But the problem with these recidivism models is that they include factors (high school graduate?  Currently employed?  Did your father serve jail time?) that would be plainly illegal if they were brought to a judge directly.  By cloaking these factors behind mathematics, the justice system becomes less just.
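To make that “cloaking” concrete, here is a toy sketch of how inputs like those get folded into a single, authoritative-looking number.  The factors and weights are entirely invented for illustration (they are not drawn from any actual sentencing product), but the structure is the point: the judge sees the score, not the objectionable inputs behind it.

```python
# A toy recidivism-style score.  All factors and weights here are invented
# for illustration; this is not any vendor's actual model.

def toy_recidivism_score(defendant: dict) -> float:
    """Fold several inputs into one opaque number between 0 and 1."""
    score = 0.0
    score += 0.30 * (not defendant["high_school_graduate"])   # education proxy
    score += 0.25 * (not defendant["currently_employed"])     # employment proxy
    score += 0.25 * defendant["father_served_jail_time"]      # family-history proxy
    score += 0.20 * min(defendant["prior_arrests"], 5) / 5    # prior record
    return round(score, 2)

# A judge sees only the final number, never the factors that produced it.
print(toy_recidivism_score({
    "high_school_graduate": False,
    "currently_employed": False,
    "father_served_jail_time": True,
    "prior_arrests": 1,
}))  # -> 0.84
```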

But then there’s her third example: microtargeting in political campaigns.  And this is where I think the argument stumbles a bit.  The first example she provides is Facebook’s 61 million person Get Out The Vote experiment.  (Micah Sifry has written previously about how this experiment demonstrates Facebook’s implicit electoral power).  But that experiment is not technically microtargeting.  The second example she gives is a hypothetical: when she visits Rand Paul’s website, it could highlight his positions on financial reform while hiding other positions that she is less likely to agree with.  “What is efficient for campaigns is inefficient for democracy,” she concludes.

This last example seems like a stretch to me.  Political campaigns have always used targeting in their communications.  Candidates spice up their stump speeches with local anecdotes and local issues.  Mailings are targeted based on demographics, issues, and vote history.  Broadcast political commercials are targeted to focus on the issues that swing voters (or base voters) find most appealing.  Targeting and modeling in political campaigns isn’t particularly new.  What we’re seeing with microtargeting is a difference in degree, rather than a difference in kind.  The databases are becoming less terrible.  The campaigners are taking testing and modeling more seriously.

The case of political microtargeting seems different from the VAM and predictive sentencing because of four general properties: let’s call them the Principle of Potential Harm, the Principle of Approximate Transparency, the Data Quality Principle, and the Principle of Potential Redress.

The Principle of Potential Harm asks “what (unintended) harms might befall an individual if this algorithmic model produces a faulty decision?”  In the case of the VAM, good teachers could be unfairly punished.  They could be denied raises or potentially fired.  In the case of predictive sentencing, people of color and poor people could receive longer, harsher sentences than their white and well-off peers.*  In the case of campaign microtargeting, an individual… might encounter less political advertising that they disagree with.

Within electoral politics, algorithmic models have also been used to purge voter rolls.  There the potential harm is that an individual can be denied their right to vote simply because their name is similar to the name of a convicted felon.  The Principle of Potential Harm states that we should be more concerned with algorithms in “vote cleansing” programs than with algorithms in political advertising.
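To see how easily this kind of purge goes wrong, here is a small sketch of the sort of loose name matching that can flag an innocent voter.  The names and the similarity threshold are made up, and real purge vendors used their own matching rules, but the failure mode is the same: a near-match becomes a lost vote.

```python
# A small sketch of loose name matching against a felon list.
# The names and the 0.8 threshold are invented for illustration.
from difflib import SequenceMatcher

felon_list = ["John A. Michaels", "Robert E. Smith"]
registered_voter = "John T. Michaelson"   # a different person entirely

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Crude similarity test: true if the two names mostly overlap."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

flagged = [name for name in felon_list if similar(registered_voter, name)]
print(flagged)  # ['John A. Michaels']: an innocent voter lands on the purge list
```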

The Principle of Approximate Transparency states “if someone asks why an algorithm categorized them as it did, they should be able to get a clear answer.”  This is a rule that some of the leading netroots advocacy groups follow: if they are going to use predictive modeling to decide who gets what communications, then they should be prepared to explain what factors went into that decision.  If they would be embarrassed to explain it, then they should not use predictive modeling in that case.
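As a rough illustration of that rule, here is what “be prepared to explain it” can look like in code.  The factors and point values are hypothetical, not any group’s actual model; the point is simply that every targeting decision carries a human-readable explanation alongside the score.

```python
# A minimal sketch of a targeting model that can explain itself.
# The factors and point values are hypothetical, chosen only for illustration.

def score_for_gotv_mailer(voter: dict) -> tuple[int, list[str]]:
    """Return a turnout-targeting score (0-100) plus the reasons behind it."""
    score, reasons = 0, []
    if voter["voted_in_last_midterm"]:
        score += 40
        reasons.append("voted in the last midterm")
    if voter["registered_recently"]:
        score += 30
        reasons.append("registered within the past year")
    if voter["lives_in_target_district"]:
        score += 30
        reasons.append("lives in a targeted district")
    return score, reasons

score, reasons = score_for_gotv_mailer({
    "voted_in_last_midterm": True,
    "registered_recently": False,
    "lives_in_target_district": True,
})
print(score, "because:", "; ".join(reasons))
# 70 because: voted in the last midterm; lives in a targeted district
```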

I call this “approximate transparency” because there are actually some quite good reasons to keep the details of a predictive algorithm obscure.  If Facebook or Google were fully transparent about their algorithms, then malicious actors would be much more successful in gaming their ranking systems.  If a predictive model is being used to make valuable decisions, then we should assume people will try to distort that model.  A little bit of opaqueness can go a long way in helping the models to perform effectively over time.  But if a model is completely secret, then we are unable to consider its merits and its flaws.

In the area of political microtargeting, political journalists enforce an approximate form of transparency.  In the 2012 election, ProPublica set up a system that monitored emails from both presidential campaigns to see how they were microtargeting their messages.  Political journalists and academics paid close attention to political advertisements as well.  This was not full transparency — the Obama Campaign was not going to tell anyone its strategy for determining who got which messages — but it was enough to keep the worst potential excesses in check.  Any value the campaigns might get from extremely microtargeted advertisements would be washed away if it led to a front-page story about their deceptive practices.

The Data Quality Principle states that we should stay aware, and wary, of the underlying quality of the data going into the model.  Again, the 2000 Florida voter purge is a helpful example.  If the company that built the purge list had had perfect data, then its computerized removal of names from the voter rolls would have been a trivial matter.  But its data was junk, and that rendered the model suspect.

The Data Quality Principle is a major reason why I am not particularly concerned about voter microtargeting.  Even though the databases are better than they’ve ever been before, they still have lots of flaws and errors.  Electoral campaigns (particularly the big ones that are flush with cash) lean towards overinclusion rather than overexclusion in their communications.  So while they might use an enhanced voter file to help isolate the neighborhoods and households most in need of a door-knock, we are pretty far removed from the future dystopia where household A and household B receive entirely different messages at their door.

The Principle of Potential Redress holds that, since algorithms are flawed, there should be a clear avenue for redress when a person feels they have been algorithmically wronged.  Teachers should be able to effectively challenge their VAM score.  Convicts (or their lawyers) should have clear tools for arguing why the predictive sentencing algorithm is making the wrong prediction.  Voters who have been algorithmically excluded from the rolls should be able to cast a provisional ballot, and that ballot should be counted after minimal procedural headaches.

The potential redress for citizens who receive microtargeted political advertisements is… read some political journalism! Electoral campaigns are awash in political advertisements.  Better targeting of those advertisements is efficient for the campaigns and, for the most part, less of a headache for the citizens.

My main point here is that some algorithms are much more ethically dicey than others.  It depends on what the algorithm is being used for, how trustworthy the underlying data is, how transparent the model is, and what pathways we have to challenge its decisions.

Smart critiques of emerging digital decision-making often lump campaign microtargeting in with a laundry list of other, deeper problems. I, for one, have made my peace with campaign microtargeting.  And I think the differences between it and other “weapons of math destruction” can help us understand which algorithms are the most dangerous.


*Note: people of color and poor people already face major sentencing disparities.  So I suppose the potential harm here is that these disparities will be even more difficult to address.
