Thoughts on Eli Pariser’s “The Filter Bubble”

Eli Pariser, the former Executive Director of MoveOn, has a new book out on the social impacts of the internet.  It’s quite good – reminiscent of Cass Sunstein’s Republic.com and Infotopia, in that it is utterly readable, carefully constructed, and critical in tenor.  The important difference between Pariser’s book and Sunstein’s books is temporal in nature: the digital environment continues to evolve, and Eli highlights some elements of that evolution that rightly should concern all of us. Essentially, we’re dealing with a different online environment in 2011 than we were in 2001, and Pariser’s book is a nice guide to the current threats and opportunities coming out of that space.

I had one big “ah hah” moment in the course of reading the book.  “Multidimensionality can be outstripped by improved point prediction.  And that would be a bad thing.”  Allow me to riff on that a bit below:

“Multidimensionality” is a shorthand that I often use when teaching Sunstein’s work.  In Republic.com, Sunstein introduces the concept of the “Daily Me.”  First envisioned by MIT Media Lab’s Nicholas Negroponte, the Daily Me was a personalized web portal, in which each individual received news and information customized to their interests.  Sunstein raised concern about the Daily Me, suggesting that it could produce “cyberbalkanization,” in which competing ideological communities only receive news that reinforces their own points of view, leading in turn to further radicalization.  American democracy has never been calm and deliberative, but we at least have historically been divided through divergent interpretations of the same events.  In the world of the Daily Me, we don’t even interpret the same events – our news becomes hypercustomized instead.

The Daily Me is a provocative concept.  It’s also clearly limited in two respects.  First, the concept is anchored in a time period when personalized web portals (Yahoo or MSN landing pages) were viewed as the future of the internet.  The developmental path of the internet veered off in a different direction.  Web 2.0 took off, and we increasingly spent our time at sites that feature user-generated content and community activity.  When I log on to the web, I check gmail, 3 blogs, and facebook.  Corporations are behind each of these spaces, to be sure, but they’re different corporations than in 2001, and they’re inviting me to engage in different activities than Yahoo and MSN were.  Rather than a hypertargeted news feed, there are the socially derived postings on my facebook wall.  So, for that reason, the Daily Me is a bit dated.  Sunstein himself noted this in Republic.com 2.0, where he suggested we’ve developed elements of a “Daily Us” instead.

The Daily Us can still provide reinforcing views and divergent news agendas, though.  Take a minute to scan the blog posts at DailyKos and HotAir, the top political blogs on the left and right.  Depending on the day, you’re likely to find that they aren’t just using different frames to discuss the day’s news, but instead are talking about different news topics altogether.  Members of these communities, then, are still at risk of cyberbalkanization.

“Multidimensionality” mitigates the cyberbalkanization problem.  Simply put, members of political online communities have non-political interests as well.  I may only interact with liberals on DailyKos, but I have several libertarian friends through Yehoodi, and there are a few Republicans who are active Washington Wizards fans as well.  As a member of several communities-of-interest, I’m exposed to people with cross-cutting views on politics, broadly defined.  Our personalities, interests, and affiliations cannot be reduced to a simple one-dimensional (left-right) spectrum, because we also build social capital through a variety of hobbyist communities.  The answer to online communities is …more online communities (cue the recitations of Federalist 10).

For those reasons, I’ve long been convinced that we don’t need to be all that concerned about cyberbalkanization.

And then I read Eli’s book.

The core of Pariser’s concern is well explained in his TED Talk.  Eli is a progressive.  He also has other hobbies and interests.  Thus, he consciously has developed conservative friends, and is tied to them through facebook.  One day, however, he noticed that he was no longer seeing their updates in his news feed.  Facebook’s algorithm had recorded that he didn’t click on those links very often.  So it “optimized” his experience by removing those updates.

On the surface, that’s a small issue.  A progressive doesn’t see headlines that weren’t all that appealing to begin with.  But it points to a much bigger problem.  Even at the social layer of the web, multidimensionality is viewed as a type of inefficiency – an engineering problem to be solved.  For the engineers and the third-party advertisers, the goal is better point prediction.  Through improvements in automated filtering, they can reduce the incidental knowledge gains that come through membership in multiple communities.  Facebook, ideally, would like to only show me sports-related updates from my Wizards fan-friends, and only show me politics-related updates from my netroots friends.  Advertisers, ideally, would like to know which elements of those subcommunities most fit my profile.  It’s an engineering problem to them, with an engineering solution.
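The filtering logic Pariser describes can be made concrete with a toy sketch.  This is not Facebook's actual algorithm (which is proprietary, as discussed below); it's a minimal, hypothetical illustration of point prediction by click-through rate, where posts from friends you rarely click are silently treated as noise and dropped:

```python
# Hypothetical sketch of click-based feed filtering -- NOT Facebook's
# real algorithm, just an illustration of point prediction at work.

def filter_feed(posts, click_history, threshold=0.2):
    """Keep only posts from friends whose historical click-through
    rate (clicks / views) meets the threshold; everything else is
    treated as 'noise' and removed from the feed."""
    visible = []
    for post in posts:
        clicks, views = click_history.get(post["friend"], (0, 0))
        ctr = clicks / views if views else 0.0
        if ctr >= threshold:
            visible.append(post)
    return visible

# Illustrative data: a progressive who clicks netroots links often
# but conservative friends' links rarely.
history = {
    "netroots_friend": (8, 10),     # clicked 8 of 10 times -> kept
    "conservative_friend": (1, 10), # clicked 1 of 10 times -> dropped
}
feed = [
    {"friend": "netroots_friend", "headline": "Primary results"},
    {"friend": "conservative_friend", "headline": "Tax policy op-ed"},
]
print([p["friend"] for p in filter_feed(feed, history)])
# → ['netroots_friend']
```

Note what the threshold does: the conservative friend's updates vanish not because the user chose to hide them, but because past clicks predicted low engagement.  That is the multidimensionality-as-inefficiency problem in miniature.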

Of particular concern is that this personalization is going on without our knowledge.  Even if I don’t want it to happen – even if I’d like to hear the contrarian opinions of blues dancing Ron Paul fans – large social media hubs are going to treat those voices as noise and try to remove them.  Unless I decide to put outstanding effort into “fooling the filters,” I’m going to be stuck solely with reinforcing views.  And that increases the threat of cyberbalkanization.

I’m tempted to call this another example of the “beneficial inefficiencies” problem.  Multidimensionality may appear as an engineering problem for social media purveyors and the third-party advertisers who pay them.  But it also serves to mitigate some social problems.  As the social web continues to develop, cyberbalkanization could easily reemerge as a substantial threat.  In short, multidimensionality can be outstripped by improved point prediction.  And that would be a bad thing.

It isn’t easy to conduct academic research on this sort of “point prediction.”  The engineers and data industries operate under copyright protection, proprietary data, nondisclosure agreements, and trade secret rules.  This is non-transparent data, and there are strong incentives for the companies and engineers to keep it that way.  Pariser’s interviews with Yahoo and Google engineers, as well as his conversations with dozens of social scientists, represent a substantial step forward in understanding the current digital environment.

I’m impressed with Pariser’s book.  It’s well worth reading, and explains these concepts with greater clarity and better examples than I’m providing above.  It’s a nice departure from the normal “cyberskeptic” book (Jaron Lanier and Nicholas Carr providing two recent examples).  It’s well-balanced, thoughtful, and serious.  In a rapidly changing medium, it helps highlight what the Internet has become, where it may be heading, and why that matters.  Pariser asks us not to fear, criticize, or dislike the digital landscape, but to help make it better.  As he notes in his conclusion, “the Internet isn’t doomed, for a simple reason: This new medium is nothing if not plastic.”

Indeed.

2 thoughts on “Thoughts on Eli Pariser’s “The Filter Bubble””

  1. Looking forward to reading Eli’s book. I share the worry about segmentation (as opposed to “cyberbalkanization”), expressed also by people like Joe Turow in various books in recent years. I know that Sandra Gonzalez-Bailon has forthcoming research based on large-scale quantitative data to show that, interestingly, disagreement is something that keeps online communities alive. If no one disagrees, people become less active and eventually drop out. This kind of insight may convince corporations like Facebook to do things differently, to accept more multidimensionality and less point prediction.

  2. The Facebook News Feed filtering problem raises yet another reason for independent, federated identity management. That is – not only should we own our own data and networks, but *we* should be the ones to decide who to listen to, and who to tune out. It could be, in other words, more fun!

Comments are closed.