Neglect and Uncle Sam, not the Internet, Killed the Middle Class

In an interview with Salon and in his newest book, “digital visionary” (Salon’s words) Jaron Lanier claims that the internet has destroyed the middle class. Kodak employed 140,000 people, while at the point of its sale to Facebook, Instagram employed just 13, and thus (without much exaggeration of his argument) the internet killed the middle class. QED.

What a crock.

Lanier is apparently incapable of stepping back from technological determinism and looking at the actual causes of our ballooning economic inequality — which, to cut to the chase, is primarily a result of our policy choices. Yet the role of government in determining the overall shape of the economy is too often understated or outright ignored by those who wring their hands about growing economic inequality.

With some noted exceptions, those who criticize Lanier still mostly point at the old standby twin bogeymen of automation and outsourcing. The HuffPost chat in which all of the guests are willing to challenge Lanier’s conclusions is typical on this count but hardly alone. To his credit, Buffalo State College economist Bruce Fisher starts heading in the right direction with his concerns about fostering and preserving the political and social engagement of those who are being left out, but he fails to take it the next step and discuss the major policy changes and political neglect that have brought us to this point.

The best explanation that I’ve seen of America’s growing wealth inequality is Winner-Take-All Politics, in which Jacob Hacker and Paul Pierson start with a simple look at other industrialized countries to show that inequality isn’t an inexorable outcome of trade and automation. The Germans and Swedes certainly have similar chances to outsource their manufacturing and use technology to reduce labor forces.

Not only does the rest of the industrial world have the internet, too; thanks to better telecom policy, they generally have faster connections and cheaper prices. Yet as measured by the Gini coefficient, a standard measure of economic inequality, their economies have far more equal distributions of both take-home income and wealth.

The wealth distribution in particular is just shocking — the US has a wealth Gini of .801 (where 1.000 is “one person owns everything”), the fifth highest among all included countries and almost exactly the same as the distribution of wealth across the entire planet (.803). Think about that for a second; we have the same radically unequal distribution of capital within the US as among the entire population of the world across all countries — from Hong Kong and Switzerland to Nigeria and Haiti.
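
To make that statistic concrete, here is a minimal sketch (in Python, with made-up numbers rather than real census data) of how a Gini coefficient is computed from a list of household wealth holdings:

```python
# A minimal sketch with hypothetical numbers, not actual census data.
# Gini coefficient: 0.0 means perfect equality, 1.0 means one household owns everything.

def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula: G = sum((2i - n - 1) * x_i) / (n * sum(x)),
    # with i running from 1 to n over the sorted values.
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)

# A hypothetical ten-household economy where one household holds most of the wealth:
unequal = [0, 1, 1, 2, 2, 3, 5, 8, 12, 966]
equal = [100] * 10

print(round(gini(unequal), 3))  # 0.882: heavily concentrated, in the neighborhood of the US figure
print(round(gini(equal), 3))    # 0.0: perfectly equal
```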

With our paper-thin social safety net and highly unequal distribution of income and wealth, we’re left with an economy where tens of millions struggle to get by while wealthy Manhattanites are hiring handicapped “relatives” for $1,000 per day to be able to skip the lines at Disney World.

Across countless major policy areas — health care, education, financial regulation, taxation, support for the unemployed, and many more — the rest of the industrialized world generally does far more to make their societies fairer for all. Our shrinking protections for workers may be the greatest single cause of the shrinking middle class. Of course, worker protection can be done badly — I would certainly not want to swing as far as Italy and Spain, where it’s nearly impossible to fire somebody once they’re a regular, full-time employee. Yet we should not allow employers to fire union organizers with near impunity. We should not force organizers to wait for months between card check and votes to unionize so that employers can “educate” their captive audience workforce with the most pernicious disinformation and intimidation. We should not sit idly while nearly half of states fail to meet even “minimum workplace-safety inspection goals, due to state budget cuts and reduced staffing.”

It’s true that the middle class is being gutted in the US, but this is primarily due to how our political system turns the act of surviving and thriving into a high-wire act for an ever-larger slice of the population. Laid-off baby boomers, even those with desirable skills, are having a devil of a time finding work in a country where age discrimination is only nominally illegal. Meanwhile, our children attend public schools with an unconscionably unequal distribution of funding, so moving or being born into a more affordable neighborhood may cost kids their futures, too.

Teens and laid off workers alike are told that college is the route to a better future, but the cost of education is skyrocketing as states and the feds slash public investment in higher education. Many families — even many families with health insurance — are one major medical problem away from unemployment and bankruptcy. Since it’s totally legal to use credit reports and current employment status in making hiring decisions, being laid off or losing one’s job after a medical problem can quickly become a death spiral. None of this is due to outsourcing or automation, but is instead the result of a noxious combination of deliberate policy changes (the privileged seeking to strengthen their own hand) and policy drift (the rest of us sitting idly by or being ignored when we do speak up).

Frankly, I’m glad that Lanier has released this book, sloppy though it may be. (The people raving about this book as a carefully wrought masterpiece are deluding themselves — and not, as Lanier accuses others of doing, “diluting themselves”.) This is not primarily because he has some insights here and there, but because we need to talk about the gutting of the middle class as loudly and as frequently as possible. We must do so, however, in a way that examines how our collective decisions have gotten us to this point. That includes making international comparisons with other “laboratories of democracy” to see how we can do better.

After even a cursory glance abroad, we will see that we should stop returning to the too-easy explanations based on globalization and technology. These forces are at play across the world, and the other wealthy industrialized countries have generally not had the same dismal results. The more likely culprit is in the halls of government.

Jaron Lanier’s technologist myopia

Jaron Lanier is at it again.  Two weeks ago, at Personal Democracy Forum, Lanier unveiled the central thesis of his next book: computer networks are causing the demise of the American middle class, threatening democracy as we know it.  New information technology has exacerbated the unequal distribution of wealth.  The Internet has undermined a set of “levees,” including academic tenure, copyright, and taxi medallions (???).  All sorts of social problems — from Wall Street shenanigans to the decline of unions — can be laid at the feet of technologists.  And, in Lanier’s eyes, these technological problems have technological solutions.  We simply have to rewrite the entire Internet, embrace Ted Nelson’s failed Project Xanadu and rebuild from the hyperlinks on up.

If that sounds extreme, don’t worry.  It’s supposed to be.  Jaron Lanier is the Great Curmudgeon of the Internet Community.  An influential technologist in the 1980s and 1990s, Lanier later began to ask “what hath our efforts wrought” sorts of questions.  He thinks on a grand and abstract scale, he does not like what he sees, and the Internet Community regularly provides a platform for him to voice his objections.  Among my friends and colleagues in the Internet research community, everyone either loves or hates what Lanier has to say.  But he is always provocative, and that indeed is largely the point.

We need good curmudgeons (or skeptics, at least) in the world.  Particularly in the technology & society community, which has a habit of falling into boundless optimism.  Good curmudgeons force smart optimists to engage in healthy self-reflection.  For that, if nothing else, they should be thanked.  My problem with Lanier, however, is that I don’t think he’s a particularly good curmudgeon.

As usual, there’s a kernel of truth in his work.  Technologists ought to be mindful of the values that they encode in software.  The individuals who construct our digital environment make up an increasingly important social elite.  Facebook and Google are, indeed, monetizing our every action – we create value, they harvest that value and turn a profit.  We ought to think through the social consequences of technology-driven disruptions.

But, as with his last book, the power of his critique evaporates due to a pair of gaping flaws.  The first is a problem of style, the second an error of analysis.

Stylistically, Lanier writes the way he talks — stream of consciousness, hopping from one example to another.  If you have trouble following his argument in the book or in the video, that isn’t because he’s just so brilliant.  Like many technologists, he likes to begin from first principles, designing his arguments basically from scratch.  That’s a fine method for creating operating systems.  It’s a poor tool for social analysis, though.  There are too many complicated moving parts, too much that cannot be simplified or assumed away.

His haphazard style exacerbates a habit of treating correlation as causation.  Lanier sees the rise of computer networks and the decline of unions in America and thinks “this is all connected!”  Yet unions began their decline well in advance of personal computing.  He sees computer networks driving wealth creation, but seems to forget that past advances in technology drove wealth creation as well.  History cannot be neatly divided into “pre-Internet” and “post-Internet” categories, and by insisting on that division, he fails to take history seriously.

The bigger problem is Lanier’s error of analysis: Technologists are an elite, not The Elite.   The decline of unions (particularly the recent union fights in Wisconsin and Ohio) is not caused by the new information environment.  It is caused by motivated political elites, enacting policies that favor their own narrow interests.  The Wall Street crash was orchestrated by “quants” using computer networks, but it was made possible by the repeal of Glass-Steagall.   The decline of the American middle class has not been caused by technology.  The solutions to that decline lie not in the realm of bits and bytes, but in the realm of policies and votes.

A better skeptic would take other social forces into account.  To borrow from Larry Lessig, the information environment is shaped by four forces: laws, norms, markets, and architecture.  Indeed, one of the lessons from SOPA was that, if internet architects don’t exert political pressure, then Hollywood will reforge the internet.  Lanier looks at the Internet and sees the rise of a digital elite.  He then makes the moral argument that they should give up their power, creating an egalitarian internet along the lines of Ted Nelson’s original vision instead. Better skeptics, like Siva Vaidhyanathan, also see the rise of a digital elite.  But instead, Siva concludes that we should think of companies like Google and Facebook as though they were utilities, and regulate them accordingly.  Siva’s perspective is not just more realistic, it’s also more nuanced and accurate.

We dealt with the old robber barons (eventually) by regulating their influence.   Even if you agree with Lanier’s claim that technology is undermining the old “middle class levees,” the solution is to create new ones through public policy.  Blaming Facebook and Google for our social problems may be gratifying, but it lets the real culprits off the hook.

For Jaron Lanier, All Roads Lead to Code.  That perspective has made him the most popular internet curmudgeon.  Lanier has the ears of the entire tech community.  He occupies a space in a network – the space reserved for the critic.  With that role comes the responsibility to use it well!  Sloppiness either in thought or execution makes it too easy for his audience to dismiss all such criticism.  I can only hope that, as he transforms his Personal Democracy Forum talk into his next book, he takes this responsibility seriously.  Technologists are not the only architects of our society.  We should be mindful of the values encoded in our technologies, but just as mindful of the values embraced by our public policies.

Stop Online Piracy Act: Terrible Law. Great Example of Internet Mobilization?

We’re in trouble. The future of the internet is in danger, and if that danger comes to pass, it’s both unhealthy for and a very bad indicator of the health of our democracy.

Congress is already very close to passing companion bills to censor the internet, the Stop Online Piracy Act (SOPA, H.R. 3261) and the Protect IP Act (PIPA, S. 968). This is in addition to the domain name seizures already underway by Immigration and Customs Enforcement (ICE).

All of these efforts are terrible ideas. Their supporters don’t understand or care about the internet and are happily willing to break the internet to appease the content industry. It is among the very worst contemporary examples of a government that is of, by, and for special interests, and if it passes, it will be a slap in the face of democracy, free expression, due process, and technological innovation. To top it all? It won’t even do much to stop online infringement.

Fortunately, there may be signs that things are turning our way. I’ll get to that further below.

EFF has a great summary of the several ways SOPA can lead to a site getting shut down. Section 102 deals with foreign sites and is the most all-encompassing, but 103 and 104 are actually easier for rights holders to (mis)use, and they apply to domestic as well as foreign sites, so I’ll start there.

Section 103 allows IP rights holders to go directly to a website’s payment processors and advertisers—and to demand that these third parties cease all business with the website operator. These payment processors and advertisers then have just five days to act. The website operator has the right to file a counter-notice that they are not substantially dedicated to infringement, but (a) they may not get the chance until after the payment processors and advertisers have already cut off payments, and (b) the third parties have no obligation to take the counter-notice as final and re-establish a business relationship.

Section 104 takes this “default=censorship” strategy even further. Everyone in the internet ecosystem—registrars, web hosts, advertisers, financial processors, search engines, etc. etc.—gets near-categorical federal and state immunity for any decision to terminate a business relationship with a site (or even to shutter a site) “in the reasonable belief” that the site is dedicated to infringement. Under Section 103, a rights holder must at least file a claim. Under Section 104, even the intimation that a site is infringing might be enough to get it shut down—and the site would have no legal recourse.

The Administration also gets in on the fun in Section 102, which gives the Attorney General the power to use government-mandated Domain Name System (DNS) filtering to stop Americans from accessing “foreign infringing sites.” A domain name, such as Google.com, is an easy-to-remember way to tell one’s computer to go to a specific numeric address (e.g., 74.125.39.147). It is this number (the IP address) that identifies that site’s server (the computer that hosts the website). Everyone enters the domain name into their browser’s internet address bar, but the numbers would take one to the same site. Click on the numbers above or paste them into your browser to see for yourself.
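
For the curious, here is a minimal sketch of that lookup using Python’s standard library; the hostname is just an example, and the address returned will vary by location and over time:

```python
# A minimal sketch of the lookup a browser performs behind the scenes:
# ask the Domain Name System which numeric (IP) address a name points to.
import socket

hostname = "google.com"
ip_address = socket.gethostbyname(hostname)  # the number your computer actually connects to

print(hostname, "->", ip_address)
```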

Under Section 102, if a site were found to be primarily dedicated to infringement, the government could “seize” the site’s domain name. More precisely, the domain name registrar—a company that keeps track of which domain names are attached to which servers—would, if US-based, be compelled to stop sending users to the correct server. All domestic ISPs would also be forbidden to take you to the right server (the number behind the name), and advertisers and banks would be forbidden from doing business with these companies.

If the government found a foreign site to be infringing under these bills, the government would try to make it disappear for US audiences.

If this bill becomes law, we will see the shuttering and/or financial starvation of thousands of websites—which are, of course, a form of speech and/or press. They would be silenced and/or starved based on either an affidavit by a rights holder, a mere suspicion by a business partner, or (at best!) a one-sided court hearing with a low burden of proof. Little wonder, then, that legal scholars from (my friend and) rising star Marvin Ammori to the legendary constitutional scholar Laurence H. Tribe (pdf) have concluded that the bills are unconstitutional threats to the First Amendment.

By now it should be clear that, if passed into law, SOPA or PIPA would have devastating consequences for innocent actors who are mistakenly identified. The domain seizures undertaken by U.S. Immigration and Customs Enforcement (ICE), beginning in 2010, illustrate this peril all too well. Several websites have been taken down for posting media files that were authorized and even actively shared by the copyright holders or their representatives. Others have apparently been seized merely for linking to allegedly infringing content.

One in particular, DaJaz1.com, has become the cause célèbre of the anti-domain-seizures movement. It was one of a cluster of hip hop websites seized last year. Major voices from Vibe to Kanye to P. Diddy were actively promoting the sites, hardly a sign that they are dedicated to copyright infringement.

Last week, the feds finally gave up on DaJaz1. TechDirt (which has nearly gone all-SOPA, all-the-time) had the headline:

Feds Falsely Censor Popular Blog For Over A Year, Deny All Due Process, Hide All Details…

Their opening clarifies exactly how unconstitutional this is:

Imagine if the US government, with no notice or warning, raided a small but popular magazine’s offices over a Thanksgiving weekend, seized the company’s printing presses, and told the world that the magazine was a criminal enterprise with a giant banner on their building. Then imagine that it never arrested anyone, never let a trial happen, and filed everything about the case under seal, not even letting the magazine’s lawyers talk to the judge presiding over the case. And it continued to deny any due process at all for over a year, before finally just handing everything back to the magazine and pretending nothing happened. I expect most people would be outraged. I expect that nearly all of you would say that’s a classic case of prior restraint, a massive First Amendment violation, and exactly the kind of thing that does not, or should not, happen in the United States.

They go on to detail how DaJaz1’s owners were stonewalled, blockaded, and denied their day in court for over a year, while the feds arranged a court process in which all proceedings (including several granting extensions that DaJaz1’s owners should have been able to contest) were secret and all the filings were sealed, hidden from the site owners themselves.

Once the details of the accusations came out, it turned out that the allegedly infringing songs were given directly to the blog by copyright holders’ agents in the hopes of promoting the music. The RIAA was the source of the original complaint, and one of the songs in question was not even released by an RIAA label.

Another operation using similar methods but for a different goal—seizing sites with child pornography—mistakenly took down 84,000 sites in one shot, resulting in each of those thousands of sites being down for 3 days. Even worse, each domain was redirected to an ICE notice that the website had been seized for trafficking in child pornography. Nearly all of those sites were not dedicated to child pornography, and to my knowledge, ICE never even apologized to them for the error.

Further, it takes little imagination to picture a devastating chill on legitimate sites that make fair uses of copyrighted content. If I run a news and commentary site, I may be less likely to include portions of copyrighted works, even if such inclusion is very likely fair use and crucially relevant to my discussion of the matters at hand.

In particular, media criticism sites would be in grave peril; how long after the bill’s passage would it be before partisan news outlets started using the new law to silence their critics? How long before FoxNews goes after Media Matters for America? Think that’s far-fetched? Witness Righthaven’s efforts to sue bloggers for using even brief quotations. And what was on the list of threats they used to scare people into paying licensing fees? Domain seizure. Among other things, these bills would give a hunting license to those who would like to shutter the sites of upstarts, competitors, and critics.

At least these bills will stop piracy, right? Hardly.

Dedicated infringers will still find infringing sites—especially foreign sites that host infringing files with impunity. Remember, the feds are seizing the site name (e.g., Google.com) but not the number behind it (74.125.39.147). All you need is a small program to tell your computer to go to the right number—and, because the bill will forbid your ISP from getting you there, a proxy server in the middle. The same strategies have already proven successful for dissidents behind government firewalls, who still manage to upload and download forbidden information—despite far more active, on-the-fly, and resource-intensive censorship schemes.
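
To illustrate how little the blocked name matters once you know the number, here is a minimal sketch (with a placeholder name and address, not any real seized site) that connects straight to an IP address and names the site in the HTTP Host header, so no DNS lookup through the ISP ever happens:

```python
# A minimal sketch with placeholder values, not any real seized site:
# connect straight to a known IP address and name the site in the Host
# header, so the ISP's DNS filter never gets a chance to intervene.
import socket

KNOWN_IP = "93.184.216.34"   # hypothetical: an address learned from a friend, a plugin, etc.
SITE_NAME = "example.com"    # hypothetical: the name the server expects in the Host header

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {SITE_NAME}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((KNOWN_IP, 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    first_chunk = sock.recv(4096)

print(first_chunk.decode("ascii", errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```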

Programmers have already developed tools to work around these restrictions. The law hasn’t even passed yet, and already there is a Firefox plugin that would help users work around SOPA-like restrictions.

You might think that at least payment processors and advertiser networks would be scared off of dealing with these sites. If it were that easy—if we could target the banks and advertisers that support internet scofflaws—then spam and other internet evils would have long since been wiped out.

The internet breeds decentralized innovation, and innovators will spring into action to help users circumvent ISP and search engine filters as well. This software will also be considered grounds for legal action—with the goal being to ban the tools, as the 1998 DMCA bans DRM-hacking devices. That’s worked so poorly that multiple free circumvention tools are available for most major DRM systems. There are so many DVD rippers that LifeHacker has a post comparing rippers to help you choose the best.

As if all of the above failures and offenses were not enough, these bills would harm our economy and reduce our competitiveness in the internet age. If SOPA were law when YouTube was getting started, the site probably would have been shuttered. The next YouTube will be much less likely to be born in the US if it can be kicked out of the legitimate portion of the web before it has really grown up. The EFF warns that sites like Etsy, Flickr, and Vimeo would be in danger.

Internet innovation is one of the few bright spots in the economy, and major internet firms have warned that this will increase the cost of regulatory compliance and decrease our competitiveness. Venture capitalists have also warned that SOPA would substantially decrease their willingness to invest in US technology start-ups. Union Square Ventures, just down the street here in NYC, even put this link saying the same thing on their homepage.

Senator Ron Wyden (D-OR) has placed a hold on PROTECT-IP, and he has even vowed to filibuster the bill should it come to the Senate floor. Because of this principled opposition and his long record of standing up for internet freedom, I made a donation to Sen. Wyden’s re-election campaign—even though my wife and I are watching every dollar as we save to buy our first home.

So these bills are terrrrrible, but they enjoy a lot of support in the House and Senate—30 cosponsors in the House, and a whopping 40 in the Senate. This post is derived from an email I sent to my Senators and Representative, and all three wrote back with disappointing notes to the effect of, “Yeah, but we gotta stop internet infringement.” Surely this is unrelated to the content industries having spent far, far more money on lobbying and campaign donations than their opponents on this issue.

Which brings us back to democracy.

In response to these bills, we have seen the swelling of a major internet movement—nearly the groundswell we saw around network neutrality in 2006. Opponents created a campaign declaring November 16—the day of a hearing in the House that was heavily stacked in favor of SOPA—as “American Censorship Day,” a campaign that went viral in a major way. Over 6,000 sites including Wikipedia, Creative Commons, Mozilla (including the default start page in Firefox), Reddit, TechDirt, and BoingBoing, directed traffic to a single action site, AmericanCensorship.org. At the time, the site said that it had generated over 1,000,000 emails and four calls per second to Congress. To date, AmericanCensorship.org has earned over 650,000 Facebook likes and 63,000 tweets.

This is democracy in action. After all, most people don’t support draconian copyright enforcement, and a solid majority of people oppose government attempts to block access to infringing materials. (40% support, 56% oppose; this skews to 33% for, 64% against when framed as censorship.)

If Wyden’s hold and the opposition can stop this fast-moving train(wreck), then perhaps democratic values and majority opinion can actually shape the future of the internet. Just maybe, a public outcry can stop a terrible idea backed by special interests.

If not, we may be in big trouble—and not just because the internet will be broken.

Thoughts on Eli Pariser’s “The Filter Bubble”

Eli Pariser, the former Executive Director of MoveOn, has a new book out on the social impacts of the internet.  It’s quite good – reminiscent of Cass Sunstein’s Republic.com and Infotopia, in that it is utterly readable, carefully constructed, and critical in tenor.  The important difference between Pariser’s book and Sunstein’s books is temporal in nature: the digital environment continues to evolve, and Eli highlights some elements of that evolution that rightly should concern all of us. Essentially, we’re dealing with a different online environment in 2011 than we were in 2001, and Pariser’s book is a nice guide to the current threats and opportunities coming out of that space.

I had one big “ah hah” moment in the course of reading the book.  “Multidimensionality can be outstripped by improved point prediction.  And that would be a bad thing.”  Allow me to riff on that a bit below:

“Multidimensionality” is a shorthand that I often use when teaching Sunstein’s work.  In Republic.com, Sunstein introduces the concept of the “Daily Me.”  First envisioned by MIT Media Lab’s Nicholas Negroponte, the Daily Me was a personalized web portal, in which each individual received news and information customized to their interests.  Sunstein raised concern about the Daily Me, suggesting that it could produce “cyberbalkanization,” in which competing ideological communities only receive news that reinforce their own points of view, leading in turn to further radicalization.  American democracy has never been calm and deliberative, but we at least have historically been divided through divergent interpretations of the same events.  In the world of the Daily Me, we don’t even interpret the same events – our news becomes hypercustomized instead.

The Daily Me is a provocative concept.  It’s also clearly limited in two respects.  First, the concept is anchored in a time period when personalized web portals (Yahoo or MSN landing pages) were viewed as the future of the internet.  The developmental path of the internet veered off in a different direction.  Web 2.0 took off, and we increasingly spent our time at sites that feature user-generated content and community activity.  When I log on to the web, I check gmail, 3 blogs, and facebook.  Corporations are behind each of these spaces, to be sure, but they’re different corporations than in 2001, and they’re inviting me to engage in different activities than Yahoo and MSN were.  Rather than a hypertargeted news feed, there are the socially derived postings on my facebook wall.  So, for that reason, the Daily Me is a bit dated.  Sunstein himself noted this in Republic.com 2.0, where he suggested we’ve developed elements of a “Daily Us” instead.

The Daily Us can still provide reinforcing views and divergent news agendas, though.  Take a minute to scan the blog posts at DailyKos and HotAir, the top political blogs on the left and right.  Depending on the day, you’re likely to find that they aren’t just using different frames to discuss the day’s news, but instead are talking about different news topics altogether.  Members of these communities, then, are still at risk of cyberbalkanization.

“Multidimensionality” mitigates the cyberbalkanization problem.  Simply put, members of political online communities have non-political interests as well.  I may only interact with liberals on DailyKos, but I have several libertarian friends through Yehoodi and there are a few Republicans who are active Washington Wizards fans as well.  As a member of several communities-of-interest, I’m exposed to people with cross-cutting views on politics, broadly defined.  Our personalities, interests, and affiliations cannot be reduced to a simple one-dimensional (left-right) spectrum, because we also build social capital through a variety of hobbyist communities.  The answer to online communities is …more online communities (cue the recitations of Federalist 10).

For those reasons, I’ve long been convinced that we don’t need to be all that concerned about cyberbalkanization.

And then I read Eli’s book.

The core of Pariser’s concern is well explained in his TED Talk.  Eli is a progressive.  He also has other hobbies and interests.  Thus, he consciously has developed conservative friends, and is tied to them through facebook.  One day however, he noticed that he was no longer seeing their updates in his news feed.  Facebook’s algorithm had recorded that he didn’t click on those links very often.  So it “optimized” his experience by removing those updates.

On the surface, that’s a small issue.  A progressive doesn’t see headlines that weren’t all that appealing to begin with.  But it points to a much bigger problem.  Even at the social layer of the web, multidimensionality is viewed as a type of inefficiency – an engineering problem to be solved.  For the engineers and the third-party advertisers, the goal is better point prediction.  Through improvements in automated filtering, they can reduce the incidental knowledge gains that come through membership in multiple communities.  Facebook, ideally, would like to only show me sports-related updates from my Wizards fan-friends, and only show me politics-related updates from my netroots friends.  Advertisers, ideally, would like to know which elements of those subcommunities most fit my profile.  It’s an engineering problem to them, with an engineering solution.
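
As a toy illustration of that logic (emphatically not Facebook’s actual, proprietary algorithm), imagine a feed ranker that scores each friend purely by past click-through rate and silently drops everyone below a cutoff:

```python
# A toy sketch of engagement-based filtering, not Facebook's actual, proprietary
# algorithm: rank friends by past click-through rate and silently drop updates
# from anyone who falls below a cutoff.

past_clicks = {                       # hypothetical history: (clicks, updates shown)
    "netroots_friend": (45, 100),
    "wizards_fan_friend": (30, 100),
    "conservative_friend": (3, 100),
}

CUTOFF = 0.10  # updates you click on less than 10% of the time quietly disappear

def visible_sources(history, cutoff):
    """Keep only the friends whose updates the user tends to click on."""
    return [
        name
        for name, (clicks, shown) in history.items()
        if shown and clicks / shown >= cutoff
    ]

print(visible_sources(past_clicks, CUTOFF))
# ['netroots_friend', 'wizards_fan_friend']: the conservative friend vanishes,
# and with him goes the cross-cutting exposure that multidimensionality provides.
```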

Of particular concern is that this personalization is going on without our knowledge.  Even if I don’t want it to happen – even if I’d like to hear the contrarian opinions of blues-dancing Ron Paul fans – large social media hubs are going to treat those voices as noise and try to remove them.  Unless I decide to put serious effort into “fooling the filters,” I’m going to be stuck solely with reinforcing views.  And that increases the threat of cyberbalkanization.

I’m tempted to call this another example of the “beneficial inefficiencies” problem.  Multidimensionality may appear as an engineering problem for social media purveyors and the third-party advertisers who pay them.  But it also serves to mitigate some social problems.  As the social web continues to develop, cyberbalkanization could easily reemerge as a substantial threat.  In short, multidimensionality can be trumped by improved point prediction.  And that would be a bad thing.

It isn’t easy to conduct academic research on this sort of “point prediction.”  The engineers and data industries operate under copyright protection, proprietary data, nondisclosure agreements, and trade secret rules.  This is non-transparent data, and there are strong incentives for the companies and engineers to keep it that way.  Pariser’s interviews with Yahoo and Google engineers, as well as his conversations with dozens of social scientists, represent a substantial step forward in understanding the current digital environment.

I’m impressed with Pariser’s book.  It’s well worth reading, and explains these concepts with greater clarity and better examples than I’m providing above.  It’s a nice departure from the normal “cyberskeptic” book (Jaron Lanier and Nicholas Carr providing two recent examples).  It’s well-balanced, thoughtful, and serious.  In a rapidly changing medium, it helps highlight what the Internet has become, where it may be heading, and why that matters.  Pariser asks us not to fear, criticize, or dislike the digital landscape, but to help make it better.  As he notes in his conclusion, “the Internet isn’t doomed, for a simple reason: This new medium is nothing if not plastic.”

Indeed.

Google v. Bing Lawsuit? Not for Violating Copyright

(As always: I’m not a lawyer, I’m definitely not your lawyer, and nothing herein is to be taken as legal advice.)

In light of the revelations that Microsoft has been copying Google’s search results and feeding them into its Bing results, there’s a discussion about whether and how Google might seek a legal remedy. While “sue for copyright infringement” is perhaps a good default answer in internet law, I don’t think it’s the right one here. There may be other good options, though; I discuss one further below.

Senior Google Counsel William Patry knows a lot more about copyright than I ever will, but I’d be shocked if his team went into court with the claim that their search results are copyrightable. Copyright is only granted to creative expressions fixed in a tangible medium. Databases (compilations of data, including the association between various bits of data) are not subject to copyright unless there’s some creative expression involved, and then, only the creative expression is protected.

I think the clearest case law analogy here is Feist v. Rural, in which the defendant acknowledged having copied the plaintiff’s white pages. Still, the SCOTUS found unanimously for the defense. Why? Because there’s no creativity in collecting the data and alphabetizing the list of names. This is true even though several of the names were fake—and appeared in both the original and the copied version. Sound familiar?

The technology is different, but the legal question is remarkably similar. Google doesn’t create the websites to which it links, and it is exceptionally clear that the sorting that happens in the black box is fully automated and governed by complex equations. In other words, it’s like a much more complicated version of alphabetizing.

Imagine similar copying based on a sorting mechanism that is more complicated than alphabetical order but less complicated than Google search rankings—say, NFL quarterbacks’ passer ratings. If I were a sports blogger, I would have no compunction about copying the list of starters ranked by passer rating from the NFL.com site. Why? It’s just a list of which quarterbacks had which ratings, sorted by a somewhat complicated but ultimately mathematical rating. The NFL could sue me, but it would be pointless.
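
For the curious, passer rating really is just arithmetic. Here is a sketch based on my own transcription of the league’s published formula, so treat the details as approximate:

```python
# A sketch of the NFL passer rating formula, based on my own transcription of the
# published formula: deterministic arithmetic, with no editorial judgment involved.

def clamp(x, lo=0.0, hi=2.375):
    return max(lo, min(hi, x))

def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    a = clamp((completions / attempts - 0.3) * 5)
    b = clamp((yards / attempts - 3) * 0.25)
    c = clamp(touchdowns / attempts * 20)
    d = clamp(2.375 - interceptions / attempts * 25)
    return (a + b + c + d) / 6 * 100

# A hypothetical stat line, just to show the formula runs:
print(round(passer_rating(343, 502, 4643, 45, 6), 1))  # 122.5
```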

We don’t know how the math behind the search results and rankings works, but we do know that it’s an automatic process. Anybody who knows the formula could apply it and get the same results. This means the results aren’t sufficiently creative to be copyrightable. Even though Google’s search software is much more complicated, it’s probably best described as the legal equivalent of alphabetizing or ranking quarterbacks by formulaic passer ratings. I’m perhaps overstating the case, but on a scale from “Shakespeare” to “phone book,” search engine results are practically tripping over the white pages.

One might object, “But software is copyrightable!” Yes, software written by creative human programmers is copyrightable. This includes the code inside Google’s black box. But Bing didn’t copy the code. That would be infringement, not to mention a violation of trade secrets. Bing just copied the results–and not even whole hog, but as input for their own formula–and the results are not themselves a creative expression.

So where does that leave Google’s legal strategy? I know much less about this area of law, but I think they could go for the other default answer for internet law: “Sue for violating the clickwrap license.”

Here, the case law seems to be much more on their side. One reasonably analogous case is Register.com v. Verio. In this case, Plaintiff Register.com won an injunction against Verio for repeatedly and automatically harvesting subscriber data from Register.com’s site in violation of the terms of use.

The fit here is also not bad. Google’s Terms of Service forbid certain uses, including accessing any services “through any automated means (including use of scripts or web crawlers).” Even though the IE users themselves are not automatons, IE is, and apparently it’s serving as a web crawler, harvesting the data and sending it back to Redmond.

Funny coincidence that I’d pick this case, too. Read the slip opinion here (pdf), and check out the participating attorneys. Guess who was the lead attorney for Register.com, the victorious plaintiff… William Patry. Maybe I’m not so far off base here in predicting a Register v. Verio-based strategy.

Google may well let Bing’s actions speak for themselves and avoid the legal route altogether. That’s a fine PR strategy, and suing also may not be worth the political cost of giving fodder to Google’s opponents on other issues down the road. But if they want to sue, I think copyright is a terrible route, while breach of contract may be a good route.

There are still other legal options, to be sure. But as “Chainsaw” Dan Snyder reminds us, suing isn’t always the best option.

Why Media and Journalism Scholars Support Network Neutrality

[This is a draft blog post as submitted to SaveTheInternet.]

Academic associations tend to be politically conservative.

I don’t mean they revere Ronald Reagan and Milton Friedman, though plenty of scholars do. Rather, each group–representing a field’s professors and graduate students–tends to evade controversy, rarely taking a public stance on an issue that might divide the membership.

Thus, it is remarkable that the Association for Education in Journalism and Mass Communication (AEJMC) has declared its support for network neutrality.

The issue is too important to stay on the sideline any longer.

AEJMC represents a diverse group of scholars who research and teach nearly everything related to mass media. Based on our research–and, in some cases, years of industry experience–we know the media business, and letting ISPs pick online winners and losers is bad policy.

Nearly all revolutionary internet ideas–from Amazon and Google to Skype and Twitter–came from cash-strapped outsiders. Somewhere in the world right now, another tinkerer is developing what might become the next big idea. Before it catches on, though, ISP demands for a broadband toll might strangle this idea in its crib.

Also, some of the best stuff online never turns a profit. Imagine if, in 2001, Wikipedia had to pay through the nose just to compete on a level playing field with Encarta. It may have stalled, and even today, forcing Wikipedia into the slow lane would harm and might kill the project.

AEJMC is also concerned about the slow death of the daily newspaper’s business model. We embrace the internet age, but we also hope to ensure financial viability for “print” journalism. ISP tolls would make this much harder.

MSNBC and FoxNews could afford to pay extra for the rapid delivery of rich, interactive media. Most newspapers could not, forcing them to choose between deeper debts and worse user experience. Citizen journalists and exciting nonprofit experiments would also be muted by ISPs.

In addition to concern about the media system in general, we also have a selfish motivation to support network neutrality: Our roles as scholars and teachers. Academics in all disciplines depend heavily on the internet, and most of the educationally valuable content is not backed by big corporations.

If ISPs choose winners and losers online, the online content we professors assign would not often win. Would ISPs bend over backward to ensure my students’ access to the PDF of James Boyle’s Creative Commons-licensed book? Or the Internet Archive audio of WWII-era radio broadcasts?

Boyle and Archive.org are great, but I don’t expect them to pay off Verizon just to make my students’ downloads faster. This means my students have less access to educationally valuable content, they learn less, and the educational value of the internet drops. The same will be true of my research productivity.

As students of the media system and as researchers and educators, we deeply value and respect the neutral internet. It is a privilege to have contributed to the drafting of the AEJMC statement, and I thank AEJMC President Carol Pardun for having the courage to lead this charge.

P.S. As if ISP profiteering weren’t enough, other interested parties are muddying the issue. The copyright industries, for instance, are desperately trying to force and cajole ISPs into serving as the copyright cops.

P.P.S. In the interest of full disclosure, I am the co-author (along with Minjeong Kim of Colorado State) of a research project examining the online framing of network neutrality. This project won a competitive research grant from AEJMC, though this is in no way related to my long-established opinions on this issue.

AEJMC Supports Net Neutrality

I was excited when Carol Pardun, President of the Association for Education in Journalism and Mass Communication, told me that the group would be issuing a statement supporting network neutrality. I was ecstatic when she asked for my input on the statement.

Now, the statement is out, and I’m listed as a contact. Later today, thanks to the good eye of Josh Stearns at Free Press, I’ll be writing a post for the SaveTheInternet blog.

Here’s the text of AEJMC’s statement on net neutrality:

AEJMC Supports Net Neutrality

FOR IMMEDIATE RELEASE

January 26, 2010

Contacts:
Carol Pardun, AEJMC President (803) 777-3244, pardunc@mailbox.sc.edu
Bill Herman, AEJMC Member and Media Law Scholar, (215) 715.3507 (mobile), billdherman@gmail.com

AEJMC Supports Net Neutrality

The Association for Education in Journalism and Mass Communication (AEJMC) urges the Federal Communications Commission to adopt rules preserving open and nondiscriminatory access to the internet.

The debate about network neutrality is complex and contentious, but we wish to address a specific myth advanced by network neutrality opponents: that this regulation would stifle innovation and create disincentives for investment in next-generation broadband networks.

The AEJMC rejects this claim.

The most important internet innovations have not come from network providers, but from creative outsiders who built their inventions on top of a neutral network. Requiring network neutrality is vital to preserve competition and investment in internet content, services, and applications.

The FCC should codify the internet openness principles that already guide the agency, and Congress and the courts should support this move. The rules would protect both consumers and innovators of content, services, and applications from unfair discrimination by internet service providers. Perhaps most importantly, these rules would help preserve and develop the internet as a key tool for communication that serves our democracy.

This statement was issued by the President of AEJMC and through the President’s Advisory Council.

Related links

* Federal Communications Commission
* Network Neutrality (Wikipedia)
* “Net Neutrality” in the news (Google)

About AEJMC

The Association for Education in Journalism and Mass Communication is a nonprofit, educational association of journalism and mass communication educators, students and media professionals. The Association’s mission is to advance education, foster scholarly research, cultivate better professional practice and promote the free flow of communication.

# # #

Thanks to Newsweek for Having Me at News/Geek

Just a quick, 24-hours-overdue thanks to the folks at the Newsweek Dev Team for hosting me last night at their third News/Geek event.

I had a rollicking good time, the questions were awesome, and the post-talk celebration was even better. If you want the Powerpoint, it’s here in all its 12.2 MB glory.

Further discussion welcome.

Tiered Broadband Pricing and the Myth of the Internet Flood

Over at Public Knowledge, Robb Topolski has written an inspirational post, ISPs Behaving Badly, which criticizes Time Warner’s trial runs at tiered pricing.

I’m not opposed to tiered pricing in principle, though TW appears to have handled it rather badly, and it still fails to solve the root problem of weak competition in the wireline ISP market. Also, I’m skeptical that it’s necessary–rather than a way for TW to keep maintenance costs down and prices up in a market where consumers have few other options.

I really appreciate Topolski taking on the ever-invoked myth that the internet is about to become so choked up that it will be unreliable. This is the threat that the “Internet Tubes” will get full, the same threat that then-Senator, now-convict Ted Stevens was invoking all the way back in 2006.

Basically, this threat is still a bogeyman and looks to be so indefinitely. Last year, Telegeography concluded, “Internet traffic is growing fast, but capacity is keeping pace.”

Further, DSL Reports debunks the “exaflood myth” in their typical sharply opinionated style.

For a more detached, scholarly view of internet traffic, see the Minnesota Internet Traffic Studies (MINTS) site. Chief investigator Andrew Odlyzko and company are doing great work here. He also suggests that, if anything, the rate of growth in wireline broadband traffic is decreasing. The most recent MINTS post cites a Cogent estimate of 30% growth in internet traffic in Q4 2008 versus 2007.

Last February, Odlyzko argued that, at least as far as the network industries are concerned, internet growth may be too slow. And that was based on even higher estimates of growth; Odlyzko’s estimate at the time was that internet traffic grows at about 50% per year.

The key is that the cost of managing a network declines by about one-third per year. Even exaflood believer Lawrence G. Roberts adopts that estimate, following Moore’s law.

If next year’s per-bit cost is roughly 2/3 of this year’s, and next year’s total traffic is around 3/2 of this year’s, then network providers spend about the same year-over-year on network maintenance (2/3 * 3/2 = 1) and thus make the same profit per subscriber.
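
In back-of-the-envelope form, using the rough figures cited above (roughly 50% annual traffic growth and roughly a one-third annual decline in per-bit cost):

```python
# A back-of-the-envelope sketch of the argument above, using the rough figures
# cited (about 50% annual traffic growth, about a one-third annual decline in
# per-bit cost). These are approximations, not forecasts.

traffic_growth = 0.50         # traffic next year is about 3/2 of this year's
per_bit_cost_decline = 1 / 3  # per-bit cost next year is about 2/3 of this year's

spending_ratio = (1 + traffic_growth) * (1 - per_bit_cost_decline)
print(round(spending_ratio, 2))  # 1.0: maintenance spending stays roughly flat year over year
```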

Of course, it’s very un-sexy to tell your stockholders that per-subscriber profits will be the same as last year, especially considering the ever-decreasing potential for new subscribers in a broadband market that is approaching saturation.

Thus, dare I suggest: Maybe the exaflood threat is actually about broadband providers leveraging their way into a new business model–whether the Tony Soprano business model of “Charge Google,” or the wireless carriers’ model of tiered pricing.

To draw a comparison with the wireless industry is instructive; even when wireless data transmission is more than doubling every year, wireless carriers keep charging lower prices for better service and rolling out ever more reasonably priced all-you-can-everything plans.

Where there’s even modest (and far from ideal) competition, customers come out far better than in the duopoly-at-best home broadband market.

But then again, maybe “global traffic will exceed the Internet’s capacity as soon as this year.” That is, if you listen to Phil Kerpen’s commentary at Forbes–from January 2007.

On community blogs and dealing with the crazies in their own midst

In attempting to build a vibrant community-of-interest, every political blog faces a policy choice of sorts: what kind of commentary will we allow.  Some of the basics are easy to sketch out and universally applicable.  Disagreement is good, but flame wars are bad.  Don’t engage in ad hominem attacks. The site owner/moderators reserve the right to take away posting privileges if you are obviously just there to antagonize the community.  The low transaction costs of the internet make it very easy for a hostile liberal or conservative to jump onto the comment boards of their ideological opponents and start acting obnoxious.  Whether this ideological diversity is supported when polite is an open question.  There aren’t a lot of Republicans on DailyKos or Democrats on RedState, but that could either be because they get banned or because they eventually get bored and give up.

A trickier policy choice can perhaps be summarized as “how do we deal with our own crazies.”  On either end of the political spectrum, there exist a tiny minority of tinfoil hat-wearers.  The most radical offline leftists set fire to auto dealerships and ski resorts.  The most radical offline conservatives start militias and shoot up churches.  Online, how are we to distinguish them, and what are we to do about them?

Conveniently, online crazies tend to grab hold of a popular conspiracy theory and not let go.  On the left, these are the “9/11 Truthers” and, post-2004, the “Ohio Fixed Election” folks.  On the right, we have the “Obama birth certificate” fanatics.

I raise this because I’ve started to think recently that Markos Moulitsas made a particularly important policy decision in the early days of DailyKos.  Wanna see how fast you can get banned from DailyKos?  Post a 9/11 conspiracy diary.  Same with the 2004 election conspiracy theory.  Kos took a hard line on this talk and said that it would have no place on his site.  You want to help build a progressive majority?  Welcome to dKos.  You want to talk about statistical variations between exit polls and final results, or the spookiness of Diebold?  Banned.

The conservative blogosphere is currently experiencing a surge in traffic, as online conservatives have something more to complain about than all the liberals on tv (this supports a deeper theoretical argument about “political opportunity structures” and innovative campaign technologies, but I don’t want to give away the ENTIRE dissertation on this blog…).  From what I can tell, they haven’t adopted the same policy stance (I haven’t conducted a large-scale content analysis yet, so feel free to correct me in the comments).  Want to claim that Obama is a foreign-born Manchurian candidate?  Welcome!  Gateway Pundit, in particular, has soared up the conservative rankings in the past few months, all while exhibiting a type of borderline hysteria that cannot be too attractive to mainstream conservatives (you may not like Obama’s tax proposal, but that doesn’t make him Mao or Stalin).

My hunch is that this policy choice serves as sort of a path-dependent critical juncture in the development of online political communities.  When a new visitor drops by the site, what is the tenor of the conversation like?  DailyKos has made a series of policy choices in support of their goal of being a “reality-based community.”  Whether you like them or not, the tenor of the conversation bears little resemblance to the caricature presented by Bill O’Reilly, and a reasonable argument for why dKos has gotten so large is that Kos chose to lop off the most extreme-left commenters, making the tenor of the conversation better reflect the preferences and opinions of the much larger population of less-extreme, but less outspoken, progressives.  Which conservative community blogs will take a similar policy stance, and how will it play out in the development of online conservatism?  Anybody have a good guess or two?