The Blurring Boundaries of “the Blogosphere” (Or, Research in Internet Time, Exhibit #8571)

I used to study the political blogosphere.  My first three research papers were on the blogosphere.  First I put together a ranked tracking system for comparing elite political blogs.  Then I designed a typology of “blogspace” that separated individual blogs from community blogs, and institutionally-based blogs from personal blogs.  Then I researched the role of community blogs like DailyKos in turning Republican political gaffes into substantial political mobilization.

Then I became convinced that there isn’t any such thing as the blogosphere anymore.  Blogging is just a format for typing things and putting them online.  In the early days of blogging (1999-2006ish), the subset of Internet-writers that used this format was small and relatively well networked.  It made sense to talk about “the blogosphere,” because there were identifiable clusters of people using this digital tool, and they had distinct goals, priorities, and values.

But as blogging proved useful, it was adopted by more people, and adapted to a wider set of aims.  Talking about “bloggers versus journalists” stopped making much sense once the New York Times and Washington Post started hosting blogs on their sites.  Talking Points Memo used to be the blog of just-some-guy named Joshua Micah Marshall.  Then he developed a business model and started hiring journalists.  Then his site won the Polk Award for investigative journalism.

And then, of course, we started getting alternate digital formats that better supported some of the purposes that blogs used to be aimed at.  Atrios (Duncan Black) and Instapundit (Glenn Reynolds) were two early influential bloggers who both stylistically chose to write 20 or so brief posts per day.  They were usually a sentence or two, with a link to something interesting.  Today, most bloggers write longer posts.  A couple sentences plus a link has become a tweet.

Andrew Chadwick calls this rapid dissolution of media genres “hybridity.”  One of the major points he makes in The Hybrid Media System is that our newer, hybrid media system encourages nimble organizations that experiment with a wide assortment of tools and technologies.

The latest reminder of this trend comes from DailyKos.  I’ve been thinking a lot recently about Markos Moulitsas’s post from earlier this month, on traffic surges at the site.  Here’s a key point:

Email action list. We’re no longer just a website, or a mobile site. Our email action list has grown so large, it’s now one of the largest in the (non-campaign) progressive movement. As of the end of August, the list is 1.6 million strong, which means it has literally doubled in size every year for the last three-four years. That list gives us the ability to create massive pressure when necessary. For example, check out this report from the Sunlight Foundation on the 800,000 public comments the FCC received on its Net Neutrality plan. Of those comments that Sunlight could directly source to their sponsorship organization, fully 10 percent of them came from Daily Kos, making us the fourth largest source of pro-Net-Neutrality energy (behind CREDO, Battle for the Net, and EFF).

DailyKos.com has 1.6 million members on its email list.  Those members receive daily updates on breaking stories and popular diaries at DailyKos.  They also receive calls-to-action, urging them to participate in online activism.  I’ve heard that DailyKos is building a field program as well, with a goal of supporting offline organizing.

There’s still blogging at DailyKos.  There will always be blogging at DailyKos.  And there’s still a community of diarists who use DailyKos to publish thoughts, opinions, comments, and reportage.  But it no longer makes sense to talk about DailyKos as a part of “the blogosphere.”  The blogosphere is a concept from ten years ago that seems to have already gone past its expiration date.  DailyKos has succeeded because it has morphed from a community blog into a more complex digitally-mediated political organization.

Just when we researchers get comfortable talking about a digital phenomenon, the phenomenon itself morphs and changes into something new.

Facebook at 10, and Internet Time Revisited

Robinson Meyer has a nice piece at TheAtlantic, discussing Facebook’s web publishing surge.  Websites within the Buzzfeed Partner Network now get nearly 4x more traffic through Facebook than through Google.  That’s… a pretty big deal.  Google used to be synonymous with the “attention backbone” of the internet*.  Now, it appears as though the Facebook “wall” is overtaking the Google search.

It’s a particularly timely piece, because Facebook just turned 10.  And Facebook’s digital publishing surge is not a natural outgrowth of its ten years of success.  As Meyer puts it:

“The kind of traffic surge from Facebook—so vertiginous to be almost hockey-stick-ish—wasn’t an accident. Facebook didn’t grow at that rate in 2013, especially among U.S. users, and “naturally” eclipse Google. As I’ve written before, Facebook’s directing that kind of traffic because it wants to direct that traffic—it wants to be a digital publishing kingmaker.”

I remember learning about Facebook in 2005.  I was in grad school, and a teaching assistant for a large undergraduate intro-to-politics class.  All of my students had created Facebook accounts to go along with their Myspace accounts.  Since I had a university email address, I created one too.  But I didn’t see much point to the site.  It was an exclusive, barebones version of Myspace.  No one I wanted to socialize with was on the thing, and “poking” seemed innately stupid.

Facebook-as-digital-publishing-kingmaker was not foreseeable in Facebook’s initial years.  Hell, it wasn’t even foreseeable two years ago.  Facebook changed as it grew, and as other parts of the World Wide Web grew around it.  That change doesn’t occur along a single vector, or in response to a stable five-year strategic plan.  I’ve written on this subject before.  It’s a concept that I call “Internet Time.”

In secular time (normal human being time) a decade isn’t really that long.  Ten years ago, everyone was watching J.J. Abrams shows on television (and Lost hadn’t disappointed us yet), and watching Peter Jackson’s film adaptations of J.R.R. Tolkien on the big screen.  Hollywood was being awful about copyright, and environmentalists were warning that it was long-past-time that we got serious about addressing climate change.

By comparison, 10 years is an eon in Internet Time.  Blogs were still in their nascent stage ten years ago.  The iPhone wasn’t invented until 2007.  The iPad was science fiction. Hell, YouTube didn’t even exist in 2004.

This is a pretty important distinction.  It means, when we study Facebook use over time, the object of analysis is unstable.  Facebook in 2014 performs a different function than Facebook in 2009.  And this isn’t simply because people have started to use it in different ways.  It’s because Facebook’s engineers have modified the system itself.  In its first few years, the Facebook Wall didn’t exist.  Then it provided you with status updates from your friends.  Now it provides you with news and opinion pieces, and steers you away from low-quality content farms, and charges companies to boost their wall content.  All of these engineering decisions and policy decisions matter.  They make Facebook at 10 something different than Facebook at 7 or 5 or 1.

When we study Facebook’s role in politics, or news, or entertainment, our empirical research has a relatively short half-life.  By the time an article makes it through peer-review and publishing, the object of analysis may have changed in ways that invalidate  many of the findings.  (Example: if someone conducted a solid study of Facebook and digital publishing traffic in 2011, it likely wouldn’t be published until this year.  Those findings would be robust for Facebook circa 2011, but inaccurate for Facebook circa 2014.)

This all reminds me of a passage from Kurt and Gladys Lang’s classic 1968 book, Politics and Television (further discussed at QualPoliComm).  The Langs argue that television does not reflect reality; it refracts reality.  The introduction of the TV camera alters and helps to create the scene.  As they write: “Refraction inheres in the technology, but the particular angle of vision rests on the decisions and choices within news organizations and how an event is to be reported.”

Facebook is also a refracting media technology.  And the angle of vision rests on the decisions of engineers and A/B testers.  But that angle of vision is also constantly changing, constantly evolving.

We can be confident that social media refracts, rather than reflects.  But Internet Time means we constantly have to revisit just what is being magnified or obscured.

 

*”Attention Backbone” is Yochai Benkler’s term.  I love it and am borrowing it for a slightly different context here.  You should read his recent paper about the SOPA mobilization, though.

On Coding My Own Data (Reflecting on Research Methods)

[a long research methods post.  Because who doesn’t like reading about research methods during their holiday break?]

I’ve developed a daily routine.  At 2PM EST, I stop whatever I’m doing and go collect data.  I launch an Excel spreadsheet, open browser windows for petitions.moveon.org and change.org, and record data on the top 10 petitions at each site.  It takes about 15 minutes per day.  I’ve done this for two months so far.  I have another four months of the activity planned.

It’s an intentionally low-tech approach to studying digital activism.  I have friends and coauthors who could scrape together a Python script and automate the whole process.  I also have a part-time research assistant who could obviously handle this herself.  And I have contacts at both organizations who could probably compile six months of this data over their lunch break.  Oh, and don’t even get me started on the clunky use of Excel.  Combing through all these data points and converting them into graphs is going to be a PAIN.  Data collection is pretty boring work.  This sure doesn’t appear to be an efficient use of my time.
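(For the curious: the automation really would be simple.  Here’s a minimal sketch of what such a Python script might look like — the CSS selector and page structures below are hypothetical placeholders, since neither site publishes a stable API for its top-petitions list:

```python
# Hypothetical sketch of automating the daily 2PM collection.
# The selector below is a placeholder -- the real pages would need
# to be inspected, and both sites' markup changes over time.
import csv
import datetime

import requests
from bs4 import BeautifulSoup

SITES = {
    "moveon": "https://petitions.moveon.org/",
    "change": "https://www.change.org/",
}

def top_petitions(url, selector=".petition-title", limit=10):
    """Fetch a page and return the titles of its top petitions."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)[:limit]]

def collect(outfile="petitions.csv"):
    """Append today's top-10 lists from both sites to a CSV file."""
    today = datetime.date.today().isoformat()
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        for site, url in SITES.items():
            for rank, title in enumerate(top_petitions(url), start=1):
                writer.writerow([today, site, rank, title])

if __name__ == "__main__":
    collect()
```

Run it from a daily scheduler and the spreadsheet fills itself.  Which is precisely what I’m choosing not to do, for the reasons below.)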

There are three advantages to coding my own data, though.  (And they’re advantages that I never see discussed in methods textbooks or research appendices.)

[Screenshot: distributed petitions]

1. Thought-work: Those 15 minutes per day are a cognitive commitment on my part.  It’s time that I have set aside to think about distributed petition platforms.  And since the actual data entry is a rote and mechanical activity, my mind is free to wander on the topic.  How are the two sites similar?  Where do they diverge?  What topics are popular?  What drives signature spikes? Am I seeing any patterns?

The human mind is a pattern-recognition machine.  And digging into the data often reveals those patterns as false-positives.  But without this daily thought-work, I wouldn’t have many worthwhile hypotheses to test with my data.*  There’s nothing inherently fascinating about the daily churn of distributed petition platforms.  If someone handed me a complete six-month dataset tomorrow, I wouldn’t immediately know what to look for.  The scheduled rigor of data collection helps me to figure it out.

Essentially, I’m establishing a beneficial inefficiency within my research process.  Offloading the data collection to a python script or a research assistant would be more efficient, but would also relieve me of a useful cognitive commitment.

2. (Bloggable) Moments of Clarity: Visiting the two sites every day can lead to moments of clarity, where I think I figure something out.  These moments turn into ShoutingLoudly posts.  In the past month, I’ve written two posts about distributed petition platforms.  There are a few benefits (and one drawback) to this habitual blogging.

My dissertation advisor gave me a great piece of advice once: “just start writing.  That’s how you figure out what you know.”  Writing is a lot smoother when an idea is fresh in your mind.  It’s a lot easier to convert messy blog posts into clean academic articles or book chapters than it is to start with a big dataset and a blank page.

Another benefit of the blogging is it leads people to read my stuff and challenge me.  I find out what resonates and what falls flat.  I get pointers toward interesting new directions.

The one drawback is that the blogging may alter the data.  Earlier this month, I criticized Change.org for putting a solidarity petition with no theory-of-change in their #1 slot.  The next day, the petition had dropped to #8.  That drop may well have been a response to the critique.  Research methods textbooks caution against “infecting” the data in this manner.  If the act of observation alters the process you are observing, then your results are tainted.  That’s a reasonable concern.  But it’s balanced against the value I gain from sharing early findings.  I find it to be a net positive.  (And really, if their rankings can be influenced by an academic blog post, then that suggests there’s too much variance in the system to speak confidently about causal processes anyway.)

3. Augmenting Mixed Methods: I never rely solely on one research method.  I count things, process-trace through case studies, interview people, and experience processes firsthand.  The daily data collection has spillover effects for these other methods.  As I collect my data, I take note of cases that deserve a deeper look.  I also figure out the right questions to pose during interviews.  And blogging my early insights can lead to email and twitter exchanges with smart practitioners which, in turn, can lead to additional interviews or research questions.

All of this is messier than it sounds in the textbooks.  That’s also by design.  I wrote an article last year titled “Social Science Research Methods in Internet Time” which talked about the values of “transparency” and “kludginess.”  The idea is, when studying underlying phenomena that are still in flux (like digital politics), it’s important to embrace the messiness of your research design and be transparent about its limitations.  Keeping close to the data is one way that I stay aware of my kludges and invent new hacks for understanding the field of analytics-based political advocacy.

—–

“Collecting Your Own Data Builds Character.  I now have enough character.”

That’s a laugh-line I used to use when presenting research from the Membership Communications Project.  Back in 2010, I signed up for the email lists of 70 advocacy groups.  I collected over 2100 emails from them over a six-month period, and hand-coded each of them.  I also watched Rachel Maddow and Keith Olbermann every night and recorded the topics of the two shows.  The data analysis was tedious and left me with a wicked caffeine addiction.  But it also left me with an unmatched understanding of e-mail membership activation strategies.

So that’s why I hand-code all my own data.  Call me the crotchety old guy of the “big data” age.  While everyone else is learning Hadoop and Python, I’m still futzing around with Excel.  But there’s a method to the madness.  It’s thought-work, which leads to insights, which improve my other methods.  Coding my own data gives me a feel for the research topic.

It’s inefficient as hell.  But it’s a beneficial inefficiency.

Happy New Year, everyone.  Thanks for reading.

 

*This is one difference between the Internet politics subfield and other, more established subfields.  If I were studying negative advertising, for instance, then I could port over testable hypotheses from a robust literature that has developed over the past 30 years.  But barely anyone has studied distributed petition platforms before.  So my research process has to include both theory-building and theory-testing.


Johns Hopkins Gets It Right: Let’s Have Fewer PhD Students

In an effort to begin to address the glut of overqualified adjunct instructors, Johns Hopkins has announced that it is planning to cut its PhD enrollment by 25% and raise the stipend (read: salary) of the remaining graduate employees from $20,000 a year to $30,000.

Hundreds of current Hopkins PhD students are protesting, but they shouldn’t be, and in her writeup at Slate, Rebecca Schuman hits the nail squarely on the head — so much so that I’d like to elaborate a bit on how very right she is.

Generally speaking, a PhD — at least, one earned in the reasonable expectation of getting a “real” faculty job — is becoming a worse bet every year. Schools keep accepting more (and more schools keep creating new PhD programs in more disciplines), while colleges at all levels are relying ever-more-heavily on non-tenure track faculty. This includes adjuncts and (drumroll please) grad students.

This makes tremendous sense as a strategy for a given research university. Adjuncts and grad students (even if you count the tuition waiver) are way cheaper, more disposable, and easier to push around than full-time faculty. The star tenure-track faculty then get to teach more grad seminars. Advise more dissertations. Have more potential co-authors and research assistants floating about. Teach fewer lower-level undergrad courses.

The problem here, though, is that universities acting individually are not acting in the best interests of the academy overall or the nation in general. Collectively, PhD programs are burning through — and burning out — many of the nation’s best and brightest, then turning those same former rising stars into a lurking labor revolt.

Too often today, the people who did the best in undergraduate courses are becoming the burned-out, uninsured, woefully underpaid faces of college education to first- and second-year students. This makes college less valuable in a direct way. It’s hard enough to teach well when you’re paid fairly, have a reliable office, and teach 3 or 4 courses per semester while trying to do research and service. It’s damn near impossible when you’re teaching 5 or 6 courses, on multiple campuses, with little or no office space, little institutional support, and unsure how you’re going to pay your electric bill this month.

This system is also a poor advertisement for the product itself and even the “life of the mind” mentality that college is supposed to foster. If that’s what “too much” college education leads to, students might wonder if they should err on the side of too little. If the mastery of core liberal arts skills like critical thinking, reading difficult texts, and making sophisticated arguments has the appearance of leaving one broke, why should I put my best efforts into reading this book? Writing this essay? The savvy undergrad might think, “Give me the credential and let me get started at a ‘real’ job before your love of knowledge infects me and I wind up in your shoes.”

You know the “correction” the field of law just went through? The one with lots of freshly-minted JDs saying “I just spent a bajillion dollars and 3 years, and there are way too many candidates for every job”? We’ve been doing that in slow-mo in academia for heaven knows how long. It’s taking longer to sink in, of course, because compared to what you earned in whatever crap job you had during your BA, $15k/year and no tuition bill sounds like a great deal. Folks can’t or don’t account for opportunity costs, such as tens of thousands in lost salary, and heaven knows how much in lost opportunity to learn & rise up in other sectors.

More strikingly, nobody (not their undergrad faculty who graduated many moons ago, and certainly not the PhD programs who want as many apps as possible) tells these best-and-brightest about the real costs, benefits, and risks. Undergrad faculty in particular should be much more honest with themselves and their students about how much less repeatable their career trajectory is today versus 10+ years ago and how much depends on raw luck.

We’re also afraid to tell would-be applicants about the importance of the sub-discipline studied. Here, in my jauntiest department chair voice, is what the academy tells PhD students (outside STEM fields):

You there, doing critical cultural studies? And you there, doing detailed historical/archival/anthropological work? Welcome to the adjunct office! You’ll be here until you decide you want to own a home. Or get health care. Or not have your ability to pay rent be contingent on whether a tenured professor gets sabbatical.

You, however… You, with the experience working on a giant grant-funded data-collection-and-article-production machine? With lots of statistical savvy, who can teach the research methods and (field-specific quant) classes that befuddle and/or bore most of your soon-to-be colleagues? We’d really like to talk to you! Pay no attention to those poor souls all crammed into that tiny office there. Their working conditions are the just and fair recompense for their recalcitrant poststructuralism. Now, let me introduce you to our grant support staff.

I’m glad to have postponed my higher earning years to pursue what is (for me) a highly rewarding career, even with the substantially diminished long-term earnings potential — versus, e.g., becoming a private-sector IP attorney.  I love researching in an environment where research productivity is celebrated but not fetishized.  I’m happy to have the chance to shape students’ lives, despite students’ highly varying levels of college readiness.  I love teaching, despite the occasional class disruption due to our building’s mouse infestation.  (Wish that was a joke.)  That should be the expectation for more faculty, further up and down the prestige chain, and it should be a more likely outcome for a smaller set of PhD students.

Even though I’m quite happy where I’m at, there was a point where I realized how very in-doubt this outcome was. I was lucky to have picked communication; I believe we hire a larger portion of our PhD grads as tenure-track faculty than pretty much any other comparable discipline. I was lucky to get into Penn — by acclamation, the top program in media studies in the country, and the co-sponsor (along with Annenberg USC) of the party that all party crashers crash at the conference.

Despite this good fortune, even during my coursework at mighty Annenberg U Penn, I realized that I had only the thinnest grasp on what a Plan B (other than law school — and even more debt and postponed earnings) might look like.  I realized that most potential Plan B employers would see my PhD as having little additional value versus an MA.  More stunningly, I realized how far from certain it was that Plan A would work out.

I don’t blame anyone for not telling me all of the above, not least because I think awareness on this point was much lower when I started my PhD program ten years ago. But today, in late 2013, programs and research faculty and teaching faculty and would-be students all need to come to the same conclusion as Hopkins. We should have fewer, not more, PhD students.

And while we’re at it, how about we work on making a BA more valuable, more broadly taught by tenure-track faculty, and (the horror) harder to earn?

The Downward Spiral of Online Data Quality

Today in the New York Times “Bits” blog, Nicole Perlroth brings us the latest cautionary tale for those who want to trust online metrics a little too much.  Titled “Fake Twitter Followers Become a Million Dollar Business,” the article documents the growing market for fake follower numbers.

You can buy 1,000 followers on Fiverr for $5.  It took me a couple years to reach the 1,000 follower threshold.  …I’m such a sucker.

Perlroth’s post highlights a phenomenon that I’ve discussed elsewhere.  In “Social Science Research Methods in Internet Time,” I phrased it as a general rule: “Any metric of digital influence that becomes financially valuable, or is used to determine newsworthiness, will become increasingly unreliable over time.”*

The drivers of this process are abundantly clear.  Attach value to a digital metric (hyperlinks, followers, retweets, site visitors) and you create an incentive for talented coders.  There’s money to be made in spam blogs and fake twitter accounts.  It isn’t particularly honest money, but it isn’t particularly dishonest money either.

Those coders will introduce noise into the system.  Another set of coders will work on proprietary counter-methods that help cut through the noise.  But that isn’t much use to researchers who are reliant on the publicly-available data itself.  The result is an ever-deepening GIGO (garbage in, garbage out) problem.  Academics often decide to treat follower count, retweet count, site traffic, etc. as direct indicators of influence/success/prominence.  But those indicators were more accurate in 2009 than they were in 2011, than they are in 2013, than they will be in 2015, etc.  The data itself becomes less reliable over time.
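To make the decay concrete, here’s a toy simulation — the numbers and the noise model are pure invention for illustration, not an estimate of the actual bot market:

```python
# Toy simulation of GIGO drift: purchased followers are modeled as
# noise whose scale grows as the fake-follower market grows, and we
# track how well the public metric still rank-orders true influence.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 1000
true_influence = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Invented "market sizes" for four snapshots in time.
for year, bot_market_scale in [(2009, 0.1), (2011, 0.5), (2013, 2.0), (2015, 8.0)]:
    purchased = rng.exponential(scale=bot_market_scale, size=n)
    observed_followers = true_influence + purchased
    rho, _ = spearmanr(true_influence, observed_followers)
    print(f"{year}: rank correlation between truth and metric = {rho:.2f}")
```

As the purchased-follower noise swamps the underlying signal, the rank correlation should slide steadily downward — the same metric, measuring less and less each year.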

This is a systemic property, which means we should be able to plan around it.  Theoretically, that is.  Practically, it’s devilishly hard to do so.  Our best options include (1) relying on metrics that fly under the radar, and thus (potentially) attract less spammer-attention, (2) thinking carefully about what biases to expect (which Twitter-users are most likely to buy spam accounts?  Presidential candidates > Physicists), and (3) developing partnerships with proprietary coders who can offer you higher-quality, constantly refined data.  Each of those options carries its own set of risks and problems, though.

Consider this your semi-regular reminder that the future of Big Data is going to involve just as much messiness and muddling-through as the past and present have.

 

*Self-quoting is weird.

The Dissertation As Teacher

A…provocative article was published today at the Chronicle of Higher Education, titled “The Dissertation Can No Longer Be Defended.”  The article is premised on a pretty flimsy claim: that “The dissertation is broken. Many scholars agree.  So now what?”

The author never actually makes it clear that “many scholars” agree with her premise.  She cites a few innovative new dissertation formats in the digital humanities, and an improved CUNY fellowship package that helps graduate students focus on research rather than TAing.  Those are two very different things.  Better fellowship packages help promote stronger traditional dissertations.  Innovative formats, particularly when they play an augmenting role, are no direct challenge to the dissertation.  (If a dissertation committee opposes your innovative proposal, they quite possibly have a point!)  And speaking as a still-pretty-new faculty member (I defended my dissertation in 2009), I’ve never once heard from a scholar who felt that the dissertation as a whole was “broken.”

Speaking for myself, the process of writing a dissertation was the centerpiece of my graduate experience.  It took me about two and a half years to write the damn thing.   It was far from perfect, though I hardly realized it at the time.  But the finished product was far less important than the process.  Writing a dissertation forced me to learn the habits of a successful academic, which are wholly different than the habits of a successful graduate student.

In my early graduate years, my attention was divided between coursework and the Sierra Club Board of Directors.  The object of my distraction was unique, but most of my peers had their own life-priority outside of The Literature.  I often found myself reading for class on redeye flights, and writing seminar papers during the lunch breaks of weekend meetings.  I developed a set of work habits that let me live in both worlds, though the work in both of them suffered as a result.  They were the habits of a successful graduate student, doing enough to pass classes and leave a decent impression on potential mentors/letter writers/dissertation advisors.

Then it came time for me to write a dissertation.  Instead of a 30-35 page paper, I was staring at a 300-350 page manuscript. It felt foreign. It was a gaping abyss.  None of my previous experiences prepared me for it.  The defining feature of the dissertation, in fact, was that it was too big.  I simply couldn’t use my old hacks and workarounds to get this thing written.  I had to create new ones.

In the early months, I got a little lost.  I had a big idea.  It was too big, and I didn’t quite know that yet.  I let myself stew about and get nothing done.  I watched a lot of sitcoms.  A few meetings with my advisor got me out of that rut.  I settled on a related project that was more manageable in scale — big, but not too big.

Once I had a clear research topic, I still didn’t know what to do next.  I had to learn.  I had to figure out how to break a large project down into meaningful pieces.  I had to learn how to self-motivate, constructing a plan each day for what I was going to get done.  I had to learn how to revise — unlike those seminar papers, the endpoint of these writing sessions went beyond a graded assignment.  I had to learn how to set deadlines for myself, and build a work schedule that encouraged day-to-day productivity on a project that I knew would be years in the making.

None of this was easy.  But neither was it a “hazing ritual,” as today’s Chronicle article suggests.  The endpoint of the dissertation is a large document (or a collection of smaller articles, depending on the norms of your subfield) that an audience of three people find acceptable.  As the old saying goes, “the best dissertation is a done dissertation.” The process of reaching that endpoint molds you into an actual working academic, though.

I reap the benefits of that process every day.  At GW this semester, I’m teaching two graduate courses.  I also have prep hours and office hours, along with faculty meetings.  Those are pretty much fixed on my schedule.  But the rest of the time is self-directed.  I am expected to maintain an active research agenda, but no one tells me what, where, when, or how to conduct that research.  Some of my projects involve coauthors.  Some are independent.  Some are as short as a blog post.  Others will stretch across another 300-350 pages.  I succeed or fail as an academic based on my ability to create in vast, unstructured time allotments. It is the same gaping abyss that I first stared into 7 years ago.  But this time, I’ve been there before.

This isn’t to say that dissertation committees should oppose evolution of the final product.  Scholarly work changes, and dissertations can change too.  But the dissertation isn’t “broken.”  It works exactly the way it is supposed to.  It is long, and challenging, and serves as a major hurdle to entry into professional academia.  There is room to adjust the slope and angle of that hurdle.  There is room for graduate programs to rethink how they support people as they approach it.  But I can honestly say that writing a dissertation was the single best preparation I had for life in academia.

I suspect the goal of the Chronicle article was to attract pageviews, rather than defend a genuine argument.  Mission accomplished: I certainly wouldn’t have penned a lengthy response if the article had acknowledged any of these points.  But in attacking the dissertation in such a haphazard manner, the author is incidentally attacking all of academia.  My friend C.W. Anderson once remarked that ours is the last profession paid to “think slowly.”  I believe that’s a social good; we ought to have segments of society that mull things over.  The dissertation socializes us into the profession, forcing us to develop the right habits.  That shouldn’t change any time soon.

 

Dear Commissioner Copps: Thank You for Your Public Service

On Monday evening, the Hunter College Roosevelt House is hosting an event on media policy and reform, featuring former FCC Commissioner Michael Copps. Sadly, it’s in the middle of my Monday class, so I will be unable to attend — and it’s oversubscribed, so I can’t urge you to attend either.

Still, I’m really excited for my colleague Andrew Lund, who is leading the conversation with Mr. Copps, as well as the many Hunter students and faculty who will be able to attend. Thus, I wanted to share a bit about what I’d like them (and the world) to know about this great public servant.

To fully appreciate how exceptional Copps was as an FCC Commissioner, a role he fulfilled from 2001 to 2011, you need to know how thoroughly the Commission has traditionally been a “captured” agency — that is, generally doing the bidding of the industries that it was constructed, in principle, to regulate.

You should also know how the “revolving door” of government works: After working in government in a position of any real importance, many former public servants take plum jobs in the private sector where they can leverage their regulatory knowledge and even their interpersonal connections to the advantage of their new employers.

Once he started his term at the FCC, Commissioner Copps knew that, after his time in government, he could easily walk into a plum job in the private sector. After all, this had been the route taken by many of his predecessors — as well as many of his colleagues who stepped down in the interim.

Unfortunately, when looking at the decisions of the many FCC folks who turned that experience into very-well-paid private sector jobs, one could be forgiven for wondering whether they truly had the public interest at heart.  Some of their decisions suggest that they were, at least in part, also thinking about their long-term earning potential.  I won’t name names, but all of us who follow communication law reasonably closely know the most obvious examples.

When looking at Commissioner Copps’ decisions, however, nobody could possibly doubt that his true allegiance really was with the public for the full decade of his service. Media reform groups like Free Press and Public Knowledge finally had an unabashed, reliable ally with his hand on the levers of power, on issues from broadcasting to telecommunications to pluralism and diversity.

Want a sense of where Copps stands on the issues? Go listen to this interview with Democracy Now. Or this one. Read this collection of speeches or this collection of op-eds. Over and over again, you see him supporting the importance of using the power of the state to shape a more democratic, fair, and representative media system.

Copps is probably best known for his opposition to consolidation in ownership between media companies. He “was the one vote against approving Comcast’s takeover of AT&T’s cable systems in 2002” (p. 261), but this was just a warm-up.

The real sea change on ownership came in late 2002 and 2003, as then-Chair Michael Powell proposed a substantial roll-back in the rules against media consolidation. Copps and fellow Commissioner Jonathan Adelstein pushed to have substantial public discussion around the proposal, including multiple, well-publicized hearings. Powell said no — allowing just one hearing — so Copps and Adelstein went on tour, holding 13 unofficial hearings.

Through this and other efforts, working alongside public interest-minded NGOs, Copps helped bring major public attention to Powell’s proposal, ultimately bringing it to a halt. This slowed (though certainly did not stop) the process of media consolidation, through which ever fewer companies control ever more of our media landscape.

Copps has continued to be known for his opposition to media consolidation — though unfortunately, when Adelstein stepped down in 2009, Copps lost an important ally in the fight. Echoing the 2002 vote, Copps was the only Commissioner to vote against allowing Comcast to purchase NBC-Universal in 2011.

I would love to say a great deal more about Copps’ time at the FCC, but I’ll say just a few more words on one more issue: broadband regulation. He came in just in time to dissent from the FCC’s decisions to give away the keys to the kingdom on broadband interconnection, in the decision that led to the Brand X ruling by the Supreme Court.

The FCC ruled that broadband infrastructure companies — the folks who’ve used eminent domain and massive public subsidies as key tools as they’ve laid the cable, phone, or fiber lines over which broadband is transmitted — are not obligated to share their “last mile” systems with competitors.  (This requirement for “interconnection” was already in place for landline local and long-distance telephone service, which led to an explosion of competition and plummeting prices.)

The Supremes held that the FCC was within its rights to make the decision, not that it had to come out that way; if Copps had won the day, we wouldn’t be dogging it in the horse latitudes of poor service, high prices, and slow broadband speeds as the world runs past us on all three counts.  In the years after, Copps made the best of a bad regulatory position, serving as the most reliable vote for mandatory network neutrality.

Again, though ownership and broadband policy are among his best-known issues, Copps was a tireless voice for the public interest on virtually every issue imaginable that came before the Commission. Even though he stepped down from the Commission over a year ago, he continues the work today.

Even as a former Commissioner who spent a decade being the thorniest thorn in the sides of those seeking to make a quick buck at the public’s expense, Mr. Copps could still easily make a quick buck himself working for industry.  There are a large number of companies, industry trade groups, and swanky D.C. law firms that would be quite happy to give him a huge salary, cushy office, and first class travel budget to speak on their behalf.

Instead, Copps has moved on to work for Common Cause, one of our nation’s strongest voices fighting for the best interests of ordinary people. This is just the latest in a long line of decisions in which he has chosen to fight for the public interest, even though it’s easier and more lucrative to fight for those who already have disproportionate money and influence.

For public interest advocates, Michael Copps was, at a minimum, the greatest FCC Commissioner since Nicholas Johnson retired nearly 40 years ago — and perhaps the greatest ever. His work at the Commission will be missed, but I look forward to seeing him continue to have a major role in pushing for a fairer, more just media system for many years to come.

One more point, for anybody who’s read this far: As of now, Copps’ Wikipedia page is a mere stub — the Wikipedia term for an article that is too short and needs to be expanded.  In this case, a great deal more needs to be said in order to do its subject justice.  I call on you to help me do this in the coming weeks.  Mr. Copps was and remains a tireless and effective servant of the public, and this is but a small favor we can do in return.

Research Note: The Trouble With Studying Big Data in Campaigns

What are political campaigns doing with our data?  How would we know?

Sasha Issenberg, author of The Victory Lab, gave a talk at GW last night.  The book offers a strong take on the impact of the Analyst Institute on American political campaigning.  It traces the emergence of more sophisticated (and more widely available) voter data, and also traces the emergence of rigorous social scientific experiments that help campaigns optimize their outreach tactics.  It’s well worth your time.

During Q&A, an interesting tangent came up: political campaigns won’t talk with reporters about their data practices.  They don’t want to give anything away that their opponents could use.  The Obama campaign told its staff not to talk to Issenberg.  When other reporters write articles about campaign data mining, the campaigns don’t offer corrections if they’ve gotten it wrong.  What little public record we have of these activities is based on reporters’ best guesses, without the usual corrective of sources shouting them down via the blogosphere.

This morning, one of those potential sources weighed in.  Ethan Roeder, data director of Obama for America, wrote an Op-Ed for the New York Times titled “I Am Not Big Brother.”  Pushing back against some of the hype, he tells us, “You may chafe at how much the online world knows about you, but campaigns don’t know anything more about your online behavior than any retailer, news outlet or savvy blogger.”

The truth is probably somewhere between Roeder and the underinformed headlines.  It’s true that campaigns don’t know anything more about our online behavior than retailers like Target, but what those retailers know is pretty disturbing.  And c’mon, the Obama campaign operates at a scale and complexity far greater than any “savvy blogger.”  That scale matters for what questions the campaign is going to ask, and what it is going to do with our information.

For a researcher who studies how organizations adapt to the digital environment, the real trouble here is that it’s nearly impossible to move beyond vague impressions.  Campaigns have an incentive not to talk to reporters.  They have an even greater incentive not to talk to academic researchers (at least without a non-disclosure agreement firmly in hand…).  When the journalistic coverage gets basic facts wrong, scholars have little way of knowing.  When campaigners disagree after-the-fact, we can’t tell whether they’re correcting the public record or trying to smooth away rightful mistrust.

Academics at our best offer healthy skepticism to the public discourse.  There are important conversations for us to have about the implications of refined digital marketing, management, and persuasion techniques for a healthy democracy.  But it’s going to be systematically difficult to engage in those conversations, because the underlying facts just aren’t going to be very clear.

 

Signing Myself Up for #AcWriMo

26 months ago, a good friend told me about NaNoWriMo (National Novel Writing Month).  It’s a literary form of life-hacking.  The hardest part of writing is getting the damn words on the damn page.*  So aspiring novelists pick the month of November, set the audacious goal of 50,000 words, and then they just write the damn thing.**  My friend had tried it the year before, and was planning to use November 2010 for edits.  She asked if I’d like to join her.

At the time, I had a dissertation, a book proposal, and a long list of excuses for why I hadn’t gotten to work yet.  I decided committing to November 2010 (2 months into my first job as an Assistant Professor) was a perfectly reasonable crazy thing to do, so I said yes.

I didn’t hit 50,000 words by the end of November.  I have these awesome nieces that I only get to see on Thanksgiving.  They’re way more adorable than a book manuscript.  But my improvised academic version of NaNoWriMo yielded about 12,000 words of a book manuscript.  November turned into December/January/February/March, yielding a legitimate first draft of The MoveOn Effect.  

It appears other academics have also been clued in to NaNoWriMo.  Charlotte Frost at PhD2Published.com launched AcBoWriMo (Academic Book Writing Month) last year.  It sounds like a nice online community formed in the process.  And since plenty of academic writing isn’t book-shaped, this year they’re calling it AcWriMo (Academic Writing Month) instead.

Sounds great, sign me up!

Here are my just-audacious enough goals for the month.  They aren’t going to be book-related (starting research on the second book in summer 2013), so I’m instead going to set some aggressive goals for finishing up accumulated side projects:

1. I have a pair of conference papers that are begging for revisions.  I’m going to edit both of them, adding a new empirical section to one and heavily revising the lit review and discussion sections of the other.  By the end of the month, both will be sent off to journals for review.

2. There’s a short piece on the Occupy Movement that I was invited to write for a journal special issue.  That requires combing through all my #occupy blog posts, finding the nuggets of insight that represent genuine contributions, and then rewriting them in the appropriate style.  I’m going to do that as well.

3. It’s also time to start properly planning for the second book project.  I have a grant proposal deadline in November, and I’d also like to write a draft of the second book proposal.

4. I’ve agreed to blog occasionally for a few outlets.  I plan to write at least 3 solid blog posts over the course of November (I’ll post links at ShoutingLoudly).

Those four items unfortunately don’t leave me with a daily word goal to report on twitter to the rest of the #AcWriMo community.  But they do leave me with weekly goals.  2 journal submissions, 1 short essay, 1 grant proposal, and 1 book proposal.  That’s five substantial projects completed in one month, plus a sprinkling of blog posts.  That’s crazy-productive.  Just crazy-productive enough, I think.

The writing starts in earnest tomorrow.  Here’s hoping it turns out as well as last time.

 

—–

*Actually, the hardest part of writing is all of the parts. ALL OF THE PARTS ARE THE HARDEST!

**If you think I’m using the word “damn” too much, I’m guessing you haven’t written a book.  If you think I’m not using it enough, you definitely have.

Research meta-housekeeping: On HuffPo and BAI 2.0

Yesterday morning, I wrote my first piece for the Huffington Post.  I also posted a note to the Blogosphere Authority Index site, explaining that the rankings have been suspended while I tinker with the tracking system.*  There’s a relationship between the two.  Take a look at the toolbar listing under “share this story” in the screencap below:

[Screenshot: the HuffPost “share this story” toolbar]

1,139 people “liked” the story.  480 shared it.  163 tweeted, 63 e-mailed, and 4 Google +’ed.  The post also attracted 14 comments.**

That’s a lot of community activity.  The Blogosphere Authority Index would treat it as very little activity, though.  The BAI algorithm draws upon four types of public data: passive (blogroll) hyperlinks, active (in-text) hyperlinks, total site traffic, and community activity (total number of comments).  When I designed the BAI in 2007, those were the right sources to track.  Content wasn’t easily shareable on Facebook or Twitter.  Both platforms existed, but deep software integration was still years away.

The experience of blogging at HuffingtonPost is different from the experience of blogging at ShoutingLoudly.  There’s no “share this story” toolbar at SL.  I announce these posts on twitter and facebook, but any social media traction they get is strictly D.I.Y.  Facebook isn’t integrated.  And ShoutingLoudly isn’t *quite* the hub that HuffingtonPost is (if AOL wants to purchase the site too, I’m sure all of us authors are willing to listen!).  When I launched the BAI, HuffingtonPost was a blog with aspirations towards being a media operation.  Now, it’s a full-fledged media operation with bloggy roots.

And that signals the reason why I’ve taken the current BAI offline to focus on BAI 2.0.  When I designed the BAI, the goal was to make it “swappable.”  I knew what the best available metrics were at the time, and I knew they would not stay the best available metrics.  The idea was to create a system that could be reengineered without too much headache.

But it’s still a bigger headache than I thought it would be.  The current metrics (sitemeter/alexa for site traffic, blogroll crawls for network centrality, technorati for hyperlinks, and hand-counting/automated counting of blog comments) simply aren’t good enough anymore.  Blogrolls are too static.  They provide a decent map of blog clusters, but no real measure of changes in influence.  Facebook and Twitter have become core tools for sharing and discussion.  They have to be factored into the ranking system.

That’s going to take some time, particularly because it’s practically impossible to automate the data collection on the more-sophisticated sites.  The top sites tend to use customized platforms, which means hand-counting their thousands of reader comments.  I can’t simultaneously run the current BAI and design the next BAI.

So, with apologies to my fellow researchers who want to study the blogosphere in the 2012 election, the dataset is on hiatus (I can already foresee some very disappointed doctoral students in 2014, finding out that the dataset has a hole in it).  The February 2012 snapshot is a decent stand-in for the state of the blogosphere — past research shows that there isn’t a lot of month-to-month fluctuation among the elite blogs.  After three and a half years of data collection, though, it’s time to get under the hood and tinker with the mechanics some more.

Blogging at a major site today works differently than blogging at a major site in 2007.  The architecture has changed, and that has to be factored in to how we measure blog influence.

 

 

*They’ve actually been suspended since March.  I just got around to posting the note yesterday though.

**That screencap is from yesterday afternoon.  The post now has over 2,400 likes.  Which is probably more people than will read my book.  …I can’t actually decide how to feel about that.