Academic-Vent: Bad Habits in Academic Articles

This blog post is going to be more fellow-academics oriented than most of what I write for Shouting Loudly. I review a lot of early work – both as a conference discussant and as a peer reviewer for journals. I’ve noticed a few trends in the digital politics literature that, frankly, bug the hell out of me. These are small items, related to how we present and frame our research findings. So I thought I’d start a thread for listing “academic pet peeves,” so to speak. Please feel free to add, modify, or challenge in the comments section!

4 Things to Avoid When Writing a Research Article on Digital Politics:

1. “Optimists vs Pessimists.”  The standard academic article begins by asserting the existence of two camps: optimists and pessimists.  Digital optimists, we’re told, hold utopian dreams about the transformative potential of Information and Communications Technologies (ICT).  Digital pessimists, meanwhile, think that the new media environment will be horrible, just horrible.  The author then announces that their research develops a middle path, one that proves both camps to be not-quite-right.

That’s an easy crutch for writing a paper introduction. It’s also unforgivably lazy in 2011. As Rasmus Kleis Nielsen noted two years ago, we’ve all moved into the “it’s complicated” camp already. There aren’t any serious digital utopians or dystopians left. So if you’re framing your research against these poles, what you’re actually telling the reader is that you haven’t been paying attention to the literature or to the latest research findings. You’re asking for your research to be ignored (regardless of its breathtaking empirical sophistication) by signaling at the outset that you’re challenging a straw man.

Do better than that.  Provide an illuminating anecdote or case example.  Highlight the substantive or surprising finding up front.  Let go of the old construction regarding “two debating academic camps,” already.  The debates are happening on more interesting terrain today anyway.

2. Heavily dated bibliographies. This point obviously relates to the first one. I don’t care so much whether you cite me in particular*. But I do care that an author is engaging with existing debates. Digital politics is a new field. It’s cross-disciplinary, and concerned with a subject matter that is still rapidly diffusing/morphing/evolving/changing. The “shelf life” of any individual research finding can be pretty brief. But I’d estimate 40% of the papers I’m asked to review have barely a citation from after 2003 or so. It’s uncanny.

If I’m asked to review your paper and you don’t have a single citation from after 2003, that sends a very strong, very negative signal. Eight years is more than a generation in “internet time.” That’s the beginning of the blogosphere, the beginning of the Dean campaign, and two years pre-YouTube. Hell, the iPhone was only introduced in 2007. Internet-mediated politics is changing. The research community has struggled to keep pace. You aren’t doing yourself any favors if you’re ignoring everything published in the past half-dozen years in favor of whatever you read when you took that one grad class that one time.

If something in the digital politics literature is a decade old and still important, you should of course still be citing it. But if you’re spending your research time refuting ideas about Internet politics circa 2003, while ignoring research on the Internet circa 2011, then you’re starting off on a serious incline. Chances are you’re chasing ghost-findings.

3. Put your design limitations in the text.  Put your Krippendorff’s alpha in the footnote!  Krippendorff’s alpha, for the uninitiated, is a standard measure of intercoder reliability.  If you’re running a content analysis, intercoder reliability is a necessary means of demonstrating that your findings aren’t based on a haphazard coding scheme.  A high Krippendorff’s alpha assures the reader that your findings are replicable.**  (For the truly uninitiated, a rough sketch of the computation appears at the end of this point.)

This is an important element of research, particularly in large-scale projects.  But I’ll venture to guess that no one has ever been convinced of an argument’s importance on the basis of a high intercoder reliability score.

Yet I routinely read papers that spend two lengthy paragraphs discussing these basic robustness checks.

Meanwhile, they’ll banish frank discussion of the limitations of their dataset or research design to a footnote.  (“Content was coded three days a week (Monday, Tuesday, and Thursday) by a team of well-fed graduate students in the afternoon and early evening.  All coders were trained at a daylong summit that included a teambuilding ropes course.  Intercoder reliability was established by presenting a second coder with a random 20% of the dataset.  Krippendorff’s alpha was .93, which is considered gloriously high.  Our study is based on comparing the top 10 current political blogs with a handful of 1998 AOL and CompuServe websites that one author happened to have archived on a web browser.  As such, there may be some limits to our external validity.”)

Digital datasets tend to be hellishly flawed and messy.  There’s no way around it.  And your peer reviewers know this.  They’re being asked to review your article because they work with the same hellishly messy data.  Let’s be frank about our research limitations, and consign our robustness checks to footnotes and appendices.  It’ll make for a healthier research community and better-written papers to boot.
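An aside for the truly uninitiated: here’s a minimal sketch of what Krippendorff’s alpha actually computes, in the simplest case of two coders with complete nominal data. The function name and the toy data are my own illustration, not any package’s API; real projects with missing data or more than two coders should lean on a vetted implementation.

    from collections import Counter

    def krippendorff_alpha_nominal(coder1, coder2):
        """Sketch of Krippendorff's alpha: two coders, complete nominal data."""
        assert len(coder1) == len(coder2), "coders must rate the same units"
        # Coincidence matrix: each unit contributes both ordered pairs
        # of the two values its coders assigned.
        pairs = Counter()
        for a, b in zip(coder1, coder2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
        n = 2 * len(coder1)  # total number of pairable values
        # Marginal totals: how often each category appears across both coders.
        marginals = Counter()
        for (a, _b), count in pairs.items():
            marginals[a] += count
        # Observed disagreement: off-diagonal mass of the coincidence matrix.
        d_o = sum(c for (a, b), c in pairs.items() if a != b) / n
        # Expected disagreement if values were paired purely by chance.
        d_e = sum(marginals[a] * marginals[b]
                  for a in marginals for b in marginals if a != b) / (n * (n - 1))
        return 1.0 if d_e == 0 else 1.0 - d_o / d_e

    # Toy run: four units, two coders, one disagreement.
    print(round(krippendorff_alpha_nominal([0, 1, 0, 0], [0, 1, 1, 0]), 2))  # 0.53

The point stands, though: that number belongs in a footnote. It certifies that the coding was consistent, not that it was interesting.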

4. Good Writing Matters.  I’ve come to appreciate this more and more.  We don’t have to be journalists or essayists.  I’m not asking you to become Malcolm Gladwell.  (Really. One of him is enough, thankyouverymuch!)  Don’t oversell your argument, and don’t be glib or oversimplify.  But there’s no excuse for writing a 70-word sentence.  Make your thesis clear.  Use topic sentences.  Make firm claims and back them up with evidence.

Clear writing is evidence of clear thinking.  Memorable writing is more likely to be quoted and cited.  Strong claims are easier to falsify, support, and challenge.  If your research is solid but your writing is muddy, your work will be unfairly overlooked.  And really, why are you making your readers work that hard anyway?  Their effort should be spent weighing your theory and evidence, not deciphering your argument.

I mention this last point because, well, no one teaches us how to write in graduate school.  It’s a matter of trial and error, with minimal feedback built in.  Only the most egregious writing will cause an article to be rejected, so only the most egregious writing mistakes are called out in the peer review process.  That’s a shame, because it leaves us sloppier writers and sloppier thinkers.

I’ve spent the past year refining a dissertation into a manuscript draft, and a manuscript draft into an actual book (which comes out May 2012, by the way).  In the process, I came to realize that (1) I was a terrible writer with plenty of bad habits, (2) I had always mistakenly thought I was one of the good ones, and (3) I could get better.  I think the book is actually pretty well-written now.  And I think I’ve gotten better at writing as a result.  But that took almost as much time and energy as the original research did.  It was a hefty learning experience, and it improved my understanding of the substantive subject matter tremendously.

—–

This is intended to be a running list.  I’ll add to it as more items come up, and I’d encourage readers to add their own in the comments section.  To be clear, these aren’t items that will turn an R&R into a rejection.  I’m not drawing lines in the sand or anything.  These are just little things that I’m seeing too much of.  They’re (relatively) easy to fix, and at least one of your colleagues would be a bit happier if you did.

 

*Okay, yeah I do.  But I don’t hold it against anyone that they failed to see the brilliance and applicability of the obscure article I wrote.

**It doesn’t assure the reader that your coding scheme is correct.  The easiest way to get a high alpha is to make your codebook too simple, casting all of the interesting border cases into one simplified category.  If your coding scheme is wrong-but-rigorous, the alpha score will be high.
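To put numbers on that footnote: here’s a hypothetical demonstration, reusing the krippendorff_alpha_nominal sketch from point 3 above (the three-category codebook and the codings are invented for illustration). Folding the hard border cases into a catch-all bucket lifts alpha from about .30 to about .69, while throwing away exactly the distinctions that made the coding worth doing.

    # Hypothetical codebook with a genuinely hard "border" category.
    coder_a = ["policy", "horserace", "border", "policy", "border", "horserace"]
    coder_b = ["policy", "horserace", "policy", "policy", "horserace", "border"]
    print(round(krippendorff_alpha_nominal(coder_a, coder_b), 2))  # ~0.30

    # Simplify the codebook: everything that isn't clear-cut becomes "other".
    def collapse(values):
        return [v if v == "policy" else "other" for v in values]

    print(round(krippendorff_alpha_nominal(collapse(coder_a), collapse(coder_b)), 2))  # ~0.69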

4 thoughts on “Academic-Vent: Bad Habits in Academic Articles”

  1. I have had a number of friends who have served as editors of major journals. They all decried the quality of writing in the manuscripts they were receiving. As far as I could see, however, none turned down manuscripts with 70-word sentences.

  2. yep, and I’m not sure that they should!

    I think we should reward good writing in the discipline, rather than punishing bad writing through the coarse signal of an article rejection.

  3. Taken generally, these guidelines more than apply to the peer-reviewed medical research articles I edit for a living. Presuming the reverse may also be true, I propose this:

    Treat every submission like the final draft. When my journal accepts a manuscript for publication, professional editors get to work putting it in its final form. We have a fixed amount of time to edit and lay out the text and figures, and more often than I’d like, that time is eaten up by pointing out inconsistent terminology or data, requesting missing required elements, and applying minor corrections to wording that the authors only notice after acceptance.

    This applies just as much for peer review: a referee listing typographical errors or pointing out where n is not reported is a referee who is missing subtler points that would better strengthen the work.

    An A paper probably started out as a B; likewise, a C paper may have been a D or worse, but referees and editors can only improve so much on what they’re presented with. The more polished and complete the submission is, the more time a production staff can devote to perfecting, rather than repairing. The author can invite more substantive feedback and attention to detail by ensuring that all requirements are met, that statistical reporting is complete and correct, that citations are still in the right order, and that the most recently implemented changes are applied and referred to with consistency.

    It stands to reason that every author submits his work hoping that it will be accepted for publication; by this logic, every submission and revision should be dressed for the part, so to speak.
