The Micromedium and Monomedium

I’ve become interested in the manner in which private ownership of digital interfaces has altered our understanding of what constitutes a medium.  Traditional media integrated hardware and interface while allowing a greater division between the roles of manufacturer, content programmer and user.  But new technologies have challenged those conceptions.  I’ve been thinking about these changes in terms of the “micromedium” and the “monomedium.”

MICROMEDIUM

We can think of digital interfaces like Twitter, Facebook, LinkedIn, and other virtual platforms as “micromediums,” media that are:
1. Private
2. Unique
3. Convergent

Private

Televisions, radios and telephones are distinct mediums that could be produced by a variety of manufacturers.  The programming that came through them, whether one-way (radio and television) or two-way (telephones), could be produced by a variety of communicators and bore no direct relation to the manufacturer.  But digital interfaces privatize mediums.  You may have hundreds of “friends” and “followers,” but there is a unitary Twitter, Facebook, etc.  When the popularity of these micromediums fades, as fade it must (remember when everybody you knew was on Friendster?), the micromedium itself will fade with it.

Unique

Micromediums are so unique in their capabilities as to often bear little resemblance to one another, even within categories like microblogging and social networking.  Despite their clear lineage and similar function as “social networking sites,” Friendster and Facebook are worlds apart.  They have unique terminologies and tools that define not only how the user interacts with them, but how the user understands communication.  This is why a company like Squidoo insists on calling its user-generated webpages “lenses.”  This is why Twitter “Tweets” and Facebook “Friends.”  In naming a thing, we both mark it as our own and distinguish it from similar products.  The highly competitive, global and growing nature of the web demands that these differentiations manifest through both unique language and function.

Convergent

Micromediums are not so unique as to be truly distinct mediums (in the way that the telephone was).  They are often variations, remixes and evolutions of preexisting mediums that come together in new and changing ways.  The open source nature of applications that may run on these micromediums only increases this blurring and converging of technologies.  This technological convergence, combined with corporate conglomeration, leads to walled gardens of compatible technologies such as the Google-owned Blogger, which integrates the Google-owned Picasa/Gmail/YouTube/etc. into a single format that is recognizable as a micromedium but still belongs to the larger medium of the blog.

MONOMEDIUM

While digital interfaces have fragmented and become highly specialized, the physical objects on which we access them have changed as well.  Mobile computing tools like the iPhone are characterized by their flexibility rather than functionality.  They cede control of their interface to the digital micromedia that they channel.  A heavily mechanical device like the BlackBerry, with its tiny keypad and other strong physical attributes, looks antiquated in comparison to the blank and fully plastic interface of the iPhone.

As microcomputing brings our screens and processors closer together and physical objects like mice and keyboards cede to touchscreen technology, we can look forward to a future in which our virtual interfaces are more real and recognizable to us than the physical interfaces through which we access them.

 *

Anyway, these are a few thoughts I’ve been having.  They aren’t fully matured to the point where they might constitute an article.  I’d love to hear your feedback, thoughts and suggestions for evolving this subject.

NYT Story on R, the Open Source Stats Program

In the summer of 2005, I attended a two-week course at the Annenberg Sunnylands’ Institute in Statistics and Methodology (ASIMS, aka “Stats Camp”) in Palm Desert, CA. The class I took, on regression and ANOVA, taught (very well) by Wharton stats professor Bob Stine, used the statistical program R.

Out of dissatisfaction with the very high prices and very poor customer service of SPSS ($200 for Grad Pack v. 11, another $200 for v. 16, only to be told that the ridiculous number of bugs in v. 16 would only be fixed in future releases, requiring yet another license), I’ve been thinking I should fully migrate to R, but I have done little in this direction. Then I discovered this Times story about R, complete with glowing reviews from people at big-deal companies, for instance Hal Varian, chief economist at Google.

R is a little intimidating for those not entirely at home with non-graphical computing interfaces (think DOS instead of Windows) and for those who know little about statistics. But these are not insurmountable obstacles, even in a resource-strapped teaching environment. I would still recommend R, given a little patience, to anybody who needs to do serious data analysis, and it’s usable in almost any teaching environment that requires anything more sophisticated than Excel.
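To make the DOS-versus-Windows comparison concrete, here is a minimal sketch of an interactive R session; the mtcars dataset ships with base R and is purely an illustration, not one of the datasets from the class.

    # A minimal interactive R session: fit and summarize a simple regression.
    data(mtcars)                              # a small example dataset that ships with R
    fit <- lm(mpg ~ wt + hp, data = mtcars)   # regress fuel economy on weight and horsepower
    summary(fit)                              # coefficient table, R-squared, F statistic
    anova(fit)                                # sequential ANOVA table for the fitted model

Everything is typed at a prompt rather than clicked in a menu, and that prompt is most of the learning curve.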

For texts, Stine used John Fox’s Applied Regression (now in a second edition) with his R and S-PLUS Companion. The text was quite helpful, and while I am particularly enthusiastic about trying new software and fairly good at learning statistics, I think everyone caught on and got a lot out of the class. More important than the software package, we all learned a lot about the process of data analysis using regression, and in my case that knowledge has stayed with me even as I’ve used SPSS for the past few years.

One of the best parts about using R was that we used Fox’s “Companion to Applied Regression” (CAR) package, which was highly tailored to the kinds of work we did in the class. (See John Fox’s homepage for this and other helpful links, including a similar summer quickie class that Fox taught.) Think of it as a plugin. 
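As a rough sketch of what that “plugin” feels like in practice, here is how you might install the car package and run two of its helpers on a fitted model; the particular regression is illustrative, not an example from the class.

    # Install and load the car ("Companion to Applied Regression") package from CRAN,
    # then run a couple of its diagnostic helpers on a fitted regression model.
    install.packages("car")                   # one-time download, free
    library(car)

    fit <- lm(mpg ~ wt + hp, data = mtcars)   # ordinary least-squares fit, as above
    Anova(fit)                                # car's Anova() gives Type II tests (base anova() is sequential)
    vif(fit)                                  # variance inflation factors, a quick collinearity check

The whole layer of added functionality arrives in two lines, at no cost, which is the contrast with SPSS below.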

SPSS charges outrageous prices for its Regression package (less so when bundled with the Grad Pack, but still), but this was free, and, in my opinion, superior on most counts. This add-on is just one of thousands available, all for free. As the Times notes, a lot of the R packages are tailored to exceptionally complicated tasks, such as econometrics and biostatistics.

This is how open source software gets popular. Once you have a critical mass of users who are invested enough to do some work to improve a package, the distributed innovation can quickly outpace the work done at the labs behind even expensive proprietary software. See Benkler’s Wealth of Networks for more on this.

As for me, I’m seriously bunkered into SPSS for my dissertation research (it’s easier to work around the bugs for now than to re-learn an entirely new package), so I’m not making the two-footed jump for a good bit. The article reminded me to download R, though, and I’ll get back into it very soon, I’m sure.

If I can get better software for free, why not?