
July 21 2011

17:00

Marshall McLuhan, Superstar

Today would have been Marshall McLuhan’s 100th birthday. Continuing our informal McLuhan Week at the Lab, we present this essay by Maria Bustillos on McLuhan’s unique status as a media theorist who was also a media star.

There was no longer a single thing in [the] environment that was not interesting [...] “Even if it’s some place I don’t find congenial, like a dull movie or a nightclub, I’m busy perceiving patterns,” he once told a reporter. A street sign, a building, a sports car — what, he would ask himself and others, did these things mean?

—Philip Marchand, Marshall McLuhan:
The Medium and the Messenger

The public intellectual was invented in the mid-20th century. Certainly there were others before that who started the ball rolling — talented writers and academics with flexible, open minds taking the whole culture into account, trying to make sense of things as they were happening — but few of them penetrated far beyond the walls of the academy or the confines of some other single discipline. We might count Bertrand Russell as an early prototype, with his prominence in pacifist circles and campaigns for nuclear disarmament, or better still G.B. Shaw, an autodidact of boundless energy who cofounded the London School of Economics and also helped popularize Jaeger’s “sanitary” woolen undies. Until Al Gore came along, Shaw was the only person to have won both a Nobel Prize and an Oscar.

Both Russell and Shaw gained a great deal of influence outside their own spheres of work, but remained above it all, too; they were “authorities” who might be called on to offer their views to the public on this topic or that. But it was a devoutly Catholic, rather conservative Canadian academic who first succeeded in breaking down every barrier there was in the intensity of his effort to understand, interpret, and influence the world. Marshall McLuhan was quite possibly the first real public intellectual. That wide-ranging role having once been instantiated, others came to fill it, in ever-increasing numbers.

Though an ordinary English professor by trade, McLuhan had measurable effects on the worlds of art, business, politics, advertising, and broadcasting. He appeared on the cover of Newsweek and had office space at Time. Tom Wolfe took him to a “topless restaurant” and wrote about him for New York magazine (“What If He Is Right?”). He was consulted by IBM and General Motors, and, according to Timothy Leary, he coined the phrase “Turn on, tune in, drop out.” He made the Canadian Prime Minister, Pierre Trudeau, shave off his beard.

In 1969, McLuhan gave one of the best and most revealing interviews Playboy ever published (a high bar, there).

PLAYBOY: Have you ever taken LSD yourself?

McLUHAN: No, I never have. I’m an observer in these matters, not a participant. I had an operation last year to remove a tumor that was expanding my brain in a less than pleasant manner, and during my prolonged convalescence I’m not allowed any stimulant stronger than coffee. Alas! A few months ago, however, I was almost “busted” on a drug charge. On a plane returning from Vancouver, where a university had awarded me an honorary degree, I ran into a colleague who asked me where I’d been. “To Vancouver to pick up my LL.D.,” I told him. I noticed a fellow passenger looking at me with a strange expression, and when I got off the plane at Toronto Airport, two customs guards pulled me into a little room and started going over my luggage. “Do you know Timothy Leary?” one asked. I replied I did and that seemed to wrap it up for him. “All right,” he said. “Where’s the stuff? We know you told somebody you’d gone to Vancouver to pick up some LL.D.” After a laborious dialog, I persuaded him that an LL.D. has nothing to do with consciousness expansion — just the opposite, in fact — and I was released.

Until mid-century, there was a wall between what we now call popular culture and the “high culture” of the rich and educated, and there was another wall, at least as thick, between popular and academic discourse. Cracks had begun to appear by the 1930s, when the Marxist theorists of the Frankfurt School began to take on the subject of mass culture, culminating in works such as Theodor Adorno and Max Horkheimer’s The Culture Industry: Enlightenment as Mass Deception (1944). These academics saw popular culture as a positive evil, though, one that undermined the chances of revolution: a new kind of “opiate of the masses.” Later critics such as Edward Shils and Herbert J. Gans would elaborate on the same themes. But none of these writers personally identified with mass culture in any way. Far from it. Indeed Shils said in 1959: “Some people dislike the working classes more than the middle classes, depending on their political backgrounds. But the real fact is that from an esthetic and moral standpoint, the objects of mass culture are repulsive to us.” To some degree, that academic standoffishness is with us even today: the sneering of the “high” at the “low.”

Marshall McLuhan’s first book, The Mechanical Bride: The Folklore of Industrial Man, was published in 1951, and it took a quite different approach to the task of lifting the veil of mass culture in order to expose the workings beneath. The chief difference was that McLuhan never saw or really even acknowledged that wall between the critic of culture and the culture itself. After all, he too was a human being, a citizen, a reader of newspapers and magazines. McLuhan’s critique took place from the inside.

“[B]eing highbrow, in McLuhan’s eyes, never conferred the slightest moral value on anything,” observed his biographer, Philip Marchand.

McLuhan’s student Walter J. Ong wrote magnificently on this theme in his essay “McLuhan as Teacher: The Future Is a Thing of the Past,” published in the September 1981 issue of the Journal of Communication.

When [McLuhan] did attend to [...] popular works, as in his first book, The Mechanical Bride (1951), it was to invest them with high seriousness. He showed that such things as advertising and comic strips were in their own way as deeply into certain cyclonic centers of human existence — sex, death, religion, and the human-technology relationship — as was the most “serious” art, though both naively and meretriciously. However, awareness of the facts here was neither naive nor meretricious; it was upsetting and liberating.

Marshall Soules of Malaspina University-College had this comment on the “high seriousness” with which McLuhan treated popular works:

It is this strategic stance which distinguishes McLuhan from many media critics — like those associated with the Frankfurt or Birmingham Schools, or like Neil Postman, Mark Miller, Stuart Ewen and others — whose views imply an idealized literate culture corrupted by popular, commercialised, and manipulative media. McLuhan used his training as a literary critic to engage in a dialogue with the media from the centre of the maelstrom.

The Mechanical Bride consists of a selection of advertisements with essays and captions attached.

Where did you see that bug-eyed romantic of action before?

Was it in a Hemingway novel?

Is the news world a cheap suburb for the artist’s bohemia?

— from The Mechanical Bride

The playful and wide-ranging tone of The Mechanical Bride was entirely new, given that its intentions were as serious as a heart attack. McLuhan thought that the manipulative characteristics of advertising might be resisted once they were understood. “It was, if anything, a critique of an entire culture, an exhilarating tour of the illusions behind John Wayne westerns, deodorants, and Buick ads. The tone of McLuhan’s essays was not without an occasional hint of admiration for the skill of advertisers in capturing the anxieties and appetites of that culture,” Marchand wrote.

The Mechanical Bride was way too far ahead of its time, selling only a few hundred copies, but that was okay because the author was just warming up. McLuhan had found the voice and style of inquiry that he would employ for the rest of his career. In the Playboy interview he said, “I consider myself a generalist, not a specialist who has staked out a tiny plot of study as his intellectual turf and is oblivious to everything else [...] Only by standing aside from any phenomenon and taking an overview can you discover its operative principles and lines of force.”

This inclusiveness, the penetrating, metaphorical, free-for-all investigative method that appeared in McLuhan’s first book, would gain him increasing admiration as an understanding of the “rearview mirror view” of the world he used to talk about gained currency: “[A]n environment becomes fully visible only when it has been superseded by a new environment; thus we are always one step behind in our view of the world [...] The present is always invisible because it’s environmental and saturates the whole field of attention so overwhelmingly; thus everyone but the artist, the man of integral awareness, is alive in an earlier day.”

Because he refused to put himself on a pedestal, because everything was of interest to him, McLuhan was able to join the wires of pure academic curiosity with the vast cultural output of the mid-century to create an explosion of insights (or a “galaxy”, I should say) that is still incandescent with possibility a half-century later. Simply by taking the whole of society as a fit subject for serious discourse, he unshackled the intellectuals from their first-class seats, and they have been quite free to roam about the cabin of culture ever since.

As his books were published, McLuhan’s influence continued to spread through high culture and low. He loved being interviewed and would talk his head off to practically anyone, about the Symbolist poets and about Joyce, about car advertisements and cuneiform. You might say that he embraced the culture, and the culture embraced him right back. The Smothers Brothers loved him, and so did Glenn Gould and Goldie Hawn, Susan Sontag, John Lennon and Woody Allen. (Apropos of the latter, McLuhan very much enjoyed doing the famous cameo in Annie Hall, though he had, characteristically, his own ideas about what his lines ought to have been, and a “sharp exchange” occurred between the two of them. McLuhan’s most famous line in the movie, “You know nothing of my work,” is in fact one that he had long employed in real life as a put-down of opponents in debate.)

An aside: In 1977, Woody Allen was very far from being the grand old man of cinema that he is now. He had yet to win an Oscar, and had at that time directed only extremely goofy comedies. It was a mark of McLuhan’s willingness to get out there and try stuff, his total unpretentiousness, that he went along with the idea of being in a Woody Allen film. Only imagine any of today’s intellectuals being asked, say, to appear in an Apatow comedy. Would Noam Chomsky do it? Jürgen Habermas? Slavoj Zizek? (Well, Zizek might.)

Even better was Henry Gibson’s recurring two-line poem about McLuhan on the U.S. television show Laugh-In:

Marshall McLuhan,
What are you doin’?

Last year, I briefly attended the Modern Language Association conference in Los Angeles, met a number of eminent English scholars, and attended some of their presentations on Wordsworth and Derrida and on the development of that new, McLuhanesque-sounding discipline, the digital humanities. What I wished most, when I left the conference, was that these fascinating theorists were not all locked away behind the walls of the academy, and that anyone could come and enjoy their talks. The McLuhan manner of appearing anywhere he found interesting, which is to say all over the place, instead of just during office hours, does not diminish serious academics or writers: It enlarges them.

Is this, when it comes down to it, a mere matter of shyness? Or is it a matter of professional dignity, of amour-propre? The academy has so much to contribute to the broader culture; huge numbers of non-academics, I feel sure, would enjoy a great deal of what academics have to say, and perhaps vice versa. But somehow I find it difficult to imagine most of the academics I know agreeing to visit a topless restaurant with Tom Wolfe (on the record, at least). I hope, though, that they will consider venturing out to try such things more and more, that today’s Wolfes will feel emboldened to ask them, and that the culture indeed becomes more egalitarian, blurrier, “retribalized,” as McLuhan seemed to believe it would.

Personally, I have a great faith in the resiliency and adaptability of man, and I tend to look to our tomorrows with a surge of excitement and hope.

— from the 1969 Playboy interview

June 02 2011

17:30

Is Twitter writing, or is it speech? Why we need a new paradigm for our social media platforms

New tools are at their most powerful, Clay Shirky says, once they’re ubiquitous enough to become invisible. Twitter may be increasingly pervasive — a Pew study released yesterday shows that 13 percent of online adults use the service, which is up from 8 percent six months ago — but it’s pretty much the opposite of invisible. We talk on Twitter, yes, but almost as much, it seems, we talk about it.

The big debates about Twitter’s overall efficacy as a medium — like the one launched by, say, Malcolm Gladwell and, more recently, Bill Keller, whose resignation from the New York Times editorship people have (jokingly, I think?) chalked up to his Twitter-take-on column — tend to devolve into contingents rather than resolve into consensus. An even more recent debate between Mathew Ingram and Jeff Jarvis (comparatively nuanced, comparatively polite) ended with Ingram writing, “I guess we will have to agree to disagree.”

But why all the third-railiness? Twitter, like many other subjects of political pique, tends to be framed in extremes: On the one hand, there’s Twitter, the cheeky, geeky little platform — the perky Twitter bird! the collective of “tweets”! all the twee new words that have emerged with the advent of the tw-efix! — and on the other, there’s Twitter, the disruptor: the real-time reporting tool. The pseudo-enabler of democratic revolution. The existential threat to the narrative primacy of the news article. Twetcetera.

The dissonance here could be chalked up to the fact that Twitter is simply a medium like any other medium, and, in that, will make of itself (conversation-enabler, LOLCat passer-onner, rebellion-facilitator) whatever we, its users, make of it. But that doesn’t fully account for Twitter’s capacity to inspire so much angst (“Is Twitter making us ____?”), or, for that matter, to inspire so much joy. The McLuhany mindset toward Twitter — the assumption of a medium that is not only the message to, but the molder of, its users — seems to be rooted in a notion of what Twitter should be as much as what it is.

Which raises the question: What is Twitter, actually? (No, seriously!) And what type of communication is it, finally? If we’re wondering why heated debates about Twitter’s effect on information/politics/us tend to be at once so ubiquitous and so generally unsatisfying…the answer may be that, collectively, we have yet to come to consensus on a much more basic question: Is Twitter writing, or is it speech?

Twitter versus “Twitter”

The broader answer, sure, is that it shouldn’t matter. Twitter is…Twitter. It is what it is, and that should be enough. As a culture, though, we tend to insist on categorizing our communication, drawing thick lines between words that are spoken and words that are written. So libel is, legally, a different offense than slander; the written word, we assume, carries the heft of both deliberation and proliferation and therefore a moral weight that the spoken word does not. Text, we figure, is: conclusive, in that its words are the deliberate products of discourse; inclusive, in that it is available equally to anyone who happens to read it; exclusive, in that it filters those words selectively; archival, in that it preserves information for posterity; and static, in that, once published, its words are final.

And speech, while we’re at it, is discursive and ephemeral and, importantly, continual. A conversation will end, yes, but it is not the ending that defines it.

Those characteristics give way to categories. Writing is X; speaking is Y; and both have different normative dimensions that are based on, ultimately, the dynamics of power versus peer — the talking to versus the talking with. So when we talk about Twitter, we tend to base our assessments on its performance as a tool of either orality or textuality. Bill Keller seems to see Twitter as text that happens also to be conversation, and, in that, finds the form understandably lacking. His detractors, on the other hand, seem to see Twitter as conversation that happens also to be text, and, in that, find it understandably awesome.

Which would all be fine — nuanced, even! — were it not for the fact that Twitter-as-text and Twitter-as-conversation tend to be indicated by the same word: “Twitter.” In the manner of “blogger” and “journalist” and even “journalism” itself, “Twitter” has become emblematic of a certain psychology — or, more specifically, of several different psychologies packed awkwardly into a single signifier. And to the extent that it’s become a loaded word, “Twitter” has also become a problematic one: #Twittermakesyoustupid is unfair, but #”Twitter”makesyoustupid has a point. The framework of text and speech falls apart once we recognize that Twitter is both and neither at once. It’s its own thing, a new category.

Our language, however, doesn’t yet recognize that. Our rhetoric hasn’t yet caught up to our reality — for Twitter and, by extension, for other social media.

We might deem Twitter a text-based mechanism of orality, as the scholar Zeynep Tufekci has suggested, or of a “secondary orality,” as Walter Ong has argued, or of something else entirely (tweech? twext? something even more grating, if that’s possible?). It almost doesn’t matter. The point is to acknowledge, online, a new environment — indeed, a new culture — in which writing and speech, textuality and orality, collapse into each other. Speaking is no longer fully ephemeral. And text is no longer simply a repository of thought, composed by an author and bestowed upon the world in an ecstasy of self-containment. On the web, writing is newly dynamic. It talks. It twists. It has people on the other end of it. You read it, sure, but it reads you back.

“The Internet looking back at you”

In his social media-themed session at last year’s ONA conference, former Lab writer and current Wall Street Journal outreach editor Zach Seward talked about being, essentially, the voice of the outlet’s news feed on Twitter. When readers tweeted responses to news stories, @WSJ might respond in kind — possibly surprising them and probably delighting them and maybe, just for a second, sort of freaking them out.

The Journal’s readers were confronted, in other words, with text’s increasingly implicit mutuality. And their “whoa, it’s human!” experience — the Soylent Greenification of online news consumption — can bring, along with its obvious benefits, the same kind of momentary unease that accompanies the de-commodification of, basically, anything: the man behind the curtain, the ghost in the machine, etc. Concerns expressed about Twitter, from that perspective, may well be stand-ins for concerns about privacy and clickstream tracking and algorithmic recommendation and all the other bugs and features of the newly reciprocal reading experience. As the filmmaker Tze Chun noted to The New York Times this weekend, discussing the increasingly personalized workings of the web: “You are used to looking at the Internet voyeuristically. It’s weird to have the Internet looking back at you….”

So a Panoptic reading experience is also, it’s worth remembering, a revolutionary reading experience. Online, words themselves, once silent and still, are suddenly springing to life. And that can be, in every sense, a shock to the system. (Awesome! And also: Aaaah!) Text, after all, as an artifact and a construct, has generally been a noun rather than a verb, defined by its solidity, by its thingness — and, in that, by its passive willingness to be the object of interpretation by active human minds. Entire schools of literary criticism have been devoted to that assumption.

And in written words’ temporal capacity as both repositories and relics, in their power to colonize our collective past in the service of our collective future, they have suggested, ultimately, order. “The printed page,” Neil Postman had it, “revealed the world, line by line, page by page, to be a serious, coherent place, capable of management by reason, and of improvement by logical and relevant criticism.” In their architecture of sequentialism, neatly packaged in manuscripts of varying forms, written words have been bridges, solid and tangible, that have linked the past to the future. As such, they have carried an assurance of cultural continuity.

It’s that preservative function that, for the moment, Twitter is largely lacking. As a platform, it does a great job of connecting; it does, however, a significantly less-great job of conserving. It’s getting better every day; in the meantime, though, as a vessel of cultural memory, it carries legitimately entropic implications.

But, then, concerns about Twitter’s ephemerality are also generally based on a notion of Twitter-as-text. In that, they assume a zero-sum relationship between the writing published on Twitter and the writing published elsewhere. They see the written, printed word — the bridge, the badge of a kind of informational immortality — dissolving into the digital. They see back-end edits revising stories (which is to say, histories) in an instant. They see hacks erasing those stories altogether. They see links dying off at an alarming rate. They see all that is solid melting into bits.

And they have, in that perspective, a point: While new curatorial tools, Storify and its ilk, will become increasingly effective, they might not be able to recapture print’s assurance, tenacious if tenuous, of a neatly captured world. That’s partly because print’s promise of epistemic completeness has always been, to some extent, empty; but it’s also because those tools will be operating within a digital world that is increasingly — and actually kind of wonderfully — dynamic and discursive.

But what the concerns about Twitter tend to forget is that language is not, and has never been, solid. Expression allows itself room to expand. Twitter is emblematic, if not predictive, of the Gutenberg Parenthesis: the notion that, under the web’s influence, our text-ordered world is resolving back into something more traditionally oral — more conversational and, yes, more ephemeral. “Chaos is our lot,” Clay Shirky notes; “the best we can do is identify the various forces at work shaping various possible futures.” One of those forces — and, indeed, one of those futures — is the hybrid linguistic form that we are shaping online even as it shapes us. And so the digital sphere calls for a new paradigm of communication: one that is discursive as well as conservative, one that acquiesces to chaos even as it resists it, one that relies on text even as it sheds the mantle of textuality. A paradigm we might call “Twitter.”


June 25 2010

15:00

Clay Shirky’s “Cognitive Surplus”: Is creating and sharing always a more moral choice than consuming?

In 1998, People magazine, trying to figure out how to use this new-ish tool called the World Wide Web, launched a poll asking readers to vote for People.com’s Most Beautiful Person of the year. There were the obvious contenders — Kate Winslet, Leonardo DiCaprio — and then, thanks to a Howard Stern fan named Kevin Renzulli, a write-in candidate emerged: Henry Joseph Nasiff, Jr., better known as Hank, the Angry Drunken Dwarf. A semi-regular presence on Stern’s shock-jock radio show, Hank — whose nickname pretty well sums up his act — soon found himself on the receiving end of a mischievous voting campaign that spread from Renzulli to Stern to online message boards and mailing lists.

By the time the poll closed, Hank had won — handily — with nearly a quarter of a million votes. (DiCaprio? 14,000.)

In Cognitive Surplus, his fantastic follow-up to Here Comes Everybody, Clay Shirky explains what HTADD can teach us about human communications: “If you give people a way to act on their desire for autonomy and competence or generosity and sharing, they might take you up on it,” he notes. On the other hand: “[I]f you only pretend to offer an outlet for those motivations, while actually slotting people into a scripted experience, they may well revolt.”

Scarcity vs. abundance

Shirky may be a pragmatist and a technologist and, in the best sense, a futurist; what gives his thinking its unique verve, though, is that he also thinks like an economist. To read his work is to be presented with a world defined by the relationships it contains: the exchanges it fosters, the negotiations it demands, the tugs and torques of transaction. In the Shirkian vision of our information economy, supply-and-demand, scarcity-and-abundance, and similar polar pairings aren’t merely frames for coaxing complex realities into bite-sized specimens of simplicity; they’re very real tensions that, in their polarity, act as characters in our everyday life.

In Cognitive Surplus, as in Here Comes Everybody, the protagonist is abundance itself. Size, you know, matters. And, more specifically, the more matters: The more people we have participating in media, and the more people we have consuming it — and the more people we have, in particular, creating it — the better. Not because bigger is implicitly better than the alternative, but because abundance changes the value proposition of media as a resource. “Scarcity is easier to deal with than abundance,” Shirky points out, “because when something becomes rare, we simply think it more valuable than it was before, a conceptually easy change.” But “abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with.”

Cognitive Surplus, in other words — the book, and the concept it’s named for — pivots on paradox: The more abundant our media, the less specific value we’ll place on it, and, therefore, the more generally valuable it will become. We have to be willing to waste our informational resources in order to preserve them. If you love something…set it free.

Love vs. money

So the book’s easiest takeaway, as far as journalism goes, is that we should be willing to experiment with our media: to be open to the organic, to embrace new methods and modes of production and consumption, to trust in abundance. But, then, that’s both too obvious (does anyone really think we shouldn’t be experimenting at this point?) and too reductive a conclusion for a book whose implied premise is the new primacy of communality itself. Shirky isn’t simply asking us to rethink our media systems (although, sure, that’s part of it, too); he’s really asking us to embrace collectivity in our information — in its consumption, but also in its creation.

And that’s actually a pretty explosive proposition. The world of “post-Gutenberg economics,” as Shirky calls it — a world defined, above all, by the limitations of the means of (media) production, be they printing presses or broadcast towers — was a world that ratified the individual (the individual person, the individual institution) as the source of informational authority. This was by necessity rather than, strictly, design: In an economy where freedom of the press is guaranteed only to those who own one, the owners in question will have to be limited in number; distributed authority is also diffused authority. When the power of the press belongs to everyone, the power of the press belongs to no one.

But now we’re moving, increasingly and probably inevitably, toward a media world of distributed authority. That’s a premise not only of Cognitive Surplus, but of the majority of Shirky’s writings — and it’s a shift that is, it hardly needs to be said, a good thing. But it also means, to extrapolate a bit from the premises of the cognitive surplus, that the reflexively individualistic assumptions we often hold about the media — from the primacy of brand structures to the narrative authority of the individual correspondent to the notion of the singular article/issue/publication as a self-contained source of knowledge itself — were not immutable principles (were not, in fact, principles at all), but rather circumstantial realities. Realities that can — and, indeed, will — change as our circumstances do.

And now that scarcity is being replaced by abundance, our whole relationship with our media is changing. The new communality of our news technologies — the web’s discursive impulses, the explosiveness of the hyperlink — means that news is, increasingly, a collective endeavor. And we’re just now beginning to see the implications of that shift. In Here Comes Everybody, Shirky focused on Wikipedia as a repository of our cognitive surplus; in its sequel, he focuses on Ushahidi and LOLCats and PickupPal and the Pink Chaddi campaign and, yes, the promotional efforts made on behalf of our friend Hank, the Angry Drunken Dwarf. What those projects have in common is not simply the fact that they’re the result of teamwork, the I serving the we; they’re also the result of the dissolution of the I into the broader sphere of the we. The projects Shirky discusses are, in the strictest sense, authorless; in a wikified approach to media, individual actors feed — and are dissolved into — the communal.

And: They’re fine with that. Because the point, for them, isn’t external reward, financial or reputational or otherwise; it’s the intrinsic pleasure of creation itself. Just as we’re hard-wired to love, Shirky argues, we are hard-wired to create, to produce, to share. As he reminds us: “amateur” derives from the Latin amare. A corollary to “the Internet runs on love” is that it does so because we run on love.

Creativity vs. creation

As a line of logic, that’s doubly provocative. First — without getting into the whole “is he or isn’t he (a web utopian)?” debate — there’s the chasm between the shifts Shirky describes and what we currently tend to think of when we think of The Media. Our news economy is nowhere near comprehensively communal. It’s one whose architecture is built on the coarser, capitalistic realities of individuality: brands, bylines, singular outlets that treat information as proprietary. It’s one where the iPad promises salvation-of-brands by way of isolation-of-brands — and where the battle of open web-vs.-walled garden, the case of Google v. Apple, seems to be locked, at the moment, in a stalemate.

It’s an economy, in other words, that doesn’t run on love. It runs on more familiarly capitalistic currencies: money, power, self-interest.

But, then, the trends Shirky describes are just that: trends. He’s not defining a holistic reality so much as identifying small tears in the fabric of our textured media system that will, inevitably, expand. Cognitive Surplus deals with trajectory. And the more provocative aspect of the book, anyway, is one built into the framework of the cognitive surplus itself: the notion of creativity as a commodity. A key premise of the surplus idea is that television has sucked up our creative energies, siphoning them away from the communality of culture and allowing them to pool, unused, in the moon-dents in our couches. And that, more to the point, with the web gradually reclaiming our free time, we can refocus those energies toward creative output. Blogging, uploading photos, editing Wikipedia entries — these are all symptoms of the surplus put to use. And they should be celebrated as such.

That rings true, almost viscerally: Not only has the web empowered our expression as never before, but I think we all kind of assumed that Married…with Children somehow portended apocalypse. And you don’t have to be a Postmanite to appreciate the sense-dulling effect of the TV screen. “Boob tube,” etc.

But the problem with TV, in this framing, is its very teeveeness; the villain is the medium itself. The differences in value between, say, The Wire and Wipeout, here, don’t much matter — both are TV shows, and that’s what defines them. Which means that watching them is a passive pursuit. Which means that watching them is, de facto, a worse way — a less generous way, a more selfish way — to spend time than interacting online. As Shirky puts it: “[E]ven the banal uses of our creative capacity (posting YouTube videos of kittens on treadmills or writing bloviating blog posts) are still more creative and generous than watching TV. We don’t really care how individuals create and share; it’s enough that they exercise this kind of freedom.”

The risk in this, though, for journalism, is to value creation over creativity, output over impulse. Steven Berlin Johnson may have been technically correct when, channeling Jeff Jarvis, he noted that in our newly connected world, there is something profoundly selfish in not sharing; but there’s a fine line between Shirky’s eminently correct argument — that TV consumption has been generally pernicious in its very passivity — and a commodified reading of time itself. Is the ideal to be always producing, always sharing? Is creating cultural products always more generous, more communally valuable, than consuming them? And why, in this context, would TV-watching be any different from that quintessentially introverted practice that is reading a book?

Part of Shirky’s immense appeal, as a thinker and a writer, is his man-of-science/man-of-faith mix; he is a champion of what can be, collectively — but, at the same time, inherent in his work is a deep suspicion of inherence itself. (Nothing is sacred, but everything might be.) And if we’re looking for journalistic takeaways from Cognitive Surplus, one might be this: We need to be similarly respectful of principles and open to challenging them — and similarly aware of past and future. Time itself, both as a context and a commodity, is a crucial factor in our journalism — and how we choose to leverage it will determine what our current journalism becomes. It’s not just about what to publish and what to filter, but about when to publish and when to filter. And there’s something to be said for preserving, to some degree, a filter-first approach to publication: for taking culture in, receptively if not passively, before putting culture out. For not producing — or, at least, for producing strategically. And for creating infrastructures of filtration that balance the obvious benefits of extroversion with the less obvious, but still powerful, benefits of introversion.
