
July 27 2010


Uneven depths: Why the printed page has always had room for scholarly brilliance and dirty jokes

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, four, five, and six. — Josh]

In a chapter called “The Deepening Page,” Nicholas Carr offers a swift and graceful account of the history of writing. He traces the rise of logic, coherence, and depth from magical formulae scratched on potsherds and wax tablets by the ancients, through the pious allusions of the Middle Ages, to the graceful periodic sentences of the eighteenth century. Such prose represented not only a formal triumph, but a neural one as well. “To read a book was to practice an unnatural process of thought,” writes Carr, “one that demanded sustained, unbroken attention to a single, static object.”

The reading of a sequence of printed pages was valuable not just for the knowledge readers acquired from the author’s words but for the way those words set off intellectual vibrations within their own minds. In the quiet spaces opened up by the prolonged, undistracted reading of a book, people made their own associations, drew their own inferences and analogies, fostered their own ideas. They thought deeply as they read deeply.

To Carr, the story of manuscript, printing, and publishing is the rise of the “deep page,” with modern literature as the apotheosis of literacy. The process a grimy Gutenberg started in the mid-fifteenth century culminates in Wallace Stevens, whose poem “The House Was Quiet and the World Was Calm” glories in the deep page: “The quiet was part of the meaning, part of the mind / The access of perfection to the page.”

The trouble is, it didn’t feel this way to many people going through these changes at various times in the past. Not to the manuscript bookseller Vespasiano da Bisticci, who condemned the coarsening presence of printed volumes in libraries devoted to books in manuscript; not to Pope Paul IV, who started the Index of Prohibited Books during the so-called “incunable era” following the advent of moveable type; not to Pope Urban VIII, who tangled with Galileo; not to Jonathan Swift and Alexander Pope; not to the French monarchy in advance of the Revolution.

The printing press never only produced the kind of deep reading we admire and privilege today. It also produced propaganda and misinformation, penny dreadfuls and comic books offensive to public morality, pornography, self-help books, and much that was generally despised and rejected by polite culture. Any account of the history of “The Gutenberg Era” that lacks these is incomplete — just as any picture of the Internet that privileges LOLcats and 4chan is insufficient. We must consider both — for pornography, misinformation, and sheer foolishness have thrived from the age of incunables to the advent of the Internet. And the deep-reading brain evolved in the midst of it all.

In his report about ROFLCon in last week’s New York Times Magazine, Rob Walker argues that open culture needs the slipshod, the shifty, and the shallow in order to maintain its health.

The more traditional pundits and gurus who talk about the Internet often seem to want to draw strict boundaries between old mass-media culture and the more egalitarian forms taking shape online — and between Internet life and life in the physical world… Sometimes the pointless-seeming jokes that spring from the Web seem to be calling a bluff and showing a truth: This is what egalitarian cultural production really looks like, this is what having unbounded spaces really entails, this is what anybody-can-be-famous means, this is how the hunger for “moar” gets sated, this is what’s burbling in the hive mind’s id. But the real point is that to pretend otherwise isn’t denying the Internet — it’s denying reality.

Walker references a talk the computer historian Jason Scott gave at the first ROFLCon in 2008 in which he discussed the shallow and seemingly antisocial memes spread by communications networks long before the Internet. Scott discusses electric media going back to the telegraph, but the printing press teemed with the shallow stuff well before the advent of telegraphy. Readers in the 18th century in particular were offered a tantalizing selection of bawdy images and tawdry tales. As the great book historian Robert Darnton has shown, the age of Voltaire and Rousseau was awash in erotica, dirty cartoons, and fancifully libelous tales of the rich and famous.

So where did the deep page come from? Not merely from ignoring the dross — for many alloys exist between poetry and pornography, and at any given moment, it’s never entirely clear which is which. Jonathan Swift, writing his “Battle of the Books” in 1704, didn’t even bother with the bawdy writers. Swift’s satire depicts a war between ancient and modern authors, with the ancients on the side of sweetness and light; it was Descartes and the classicist Richard Bentley who drew his ire as much as any Grub Street hack. Swift and other early modern readers engaged in an encounter with a murky multiplicity of shifting possibilities in print. And it was the multiplicity that produced the deep page — presumably along with the brain circuitry underlying it.

At the edges of the deep page lie miles of shallow estuaries, stinking, muddy — and teeming with life. Our plastic brains have been navigating their effluents for a very long time.

June 25 2010


Clay Shirky’s “Cognitive Surplus”: Is creating and sharing always a more moral choice than consuming?

In 1998, People magazine, trying to figure out how to use this new-ish tool called the World Wide Web, launched a poll asking readers to vote for People.com’s Most Beautiful Person of the year. There were the obvious contenders — Kate Winslet, Leonardo DiCaprio — and then, thanks to a Howard Stern fan named Kevin Renzulli, a write-in candidate emerged: Henry Joseph Nasiff, Jr., better known as Hank, the Angry Drunken Dwarf. A semi-regular presence on Stern’s shock-jock radio show, Hank — whose nickname pretty well sums up his act — soon found himself on the receiving end of a mischievous voting campaign that spread from Renzulli to Stern to online message boards and mailing lists.

By the time the poll closed, Hank had won — handily — with nearly a quarter of a million votes. (DiCaprio? 14,000.)

In Cognitive Surplus, his fantastic follow-up to Here Comes Everybody, Clay Shirky explains what HTADD can teach us about human communications: “If you give people a way to act on their desire for autonomy and competence or generosity and sharing, they might take you up on it,” he notes. On the other hand: “[I]f you only pretend to offer an outlet for those motivations, while actually slotting people into a scripted experience, they may well revolt.”

Scarcity vs. abundance

Shirky may be a pragmatist and a technologist and, in the best sense, a futurist; what gives his thinking its unique verve, though, is that he also thinks like an economist. To read his work is to be presented with a world defined by the relationships it contains: the exchanges it fosters, the negotiations it demands, the tugs and torques of transaction. In the Shirkian vision of our information economy, supply-and-demand, scarcity-and-abundance, and similar polar pairings aren’t merely frames for coaxing complex realities into bite-sized specimens of simplicity; they’re very real tensions that, in their polarity, act as characters in our everyday life.

In Cognitive Surplus, as in Here Comes Everybody, the protagonist is abundance itself. Size, you know, matters. And, more specifically, the more matters: The more people we have participating in media, and the more people we have consuming it — and the more people we have, in particular, creating it — the better. Not because bigger is inherently better than the compact alternative, but because abundance changes the value proposition of media as a resource. “Scarcity is easier to deal with than abundance,” Shirky points out, “because when something becomes rare, we simply think it more valuable than it was before, a conceptually easy change.” But “abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with.”

Cognitive Surplus, in other words — the book, and the concept it’s named for — pivots on paradox: The more abundant our media, the less specific value we’ll place on it, and, therefore, the more generally valuable it will become. We have to be willing to waste our informational resources in order to preserve them. If you love something…set it free.

Love vs. money

So the book’s easiest takeaway, as far as journalism goes, is that we should be willing to experiment with our media: to be open to the organic, to embrace new methods and modes of production and consumption, to trust in abundance. But, then, that’s both too obvious (does anyone really think we shouldn’t be experimenting at this point?) and too reductive a conclusion for a book whose implied premise is the new primacy of communality itself. Shirky isn’t simply asking us to rethink our media systems (although, sure, that’s part of it, too); he’s really asking us to embrace collectivity in our information — in its consumption, but also in its creation.

And that’s actually a pretty explosive proposition. The world of “post-Gutenberg economics,” as Shirky calls it — a world defined, above all, by the limitations of the means of (media) production, be they printing presses or broadcast towers — was a world that ratified the individual (the individual person, the individual institution) as the source of informational authority. This was by necessity rather than, strictly, design: In an economy where freedom of the press is guaranteed only to those who own one, the owners in question will have to be limited in number; distributed authority is also diffused authority. When the power of the press belongs to everyone, the power of the press belongs to no one.

But now we’re moving, increasingly and probably inevitably, toward a media world of distributed authority. That’s a premise not only of Cognitive Surplus, but of the majority of Shirky’s writings — and it’s a shift that is, it hardly needs to be said, a good thing. But it also means, to extrapolate a bit from the premises of the cognitive surplus, that the reflexively individualistic assumptions we often hold about the media — from the primacy of brand structures to the narrative authority of the individual correspondent to the notion of the singular article/issue/publication as a self-contained source of knowledge itself — were not immutable principles (were not, in fact, principles at all), but rather circumstantial realities. Realities that can — and, indeed, will — change as our circumstances do.

And now that scarcity is being replaced by abundance, our whole relationship with our media is changing. The new communality of our news technologies — the web’s discursive impulses, the explosiveness of the hyperlink — means that news is, increasingly, a collective endeavor. And we’re just now beginning to see the implications of that shift. In Here Comes Everybody, Shirky focused on Wikipedia as a repository of our cognitive surplus; in its sequel, he focuses on Ushahidi and LOLCats and PickupPal and the Pink Chaddi campaign and, yes, the promotional efforts made on behalf of our friend Hank, the Angry Drunken Dwarf. What those projects have in common is not simply the fact that they’re the result of teamwork, the I serving the we; they’re also the result of the dissolution of the I into the broader sphere of the we. The projects Shirky discusses are, in the strictest sense, authorless; in a wikified approach to media, individual actors feed — and are dissolved into — the communal.

And: They’re fine with that. Because the point, for them, isn’t external reward, financial or reputational or otherwise; it’s the intrinsic pleasure of creation itself. Just as we’re hard-wired to love, Shirky argues, we are hard-wired to create, to produce, to share. As he reminds us: “amateur” derives from the Latin amare. A corollary to “the Internet runs on love” is that it does so because we run on love.

Creativity vs. creation

As a line of logic, that’s doubly provocative. First — without getting into the whole “is he or isn’t he (a web utopian)?” debate — there’s the chasm between the shifts Shirky describes and what we currently tend to think of when we think of The Media. Our news economy is nowhere near comprehensively communal. It’s one whose architecture is built on the coarser, capitalistic realities of individuality: brands, bylines, singular outlets that treat information as proprietary. It’s one where the iPad promises salvation-of-brands by way of isolation-of-brands — and where the battle of open web-vs.-walled garden, the case of Google v. Apple, seems to be locked, at the moment, in a stalemate.

It’s an economy, in other words, that doesn’t run on love. It runs on more familiarly capitalistic currencies: money, power, self-interest.

But, then, the trends Shirky describes are just that. He’s not defining a holistic reality so much as identifying small tears in the fabric of our textured media system that will, inevitably, expand. Cognitive Surplus deals with trajectory. And the more provocative aspect of the book, anyway, is one built into the framework of the cognitive surplus itself: the notion of creativity as a commodity. A key premise of the surplus idea is that television has sucked up our creative energies, siphoning them away from the communality of culture and allowing them to pool, unused, in the moon-dents in our couches. And that, more to the point, with the web gradually reclaiming our free time, we can refocus those energies of creative output. Blogging, uploading photos, editing Wikipedia entries — these are all symptoms of the surplus put to use. And they should be celebrated as such.

That rings true, almost viscerally: Not only has the web empowered our expression as never before, but I think we all kind of assumed that Married…with Children somehow portended apocalypse. And you don’t have to be a Postmanite to appreciate the sense-dulling effect of the TV screen. “Boob tube,” etc.

But the problem with TV, in this framing, is its very teeveeness; the villain is the medium itself. The differences in value between, say, The Wire and Wipeout, here, don’t much matter — both are TV shows, and that’s what defines them. Which means that watching them is a passive pursuit. Which means that watching them is, de facto, a worse way — a less generous way, a more selfish way — to spend time than interacting online. As Shirky puts it: “[E]ven the banal uses of our creative capacity (posting YouTube videos of kittens on treadmills or writing bloviating blog posts) are still more creative and generous than watching TV. We don’t really care how individuals create and share; it’s enough that they exercise this kind of freedom.”

The risk in this, though, for journalism, is to value creation over creativity, output over impulse. Steven Berlin Johnson may have been technically correct when, channeling Jeff Jarvis, he noted that in our newly connected world, there is something profoundly selfish in not sharing; but there’s a fine line between Shirky’s eminently correct argument — that TV consumption has been generally pernicious in its very passivity — and a commodified reading of time itself. Is the ideal to be always producing, always sharing? Is creating cultural products always more generous, more communally valuable, than consuming them? And why, in this context, would TV-watching be any different from that quintessentially introverted practice that is reading a book?

Part of Shirky’s immense appeal, as a thinker and a writer, is his man-of-science/man-of-faith mix; he is a champion of what can be, collectively — but, at the same time, inherent in his work is a deep suspicion of inherence itself. (Nothing is sacred, but everything might be.) And if we’re looking for journalistic takeaways from Cognitive Surplus, one might be this: We need to be similarly respectful of principles and open to challenging them — and similarly aware of past and future. Time itself, both as a context and a commodity, is a crucial factor in our journalism — and how we choose to leverage it will determine what our current journalism becomes. It’s not just about what to publish, what to filter — but about when to publish, when to filter. And there’s something to be said for preserving, to some degree, a filter-first approach to publication: for taking culture in, receptively if not passively, before putting culture out. For not producing — or, at least, for producing strategically. And for creating infrastructures of filtration that balance the obvious benefits of extroversion with the less obvious, but still powerful, benefits of introversion.
