
January 03 2011

19:30

The cognitive surplus hates pigs. Also, Snooki.

Last week, Josh located Clay Shirky’s cognitive surplus — in an epic battle against pigs.

The fact that Angry Birds consumes 200 million minutes of human attention a day, Josh pointed out, suggests an important caveat to the surplus idea: that the increasing popularity of the web — and, with it, the decreasing popularity of television — doesn’t automatically lead to more creativity capital in the world. “Even if the lure of the connected digital world gets people to skimp on the Gilligan’s Island reruns,” he noted, “that doesn’t necessarily mean their replacement behaviors will be any more productive.”

Today brings another caveat to the creativity-from-surplus concept: a finding that “television remains a refuge in the media revolution.” To wit, per The New York Times:

Americans watched more television than ever in 2010, according to the Nielsen Company. Total viewing of broadcast networks and basic cable channels rose about 1 percent for the year, to an average of 34 hours per person per week.

In other words: Each of us Americans spends, on average, nearly five hours boob-tubing it every day.
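The conversion is quick to verify (a throwaway sketch; the 34-hours figure is Nielsen’s, as quoted above):

```python
# Nielsen's figure, as quoted above: 34 hours of TV per person per week.
hours_per_week = 34
hours_per_day = hours_per_week / 7
print(f"{hours_per_day:.2f} hours per day")  # about 4.86 -- "nearly five hours"
```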

Having fallen prey to an epic Jersey Shore marathon over the break, I am in no position to pass judgment on this teeveetastic state of affairs. But it’s worth noting that the cognitive surplus, as an argument and a concept, is predicated on the idea that the future will find us paying significantly less attention to TV than we do now. While the Times’s “more TV than ever” framing may not be as black-and-white as it appears — as Table 2 of Nielsen’s Q3/2009 Three Screen Report makes clear, hours-per-month fluctuations are common, and you wouldn’t want to read too much into this particular blip — the numbers here are a reminder of the fragility of the surplus itself. Even with all the new digital distractions available to us, there’s very little about TV, and our relationship with it, that suggests “decline.” In the matter of wiki versus Snooki, to the extent that the two are mutually exclusive, it’s not at all clear who will emerge victorious.

That’s not to question the nuanced ideas that inform Shirky’s framing of the surplus as time spent engaged in generativity and generosity. But it is to wonder: What happens to the cognitive surplus if the surplus itself never fully shows up?

December 30 2010

01:00

I have found the cognitive surplus, and it hates pigs

2008: Clay Shirky, outlining the basic idea that would become his book Cognitive Surplus:

So how big is that surplus? So if you take Wikipedia as a kind of unit, all of Wikipedia, the whole project — every page, every edit, every talk page, every line of code, in every language that Wikipedia exists in — that represents something like the cumulation of 100 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it’s a back-of-the-envelope calculation, but it’s the right order of magnitude, about 100 million hours of thought.

And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that’s 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus.

2010: Hillel Fuld, citing data from Peter Vesterbacka of Rovio, the Finnish company behind the hit game Angry Birds:

Another mind boggling statistic about Angry Birds, and you should sit down for this one, is that there are 200 million minutes played a day on a global scale. As Peter put it, that number compares favorably to anything, including prime time TV, which indicates that 2011 will be a big year in the shift of advertisers’ attention from TV to mobile.

Some math: 200 million minutes a day / 60 minutes per hour * 365 days per year = 1.2 billion hours a year spent playing Angry Birds.

Or, if Shirky’s estimate is in the right ballpark, about one Wikipedia’s worth of time every month.
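For anyone who wants to recheck the back-of-the-envelope arithmetic, here is the whole chain in a few lines of Python (the inputs are the estimates quoted in these posts — Rovio’s minutes-played number and Shirky’s Wikipedia and TV figures — not independent data):

```python
# Back-of-the-envelope check of the figures above. All inputs are the
# posts' own estimates, not fresh data.

MINUTES_PER_DAY = 200e6      # Angry Birds minutes played daily (per Rovio)
WIKIPEDIA_HOURS = 100e6      # Shirky's estimate for all of Wikipedia
TV_HOURS_PER_YEAR = 200e9    # Shirky's U.S. TV-watching figure, per year

angry_birds_hours_per_year = MINUTES_PER_DAY / 60 * 365
print(f"Angry Birds: {angry_birds_hours_per_year / 1e9:.2f} billion hours/year")
# roughly the 1.2 billion hours a year computed above

wikipedias_per_month = angry_birds_hours_per_year / 12 / WIKIPEDIA_HOURS
print(f"...or about {wikipedias_per_month:.1f} Wikipedias per month")
# roughly one Wikipedia's worth of time every month

tv_wikipedias_per_year = TV_HOURS_PER_YEAR / WIKIPEDIA_HOURS
print(f"TV: {tv_wikipedias_per_year:,.0f} Wikipedias per year")
# Shirky's "2,000 Wikipedia projects a year spent watching television"
```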

Just a lighthearted reminder that, even if the lure of the connected digital world gets people to skimp on the Gilligan’s Island reruns, that doesn’t necessarily mean their replacement behaviors will be any more productive. They could instead bring an ever greater capacity for distraction and disengagement and slingshot precision.

Now, if you’ll excuse me, I have a couple more levels to get three stars on.

[Aside: Note that Angry Birds still has a long way to go to catch up to television: 200 billion hours a year vs. 1.2 billion hours. And the TV number is U.S. only, while the Angry Birds one is global.]

October 04 2010

18:00

A movie with its own backchannel: How “The Social Network” shows our reweaving of conversations

Many of the big-time reviews of The Social Network have focused on the film’s characterization of Mark Zuckerberg, “the youngest billionaire in the world.” Is he an evil genius — or simply a genius? Is he a menace to society, or a savior of it?

To know where the film stands, at any rate, all you have to do is listen to its score — so ominous, so severely simplistic, so straight-out-of-Jaws, that, at moments, you’re sure you see a dorsal fin poking through Zuck’s hoodie. It’s not just that The Social Network is plagued with anxiety about its subject matter; it’s that The Social Network is plagued with anxiety about territory itself. The film’s settings — the dorm rooms, the board rooms, the back rooms — feel tight and dark and crowded, even when they’re not. The spaces suffocate. For a movie named for a collective, The Social Network has a bad case of claustrophobia.

And the BuhDUMBuhDUM brand of anxiety that the film indulges toward its pseudo-protagonist translates to its 500 million other pseudo-protagonists: the “friends” — the film’s corps and chorus, omnipresent if mostly absent — who lurk just beyond the borders of its other confined space: the screen. While its trailer, a work of art unto itself, highlights The 500 Million, people-izing them, implying the profound consequences they hold for human communication and connection, the film deals with their power by marginalizing them. In director David Fincher’s rendering, the individuals we find at the other end of the Internet are numbers, little more. (“Thousand. Twenty-two thousand.”)

But here’s where things, for the Lab’s purposes, get interesting. If Facebook teaches us anything, it’s that people don’t tend to appreciate being blurred together as backgrounds to other people’s stories. And, not content with being marginalized, several of the 500 million have fought back against the film’s downplay of their power — by, simply, asserting it. By creating a backchannel to the movie and contributing to it. People I’ve never seen writing movie reviews before have been reviewing The Social Network in earnest — writing their reactions on their blogs and sending them around on Facebook and Twitter and Tumblr. Others have posted, appropriately enough, directly to Facebook. The background has blossomed to life. In addition to the typical movie-subsidiary stuff — the director/writer/actor interviews, the professional reviews — we’re seeing as well a counterweight to the mainstream narrative: the embrace of the sense that we viewers are not merely viewers at all, but characters. We own this film. We are this film.

The Social Network’s backchannel, in other words, represents the crystallization of a phenomenon playing out across the culture, not least in journalism: the normalization of participation. Until recently, when it came to films, that participation was pretty much limited to the up-or-down vote that was a ticket purchase at the box office. Now, though — via, appropriately/ironically enough, the new architectures for discourse whose foundation Facebook helped to build — our participation is much more than facelessly financial. We augment the film through our public reactions to it. (Sometimes, we augment it even more directly than that.) We challenge the thing-itself quality of the movie by insisting that we are part of the thing in question.

We talk about the problem of context in news: the fact that a single article or segment, while it works fine as a singular narrative, is a poor conduit for the contextual information that people need to understand a story in full. The response to The Social Network is a reminder that our evolving relationship with context extends far beyond the news. It represents, in fact, something of a perfect storm of new media maxims — Shirky’s cognitive surplus, Jarvis’ network economy, Rosen’s people formerly known as the audience — weaving its way into the culture at large. The film suggests something of a critical mass…expressed as, quite literally, a critical mass. Amid our anxieties about the atomization of cultural consumption — the isolation of Netflix, the personalization of choose-your-own-adventure-style news — we’re seeing a corrective in collaborative culture. We’re re-networking ourselves, flattening our relationship with Hollywood as much as with The New York Times.

What’s most noteworthy about that is how completely un-noteworthy it seems. Movies, of course, have always been more than what they are; films have always had as much to do with the social experience outside the theater as the personal experience within it. What’s new, though, and what The Social Network suggests so eloquently, almost in spite of itself, is our ability to transform the sidewalk experience of theater-going, the how’d-you-like-its and what’d-you-thinks, into cultural products of their own. There’s The Social Network, the film…and then there’s The Social Network, the experience — the conversations and contributions and debates and ephemera. And those two things are collapsing into each other, with the film-as-process and the film-as-product increasingly, if by no means totally, merging into one experience. Participation is becoming normative. We know that because it’s also becoming normal.

July 15 2010

16:00

With surplus comes expendability? When the publishing club expands

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, four, and five. — Josh]

It’s a common belief that the average human uses a mere 10 percent of her total brain power. Of course, it’s a myth. “Brain power,” after all, is an impossible quality to define. Are we talking about the kinds of things Mensa tests for, such as computational ability, spatial intelligence, and logical acuity? Do we mean artistic genius, emotional intelligence? And how do we separate these things from the social expression of thinking, the emergence of powerful new ideas and cultural forms, the rebooting of beliefs and ethical norms? Even (and perhaps especially) in brain science, the terms are constantly shifting — as in recent research suggesting that glial and neuronal cells are roughly equal in number, and that glial cells intervene in cognition to a previously unknown extent. Most authorities agree that while a small percentage of neurons are firing at any given moment, over the course of a day the brain is active throughout its entire cellular complement. We use the whole brain — even if not always to our own satisfaction.

While Shirky never mentions it, the 10-percent meme haunts the notion of the cognitive surplus — the linchpin of Shirky’s argument, which remains murky throughout the book. In fact Shirky is not talking about free cognitive power or brain cycles per se, but free time — time we may choose to spend on cognitive or creative work, but which may be spent in other ways as well. Eighteenth-century Britons, in Shirky’s telling, spent newfound funds of time getting pissed on gin. (It’s worth pointing out that this is a very selective view of the 1700s, and the passages in which Shirky discusses gin are unsourced.) If the tools to make use of spare time — not only gin but religion, books, and newspapers, to name a few — seem crude by our lights, so too were the machinations that princes and politicians could undertake to bend spare cycles to their own ends. As our tools for sharing and creating grow more sophisticated, it becomes crucial that we understand whose purposes they truly serve.

Surpluses are complicated things. In a post dark with foreboding, Quiet Babylon’s Tim Maly quotes Ira Basen of the CBC’s Media Watch column on the effects of a surplus of quasi-journalistic documentation at the G20 protests in Toronto: “Perhaps the best way of understanding police behaviour,” Basen writes, “is to recognize that almost everyone in that crowd had some sort of camera-equipped mobile device, which meant that, in the minds of the police, almost everyone was a potential journalist. That meant they could either give special treatment to everyone or to no one. They chose no one.” Maly follows Basen’s logic to its scary ends:

In a network of cheap ubiquitous sensors, any given node becomes disposable. At highly documented events, the rate at which recordings are made far outstrips the rate at which we can view them. Any given photo or video can be lost but the loss is not that great. Any given observer can be beaten, arrested, even killed, and the loss is not that great. At least not that much greater than if it was any other participant.

This is the terrifying endpoint that Basen does not reach. When everyone is a journalist, not only do their fates no longer warrant special attention by the people being covered, their fates no longer warrant special attention by the people consuming their work.

There’s much to be celebrated in the technological changes that have driven the costs of recording, broadcasting, and publishing towards zero. But we do well to remember that with surplus comes expendability. Basen’s concerns notwithstanding, the danger is less to professional journalism than to the nature of the public sphere: the prospect that, faced with the potential marginal costs of spending our cognitive surpluses on oversight, witness, and commentary, we nodes in the network will choose to stick to LOLcats and fanfiction.

July 07 2010

17:30

From prefab paint to the power of typewriters to the Internet: Distrust of the Shallows is nothing new

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, and four. — Josh]

When factory-produced paint was first made available in tubes in the 1840s, it transformed the practices of painters. Previously, paint-making had been part of the artist’s craft: a messy task, ideally handled by assistants, of grinding pigments, measuring solvents, and decanting the resulting concoctions into containers (glass vials and pigs’ bladders were frequently used). The texture of the artist’s work was determined by the need to make paint, but the paints themselves also literally determined the palette, and even to a certain extent the subject matter, of artists’ works. With the advent of cheap, manufactured tube paints, paint could be a sketching medium; it became easier to carry paints into the field to paint en plein air. Renoir even went so far as to say that without tube paints, the Impressionist movement would never have happened.

Looking through new tubes, so to speak, nineteenth-century artists found a new way to look at the world. Rarely will you find an art historian who will complain about the damaging effects of manufactured paints, or talk about ready-made pigments as if they determined the course of nineteenth-century art in some limiting fashion. Technological innovation made possible a creative renascence in painting.

Nicholas Carr begins the second chapter of The Shallows with a similar story, describing the transformation that took place in Nietzsche’s work when the beleaguered, convalescent philosopher purchased his first typewriter. A friend noticed a change in Nietzsche’s work after he began to use the machine — and Nietzsche agreed, noting that “our writing equipment takes part in the forming of our thoughts.”

Observers have long noted the uncanny capacity of the typewriter to transform the very thinking of its users. Indeed the typewriter transformed many aspects of life in the last decades of the nineteenth century, from correspondence and private literacy to the role of women in the workplace — changes both lamentable and liberating. But the machine’s cognitive capacity was particularly disturbing to writers and critics.

Carr traces the advent of the typewriter brain to the concept of neuroplasticity: the notion that the brain is susceptible to changes in structure and function throughout its lifetime. It’s a broad concept, as Carr allows, covering addiction, neural adaptation to the loss of a limb or the mastering of a novel musical instrument, and the sort of changes in working pattern, attention, and even aesthetic sensibility that seem to accompany the advent of new tools. Carr begins in this chapter to trace some of the history of our understanding of the brain’s adaptability, arguing that neuroplasticity is a relatively new discovery in the cognitive sciences. (It isn’t; as this post at the blog Mind Hacks shows, research into various aspects of plasticity has been going on for more than a century.)

Carr’s concern about the effect of the Internet on our brains hinges on the slipperiness of neuroplasticity as an idea. Because after all, there’s good change and bad change, and little way of telling whether the Internet will induce all one or all the other — or (far likelier, if history is any guide) a fair share of both the good and the bad. Carr puts it like this: “Although neuroplasticity provides an escape from genetic determinism, a loophole for free thought and free will, it also imposes its own form of determinism…As particular circuits in our brain strengthen through the repetition of a physical or mental activity, they begin to transform that activity into a habit.”

I have no doubt that the Internet has changed my brain, and in a few of the ways Carr worries about. Some of those changes feel like transformations of consciousness; others feel like worrisome addictions. Thinking of Shirky’s cognitive surplus in particular, I can conceive a peril that Carr might agree with. Shirky’s surplus can be thought of as a newly discovered resource — and it’s in the nature of capital to try to harness such resources. It’s becoming obvious that one way to harness it is by creating systems of reward that neurochemically goose our brains in exchange for access to our spare cycles. Cory Doctorow has aptly described social media, and Facebook in particular, as “Skinner boxes” that reward our brain’s desire to communicate in return for access to our minds and our information.

So I agree with Carr to this extent: As users of new tools, we need to take care. But for our brains’ ability to adapt and change over time, we should be grateful. Looking to the past, we see that new tools have led to new possibilities, new ways of thinking and seeing, again and again. As a writer, I’m curious to find out where the tools of our time will take me; to the extent I’m an historian, I’m very skeptical that we can discern the form those transformations will take in aggregate.

To many art lovers in the nineteenth century, Impressionism looked like the Shallows in its obsession with surfaces and in its overturning of deep-rooted canons of painterly sensibility. But today, most of us are likely grateful for the changes the new tubes wrought in the brains of Renoir and Monet. Perhaps the tubes of our time should be approached in a like spirit.

July 02 2010

16:00

Papering over the bumps: Is the online media ecosystem really flat?

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, and three. — Josh]

In Cognitive Surplus, Clay Shirky adopts the mode of a police procedural, analyzing the means, motives, and opportunities we have to use our cumulative free time in creative and generous ways. It’s a strange move, treating a notional good as the object of criminal activity, but it affords Shirky a simple structure for his book.

Beginning with a chapter on “means,” then, Shirky looks at the tools we now have at our disposal for the sharing of stories, images, and ideas. He doesn’t immediately turn to the usual suspects — Facebook, Twitter, the blogosphere — but instead looks at outpourings of shared concern and interest that have erupted in surprising places. His first example is the explosive outbreak of protest that occurred in South Korea when US-produced beef was reintroduced to markets in spring 2008. South Korea had banned American meat during the bovine spongiform encephalopathy or “mad cow disease” scare in 2003, later reopening its market in a quiet agreement between the two countries’ governments. Protests against this move began among followers of the popular Korean boy band Dong Ban Shin Ki. Exchanging messages in the decidedly non-political forum of the bulletin boards on DBSK’s web site, they ignited a nationwide furor and nearly brought down the South Korean president, Lee Myung-bak.

As Shirky describes it, the fluid and soluble nature of the new media helped to leverage the power of the protests. “[M]edia stopped being just a source of information and became a locus of coordination as well,” Shirky writes, as protesters used not only the DBSK web site but “a host of other conversational online spaces. They were also sending images and text via their mobile phones, not just to disseminate information and opinion but to act on it.”

When I read such stories of burgeoning viral foment, I think of Arthur Machen, a British author of ghost stories writing at the time of the First World War. Weeks after the British retreat from Mons, Machen published a short story called “The Bowmen,” in which he imagined soldiers who died five hundred years earlier at the Battle of Agincourt, led by Saint George, riding out of the sky to rescue an outgunned British force at the Battle of Mons. The story appeared in the London Evening News in September 1914. In the months that followed, parish magazines throughout Britain reprinted the story; and soon, fragments of the tale began to circulate, virally as it were, in the form of rumor and testimony from the combatants themselves. The story grew: Dead German soldiers had been found transfixed by arrows; Saint George and Agincourt’s band of brothers had been joined by winged angels and Joan of Arc. Although Machen sought to publicize the fictional origins of the tale, it had gone viral thanks to the flattened transmedia of newspapers and church gossip.

We’re in Walter Lippmann territory here. In World War I and the World Wide Web alike, we come to the public sphere with a kit of reflexes and assumptions. Of course, unlike angels on the battlefield, mad cow disease is real. The extent of its threat to public health, however, may have more in common with the supernatural dangers faced by German soldiers in 1914; the ways the two stories engage our reflex-kit have much in common. From history, we can take comfort in the knowledge that public opinion could be infected with viral memes before the emergence of the Internet. Can history also help us to cope with the shocks and tremors such rumors induce? Are they the signs of a healthy public sphere, or symptoms of a viral disease? Shirky would proclaim the former; Nicholas Carr likely inclines to the latter diagnosis. But both sides lack a necessary degree of richness and complexity.

The flattening of the media — the Internet’s ability to break down barriers between broadcast and print, between advocacy and information — is recognizable to us all. But it’s worth questioning how truly flat it all has become. Shirky extols the liberating frisson that comes from clicking the “publish now” button familiar to casual bloggers — but he fails to mention that invariably a few of those buttons are hooked up to more pipes than others. He talks about the end of scarcity: the resource-driven economics of print (and even the limits of the electromagnetic spectrum, in the case of broadcast media) are a thing of the past, he observes, and the opportunity to publish is now abundant. But we must recognize that on the Internet, large audiences remain a scarce resource — and they’re largely still in the hands of transmedia conglomerates busy leveraging their powers in the old media of scarcity to dominate traffic.

Is the notion of flatness truly descriptive, or does it merely paper over the bumps? Real differences in the power of platforms exist throughout the digital media, as they did among the analog; the new political economy of communication is largely about shifting those differences around. The bumps used to lie before the doors of access, making it difficult to get published in the first place. Those bumps have been flattened out — but as with an oversized carpet, they’ve popped up elsewhere, in front of the audiences. Sure, you can “publish now.” But who will know that you have published? On the Internet, no one may know that you’re a dog, but they can tell from your traffic and your follower counts whether you’re a celebrity or a major media outlet lurking in the social media. When CBS News has a Facebook account and you can follow CNN on Twitter, there’s little point in pretending that the means of communication have truly been flattened.

But flatland is extending itself everywhere, according to Shirky. “Now that computers and increasingly computerlike phones have been broadly adopted, the whole notion of cyberspace is fading. Our social media tools aren’t an alternative to real life, they are part of it.” No doubt this is true — cyberspace and meatspace are everywhere meeting and interpenetrating. But just as in the “real life” of old, the tools are not created equal. Some still have more leverage than others.

“Ideology addresses very real problems,” Slavoj Žižek has said with unaccustomed clarity, “but in a way that mystifies them.” Flatness in the media is an ideology. It mystifies the bumps and valleys of the real which, as ever, are composed of talent, power, and liberty.

What then is the answer? Carr’s mandarin approach — to leave great thoughts to the great thinkers, to preserve the fiction of another dominant style — isn’t so much idealistic as it is impossible. For the phenomenon that Shirky calls our cognitive surplus has proven (if proof were needed) that curiosity and ingenuity are widely dispersed throughout the population. And without a doubt, technologies that offer a means to furthering those qualities are worth promoting. But an ideology of flatness isn’t the way to promote them. We need to engage the new media tools as if our actions and ideas have real power in the world. The ethical implications of such a stance may be debatable, but they cannot be trivial.

June 30 2010

16:00

Not all free time is created equal: Battles on “Cognitive Surplus”

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here's part one. — Josh]

Put The Shallows into dialogue with Shirky’s Cognitive Surplus, and the latter seems like the book with an actual idea. However smartly dressed, Carr’s concern about the corrosiveness of media is really a reflex, one that’s been twitching ever since Socrates fretted over the dangers of the alphabet. Shirky’s idea — that modern life produces a surplus of time, which people have variously spent on gin, television, and now the Internet — is something to sink one’s teeth into. Here’s his formulation:

This book is about the novel resource that has appeared as the world’s cumulative free time is addressed in aggregate. The two most important transitions allowing us access to this resource have already happened — the buildup of well over a trillion hours of free time each year on the part of the world’s educated population, and the invention and spread of public media that enable ordinary citizens, previously locked out, to pool that free time in pursuit of activities they like or care about.

I remember reading an early essay Shirky wrote about this idea and finding it enormously compelling. Perhaps that’s because like Shirky I grew up in the 1970s, whiling away many a half-hour in front of Gilligan’s Island reruns. If only I had been able to pursue activities I liked or cared about, rather than burn off my extra cognitive cycles by consuming mass-market drivel…

Only hang on — I did pursue such activities, as I recall. I played in the woodlot near my friend’s house, fished in an actual river, worked a paper route, watched ant colonies go to war in the backyard. I rode my bicycle to the library.

Child’s play, right? Cognitive Surplus is about a specific kind of free time: not the Hundred-Acre-Wood or the endless summer, but the stock of leisure hours produced by modernity, and the rise of technologies that make it possible to spend that time in engaging ways.

And yet the notion of free time itself should make us suspicious, shouldn’t it? “Free time” is something born of an industrial economics of time, a commoditized temporality. Leisure is a boon granted by the system — a perk, a benny. Compensation. And as long as it helps us recharge our batteries and never keeps us from being productive, high-performance workers, free time isn’t free.

What if this enormous new resource — billions of hours of “free time” — might actually be a product of a machine that’s constantly reproducing and extending itself through us? Gin at least was a release from the shops and trades of early modern life; TV too provides counterpoint to the workday. But with the Internet, for creative-class types at least, we entertain ourselves with the very tools we spend our work time using.

This is a good time to name-check Herbert Marcuse. It’s also where Nick Carr’s understanding of intellectual and creative work begins to seem more attractive. Because for Carr such things are not leisure-time activities; they’re at the heart of the human enterprise.

I’m still excited by Shirky’s idea. But I want to bring Carr’s highbrow concern for the vital uses of cognition, contemplation, and communication to bear upon it. The technologies Shirky celebrates present us with a choice: do we use them as the means of liberation, or as Skinner boxes to while away the off-hours? As liberators they can be incredibly powerful; as producers of auto-stimulation, they’re highly efficient, and incredibly seductive.

This choice — between labor and work, between alienation and freedom — is an ancient one. And in facing it, technology is only a means, and never an end or answer.

June 29 2010

16:00

Reading isn’t just a monkish pursuit: Matthew Battles on “The Shallows”

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's a former rare books librarian here at Harvard, author of Library: An Unquiet History, and one of the cofounders of HiLobrow.com, which Time just named one of the year's best blogs. He's reading and reacting to two alternate-universe summer blockbusters: Clay Shirky's Cognitive Surplus: Creativity and Generosity in a Connected Age and Nicholas Carr's The Shallows: What the Internet is Doing to Our Brains. (We've written about both.) Over the next several weeks, we'll be running Matthew's ongoing twinned review. —Josh]

Early in The Shallows, Nick Carr stirringly describes what he sees at stake in our time:

For the last five centuries, ever since Gutenberg’s printing press made book reading a popular pursuit, the linear, literary mind has been at the center of art, science, and society. As supple as it is subtle, it’s been the imaginative mind of the Renaissance, the rational mind of the Enlightenment, the inventive mind of the Industrial Revolution, even the subversive mind of Modernism. It may soon be yesterday’s mind.

It’s an inspiring image, this picture of the modern mind arrayed in the glories of progress and possibility.

It’s also wrong.

As readers of my site, library ad infinitum, likely know by now, I’m suspicious of any pronouncements that begin with Gutenberg. To say that the printing press was an agent of change, or that moveable type inaugurated a series of transformations in world culture, is reasonable, if very preliminary; but to treat the goldsmith from Mainz as modernity’s master builder is simply wrong: wrong on the biography, wrong on the facts, wrong from the perspective of a theory of history.

Biography

Gutenberg and his investors were trying to corner the market in Bibles—a market that already existed. Time made him its Person of the Millennium, but Gutenberg was no Leonardo, no Michelangelo, no Descartes. The fact that he wasn’t — that he was a man of no particular account in his own time — drives the subsequent story of his invention, moveable type, in interesting ways — ways too complex to boil down to the kind of simplistic formula Carr likes to proclaim.

Facts

Moveable type is not what made book reading a popular pursuit. That it played a role is not in doubt — although it may just as easily be said that the increasing popularity of book-reading spurred transformations in the technology, pushing inventors to find ways to increase the output of the press. But Gutenberg’s invention, however epochal it appears in retrospect, is rightly seen not as an origin point but as a station along the way — an important one, a real Penn Station or King’s Cross, with lots of branch lines and spurs sprouting from its many platforms — but a station nonetheless.

Theory of history

Here’s the most esoteric part, and the most vital. There is no unitary mind at work in history, neither a plan nor a Geist, no questing Spirit of Modernity or Truth or Righteousness. There’s a damaging irony at work in the model to which Carr seems to subscribe: for if the modern mind truly is the direct descendant of Gutenberg’s invention, then so is the Internet. And like the host of cultural innovations that partook of the possibilities of the press — humanism, the Reformation, rationalism, the modern novel — critics fear its disruptive powers. In retrospect, we mistake those innovations for the charted course of history; to our counterparts in their respective eras, they looked like the Internet does to Carr: exciting but disruptive, soothing but dangerous, seductive but corrosive.

What is the four-sided Mind of which Nick Carr speaks — this imaginative, rational, inventive, subversive angel striding through the ages, showering the generations with its beneficence? Who is this promethean shapeshifter, whom we’re now in our churlishness binding to some rock for the crows to feast on its innards? What Carr is describing isn’t a historical reality — it’s a god. And it does not exist.

What troubles me most in the first chapter of The Shallows is the simplistic definition of reading Carr offers. It may seem strange to call it simplistic, as the epithets that characterize reading at its best for Carr all derive from the matrix of “complex,” “subtle,” and “rich.” But he writes as if these are all that reading has been (ever since Gutenberg, anyway), as if the kind of reading he ascribes to the web — quick and fitful, easily distracted — is a new and disruptive spirit. But dipping and skimming have been modes available to readers for ages. Carr makes one kind of reading — literary reading, in a word — into the only kind that matters. But these and other modes of reading have long coexisted, feeding one another, needing one another. By setting them in conflict, Carr produces a false dichotomy, pitting the kind of reading many of us find richest and most rewarding (draped with laurels and robes as it is) against the quicksilver mode (which, we must admit, is vital and necessary).

In ecosystems like the Gulf of Mexico, the shallows are crucial. They’re the nurseries, where larval creatures feed and grow in relative safety, liminal zones where salt and sweet water mix, where light meets muck, where life learns to contend with extremes. The Internet, in this somewhat dubious metaphor, is no blowout — it’s a flourishing new zone in the ecosystem of reading and writing. And with the petrochemical horror in the Gulf growing daily, we’re learning that the shallows, too, need their champions.

June 28 2010

08:12

Nieman Journalism Lab: Clay Shirky’s ‘Cognitive Surplus’

Nieman Journalism Lab has a review and analysis of media theorist Clay Shirky’s latest book and concept. “Is creating and sharing always a more moral choice than consuming?” asks reviewer Megan Garber.

Cognitive Surplus, in other words – the book, and the concept it’s named for – pivots on paradox: The more abundant our media, the less specific value we’ll place on it, and, therefore, the more generally valuable it will become. We have to be willing to waste our informational resources in order to preserve them. If you love something…set it free.




June 25 2010

15:00

Clay Shirky’s “Cognitive Surplus”: Is creating and sharing always a more moral choice than consuming?

In 1998, People magazine, trying to figure out how to use this new-ish tool called the World Wide Web, launched a poll asking readers to vote for People.com’s Most Beautiful Person of the year. There were the obvious contenders — Kate Winslet, Leonardo DiCaprio — and then, thanks to a Howard Stern fan named Kevin Renzulli, a write-in candidate emerged: Henry Joseph Nasiff, Jr., better known as Hank, the Angry Drunken Dwarf. A semi-regular presence on Stern’s shock-jock radio show, Hank — whose nickname pretty well sums up his act — soon found himself on the receiving end of a mischievous voting campaign that spread from Renzulli to Stern to online message boards and mailing lists.

By the time the poll closed, Hank had won — handily — with nearly a quarter of a million votes. (DiCaprio? 14,000.)

In Cognitive Surplus, his fantastic follow-up to Here Comes Everybody, Clay Shirky explains what HTADD can teach us about human communications: “If you give people a way to act on their desire for autonomy and competence or generosity and sharing, they might take you up on it,” he notes. On the other hand: “[I]f you only pretend to offer an outlet for those motivations, while actually slotting people into a scripted experience, they may well revolt.”

Scarcity vs. abundance

Shirky may be a pragmatist and a technologist and, in the best sense, a futurist; what gives his thinking its unique verve, though, is that he also thinks like an economist. To read his work is to be presented with a world defined by the relationships it contains: the exchanges it fosters, the negotiations it demands, the tugs and torques of transaction. In the Shirkian vision of our information economy, supply-and-demand, scarcity-and-abundance, and similar polar pairings aren’t merely frames for coaxing complex realities into bite-sized specimens of simplicity; they’re very real tensions that, in their polarity, act as characters in our everyday life.

In Cognitive Surplus, as in Here Comes Everybody, the protagonist is abundance itself. Size, you know, matters. And, more specifically, the more matters: The more people we have participating in media, and the more people we have consuming it — and the more people we have, in particular, creating it — the better. Not because bigger is inherently better than smaller, but because abundance changes the value proposition of media as a resource. “Scarcity is easier to deal with than abundance,” Shirky points out, “because when something becomes rare, we simply think it more valuable than it was before, a conceptually easy change.” But “abundance is different: its advent means we can start treating previously valuable things as if they were cheap enough to waste, which is to say cheap enough to experiment with.”

Cognitive Surplus, in other words — the book, and the concept it’s named for — pivots on paradox: The more abundant our media, the less specific value we’ll place on it, and, therefore, the more generally valuable it will become. We have to be willing to waste our informational resources in order to preserve them. If you love something…set it free.

Love vs. money

So the book’s easiest takeaway, as far as journalism goes, is that we should be willing to experiment with our media: to be open to the organic, to embrace new methods and modes of production and consumption, to trust in abundance. But, then, that’s both too obvious (does anyone really think we shouldn’t be experimenting at this point?) and too reductive a conclusion for a book whose implied premise is the new primacy of communality itself. Shirky isn’t simply asking us to rethink our media systems (although, sure, that’s part of it, too); he’s really asking us to embrace collectivity in our information — in its consumption, but also in its creation.

And that’s actually a pretty explosive proposition. The world of “post-Gutenberg economics,” as Shirky calls it — a world defined, above all, by the limitations of the means of (media) production, be they printing presses or broadcast towers — was a world that ratified the individual (the individual person, the individual institution) as the source of informational authority. This was by necessity rather than, strictly, design: In an economy where freedom of the press is guaranteed only to those who own one, the owners in question will have to be limited in number; distributed authority is also diffused authority. When the power of the press belongs to everyone, the power of the press belongs to no one.

But now we’re moving, increasingly and probably inevitably, toward a media world of distributed authority. That’s a premise not only of Cognitive Surplus, but of the majority of Shirky’s writings — and it’s a shift that is, it hardly needs to be said, a good thing. But it also means, to extrapolate a bit from the premises of the cognitive surplus, that the reflexively individualistic assumptions we often hold about the media — from the primacy of brand structures to the narrative authority of the individual correspondent to the notion of the singular article/issue/publication as a self-contained source of knowledge itself — were not immutable principles (were not, in fact, principles at all), but rather circumstantial realities. Realities that can — and, indeed, will — change as our circumstances do.

And now that scarcity is being replaced by abundance, our whole relationship with our media is changing. The new communality of our news technologies — the web’s discursive impulses, the explosiveness of the hyperlink — means that news is, increasingly, a collective endeavor. And we’re just now beginning to see the implications of that shift. In Here Comes Everybody, Shirky focused on Wikipedia as a repository of our cognitive surplus; in its sequel, he focuses on Ushahidi and LOLCats and PickupPal and the Pink Chaddi campaign and, yes, the promotional efforts made on behalf of our friend Hank, the Angry Drunken Dwarf. What those projects have in common is not simply the fact that they’re the result of teamwork, the I serving the we; they’re also the result of the dissolution of the I into the broader sphere of the we. The projects Shirky discusses are, in the strictest sense, authorless; in a wikified approach to media, individual actors feed — and are dissolved into — the communal.

And: They’re fine with that. Because the point, for them, isn’t external reward, financial or reputational or otherwise; it’s the intrinsic pleasure of creation itself. Just as we’re hard-wired to love, Shirky argues, we are hard-wired to create, to produce, to share. As he reminds us: “amateur” derives from the Latin amare. A corollary to “the Internet runs on love” is that it does so because we run on love.

Creativity vs. creation

As a line of logic, that’s doubly provocative. First — without getting into the whole “is he or isn’t he (a web utopian)?” debate — there’s the chasm between the shifts Shirky describes and what we currently tend to think of when we think of The Media. Our news economy is nowhere near comprehensively communal. It’s one whose architecture is built on the coarser, capitalistic realities of individuality: brands, bylines, singular outlets that treat information as proprietary. It’s one where the iPad promises salvation-of-brands by way of isolation-of-brands — and where the battle of open web-vs.-walled garden, the case of Google v. Apple, seems to be locked, at the moment, in a stalemate.

It’s an economy, in other words, that doesn’t run on love. It runs on more familiarly capitalistic currencies: money, power, self-interest.

But, then, the trends Shirky describes are just that. He’s not defining a holistic reality so much as identifying small tears in the fabric of our textured media system that will, inevitably, expand. Cognitive Surplus deals with trajectory. And the more provocative aspect of the book, anyway, is one built into the framework of the cognitive surplus itself: the notion of creativity as a commodity. A key premise of the surplus idea is that television has sucked up our creative energies, siphoning them away from the communality of culture and allowing them to pool, unused, in the moon-dents in our couches. And that, more to the point, with the web gradually reclaiming our free time, we can refocus those energies of creative output. Blogging, uploading photos, editing Wikipedia entries — these are all symptoms of the surplus put to use. And they should be celebrated as such.

That rings true, almost viscerally: Not only has the web empowered our expression as never before, but I think we all kind of assumed that Married…with Children somehow portended apocalypse. And you don’t have to be a Postmanite to appreciate the sense-dulling effect of the TV screen. “Boob tube,” etc.

But the problem with TV, in this framing, is its very teeveeness; the villain is the medium itself. The differences in value between, say, The Wire and Wipeout, here, don’t much matter — both are TV shows, and that’s what defines them. Which means that watching them is a passive pursuit. Which means that watching them is, de facto, a worse way — a less generous way, a more selfish way — to spend time than interacting online. As Shirky puts it: “[E]ven the banal uses of our creative capacity (posting YouTube videos of kittens on treadmills or writing bloviating blog posts) are still more creative and generous than watching TV. We don’t really care how individuals create and share; it’s enough that they exercise this kind of freedom.”

The risk in this, though, for journalism, is to value creation over creativity, output over impulse. Steven Berlin Johnson may have been technically correct when, channeling Jeff Jarvis, he noted that in our newly connected world, there is something profoundly selfish in not sharing; but there’s a fine line between Shirky’s eminently correct argument — that TV consumption has been generally pernicious in its very passivity — and a commodified reading of time itself. Is the ideal to be always producing, always sharing? Is creating cultural products always more generous, more communally valuable, than consuming them? And why, in this context, would TV-watching be any different from that quintessentially introverted practice that is reading a book?

Part of Shirky’s immense appeal, as a thinker and a writer, is his man-of-science/man-of-faith mix; he is a champion of what can be, collectively — but, at the same time, inherent in his work is a deep suspicion of inherence itself. (Nothing is sacred, but everything might be.) And if we’re looking for journalistic takeaways from Cognitive Surplus, one might be this: We need to be similarly respectful of principles and open to challenging them — and similarly aware of past and future. Time, both as a context and a commodity, is a crucial factor in our journalism — and how we choose to leverage it will determine what our current journalism becomes. It’s not just about what to publish, what to filter — but about when to publish, when to filter. And there’s something to be said for preserving, to some degree, a filter-first approach to publication: for taking culture in, receptively if not passively, before putting culture out. For not producing — or, at least, for producing strategically. And for creating infrastructures of filtration that balance the obvious benefits of extroversion with the less obvious, but still powerful, benefits of introversion.
