
December 13 2010

15:00

What will 2011 bring for journalism? Clay Shirky predicts widespread disruptions for syndication

Editor’s Note: To mark the end of the year, we at the Lab decided to ask some of the smartest people we know what they thought 2011 would bring for journalism. We’re very pleased that so many of them agreed to share their predictions with us.

Over the next few days, you’ll hear from Steve Brill, Vivian Schiller, Michael Schudson, Markos Moulitsas, Kevin Kelly, Geneva Overholser, Adrian Holovaty, Jakob Nielsen, Evan Smith, Megan McCarthy, David Fanning, Matt Thompson, Bob Garfield, Matt Haughey, and more.

We also want to hear your predictions: take our Lab reader poll and tell us what you think we’ll be talking about in 2011. We’ll share those results later this week.

To start off our package of predictions, here’s Clay Shirky. Happy holidays.

The old news business model has had a series of shocks in the 15 or so years we’ve had a broadly adopted public web. The first was the loss of geographic limits to competition (every outlet could reach any reader, listener, or viewer). Next was the loss of progressive layers of advertising revenue (the rise of Monster and craigslist et alia, as well as the “analog dollars to digital dimes” problem). Then there was the inability to charge readers easily without eviscerating the advertising rate base (the failure of micropayments and paywalls as general-purpose solutions).

Next up for widespread disruption, I think, is syndication, a key part of the economic structure of the news business since the founding of Havas in the early 19th century. As with so many parts of a news system based on industrial economics, that model is now under pressure.

As Jonathan Stray pointed out in “The Google/China Hacking Case” and Nick Carr pointed out in “Google in the Middle,” the numerator of organizations producing original news is tiny — absolutely tiny — compared to the denominator of those re-publishing that news. Stray notes that only 7 of the 121 outlets running the China story relied mainly on original reporting; the vast majority simply ran wire service copy. Carr similarly pointed out that Google News showed 11,264 separate outlets for the Somali pirate story in 2009, almost all of them re-running the same couple of stories. (I was similarly surprised, last year, to discover that syndicated content outweighed locally created content in my old hometown paper by a 2:1 margin.)

The idea that syndication should be different in a digital era has been around for a while now. Jeff Jarvis’s formulation — “Do what you do best and link to the rest” — dates from 2007, and last year the AP started talking about holding back some stories from subscribers in order to drive up their PageRank. What could make 2011 the year of general restructuring is Google’s attempt to give credit where credit is due, in the words of their blog post, by offering tags that identify original and preferred sources for syndicated stories.

This kind of linking, traffic driving, and credit are natively web-like ideas, but they are also inimical to the older logic of syndication. Put simply, syndication makes little sense in a world with URLs. When news outlets were segmented by geography, having live human beings sitting around in ten thousand separate markets deciding which stories to pull off the wire was a service. Now it’s just a cost.

Giving credit where credit is due will reward original work, whether scoops, hot news, or unique analysis or perspective. This will be great for readers. It may not, however, be so great for newspapers, or at least not for their revenues, because most of what shows up in a newspaper isn’t original or unique. It’s the first four grafs of something ripped off the wire and lightly re-written, a process repeated countless times a day with no new value being added to the story.

Taken to its logical conclusion, giving credit where credit is due will mean things like 11,260 or so outlets getting out of the business of re-running the same three versions of the Somali pirate story. If Reuters has the best version, why shouldn’t people just read it from Reuters?

Like other forces brought to bear by the web, there’s no getting around this one — rewards for originality are what we want, not just as consumers but as citizens — but creating an environment that generates those rewards will also mean dismantling the syndication model we’ve had since Havas first set up shop.

July 27 2010

18:00

Uneven depths: Why the printed page has always had room for scholarly brilliance and dirty jokes

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, four, five, and six. — Josh]

In a chapter called “The Deepening Page,” Nicholas Carr offers a swift and graceful account of the history of writing. He traces the rise of logic, coherence, and depth from magical formulae scratched on potsherds and wax tablets by the ancients, through the pious allusions of the Middle Ages, to the graceful periodic sentences of the eighteenth century. Such prose represented not only a formal triumph but a neural one as well. “To read a book was to practice an unnatural process of thought,” writes Carr, “one that demanded sustained, unbroken attention to a single, static object.”

The reading of a sequence of printed pages was valuable not just for the knowledge readers acquired from the author’s words but for the way those words set off intellectual vibrations within their own minds. In the quiet spaces opened up by the prolonged, undistracted reading of a book, people made their own associations, drew their own inferences and analogies, fostered their own ideas. They thought deeply as they read deeply.

To Carr, the story of manuscript, printing, and publishing is the rise of the “deep page,” with modern literature as the apotheosis of literacy. The process a grimy Gutenberg started in the mid fifteenth century culminates in Wallace Stevens, whose poem “The House Was Quiet and the World Was Calm” glories in the deep page: “The quiet was part of the meaning, part of the mind / The access of perfection to the page.”

The trouble is, it didn’t feel this way to many people going through these changes at various times in the past. Not to the manuscript bookseller Vespasiano da Bisticci, who condemned the coarsening presence of printed volumes in libraries devoted to books in manuscript; not to Pope Paul IV, who started the Index of Prohibited Books during the so-called “incunable era” following the advent of moveable type; not to Pope Urban VIII, who tangled with Galileo; not to Jonathan Swift and Alexander Pope; not to the French monarchy in advance of the Revolution.

The printing press never only produced the kind of deep reading we admire and privilege today. It also produced propaganda and misinformation, penny dreadfuls and comic books offensive to public morality, pornography, self-help books, and much that was generally despised and rejected by polite culture. Any account of the history of “The Gutenberg Era” that lacks these is incomplete — just as any picture of the Internet that privileges LOLcats and 4chan is insufficient. We must consider both — for pornography, misinformation, and sheer foolishness have thrived from the age of incunables to the advent of the Internet. And the deep-reading brain evolved in the midst of it all.

In his report about ROFLCon in last week’s New York Times Magazine, Rob Walker argues that open culture needs the slipshod, the shifty, and the shallow in order to maintain its health.

The more traditional pundits and gurus who talk about the Internet often seem to want to draw strict boundaries between old mass-media culture and the more egalitarian forms taking shape online — and between Internet life and life in the physical world…Sometimes the pointless-seeming jokes that spring from the Web seem to be calling a bluff and showing a truth: This is what egalitarian cultural production really looks like, this is what having unbounded spaces really entails, this is what anybody-can-be-famous means, this is how the hunger for “moar” gets sated, this is what’s burbling in the hive mind’s id. But the real point is that to pretend otherwise isn’t denying the Internet — it’s denying reality.

Walker references a talk the computer historian Jason Scott gave at the first ROFLCon in 2008 in which he discussed the shallow and seemingly antisocial memes spread by communications networks long before the Internet. Scott discusses electric media going back to the telegraph, but the printing press teemed with the shallow stuff well before the advent of telegraphy. Readers in the 18th century in particular were offered a tantalizing selection of bawdy images and tawdry tales. As the great book historian Robert Darnton has shown, the age of Voltaire and Rousseau was awash in erotica, dirty cartoons, and fancifully libelous tales of the rich and famous.

So where did the deep page come from? Not merely from ignoring the dross — for many alloys exist between poetry and pornography, and at any given moment it’s never entirely clear which is which. Jonathan Swift, writing his “Battle of the Books” in 1704, didn’t even bother with the bawdy writers. Swift’s satire depicts a war between ancient and modern authors, with the ancients on the side of sweetness and light; it was Descartes and the classicist Richard Bentley who drew his ire as much as any Grub Street hack. Swift and other early modern readers engaged in an encounter with a murky multiplicity of shifting possibilities in print. And it was that multiplicity that produced the deep page — presumably along with the brain circuitry underlying it.

At the edges of the deep page lie miles of shallow estuaries, stinking, muddy — and teeming with life. Our plastic brains have been navigating their effluents for a very long time.

July 23 2010

09:57

‘To the skimmer, all stories look the same and are worth the same’

Nicholas Carr has an interesting piece on Nieman Reports discussing the speed of news consumption online and the impact on journalism.

According to Carr, “skimming” of news is a threat to serious journalism, which requires “deep, undistracted modes of reading and thinking”.

On the web, skimming is no longer a means to an end but an end in itself. That poses a huge problem for those who report and publish the news. To appreciate variations in the quality of journalism, a person has to be attentive, to be able to read and think deeply. To the skimmer, all stories look the same and are worth the same.

The practice turns news into a “fungible commodity”, he writes, where the lowest-cost provider “wins the day”.

The news organization committed to quality becomes a niche player, fated to watch its niche continue to shrink. If serious journalism is going to survive as something more than a product for a small and shrinking elite, news organizations will need to do more than simply adapt to the net. They’re going to have to be a counterweight to the net.

See his full post here.

July 15 2010

16:00

With surplus comes expendability? When the publishing club expands

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, four, and five. — Josh]

It’s a common belief that the average human uses a mere 10 percent of her total brain power. Of course, it’s a myth. “Brain power,” after all, is an impossible quality to define. Are we talking about the kinds of things Mensa tests for, such as computational ability, spatial intelligence, and logical acuity? Do we mean artistic genius, emotional intelligence? And how do we separate these things from the social expression of thinking, the emergence of powerful new ideas and cultural forms, the rebooting of beliefs and ethical norms? Even (and perhaps especially) in brain science, the terms are constantly shifting — as in recent research suggesting that glial and neuronal cells exist in roughly equal numbers, and that glial cells intervene in cognition to a previously unknown extent. Most authorities agree that while only a small percentage of neurons are firing at any given moment, over the course of a day the brain is active throughout its entire cellular complement. We use the whole brain — even if not always to our own satisfaction.

While Shirky never mentions it, the 10-percent meme haunts the notion of the cognitive surplus — the linchpin of Shirky’s argument, which remains murky throughout the book. In fact Shirky is not talking about free cognitive power or brain cycles per se, but free time — time we may choose to spend on cognitive or creative work, but which may be spent in other ways as well. Eighteenth-century Britons, in Shirky’s telling, spent newfound funds of time getting pissed on gin. (It’s worth pointing out that this is a very selective view of the 1700’s, and the passages in which Shirky discusses gin are unsourced.) If the tools to make use of spare time — not only gin but religion, books, and newspapers, to name a few — seem crude by our lights, so too were the machinations that princes and politicians could undertake to bend spare cycles to their own ends. As our tools for sharing and creating grow more sophisticated, it becomes crucial that we understand whose purposes they truly serve.

Surpluses are complicated things. In a post dark with foreboding, Quiet Babylon’s Tim Maly quotes Ira Basen of the CBC’s Media Watch column on the effects of a surplus of quasi-journalistic documentation at the G20 protests in Toronto: “Perhaps the best way of understanding police behaviour,” Basen writes, “is to recognize that almost everyone in that crowd had some sort of camera-equipped mobile device, which meant that, in the minds of the police, almost everyone was a potential journalist. That meant they could either give special treatment to everyone or to no one. They chose no one.” Maly follows Basen’s logic to its scary ends:

In a network of cheap ubiquitous sensors, any given node becomes disposable. At highly documented events, the rate at which recordings are made far outstrips the rate at which we can view them. Any given photo or video can be lost but the loss is not that great. Any given observer can be beaten, arrested, even killed, and the loss is not that great. At least not that much greater than if it was any other participant.

This is the terrifying endpoint that Basen does not reach. When everyone is a journalist, not only do their fates no longer warrant special attention by the people being covered, their fates no longer warrant special attention by the people consuming their work.

There’s much to be celebrated in the technological changes that have driven the costs of recording, broadcasting, and publishing towards zero. But we would do well to remember that with surplus comes expendability. Basen’s concerns notwithstanding, the danger is less to professional journalism than to the nature of the public sphere: the prospect that, faced with the potential marginal costs of spending our cognitive surpluses on oversight, witness, and commentary, we nodes in the network will choose to stick to LOLcats and fanfiction.

July 07 2010

17:30

From prefab paint to the power of typewriters to the Internet: Distrust of the Shallows is nothing new

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, three, and four. — Josh]

When factory-produced paint first became available in tubes in the 1840s, it transformed the practices of painters. Previously, paint-making had been part of the artist’s craft (a messy task, ideally handled by assistants): grinding pigments, measuring solvents, and decanting the resulting concoctions into containers (glass vials and pigs’ bladders were frequently used). The texture of the artist’s work was shaped by the need to make paint, and the paints themselves also literally determined the palette, and even to a certain extent the subject matter, of the finished work. With the advent of cheap, manufactured tube paints, paint could be a sketching medium; it became easier to carry paints into the field to paint en plein air. Renoir even went so far as to say that without tube paints, the Impressionist movement would never have happened.

Looking through new tubes, so to speak, nineteenth-century artists found a new way to look at the world. Rarely will you find an art historian who will complain about the damaging effects of manufactured paints, or talk about ready-made pigments as if they determined the course of nineteenth-century art in some limiting fashion. Technological innovation made possible a creative renascence in painting.

Nicholas Carr begins the second chapter of The Shallows with a similar story, describing the transformation that took place in Nietzsche’s work when the beleaguered, convalescent philosopher purchased his first typewriter. A friend noticed a change in Nietzsche’s work after he began to use the machine — and Nietzsche agreed, noting that “our writing equipment takes part in the forming of our thoughts.”

Observers have long noted the uncanny capacity of the typewriter to transform the very thinking of its users. Indeed the typewriter transformed many aspects of life in the last decades of the nineteenth century, from correspondence and private literacy to the role of women in the workplace — changes both lamentable and liberating. But it was the machine’s effect on cognition that was particularly disturbing to writers and critics.

Carr traces the advent of the typewriter brain to the concept of neuroplasticity: the notion that the brain is susceptible to changes in structure and function throughout its lifetime. It’s a broad concept, as Carr allows, covering addiction, neural adaptation to the loss of a limb or the mastering of a novel musical instrument, and the sort of changes in working pattern, attention, and even aesthetic sensibility that seem to accompany the advent of new tools. Carr begins in this chapter to trace some of the history of our understanding of the brain’s adaptability, arguing that neuroplasticity is a relatively new discovery in the cognitive sciences. (It isn’t; as this post at the blog Mind Hacks shows, research into various aspects of plasticity has been going on for more than a century.)

Carr’s concern about the effect of the Internet on our brains hinges on the slipperiness of neuroplasticity as an idea. After all, there is good change and bad change, and little way of telling whether the Internet will induce all of one, all of the other, or (far likelier, if history is any guide) a fair share of both the good and the bad. Carr puts it like this: “Although neuroplasticity provides an escape from genetic determinism, a loophole for free thought and free will, it also imposes its own form of determinism…As particular circuits in our brain strengthen through the repetition of a physical or mental activity, they begin to transform that activity into a habit.”

I have no doubt that the Internet has changed my brain, and in a few of the ways Carr worries about. Some of those changes feel like transformations of consciousness; others feel like worrisome addictions. Thinking of Shirky’s cognitive surplus in particular, I can conceive of a peril that Carr might agree with. Shirky’s surplus can be thought of as a newly discovered resource — and it’s in the nature of capital to try to harness such resources. It’s becoming obvious that one way to harness it is by creating systems of reward that neurochemically goose our brains in exchange for access to our spare cycles. Cory Doctorow has aptly described social media, and in particular Facebook, as “Skinner boxes” that reward our brain’s desire to communicate in return for access to our minds and our information.

So I agree with Carr to this extent: As users of new tools, we need to take care. But for our brains’ ability to adapt and change over time, we should be grateful. Looking to the past, we see that new tools have led to new possibilities, new ways of thinking and seeing, again and again. As a writer, I’m curious to find out where the tools of our time will take me; to the extent I’m an historian, I’m very skeptical that we can discern the form those transformations will take in aggregate.

To many art lovers in the nineteenth century, Impressionism looked like the Shallows in its obsession with surfaces and in its overturning of deep-rooted canons of painterly sensibility. But today, most of us are likely grateful for the changes the new tubes wrought in the brains of Renoir and Monet. Perhaps the tubes of our time should be approached in a like spirit.

July 02 2010

16:00

Papering over the bumps: Is the online media ecosystem really flat?

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one, two, and three. — Josh]

In Cognitive Surplus, Clay Shirky adopts the mode of a police procedural, analyzing the means, motives, and opportunities we have to use our cumulative free time in creative and generous ways. It’s a strange move, treating a notional good as the object of criminal activity, but it affords Shirky a simple structure for his book.

Beginning with a chapter on “means,” then, Shirky looks at the tools we now have at our disposal for the sharing of stories, images, and ideas. He doesn’t immediately turn to the usual suspects — Facebook, Twitter, the blogosphere — but instead looks at outpourings of shared concern and interest that have erupted in surprising places. His first example is the explosive outbreak of protest that occurred in South Korea when US-produced beef was reintroduced to markets in spring 2008. South Korea had banned American meat during the bovine spongiform encephalopathy, or “mad cow disease,” scare in 2003, later reopening its market in a quiet agreement between the two countries’ governments. Protests against this move began among followers of the popular Korean boy band Dong Bang Shin Ki. Exchanging messages in the decidedly non-political forum of the bulletin boards on DBSK’s web site, they ignited a nationwide furor and nearly brought down the South Korean president, Lee Myung-bak.

As Shirky describes it, the fluid and soluble nature of the new media helped to leverage the power of the protests. “[M]edia stopped being just a source of information and became a locus of coordination as well,” Shirky writes, as protesters used not only the DBSK web site but “a host of other conversational online spaces. They were also sending images and text via their mobile phones, not just to disseminate information and opinion but to act on it.”

When I read such stories of burgeoning viral foment, I think of Arthur Machen, a British author of ghost stories writing at the time of the First World War. In the early weeks of the war, Machen published a short story called “The Bowmen,” in which he imagined soldiers who had died five hundred years earlier at the Battle of Agincourt, led by Saint George, riding out of the sky to rescue an outgunned British force at the Battle of Mons. The story appeared in the London Evening News in September 1914. In the months that followed, parish magazines throughout Britain reprinted the story; and soon, fragments of the tale began to circulate, virally as it were, in the form of rumor and testimony from the combatants themselves. The story grew: Dead German soldiers had been found transfixed by arrows; Saint George and Agincourt’s band of brothers had been joined by winged angels and Joan of Arc. Although Machen sought to publicize the fictional origins of the tale, it had gone viral thanks to the flattened transmedia of newspapers and church gossip.

We’re in Walter Lippmann territory here. In World War I and the World Wide Web alike, we come to the public sphere with a kit of reflexes and assumptions. Of course, unlike angels on the battlefield, mad cow disease is real. The extent of its threat to public health, however, may have more in common with the supernatural dangers faced by German soldiers in 1914; the ways the two stories engage our reflex-kit have much in common. From history, we can take comfort in the knowledge that public opinion could be infected with viral memes before the emergence of the Internet. Can history also help us to cope with the shocks and tremors such rumors induce? Are they the signs of a healthy public sphere, or symptoms of a viral disease? Shirky would proclaim the former; Nicholas Carr likely inclines to the latter diagnosis. But both sides lack a necessary degree of richness and complexity.

The flattening of the media — the Internet’s ability to break down barriers between broadcast and print, between advocacy and information — is recognizable to us all. But it’s worth questioning how truly flat it all has become. Shirky extolls the liberating frisson that comes from clicking the “publish now” button familiar to casual bloggers — but he fails to mention that invariably a few of those buttons are hooked up to more pipes than others. He talks about the end of scarcity: the resource-driven economics of print (and even the limits of the electromagnetic spectrum, in the case of broadcast media) are a thing of the past, he observes, and the opportunity to publish is now abundant. But we must recognize that on the Internet, large audiences remain a scarce resource — and they’re largely still in the hands of transmedia conglomerates busy leveraging their powers in the old media of scarcity to dominate traffic.

Is the notion of flatness truly descriptive, or does it merely paper over the bumps? Real differences in the power of platforms exist throughout the digital media, as they did among the analog; the new political economy of communication is largely about shifting those differences around. The bumps used to lie before the doors of access, making it difficult to get published in the first place. Those bumps have been flattened out — but as with an oversized carpet, they’ve popped up elsewhere, in front of the audiences. Sure, you can “publish now.” But who will know that you have published? On the Internet, no one may know that you’re a dog, but they can tell from your traffic and your follower counts whether you’re a celebrity or a major media outlet lurking in social media. When CBS News has a Facebook account and you can follow CNN on Twitter, there’s little point in pretending that the means of communication have truly been flattened.

But flatland is extending itself everywhere, according to Shirky. “Now that computers and increasingly computerlike phones have been broadly adopted, the whole notion of cyberspace is fading. Our social media tools aren’t an alternative to real life, they are part of it.” No doubt this is true — cyberspace and meatspace are everywhere meeting and interpenetrating. But just as in the “real life” of old, the tools are not created equal. Some still have more leverage than others.

“Ideology addresses very real problems,” Slavoj Žižek has said with unaccustomed clarity, “but in a way that mystifies them.” Flatness in the media is an ideology. It mystifies the bumps and valleys of the real, which, as ever, are composed of talent, power, and liberty.

What, then, is the answer? Carr’s mandarin approach — to leave great thoughts to the great thinkers, to preserve the fiction of another dominant style — isn’t so much idealistic as it is impossible. For the phenomenon that Shirky calls our cognitive surplus has proven (if proof were needed) that curiosity and ingenuity are widely dispersed throughout the population. And without a doubt, technologies that offer a means of furthering those qualities are worth promoting. But an ideology of flatness isn’t the way to promote them. We need to engage the new media tools as if our actions and ideas have real power in the world. The ethical implications of such a stance may be debatable, but they cannot be trivial.

July 01 2010

16:00

When “neuroplasticity” had a simpler name: Whispering books and other lionized memories

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here are parts one and two. — Josh]

In the first chapter of The Shallows, Nick Carr contrasts the curious ennui of his college’s computer lab with the sustaining calm of the library stacks:

Most of my library time…went to wandering the long, narrow corridors of the stacks. Despite being surrounded by tens of thousands of books, I don’t remember the anxiety that’s symptomatic of what we call “information overload.” There was something calming in the reticence of all these books….Take your time, the books seemed to whisper to me in their dusty voices. We’re not going anywhere.

Books, as I’m sure you’ve noticed, always speak in italics.

The whispering tomes resided in Dartmouth’s Baker Library (where I doubt they were allowed to gather much dust); they enlivened the halcyon days before computers took over Carr’s life. Beginning with a little beige Mac Plus in 1986, Carr began the technological joyride of upgrade and ever-increasing entanglement: from MS Word to AOL to Netscape to blogging, Carr was careening with the rest of us towards Web 2.0. By the time he started blogging, he had long since noticed the ways in which the computer transformed work, experience, even consciousness itself:

The more I used it, the more it altered the way I worked. At first I had found it impossible to edit anything on-screen….But at some point — and abruptly — my editing routine changed. I found I could no longer write or revise anything on paper. I felt lost without the Delete key, the scrollbar, the cut and paste functions, the Undo command. I had to do all my editing on-screen. In using the word processor, I had become something of a word processor myself.

This transformation — and the brain’s capacity for it — is the principal theme of Carr’s book. By the time the Internet had fully infiltrated Carr’s working life, he notes, “the very way my brain worked seemed to be changing….It was demanding to be fed the way the Net fed it — and the more it was fed, the hungrier it became.” Carr warns that the brain’s susceptibility to such change leaves us open to being transformed by technology — and not in altogether positive ways. “[T]he Internet, I sensed, was changing me into something like a high-speed data-processing machine,” he writes darkly, “a human HAL.”

It’s a funny reference. For HAL, the deranged artificial intelligence at the center of Kubrick’s 2001: A Space Odyssey, intellectual inflexibility was his downfall: presented with seemingly incommensurable choices, his mind refused to expand, reflexively eliminating the variables (his changeable human crewmates) instead.

Again and again, Carr prefers to stack the deck against computers. The dusty books he extolls are quiet counselors, wise and infinitely patient. They refuse to intervene, to interact, as technology is wont to do; they prefer to wait until we’re ready to receive their gentle ministrations. But in fact books are no such thing. They’re seductive, manipulative, transformative. They’ve changed through time; they’ve changed us through time.

And we haven’t always agreed that those changes were for the better. Two hundred years ago, Washington Irving bemoaned a rising tide of newly-published books in terms Carr would find familiar:

The stream of literature has swollen into a torrent — augmented into a river — expanded into a sea…. The world will inevitably be overstocked with good books. It will soon be the employment of a lifetime merely to learn their names…. before long a man of erudition will be little better than a mere walking catalogue.

…which, I want to say, is perhaps an early nineteenth-century equivalent of a human processing machine. Irving was writing in a time when steam power was transforming the printing press from a craft into an agent of mass production, a book mill quite different from Gutenberg’s machine. With many of his contemporaries, he wondered whether we would adapt to the freshet of new books. But adapt we did. As intellectual historian Ann Blair has shown, early modern readers and writers worried about information overload — something Carr claims didn’t exist until roughly the time he bought his first Macintosh — and our strategies for dealing with it have been evolving for centuries.

The susceptibility to transformation that Carr discusses in The Shallows is real. It’s our native endowment — what the brain evolved to do. It is the vogue among scientists to call it neuroplasticity; before that, it was called learning.

June 30 2010

16:00

Not all free time is created equal: Battles on “Cognitive Surplus”

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's reading and reacting to Clay Shirky's Cognitive Surplus and Nicholas Carr's The Shallows. Over the next several weeks, we'll be running Matthew's ongoing twin review; here's part one. — Josh]

Putting The Shallows into dialogue with Shirky’s Cognitive Surplus, the latter book seems like the one with an actual idea. However smartly dressed, Carr’s concern about the corrosiveness of media is really a reflex, one that’s been twitching ever since Socrates fretted over the dangers of the alphabet. Shirky’s idea — that modern life produces a surplus of time, which people have variously spent on gin, television, and now the Internet — is something to sink one’s teeth into. Here’s his formulation:

This book is about the novel resource that has appeared as the world’s cumulative free time is addressed in aggregate. The two most important transitions allowing us access to this resource have already happened — the buildup of well over a trillion hours of free time each year on the part of the world’s educated population, and the invention and spread of public media that enable ordinary citizens, previously locked out, to pool that free time in pursuit of activities they like or care about.

I remember reading an early essay Shirky wrote about this idea and finding it enormously compelling. Perhaps that’s because like Shirky I grew up in the 1970s, whiling away many a half-hour in front of Gilligan’s Island reruns. If only I had been able to pursue activities I liked or cared about, rather than burn off my extra cognitive cycles by consuming mass-market drivel…

Only hang on — I did pursue such activities, as I recall. I played in the woodlot near my friend’s house, fished in an actual river, worked a paper route, watched ant colonies go to war in the backyard. I rode my bicycle to the library.

Child’s play, right? Cognitive Surplus is about a specific kind of free time: not the Hundred-Acre-Wood or the endless summer, but the stock of leisure hours produced by modernity, and the rise of technologies that make it possible to spend that time in engaging ways.

And yet shouldn’t the notion of free time itself arouse our suspicion? “Free time” is something born of an industrial economics of time, a commoditized temporality. Leisure is a boon granted by the system — a perk, a benny. Compensation. And as long as it helps us recharge our batteries and never keeps us from being productive, high-performance workers, free time isn’t free.

What if this enormous new resource — billions of hours of “free time” — might actually be a product of a machine that’s constantly reproducing and extending itself through us? Gin at least was a release from the shops and trades of early modern life; TV too provides counterpoint to the workday. But with the Internet, for creative-class types at least, we entertain ourselves with the very tools we spend our work time using.

This is a good time to name-check Herbert Marcuse. It’s also where Nick Carr’s understanding of intellectual and creative work begins to seem more attractive. Because for Carr such things are not leisure-time activities; they’re at the heart of the human enterprise.

I’m still excited by Shirky’s idea. But I want to bring Carr’s highbrow concern for the vital uses of cognition, contemplation, and communication to bear upon it. The technologies Shirky celebrates present us with a choice: do we use them as the means of liberation, or as Skinner boxes to while away the off-hours? As liberators they can be incredibly powerful; as producers of auto-stimulation, they’re highly efficient, and incredibly seductive.

This choice — between labor and work, between alienation and freedom — is an ancient one. And in facing it, technology is only a means, and never an end or answer.

June 29 2010

16:00

Reading isn’t just a monkish pursuit: Matthew Battles on “The Shallows”

[Matthew Battles is one of my favorite thinkers about how we read, consume, and learn. He's a former rare books librarian here at Harvard, author of Library: An Unquiet History, and one of the cofounders of HiLobrow.com, which Time just named one of the year's best blogs. He's reading and reacting to two alternate-universe summer blockbusters: Clay Shirky's Cognitive Surplus: Creativity and Generosity in a Connected Age and Nicholas Carr's The Shallows: What the Internet is Doing to Our Brains. (We've written about both.) Over the next several weeks, we'll be running Matthew's ongoing twinned review. —Josh]

Early in The Shallows, Nick Carr stirringly describes what he sees at stake in our time:

For the last five centuries, ever since Gutenberg’s printing press made book reading a popular pursuit, the linear, literary mind has been at the center of art, science, and society. As supple as it is subtle, it’s been the imaginative mind of the Renaissance, the rational mind of the Enlightenment, the inventive mind of the Industrial Revolution, even the subversive mind of Modernism. It may soon be yesterday’s mind.

It’s an inspiring image, this picture of the modern mind arrayed in the glories of progress and possibility.

It’s also wrong.

As readers of my site, library ad infinitum, likely know by now, I’m suspicious of any pronouncements that begin with Gutenberg. To say that the printing press was an agent of change, or that moveable type inaugurated a series of transformations in world culture, is reasonable, if very preliminary; but to treat the goldsmith from Mainz as modernity’s master builder is simply wrong: wrong on the biography, wrong on the facts, wrong from the perspective of a theory of history.

Biography

Gutenberg and his investors were trying to corner the market in Bibles—a market that already existed. Time made him its Person of the Millennium, but Gutenberg was no Leonardo, no Michelangelo, no Descartes. The fact that he wasn’t — that he was a man of no particular account in his own time — drives the subsequent story of his invention, moveable type, in interesting ways — ways too complex to boil down to the kind of simplistic formula Carr likes to proclaim.

Facts

Moveable type is not what made book reading a popular pursuit. That it played a role is not in doubt — although it may just as easily be said that the increasing popularity of book-reading spurred transformations in the technology, driving inventors to find ways to increase the output of the press. But Gutenberg’s invention, however epochal it appears in retrospect, is rightly seen not as an origin point but as a station along the way — an important one, a real Penn Station or King’s Cross, with lots of branch lines and spurs sprouting from its many platforms — but a station nonetheless.

Theory of history

Here’s the most esoteric part, and the most vital. There is no unitary mind at work in history, neither a plan nor a Geist, no questing Spirit of Modernity or Truth or Righteousness. There’s a damaging irony at work in the model to which Carr seems to subscribe: for if the modern mind truly is the direct descendant of Gutenberg’s invention, then so is the Internet. And critics fear its disruptive powers, just as they feared the host of cultural innovations that partook of the possibilities of the press — humanism, the Reformation, rationalism, the modern novel. In retrospect, we mistake those innovations for the charted course of history; to our counterparts in their respective eras, they looked like the Internet does to Carr: exciting but disruptive, soothing but dangerous, seductive but corrosive.

What is the four-sided Mind of which Nick Carr speaks — this imaginative, rational, inventive, subversive angel striding through the ages, showering the generations with its beneficence? Who is this promethean shapeshifter, whom we’re now in our churlishness binding to some rock for the crows to feast on its innards? What Carr is describing isn’t a historical reality — it’s a god. And it does not exist.

What troubles me most in the first chapter of The Shallows is the simplistic definition of reading Carr offers. It may seem strange to call it simplistic, as the epithets that characterize reading at its best for Carr all derive from the matrix of “complex,” “subtle,” and “rich.” But he writes as if these are all that reading has been (ever since Gutenberg, anyway), as if the kind of reading he ascribes to the web — quick and fitful, easily distracted — is a new and disruptive spirit. But dipping and skimming have been modes available to readers for ages. Carr makes one kind of reading — literary reading, in a word — into the only kind that matters. But these and other modes of reading have long coexisted, feeding one another, needing one another. By setting them in conflict, Carr produces a false dichotomy, pitting the kind of reading many of us find richest and most rewarding (draped with laurels and robes as it is) against the quicksilver mode (which, we must admit, is vital and necessary).

In ecosystems like the Gulf of Mexico, the shallows are crucial. They’re the nurseries, where larval creatures feed and grow in relative safety, liminal zones where salt and sweet water mix, where light meets muck, where life learns to contend with extremes. The Internet, in this somewhat dubious metaphor, is no blowout — it’s a flourishing new zone in the ecosystem of reading and writing. And with the petrochemical horror in the Gulf growing daily, we’re learning that the shallows, too, need their champions.

June 16 2010

13:00

Agents of immediacy: Nick Carr on why journalists need to “teach people to pay attention again”

[Our sister publication Nieman Reports is out with its latest issue, and its focus is the new digital landscape of journalism. There are lots of interesting articles, and we're highlighting a few. Here, Internet provocateur Nicholas Carr writes about the tension between immediacy and understanding online. —Josh]

“Thought will spread across the world with the rapidity of light, instantly conceived, instantly written, instantly understood. It will blanket the earth from one pole to the other — sudden, instantaneous, burning with the fervor of the soul from which it burst forth.”

Those opening words would seem to describe, with the zeal typical of the modern techno-utopian, the arrival of our new online media environment with its feeds, streams, texts and tweets. What is the Web if not sudden, instantaneous and burning with fervor? But French poet and politician Alphonse de Lamartine wrote these words in 1831 to describe the emergence of the daily newspaper. Journalism, he proclaimed, would soon become “the whole of human thought.” Books, incapable of competing with the immediacy of morning and evening papers, were doomed: “Thought will not have time to ripen, to accumulate into the form of a book—the book will arrive too late. The only book possible from today is a newspaper.”

Lamartine’s prediction of the imminent demise of books didn’t pan out. Newspapers did not take their place. But he was a prophet nonetheless. The story of media, particularly the news media, has for the last two centuries been a story of the pursuit of ever greater immediacy. From broadsheet to telegram, radio broadcast to TV bulletin, blog to Twitter, we’ve relentlessly ratcheted up the velocity of information flow.

To Shakespeare, ripeness was all. Today, ripeness doesn’t seem to count for much. Nowness is all.

Keep reading at Nieman Reports »

June 15 2010

14:00

“How is the Internet changing the way you think?”: Responses from Shirky, Pinker, Alda, and more

[Our sister publication Nieman Reports is out with its latest issue, and its focus is the new digital landscape of journalism. There are lots of interesting articles, and we'll be highlighting a few here over the next few days. Here, John Brockman writes about how he came to ask a passel of intellectual luminaries how the Internet is changing how they think. —Josh]

It’s not easy coming up with a question. As the artist James Lee Byars used to say: “I can answer the question, but am I bright enough to ask it?” Edge is a conversation. We are looking for questions that inspire answers we can’t possibly predict. Surprise me with an answer I never could have guessed. My goal is to provoke people into thinking thoughts that they normally might not have.

The art of a good question is to find a balance between abstraction and the personal, to ask a question that has many answers, or at least one for which you don’t know the answer. It’s a question distant enough to encourage abstractions and not so specific that it’s about breakfast. A good question encourages answers that are grounded in experience but bigger than that experience alone.

Before we arrived at the 2010 question, we went through several months of considering other questions. Eventually I came up with the idea of asking how the Internet is affecting the scientific work, lives, minds and reality of the contributors. Kevin Kelly responded:

John, you pioneered the idea of asking smart folks what question they are asking themselves. Well I’ve noticed in the past few years there is one question everyone on your list is asking themselves these days and that is, is the Internet making me smarter or stupid? Nick Carr tackled the question on his terms, but did not answer it for everyone. In fact, I would love to hear the Edge list tell me their version: Is the Internet improving them or improving their work, and how is it changing how they think? I am less interested in the general “us” and more interested in the specific “you”—how it is affecting each one personally. Nearly every discussion I have with someone these days will arrive at this question sooner or later. Why not tackle it head on?

Keep reading at Nieman Reports »

June 07 2010

14:00

Maximizing the values of the link: Credibility, readability, connectivity

The humble, ubiquitous link found itself at the center of a firestorm last week, with the spark provided by Nicholas Carr, who wrote about hyperlinks as one element (among many) he thinks contribute to distracted, hurried thinking online. With that in mind, Carr explored the idea of delinkification — removing links from the main body of the text.

The heat that greeted Carr’s proposals struck me (and CJR’s Ryan Chittum) as a disproportionate response. Carr wasn’t suggesting we stop linking, but asking if putting hyperlinks at the end of the text makes that text more readable and makes us less likely to get distracted. But of course the tinder has been around for a while. There’s the furor over iPad news apps without links to the web, which has angered and/or worried some who see the iPad as a new walled garden for content. There’s the continuing discontent with “old media” and their linking habits as newsrooms continue their sometimes technologically and culturally bumpy transition to becoming web-first operations. And then there’s Carr’s provocative thesis, explored in The Atlantic and his new book The Shallows, that the Internet is rewiring our brains to make us better at skimming and multitasking but worse at deep thinking.

I think the recent arguments about the role and presentation of links revolve around three potentially different things: credibility, readability and connectivity. And those arguments get intense when those factors are mistaken for each other or are seen as blurring together. Let’s take them one by one and see if they can be teased apart again.

Credibility

A bedrock requirement of making a fair argument in any medium is that you summarize the opposing viewpoint accurately. The link provides an ideal way to let readers check how you did, and alerts the person you’re arguing with that you’ve written a response. This is the kind of thing the web allows us to do instantly and obviously better than before; given that, providing links has gone from handy addition to requirement when advancing an argument online. As Mathew Ingram put it in a post critical of Carr, “I think not including links (which a surprising number of web writers still don’t) is in many cases a sign of intellectual cowardice. What it says is that the writer is unprepared to have his or her ideas tested by comparing them to anyone else’s, and is hoping that no one will notice.”

That’s no longer a particularly effective strategy. Witness the recent dustup between NYU media professor Jay Rosen and Gwen Ifill, the host of PBS’s Washington Week. Early last month, Rosen — a longtime critic of clubby political journalism — offered Washington Week as his pick for something the world could do without. Ifill’s response sought to diminish Rosen and his argument by not deigning to mention him by name. This would have been a tacky rhetorical ploy even in print, but online it fails doubly: The reader, already made suspicious by Ifill’s anonymizing and belittling of a critic, registers the lack of a link and is even less likely to trust her account. (Unfortunately for Ifill, the web self-corrects: Several commenters on her post supplied Rosen’s name, and were sharply critical of her in ways a wiser argument probably wouldn’t have provoked.)

Readability

Linking to demonstrate credibility is good practice, and solidly noncontroversial. Thing is, Carr didn’t oppose the basic idea of links. He called them “wonderful conveniences,” but added that “they’re also distractions. Sometimes, they’re big distractions — we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head.”

Chittum, for his part, noted that “reading on the web takes more self-discipline than it does offline. How many browser tabs do you have open right now? How many are from links embedded in another piece you were reading, and how many of them will you end up closing without reading since you don’t have the time to read Everything On the Internets? The analog parallel would be your New Yorker pile, but even that — no matter how backed up — has an endpoint.”

When I read Chittum’s question about tabs, my eyes flicked guiltily from his post to the top of my browser. (The answer was 11.) Like a lot of people, when I encounter a promising link, I right-click it, open it in a new tab, and read the new material later. I’ve also gotten pretty good at assessing links by their URLs, because not all links are created equal: They can be used for balance, further explanation and edification, but also to show off, logroll and name-drop.

I’ve trained myself to read this way, and think it’s only minimally invasive. But as Carr notes, “even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters.” I’m not sure about the matters part, but I’ll concede the point about the extra cognitive load. I read those linked items later because I want to pay attention to the argument being made. If I stopped in the middle for every link, I’d have little chance of following the argument through to its conclusion. Does the fact that I pause in the middle to load up something to read later detract from my ability to follow that argument? I bet it does.

Carr’s experiment was to put the links at the end. (Given that, calling that approach “delinkification” was either unwise or intentionally provocative.) In a comment on Carr’s post, Salon writer Laura Miller (who’s experimented with the endlinks approach) asked a good question: Is opening links in new tabs “really so different from links at the end of the piece? I mean, if you’re reading the main text all the way through, and then moving on to the linked sources through a series of tabs, then it’s not as if you’re retaining the original context of the link.”

Connectivity

Carr was discussing links in terms of readability, but some responses have dealt more with the merits of something else — connectivity. Rosen, who has persuasively described the ethic of the web as “to connect people and knowledge,” characterized Carr’s effort as an attempt to “unbuild the web.” And it’s a perceived assault on connectivity that inflames some critics of the iPad. John Battelle recently said the iPad is “a revelation for millions and counting, because, like Steve Case before him, Steve Jobs has managed to render the noise of the world wide web into a pure, easily consumed signal. The problem, of course, is that Case’s AOL, while wildly successful for a while, ultimately failed as a model. Why? Because a better one emerged — one that let consumers of information also be creators of information. And the single most important product of that interaction? The link. It was the link that killed AOL — and gave birth to Google.”

Broadly speaking, this is the same criticism of the iPad offered bracingly by Cory Doctorow: It’s an infantilizing vehicle for consumption, not creation. That strikes me now, as it did then, as too simplistic. I create plenty of information, love the iPad, and see no contradiction between the two. I now do things — like read books, watch movies, and casually surf the web — with the iPad instead of with my laptop, desktop, or smartphone because the iPad provides a better experience for those activities. But that’s not the same as saying the iPad has replaced those devices, or eliminated my ability or desire to create.

When it comes to creating content, no, I don’t use the iPad for anything more complex than a Facebook status update. If I want to create something, I’ll go to my laptop or desktop. But I’m not creating content all the time. (And I don’t find it baffling or tragic that plenty of people don’t want to create it at all.) If I want to consume — to sit back and watch something, or read something — I’ll pick up the iPad. Granted, if I’m using a news app instead of a news website, I won’t find hyperlinks to follow, at least not yet. But that’s a difference between two modes of consumption, not between consumption and creation. And the iPad browser is always an icon away — as I’ve written before, so far the device’s killer app is the browser.

Now that the flames have died down a bit, it might be useful to look at links more calmly. Given the link’s value in establishing credibility, we can dismiss those who advocate true delinkification, or who simply choose not to link, as attempting to short-cut arguments. But I think that’s an extreme case. Instead, let’s have a conversation about credibility, readability, and connectivity: As long as links are supplied, does presenting them outside of the main text diminish their credibility? Does that presentation increase readability, supporting the ethic of the web by creating better conversations and connections? Is there a slippery slope between enhancing readability and diminishing connectivity? If so, are there trade-offs we should accept, or new presentations that beg to be explored?

