
January 17 2012

18:00

NewsRight’s potential: New content packages, niche audiences, and revenue

When NewsRight — the Associated Press spinoff formerly known as News Licensing Group (and originally announced by the AP as an unnamed “rights clearinghouse”) — began to lift the veil a couple of weeks ago, most of the attention and analysis focused on “preserving the value” of news content for content owners and originators. In the first round of reports and commentary on the launch, various bloggers and analysts quickly made comparisons to Righthaven, the infamous and all-but-defunct Las Vegas outfit that pursued bloggers and aggregators for alleged copyright violations.

But most of that criticism misses an important point: Would NewsRight’s investors, all legacy news enterprises, really invest $30 million in a questionable model just to enforce copyrights? Or are they investing in a startup that has the capacity to create revenues from new, innovative ways of generating, packaging, and distributing news content?

While some of the reactions point to the former, I believe the opportunity (and NewsRight’s real intention) lies in the latter: NewsRight has the potential to create revenue for any content creator large or small, and to enable a variety of new business models around content that simply can’t fly today because there hasn’t been a clearinghouse system like it.

(As background, here at Nieman Lab in 2010, I first described the potential benefits of a news clearinghouse months before AP announced the concept. Then after AP made public their plans, I described a variety of new business models it could enable, if done right.)

First, let’s have a look at some of the critics:

  • TechDirt, disputing whether NewsRight would actually “add value,” asked: “AP finally launches NewsRight…and it’s Righthaven Lite?”
  • InfoWars, posting a video talk with Denver radio talk host David Sirota, inquired: “Traditional media to bully bloggers with NewsRight?” In the interview Sirota said, “What I worry about is that it ends up being used as a financial weapon against those voices out there who are citing that information in order to challenge it, scrutinize it, and question it.”
  • GigaOm’s Mathew Ingram pointed out that while NewsRight itself says it will stay out of pursuing copyright infractions via litigation, “one of the driving forces behind the agency is the sense on the part of AP and other members that their content is being stolen by news-filtering services…and news aggregators.” Ingram concludes: “What happens when an organization like The Huffington Post says no thank you? That’s when it will become obvious how much of NewsRight’s business model is based on carrots, and how much of it is about waving a big stick.”
  • Nieman Lab’s own coverage by Andrew Phelps also focused on the tracking and enforcement aspects of NewsRight’s core technology.

NewsRight’s launch PR didn’t do much to dispel these concerns. CEO David Westin said himself in a video: “NewsRight’s designed…to make sure that the traditional reporting organizations that are investing in original journalism are reaping some of the benefits that are being lost right now.” And the company’s press release, quoting Westin, went no further than the following in hinting that there were new business opportunities enabled by NewsRight: “[I]f reliable information is to continue to flourish, the companies investing in creating content need efficient ways to license it as broadly as possible.”

Those traditional news organizations (29 of them, including New York Times Co., Washington Post Co., Associated Press, MediaNews Group, Hearst, and McClatchy) are the investors who scraped together $30 million to launch NewsRight. The Associated Press also contributed technology and personnel to the effort.

Given those roots — along with the initial PR, Westin’s own background as a lawyer, and the fact that NewsRight’s underlying AP-derived technology, News Registry, was explicitly developed to help track content piracy — it’s not hard to see where all the skepticism comes from.

But ultimately, if NewsRight is to be successful, it will have to create a new marketplace. It’s going to have to do more than try to get paid for the status quo — that is, to collect fees from aggregators and others who are currently repackaging the content of its 29 owners. It can do that, but in addition, like any business, it will have to develop new products that new customers will pay for; it will have to bring thousands of content sources into its network; and it will have to enable and encourage thousands of repackagers to use that content in many new ways. And it will have to focus on those new opportunities rather than on righting wrongs perceived by its investors.

I spoke last week with David Westin about where NewsRight was starting out and where it might ultimately go. While he repeated the company mantra about returning value to the originators of journalistic content — “NewsRight is designed with one mission: to recapture some of the value of original journalism that’s being lost in the internet and mobile world” — it’s clear that his vision for NewsRight goes well beyond that. Here’s some of what we covered:

NewsRight’s initial target is “closed-web” news aggregators. Media monitoring services like EIN News, Meltwater News, and Vocus provide customized news feeds to enterprise clients like corporations and government entities, typically at $100 per month or more. Essentially, they’re the digital equivalent of the old clipping services. Currently, these services must scrape individual news sites, and technically, they should deliver only snippets with links back to the original sources (although whether they limit themselves to that is not easy to monitor). What NewsRight offers the monitoring services is one-stop shopping that includes (a) fulfillment: an accurate content feed (obviating the need to scrape, and eliminating uncertainty by always delivering the latest, most complete version of a story); (b) rights clearance; and (c) usage metrics. The monitoring services will have the option to improve their offerings by supplying full text (or they can stick with first paragraphs); the content owners share the resulting royalties.

While NewsRight currently must individually negotiate content deals, it’s working toward a largely automated content-exchange system. Clearly, as NewsRight grows, there will have to be an automated system with self-service windows. “I hope that’s right, because that means we will have been successful,” Westin said when I suggested that would have to happen. The deals with private aggregators being worked on now all require one-off negotiations, both with the aggregators and with the content suppliers. That’s marginally possible when there are 800 or so content contributors to the network, but to be a meaningful player in the information marketplace, the company will need to grow to encompass thousands of content creators, thousands of repackagers, republishers, or aggregators of content, and many millions of pieces of content (including text, images and video) — requiring a sizable infrastructure and high level of automation.

Any legitimate news content creator can join NewsRight for free for the duration of 2012. “Anyone who generates original reporting, original content, can benefit from this. We’re open to anyone who’s doing original work,” Westin says. That includes not only newspapers and other traditional news organizations — it can include hyperlocal sites and news blogs. Basically, that free membership will bring you back information on how and where your content is being used. NewsRight’s system is currently tracking several billion impressions for its investor-members and is capable of tracking billions more for those who want to use the service. (All this is rather opaque on the website right now, but if you’re interested, just click on the “Contact us to learn more” link on their homepage, and they’ll get back to you.)

Down the road, NewsRight is looking for ways to create new content packaging opportunities. Westin: “There is a large number of possible businesses [that we can enable]. We don’t have any of them up and running yet; it’ll be a better story when we’ve got the first one up. But I do envision a number of people who might say, ‘I wanted to create this product, dipping into a large number of news resources on a specific subject, but it’s simply been too cumbersome and difficult to do’…We should be able to facilitate that.” What he envisions is something that reduces the friction and the transaction costs in setting up a news feed, app, or site on a niche topic and allows a multiplicity of such sites to flourish — “new products based around the content that don’t exist now.” That includes personalized news streams — products for one, but of which many can be sold: “As we continue to expand News Registry and the codes attached to content, it makes it possible to slice and dice the news content with essentially zero marginal cost.”

While the initial offerings to private aggregators carry a price tag set by NewsRight, in the ultimate networked and largely automated point-to-point distribution arrangement — individual asset syndication — NewsRight will likely stay out of pricing. The “paytags,” or the payment information embedded in the Registry tags, will be able to carry information on a variety of usage and payment terms — not only what the price is, but nuanced provisions like time constraints (e.g. this can’t be used until 24 hours after first published), geographic constraints (to limit usage by regional competitors), variable pricing (hot news costs more than old news), and pricing based on the size of the repackager’s audience. Content owners would likely have control over these options, but there’s also the potential for a dynamic pricing model — something similar to Google’s auction mechanism for AdWords — in order to optimize both revenue and usage.
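The kind of machine-readable terms described above can be sketched, very roughly, as a small data structure. The field names below are purely illustrative guesses at what a “paytag” might encode — an embargo window, regional blocks, and price decay — not NewsRight’s actual schema, which has never been published.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class PayTag:
    """Hypothetical usage/payment terms traveling with one piece of content."""
    asset_id: str
    base_price_usd: float
    embargo_hours: int = 0        # e.g., unusable until 24 hours after first publication
    blocked_regions: List[str] = field(default_factory=list)  # regional-competitor limits
    decay_per_day: float = 0.0    # "hot news costs more than old news"

    def price(self, published: datetime, now: datetime, region: str) -> Optional[float]:
        """Return the price for this use, or None if the terms forbid it."""
        if now < published + timedelta(hours=self.embargo_hours):
            return None           # time constraint still in force
        if region in self.blocked_regions:
            return None           # geographic constraint
        age_days = (now - published).days
        return max(0.0, self.base_price_usd - self.decay_per_day * age_days)
```

A dynamic-pricing layer of the AdWords-auction sort would then sit on top of terms like these, letting demand adjust `base_price_usd` rather than leaving it fixed by the content owner.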

The NewsRight network could make it possible to monetize topical niche content that’s too difficult to syndicate today. There are a lot of bloggers, hyperlocals, and other niche sites today that earn zero or minimal revenue and are operated as labors of love. The potential for NewsRight is to find new markets for the content of these sites. And general publishers like newspapers might find it profitable to jump into specialized niches for which there’s no local audience, but which might generate revenue via redistribution through NewsRight to various content aggregators.

Could that grand vision come to fruition? As I’ve pointed out before, a very similar system has worked very nicely for ASCAP and BMI, the music licensing organizations, which not only collect royalties for musicians but enable a variety of music distribution channels. (This is on the performance and broadcast side of the music biz, not the rather broken recorded music side.) Both AP CEO Tom Curley in launching NewsRight and Westin in discussing it refer to ASCAP and other clearinghouses as models — not just for compensating content creators but for enabling new outlets and new forms of content. NewsRight’s model is purely business-to-business — it doesn’t involve end users. So the traction it needs will come when it can point not just to compensation streams from private aggregation services, but to new products and new businesses made possible by its system.

July 29 2011

07:07

In transition from physical to digital product: can news publishers learn anything from Netflix?

GigaOM :: Users of Netflix’s digital movie-rental service have been up in arms about a sudden change to the company’s pricing plans, which appears to be aimed at reducing demand for its DVD-by-mail service by jacking up prices. In other words, Netflix is trying to manage the transition of users away from the physical product and toward digital streaming.

Are there any lessons newspapers and other media companies can learn as they try to move away from the physical print product and toward a digital-only future? "Yes and no," says Mathew Ingram.

Continue to read Mathew Ingram, gigaom.com

June 05 2011

06:01

iPad apps News.me, Idea Flight - What media companies need to learn from startups

GigaOM :: By now, there are plenty of examples of mainstream media companies pumping out newspaper and magazine iPad apps that look exactly like their printed product — the San Francisco Chronicle just joined the crowd — as well as putting up paywalls, and so on. But few of these companies are doing anything different or innovative. Why? Because most large companies in an industry that is more than a century old simply aren’t equipped to really innovate.

Continue to read Mathew Ingram, gigaom.com

June 02 2011

17:30

Is Twitter writing, or is it speech? Why we need a new paradigm for our social media platforms

New tools are at their most powerful, Clay Shirky says, once they’re ubiquitous enough to become invisible. Twitter may be increasingly pervasive — a Pew study released yesterday shows that 13 percent of online adults use the service, which is up from 8 percent six months ago — but it’s pretty much the opposite of invisible. We talk on Twitter, yes, but almost as much, it seems, we talk about it.

The big debates about Twitter’s overall efficacy as a medium — like the one launched by, say, Malcolm Gladwell and, more recently, Bill Keller, whose resignation from the New York Times editorship people have (jokingly, I think?) chalked up to his Twitter-take-on column — tend to devolve into contingents rather than resolve into consensus. An even more recent debate between Mathew Ingram and Jeff Jarvis (comparatively nuanced, comparatively polite) ended with Ingram writing, “I guess we will have to agree to disagree.”

But why all the third-railiness? Twitter, like many other subjects of political pique, tends to be framed in extremes: On the one hand, there’s Twitter, the cheeky, geeky little platform — the perky Twitter bird! the collective of “tweets”! all the twee new words that have emerged with the advent of the tw-efix! — and on the other, there’s Twitter, the disruptor: the real-time reporting tool. The pseudo-enabler of democratic revolution. The existential threat to the narrative primacy of the news article. Twetcetera.

The dissonance here could be chalked up to the fact that Twitter is simply a medium like any other medium, and, in that, will make of itself (conversation-enabler, LOLCat passer-onner, rebellion-facilitator) whatever we, its users, make of it. But that doesn’t fully account for Twitter’s capacity to inspire so much angst (“Is Twitter making us ____?”), or, for that matter, to inspire so much joy. The McLuhany mindset toward Twitter — the assumption of a medium that is not only the message to, but the molder of, its users — seems to be rooted in a notion of what Twitter should be as much as what it is.

Which begs the question: What is Twitter, actually? (No, seriously!) And what type of communication is it, finally? If we’re wondering why heated debates about Twitter’s effect on information/politics/us tend to be at once so ubiquitous and so generally unsatisfying…the answer may be that, collectively, we have yet to come to consensus on a much more basic question: Is Twitter writing, or is it speech?

Twitter versus “Twitter”

The broader answer, sure, is that it shouldn’t matter. Twitter is…Twitter. It is what it is, and that should be enough. As a culture, though, we tend to insist on categorizing our communication, drawing thick lines between words that are spoken and words that are written. So libel is, legally, a different offense than slander; the written word, we assume, carries the heft of both deliberation and proliferation and therefore a moral weight that the spoken word does not. Text, we figure, is: conclusive, in that its words are the deliberate products of discourse; inclusive, in that it is available equally to anyone who happens to read it; exclusive, in that it filters those words selectively; archival, in that it preserves information for posterity; and static, in that, once published, its words are final.

And speech, while we’re at it, is discursive and ephemeral and, importantly, continual. A conversation will end, yes, but it is not the ending that defines it.

Those characteristics give way to categories. Writing is X; speaking is Y; and both have different normative dimensions that are based on, ultimately, the dynamics of power versus peer — the talking to versus the talking with. So when we talk about Twitter, we tend to base our assessments on its performance as a tool of either orality or textuality. Bill Keller seems to see Twitter as text that happens also to be conversation, and, in that, finds the form understandably lacking. His detractors, on the other hand, seem to see Twitter as conversation that happens also to be text, and, in that, find it understandably awesome.

Which would all be fine — nuanced, even! — were it not for the fact that Twitter-as-text and Twitter-as-conversation tend to be indicated by the same word: “Twitter.” In the manner of “blogger” and “journalist” and even “journalism” itself, “Twitter” has become emblematic of a certain psychology — or, more specifically, of several different psychologies packed awkwardly into a single signifier. And to the extent that it’s become a loaded word, “Twitter” has also become a problematic one: #Twittermakesyoustupid is unfair, but #”Twitter”makesyoustupid has a point. The framework of text and speech falls apart once we recognize that Twitter is both and neither at once. It’s its own thing, a new category.

Our language, however, doesn’t yet recognize that. Our rhetoric hasn’t yet caught up to our reality — for Twitter and, by extension, for other social media.

We might deem Twitter a text-based mechanism of orality, as the scholar Zeynep Tufekci has suggested, or of a “secondary orality,” as Walter Ong has argued, or of something else entirely (tweech? twext? something even more grating, if that’s possible?). It almost doesn’t matter. The point is to acknowledge, online, a new environment — indeed, a new culture — in which writing and speech, textuality and orality, collapse into each other. Speaking is no longer fully ephemeral. And text is no longer simply a repository of thought, composed by an author and bestowed upon the world in an ecstasy of self-containment. On the web, writing is newly dynamic. It talks. It twists. It has people on the other end of it. You read it, sure, but it reads you back.

“The Internet looking back at you”

In his social media-themed session at last year’s ONA conference, former Lab writer and current Wall Street Journal outreach editor Zach Seward talked about being, essentially, the voice of the outlet’s news feed on Twitter. When readers tweeted responses to news stories, @WSJ might respond in kind — possibly surprising them and probably delighting them and maybe, just for a second, sort of freaking them out.

The Journal’s readers were confronted, in other words, with text’s increasingly implicit mutuality. And their “whoa, it’s human!” experience — the Soylent Greenification of online news consumption — can bring, along with its obvious benefits, the same kind of momentary unease that accompanies the de-commodification of, basically, anything: the man behind the curtain, the ghost in the machine, etc. Concerns expressed about Twitter, from that perspective, may well be stand-ins for concerns about privacy and clickstream tracking and algorithmic recommendation and all the other bugs and features of the newly reciprocal reading experience. As the filmmaker Tze Chun noted to The New York Times this weekend, discussing the increasingly personalized workings of the web: “You are used to looking at the Internet voyeuristically. It’s weird to have the Internet looking back at you….”

So a Panoptic reading experience is also, it’s worth remembering, a revolutionary reading experience. Online, words themselves, once silent and still, are suddenly springing to life. And that can be, in every sense, a shock to the system. (Awesome! And also: Aaaah!) Text, after all, as an artifact and a construct, has generally been a noun rather than a verb, defined by its solidity, by its thingness — and, in that, by its passive willingness to be the object of interpretation by active human minds. Entire schools of literary criticism have been devoted to that assumption.

And in written words’ temporal capacity as both repositories and relics, in their power to colonize our collective past in the service of our collective future, they have suggested, ultimately, order. “The printed page,” Neil Postman had it, “revealed the world, line by line, page by page, to be a serious, coherent place, capable of management by reason, and of improvement by logical and relevant criticism.” In their architecture of sequentialism, neatly packaged in manuscripts of varying forms, written words have been bridges, solid and tangible, that have linked the past to the future. As such, they have carried an assurance of cultural continuity.

It’s that preservative function that, for the moment, Twitter is largely lacking. As a platform, it does a great job of connecting; it does, however, a significantly less-great job of conserving. It’s getting better every day; in the meantime, though, as a vessel of cultural memory, it carries legitimately entropic implications.

But, then, concerns about Twitter’s ephemerality are also generally based on a notion of Twitter-as-text. In that, they assume a zero-sum relationship between the writing published on Twitter and the writing published elsewhere. They see the written, printed word — the bridge, the badge of a kind of informational immortality — dissolving into the digital. They see back-end edits revising stories (which is to say, histories) in an instant. They see hacks erasing those stories altogether. They see links dying off at an alarming rate. They see all that is solid melting into bits.

And they have, in that perspective, a point: While new curatorial tools, Storify and its ilk, will become increasingly effective, they might not be able to recapture print’s assurance, tenacious if tenuous, of a neatly captured world. That’s partly because print’s promise of epistemic completeness has always been, to some extent, empty; but it’s also because those tools will be operating within a digital world that is increasingly — and actually kind of wonderfully — dynamic and discursive.

But what the concerns about Twitter tend to forget is that language is not, and has never been, solid. Expression allows itself room to expand. Twitter is emblematic, if not predictive, of the Gutenberg Parenthesis: the notion that, under the web’s influence, our text-ordered world is resolving back into something more traditionally oral — more conversational and, yes, more ephemeral. “Chaos is our lot,” Clay Shirky notes; “the best we can do is identify the various forces at work shaping various possible futures.” One of those forces — and, indeed, one of those futures — is the hybrid linguistic form that we are shaping online even as it shapes us. And so the digital sphere calls for a new paradigm of communication: one that is discursive as well as conservative, one that acquiesces to chaos even as it resists it, one that relies on text even as it sheds the mantle of textuality. A paradigm we might call “Twitter.”

Photos by olalindberg and Tony Hall used under a Creative Commons license.

December 10 2010

15:00

This Week in Review: The WikiBacklash, information control and news, and a tightening paywall

[Every Friday, Mark Coddington sums up the week's top stories about the future of news and the debates that grew up around them. —Josh]

Only one topic really grabbed everyone’s attention this week in future-of-news circles (and most of the rest of the world, too): WikiLeaks. To make the story a bit easier to digest, I’ve divided it into two sections — the crackdown on WikiLeaks, and its implications for journalism.

Attacks and counterattacks around WikiLeaks: Since it released 250,000 confidential diplomatic cables last week, WikiLeaks and its founder, Julian Assange, have been at the center of attacks by governments, international organizations, and private businesses. The forms and intensity they’ve taken have seemed unprecedented, though Daniel Ellsberg said he faced all the same things when he leaked the Pentagon Papers nearly 40 years ago.

Here’s a rundown of what’s happened since late last week: Both Amazon and the domain registry EveryDNS.net booted WikiLeaks, leaving it scrambling to stay online. (Here’s a good conversation between Ethan Zuckerman and The Columbia Journalism Review on the implications of Amazon’s decision.) PayPal, the company that WikiLeaks uses to collect most of its donations, cut off service to WikiLeaks, too. PayPal later relented, but not before botching its explanation of whether U.S. government pressure was involved.

On the government side, the Library of Congress blocked WikiLeaks, and Assange surrendered to British authorities on a Swedish sexual assault warrant (the evidence for which David Cay Johnston said the media should be questioning) and is being held without bail. Slate’s Jack Shafer said the arrest could be a blessing in disguise for Assange.

WikiLeaks obviously has plenty of critics: Christopher Hitchens called Assange a megalomaniac who’s “made everyone complicit in his own private decision to try to sabotage U.S. foreign policy,” and U.S. Sens. Dianne Feinstein and Joe Lieberman called for Assange and The New York Times, respectively, to be prosecuted via the Espionage Act. But WikiLeaks’ many online defenders manifested themselves this week, too, as hundreds of mirror sites cropped up when WikiLeaks’ main site was taken down, and various online groups attacked the sites of companies that had pulled back on services to WikiLeaks. By Wednesday, it was starting to resemble what Dave Winer called “a full-out war on the Internet.”

Search Engine Land’s Danny Sullivan looked at the response by WikiLeaks’ defenders to argue that WikiLeaks will never be blocked, and web pioneer Mark Pesce said that WikiLeaks has formed the blueprint for every group like it to follow. Many other writers and thinkers lambasted the backlash against WikiLeaks, including Reporters Without Borders, Business Insider’s Henry Blodget, Roberto Arguedas at Gizmodo, BoingBoing’s Xeni Jardin, Wired’s Evan Hansen, and David Samuels of The Atlantic.

Four defenses of WikiLeaks’ rights raised particularly salient points: First, NYU prof Clay Shirky argued that while WikiLeaks may prove to be damaging in the long run, democracy needs it to be protected in the short run: “If it’s OK for a democracy to just decide to run someone off the internet for doing something they wouldn’t prosecute a newspaper for doing, the idea of an internet that further democratizes the public sphere will have taken a mortal blow.” Second, CUNY j-prof Jeff Jarvis said that WikiLeaks fosters a critical power shift from secrecy to transparency.

Finally, GigaOM’s Mathew Ingram and Salon’s Dan Gillmor made similar points about the parallel between WikiLeaks’ rights and the press’s First Amendment rights. Whether we agree with them or not, Assange and WikiLeaks are protected under the same legal umbrella as The New York Times, they argued, and every attack on the rights of the former is an attack on the latter’s rights, too. “If journalism can routinely be shut down the way the government wants to do this time, we’ll have thrown out free speech in this lawless frenzy,” Gillmor wrote.

WikiLeaks and journalism: In between all the attacks and counterattacks surrounding him, Julian Assange did a little bit of talking of his own this week, too. He warned about releasing more documents if he’s prosecuted or killed, including possible Guantánamo Bay files. He defended WikiLeaks in an op-ed in The Australian. He answered readers’ questions at The Guardian, and dodged one about diplomacy that started an intriguing discussion at Jay Rosen’s Posterous. When faced with the (rather pointless) question of whether he’s a journalist, he responded with a rather pointless answer.

Fortunately, plenty of other people did some deep thinking about what WikiLeaks means for journalism and society. (The Atlantic’s Alexis Madrigal has a far more comprehensive list of those people’s thoughts here.) Former Guardian web editor Emily Bell argued that WikiLeaks has awakened journalism to a renewed focus on the purpose behind what it does, as opposed to its current obsession with the models by which it achieves that purpose. Here at the Lab, USC grad student Nikki Usher listed a few ways that WikiLeaks shows that both traditional and nontraditional journalism matter and pointed out the value of the two working together.

At the Online Journalism Review, Robert Niles said that WikiLeaks divides journalists into two camps: “Those who want to see information get to the public, by whatever means, and those who want to control the means by which information flows.” Honolulu Civil Beat editor John Temple thought a bit about what WikiLeaks means for small, local news organizations like his, and British j-prof Paul Bradshaw used WikiLeaks as a study in how to handle big data dumps journalistically.

Also at the Lab, CUNY j-prof C.W. Anderson had some thoughts about this new quasi-source in the form of large databases, and how journalists might be challenged to think about it. Finally, if you’re looking for some deep thoughts on WikiLeaks in audio form, Jay Rosen has you covered — in short form at PBS MediaShift, and at quite a bit more length with Dave Winer on their Rebooting the News podcast.

How porous should paywalls be?: Meanwhile, the paid-content train chugs along, led by The New York Times, which is still planning on instituting its paywall next year. The Times’ digital chief, Martin Nisenholtz, dropped a few more details this week about how its model will work, again stressing that the site will remain open to inbound links across the web.

But for the first time, Nisenholtz also stressed the need to limit the abuse of those links as a way to get inside the wall without paying, revealing that The Times will be working with Google to limit the number of times a reader can access Times articles for free via its search. Nisenholtz also hinted at the size of the paywall’s target audience, leading Poynter’s Rick Edmonds to estimate that The Times will be focusing on about 6 million “heavy users of the site.”
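Mechanically, a metered scheme of the sort Nisenholtz describes amounts to a per-reader counter. The toy sketch below rests on assumptions of my own — a flat monthly quota of distinct articles, with inbound links exempt per the Times’ stated plan — and is not a description of the Times’ actual system, whose quotas and link handling were not public.

```python
from collections import defaultdict

class Meter:
    """Toy metered paywall: count distinct articles per reader, block past a quota."""

    def __init__(self, quota: int = 20):   # quota is an illustrative number
        self.quota = quota
        self.reads = defaultdict(set)      # reader_id -> article ids read this period

    def allow(self, reader_id: str, article_id: str, via_inbound_link: bool = False) -> bool:
        """Inbound links stay open; direct visits count against the quota."""
        if via_inbound_link:
            return True
        seen = self.reads[reader_id]
        if article_id in seen or len(seen) < self.quota:
            seen.add(article_id)           # re-reads of a counted article stay free
            return True
        return False
```

The debate Salmon raises below is, in these terms, about how many side doors like `via_inbound_link` a publisher should leave open before the wall stops being worth building.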

Reuters’ Felix Salmon was skeptical of Nisenholtz’s stricter paywall plans, saying that they won’t be worth the cost: “Strengthening your paywall sends the message that you don’t trust your subscribers, or your subscribers’ non-subscriber friends: you’re treating them as potential content thieves.” The only way such a strategy would make sense, he said, is if The Times is considering starting at a very high price point, something like $20 a month. Henry Blodget of Business Insider, on the other hand, is warming to the idea of a paywall for The Times.

In other paid-content news: News Corp.’s Times of London, which is running a very different paywall from The New York Times, may have only 54,000 people accessing content behind it, according to research by the competing Guardian. The Augusta (Ga.) Chronicle announced it’s launching a metered model powered by Steve Brill’s Press+, a plan Steve Yelvington defended and Matthew Terenzio questioned.

While one paid-content plan gets started, another one might be coming to an end: Newsday is taking its notoriously unsuccessful paywall down through next month, and several on Twitter guessed that the move would become permanent. One news organization that’s not going to be a pioneer in paid online news: The Washington Post, as Post Co. CEO Don Graham said at a conference this week.

Reading roundup: Other than the ongoing WikiLeaks brouhaha, it’s been a relatively quiet week on the future-of-news front. Here’s a bit of what else went on:

— Web guru Tim O’Reilly held his News Foo Camp in Arizona last weekend, and since it was an intentionally quiet event, it didn’t dominate the online discussion like many such summits do. Still, there were a few interesting post-Newsfoo pieces for the rest of us to chew on, including a roundup of the event by TBD’s Steve Buttry, Alex Hillman’s reflections, and USC j-prof Robert Hernandez’s thoughts on journalists’ calling a lie a lie.

— A few iPad bits: News media marketer Earl Wilkinson wrote about a possible image problem with the iPad, All Things Digital’s Peter Kafka reported on the negotiations between Apple and publishers on iTunes subscriptions, and The New York Times’ David Nolen gave some lessons from designing election results for the iPad.

— The Guardian’s Sarah Hartley interviewed former TBD general manager Jim Brady about the ambitious local online-TV project, and Lost Remote’s Cory Bergman looked at TBD and other local TV online branding efforts.

— Advertising Age’s Ann Marie Kerwin has an illuminating list of 10 trends in global media consumption.

— Finally, two good pieces from the Lab: Harvard prof Nicholas Christakis on why popularity doesn’t equal influence on social media, and The New York Times’ Aron Pilhofer and Jennifer Preston provided a glimpse into how one very influential news organization is evolving on social media.

November 23 2010

19:00

Catalysts: The Globe and Mail’s community brain trust

One of the Big Existential Questions facing journalism right now is the extent to which news organizations are just that — organizations that produce news — and the extent to which they’re also something more: engagers of the world, curators of human events, conveners of community. Should news outlets focus on news…or should they also be sponsoring conferences and creating film clubs and setting up stores and selling wine?

There’s a lot of variation in the way they answer that question, of course, but many news outlets are currently skewing toward the “community” end of the continuum, preparing for the future armed with the idea that news production is only part of their mandate — the notion that to succeed, both journalistically and financially, they’ll need to figure out ways to cultivate community out of, and around, their news content.

One particularly interesting experiment to that end — a worthwhile initiative, you might say! — is playing out in Canada, where The Globe and Mail, the country’s paper of record, has convened a community of users to help guide its engagement policies. The Globe Catalysts are a kind of external brain trust for the outlet, a community charged with helping to ensure that the paper’s path is the right one for its users.

“We wanted people to know we’re taking this seriously,” Jennifer MacMillan, the paper’s communities editor, explained of the project. And at its core, the Catalysts experiment is about demonstrating that engagement is a mutual proposition. “We wanted to make sure people felt valued.”

The paper came up with the idea over the summer, MacMillan told me — as a project that would be a part of the paper’s print and online redesign that rolled out this fall. (The idea, actually, was Mathew Ingram’s — the Lab contributor who, before he became a writer at GigaOm, was the Globe’s communities editor.) To test the waters of user interest, MacMillan and her team sent out a Catalyst invitation to the users who subscribe to The Globe and Mail’s e-newsletters (“people we knew were engaged, and who might have an interest in helping us shape where we’re going”) — a form asking for basic info like name, postal code, gender, and profession. And they got, to their shock, floods of replies in return — “several thousand,” in fact. Which was not just a surprise, but also “really encouraging,” MacMillan notes — a show of users not just expressing interest in the paper’s future, but acting on it. “A sign that they wanted to play a bigger part in the experience of the Globe.”

From there, the paper streamlined further, asking respondents to write a short explanation of their vision for the paper. Looking for a cross-section of background and location, interests and perspectives — and employing the services of the digital communications firm Sequentia Environics for help in whittling down the applications — the paper selected a group of users who are charged with helping to oversee the community elements of the paper’s content. A group of 1,000 or so users, in fact, MacMillan told me. (And, of those, about 800 accepted the offer to be Catalysts.) From there, they created a special, members-only section of the Globe and Mail site and then “just started chatting” — about the paper’s future and about the best way to cultivate community around it.

And a big part of that community is the content that it generates: the comments that flesh out a story’s life in the world beyond its text. Per MacMillan’s introduction of the Catalysts project, its members will:

— Help out commenters when they need a hand

— Help keep discussion on-topic

— Intervene when discussion becomes immoderate or personal

— Bring particularly poor behaviour to the attention of Globe staff

— Act in any manner that is representative of a community leader

— Add thoughtful posts that add background info, perspective

— Recommend/vote on comments that add insight and contribute to the discussion

It’s a broad mandate that’s along the lines of Gawker’s starred commenter system and HuffPo’s “Moderator” badge. And so far, it’s yielded good results: “We’ve had very good feedback,” MacMillan says, “and I think a big part of that is that we’re giving readers what they were looking for.” The paper’s recent series, “Canada: Our Time to Lead,” made use not only of Catalyst moderation, but also of the Catalysts’ connection to the newsroom. Globe reporters waded into the Catalyst forum, which led to conversations and new (crowd)sourcing opportunities, MacMillan notes. “We’ve never really done something quite like this before, where the contact has been so direct” — and “it was a really fruitful discussion.”

As for the comments, their volume has held fairly steady since the Catalysts started doing their thing in early October — a recent piece on Canada’s failed bid for a seat on the UN Security Council garnered over 2,000 comments — but their overall value, MacMillan says, has risen. Which is a trend we’re seeing among several of the news organizations that employ a select group of users to do their comment-moderation: investment leads to accountability leads to higher quality. (And to add a bit of incentive, the paper has made a practice of picking a particularly punchy quote from a user comment, and running that quote, via its “You said it” feature, across its homepage.)

But what’s the incentive for the Catalysts themselves? The fact that the community has a high barrier to entry, and no financial reward, raises the question: Why? Why are people willing to take time out of their presumably busy lives to participate in a project whose work isn’t compensated? Is this the cognitive surplus, playing out in our news environment?

To some extent, yes. Financial gain is by no means the only incentive for participation, of course — and there’s something inherently rewarding about seeing your ideas play out “in living color,” MacMillan notes. And togetherness — being part of something — can be its own compensation. One benefit of the Catalyst approach could simply be that it’s making its members part of a community; and in this fragmented world of ours, that alone is a value. And though the project is a work in progress, it’s been gratifying to see what can happen when you put some effort into transforming your users — anonymous, atomized — into something more meaningful and productive: a community. “They’re interested in seeing where this is going,” MacMillan says — “just as we are.”

October 05 2010

14:00

Doubling down on print: Canada’s Globe and Mail unveils a new print edition to complement the web

The Globe and Mail, Canada’s most-circulated national daily newspaper, revealed its much-ballyhooed redesign on Friday. The paper is calling it “the most significant redesign” in its 166-year history, and it’s a billion-dollar bet on print at a time when the format’s fortunes would seem to be fading.

The renovations to “Canada’s National Newspaper” are part of what editor-in-chief John Stackhouse boldly calls his “Proudly Print” approach, with print as one component (with online and mobile) of a three-pronged news attack. The redesign tries to make the differences between print and web more clear. Full-color printing and a high-gloss wrap — the first of its kind in North America — aim to help lure advertisers. There’ll be more magazine-like stories, including photo-driven features plastered boldly on the front page. A slightly narrower size means shorter, punchier stories. And that’s not to mention the informational accoutrements, like sidebars and infographics, and the litany of new inserts and content realignments. The redesign “once again demonstrates our commitment to the newspaper business,” according to publisher and CEO Phillip Crawley.

This is a big-time overhaul for the Globe, and not only because the paper sees it as a reassertion of dominance — i.e., shelling the struggling National Post, its conservative competitor since 1999, in the national newspaper war of attrition — over the Canadian media landscape.

But whenever a redesign happens, criticism follows. The prevailing question in this case is fairly obvious: Why invest in an 18-year, C$1.7-billion printing deal — with the same press as the San Francisco Chronicle — at a time when newsprint seems like yesterday’s medium?

“It’s going to be a millstone around the Globe’s neck,” says Mathew Ingram, a senior writer at GigaOm and former Globe web editor (and Lab contributor). “That’s 10 years you’re going to be paying for something that’s going to restrict the paper’s ability to do things that are focusing on the web. That’s not a thing a newspaper needs at a time like this.”

But the Globe sees its investment as a bet on print having a complementary role to online news going forward. “Our readers are digitally-minded people,” Stackhouse told me on launch day. “We publish a paper for people who are online a lot and still want a printed product at their doorstep every day to make sense of a world that flew by them while they were online.” Stackhouse, who took over as editor-in-chief in spring 2009 after a career as a business reporter, knows what he’s up against, and he’s making an argument about what a 21st-century newspaper needs to look like.

The Globe has always been the highbrow stalwart in Canadian journalism — and judging from its minimalist yet dramatic ad campaign, the paper still sees itself at the head of the table. (For more proof, check out this nifty microsite.) Stackhouse uses the term “the daily pause,” when readers feel obligated to close their browsers and read insightful, show-stopping journalism. That’s what newspapers should strive to give their readers, he told me. He says there “needs to be more selection. We need to bring more insight to issues that matter most and focus on issues of consequence and try to have fun with it.”

The Globe, like most other newspapers, realizes there’s still money in print advertising. According to a profile of the Globe in last month’s Toronto Life magazine, the paper’s online component brings in roughly 15 percent of the revenue generated on the print side — not far off the totals for most large American newspapers.

Whether or not Crawley’s doubling-down strategy will work remains to be seen. Eighteen years is a long time. Critics wonder if placing such emphasis on print will limit the Globe’s ability to take the reins of a slim Canadian online news market. The responses look a lot like this tweet, from Toronto-based technology consultant Rob Hyndman: “The Globe’s changes are about fear of loss, not about moving towards a positive goal.”

The Globe surely sees things differently. Its prime competitors — the National Post, the Toronto Star, and free dailies like Metro and 24 — are all print products. In fact, the paper’s weekday circulation jumped 5 percent last year and its print revenue increased 10 percent, while everyone else took a step backward. In Canada, the world of print is still the gladiator ring. The Canadian online news marketplace is underdeveloped: There’s nothing like Salon or The Daily Beast or The Huffington Post to draw eyeballs away from sites — GlobeandMail.com, CBC.ca, CTV.ca, GlobalTV.com — already affiliated with traditional news organizations. Speaking at the Economic Club of Canada on the eve of the launch, Stackhouse pinpointed four online competitors — The Huffington Post, Bloomberg, Yahoo Finance, and the BBC — none of which are Canadian. Without a sea of competitors galvanizing innovation and growth in Canadian online news, the Globe seems to think it makes sense to stick to the gravy — a move Ingram thinks is a mistake.

“Now is the time to seize the day, to become a leader,” he says, “because the Globe doesn’t have a huge amount of competition in print or online. It feels like it’s the only game in town — except maybe the CBC — and that lulls the paper into a false sense of security about its future.”

June 07 2010

14:00

Maximizing the values of the link: Credibility, readability, connectivity

The humble, ubiquitous link found itself at the center of a firestorm last week, with the spark provided by Nicholas Carr, who wrote about hyperlinks as one element (among many) he thinks contributes to distracted, hurried thinking online. With that in mind, Carr explored the idea of delinkification — removing links from the main body of the text.

The heat that greeted Carr’s proposals struck me (and CJR’s Ryan Chittum) as a disproportionate response. Carr wasn’t suggesting we stop linking, but asking if putting hyperlinks at the end of the text makes that text more readable and makes us less likely to get distracted. But of course the tinder has been around for a while. There’s the furor over iPad news apps without links to the web, which has angered and/or worried some who see the iPad as a new walled garden for content. There’s the continuing discontent with “old media” and their linking habits as newsrooms continue their sometimes technologically and culturally bumpy transition to becoming web-first operations. And then there’s Carr’s provocative thesis, explored in The Atlantic and his new book The Shallows, that the Internet is rewiring our brains to make us better at skimming and multitasking but worse at deep thinking.

I think the recent arguments about the role and presentation of links revolve around three potentially different things: credibility, readability and connectivity. And those arguments get intense when those factors are mistaken for each other or are seen as blurring together. Let’s take them one by one and see if they can be teased apart again.

Credibility

A bedrock requirement of making a fair argument in any medium is that you summarize the opposing viewpoint accurately. The link provides an ideal way to let readers check how you did, and alerts the person you’re arguing with that you’ve written a response. This is the kind of thing the web allows us to do instantly and obviously better than before; given that, providing links has gone from handy addition to requirement when advancing an argument online. As Mathew Ingram put it in a post critical of Carr, “I think not including links (which a surprising number of web writers still don’t) is in many cases a sign of intellectual cowardice. What it says is that the writer is unprepared to have his or her ideas tested by comparing them to anyone else’s, and is hoping that no one will notice.”

That’s no longer a particularly effective strategy. Witness the recent dustup between NYU media professor Jay Rosen and Gwen Ifill, the host of PBS’s Washington Week. Early last month, Rosen — a longtime critic of clubby political journalism — offered Washington Week as his pick for something the world could do without. Ifill’s response sought to diminish Rosen and his argument by not deigning to mention him by name. This would have been a tacky rhetorical ploy even in print, but online it fails doubly: The reader, already made suspicious by Ifill’s anonymizing and belittling of a critic, registers the lack of a link and is even less likely to trust her account. (Unfortunately for Ifill, the web self-corrects: Several commenters on her post supplied Rosen’s name, and were sharply critical of her in ways a wiser argument probably wouldn’t have provoked.)

Readability

Linking to demonstrate credibility is good practice, and solidly noncontroversial. Thing is, Carr didn’t oppose the basic idea of links. He called them “wonderful conveniences,” but added that “they’re also distractions. Sometimes, they’re big distractions — we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head.”

Chittum, for his part, noted that “reading on the web takes more self-discipline than it does offline. How many browser tabs do you have open right now? How many are from links embedded in another piece you were reading and how many of them will you end up closing without reading since you don’t have the time to read Everything On the Internets? The analog parallel would be your New Yorker pile, but even that — no matter how backed up — has an endpoint.”

When I read Chittum’s question about tabs, my eyes flicked guiltily from his post to the top of my browser. (The answer was 11.) Like a lot of people, when I encounter a promising link, I right-click it, open it in a new tab, and read the new material later. I’ve also gotten pretty good at assessing links by their URLs, because not all links are created equal: They can be used for balance, further explanation and edification, but also to show off, logroll and name-drop.

I’ve trained myself to read this way, and think it’s only minimally invasive. But as Carr notes, “even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters.” I’m not sure about the matters part, but I’ll concede the point about the extra cognitive load. I read those linked items later because I want to pay attention to the argument being made. If I stopped in the middle for every link, I’d have little chance of following the argument through to its conclusion. Does the fact that I pause in the middle to load up something to read later detract from my ability to follow that argument? I bet it does.

Carr’s experiment was to put the links at the end. (Given that, calling that approach “delinkification” was either unwise or intentionally provocative.) In a comment on Carr’s post, Salon writer Laura Miller (who’s experimented with the endlinks approach) asked a good question: Is opening links in new tabs “really so different from links at the end of the piece? I mean, if you’re reading the main text all the way through, and then moving on to the linked sources through a series of tabs, then it’s not as if you’re retaining the original context of the link.”

Connectivity

Carr was discussing links in terms of readability, but some responses have dealt more with the merits of something else — connectivity. Rosen — who’s described the ethic of the web persuasively as “to connect people and knowledge” — described Carr’s effort as an attempt to “unbuild the web.” And it’s a perceived assault on connectivity that inflames some critics of the iPad. John Battelle recently said the iPad is “a revelation for millions and counting, because, like Steve Case before him, Steve Jobs has managed to render the noise of the world wide web into a pure, easily consumed signal. The problem, of course, is that Case’s AOL, while wildly successful for a while, ultimately failed as a model. Why? Because a better one emerged — one that let consumers of information also be creators of information. And the single most important product of that interaction? The link. It was the link that killed AOL — and gave birth to Google.”

Broadly speaking, this is the same criticism of the iPad offered bracingly by Cory Doctorow: It’s an infantilizing vehicle for consumption, not creation. Which strikes me now, as it did then, as too simplistic. I create plenty of information, love the iPad, and see no contradiction between the two. I now do things — like read books, watch movies and casually surf the web — with the iPad instead of with my laptop, desktop or smartphone because the iPad provides a better experience for those activities. But that’s not the same as saying the iPad has replaced those devices, or eliminated my ability or desire to create.

When it comes to creating content, no, I don’t use the iPad for anything more complex than a Facebook status update. If I want to create something, I’ll go to my laptop or desktop. But I’m not creating content all the time. (And I don’t find it baffling or tragic that plenty of people don’t want to create it at all.) If I want to consume — to sit back and watch something, or read something — I’ll pick up the iPad. Granted, if I’m using a news app instead of a news website, I won’t find hyperlinks to follow, at least not yet. But that’s a difference between two modes of consumption, not between consumption and creation. And the iPad browser is always an icon away — as I’ve written before, so far the device’s killer app is the browser.

Now that the flames have died down a bit, it might be useful to look at links more calmly. Given the link’s value in establishing credibility, we can dismiss those who advocate true delinkification or choose not to link as an attempt to short-cut arguments. But I think that’s an extreme case. Instead, let’s have a conversation about credibility, readability and connectivity: As long as links are supplied, does presenting them outside of the main text diminish their credibility? Does that presentation increase readability, supporting the ethic of the web by creating better conversations and connections? Is there a slippery slope between enhancing readability and diminishing connectivity? If so, are there trade-offs we should accept, or new presentations that beg to be explored?

Photo by Wendell used under a Creative Commons license.

April 02 2010

14:00

This Week in Review: The iPad’s skeptics, Murdoch’s first paywall move and a ‘Chatroulette for news’

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

The iPad’s fanboys and skeptics: For tech geeks and future-of-journalism types everywhere, the biggest event of the week will undoubtedly come tomorrow, when Apple’s iPad goes on sale. The early reviews (Poynter’s Damon Kiesow has a compilation) have been mostly positive, but many of the folks opining on the iPad’s potential impact on journalism have been quite a bit less enthusiastic. A quick rundown:

— Scott Rosenberg, who’s studied the history of blogging and programming, says the news media’s excitement over the iPad reminds him of the CD-ROM craze of the early 1990s, particularly in its misguided expectation for a new, ill-defined technology to lead us into the future. The lesson we learned then and need to be reminded of now, Rosenberg says, is that “people like to interact with one another more than they like to engage with static information.”

— Business Insider’s Henry Blodget argues that the iPad won’t save media companies because they’re relying on the flawed premise that people want to consume content in a “tightly bound content package produced by a single publisher,” just like they did in print.

— Tech exec Barry Graubart says that while the iPad will be a boon to entertainment companies, it won’t provide the revenue boost news orgs expect it to, largely for two reasons: Its ads can’t draw the number of eyeballs that the standard web can, and many potential news app subscribers will be able to find suitable alternatives for free.

— GigaOm’s Mathew Ingram is not impressed with the iPad apps that news outlets have revealed so far, describing them as boring and unimaginative.

— Poynter’s Damon Kiesow gives us a quick summary of why some publishers thought the iPad might be a savior in the first place. (He doesn’t come down firmly on either side.)

Two other thoughtful pieces worth highlighting: Ken Doctor, a keen observer of the world of online news, asks nine questions about the iPad, and offers a lot of insight in the process. And Poynter’s Steve Myers challenges journalists to go beyond creating “good-enough” journalism for the iPad and produce creative, immersive content that takes full advantage of the device’s strengths.

Murdoch’s paid-content move begins: Rupert Murdoch has been talking for several months about his plans to put up paywalls around all of his news sites, and this week the first of those plans was unveiled. The Times and Sunday Times of London announced that they will begin charging for their sites in June — £1 per day or £2 per week. This would be stricter than the metered model that The New York Times has proposed and the Financial Times employs: no free articles, no metered allowance, just 100% paid content.

The Times and Sunday Times both accompanied the announcement with their own editorials giving a rationale for their decision. The Sunday Times is far more straightforward: “At The Sunday Times we put an enormous amount of money and effort into producing the best journalism we possibly can. If we keep giving it away we will no longer be able to do that.” Some corners of journalism praised the Times’ decision and echoed its reasoning: BBC vet John Humphrys, Texas newspaperman John P. Garrett (though he didn’t mention the Times by name in a post decrying unthinking “have it your way” journalism), and British PR columnist Ian Monk.

The move also drew criticism, most prominently from web journalism guru Jeff Jarvis, who called the paywall “pathetic.” (If you want your paywall-bashing in video form, Sky News has one of Jarvis, too.) Over at True/Slant, Canadian writer Colin Horgan had some intriguing thoughts about why this move could be important: The fact that the Internet is so all-encompassing as a medium has led us to blur together the vastly different types of content on it, Horgan argues. “What Murdoch is trying to do (perhaps unintentionally) is destroy that mental disconnect, and ask us to pay for media within a medium.”

Two other paid-content tidbits worth noting: Christian Science Monitor editor John Yemma told paidContent that news organizations’ future online will come not from “digital razzle dazzle,” but from relevant, meaningful content. And Damon Kiesow plotted paid content on a supply-and-demand curve, concluding that, not surprisingly, we have an oversupply of information.

Chatroulette, serendipity and the news: The random video chat site Chatroulette has drawn gobs of attention from media outlets, so it was probably only a matter of time before some of them applied the concept to online news. Daniel Vydra, a software developer at The Guardian, was among the first this week when he created Random Guardian and New York Times Roulette, two simple programs that take readers to random articles from those newspapers’ websites. Consultant Chris Thorpe explained the thinking behind their development — a Clay Shirky-inspired desire to recapture online the serendipity that a newspaper’s bundle provides.
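Under the hood these roulettes are tiny: fetch a result list from the paper’s API, pick one entry uniformly at random, redirect. A minimal offline sketch of the selection step — the hard-coded list stands in for an API response, and the names are illustrative, not Vydra’s actual code:

```python
import random

def news_roulette(articles, rng=random):
    """Return one article chosen uniformly at random -- the 'spin' of the roulette."""
    if not articles:
        raise ValueError("no articles to choose from")
    return rng.choice(articles)

# In the real hacks, `articles` would be the list of story URLs parsed out of
# a JSON response from the Guardian's or The New York Times' content APIs,
# not a literal Python list.
```

Passing an explicit `rng` (e.g. `random.Random(42)`) makes the spin reproducible, which is handy for testing; the default module-level `random` gives the genuinely unpredictable behavior the “Chatroulette for news” joke calls for.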

GigaOm’s Mathew Ingram wrote about the project approvingly, saying he expects creative, open API projects like this to be more successful in the long run than Rupert Murdoch’s paywalls. Also, Publish2’s Ryan Sholin noted that just because everyone’s excited about the moniker “Chatroulette for news” doesn’t mean this concept hasn’t been around for quite a while.

Meanwhile, the idea sparked deeper thoughts from two CUNY j-profs about the concept of serendipity and the news. Here at the Lab, C.W. Anderson argued that true serendipity involves coming across perspectives you don’t agree with, and asked how one might create a true “news serendipity maker” that could take into account your news consumption patterns, then throw you some curveballs. And in a short but smart post, Jeff Jarvis said that serendipity is not mere randomness, but unexpected relevance — “the unknown but now fed curiosity.”

How much slack can nonprofits take up?: Alan Mutter, an expert in the dollars-and-cents world of the news business both traditionally and online, raised a pretty big stink this week with a post decrying the idea that nonprofits can carry the bulk of the load of journalism. The numbers at the core of Mutter’s argument are simple: Newspapers are spending an estimated $4.4 billion annually on newsgathering, and it would take an $88 billion endowment to provide that much money each year. That would be more than a quarter of the $307.7 billion contributed to charity in 2008 — a ridiculously tall order.

Mutter drew a lot of fire in his comment section for attacking a straw man with that argument, as he didn’t cite any specific people who are claiming that nonprofits will, in fact, take over the majority of journalism’s funding. As many of those folks wrote, the nonprofit advocates have always claimed that they’ll be part of a network that makes up journalism’s future, not the network itself. (One of them, Northeastern prof Ben Compaine, had made that exact argument just a few days earlier, and Steve Outing made a similar one in response to Mutter’s post.)

John Thornton, a co-founder of the nonprofit Texas Tribune, wrote the must-read point-by-point response, taking issue with the basis of Mutter’s math and his assumption that market-driven solutions are “inherently superior” to non-market ones. Besides, he argued, serious journalism hasn’t exactly been doing business like gangbusters lately, either: “Expecting investors to continue to fund for-profit, Capital J journalism just ‘cuz:  doesn’t that sound a lot like charity?” Reuters financial blogger Felix Salmon weighed in with similar numbers-based objections, as did David Cay Johnston.

Reading roundup: One mini-debate, and four nifty resources:

Former tech/biz journalist Chris Lynch fired a shot at j-schools in a post arguing that the shrunken (but elite) audiences resulting from widespread news paywalls would cause “most journalism schools to shrink or disappear.” Journalism schools, he said, are teaching an outdated objectivity-based philosophy that doesn’t hold water in the Internet era, when credibility is defined much differently. Gawker’s Ravi Somaiya chimed in with an anti-j-school rant, and North Carolina j-school dean Jean Folkerts and About.com’s Tony Rogers (a community college j-prof) leaped to j-schools’ defense.

Now the four resources:

— Mathew Ingram of GigaOm has a quick but pretty comprehensive explanation of the conundrum newspapers are in and some of the possible ways out. Couldn’t have summed it up better myself.

— PBS MediaShift’s Jessica Clark outlines some very cool efforts to map out local news ecosystems. This will be something to keep an eye out for, especially in areas with blossoming hyperlocal news scenes, like Seattle.

— Consider this an addendum to last month’s South by Southwest festival: Ball State professor Brad King has posted more than a dozen short video interviews he conducted there, asking people from all corners of media what the most interesting thing they’re seeing is.

— British j-prof Paul Bradshaw briefly gives three principles for reporters in a networked era. Looks like a pretty good journalists’ mission statement to me.

March 29 2010

17:47

What would it take to build a true “serendipity-maker”?

What if we created a “ChatRoulette for news” that generated content we tended to disagree with — but was also targeted toward our regular levels and sources of news consumption? How hard would it be?

For the last 24 hours or so, the Twitter-sphere has been buzzing over Daniel Vydra’s “serendipity maker,” an off-the-cuff Python hack that draws on the APIs of the Guardian, New York Times, and Australian Broadcasting Corp. in order to create a series of “news roulettes.” In sum, hit a button and you’ll get taken to a totally random New York Times, Guardian, or ABC News story. As the Guardian noted on its technology blog, “the idea came out of a joking remark by Chris Thorpe yesterday in a Guardian presentation by Clay Shirky that what we really need is a ‘Chatroulette for news’”:

After all, we do have loads of interesting content: but the trouble with the way that one tends to trawl the net, and especially newspapers, simply puts paid to the sort of serendipitous discovery of news that the paper form enables by its juxtaposition of possibly unrelated — but potentially important — subjects.
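Mechanically, the roulette itself is simple. Here’s a minimal sketch of the core idea in Python — the actual API-fetching step is omitted, and the URLs below are placeholders standing in for a list of stories returned by a news API such as the Guardian’s:

```python
import random

def news_roulette(article_urls, rng=random):
    """Pick one story at random from a pool of article URLs.

    In a hack like Vydra's, the pool would be filled by querying a
    news API; here the fetch is left out so the roulette logic
    stands alone.
    """
    if not article_urls:
        raise ValueError("no articles to spin through")
    return rng.choice(article_urls)

# Placeholder pool standing in for an API response:
pool = [
    "https://www.theguardian.com/world/story-a",
    "https://www.nytimes.com/story-b",
    "https://www.abc.net.au/news/story-c",
]
print(news_roulette(pool))  # a random story from the pool
```

Hit the button (call the function), get a random story: that’s the whole trick.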

This relates to the much-debated theoretical issue of “news serendipity,” summarized here by Mathew Ingram. In essence, the argument goes that while there is more news on the web, our perspectives on the news are narrower because we only browse the sites we already agree with, or know we already like, or care about. In newspapers, however, we “stumbled upon” (yes, pun intended) things we didn’t care about, or didn’t agree with, in the physical act of turning the page.

As Ryan Sholin has been pointing out all morning on Twitter, the idea of a “serendipity maker” for the web isn’t entirely new. And I don’t know if the current news roulettes really solve the problem journalism theorists are concerned about. So I’d like to know: What would it take to create a news serendipity maker that automatically knew and “factored in” your news consumption patterns, but then showed you web content that was the opposite of what you normally consumed?

For example, I’m naturally hostile to the Tea Party as a political organization. What if someone created a roulette that automatically generated news content sympathetic to the Tea Party? And what if they found a way to key it to my news consumption patterns even more strongly — i.e., if somehow the roulette knew I was a regular New York Times reader and would pick Tea Party-friendly articles written either by the Times or outlets like the Times (rather than, say, random angry blog posts)?
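One rough way such a counter-programming roulette could be wired up, as a sketch only — the outlet pairings below are illustrative placeholders, not editorial judgments, and a real system would infer the reader’s habits rather than take them as input:

```python
import random

# Hypothetical mapping from a reader's habitual outlets to outlets of
# comparable register but opposing lean. These pairings are invented
# for illustration.
COUNTER_SOURCES = {
    "nytimes.com": ["wsj.com/opinion", "nationalreview.com"],
    "foxnews.com": ["motherjones.com", "thenation.com"],
}

def counter_roulette(reading_history, rng=random):
    """Given the outlets a reader already frequents, return one
    counter-programmed outlet to pull a story from, or None if no
    counterpart is known."""
    candidates = []
    for outlet in reading_history:
        candidates.extend(COUNTER_SOURCES.get(outlet, []))
    if not candidates:
        return None  # fall back to pure chance in a real system
    return rng.choice(candidates)
```

The hard part, of course, isn’t the random pick — it’s building the mapping and inferring the reader’s patterns in the first place.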

I think this is interesting, because it would basically hack the entire logic of the web. The beauty of the web is that it can direct you towards ever more finely grained content, which is exactly what you want to read. It would somehow know what you wanted even before you did. In other words, it might be the opposite of what Mark S. Luckie called “a Pandora for news.” And it would solve a very real social problem — or at least a highly theorized one — what Cass Sunstein calls the drift towards a “Daily Me” or “Daily We,” where we only read news content we already agree with, and our political culture suffers as a result.

So. This is a shout-out for news hackers, developers, and others to weigh in: How hard would it be to create a machine like this? How would you do it? Would you do it? I would really like to write a longer post on this, based on your replies. So feel free to chime in in the comments section, or email me directly with your thoughts. I’d like to include them in my next post.

March 26 2010

14:00

This Week in Review: Anonymous news comments, two big media law cases, and a health coverage critique

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

Anonymity, community and commenting: We saw an unusually lively conversation over the weekend on an issue that virtually every news organization has dealt with over the past few years: anonymous comments. It started with the news that Peer News, a new Hawaii-based news organization edited by former Rocky Mountain News chief John Temple, would not allow comments. His rationale was that commenting anonymity fosters a lack of responsibility, which leads to “racism, hate and ugliness.”

That touched off a spirited Twitter debate between two former newspaper guys, Mathew Ingram (Globe and Mail, now with GigaOm) and Howard Owens (GateHouse, now runs The Batavian). Afterward, Ingram wrote a fair summary of the discussion — he was pro-anonymous comments, Owens was opposed — and elaborated on his position.

Essentially, Owens argued that it’s unethical for news sites (particularly community-based ones) to allow anonymous comments because “readers and participants have a fundamental right to know who is posting what.” And Ingram makes two main points in his blog post: that many online communities allow anonymous comments and still maintain very healthy communities, and that it’s virtually impossible to pin down someone’s real identity online, so pretty much all commenting online is anonymous anyway.

Several other folks chimed in with various ideas for news commenting. Steve Buttry, who’s working on a fledgling, as-yet-unnamed Washington news site, wondered whether news orgs could find ways to create two tiers of commenting — one for identified commenters, the other for anonymous ones. Steve Yelvington, who dipped into Ingram and Owens’ debate, extolled the values of leadership, as opposed to management, in fostering great commenting community. The Cincinnati Enquirer’s Mandy Jenkins offered similar thoughts, saying that anonymity doesn’t matter nearly as much as an active, personable moderator.

J-prof and news futurist Jeff Jarvis and French journalist Bruno Boutot zoom out on the issue a bit, with Jarvis arguing that commenting is an insulting, inferior form of communication for news organizations to offer, and they should instead initiate more interactive, empowering communication earlier in the journalistic process. Boutot builds on that to say that newspapers need to invite readers into the process to build trust and survive, and outlines a limited place for anonymity in that goal. Finally, if you’re interested in going deeper down the rabbit hole of anonymous commenting, Jack Lail has an amazingly comprehensive list of links on the subject.

The iPad and magazines: The iPad will be officially released next Saturday, so expect the steady stream of articles and posts about whether it will or won’t save publishers and journalism to swell over the next couple of weeks. This week, a comScore survey found that 34 percent of its respondents would be likely to read newspapers or magazines if they owned an iPad — not nearly the percentage of people who said they’d browse the internet or check email with it, but actually more than I had expected. PaidContent takes a look at 15 magazines’ plans for adapting to tablets like the iPad, and The Wall Street Journal examines the tacks they’re taking with tablet advertising.

At least two people aren’t impressed with some of those proposals. Blogger and media critic Jason Fry says he expects many publishers to embrace a closed, controlled iPad format, which he argues is wearing thin because it doesn’t mesh well with the web. “With Web content, publishers aren’t going to be able to exercise the control that print gave them and they hope iPad will return to them,” he writes. And British j-prof Paul Bradshaw calls last week’s VIV Mag demo “lovely but pointless.” Meanwhile, Wired’s Steven Levy looks at whether the iPad or Google’s Chrome OS will be instrumental in shaping the future of computing.

Aggregation and media ownership in the courts: In the past week or so, we’ve seen developments in two relatively outside-the-spotlight court cases, both of which were good news for larger, traditional media outlets. First, a New York judge ruled that a web-based financial news site can’t report on the stock recommendations of analysts from major Wall Street firms until after each day’s opening bell. The Citizen Media Law Project’s Sam Bayard has a fantastic analysis of the case, explaining why the ruling is a blow to online news aggregators: It’s an affirmation of the “hot news” principle, which gives the reporting of certain facts similar protections to intellectual property, despite the fact that facts are in the public domain.

Meanwhile, the Lab’s C.W. Anderson analyzed the statements of several news orgs’ counsel at an FTC hearing earlier this month, finding in them a blueprint for how they plan to protect (or control) their content online. Some of those arguments include the hot news doctrine, as well as a concept of aggregation as an opt-in system. Both Anderson’s and Bayard’s pieces are lucid explanations of what’s sure to be a critical area of media law over the next couple of years.

And in another case, a federal appeals judge at least temporarily lifted the FCC’s cross-ownership ban that prevents media companies from owning a newspaper and TV station in the same market. Here’s the AP story on the ruling, and just in time, we got a great summary by Molly Kaplan of the New America Foundation of the “what” and “so what” of media concentration based on a Columbia University panel earlier this month.

Health care coverage taken to task: Health care reform, arguably the American news media’s biggest story of the past year, culminated this week with the passage of a reform bill. Washington Post media critic Howard Kurtz was among the first to take a crack at a postmortem on the media’s performance on the story, chiding the press in a generally critical column for focusing too much (as usual) on the political and procedural aspects of health care reform, rather than the substance of the proposals. The news media produced enough data and analysis to satisfy policy junkies, Kurtz said, but “in the end, the subject may simply have been too dense for the media to fully digest…For a busy electrician who plugs in and out of the news, the jousting and the jargon may have seemed bewildering.”

Kurtz was sympathetic, though, to what he saw as the reasons for that failure: The story was complicated, long, bewildering, and at times tedious, and the press was driven by the constant need to produce new copy and fill airtime. Those excuses didn’t fly with C.W. Anderson, who contended that Kurtz “is basically admitting the press has no meaningful role in our democracy.” If the press can’t handle meaningful stuff like health care reform, he asked, what good is it? And Rex Hammock used Kurtz’s critique as an example of why we need another form of context-oriented journalism to complement the day-to-day grind of information.

Google pulls an end-around on China: This isn’t particularly journalism-related, so I won’t dwell on it much, but it’s huge news for the global web, so it deserves a quick summary. Google announced this week that it’s stopping its censorship of Chinese search by redirecting Chinese search traffic to its servers in nearby Hong Kong, and two days later, a Google exec also told Congress that the United States needs to take online censorship seriously elsewhere in the world, too.

The New York Times‘ and the Guardian’s interviews with Sergey Brin and James Fallows’ interview with David Drummond give us more insight into the details of the decision and Google’s rationale, and Mathew Ingram has a good backgrounder on Google-China relations. Not surprisingly, not everyone’s wowed by Google’s move: Search Engine Land’s Danny Sullivan says it’s curiously late for Google to start caring about Chinese censorship. Finally, China- and media-watcher Rebecca MacKinnon explains why the ball is now in China’s court.

Reading roundup: I’ve got a bunch of cool bits and pieces for you this week. We’ll try to run through them quickly.

— Jacob Weisberg, chairman of the Slate Group, gives a brief but illuminating interview with paidContent’s Staci Kramer that’s largely about, well, paid content. Weisberg explains why Slate’s early experiment with a paywall was a disaster, but says media outlets need to charge for mobile news, since that’s a charge not for content, but for a convenient form of delivery.

— Since we’ve highlighted the launch and open-sourcing of Google’s Living Stories, it’s only fair to note an obvious downside: Florida j-prof Mindy McAdams points out that it’s been a month since it was updated. Google has acknowledged that fact with a note, and Joey Baker notes that he guessed last month that Google was open-sourcing the project because the Washington Post and New York Times weren’t using it well.

— Like ships passing in the night: USC j-prof Robert Hernandez argues that for many young or minority communities in cities, their local paper isn’t just dying; it’s long been dead because it’s consciously ignored them. Meanwhile, Gawker’s Ravi Somaiya notes that with the rise of Twitter and Facebook, big-time blogging is becoming more fact-driven, professionally written and definitive — in other words, more like those dead and dying newspapers.

— Colin Schultz has some great tips for current and aspiring science journalists, though several of them are transferable to just about any form of journalism.

— Finally, I haven’t read it yet, but I’m willing to bet that this spring’s issue of Nieman Reports on visual journalism is chock full of great stuff. Photojournalism prof Ken Kobre gives you a few good places to start.

Mask photo by Thirteen of Clubs used under a Creative Commons license.

March 12 2010

15:00

This Week in Review: Plagiarism and the link, location and context at SXSW, and advice for newspapers

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

The Times, plagiarism and the link: A few weeks ago, the resignations of two journalists from The Daily Beast and The New York Times accused of plagiarism had us talking about how the culture of the web affects that age-old journalistic sin. That discussion was revived this week by the Times’ public editor, Clark Hoyt, whose postmortem on the Zachery Kouwe scandal appeared Sunday. Hoyt concluded that the Times “owes readers a full accounting” of how Kouwe’s plagiarism occurred, and he also called out DealBook, the Times’ business blog for which Kouwe wrote, questioning its hyper-competitive nature and saying it needs more oversight. (In an accompanying blog post, Hoyt also said the Times needs to look closer at implementing plagiarism prevention software.)

Reuters’ Felix Salmon challenged Hoyt’s assertion, saying that the Times’ problem was not that its ethics were too steeped in the ethos of the blogosphere, but that they aren’t bloggy enough. Channeling CUNY prof Jeff Jarvis’ catchphrase “Do what you do best and link to the rest,” Salmon chastised Kouwe and other Times bloggers for rewriting stories that other online news organizations beat them to, rather than simply linking to them. “The problem, here, is that the bloggers at places like the NYT and the WSJ are print reporters, and aren’t really bloggers at heart,” Salmon wrote.

Michael Roston made a similar argument at True/Slant the first time this came up, and ex-newspaperman Mathew Ingram strode to Salmon’s defense this time with an eloquent defense of the link. It’s not just a practice for geeky insiders, he argues; it’s “a fundamental aspect of writing for the web.” (Also at True/Slant, Paul Smalera made a similar Jarvis-esque argument.) In a lengthy Twitter exchange with Salmon, Times editor Patrick LaForge countered that the Times does link more than most newspapers, and Kouwe was an exception.

Jason Fry, a former blogger for the Wall Street Journal, agreed with Ingram and Smalera, but theorized that the Times’ linking problem is not so much a refusal to play by the web’s rules as “an unthinking perpetuation of print values that are past their sell-by date.” Chief among those values, he says, is the scoop, which, as he argued further in a more sports-centric column, readers on the web just don’t care about as much as they used to.

Location prepares for liftoff: The massive music/tech gathering South By Southwest (or, in webspeak, SXSW) starts today in Austin, Texas, so I’m sure you’ll see a lot of ideas making their way from Austin to next week’s review. If early predictions are any indication, one of the ideas we’ll be talking about is geolocation — services like Foursquare and Gowalla that use your mobile device to give and broadcast location-specific information to and about you. In anticipation of this geolocation hype, CNET has given us a pre-SXSW primer on location-based services.

Facebook jump-started the location buzz by apparently leaking word to The New York Times that it’s going to unveil a new location-based feature next month. Silicon Alley Insider does a quick pro-and-con rundown of the major location platforms, and ReadWriteWeb wonders whether Facebook’s typically privacy-guarding users will go for this.

The major implication of this development for news organizations, I think, is the fact that Facebook’s jump onto the location train is going to send it hurtling forward far, far faster than it’s been going. Within as little as a year, location could go from the domain of early-adopting smartphone addicts to being a mainstream staple of social media, similar to the boom that Facebook itself saw once it was opened beyond college campuses. That means news organizations have to be there, too, developing location-based methods of delivering news and information. We’ve known for a while that this was coming; now we know it’s close.

The future of context: South By Southwest also includes bunches of fascinating tech/media/journalism panels, and one that’s given us a sneak preview is Monday’s panel called “The Future of Context.” Two of the panelists, former web reporter and editor Matt Thompson and NYU professor Jay Rosen, have published versions of their opening statements online, and both pieces are great food for thought. Thompson’s is a must-read: He describes the difference between day-to-day headline- and development-oriented information about news stories that he calls “episodic” and the “systemic knowledge” that forms our fundamental framework for understanding an issue. Thompson notes how broken the traditional news system’s way of intertwining those two forms of knowledge is, and he asks us how we can do it better online.

Rosen’s post is in less of a finished format, but it has a number of interesting thoughts, including a quick rundown of reasons that newsrooms don’t do explanatory journalism better. Cluetrain Manifesto co-author Doc Searls ties together both Rosen’s and Thompson’s thoughts and talks a bit more about the centrality of stories in pulling all that information together.

Tech execs’ advice for newspapers: Traditional news organizations got a couple of pieces of advice this week from two relatively big-time folks in the tech world. First, Netscape co-founder Marc Andreessen gave an interview with TechCrunch’s Erick Schonfeld in which he told newspaper execs to “burn the boats” and commit wholeheartedly to the web, rather than finding ways to prop up modified print models. He used the iPad as a litmus test for this philosophy, noting that “All the new [web] companies are not spending a nanosecond on the iPad or thinking of ways to charge for content. The older companies, that is all they are thinking about.”

Not everyone agreed: Newspaper Death Watch’s Paul Gillin said publishers’ current strategy, which includes keeping the print model around, is an intelligent one: They’re milking the print-based profits they have while trying to manage their business down to a level where they can transfer it over to a web-based model. News business expert Alan Mutter offered a more pointed counterargument: “It doesn’t take a certifiable Silicon Valley genius to see that no business can walk away from some 90% of its revenue base without imploding.”

Second, Google chief economist Hal Varian spoke at a Federal Trade Commission hearing about the economics of newspapers, advising newspapers that rather than charging for online content, they should be experimenting like crazy. (Varian’s summary and audio are at Google’s Public Policy Blog, and the full text, slides and Martin Langeveld’s summary are here at the Lab. Sync ‘em up and you can pretty much recreate the presentation yourself.) After briefly outlining the status of newspaper circulation and its print and online advertising, Varian also suggests that newspapers make better use of the demographic information they have of their online readers. Over at GigaOM, Mathew Ingram seconds Varian’s comments on engagement, imploring newspapers to actually use the interactive tools that they already have at their sites.

Reading roundup: We’ll start with our now-weekly summary of iPad stuff: Apple announced last week that you can preorder iPads as of today, and they’ll be released April 3. That could be only the beginning — an exec with the semiconductor IP company ARM told ComputerWorld we could see 50 similar tablet devices out this year. Multimedia journalist Mark Luckie urged media outlets to develop iPad apps, and Mac and iPhone developer Matt Gemmell delved into the finer points of iPad app design. (It’s not “like an iPhone, only bigger,” he says.)

I have two long, thought-provoking pieces on journalism, both courtesy of the Columbia Journalism Review. First, Megan Garber (now with the Lab) has a sharp essay on the public’s growing fixation on authorship that’s led to so much mistrust in journalism — and how journalists helped bring on that fixation. It’s a long, deep-thinking piece, but it’s well worth reading all the way through Garber’s cogent argument. Her concluding suggestions for news orgs regarding authority and identity are particularly interesting, with nuggets like “Transparency may be the new objectivity; but we need to shift our definition of ‘transparency’: from ‘the revelation of potential biases,’ and toward ‘the revelation of the journalistic process.’”

Second, CJR has the text of Illinois professor Robert McChesney’s speech this week to the FTC, in which he makes the case for a government subsidy of news organizations. McChesney and The Nation’s John Nichols have made this case in several places with a new book, “The Death and Life of American Journalism,” on the shelves, but it’s helpful to have a comprehensive version of it in one spot online.

Finally, the Online Journalism Review’s Robert Niles has a simple tip for newspaper publishers looking to stave off their organizations’ decline: Learn to understand technology from the consumer’s perspective. That means, well, consuming technology. Niles provides a to-do list you can hand to your bosses to help get them started.
