[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]
Cuts and big changes for two papers: In the past week, two American newspapers have announced major reorganizations that, depending on who you read, were either cold corporate downsizing or fresh attempts at journalism innovation. First, late last week, Gannett’s USA Today announced that it would undergo the most sweeping change in its 28-year history, transforming “into a multi-media company” as opposed to a newspaper — and laying off 130 of its 1,500 employees in the process. The Associated Press and paidContent have pretty good explanations of what the changes entail, and thanks to the feisty Gannett Blog, we have the slide presentation Gannett execs made to USA Today’s staff.
Though there are some dots to be connected, those slides are the best illustration of what Gannett is trying to do: Push USA Today further into web content, breaking news and especially mobile content (by far its fastest-growing area) in order to justify a simultaneous move deeper into mobile and online advertising. The paper is hoping to become faster on breaking news, with a web-first mindset, fewer editors, and a strategy that focuses on flooding coverage on breaking stories and then coming back later for deeper features.
Gannett Blog’s Jim Hopkins, a longtime critic of the company, wasn’t thrilled about this move, either, pointing out the lack of newsroom experience in some of its key executives and saying that Gannett touted almost the exact same strategy four years ago, to little effect. He did say a few days later, though, that Gannett’s plans to encourage more collaboration among staffers — by flattening the “silos” of the News, Sports, Money, and Life sections — are long overdue.
News media analyst Ken Doctor was much more charitable, seeing in USA Today’s overhaul echoes of the new “digital first” mentalities at the Journal Register Co. and TBD. The best way to see this, Doctor said, is to “mark another day in which a publisher is acting on the plain truths of the marketplace and of the audiences, and trying to reinvent itself.” Newspaper Death Watch’s Paul Gillin called USA Today’s transformation a bellwether for news organizations and said its harmony between news and advertising is a bitter but necessary pill for traditionalists to swallow. And media consultant Mario Garcia said USA Today’s audience-driven approach is the key to survival in a multimedia environment.
The other newspaper to announce an overhaul was the Deseret News of Salt Lake City, a for-profit paper published by the Mormon Church. The paper is laying off 43 percent of its staff, though you wouldn’t know it from the News’ own article on the changes. In a pair of posts, Ken Doctor looked at the change in philosophy that’s accompanying the cuts — an attempt to become the worldwide Mormon newspaper of sorts, along with pro-am and local news efforts and a news-broadcast collaboration — and liked what he found. News business expert Alan Mutter examined the prospects for a slashed, print-and-broadcast newsroom and came out less optimistic.
A Twitter stunt gone awry: Twitter devotees are used to seeing untrue rumors and scoops occasionally get reported there (as Jeff Goldblum can attest), but this week may have been the first time a false Twitter report was knowingly started by a member of the traditional media as a stunt. Fed up with the more-breathless-than-usual Twitter rumor-reporting that’s been going on in the sports media this summer, Washington Post sports reporter Mike Wise decided to start a false rumor about the length of an NFL quarterback’s suspension to make a point about the unreliability of reporting on Twitter.
The stunt bombed; Wise admitted the hoax an hour later and was suspended for a month by the Post the next day. Such an ill-advised prank isn’t really news in itself, but it did spur a bit of interesting commentary on Twitter and breaking news. Numerous people argued that Wise’s hoax betrayed a fundamental misunderstanding of the nature of Twitter as a news medium — one that many others probably share. Even after the episode, Wise maintained that it showed that nobody checks facts or sourcing on breaking stories on Twitter.
Quite a few observers disagreed for a variety of reasons. Barry Petchesky of Gawker’s sports blog Deadspin said the whole incident actually disproved Wise’s thesis: The false story didn’t gain much traction, and the media outlets that did report the story credited Wise until it could be confirmed independently, just the way the system is supposed to work.
But the primary objection was that, as Gawker’s Hamilton Nolan, Slate’s Tom Scocca, and several others all argued, to the extent that Wise was trusted, it was because of the credibility that people give to The Washington Post — a traditional news organization — rather than Twitter itself. As TBD’s Steve Buttry pointed out, people would have run with this story if Wise had planted it in the Post itself or on its website; what makes Twitter any different? DCist’s Aaron Morrissey put the point well: Wise falsely “assumed that there weren’t levels of authenticity to Twitter, which, just like any other social construct on Earth, features some people who are reputable concerning whatever and others who aren’t.”
Rupert’s paywall runs into obstacles: Two months after the online paywall went up at Rupert Murdoch’s Times of London, The Independent (a competitor of The Times) reported this week that with a vastly reduced audience to sell to, advertisers are fleeing the site. In the article, various British news industry analysts also said The Times is killing its online brand and not adding any of the sort of value that’s necessary to justify charging for news. Stateside, too, Lost Remote’s Steve Safran saw the news as “mounting evidence that putting up a paywall is bad for business.”
It should be noted, though, that according to those analysts, The Times’ paywall is “more about gathering consumer information than selling content” — News Corp.’s primary intent may be getting detailed, personalized information on Times readers and using it to sell them other products within its media empire, including its BSkyB satellite TV. Francois Nel ran some possible numbers and determined that even with its relatively small audience (15,000 subscribers, plus day-pass users), News Corp. could be making more money with its paywall than without.
On the other hand, a new study reported by paidContent estimated that online subscribers to The Times and Murdoch’s Wall Street Journal are worth only a quarter of their print counterparts. Getting rid of the print product, the study posited, wouldn’t even make up for the loss of income from those subscribers. The Press Gazette’s Dominic Ponsford detailed more of the research firm’s report — a rather depressing one for newspaper execs.
Google and the AP play nice: A quiet news development worth noting: Google and The Associated Press renewed their licensing agreement that allows Google (including, especially, Google News) to host AP content. The deal was announced on Google’s side via a one-paragraph post, and on the AP’s side through a short press release, and then a much more extensive article by its technology writer Michael Liedtke. The extension is significant because the two sides have had a consistently fractious relationship — their first agreement began in 2006 after the AP threatened to sue Google for aggregating its articles, AP executives have criticized news aggregators for misappropriating content, and the AP’s material briefly stopped appearing on Google News late last year.
The Lab’s Megan Garber noted that this new agreement might go beyond another truce and mark a change in the way the companies relate: “Us-versus-them becoming let’s-work-together.” Search Engine Land’s Danny Sullivan provided plenty of background, surmising that AP has learned its lesson that Google News can live on just fine without them.
Reading roundup: This week was an especially rich one for all sorts of web-journalism punditry. Here’s a sampling:
— The American Journalism Review’s Barb Palser tried to throw some cold water on the hyperlocal news movement, using some Pew stats to argue that people don’t go online for neighborhood news as much as we might think. (That use of statistics led to a frustrated response by Michele McLellan.) And the Online Journalism Review’s Robert Niles added his skepticism to the discussion surrounding Patch and large-scale hyperlocal news.
— NYU j-prof Jay Rosen can be a polarizing figure, but there are few media observers who are better at pulling thoughtful insights out of the often mystifying world that is journalism-in-transition. We got three particularly thought-provoking tidbits from him this week: A sharp interview with The Economist about the American press; a lecture at a French j-school about the changing dynamic between “the audience” and “the public,” with tips for new students; and a video clip from the Journal Register Co.’s ideaLab on news production and innovation.
— We spent some time this summer talking about the merits (and drawbacks) of links, so consider this a worthy addendum: Scott Rosenberg, who recently chronicled the history of blogging, issued a three-part defense of the link this week. A great examination of one of the fundamental features of the web.
— Finally, two cool reads, one practical and the other theoretical. The Atlantic’s Alexis Madrigal listed five lessons from the publication of Longshot, the hyperspeed-produced magazine formerly known as 48HRS, and here at the Lab, Cornell scholar Joshua Braun talked about the way TV news organizations maintain the “stage management” of broadcast in their online efforts. “They continue to control what remains backstage and what goes front-stage,” he told Megan Garber in a Q&A, giving comment moderation as one example. “That’s not unique to the news, either. But it’s an interesting preservation of the way the media’s worked for a long time.”
The humble, ubiquitous link found itself at the center of a firestorm last week, with the spark provided by Nicholas Carr, who wrote about hyperlinks as one element (among many) he thinks contribute to distracted, hurried thinking online. With that in mind, Carr explored the idea of delinkification — removing links from the main body of the text.
The heat that greeted Carr’s proposals struck me (and CJR’s Ryan Chittum) as a disproportionate response. Carr wasn’t suggesting we stop linking, but asking if putting hyperlinks at the end of the text makes that text more readable and makes us less likely to get distracted. But of course the tinder has been around for a while. There’s the furor over iPad news apps without links to the web, which has angered and/or worried some who see the iPad as a new walled garden for content. There’s the continuing discontent with “old media” and their linking habits as newsrooms continue their sometimes technologically and culturally bumpy transition to becoming web-first operations. And then there’s Carr’s provocative thesis, explored in The Atlantic and his new book The Shallows, that the Internet is rewiring our brains to make us better at skimming and multitasking but worse at deep thinking.
I think the recent arguments about the role and presentation of links revolve around three potentially different things: credibility, readability and connectivity. And those arguments get intense when those factors are mistaken for each other or are seen as blurring together. Let’s take them one by one and see if they can be teased apart again.
A bedrock requirement of making a fair argument in any medium is that you summarize the opposing viewpoint accurately. The link provides an ideal way to let readers check how you did, and alerts the person you’re arguing with that you’ve written a response. This is the kind of thing the web allows us to do instantly and obviously better than before; given that, providing links has gone from handy addition to requirement when advancing an argument online. As Mathew Ingram put it in a post critical of Carr, “I think not including links (which a surprising number of web writers still don’t) is in many cases a sign of intellectual cowardice. What it says is that the writer is unprepared to have his or her ideas tested by comparing them to anyone else’s, and is hoping that no one will notice.”
That’s no longer a particularly effective strategy. Witness the recent dustup between NYU media professor Jay Rosen and Gwen Ifill, the host of PBS’s Washington Week. Early last month, Rosen — a longtime critic of clubby political journalism — offered Washington Week as his pick for something the world could do without. Ifill’s response sought to diminish Rosen and his argument by not deigning to mention him by name. This would have been a tacky rhetorical ploy even in print, but online it fails doubly: The reader, already made suspicious by Ifill’s anonymizing and belittling of a critic, registers the lack of a link and is even less likely to trust her account. (Unfortunately for Ifill, the web self-corrects: Several commenters on her post supplied Rosen’s name, and were sharply critical of her in ways a wiser argument probably wouldn’t have provoked.)
Linking to demonstrate credibility is good practice, and solidly noncontroversial. Thing is, Carr didn’t oppose the basic idea of links. He called them “wonderful conveniences,” but added that “they’re also distractions. Sometimes, they’re big distractions — we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head.”
Chittum, for his part, noted that “reading on the web takes more self-discipline than it does offline. How many browser tabs do you have open right now? How many are from links embedded in another piece you were reading, and how many of them will you end up closing without reading, since you don’t have the time to read Everything On the Internets? The analog parallel would be your New Yorker pile, but even that — no matter how backed up — has an endpoint.”
When I read Chittum’s question about tabs, my eyes flicked guiltily from his post to the top of my browser. (The answer was 11.) Like a lot of people, when I encounter a promising link, I right-click it, open it in a new tab, and read the new material later. I’ve also gotten pretty good at assessing links by their URLs, because not all links are created equal: They can be used for balance, further explanation and edification, but also to show off, logroll and name-drop.
I’ve trained myself to read this way, and think it’s only minimally invasive. But as Carr notes, “even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters.” I’m not sure about the matters part, but I’ll concede the point about the extra cognitive load. I read those linked items later because I want to pay attention to the argument being made. If I stopped in the middle for every link, I’d have little chance of following the argument through to its conclusion. Does the fact that I pause in the middle to load up something to read later detract from my ability to follow that argument? I bet it does.
Carr’s experiment was to put the links at the end. (Given that, calling that approach “delinkification” was either unwise or intentionally provocative.) In a comment on Carr’s post, Salon writer Laura Miller (who’s experimented with the endlinks approach) asked a good question: Is opening links in new tabs “really so different from links at the end of the piece? I mean, if you’re reading the main text all the way through, and then moving on to the linked sources through a series of tabs, then it’s not as if you’re retaining the original context of the link.”
Carr was discussing links in terms of readability, but some responses have dealt more with the merits of something else — connectivity. Rosen, who has persuasively described the ethic of the web as “to connect people and knowledge,” called Carr’s effort an attempt to “unbuild the web.” And it’s a perceived assault on connectivity that inflames some critics of the iPad. John Battelle recently said the iPad is “a revelation for millions and counting, because, like Steve Case before him, Steve Jobs has managed to render the noise of the world wide web into a pure, easily consumed signal. The problem, of course, is that Case’s AOL, while wildly successful for a while, ultimately failed as a model. Why? Because a better one emerged — one that let consumers of information also be creators of information. And the single most important product of that interaction? The link. It was the link that killed AOL — and gave birth to Google.”
Broadly speaking, this is the same criticism of the iPad offered bracingly by Cory Doctorow: It’s an infantilizing vehicle for consumption, not creation. That strikes me now, as it did then, as too simplistic. I create plenty of information, love the iPad, and see no contradiction between the two. I now do things — like read books, watch movies and casually surf the web — with the iPad instead of with my laptop, desktop or smartphone, because the iPad provides a better experience for those activities. But that’s not the same as saying the iPad has replaced those devices, or eliminated my ability or desire to create.
When it comes to creating content, no, I don’t use the iPad for anything more complex than a Facebook status update. If I want to create something, I’ll go to my laptop or desktop. But I’m not creating content all the time. (And I don’t find it baffling or tragic that plenty of people don’t want to create it at all.) If I want to consume — to sit back and watch something, or read something — I’ll pick up the iPad. Granted, if I’m using a news app instead of a news website, I won’t find hyperlinks to follow, at least not yet. But that’s a difference between two modes of consumption, not between consumption and creation. And the iPad browser is always an icon away — as I’ve written before, so far the device’s killer app is the browser.
Now that the flames have died down a bit, it might be useful to look at links more calmly. Given the link’s value in establishing credibility, we can dismiss true delinkification — the choice not to link at all — as an attempt to short-circuit arguments. But that’s an extreme case. Instead, let’s have a conversation about credibility, readability and connectivity: As long as links are supplied, does presenting them outside of the main text diminish their credibility? Does that presentation increase readability, supporting the ethic of the web by creating better conversations and connections? Is there a slippery slope between enhancing readability and diminishing connectivity? If so, are there trade-offs we should accept, or new presentations that beg to be explored?
Photo by Wendell used under a Creative Commons license.
By any measure, former Washington Post executive editor Len Downie epitomized success in the traditional, subscription-and-advertising model of newspaper journalism: With a staff that once topped 900 and an annual budget of $100 million, his newsroom hauled in 25 Pulitzer Prizes over 17 years and wielded influence from Congress to the darkest recesses of the nation’s capital.
Since stepping down from the Post’s top newsroom job at age 66, Downie has taken on a professorship at Arizona State University. But behind the scenes, he also is lending his experience to help shape the practices and prospects for the burgeoning nonprofit sector in journalism.
Why? Simple: Downie says the for-profit model alone no longer can support the kinds of investigative, explanatory, and accountability journalism that society needs. As the for-profit sector shrinks, journalists and interested readers must explore new ways to underwrite their work.
“There are going to have to be many different kinds of economic models,” Downie said in an interview at the Post’s offices. “The future is a much more diverse ecosystem.”
Downie has made himself an expert on the nonprofit model, and wrote about its possibilities in his recent report, “The Reconstruction of American Journalism,” with Michael Schudson.
Less known, perhaps, is that Downie casts a wide net within the nonprofit sector of journalism. He’s a board member at the Center for Investigative Reporting, which recently launched California Watch to cover money and politics at the state level. He also chairs the journalism advisory committee at Kaiser Health News, which has provided niche explanatory reporting to leading newspapers, including the Post. And he’s on the board of Investigative Reporters and Editors, which has incorporated panels on the nonprofit model into its conferences. (I should note that I am a part-time editor for the Washington Post News Service.)
Looking across the sector, Downie sees great potential — and some big, unanswered questions.
On the upside, nonprofits are helping journalism move toward a more collaborative model, Downie said. In the old days, newspapers resisted ideas and assistance from outside. But in the new news ecosystem, collaboration is a way of life. “All of our ideas have been changed about that,” he said.
Also a plus: Big foundations and the public at large are warming to the idea that news organizations are deserving of their support, just like the symphony or any other nonprofit that contributes to society’s cultural assets. “There’s a question of whether there’s enough public realization,” Downie said. “I think we’re heading to that direction. Awareness is growing steadily.”
But a lot of questions still must be sorted out, Downie said.
High on the list, he said, is the most basic of all: Where will the money come from? Like other nonprofits, nonprofit news organizations will have to find the right mix of foundation money, grassroots support, advertising, and perhaps additional government support, he said.
That leads to the other big question of sustainability: It’s not clear that all the nonprofits that have launched in recent years will survive. “How many will succeed and for how long?” Downie wondered. A related question: How will the collaborative model settle out, and where will nonprofits find productive niches?
Downie said he also has been watching nonprofits wrestle with the issue of credibility — how to achieve it and how to keep it.
The answer begins with editorial independence and transparency about financial supporters, Downie said. But when it comes to painting a bright line between journalism and ideology, advocacy or spin, there are no magic formulas to assure readers — just the experience of trial and error.
“It’s one of these things that’s proven by its exceptions,” Downie said. “When there’s an exception, it’s a scandal.”
Enterprise reporting partnerships with online news organizations are in vogue at major newspapers these days, and arguably no paper has been more aggressive in pursuing them than the Washington Post. But in his ombudsman column Sunday, Andrew Alexander takes Post editors to task for a series of failures that plagued its most recent partnership, with a new organization calling itself the Fiscal Times.
The Fiscal Times is not a nonprofit, but it has a lot of the markings of one. It is backed by a wealthy philanthropist, investment banker and former U.S. commerce secretary, Peter G. Peterson; it is staffed by established journalists, including former Post political writer and editor Eric Pianin; and it claims to run an independent, nonpartisan, non-ideological newsroom. The main difference is that the Fiscal Times is run by a privately held company controlled by Peterson and his son Michael.
So what went wrong?
On Dec. 31, the Post ran its first story from the Fiscal Times, a newsy report that support was building on Capitol Hill for a bipartisan commission to tackle the nation’s chronic deficits and mounting debts. As it happens, this is Peterson’s pet issue and the focus of the Peterson Foundation.
According to Alexander, problem No. 1 with the story was that it quoted the president of the Concord Coalition, but failed to mention that the group receives funding from the Peterson Foundation. It also cited data from a study supported by the foundation but again failed to note the foundation’s backing, according to Alexander.
Alexander goes on to cite other problems with the story, including balance and timing. But the big foul-up in his book appears to be the transparency issues surrounding Peterson’s support for issue advocacy, and I couldn’t agree more.
Is it possible for a deeply opinionated philanthropist to keep his nose out of a newsroom of his own making? I do think it’s possible. Look at ProPublica, funded almost entirely by Herb and Marion Sandler, who also launched the liberal-leaning Center for American Progress. But transparency is key to credibility — and ultimately, to the viability of any news organization, for-profit or nonprofit.
What does transparency look like? Mostly, it resides with the intent of the publisher, and it might be expressed as a newsroom oversight board or other firewall structure that keeps newsrooms insulated from financial pressures. But to the outside world, it means disclosure of anything that might even hint of a conflict.
In this case, the Post fell down on the job, according to Alexander. But the Post has been around for a long time, and it certainly will recover. The Fiscal Times — like so many of the new news organizations that have sprouted up in recent years — has not developed a similar reservoir of credibility. The question is whether any governance structure, process or procedure can provide an adequate substitute.