
March 04 2011

19:00

Mother Jones web traffic up 400+ percent, partly thanks to explainers

February was a record-breaking traffic month for Mother Jones. Three million unique users visited the site — a 420 percent increase from February 2010’s numbers. And MotherJones.com posted 6.6 million pageviews overall — a 275 percent increase.

The investigative magazine credits the traffic burst partly to a month of exceptional work in investigations, essays, and exposés, its editorial bread and butter: real-time coverage of the Wisconsin protests, a Kevin Drum essay on the consequences of wealth inequality in America, the first national media coverage of that infamous prank call to Wisconsin governor Scott Walker. The mag also credits the traffic, though, to its extended presence on social media: Mother Jones’ Twitter followers increased 28 percent in February, to more than 43,000; its Facebook fan base grew 20 percent, to nearly 40,000; and its Tumblr fan base grew 200 percent, to nearly 3,000 followers.

In all, the mag estimates, a cumulative 29 percent of traffic to MotherJones.com came from social media sites.

But Mother Jones also attributes the traffic explosion to a new kind of news content: its series of explainers detailing and unpacking the complexities of the situations in Tunisia, Egypt, Bahrain, Libya, and Wisconsin. We wrote about MoJo’s Egypt explainer in January, pointing out the feature’s particular ability to accommodate disparate levels of reader background knowledge; that format, Adam Weinstein, a MoJo copy editor and blogger, told me, has become the standard one for the mag’s explainers. “It was a great resource for the reader, but it also helped us to focus our coverage,” Weinstein notes. “When something momentous happens, it can be hard for a small staff to focus their energies, I think. And this was an ideal way to do that.”

The magazine started its explainer series with a debrief on Tunisia; with the Egypt explainer, written by MoJo reporter Nick Baumann, the form became a format. The explainers were “a collaborative effort,” Weinstein says — “everybody pitched in.” And the explainer layout, with the implicit permission it gives to the reader to pick and choose among the content it contains, “just became this thing where we could stockpile the information as it was coming in, and also be responsive to people responding via social media with questions, with interests, with inquiries that they didn’t see answers to in other media outlets.”

It was a format that proved particularly useful, Weinstein notes, during the weekend after Mubarak had resigned in Egypt and when protests gained power in Libya and, stateside, Wisconsin. “All of this was happening at the same time,” he says — “none of us were getting a lot of sleep that weekend” — and “our social media just exploded.” But because MoJo’s Twitter and Tumblr and Facebook pages became, collectively, such an important interface for conversation, “we needed a really efficient way of organizing our content,” and in one convenient place. So the explainer format became, essentially, “a great landing page.”

The success of that format could offer an insight for any outlet trying to navigate between the Scylla and Charybdis of content and context. Explainers represent something of a tension for news organizations; on the one hand, they can be hugely valuable, both to readers and to orgs’ ability to create community around particular topics and news events; on the other, they can be redundant and, worse, off-mission. (“We’re not Wikipedia,” as one editor puts it.)

It’s worth noting, though, that MoJo explainers aren’t, strictly, topic pages; rather, they’re topical pages. Their content isn’t reference material catered to readers’ general interests; it’s news material catered to readers’ immediate need for context and understanding when it comes to complex, and current, situations. The pages’ currency, in other words, is currency itself.

That’s likely why the explainers have been so successful for MoJo’s traffic (and, given the outlet’s employment of digital advertising, its bottom line); it’s also why, though, the format requires strategic thinking when it comes to the resources demanded by reporting and aggregation — particularly for outlets of a small staff size, like MoJo. Explainers, as valuable as they can be, aren’t always the best way for a news outlet to add value. “We still do the long-form stories,” Weinstein notes, “and this has just given us a place to have a clearinghouse for that.” For MoJo, he says, the explainer “is a way of stitching together all the work that everyone’s been doing. And we’re thrilled that readers have responded.”

February 23 2011

17:00

The context-based news cycle: editor John O’Neil on the future of The New York Times’ Topic Pages

“There are a lot of people in the news industry who are very skeptical of anything that isn’t news,” says The New York Times’ John O’Neil. As the editor of the Times’ Topic Pages, which he calls a “current events encyclopedia,” O’Neil oversees 25,000 topic pages, half of which — about 12,000 or so — include some human curation.

While the rest of the newsroom is caught up in the 24-hour news cycle, constantly churning out articles, O’Neil and his team are on a parallel cycle, “harvesting the reference material every day out of what the news cycle produces.” This means updating existing topic pages, and creating new ones, based on each morning’s news. (The most pressing criterion for what gets updated first, O’Neil said, is whether “we would feel stupid not having it there.”) A few of the Times’ most highly curated topics include coffee (curated by coffee reporter Oliver Strand with additional updates by Mike White) and Wikipedia (curated by media reporter Noam Cohen), as well as more predictably prominent topics like Wikileaks and Egypt.

The Topics team includes three editors and two news assistants, who work with Times reporters. “People give us links to studies they’ve used for stories or articles they’ve looked at, and this is something that we do hope to expand,” O’Neil said.

But half of the topic pages are “purely automated,” O’Neil said. And O’Neil is even contemplating contracting the number of curated topic pages, as people and events drop out of relevance. (The Topic pages garner 2.5 percent of the Times’ total pageviews.) O’Neil said he had read a statistic that roughly a third of Wikipedia’s traffic came from only about 3,000 of its now more than 17 million pages. “We’re concentrating more on that upper end of the spectrum.”

In a phone conversation, I talked with O’Neil about why the Times has ventured into Wikipedia territory, how the Times’ model might be scalable for local news organizations, and why creating a “current events encyclopedia” turns out to be easier than you might think. A condensed and edited version of that conversation is below.

LB: How did the topic pages develop?

JO: Topic pages began as part of the redesign in 2006. Folks up in tech and the website realized they could combine the indexing that has actually gone on for decades with the ability to roll up a set of electronic tags. The first topic pages were just a list of stories by tag, taking advantage of the fact that we had human beings looking at stories every day and deciding if they were about this, or were they about that. Just that simple layer of human curation created lists that are much more useful than a keyword search, and they proved to be pretty popular — more popular than expected at the time.
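That “roll up a set of electronic tags” idea is simple enough to sketch in code. Here is a minimal, hypothetical version (invented story data, not the Times’ actual indexing system):

```python
from collections import defaultdict
from datetime import date

# Hypothetical story records; each carries the tags an indexer assigned.
stories = [
    {"headline": "Protests Spread in Cairo", "published": date(2011, 2, 1), "tags": ["Egypt"]},
    {"headline": "Wikipedia Turns Ten", "published": date(2011, 1, 15), "tags": ["Wikipedia"]},
    {"headline": "Cairo Square Fills Again", "published": date(2011, 2, 4), "tags": ["Egypt"]},
]

# "Roll up" the tags: each tag becomes a topic page listing its stories,
# newest first, exactly the "list of stories by tag" described above.
topic_pages = defaultdict(list)
for story in stories:
    for tag in story["tags"]:
        topic_pages[tag].append(story)
for page in topic_pages.values():
    page.sort(key=lambda s: s["published"], reverse=True)

print([s["headline"] for s in topic_pages["Egypt"]])
# ['Cairo Square Fills Again', 'Protests Spread in Cairo']
```

The human curation O’Neil describes sits on top of exactly this kind of automatic list.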

LB: What’s the philosophy at the Times behind the topic page idea?

JO: Jill Abramson’s point of view when she started looking at this: When she was a reporter, she would work on a story for days on end, weeks on end, and pile up more and more material. You end up with a stack of manila folders full of material, and she would take all of that and boil it down to a 1,200-word story. It was a lot of knowledge that was gained in the process, and it didn’t make it to the reader. The question was: How can we try to share some of that with the reader?

My impression is that people find these pages terrifically useful. Not everybody comes to a news story with the knowledge you would have if you’d been following the story closely all along. News organizations are set up to deal with the expectation [that people] have read the paper yesterday and the day before.

LB: How do you go about transforming news stories into reference material? What does the process look like?

JO: What we found, as we did this, is that the Times is actually publishing reference material every day. It’s buried within stories. In a given day, with 200 articles in the paper, about 10 percent represent extremely significant developments in the field. Now we can take a small number of subjects, like Tunisia or Egypt or Lebanon or the Arizona shootings, and keep on top of everything, set the bar higher. We can really keep up with what the daily paper’s doing on the biggest stories.

LB: As you note, there’s a lot of wariness among “news” folks around putting effort into topic pages. For instance, when I talked with Jonathan Weber of The Bay Citizen, the Times’ San Francisco local news partner, about topic pages, he told me: “people are looking for news from a news site….We’re not Wikipedia. You don’t really go [to a topic page] for a backgrounder, you go there for a story.” How would you respond to that?

JO: Our experience has been that that’s never been entirely true, and it’s becoming less true all the time. Look at the pound-and-a-half print New York Times, and think how much of that is about things other than what happened yesterday. Even in the print era, that was a pretty big chunk.

Then again, it makes sense for folks at a place like The Bay Citizen to be more skeptical about topic pages. A blog, after all, is all about keeping the items coming. And a site focused on local news would feel less need to explain background — hey, all our readers live here and know all that! — than if they were covering the Muslim Brotherhood, for instance.

LB: So what about the Wikipedia factor? Why should the Times be getting into the online encyclopedia business?

JO: I think Wikipedia is an amazing phenomenon. I use it. But there’s no field of information in which people would find there to be only one source. On Wikipedia, there’s the uncertainty principle: It’s all pretty good, but you’re never sure with any specific thing you’re looking at, how specific you can be about it. You have to be an expert to know which are the good pages and which are the not-so-good ones.

Our topic pages — and other newspaper-based pages — bring, for one thing, a level of authority or certainty. We’re officially re-purposing copy that has been edited and re-edited to the standards of The New York Times. It’s not always perfect, but people know what they can expect.

LB: What’s the business-side justification for the Topic Pages?

JO: We know that the people who come to the topic page are more likely than people who come to an article page to continue on and look at other parts of the site. It helps bring people to the site from search engines.

It’s also brand-building; it’s another way people can form an attachment. People can also subscribe to topic pages. (Every page produces an RSS feed.) We’ve begun to do some experimenting with social media. There are lots of people who want to like or follow or friend The New York Times, but a topic page’s feed gives you a way of looking at a slice of this audience. It turns the supermarket into a series of niche food stands, so to speak.
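Since every topic page produces an RSS feed, following a single topic programmatically takes only a few lines. A sketch using the feedparser library; the feed URL is a placeholder, since the interview doesn’t give the Times’ actual URL scheme:

```python
import feedparser  # third-party: pip install feedparser

# Placeholder URL: the interview says only that every topic page
# produces an RSS feed, not what the Times' feed URLs look like.
FEED_URL = "https://www.nytimes.com/topics/some-topic/rss.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    print(entry.title, "-", entry.link)
```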

LB: The Times obviously has a lot more resources than most local news outlets. Is developing topic pages something of a luxury, or is it something that makes sense to pursue on a more grass-roots level?

JO: At the Times, less than one half of one percent of the newsroom staff is re-purposing the copy. That makes it of lasting value, and makes it more accessible to people who are searching. If you think about a small regional paper, three editors would be a huge commitment. On the other hand, the array of topics on which they produce a significant amount of information that other people don’t is small. There’s a relatively small number of subjects where they feel like, “We really own this, this is key to our readership and important to our coverage.”

If people think of topic pages as the creation of original content on original subjects, it never looks feasible. If you think about it as re-purposing copy on your key subjects, I think it’s something more and more people will do.

October 13 2010

18:00

Behind-the-scenes innovation: How NPR’s Project Argo is making life more efficient for its bloggers

Remember the days before the roundup post existed? Neither do I. [Laura's making me feel old. —Ed.] The roundup is a longstanding staple of the blogosphere, an expected post for loyal readers who want a rundown of the best new stuff around the web on a given topic. But can a staple still have room for innovation? Over at Argo Network, the new blog network at NPR, the leadership team is giving it a shot on the back end. They’ve designed a workflow that makes it easier for their bloggers to cull through links and produce a roundup post. The result: a simpler process for the blogger, and added benefit for the reader. It’s no technological revolution, but an example of the kind of small improvement that can make it easier to share work with the audience.

“We realized the workflow inefficiency of how a blogger would create a link roundup — copying and pasting URLs from places,” Matt Thompson, Argo editorial project manager, told me. “We were thinking about workflow and how can we make this as easy as possible. How do we take an action the blogger is making regularly and make it more efficient?”

Thompson puts workflow innovation in the broader context of the Argo Project, which NPR sees as an experiment in form. The Argo team sees blogging — or online serial storytelling, as Thompson put it — as a medium still in its infancy. There’s still time, they say, to think about how it can be improved, including how to do it more efficiently. And they plan to release the new tools that come out of their experimentation to the general public. The team’s developer, Marc Lavallee, said they’re trying to create tools that fit the workflow of the lone blogger. “Most of what we build will be the type of thing a person running a solo site would find useful,” Lavallee said. “When you’re thinking about a product, it’s so much easier to say: ‘One person is behind this blog. Would I do that every day? No? Then let’s not build that.’”

The roundup tool is a good example of the Argo team’s thinking. As bloggers go through their links each day, scrolling through stories and posts looking for the most interesting stuff on their beat, they tag the links using Delicious. Their Delicious accounts are synced up with Argo’s backend (WordPress, modified using Django) to match up the tags. The backend pulls in the links, letting bloggers quickly put together a nice-looking post without all the copy/pasting and formatting. Thompson made a screen-capture video of the whole process, which you can check out below. Here’s a sample of what the roundup would look like published.
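For the curious, the plumbing Thompson describes can be approximated in a short script. This is a speculative sketch, not Argo’s actual code: Delicious offered public per-user, per-tag JSON feeds at the time, and the URL scheme and field names below ("u" = URL, "d" = title, "n" = note) are best-effort assumptions.

```python
import json
from urllib.request import urlopen

# Hypothetical Delicious v2 JSON feed fetch; field names are assumptions.
def fetch_tagged_links(username, tag):
    with urlopen(f"http://feeds.delicious.com/v2/json/{username}/{tag}") as resp:
        return json.load(resp)

def render_roundup(bookmarks):
    # Build the HTML list a blogger would otherwise copy/paste by hand.
    items = []
    for b in bookmarks:
        note = f" &mdash; {b['n']}" if b.get("n") else ""
        items.append(f'<li><a href="{b["u"]}">{b["d"]}</a>{note}</li>')
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

# A blogger tags links "roundup" during the day; the post builds itself.
print(render_roundup(fetch_tagged_links("argo-blogger", "roundup")))
```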

Using Delicious as a link-post builder isn’t new, of course, but Argo’s version integrates specifically into their sidebar, a special WordPress post type, and Lavallee’s code.

The tagging tool also feeds into the sites’ topic pages. Those of us who spend all day on the Internet encounter great links all the time that aren’t right for a full post, or maybe even for a spot in a roundup post — but for people interested in a particular topic, they could still be valuable. The Argo process lets bloggers make use of those links with the same tagging function, making the sites’ topic pages a bit better than a purely automated feed. Check out the ocean acidification page over at the Argo blog Climatide (covering issues related to climate change and the ocean on Cape Cod) — in the sidebar, “Latest Ocean Acidification Links” contains (at this writing, at least) links pulled in through the Delicious tagging process. Others are drawn from Daylife or handmade Twitter lists around certain topics.

Thompson is passionate about contextual news, so I asked him if his topic pages might serve, perhaps, a more noble function than driving search traffic, which is arguably why most news organizations have topic pages at all. Thompson was quick to point out that the Argo topic pages are still new; he’s working with bloggers on their “tagging hygiene,” he says. And he admits that others at Argo are a bit “skeptical of topics pages,” which “is probably a good thing.” But the pages have potential, when built out, to let readers drill down into narrow-but-important topics in line with the goal of the blog. “These pages aren’t just sort of random machine driven pages,” Thompson said. The humanized topic pages help Argo bloggers get their sites, as Thompson puts it, to be “an extension of their mind and their thinking.”

Photo by Benny Mazur used under a Creative Commons license.

June 30 2010

22:00

Google News revamps with “news for you” angle

A few moments ago, the Google News homepage rolled out a redesign — a revamp meant to make the algorithm-driven news site “more customizable and shareable.”

“There’s an old saying that all news is local,” writes Google software engineer Kevin Stolt in a blog post announcing the design changes. “But all news is personal too — we connect with it in different ways depending on our interests, where we live, what we do and a lot of other factors. Today we’re revamping the Google News homepage with several changes designed to make the news that you see more relevant to you. We’re also trying to better highlight interesting stories you didn’t know existed and to make it easier for you to share stories through social networks.”

In other words, the new site is trying to balance two major, and often conflicting, goals of news consumption: personalization and serendipity.

The more specific purpose of today’s changes, Google says, is threefold: first, to have consumers tell Google what stories most interest them; second, to help those consumers keep track of ongoing stories; and third, to help them share stories with others.

Among the changes being implemented, per Stolt’s explanation of them:

Customizable interest areas: “The new heart of the homepage is something we call ‘News for you’: a stream of headlines automatically tailored to your interests. You can help us get it right by using the ‘Edit personalization’ box to specify how much you’re interested in Business, Health, Entertainment, Sports or any subject you want to add (whether it’s the Supreme Court, the World Cup or synthetic biology). You can choose to view the stories by Section view or List view, and reveal more headlines by hovering over the headline with your mouse. We’ll remember your preferences each time you log in.”

Customizable news sourcing: “To give you more control over the news that you see, we’re now allowing you to choose which news sources you’d like to see more or less often. You can do so in News Settings. These sources will rank higher or lower for you (but not for anyone else) in Google News search results and story clusters.” (See the sketch after this list of changes.)

An emphasis on local news: “And then there’s local news; we’re now highlighting weather and headlines about your city or neighborhood in their own section, which you can edit with whichever location you want to follow.”

An increased emphasis on the Spotlight section: “We’re also more prominently displaying the Spotlight section, which features stories of more lasting interest than breaking news and has been one of our most popular sections since we introduced it last fall.”

Communal (read: non-customized) story highlights: “There are the subjects that interest you and then there’s the major news of the day. To make it easy for you to find the big stories like Hurricane Alex, we’re adding links to topics that many outlets are covering. You’ll find these topics in the Top Stories section on the left side of the homepage as well as in linked keywords above headlines. Clicking on a topic link takes you to a list of related coverage that you can add to your news stream.”

(This is also a nod, I’d add, toward serendipity — a goal Google News has expressed interest in before, most notably through its Spotlight and its Editors’ Picks features.)
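To make the “customizable news sourcing” change above concrete: ranking sources higher or lower “for you (but not for anyone else)” amounts to applying a per-user weight at ranking time. A toy illustration, not Google’s actual algorithm, with invented sources and scores:

```python
# Each user stores a weight per source; everyone else gets the defaults.
user_source_weights = {
    "alice": {"example-gazette.com": 1.5, "example-tabloid.com": 0.5},
}

stories = [
    {"title": "Markets Rally", "source": "example-gazette.com", "base_score": 0.7},
    {"title": "Celebrity Feud", "source": "example-tabloid.com", "base_score": 0.9},
]

def rank_for(user, stories):
    weights = user_source_weights.get(user, {})
    # Unknown users fall back to the unpersonalized base score.
    return sorted(stories,
                  key=lambda s: s["base_score"] * weights.get(s["source"], 1.0),
                  reverse=True)

print([s["title"] for s in rank_for("alice", stories)])   # personalized order
print([s["title"] for s in rank_for("nobody", stories)])  # default order
```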

The changes are pretty fascinating, all in all (especially in the context of Google’s rumored move into Facebook territory); we’ll likely have more to say on them later on. In the meantime, here’s more on the changes, from the horse’s mouth:

June 10 2010

14:00

Linking by the numbers: How news organizations are using links (or not)

In my last post, I reported on the stated linking policies of a number of large news organizations. But nothing speaks like numbers, so I also trawled through the stories on the front pages of a dozen online news outlets, counting links, looking at where they went, and how they were used.

I checked 262 stories in all, and to a certain degree, I found what you’d expect: Online-only publications were typically more likely to make good use of links in their stories. But I also found that use of links often varies wildly within the same publication, and that many organizations link mostly to their own topic pages, which are often of less obvious value.

My survey included several major international news organizations, some online-only outlets, and some more blog-like sites. Given the ongoing discussion about the value of external links, and the evident popularity of topic pages, I sorted links into “internal”, “external”, and “topic page” categories. I included only inline links, excluding “related articles” sections and sidebars.
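The three-way sort is mechanical enough to express as a rule, even though the counting itself was done by hand. Roughly, a link is internal if it points at the publication’s own site, a topic-page link if it points at the site’s branded topic pages, and external otherwise. A sketch in Python, with hypothetical topic-page URL patterns since every outlet brands these differently:

```python
from urllib.parse import urlparse

# Hypothetical path hints; real topic-page URLs vary by outlet.
TOPIC_PATH_HINTS = ("/topics/", "/topic/", "/persons/")

def classify_link(href, site_host):
    parsed = urlparse(href)
    if parsed.netloc.lower().endswith(site_host):
        if any(hint in parsed.path.lower() for hint in TOPIC_PATH_HINTS):
            return "topic page"
        return "internal"
    return "external"

print(classify_link("https://www.nytimes.com/topics/hbo/index.html", "nytimes.com"))  # topic page
print(classify_link("https://example.org/report.pdf", "nytimes.com"))                 # external
```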

Twelve hand-picked news outlets hardly make up an unbiased sample of the entire world of online news, nor can data from one day be counted as comprehensive. But call it a data point — or a beginning. For the truly curious, the spreadsheet contains article-level numbers and notes.

Of the dozen online news outlets surveyed, the median number of links per article was 2.6. Here’s the average number of links per article for each outlet:

Source                        Internal  External  Topic Page  Total
BBC News                      0         0         0           0
CNN                           0.3       0.2       0.7         1.2
Politico                      0.7       0.2       0.6         1.5
Reuters.com                   0.1       0.2       1.4         1.7
Huffington Post               1.1       1.0       0           2.1
The Guardian                  0.5       0.2       1.8         2.4
Seattle Post-Intelligencer    0.9       1.9       0           2.8
Washington Post               1.0       0.3       2.0         3.3
Christian Science Monitor     2.5       1.1       0           3.6
TechCrunch                    1.8       3.6       1.2         6.6
The New York Times            1.0       1.2       4.6         6.8
Nieman Journalism Lab         1.4       13.1      0           14.5

The median number of internal links per article was 0.95, the median number of external links was 0.65, and the median number of topic page links was also 0.65. I had expected that online-only publications would have more links, but that’s not really what we see here. TechCrunch and our own Lab articles rank quite high, but so does The New York Times. Conversely, the BBC, Reuters, CNN, and The Huffington Post never had to convert from a print mindset, so I would have expected them to be more web native — but they rank at the bottom.

What’s going on here? In short, we’re seeing lots of automatically generated links to topic pages. Many organizations are using topic pages as their primary linking strategy. The majority of links from The New York Times, The Washington Post, Reuters.com, CNN, and Politico — and for some of these outlets the vast majority — were to branded topic pages.

Topic pages can be a really good idea, providing much needed context and background material for readers. But as Steve Yelvington has noted, topic pages aren’t worth much if they’re not taken seriously. He singles out “misplaced trust in automation” as a pitfall. Like many topic pages, this CNN page is nothing more than a pile of links to related stories.

It doesn’t seem very useful to spend such a high percentage of a story’s links directing readers to such pages. I wonder about the value of heavy linking to broad topic pages in general. How much is the New York Times reader really served by having a link to the HBO topic page from every story about the cable industry, or the Washington Post reader served by links on mentions of the “GOP”?

I suspect that links to topic pages are flourishing because such links can be generated by automated tools and because topic pages can be an SEO strategy, not because topic page links add great journalistic value. My suspicion is that most of the topic page links we are seeing here are automatically or semi-automatically inserted. Nothing wrong with automation — but with present technology it’s not as relevant as hand-coded links.

So what do we see when we exclude topic page links?

Excluding links to topic pages — counting only definitely hand-written links — the median number of links per article drops to 1.7. The implication here is that something like 30 percent of the links that one finds in online news articles across the web go to topic pages, which certainly matches my reading experience. Sorting the outlets by internal-plus-external links also shows an interesting shift in the linking leaderboard.

Source                        Internal  External  Total
BBC News                      0         0         0
Reuters.com                   0.1       0.2       0.3
CNN                           0.3       0.2       0.5
The Guardian                  0.5       0.2       0.7
Politico                      0.7       0.2       0.9
Washington Post               1.0       0.3       1.3
Huffington Post               1.1       1.0       2.1
The New York Times            1.0       1.2       2.2
Seattle Post-Intelligencer    0.9       1.9       2.8
Christian Science Monitor     2.5       1.1       3.6
TechCrunch                    1.8       3.6       5.4
Nieman Journalism Lab         1.4       13.1      14.5

The Times and the Post have moved down, and online-only outlets Seattle Post-Intelligencer and Christian Science Monitor have moved up. TechCrunch still ranks high with a lot of linking any way you slice it, and the Lab is still the linkiest because we’re weird like that. (To prevent cheating, I didn’t tell anyone at the Lab, or elsewhere, that I was doing this survey.) But the BBC, CNN, and Reuters are still at the bottom.

Linking is unevenly executed, even within the same publication. The number of links per article depended on who was writing it, the topic, the section of the publication, and probably also the phase of the moon. Even obviously linkable material, such as an obscure politician’s name or a reference to comments on Sarah Palin’s Facebook page, was inconsistently linked. Meanwhile, one anomalous Reuters story linked to the iPad topic page on every single reference to “iPad” — 16 times in one story. (I’m going to have to side with the Wikipedia linking style guide here, which says link on first reference only.)

Whether or not an article contains good links seems to depend largely on the whim of the reporter at most publications. This suggests a paucity of top-down guidance on linking, which is in line with the rather boilerplate answers I got to my questions about linking policy.

Articles posted to the “blog” section of a publication generally made heavier use of links, especially external links. The average number of external links per page at The New York Times drops from 1.2 to 0.8 if the single blog post in the sample is excluded — it had ten external links! Whatever news outlets mean by the word “blog,” they are evidently producing their “blogs” differently, because the blogs have more links.

The wire services don’t link. Stories on Reuters.com — as distinguished from stories delivered on Reuters professional products — had an average of 1.7 links per article. But only 0.3 of these links were not to topic pages, and only blog posts had any external links at all. Stories read on Reuters professional products sometimes contain links to source financial documents or other Reuters stories, though it’s not clear to me whether these systems use or support ordinary URLs. The Associated Press has no hub news website of its own, so I couldn’t include it in my survey, but stories pushed to customers through its standard feed do not include inline links, though they sometimes include links in an “On the Net” section at the end of the story.

As I wrote previously, Reuters and AP told me that the reason they don’t include inline hyperlinks is that many of their customers publish on paper only and use content management systems that don’t support HTML.

What does this all mean? The link has yet to make it into the mainstream of journalistic routine. Not all stories need links, of course, but my survey showed lots of examples where links would have provided valuable backstory, context, or transparency. Several large organizations are diligent about linking to their own topic pages, probably with the assistance of automation, but are wildly inconsistent about linking to anything else. The cultural divide between “journalists” and “bloggers” is evident in the way that writers use links (or don’t use them), even within the same newsroom. The major wire services don’t yet offer integrated hypertext products for their online customers. And when automatically generated links are excluded, online-only publications tend to take links more seriously.

January 07 2010

19:11

Keeping Martin honest: Checking on Langeveld’s predictions for 2009

[A little over one year ago, our friend Martin Langeveld made a series of predictions about what 2009 would bring for the news business — in particular the newspaper business. I even wrote about them at the time and offered up a few counter-predictions. Here's Martin's rundown of how he fared. Up next, we'll post his predictions for 2010. —Josh]

PREDICTION: No other newspaper companies will file for bankruptcy.

WRONG. By the end of 2008, only Tribune had declared. Since then, the Star-Tribune, the Chicago Sun-Times, Journal Register Company, and the Philadelphia newspapers made trips to the courthouse, most of them right after the first of the year.

PREDICTION: Several cities, besides Denver, that today still have multiple daily newspapers will become single-newspaper towns.

RIGHT: Hearst closed the Seattle Post-Intelligencer (in print, at least), Gannett closed the Tucson Citizen, making those cities one-paper towns. In February, Clarity Media Group closed the Baltimore Examiner, a free daily, leaving the field to the Sun. And Freedom is closing the East Valley Tribune in Mesa, which cuts out a nearby competitor in the Phoenix metro area.

PREDICTION: Whatever gets announced by the Detroit Newspaper Partnership in terms of frequency reduction will be emulated in several more cities (including both single and multiple newspaper markets) within the first half of the year.

WRONG: Nothing similar to the Detroit arrangement has been tried elsewhere.

PREDICTION: Even if both papers in Detroit somehow maintain a seven-day schedule, we’ll see several other major cities and a dozen or more smaller markets cut back from six or seven days to one to four days per week.

WRONG, mostly: We did see a few other outright closings including the Ann Arbor News (with a replacement paper published twice a week), and some eliminations of one or two publishing days. But only the Register-Pajaronian of Watsonville, Calif. announced it will go from six days to three, back in January.

PREDICTION: As part of that shift, some major dailies will switch their Sunday package fully to Saturday and drop Sunday publication entirely. They will see this step as saving production cost, increasing sales via longer shelf life in stores, improving results for advertisers, and driving more weekend website traffic. The “weekend edition” will be more feature-y, less news-y.

WRONG: This really falls in the department of wishful thinking; it’s a strategy I’ve been advocating for the last year or so: follow the audience to the web, jettison the overhead of printing and delivery, but retain the most profitable portion of the print product.

PREDICTION: There will be at least one, and probably several, mergers between some of the top newspaper chains in the country. Top candidate: Media News merges with Hearst. Dow Jones will finally shed Ottaway in a deal engineered by Boston Herald owner (and recently-appointed Ottaway chief) Pat Purcell.

WRONG AGAIN, but this one is going back into the 2010 hopper. Lack of capital by most of the players, and the perception or hope that values may improve, put a big damper on mergers and acquisitions, but there should be renewed interest ahead.

PREDICTION: Google will not buy the New York Times Co., or any other media property. Google is smart enough to stick with its business, which is organizing information, not generating content. On the other hand, Amazon may decide that they are in the content business…And then there’s the long shot possibility that Michael Bloomberg loses his re-election bid next fall, which might generate a 2010 prediction, if NYT is still independent at that point.

RIGHT about Google, and NOT APPLICABLE about Bloomberg (but Bloomberg did acquire BusinessWeek). The Google-NYT pipe dream still gets mentioned on occasion, but it won’t happen.

PREDICTION: There will be a mini-dotcom bust, featuring closings or fire sales of numerous web enterprises launched on the model of “generate traffic now, monetize later.”

WRONG, at least on the mini-bust scenario. Certainly there were closings of various digital enterprises, but it didn’t look like a tidal wave.

PREDICTION: The fifty newspaper execs who gathered at API’s November Summit for an Industry in Crisis will not bother to reconvene six months later (which would be April) as they agreed to do.

RIGHT. There was a very low-key round two with fewer participants in January, without any announced outcomes, and that was it. [Although there was also the May summit in Chicago, which featured many of the same players. —Ed.]

PREDICTION: Newspaper advertising revenue will decline year-over-year 10 percent in the first quarter and 5 percent in the second. It will stabilize, or nearly so, in the second half, but will have a loss for the year. For the year, newspapers will slip below 12 percent of total advertising revenue (from 15 percent in 2007 and around 13.5 percent in 2008). But online advertising at newspaper sites will resume strong upward growth.

WRONG, and way too optimistic. Full-year results won’t be known for months, but the first three quarters have seen losses in the 30 percent ballpark. Gannett and New York Times have suggested Q4 will come in “better” at “only” about 25 percent down. My 12 percent reference was to newspaper share of the total ad market, a metric that has become harder to track this year due to changes in methodology at McCann, but the actual figure for 2009 will ultimately sugar out at about 10 percent.

PREDICTION: Newspaper circulation, aggregated, will be steady (up or down no more than 1 percent) in each of the 6-month ABC reporting periods ending March 31 and September 30. Losses in print circulation will be offset by gains in ABC-countable paid digital subscriptions, including facsimile editions and e-reader editions.

WRONG, and also way too optimistic. The March period drop was 7.1 percent, the September drop was 10.6 percent, and digital subscriptions didn’t have much impact.

PREDICTION: At least 25 daily newspapers will close outright. This includes the Rocky Mountain News, and it will include other papers in multi-newspaper markets. But most closings will be in smaller markets.

WRONG, and too pessimistic. About half a dozen daily papers closed for good during the year.

PREDICTION: One hundred or more independent local startup sites focused on local news will be launched. A number of them will launch weekly newspapers, as well, repurposing the content they’ve already published online. Some of these enterprises are for-profit, some are nonprofit. There will be some steps toward formation of a national association of local online news publishers, perhaps initiated by one of the journalism schools.

Hard to tell, but probably RIGHT. Nobody is really keeping track of how many hyperlocals are active, or their comings and goings. An authoritative central database would be a Good Thing.

PREDICTION: The Dow Industrials will be up 15 percent for the year. The stocks of newspaper firms will beat the market.

RIGHT. The Dow finished the year up 18.8 percent. (This prediction is the one that got the most “you must be dreaming” reactions last year.)

And RIGHT about newspapers beating the market (as measured by the Dow Industrials), which got even bigger laughs from the skeptics. There is no index of newspaper stocks, but on the whole, they’ve done well. It helps to have started in the sub-basement at year-end 2008, of course, which was the basis of my prediction. Among those beating the Dow, based on numbers gathered by Poynter’s Rick Edmonds, were New York Times (+69%), AH Belo (+164%), Lee Enterprises (+746%), McClatchy (+343%), Journal Communications (+59%), EW Scripps (+215%), Media General (+348%), and Gannett (+86%). Only Washington Post Co. (+13%) lagged the market. Not listed, of course, are those still in bankruptcy.

PREDICTION: At least one publicly-owned newspaper chain will go private.

NOPE.

PREDICTION: A survey will show that the median age of people reading a printed newspaper at least 5 days per week is now over 60.

UNKNOWN: I’m not aware of a 2009 survey of this metric, but I’ll wager that the median age figure is correct.

PREDICTION: Reading news on a Kindle or other e-reader will grow by leaps and bounds. E-readers will be the hot gadget of the year. The New York Times, which currently has over 10,000 subscribers on Kindle, will push that number to 75,000. The Times will report that 75 percent of these subscribers were not previously readers of the print edition, and half of them are under 40. The Wall Street Journal and Washington Post will not be far behind in e-reader subscriptions.

UNKNOWN, as far as the subscription counts go: newspapers and Kindle have not announced e-reader subscription levels during the year. The Times now has at least 30,000, as does the Wall Street Journal (according to a post by Staci Kramer in November; see my comment there as well). There have been a number of new e-reader introductions, but none of them look much better than their predecessors as news readers. My guess would be that by year end, the Times will have closer to 40,000 Kindle readers and the Journal 35,000. During 2010, 75,000 should be attainable for the Times, especially counting all e-editions (which include the Times Reader, and which totaled 53,353 weekday and 34,435 Sunday copies for the six months ending Sept. 30).

PREDICTION: The advent of a color Kindle (or other brand color e-reader) will be rumored in November 2009, but won’t be introduced before the end of the year.

RIGHT: plenty of rumors, but no color e-reader, except Fujitsu’s Flepia, which is expensive, experimental, and only for sale in Japan.

PREDICTION: Some newspaper companies will buy or launch news aggregation sites. Others will find ways to collaborate with aggregators.

RIGHT: Hearst launched its topic pages site LMK.com. And various companies are working with EVRI, Daylife and others to bring aggregated feeds to their sites.

PREDICTION: As newsrooms, with or without corporate direction, begin to truly embrace an online-first culture, outbound links embedded in news copy, blog-style, as well as standalone outbound linking, will proliferate on newspaper sites. A reporter without an active blog will start to be seen as a dinosaur.

MORE WISHFUL THINKING, although there’s progress. Many reporters still don’t blog, still don’t tweet, and many papers are still on content management systems that inhibit embedded links.

PREDICTION: The Reuters-Politico deal will inspire other networking arrangements whereby one content generator shares content with others, in return for right to place ads on the participating web sites on a revenue-sharing basis.

YES, we’re seeing more sharing of content, with various financial arrangements.

PREDICTION: The Obama administration will launch a White House wiki to help citizens follow the Changes, and in time will add staff blogs, public commenting, and other public interaction.

NOT SO FAR, although a new Open Government Initiative was recently announced by the White House. This grew out of some wiki-like public input earlier in the year.

PREDICTION: The Washington Post will launch a news wiki with pages on current news topics that will be updated with new developments.

YES — kicked off in January, it’s called WhoRunsGov.com.

PREDICTION: The New York Times will launch a sophisticated new Facebook application built around news content. The basic idea will be that the content of the news (and advertising) package you get by being a Times fan on Facebook will be influenced by the interests and social connections you have established on Facebook. There will be discussion of, if not experimentation with, applying a personal CPM based on social connections, which could result in a rewards system for participating individuals.

NO. Although the Times has continued to come out with innovative online experiments, this was not one of them.

PREDICTION: Craigslist will partner with a newspaper consortium in a project to generate and deliver classified advertising. There will be no new revenue in the model, but the goal will be to get more people to go to newspaper web sites to find classified ads. There will be talk of expanding this collaboration to include eBay.

NO. This still seems like a good idea, but probably it should have happened in 2006 and the opportunity has passed.

PREDICTION: Look for some big deals among the social networks. In particular, Twitter will begin to falter as it proves to be unable to identify a clearly attainable revenue stream. By year-end, it will either be acquired or will be seeking to merge or be acquired. The most likely buyer remains Facebook, but interest will come from others as well and Twitter will work hard to generate an auction that produces a high valuation for the company.

NO DEAL, so far. But RIGHT about Twitter beginning to falter and still having no “clearly attainable” revenue stream in sight. Twitter’s unique visitors and site visits, as measured by Compete.com, peaked last summer and have been declining, slowly, ever since. Quantcast agrees. [But note that neither of those traffic stats count people interacting with Twitter via the API, through Twitter apps, or by texting. —Ed.]

PREDICTION: Some innovative new approaches to journalism will emanate from Cedar Rapids, Iowa.

YES, as described in this post and this post. See also the blogs of Steve Buttry and Chuck Peters. The Cedar Rapids Gazette and its affiliated TV station and web site are in the process of reinventing and reconstructing their entire workflow for news gathering and distribution.

PREDICTION: A major motion picture or HBO series featuring a journalism theme (perhaps a blogger involved in saving the world from nefarious schemes) will generate renewed interest in journalism as a career.

RIGHT. Well, I’m not sure if it has generated renewed interest in journalism as a career, but the movie State of Play featured both print reporters and bloggers. And Julie of Julie & Julia was a blogger, as well. [Bit of a reach there, Martin. —Ed.]

[ADDENDUM: I posted about Martin's predictions when he made them and wrote this:

I’d agree with most, although (a) I think there will be at least one other newspaper company bankruptcy, (b) I think Q3/Q4 revenue numbers will be down from 2008, not flat, (c) circ will be down, not stable, (d) newspaper stocks won’t beat the market, (e) the Kindle boom won’t be as big as he thinks for newspapers, and (f) Twitter won’t be in major trouble in [2009] — Facebook is more likely to feel the pinch with its high server-farm costs.

I was right on (a), (b), and (c) and wrong on (d). Gimme half credit for (f), since Twitter is now profitable and Facebook didn’t seem too affected by server expenses. Uncertain on (e), but I’ll eat my hat if “75 percent of [NYT Kindle] subscribers were not previously readers of the print edition, and half of them are under 40.” —Josh]

Photo of fortune-teller postcard by Cheryl Hicks used under a Creative Commons license.
