
May 15 2013

12:20

The newsonomics of where NewsRight went wrong


Quietly, very quietly, NewsRight — once touted as the American newspaper industry’s bid to protect its content and make more money from it — has closed its doors.

Yesterday, it conducted a concluding board meeting, aimed at tying up loose ends. That meeting follows the issuing of a put-your-best-face-on-it press release two weeks ago. Though the news has been out there, hardly a whimper was heard.

Why?

Chalk it up, first, to how few people are really still covering the $38.6 billion U.S. newspaper industry. Then add in the fact that the world is changing rapidly. Piracy protection has declined as a top publisher concern. Google’s snippetization of the news universe is bothersome, but less of a central issue. The declining relative value of the desktop web — where NewsRight was primarily aimed — in the mobile age played a part. Non-industry-owned players like NewsCred (“The newsonomics of recycling journalism”) have been born, offering publishers revenue streams similar to those that NewsRight itself was intended to create.

Further, new ways to value news content — through all-access subscriptions and app-based delivery, content marketing, marketing services, innovative niching and more — have all emerged in the last couple of years.

Put a positive spin on it, and the U.S. newspaper industry is looking forward, rather than backward, as it seeks to find new ways to grow reader and ad revenues.

That’s all true. But it’s also instructive to consider the failure of NewsRight.

It’s easy to deride it as NewsWrong. It’s one of those enterprises that may just have been born under a bad sign. Instead of the stars converging, they collided.

NewsRight emerged as an Associated Press incubator project. If you recall the old AP News Registry and its “beacon,” NewsRight became its next iteration. It was intended to track news content as it traversed the web, detecting piracy along the way (“Remember the beacon”). It was an ambitious databasing project, at its peak taking in feeds from more than 900 news sites. The idea: create the largest database of current news content in the country, both categorized by topic and increasingly trackable as it was used (or misused) on the web.

AP initially incentivized member newspapers to contribute to the News Registry by discounting some of their annual fees. Then a bigger initiative emerged, first called the News Licensing Group (NLG). The strategy: harness the power of the growing registry to better monetize newspaper content through smart licensing.

NLG grew into a separate company, with AP contributing the registry’s intellectual property and becoming one of 29 partners. The other 28: U.S. daily newspaper companies and the leading European newspaper and magazine publisher Axel Springer. Those partners collectively committed more than $20 million — though they ended up spending only something more than half of that before locking up the premises.

Renamed NewsRight, it was an industry consortium, and here a truism applies: It’s tougher for a consortium — aimed as much at defense as at offense — to innovate and adjust quickly. Or, to put it in vaudevillian terms: Dying is easy — making decisions among 29 newspaper companies can be torture.

It formally launched just more than a year ago, in January 2012 (“NewsRight’s potential: New content packages, niche audiences, and revenue”), and the issues surfaced immediately. Let’s count the top three:

  • Its strategy was muddled. Was it primarily a content-protection play, bent on challenging piracy and misuse? Or was it a way to license one of the largest collections of categorized news content? Which way did it want to go? Instead of deciding between the two, it straddled both.
  • In May 2011, seven months before the launch, the board had picked TV veteran David Westin as its first CEO. Formerly head of ABC News, he seemed an odd fit from the beginning. A TV guy in a text world. An analog guy in a digital world. Then friction between Westin and those who had hired him — including then-AP CEO Tom Curley — only complicated the strategic indecision. Westin was let go in July, which, as I noted then, was the beginning of the end.
  • Publishers’ own interests were too tough to balance with the common good. Though both The New York Times Company and AP were owners, it was problematic to include feeds of the Times and AP in the main NewsRight “catalog.” The partners tried to find prices suitable for the high-value national content (including the Times and AP) and the somewhat lesser-valued regional content, but that exercise proved difficult, and antitrust law made execution harder still. Potential customers, of course, wanted the Times and AP as part of any deal, so dealmaking was hampered.

Further, all publishers take in steady revenue streams — collectively in the tens of millions — from enterprise licensees like LexisNexis, Factiva, and Thomson Reuters, as well as from the education and copyright markets. NewsRight’s owners (the newspaper companies) didn’t want NewsRight to get in the way of those revenue streams — and those were the only licensing streams that had proven lucrative over time.

Long story short, NewsRight was hobbled from the beginning, and in its brief life, was able to announce only two significant customers, Moreover and Cision, and several smaller ones.

How could it have been so difficult?

It’s understandable on one level. Publishers have seethed with rage as they’ve seen their substantial investment in newsrooms harvested — for nothing — by many aggregators from Google to the tens of thousands of websites that actually steal full-text content. Those sites all monetize the content with advertising, and, save a few licensing agreements (notably with AP itself), they share little in the way of ad revenue.

But rage — whether seething or public — isn’t a business model.

Anti-piracy, itself, has also proven not to be much of a business model. Witness the tribulations of Attributor, a content-tracking service (with AP among its investors) that used some pretty good technology to track pirated content. It couldn’t get the big ad providers to act on piracy, though. Last year, after pointing its business in the direction of book-industry digital rights management, it was sold for a meager $5.6 million to Digimarc.

So if anti-piracy wasn’t much of a business model, the question became: who would pay to license NewsRight’s feed of all that content, or subsets of it?

Given that owner-publishers wanted to protect their existing licensing streams, NewsRight turned its sights to an area that had not been well monetized: media monitoring.

Media monitoring is a storied field. When I did content syndication for Knight Ridder at the turn of the century, I was lucky enough to visit Burrelles (now BurrellesLuce) in Livingston, New Jersey. In addition to a great auto tour of Tony Soprano country, I got to visit the company in the midst of transition.

In one office, older men with actual green eyeshades meticulously clipped periodicals (with scissors), monitoring company mentions in the press. The company then took the clips and mailed them. That’s a business that sustained many a press agent for many a decade: “Look, see the press we got ya!”

In Burrelles’ back rooms, the new digital monitoring of press mentions was beginning to take form. Today, media monitoring is a good, if mature, industry segment, dominated by companies like Cision, BurrellesLuce, and Vocus, as social media monitoring and sentiment analysis both widen and complicate the field. Figure there are more than a hundred media monitoring companies of note.

Yet even within the relatively slim segment of the media monitoring space, NewsRight couldn’t get enough traction fast enough. Its ability to grow revenues there — and then to pivot into newer areas like mobile aggregation and content marketing — ran into the frustrations of the owner-newspapers. So they pulled the plug, spending less than they had actually committed. They decided to cut their losses, and move on.

Moving on meant making NewsRight’s last deal. The company — which has let go its staff of fewer than 10 employees — announced that it had “joined forces” with BurrellesLuce and Moreover. It’s a face-saver — and maybe more.

Those two companies will try to extend media monitoring contracts for newspaper companies. BurrellesLuce (handling licensing and aggregation) and Moreover (handling billing and tracking) will make content available under the NewsRight name. The partnership’s new CAP (Compliant Article Program) seeks to further contracting for digital media monitoring rights, a murky legal area. If CAP works, publishers, Moreover, and BurrellesLuce will share in the new revenue.

What about NewsRight’s anti-piracy mandate? That advocacy position transitions over to the Newspaper Association of America.

NAA is itself in the process of being restyled into a new industry hub (with its merger and more) under new CEO Caroline Little. “As both guardian and evangelist for the newspaper industry, the NAA feels a tremendous responsibility to protect original content generated by its members,” noted Little in the NewsRight release.

What about the 1,000-title content database, the former AP registry that had formed the nucleus of NewsRight? It’s in limbo, and isn’t part of the BurrellesLuce/Moreover turnover. Its categorization technology has had stumbles and overall the system needs an upgrade.

There’s a big irony here.

In 2013, we’re seeing more innovative use of news content than we have in a long time. From NewsCred’s innovative aggregation model to Flipboard’s DIY news magazines, from new content marketing initiatives at The New York Times, Washington Post, Buzzfeed, and Forbes to regional agency businesses like The Dallas Morning News’ Speakeasy, there are many new ways news content is being monetized.

We’re really in the midst of a new content re-evaluation. No one makes the mistake this time around of calling news content king, but its value is being reproven amid these fledgling strategies.

Maybe the advent of a NewsCred — which plainly better understood, and built better technology to value, a new kind of content aggregation — makes NewsRight redundant. That’s in a sense what the partners decided: let the staffs of BurrellesLuce and Moreover and the smarts of the NewsCreds make sense of whatever newer licensing markets are out there. Let them give would-be buyers what they want: a licensing process as simple as it can be. One-stop, one-click, or as close as you can manage to that. While the disbanding of NewsRight seems to take the news industry in the opposite, more atomized, direction, in one way it may be the third-party players who succeed here.

So is it that NewsRight is ending with a whimper, or maybe a sigh of relief? Both, plainly. It’s telling that no one at NewsRight was either willing or able to talk about the shutdown.

Thumbs down to content consortia. Thumbs up to letting the freer market of entrepreneurs make sense of the content landscape, with publishers getting paid something for what the companies still know how to do: produce highly valued content.


April 26 2012

14:00

LedeHub to Foster Open, Collaborative Journalism

I'm honored to be selected as one of the inaugural AP-Google Journalism and Technology Scholarship fellows for the 2012-13 academic year, and I'm excited to begin work on my project, LedeHub.

I believe in journalism's ability to better the world around us. To fully realize the potential of journalism in the digital age, we need to transform news into a dialogue between readers and reporters. LedeHub does just that, fostering collaborative, continuous and open journalism while incorporating elements of crowdsourcing to allow citizens, reporters and news organizations to come together in unprecedented ways.

LedeHub in Action

Here's a potential case study: "Alice" isn't a journalist, but she loves data and can spot the potential for a story amid the rows and columns of a CSV file. She comes across some interesting census data illustrating the rise of poverty in traditionally wealthy Chicagoland suburbs, but isn't quite sure how to use it, so she points her browser to www.ledehub.com. She creates a new story repository called "census-chicago-12," tags it under "Government Data," and commits the numbers.

Two days later, "Bob" -- a student journalist with a knack for data reporting -- is browsing the site and comes across Alice's repository. He forks it and commits a couple paragraphs of analysis. Alice sees Bob's changes and likes where he's headed, so she merges it back into her repository, and the two continue to collaborate. Alice works on data visualization, and Bob continues to do traditional reporting, voicing the story of middle-class families who can no longer afford to send their children to college.

A few days later, a news outlet like the Chicago Tribune sees "census-chicago-12" and flags it as a promising repository -- pulls it, edits, fact-checks and publishes the story, giving Alice and Bob their first bylines.
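Since LedeHub adapts Git's mechanics, Alice and Bob's collaboration maps almost one-to-one onto ordinary Git operations. Below is a minimal sketch of that flow in Python, driving plain git commands -- the repository name comes from the case study above, while the file names, commit messages and helper function are hypothetical stand-ins, not anything LedeHub itself defines:

```python
import subprocess
from pathlib import Path

def git(*args, cwd):
    """Run a git command in the given working directory, failing loudly."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

# Alice starts the story repository and commits the census numbers.
alice = Path("alice/census-chicago-12")
alice.mkdir(parents=True)
git("init", cwd=alice)
git("config", "user.name", "Alice", cwd=alice)
git("config", "user.email", "alice@example.com", cwd=alice)
(alice / "poverty.csv").write_text("suburb,poverty_rate\nNaperville,0.061\n")
git("add", "poverty.csv", cwd=alice)
git("commit", "-m", "Census data: poverty in Chicagoland suburbs", cwd=alice)

# Bob "forks" the repository -- in bare Git terms, he clones it --
# and commits a couple paragraphs of analysis to his own copy.
bob = Path("bob/census-chicago-12")
git("clone", str(alice), str(bob), cwd=".")
git("config", "user.name", "Bob", cwd=bob)
git("config", "user.email", "bob@example.com", cwd=bob)
(bob / "analysis.md").write_text("Poverty is rising fastest in suburbs where...\n")
git("add", "analysis.md", cwd=bob)
git("commit", "-m", "First pass at analysis of the census numbers", cwd=bob)

# Alice likes where Bob is headed, so she merges his work back in.
# Pulling the remote HEAD avoids assuming a particular branch name.
git("pull", str(bob.resolve()), "HEAD", cwd=alice)
```

A hosted "fork" is essentially that clone plus a server-side copy; LedeHub's merge step would wrap the final pull in something closer to a reviewable request.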

As you can see, LedeHub re-imagines the current reporting and writing workflow while underscoring the living nature of articles. By representing stories as "repositories" -- with the ability to edit, update, commit and revert changes over time -- the dynamic nature of news is effectively captured.

Fostering Open-Source Journalism

GitHub and Google Code are social coding platforms that have done wonders for the open-source community. I'd like to see similar openness in the journalism industry.

My proposal for LedeHub is to adapt the tenets of Git -- a distributed version control system -- and appropriate its functionality as it applies to the processes of journalism. I will implement a web application layer on top of this core functionality to build a tool for social reporting, writing and coding in the open. This affords multiple use cases for LedeHub, as illustrated in the case study I described above -- users can start new stories, or search for and contribute to stories already started. I'd like to mirror the basic structure of GitHub, but re-appropriate the front end to cater to the news industry and be more reporter-focused, not code-driven. That said, here's a screenshot of the upcoming LedeHub repository on GitHub (to give you a general idea of what the LedeHub dashboard might look like):

[Screenshot: the LedeHub repository dashboard on GitHub]

Each story repository may contain text, data, images or code. The GitHub actions of committing (adding changes), forking (diverging story repositories to allow for deeper collaboration and account for potential overlap) and cloning will remain analogous in LedeHub. Repositories will be categorized according to news "topics" or "areas" like education or politics. Users -- from citizens to reporters or coders -- will have the ability to "watch" different story repositories they are interested in and receive updates when changes to that story are made. Users can also comment on different "commits" for a story, offering their input or suggestions for improvement. GitHub also offers a "company" option, which allows multiple users to be added to an organization; I would like to mimic that feature for news outlets, along with Google Code's "issues" feature.
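One hypothetical way to picture the entities described above -- story repositories holding mixed content, commits that can carry comments, watchers, and news-outlet organizations -- is as a small data model. None of these class or field names come from LedeHub; this is just an illustration of how the pieces would relate:

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    author: str                     # a citizen, reporter, or coder
    message: str                    # e.g. "Added Q3 census numbers"
    files: dict[str, bytes]         # text, data, images, or code
    comments: list[str] = field(default_factory=list)  # feedback on this change

@dataclass
class StoryRepository:
    name: str                       # e.g. "census-chicago-12"
    topic: str                      # e.g. "Government Data", "education"
    history: list[Commit] = field(default_factory=list)
    watchers: set[str] = field(default_factory=set)    # users notified of changes
    forked_from: "StoryRepository | None" = None       # diverged story lines

@dataclass
class Organization:
    """A news outlet account, like GitHub's 'company' option."""
    name: str                       # e.g. "Chicago Tribune"
    members: set[str] = field(default_factory=set)
    issues: list[str] = field(default_factory=list)    # à la Google Code's issues
```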

Next Steps

I recognize that the scope of my project is ambitious, and my current plan is to segment implementation into iterations -- to build an initial prototype to test within one publication and expand from there.

Journalism needs to become more open, like the web. Information should be shared. The collaboration between the New York Times and the Guardian over WikiLeaks data was inspiring: two "competing" organizations sharing confidential information for publication. With my project, LedeHub, I hope to foster similar transparency and collaboration.

So, that's the proposal. There's still a lot to figure out. For example, what's the best way to motivate users to collaborate? What types of data can be committed? What copyright issues need to be considered? Should there be compensation involved? Fact-checking? Sound off. I'd love to hear your thoughts.


Katie Zhu is a junior at Northwestern University, studying journalism and computer science, and is particularly interested in human-computer interaction, data visualization and interaction design. She has previously interned at GOOD in Los Angeles, where she helped build GOOD's mobile website. She continues development work part-time throughout the school year, and enjoys designing and building products at the intersection of news and technology. She was selected as a finalist in the Knight-Mozilla News Technology Partnership in 2011.


January 23 2012

14:50

Daily Must Reads, Jan. 23, 2012

The best stories across the web on media and technology, curated by Lily Leung


1. AP CEO Tom Curley, who led company into digital space, to retire (Poynter)

2. Twitter reacts to death of Joe Paterno (Mashable)

3. White House joins Google+ (Los Angeles Times)

4. Apple enters the $8 billion industry of K-12 textbooks (paidcontent.org)

5. Tablet and e-reader sales soar (New York Times)

6. Twitter's Jack Dorsey talks social, SOPA and Asia (All Things D)



January 18 2012

15:59

Daily Must Reads, Jan. 18, 2012

The best stories across the web on media and technology, curated by Lily Leung


1. Google joins anti-SOPA campaign (AdAge)

2. Inside Jerry Yang's departure from Yahoo (All Things D)

3. AP tweaks social media rules on incorrect tweets (Poynter)

4. Social media ROI metrics remain 'chaotic' (Online Media Daily)

5. New York Post launches Kindle Fire app (FishbowlNY)

6. Fox News puts Twitter hashtags to work during GOP debate (Lost Remote)


January 10 2012

15:20

The Top 10 Data-Mining Links of 2011

Overview is a project to create an open-source document-mining system for investigative journalists and other curious people. We've written before about the goals of the project, and we're developing some new technology, but mostly we're stealing it from other fields.


The following are some of the best ideas we saw in 2011, the data-mining work that we found most inspirational. Many of these links are educational resources for learning about specific technologies. Others are just amazing, mind-bending work.

1. What do your connections say about you? A lot. It is possible to accurately predict your political orientation solely on the basis of your network on Twitter. You can also work out gender and other things from public information.

2. Free textbooks from Stanford University. "Introduction to Information Retrieval" teaches you how a search engine works, in great detail. "Mining Massive Data Sets" covers a variety of big-data principles that apply to different types of information.

3. We're not above having a list of lists. Here's the Data Mining Blog's top 5 articles. Most of these are foundational, covering basic philosophy and technique such as choosing variables, finding clusters, and deciding what you're looking for.

4. The MINE technique looks for patterns between hundreds or thousands of variables -- say, patterns of gene expression inside a single cell. It's very general, and finds not only individual relationships but networks of cause and effect. Here's a nifty video, here's the original paper, and here's one statistician's review.

5. This is one of those papers that really changed the way I look at things. How do we know when a data visualization shows us something that is "actually there," as opposed to an artifact of the numbers? "Graphical Inference for Infovis" provides one excellent answer, based on a clever analogy with numerical statistics.

6. Lots of text-mining work uses "clustering" or "classification" techniques to sort documents into topics. But doesn't a categorization algorithm impose its own preconceptions? This is a deep issue, which you might think of as "framing" in code. To explore this question Justin Grimmer and Gary King went meta with a system that visualizes all possible categorizations of a document set, and how they relate.

7. A few years ago Google showed that the number of searches for "flu" was a great predictor of the actual number of outbreaks in a given location -- faster and more specific than the Centers for Disease Control's own surveillance data. The team has now expanded the technique into Google Correlate, which instantly scans through petabytes of data to find search terms that follow any user-supplied time series (see the toy sketch after this list). Here's New Scientist taking it for a test drive.


8. Not content with free professional textbooks, Stanford has created two free online courses for machine learning and natural language processing. Both are live-streamed lecture series taught by experts, with homework. Learning these intricate technologies has never been easier.

9. Lots of people have speculated about the role of social media in protest movements. A team of researchers looked at the data, analyzing a huge set of tweets from the "May 15" (15-M) protests in Spain last year. How do protests spread from social media? Now we have at least one solid answer.

10. And the craziest data-mining link we ran across in 2011: IBM's DeepQA project, which beat human Jeopardy champions. This project looks into an unstructured database to correctly answer about 80% of all general questions posed to it, in just a few seconds. Here's a TED talk, and here's the technical paper that explains how it works. I can't tell you how badly I want one of these in the newsroom. If enough journalist hackers build on each other's work, maybe one day ...
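As promised in item 7, here is the core idea behind Correlate at toy scale: given a user-supplied time series, rank candidate search-term series by how closely they track it. All numbers are invented, and Google of course runs this over petabytes of query logs rather than three hand-typed series:

```python
import numpy as np

# A user-supplied weekly series, e.g. reported flu cases (invented numbers).
flu_cases = np.array([12, 18, 25, 40, 33, 22, 15, 10])

# Candidate search-term series pulled from query logs (also invented).
search_terms = {
    "flu symptoms":   np.array([10, 20, 27, 38, 30, 20, 13, 9]),
    "cough medicine": np.array([11, 15, 24, 35, 34, 25, 16, 12]),
    "ski resorts":    np.array([30, 28, 20, 12, 15, 24, 29, 33]),
}

# Rank terms by Pearson correlation with the target series.
ranked = sorted(search_terms.items(),
                key=lambda kv: np.corrcoef(flu_cases, kv[1])[0, 1],
                reverse=True)

for term, series in ranked:
    r = np.corrcoef(flu_cases, series)[0, 1]
    print(f"{term:15} r = {r:+.3f}")
```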

Happy data mining! We'll be releasing our own prototype document-mining system, and the source, at the NICAR conference next month. If these are the sorts of algorithms you like to play with, we're also hiring programmers who want to bring these sorts of advanced techniques within everyone's reach.

January 06 2012

17:26

Daily Must Reads, Jan. 6, 2012

The best stories across the web on media and technology, curated by Nathan Gibbs


1. BitTorrent takes on Dropbox with personal file sharing (GigaOM)

2. Why ONA opposes #SOPA (Online News Association)

3. Europe's largest free WiFi zone set for London (BBC News)

4. Matt Alexander: The e-reader, as we know it, is doomed (The Loop)

5. How Google beat AP with Iowa caucus results (and why it matters) (Poynter)

6. News orgs form NewsRight to protect digital rights, licensing (MediaPost)

September 15 2011

17:51

AP managing editors meeting: news executives say mobile delivery is the future of news

Seattle Times :: News executives opening the Associated Press Managing Editors meeting in Denver on Wednesday said mobile news delivery offers newspapers and other media companies a good opportunity to make money in the digital world. Tom Curley, The Associated Press' president and CEO, said media companies lost revenue opportunities with the Internet but have a chance to change that with mobile.

Continue to read Kristen Wyatt, seattletimes.nwsource.com

July 30 2011

04:57

A portrait: David Minthorn, grammar expert for the Associated Press

Washington Post :: In its modern, digital forms, writing has become something like an untended garden. It’s overgrown with text-speak and crawling with invasive species like tweets and dashed-off e-mails. OMG, it’s a mess. So think of David Minthorn as a linguistic gardener, doggedly cultivating this weedy patch in the hope of restoring some order and maybe coaxing something beautiful out of it. Minthorn’s mission is the maintenance of English grammar, the policing of punctuation and the enforcement of a consistent written style.

A portrait - Continue to read Paul Farhi, www.washingtonpost.com

March 15 2011

17:22

How Social Media, Internet Changed Experience of Japan Disaster

The reports and pictures of the devastation from the earthquake and tsunami in Japan last week reminded me of reporting on the earthquake that leveled Japan's port city of Kobe in 1995.

On a personal level, I am praying for the people in a country I have come to see as a second home.

As a media observer, what struck me this time was how rich and multifaceted the information flow was. In 1995, I worked in the AP bureau in Tokyo, trying to understand what I could from Japanese broadcast news reports. We were sometimes able to reach someone, official or not, in the Kobe region via phone for a quick interview as the death toll rose, eventually reaching more than 6,400.

We, of course, covered the major news conferences held by agencies and government offices. For information from the region, I relied largely on the reporters and photographers (including me, three weeks and then six months after the quake) who were dispatched to the scene. Listening to and watching the broadcast channels and the other wire services was an overwhelming and chaotic but -- by today's standards -- thin experience.

Multi-platform Experience Today

The past few days, sitting at home and in my office in New York, it felt like I had more information and contacts at my fingertips than I did then as a reporter in Japan. The morning I learned of the quake, I had a TV connected to digital cable, an iPad, a Blackberry and a web-connected computer in my living room.

I flipped among ABC, NBC, MSNBC, Fox, CNN, and BBC on TV. An iPad app gave me video of quake alerts in English and other languages from Japanese national broadcaster NHK. I dipped into the Twitter and Facebook streams.

A photo slideshow on the front page of the New York Times only a few hours after the quake gave a sense of not just the depth of destruction but also the geographic breadth. The towns being mentioned in captions spanned multiple prefectures (similar to states).

I was able to watch Japanese TV network TBS live via a Ustream link I was referred to in a "Japan Quake" page assembled by my New York-based friend and media colleague Sree Sreenivasan.


Huge Amounts of Video

The sheer amount of video -- from a country that may have more cameras and camera-equipped cell phones per capita than any other -- was so much greater than anything in the TV-only era. Even on TV, I saw constantly updated videos across the various channels, rather than the same loop of packaged videos used in an earlier era.

TV anchors such as Christiane Amanpour, Shepard Smith and Anderson Cooper all are doing shows live from Japan. If it seemed crass that some American networks quickly moved to a branded logo and dramatic music for their quake coverage, it was also intriguing how they now used reports from people talking via webcams.

One Westerner who spoke English with an American accent sat in his Japanese apartment and showed the cup of noodles and the Dole pineapple juice he had had for dinner 11 hours earlier and said he didn't know what else he'd be able to eat.

The technology also allowed everyone to see video I would have been able to see only as a news editor back then.

On Facebook, my stepmother in California shared a six-minute video from Asahi TV that I'd seen clips of on TV. It showed water rushing through the streets of one town. With the natural sound, it had that much more impact than with newspeople talking over it. The surprisingly calm expressions on the faces of bystanders watching from high ground puzzled both my stepmother and me, and were something I didn't see in the multiple TV clips pulled from this video.

Soon after the quake, I got a hold of one Tokyo resident, one of my best friends, via a Skype connection to his cell phone in Osaka, where he was traveling on business. He said people there had felt the quake but that life was basically unchanged in Japan's second-largest city.

Updates on Facebook, Twitter


I confirmed that another close friend, an American who is a highly skilled translator in Tokyo, was fine by reading her Facebook wall. There, she also posted constant updates that told all her "friends" the latest reports she was seeing and hearing, as well as her feelings and what she could see with her own eyes. I could see that yet another friend was OK by reading her bylines in AP reports.

A decent amount of the Twitter stream, especially in Japanese, was not very useful in an informational sense; there were exclamations of relief or horror, or strange exclamations that seemed almost senseless. But there were also referrals to data, reports, information I could tap into quickly.

I learned, and was able to confirm, that this was either the 5th or 6th largest quake in recorded history, that a nuclear plant was having trouble with its coolant, that 200-300 people had died in one area, that a bunch of new cars were washed from a port.


Satellite imagery combined with Google Earth technology let many news organizations show overhead images of how towns looked before the tsunami, then after they had been flooded.

Shared Details Could be Gut-Wrenching

Sometimes the little details were the most heart-wrenching, such as when a broadcaster droned the numbers of dead town by town, or when my friend on Facebook told us of the man who was riding his bicycle around with a note pinned on it about his missing wife. That report ran on NHK via CNN.

The combination of reports provided details that gave a sense of daily life in the affected regions that in the pre-web era I never would have had living overseas, no matter how good a correspondent's reports.

By watching the live stream of TBS on Monday, for example, I learned that gas was being rationed at one station where motorists had to wait 30 minutes to get in line; heard a woman in a store complain she'd been looking for batteries but couldn't find them anywhere; and heard another express relief that one store's shelves had some instant ramen noodles. I learned details of how planned blackouts instituted to conserve electricity would take effect as a stream of related tweets moved by on the side.

Some things were much the same as in 1995: the weak pronouncements of government officials who seemed reluctant to say anything meaningful; the frustration of victims angry at not being told what to do or where to go; the sense of foreboding as the death count continued to rise.

I knew from my Kobe experience that the couple hundred pronounced dead in the initial reports would grow by orders of magnitude. I had seen Japanese reports of entire neighborhoods, even villages, that were "missing" after the mid-afternoon tsunami.

This time the feeling of being connected was much stronger, even though I was thousands rather than hundreds of miles away.

Some connections were possible this time only because of technology. I was able to observe New Jersey-based relatives of my Tokyo-based translator friend express love and relief that she and her family in Japan were safe. My friends in the U.S. and elsewhere used Facebook, Twitter and text messages to ask me about my loved ones in Japan, which let me reply in a way that was much easier to handle than in the previous era.

The media and communication technology of course do not change the scope of the disaster but do change the way we are able to experience and share it.

Resources like the Google People Finder in Japanese and English, links to aid sites, like the one on this WNYC.org page, and some social media outreach may have even changed things in a more fundamental way.

I do hope the pain and struggles of people affected are mitigated by knowing their plight can be seen and understood in a richer way, and by help they may receive more easily because of new technologies.

A former managing editor at ABCNews.com and an MBA, Dorian Benkoil handles marketing and sales strategies for MediaShift. He is SVP at Teeming Media, a strategic media consultancy focused on attracting, engaging, and activating communities through digital media. He tweets at @dbenk.


March 08 2011

17:09

Using Wire Content for Topic Pages without an SEO ding?

Is there a way to pull in AP or Reuters wire content in a way that won't get dinged by Google's SEO rules? We'd like to pull in AP content, for example, and then place related links, multimedia, etc. around it, making it a more valuable page, but this seems to invariably run into the duplicate content rule. If there's enough value-added content in there as well, does that negate the problem? Any recommended resources for learning more?

Tags: seo google ap

November 18 2010

18:06

Google News Meta Tags Fail to Give Credit Where Credit Is Due

Far be it from me to question the brilliance of Google, but in the case of its new news meta-tagging scheme, I'm struggling to work out why it is brilliant or how it will be successful.

First, we should applaud the sentiment. Most of us would agree that it is a Good Thing that we should be able to distinguish between syndicated and non-syndicated content, and that we should be able to link back to original sources. So it is important to recognize that both of these are -- in theory -- important steps forward, from the perspective of both the news industry and the public.

But there are a number of problems with the meta tag scheme that Google proposes.

Problems With Google's Approach

Meta tags are clunky and likely to be gamed. They are clunky because they cover the whole page, not just the article. As such, if the page contains more than one article or, more likely, contains lots of other content besides the article (e.g. links, promos, ads), the meta tag will not distinguish between them. More important is that meta tags are, traditionally, what many people have used to game the web. Put in lots of meta tags about your content, the theory goes, and you will get bumped up the search engine results. Rather than address this problem, the new Google system is likely to make it worse, since publishers will assume there is material value in adding the "original source" meta tag.
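For concreteness, here is a small Python sketch of what the scheme looks like on a page and how the tags would be read. The two tag names, syndication-source and original-source, are the ones Google announced; the page and URLs are invented:

```python
from html.parser import HTMLParser

# A page head using the two tags from Google's scheme (URLs invented).
PAGE = """
<html><head>
  <meta name="syndication-source" content="http://wire.example.com/story-1234">
  <meta name="original-source" content="http://paper.example.com/scoop">
</head><body>...two articles, plus ads, promos, and links...</body></html>
"""

class MetaSourceParser(HTMLParser):
    """Collect Google's news-source meta tags from a page."""
    def __init__(self):
        super().__init__()
        self.sources = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name") in ("syndication-source", "original-source"):
                self.sources[a["name"]] = a.get("content")

parser = MetaSourceParser()
parser.feed(PAGE)
print(parser.sources)
# Note the limitation discussed above: these values apply to the whole
# page, no matter how many articles or promos the body contains.
```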

Though there is a clear value in being able to identify sources, distinguishing between an "original source" as opposed to a source is fraught with complications. This is something that those of us working on hNews, a microformat for news, have found when talking with news organizations. For example, if a journalist attends a press conference then writes up that press conference, is that the original source? Or is it the press release from the conference with a transcript of what was said? Or is it the report written by another journalist in the room published the following day? Google appears to suggest they could all be "original sources"; if this extends too far then it is hard to see what use it is.

Even when there is an obvious original source, like a scientific paper, news organizations rarely link back to it (even though it's easy to use a hyperlink). The BBC -- which is generally more willing to source than most -- has historically tended to link to the front page of a scientific publication or website rather than to the scientific paper itself (something the Corporation has sought to address in its more recent editorial guidelines). It is not even clear, in the Google meta-tagging scheme, whether a scientific paper is an original source, or the news article based on it is an original source.

And what about original additions to existing news stories? As Tom Krazit wrote on CNET:

The notion of 'original source' doesn't take into account incremental advances in news reporting, such as when one publication advances a story originally broken by another publication with new important details. In other words, if one publication broke the news of Prince William's engagement while another (hypothetically) later revealed exactly how he proposed, who is the "original source" for stories related to "Prince William engagement," a hot search term on Google today?

Differences with hNews

Something else Google's scheme does not acknowledge is that there are already methodologies out there that do much of what it is proposing, and they are in widespread use (ironic, given Google's blog post title "Credit where credit is due"). For example, our News Challenge-funded project, hNews, already addresses the question of syndicated vs. non-syndicated content, and in a much simpler and more effective way. Google's meta tags do not clash with hNews (both conventions can be used together), but neither do they build on its elements or work in concert with them.

One of the key elements of hNews is "source-org," the source organization from which the article came. Not only does this go part-way toward the second, "original source" tag Google suggests, it also cleverly avoids the difficult question of how to credit a news article that may be based on wire copy but has been adapted since -- a frequent occurrence in journalism. The Google syndication method does not capture this important difference. hNews is already the standard used by the largest American syndicator of content, the Associated Press, and is used by more than 500 professional U.S. news organizations.
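By contrast, hNews hangs its markup on the article itself rather than the page head, which is what makes per-article credit possible. Here is a sketch of pulling source-org out of an hNews entry -- treat the exact markup as illustrative; the spec expresses source-org as an embedded hCard:

```python
from html.parser import HTMLParser

# An article marked up with hNews (illustrative markup, not a real story).
ARTICLE = """
<div class="hnews hentry">
  <h1 class="entry-title">Storm floods downtown</h1>
  <div class="source-org vcard"><span class="fn org">Associated Press</span></div>
  <div class="entry-content">...article text...</div>
</div>
"""

class SourceOrgParser(HTMLParser):
    """Pull the source organization out of an hNews entry."""
    def __init__(self):
        super().__init__()
        self.depth = 0        # nesting level inside the source-org element
        self.source_org = ""

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "source-org" in classes or self.depth:
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.source_org += data.strip()

p = SourceOrgParser()
p.feed(ARTICLE)
print(p.source_org)   # -> "Associated Press"
# Because the classes live on the article, not the page head, a story
# adapted from wire copy can carry its own per-article attribution.
```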

It's also not clear if Google has thought about how this will fit into the workflow of journalists. Every journalist we spoke to when developing hNews said they did not want to have to do things that would add time and effort to what they already do to gather, write up, edit and publish a story. It was partly for this reason that hNews was made easy to integrate into publishing systems; it's also why hNews marks information up automatically.

Finally, the new Google tags only give certain aspects of credit. They give credit to the news agency and the original source but not to the author, or to when the piece was first published, or how it was changed and updated. As such, they are a poor cousin to methodologies like hNews and linked data/RDFa.

Ways to Improve

In theory Google's initiative could be, as this post started by saying, a good thing. But there are a number of things Google should do if it is serious about encouraging better sourcing and wants to create a system that works and is sustainable. It should:

  • Work out how to link its scheme to existing methodologies -- not just hNews but linked data and other meta tagging methods.
  • Start a dialogue with news organizations about sourcing information in a more consistent and helpful way.
  • Clarify what it means by original source and how it will deal with different types of sources.
  • Explain how it will prevent its meta-tagging system from being misused such that the term "original source" becomes useless.
  • Use its enormous power to encourage news organizations to include sources, authors, etc. by ranking properly marked-up news items over plain-text ones.

It is not clear whether the Google scheme -- as currently designed -- is more focused on helping Google with some of its own problems sorting news or with nurturing a broader ecology of good practice.

One cheer for intention, none yet for collaboration or execution.

November 11 2010

16:00

The Newsonomics of journalist headcounts

[Each week, our friend Ken Doctor — author of Newsonomics and longtime watcher of the business side of digital news — writes about the economics of the news business for the Lab.]

We try to make sense of how much we’ve lost and how much we’ve gained through journalism’s massive upheaval. It’s a dizzying picture; our almost universal access to news and the ability of any writer to be her own publisher give the appearance of lots more journalism being available. Simultaneously, the shrinking number of paid professionals practicing the craft has certainly lowered the output of traditional media.

It’s a paradox that we’re in the midst of wrestling with. We’re in the experimental phase of figuring out how much journalists, inside and out of branded media, are producing — and where the biggest gaps are. We know that numbers matter, but we don’t yet know how they play with that odd measure that no metrics can yet definitively tell us: quality.

I’ve used the number of 1,000,000 as a rough approximation of how many newspaper stories would go unwritten in 2010, as compared to 2005, based on staffing reductions. When I brought that up on a panel in New York City in January, fellow panelist Jeff Jarvis asked: “But how many of those million stories do we need? How many are duplicated?” Good questions, and ones for which, of course, there are no definitive answers. We know that local communities are getting less branded news; unevenly, more blog-based news; and much more commentary, some of it produced by experienced journalists. There’s no equivalency between old and new, but we can get some comparative numbers to give us some guidelines.

For now, let’s look mainly at text-based media, though we’ll include public radio here, as it makes profound moves to digital-first and text. (Broadcast and cable news, of course, are a significant part of the news diet. U.S. Labor Department numbers show more than 30,000 people employed in the production of broadcast news, but it’s tough to divine how much of that effort so far has had an impact on text-based news. National broadcast numbers aren’t easily found, though we know there are more than 3,500 people (only a percentage of them in editorial) working in news divisions of the Big Four, NBC, ABC, Fox, and CBS — a total that’s dropped more than 25 percent in recent years.)

Let’s start our look at text-based media with the big dog: daily newspapers. ASNE’s annual count put the national daily newsroom number at 41,500 in 2010, down from 56,400 in 2001 (and 56,900 in 1990). Those numbers are approximations, based on partial surveys, and they are the best we have for the daily industry. So, let’s use 14,000 as the number of daily newsroom jobs gone in a decade. We don’t have numbers for community weekly newspapers, with no census done by the National Newspaper Association or by most state press associations. A good estimate looks to be in the 8,000-10,000 range for the 2,000 or so weeklies in the NNA membership, plus lots of stringers.

Importantly, wire services aren’t included in the ASNE numbers. Put together the Associated Press, Reuters, and Bloomberg (though some of those workforces are worldwide, not U.S.-based) and you’ve got about 7,500 editorial staffers.

Let’s look at some areas that are growing, starting with public radio. Public radio, on the road to becoming public media, has produced a steady drumbeat of news about its expansion lately (“The Newsonomics of public radio argonauts,” “Public Radio $100 Million Plan: 100 Journalist Per City”), as Impact of Government, Project Argo, and Local Journalism Centers add several hundred more journalists across the country. But how many journalists work in public broadcasting? Try 3,224, a number recently counted in a census conducted for the Corporation for Public Broadcasting. That’s “professional journalists,” about 80% of them full-time. About 2,500 of them are in public radio, the rest in public TV. Should all the announced funding programs come to fruition, the number could rise to more than 4,000 by the end of 2011.

Let’s look at another kind of emerging journalism: the nonprofits, categorized as the most interesting and credible nonprofit online publishers by the Investigative Reporting Workshop’s iLab site. That recent census includes 60 sites, the largest of which include Mother Jones magazine, The Christian Science Monitor, ProPublica, the Center for Investigative Reporting, and the Center for Public Integrity. Also included are such newsworthy sites as Texas Tribune, Bay Citizen, Voice of San Diego, the New Haven Independent and the St. Louis Beacon. Their total full-time employment: 658. Additionally, there are high dozens, if not hundreds, of journalists operating their own hyperlocal blog sites around the country. Add in other for-profit start-ups, from Politico to Huffington Post to GlobalPost to TBD to Patch to a revived National Journal, and the journalists hired by Yahoo, MSN and AOL (beyond Patch), and you’ve got a number around another thousand.

How about the alternative press? Though not often cited in online news, they’re improving their digital game, though unevenly. Though AAN — the Association of Alternative Newsweeklies — hasn’t done a formal census, we can get an educated guess from Mark Zusman, former president of AAN and long-time editor of Portland’s Willamette Week, winner of a 2005 Pulitzer for investigative reporting: “The 132 papers together employ something in the range of 800 edit employees, and that’s probably down 20 or 25 percent from five years ago.”

Add in the business press, outside of daily newspapers. American City Business Journals itself employs about 600 journalists, spread over the USA. Figure that from the now-veteran Marketwatch to the upstart Business Insider and numerous other business news websites, we again approach 1,000 journalists here.

What about sports journalists working outside of dailies? ESPN alone probably can count somewhere between 500 and 1000, of its total 5,000-plus workforce. Comcast is hiring by the dozens and publications like Sporting News are ramping up as well (“The Newsonomics of sports avidity“). So, we’re on the way to a thousand.

How about newsmagazine journalists? Figure about 500, though that number seems to slip by the day, as U.S. News finally puts its print to bed.
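Before summing up, it helps to see the arithmetic in one place. Here is a quick tally of the estimates above, using rough midpoints where the text gives a range (the groupings are mine, not an official census):

```python
# Rough midpoints of the full-time headcounts discussed above.
headcounts = {
    "daily newspapers (ASNE, 2010)":          41_500,
    "community weeklies (estimate)":           9_000,  # 8,000-10,000 range
    "wire services (AP, Reuters, Bloomberg)":  7_500,
    "public broadcasting (CPB census)":        3_224,
    "nonprofit online publishers (iLab)":        658,
    "for-profit startups and portals":         1_000,
    "alternative weeklies (AAN)":                800,
    "business press outside dailies":          1_000,
    "sports outside dailies":                  1_000,
    "newsmagazines":                             500,
}

total = sum(headcounts.values())
print(f"total: {total:,}")   # ~66,000 -- "something more than 65,000" below

# The faster-growing, non-daily areas called out in the conclusions:
growing = ["public broadcasting (CPB census)",
           "nonprofit online publishers (iLab)",
           "for-profit startups and portals",
           "business press outside dailies",
           "sports outside dailies"]
print(f"growth areas: {sum(headcounts[k] for k in growing):,}")  # ~6,900
```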

So let’s look broadly at those numbers. Count them all up — and undoubtedly, numerous ones are missing — and you’ve got something more than 65,000 journalists, working for brands of one kind or another. What interim conclusions can we draw?

  • Daily newspaper employment is still the big dog, responsible for a little less than two-thirds of the journalistic output, though down from levels of 80 percent or more. When someone tells you that the loss of newspaper reporting isn’t a big deal, don’t believe it. While lots of new jobs are being created, that 14,000 loss in a decade is still a big number. We’re still not close to replacing that number of jobs, even if some of the journalism being created outside of dailies is better than some of what used to be created within them.
  • If we look at areas growing fastest (public radio’s push, online-only growth, niche growth in business and sports), we see a number approaching 7,500. That’s a little less than 20 percent of daily newspaper totals, but a number far higher than most people would believe.
  • When we define journalism, we have to define it — and count it — far more widely than we have. The ASNE number has long been the annual, depressing marker of what’s lost — a necrology for the business as we knew it — not suggesting what’s being gained. An index of journalism employment overall gives us a truer and more nuanced picture.
  • Full-time equivalent counts only go so far in a pro-am world, where the machines of Demand, Seed, Associated Content, Helium and the like harness all kinds of content, some of it from well-pedigreed reporters. While all these operations raise lots of questions on pay, value and quality, they are part of the mix going forward.

In a sense, technologies and growing audiences have built out a huge capacity for news, and that new capacity is only now being filled in. It’s a Sim City of journalism, with population trends in upheaval and the urban map sure to look much different by 2015.

Photo by Steve Crane used under a Creative Commons license.

November 04 2010

14:00

The Newsonomics of Kindle Singles

[Each week, our friend Ken Doctor — author of Newsonomics and longtime watcher of the business side of digital news — writes about the economics of the news business for the Lab.]

Maybe the newspaper is like the old LP — you know, as in “Long Play.” It may be a 33 1/3, though it seems like it came out of the age of 78s sometimes, a relic of the post-Victorian Victrola age. It is what it is, a wonderful compendium of one day in the life (of a nation, a city, a village), a one-size-fits-all product, the same singular product delivered to mass volumes of readers.

In the short history of Internet disintermediation and disruption of the traditional news business, we’ve heard endless debate about “the content and the container,” as people have tried to peel back the difference between the physical form of the newspaper — its container — and what it had in it. It’s been a tough mindset change, and the many disruptors of the world — the Googles, the Newsers, and the Huffington Posts, for instance — have expertly picked apart the confusions and the potentials new technologies have made possible. The news business has been atomized, not by Large Hadron Colliders, but by simple digital technology that has blown up the container and treats each article as a digestible unit. Aggregate those digestible units with some scheme that makes sense to readers (Google: news search; Newser: smart selection and précis; HuffPo: aggregation, personality and passion), and you’ve got a new business, and one with a very low cost basis.

None of this is a revelation. What is new, and why I’m rethinking that context, is the advent of Kindle Singles. The Lab covered Amazon’s announcement of less-than-a-book, more-than-a-story Kindle Singles out of the chute a couple of weeks ago. Josh Benton described how the new form could well serve as a new package, a new container, for longer, high-quality investigative pieces, those now being well produced in quantity by ProPublica, the Center for Investigative Reporting (and its California Watch), and the Center for Public Integrity. That’s a great potential usage, I think.

In fact, Kindle Singles may open the door even further to wider news business application, for news companies — old and new, publicly funded and profit-seeking, text-based and video-oriented. It takes the old 78s and 33 1/3s, and opens a world of 45s, mixes, and infinite remixes. It says: You know what a book is, right? Think again. It can also say: You know what a newspaper is, right? Think again. While the Kindle Singles notion itself seems to have its limits — it’s text and fixed in time, not updatable on the fly — it springs loose the wider idea of publishing all kinds of new news and newsy content in new containers. Amazon is trying to define this strange new middle, with the Kindle Singles nomenclature, while some have used the term “chapbook” to describe it. We’ve got to wonder what Apple is thinking in response — what’s an app in Kindle Singles world? What’s a Kindle Single in an apps world? It’s not a book, an article, a newspaper, or a magazine, but something new. We now get to define that something new, both in name, but most importantly in content possibility.

What it may be for news organizations is a variety of news-on-demand. Today, we could be reading tailored and segmented sections on the election, from red and blue perspectives, from historical perspectives, from numerical perspectives. Today, we in the Bay Area could get not just a single triumphant San Francisco Giants celebratory section, but our choice of several, one providing San Francisco Giants history, one providing New York Giants history, one looking at the players themselves; the list goes on and on. More mundane, and more evergreen commercial topics? Job-hunting, job-finding, job-prep guides, tailored to skills, ages, and wants? Neighborhood profile sections for those seeking new housing (pick one or several neighborhoods, some with data, some with resident views, others tapping into neighborhood blogs). It’s endless special sections, on demand, some ad-supported, some not; a marketer’s dream. Some are priced high; some are priced low; some are free and become great lead generators for other digital reader products.

A few recent initiatives in the news business lend themselves to Singles thinking. Take Politico’s newly announced topical e-newsletters. Take Rupert Murdoch’s notion of a paid-content portal, Alesia, which had within it the idea of mixing and matching content differently, until its plug was recently pulled. Take AP’s new rights consortium, a venture that could build on this approach. Again, endless permutations are possible.

Who is going to come up with the ideas for the content? Well, editors themselves should have their shot, though one-size-fits-all thinking has circumscribed the imagination of too many. Still, there are hundreds of editors (and reporters and designers and copy editors), still in traditional ranks or now employed outside them, capable of creating new audience-pleasing packages. Some will work; some won’t. Experiment, and fail quickly. The biggest potential, though? Letting readers take open-sourced news content and create packages themselves, giving them a small revenue share on sales. (Both the Guardian and the New York Times, among others, have opened themselves up for such potential usage.) Tapping audiences to serve audiences, to mix and match content, makes a lot of sense.

Why might this work when various little experiments have failed to produce much revenue for news companies, thinking of Scribd and HP’s MagCloud? Well, it’s the installed bases and paid-content channels established by the Amazons (and the Apples). They’ve got the customers and the credit cards, and they’ve tapped the willingness to pay. They need stuff to sell.

For newspaper companies, it’s another chance to rewrite the economics of the business. The newsonomics of Kindle Singles may mean that publishers can worry less about the cost of content production, for a minute, and more about its supply. Maybe the problem hasn’t been the cost of professional content, but its old-school one-size-fits-all distribution package. That sports story or neighborhood profile could bring in lots more money per unit, if the Singles notion takes off.

One big caution here: Singles thinking leads us into a more Darwinian world than ever. In my Newsonomics book, I chose as Law #1: “In the age of Darwinian content, we’re becoming our own and each other’s editors.” Great, useful content will sell; mediocre content will die faster. Repackaging content pushes the new content meritocracy to greater heights. As we approach 2011, news publishers are hoping to hit home runs with new paid content models. Maybe the future is as much small ball, hitting a lot of one-base hits, of striking out as often — and of Singles.

October 28 2010

14:00

The Newsonomics of the third leg

[Each week, our friend Ken Doctor — author of Newsonomics and longtime watcher of the business side of digital news — writes about the economics of the news business for the Lab.]

Most publishing stood proudly and stably on two feet, for decades.

You got readers to help pay for the product. And you got advertisers to pay as well. While American newspapers dependably got 20 percent of their revenue from readers, European ones have gotten more than 30 percent and Japanese ones more than 50 percent. In the consumer magazine, trade, and B2B worlds, the splits vary considerably, but the same two legs make the businesses work.

Even public radio, seemingly a different animal, has followed a similar model. Substitute “members” for subscribers and “underwriters” for advertisers, and the same two-legged model is apparent.

In our digital news world, though, the news business has been riding, clumsily, a unicycle for more than a decade. Revenue — other than the Wall Street Journal’s and the Financial Times’ — has been almost wholly based on advertising. So, that’s why we’re seeing the big paid content push. “Reader digital revenue in 2011!” is the cry and the quest, as the News Corp. pay walls have gone up, Journalism Online hatches its Press+ eggs, The New York Times prepares to turn on its meter, and Politico launches its paid e-newsletters. They all have the same goal in mind: digital reader revenue.

The simple goal: a back-to-the-future return to a two-legged business model. (See Boston.com’s New Strategies: Switch and Retention). We’ll see how strong that second leg is as 2011 unfolds.

While two legs are good, and better than one, consider that three would be better still. Three legs make a stronger stool, and a more diversified business. We’re beginning to see a number of third legs emerging. So let’s look at the emerging newsonomics of the third leg.

The clearest to see is foundation funding. Foundations, led by Knight, have been pouring money into online startups. The startups, of course, are selling advertising and/or sponsorship, and some are selling memberships as well. In addition to those same two legs, foundation funding provides a third leg — at least for a while. Our 2010 notion is that foundation funding isn’t a lasting revenue source, but a jumpstart; that may change as we move toward 2015. We may well see foundation funding turn into endowments for local journalism, so it may become a dependable third leg.

Make no mistake: It’s not just the new guys who benefit from foundation “third leg” funding. Take California Watch, the Center for Investigative Reporting’s statewide investigative operation. Barely a year old, its dozen-plus staffers have written stories that have appeared throughout the traditional press, from major dailies to commercial broadcasters to the ethnic press. California Watch work — at this point wholly funded by foundations, though CIR, too, is looking back to the traditional legs for future funding — is then used by the old press both to improve quality and to cut its own costs. So, indirectly, the old press derives benefit from this third leg of foundation funding.

Take a couple of examples from the cable industry. We’ve seen the Cablevision model, as the New York-based company bought Newsday, took the website “paid” and bundled it with its cable subscriptions. The notion, here: Cablevision is driving “exclusive” value for its cable (and Triple Play) offers by offering Newsday online content, content not otherwise available without paying separately (or subscribing to print Newsday). Newsday.com sells advertising, and online access, but the real value being tested is what its content does to spur retention and new sales in Cablevision’s big business: cable.

Similarly, Comcast — a pipes company fitfully becoming a content company as well as it tries to complete its NBCU deal — is making a big investment in digital sports. Headed by former digital newspaper exec Eric Grilly, ex of Philly.com and MediaNews, it’s a big play, well deployed in five cities — Chicago, Boston, Philadelphia, the Bay Area, and Washington, D.C. — and headed for nine more, all markets in which Comcast runs regional sports cable networks. Comcast Digital Sports now employs more than 80 people and is producing more than 50 hours of programming a week in each market.

While Comcast is ramping up advertising sales and may test paid reader products as well, it’s that same third leg — the cable revenue — that is the biggest reason behind the push. “We want to provide value to the core business,” Grilly told me last week.

In the cable cases, news production can be justified because it feeds a bigger revenue beast. Thomson Reuters and Bloomberg’s large news staffs do the same, feeding bigger financial services businesses.

Lastly, let’s consider the new Associated Press-led push for an industry-wide “rights consortium.” While its daily newspapers try to stand taller on the two legs of digital ad and reader revenue, the business that could emerge from this new company is about syndication. In that sense, it could be a business-to-business-to-consumer (B2B2C) push, aimed at a third growing revenue source for all, as news content untethered from publishers’ own branded sites is used — and monetized — across mobile platforms, mixed and matched in all kinds of ways.

Maybe, overall, it’s a regeneration process for the news business: as the old legs have grown weaker, the environment is forcing evolutionary experimentation. Over the next several years, we’ll see which third legs survive and prosper, and which become dead ends.

Photo by This Particular Greg used under a Creative Commons license.

October 18 2010

18:15

Newspapers Must Consider More Free, Citizen Media Content

Newspapers can be saved and they can get back to delivering a consistent return on capital to investors, but this can't be achieved using old methods. At CRG Partners, our experience working with newspaper companies in the U.S. and U.K. has shown us that publishers and their executive management seem to believe that traditional cost-cutting methods of layoffs, smaller and thinner papers and lower salaries represent all of the savings that they can generate out of their operations. That's not the case.

One of the myriad problems facing publishers and editors is that, while their resources have been halved or more and they have drastically cut staff and operations, they still face the need to create valuable, compelling and, most importantly, local news and features.

One publisher told our firm, "In order to survive we have to be able to generate non-commodity hyper-local content that is relevant and at a cost that allows us to remain competitive and profitable."

Content costs represent between 35 and 45 percent of the cost of producing a newspaper, so the question becomes: How can we cut costs in content and still deliver quality? In order to approach the question correctly, publishers need better information about how they source content -- which content comes from what sources, how it is used, and how much it costs. Content sourcing is one of the areas where newspaper publishers and other content-driven organizations can realize real cost savings and prepare their organizations for the new world of publishing.

Maintaining Quality Amid Economic Realities

In reality, publishers and CEOs have little understanding of what their editors are doing. Publishers don't know the relative costs of staff-produced content; of paid content from syndicates and wire services; or of shared and free contributed content, with its associated editing costs. If they can get a handle on this, they can do a better job figuring out the cost/quality equation for print, online and beyond.

Without change, the opportunity to reduce costs without impacting quality is probably limited. How to build a better model? When you are working towards more efficient content sourcing, you have to ask the right questions:

  1. Is there an alternative content gathering model or a more efficient model that will help to reduce costs without negatively impacting quality?
  2. Can we improve our existing content gathering model, or does it need wholesale change?
  3. How good are we at sharing content?
  4. How much copy is rewritten?
  5. Can we increase pro bono content and is there a strategy in place to facilitate this?

Metro dailies spend large sums on Associated Press and wire content while also maintaining significant local staffing levels. Based on our experience working with these types of publishers, the problem is that the expenditures often don't match the way content is used. Additionally, the way content is used varies wildly by title. A content sourcing analysis can reveal sometimes startling mismatches between editorial expenditure and the way content is used.
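
The mechanics of such an analysis are straightforward to sketch. In the hypothetical audit below, the usage shares echo the 125-paper figures reported further down; the spend shares are invented, since this article doesn't report them, and exist only to show how a spend/usage mismatch surfaces.

```python
# A hypothetical content sourcing audit: compare each source's share of
# editorial spend to its share of published content. Usage shares echo
# the 125-paper data cited in this article (roughly 7% is unclassified);
# spend shares are invented for illustration.

spend = {"staff": 70, "wire": 20, "reworked": 6, "shared": 2, "free": 2}   # % of budget
usage = {"staff": 37, "wire": 29, "reworked": 14, "shared": 8, "free": 5}  # % of published content

for source in spend:
    gap = spend[source] - usage[source]
    flag = "  <-- mismatch" if abs(gap) >= 10 else ""
    print(f"{source:9s} spend {spend[source]:3d}%  usage {usage[source]:3d}%  gap {gap:+4d}%{flag}")
```

On these invented figures, staff output absorbs 70 percent of the budget but fills only 37 percent of the published report: exactly the kind of startling mismatch a sourcing analysis is meant to expose.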

Some content is national or international in nature and, in our view, doesn't need to be staff-produced. Those cases include national and international reports, movie reviews, celebrity news, travel and many lifestyle features. Staff photography can be moved to the first few pages of a section and wire service or contributed photos used further inside. Layers of copyediting can be reduced.

Free or contributed content is a small but growing part of the newspaper offering. Metro dailies have so far rejected the large amount of free content that is available due to concerns about quality, editorial independence and ethics. In this day and age, however, it is wrong to believe that content drawn from free sources, archived material or bloggers is unusable.

I'm not advocating that companies move to relying upon citizen journalism as a solution to the metro daily content sourcing puzzle. But certain areas -- high school sports, local government and education, for example -- can rely upon content produced by unpaid contributors who work within specified editorial policies. They can fit into the overall editorial sourcing solution. The best-producing, most popular journalists still have roles in the new model by producing relevant, non-commodity local news that differentiates the metro daily. They are needed now more than ever. New media still stands on the shoulders of old media.

Content Sourcing Data

Over the past year, our firm analyzed four newspaper chains representing 300 titles. The graphic below illustrates what we found when we looked at how a group of U.K. newspapers were sourcing content (I share some U.S. data below it). Each letter on the left-hand side represents a U.K. newspaper that has experienced downturns in circulation and revenue. The percentages illustrate how papers within the same chain use content in very different ways:

[Graphic: content sourcing percentages by title within one U.K. newspaper chain]

At a different newspaper company, we found that 125 papers published an average of 37 percent staff-written articles and 29 percent wire service material. What we called "reworked content," or content that had to be rewritten or heavily edited, accounted for 14 percent of what was published. Shared content from sister publications was just 8 percent, while free, or contributed content, represented 5 percent of published content.

These papers had already undergone extensive staff reductions. In the conventional sense, all the costs had been wrung out. But newspapers have to change the way they think in order to survive. If you've wrung out all the costs you can from the existing content creation model, then it's time to change the model itself. One paper sourced 8 percent of its material from free content; if that number moved up to 20 percent, the savings could be measured and monetized. In the case of this client, reducing the use of staff-produced material by 16 percent led to a 28 percent savings in staffing costs.
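
As a rough back-of-envelope check on those client numbers, assume (purely for illustration) that the 16 percent cut means percentage points of published content and that staffing cost scales linearly with the staff-produced share. The reported cut and savings then imply a baseline staff share:

```python
# A back-of-envelope reading of the client figures above, assuming the
# cut is in percentage points of published content and that staffing
# cost scales linearly with the staff-produced share. The baseline
# share is derived, not reported by CRG.

cut_points = 16      # percentage points of staff-produced content removed
savings = 0.28       # reported reduction in staffing costs

implied_baseline = cut_points / savings              # ~57% staff-produced
remaining = implied_baseline - cut_points

print(f"implied baseline staff share: {implied_baseline:.0f}%")
print(f"staffing cost after the cut: {remaining / implied_baseline:.0%} of before")
```

Under that linear assumption the numbers are internally consistent: cutting 16 points from an implied 57-point baseline leaves about 72 percent of prior staffing cost, i.e., the 28 percent savings reported.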

Although the program has been implemented for 2010-2011, actual results aren't in yet. At this point, the editorial changes have been accepted and circulation is holding steady. If all goes according to plan, a total of $4.3 million more in savings will be realized. None of that could be accomplished by an editorial system that doesn't understand what it costs to produce a newspaper. It's high time for a content sourcing change in this industry.

As part of New York-based CRG Partners, Neil Heyside (neil.heyside@crgpartners.com) has more than 20 years of experience in process improvement, change management and operational reengineering in the U.K., U.S., Europe and South Africa. CRG Partners received the 2010 Turnaround Management Association's (TMA's) Mega Company Turnaround Award and was named Turnaround Consulting Firm of the Year by M&A Advisor. He can be reached at 212.370.5550.


August 26 2010

16:00

The Newsonomics of news orgs surrounded by non-news

[Each week, our friend Ken Doctor — author of Newsonomics and longtime watcher of the business side of digital news — writes about the economics of the news business for the Lab.]

The Washington Post Company has been much in the news recently, but not because of its flagship paper. It’s making news around its other holdings. It has shed Newsweek, staunching a $30 million annual bleed. More importantly to the company’s finances, its Kaplan “subsidiary” has been much in the spotlight, under investigation by the feds, along with other for-profit educators, for fraud around student loans.  Those inquiries have rocked The Washington Post Co.’s share price, sending it to a year-to-date low.

The Post’s case has also refocused public attention on how much the company depends on Kaplan. Kaplan now accounts for 62 percent of the company’s revenues and 67 percent of its profits. It became clear, even to those who hadn’t been watching closely, that the Post was more an education company than a newspaper one, though the Grahams’ family ownership clearly intends to use that positioning to protect and sustain the flagship paper.

The Post case is not an isolated one. Fewer news companies are, well, “news” companies in the way we used to think of them. More news operations find themselves within larger enterprises these days, and I believe that will be a continuing trend. It could be good for journalism — buffering news operations in times of changing business models — or it could be bad for journalism, as companies whose values don’t include the “without fear or favor” gene increasingly house journalists. That push and pull will play out dramatically over the next five years.

Let’s look, though, at the changing newsonomics of the companies that own large news enterprises.

Here’s a chart of selected companies, showing what approximate (revenue definitions vary significantly company to company) percentage of their overall annual revenues are derived from news:

News Corp.: 19 percent (newspapers and information services); 31 percent (newspapers and broadcast)
Gannett: 94.3 percent (newspapers and broadcast)
New York Times: 93 percent (newspapers and broadcast)
Washington Post: 21 percent (newspapers and broadcast)
Thomson Reuters: 2.3 percent (Media segment)
Bloomberg: <15 percent (non-terminal media businesses)
AP: 100 percent (newspapers and broadcast)
McClatchy: 100 percent (newspapers and broadcast)
Disney (ABC News): <14 percent (broadcast)
Guardian Media Group: 46 percent (newspapers)

The non-news revenues may be a surprise, but here’s one further fact to ponder: News, over the past several years, has continued to decline in its percentage contribution to most diversified companies. Given all the trends we know, it will continue to do so. Movies, cable, satellite, and even broadcasting all have challenges, structural and cyclical, but overall are doing better than print and text revenues.

News Corp., the largest company by news revenue in the world with publications on three continents, is a great example. After all, although it is eponymously named, it is not really a “news company.” With only one in five of its overall dollars coming directly from traditional news, it’s much more dependent on the success of the latest Ben Stiller comedy or the fortunes of a blockbuster than on the digital advertising growth of The Wall Street Journal or the paid-content successes — or failures — of The Times of London. These matter, of course, but let’s consider the context.

In February, I wrote about the “Avatar Advantage” that News Corp.’s Wall Street Journal held in its increasingly head-to-head battle with The New York Times. At that point, Avatar had brought in $2 billion in gross receipts for News Corp., whose 20th Century Fox produced and distributed the movie. Now that number has grown by $750 million, to $2.75 billion in total. News Corp. shares that revenue with lots of hands, but what it keeps will make an impressive difference to its bottom line — and to what it can pour into The Wall Street Journal, as CEO Rupert Murdoch desires.

Compare that financial flexibility with the Times, and it’s night and day. The Times Co.’s total 2009 revenues: $2.4 billion, less than Avatar itself has produced. The Times is all but a newspaper pure play, deriving about 5.5 percent of its revenue from non-news Internet businesses, like About.com, after shedding TV and radio stations and its share of the Boston Red Sox.

It may be a one-of-a-kind pure play, in that it is the leading standalone news site and reaches vast audiences globally. Yet its pure-play nature can feel like a noose, one that tightened in the depths of the recession and has only now begun to loosen. The Times’ planned paid-content metering system, for instance, is a nervous-making strategy for a company with relatively little margin of error. Compare that to the revenue trajectories that News Corp.’s London papers may see after their paywalls have been in place for a year. Whatever the results, they’ll have de minimis impact on News Corp. fortunes.

Likewise, McClatchy — another newspaper pure play, like MediaNews, A.H. Belo, Lee, and a few others — is now betting wholly on newspapers and their torturous transition to digital.

While Gannett is heavily dependent on print newspapers in the U.S. and U.K., it has benefited from the 13 percent of its revenues that come from broadcast. Broadcast revenues — buoyed by Olympics and election-year advertising — were up 18.6 percent for the first half of 2010, while Gannett’s newspaper revenues were down 6.5 percent. Broadcast may be a largely mature medium, too, but for the print news companies that haven’t jettisoned properties gained in an earlier foray into broadcast diversification, it has provided some balm. In addition to Gannett, Media General and Scripps are among those holding on to broadcast properties.

For the bigger companies, the consequences are more nuanced. I call these large, now globally oriented companies (in news coverage, in audience reach and, soon, in advertising sales) the Digital Dozen: twelve-plus companies that are trying to harness the real scale value of digital distribution.

The Digital Dozen’s Thomson Reuters is a great example. Until 2007, Reuters was a standalone, a 160-year-old news service struggling with its own business models in this changing world. Then came its merger with financial services giant Thomson; news now contributes less than a tenth of TR’s annual revenue. That kind of insulation can be a good thing, both as the company figures out how to synergize the Reuters and Thomson business lines (a complex work-in-progress) and as it allows investment in Reuters products and staffing, even as news revenues find tough sledding. Meanwhile, its main competitor, AP, may have a strong commercial business (broadcast and print) worldwide — but it’s a news business, with no other revenue lines to provide breathing room.

National broadcast news, too, has seen rapid change, and much staff reduction in the past few years. GE, one behemoth of a diversified company, is turning over the NBC News operation to another giant, Comcast. ABC News is found within the major entertainment conglomerate Disney.

Meanwhile, Bloomberg — getting more than eight out of 10 of its dollars via the terminal rental business — is moving aggressively to build a greater news brand; witness the BusinessWeek acquisition and its push into government news coverage, formally announcing the hiring of 100 journalists for its new Bloomberg Government business unit. Non-news revenue — largely meaning non-advertising dependence — is what may increasingly separate “news” companies going forward. So we see the Guardian Media Group selling off its regional newspapers to focus, as its annual report proudly announces, on “a strong portfolio [of non-news companies and investments] to support our journalism.”

Journalism must be fed — but inky hands will be doing less and less of the feeding.

Image by John Cooper used under a Creative Commons license.
