Tumblelog by Soup.io

August 17 2012

16:07

Metrics, metrics everywhere: How do we measure the impact of journalism?

If democracy would be poorer without journalism, then journalism must have some effect. Can we measure those effects in some way? While most news organizations already watch the numbers that translate into money (such as audience size and pageviews), the profession is just beginning to consider metrics for the real value of its work.

That’s why the recent announcement of a Knight-Mozilla Fellowship at The New York Times on “finding the right metric for news” is an exciting moment. A major newsroom is publicly asking the question: How do we measure the impact of our work? Not the economic value, but the democratic value. The Times’ Aaron Pilhofer writes:

The metrics newsrooms have traditionally used tended to be fairly imprecise: Did a law change? Did the bad guy go to jail? Were dangers revealed? Were lives saved? Or least significant of all, did it win an award?

But the math changes in the digital environment. We are awash in metrics, and we have the ability to engage with readers at scale in ways that would have been impossible (or impossibly expensive) in an analog world.

The problem now is figuring out which data to pay attention to and which to ignore.

Evaluating the impact of journalism is a maddeningly difficult task. To begin with, there’s no single definition of what journalism is. It’s also very hard to track what happens to a story once it is released into the wild, and even harder to know for sure if any particular change was really caused by that story. It may not even be possible to find a quantifiable something to count, because each story might be its own special case. But it’s almost certainly possible to do better than nothing.

The idea of tracking the effects of journalism is old, beginning in discussions of the newly professionalized press in the early 20th century and flowering in the “agenda-setting” research of the 1970s. What is new is the possibility of cheap, widespread, data-driven analysis down to the level of the individual user and story, and the idea of using this data for managing a newsroom. The challenge, as Pilhofer put it so well, is figuring out which data, and how a newsroom could use that data in a meaningful way.

What are we trying to measure and why?

Metrics are powerful tools for insight and decision-making. But they are not ends in themselves because they will never exactly represent what is important. That’s why the first step in choosing metrics is to articulate what you want to measure, regardless of whether or not there’s an easy way to measure it. Choosing metrics poorly, or misunderstanding their limitations, can make things worse. Metrics are just proxies for our real goals — sometimes quite poor proxies.

An analytics product such as Chartbeat produces reams of data: pageviews, unique users, and more. News organizations reliant on advertising or user subscriptions must pay attention to these numbers because they’re tied to revenue — but it’s less clear how they might be relevant editorially.

Consider pageviews. That single number is a combination of many causes and effects: promotional success, headline clickability, viral spread, audience demand for the information, and finally, the number of people who might be slightly better informed after viewing a story. Each of these components might be used to make better editorial choices — such as increasing promotion of an important story, choosing what to report on next, or evaluating whether a story really changed anything. But it can be hard to disentangle the factors. The number of times a story is viewed is a complex, mixed signal.

It’s also possible to try to get at impact through “engagement” metrics, perhaps derived from social media data such as the number of times a story is shared. Josh Stearns has a good summary of recent reports on measuring engagement. But though it’s certainly related, engagement isn’t the same as impact. Again, the question comes down to: Why would we want to see this number increase? What would it say about the ultimate effects of your journalism on the world?

As a profession, journalism rarely considers its impact directly. There’s a good recent exception: a series of public media “impact summits” held in 2010, which identified five key needs for journalistic impact measurement. The last of these needs nails the problem with almost all existing analytics tools:

While many Summit attendees are using commercial tools and services to track reach, engagement and relevance, the usefulness of these tools in this arena is limited by their focus on delivering audiences to advertisers. Public interest media makers want to know how users are applying news and information in their personal and civic lives, not just whether they’re purchasing something as a result of exposure to a product.

Or as Ethan Zuckerman puts it in his own smart post on metrics and civic impact, “measuring how many people read a story is something any web administrator should be able to do. Audience doesn’t necessarily equal impact.” Not only that, but it might not always be the case that a larger audience is better. For some stories, getting them in front of particular people at particular times might be more important.

Measuring audience knowledge

Pre-Internet, there was usually no way to know what happened to a story after it was published, and the question seems to have been mostly ignored for a very long time. Asking about impact gets us to the idea that the journalistic task might not be complete until a story changes something in the thoughts or actions of the user.

If journalism is supposed to inform, then one simple impact metric would ask: Does the audience know the things that are in this story? This is an answerable question. A survey during the 2010 U.S. mid-term elections showed that a large fraction of voters were misinformed about basic issues, such as expert consensus on climate change or the predicted costs of the recently passed healthcare bill. Though coverage of the study focused on the fact that Fox News viewers scored worse than others, that missed the point: No news source came out particularly well.

In one of the most limited, narrow senses of what journalism is supposed to do — inform voters about key election issues — American journalism failed in 2010. Or perhaps it actually did better than in 2008 — without comparable metrics, we’ll never know.

While newsrooms typically see themselves in the business of story creation, an organization committed to informing, not just publishing, would have to operate somewhat differently. Having an audience means having the ability to direct attention, and an editor might choose to continue to direct attention to something important even if it’s “old news”; if someone doesn’t know it, it’s still new news to them. Journalists will also have to understand how and when people change their beliefs, because information doesn’t necessarily change minds.

I’m not arguing that every news organization should get into the business of monitoring the state of public knowledge. This is only one of many possible ways to define impact; it might only make sense for certain stories, and to do it routinely we’d need good and cheap substitutes for large public surveys. But I find it instructive to work through what would be required. The point is to define journalistic success based on what the user does, not the publisher.

Other fields have impact metrics too

Measuring impact is hard. The ultimate effects on belief and action will mostly be invisible to the newsroom, and so tangled in the web of society that it will be impossible to say for sure that it was journalism that caused any particular effect. But neither is the situation hopeless, because we really can learn things from the numbers we can get. Several other fields have been grappling with the tricky problems of diverse, indirect, not-necessarily-quantifiable impact for quite some time.

Academics wish to know the effect of their publications, just as journalists do, and the academic publishing field has long had metrics such as citation count and journal impact factor. But the Internet has upset the traditional scheme of things, leading to attempts to formulate wider-ranging, web-inclusive measures of impact such as Altmetrics or the article-level metrics of the Public Library of Science. Both combine a variety of data, including social media.

Social science researchers are interested not only in the academic influence of their work, but its effects on policy and practice. They face many of the same difficulties as journalists do in evaluating their work: unobservable effects, long timelines, complicated causality. Helpfully, lots of smart people have been working on the problem of understanding when social research changes social reality. Recent work includes the payback framework which looks at benefits from every stage in the lifecycle of research, from intangibles such as increasing the human store of knowledge, to concrete changes in what users do after they’ve been informed.

NGOs and philanthropic organizations of all types also use effectiveness metrics, from soup kitchens to international aid. A research project at Stanford University is looking at the use and diversity of metrics in this sector. We are also seeing new types of ventures designed to produce both social change and financial return, such as social impact bonds. The payout on a social impact bond is contractually tied to an impact metric, sometimes measured as a “social return on investment.”

Data beyond numbers

Counting the countable because the countable can be easily counted renders impact illegitimate.

- John Brewer, “The impact of impact”

Numbers are helpful because they allow standard comparisons and comparative experiments. (Did writing that explainer increase the demand for the spot stories? Did investigating how the zoning issue is tied to developer profits spark a social media conversation?) Numbers can also be compared at different times, which gives us a way to tell if we’re doing better or worse than before, and by how much. Dividing impact by cost gives measures of efficiency, which can lead to better use of journalistic resources.

But not everything can be counted. Some events are just too rare to provide reliable comparisons — how many times last month did your newsroom get a corrupt official fired? Some effects are maddeningly hard to pin down, such as “increased awareness” or “political pressure.” And very often, attributing cause is hopeless. Did a company change its tune because of an informed and vocal public, or did an internal report influence key decision makers?

Fortunately, not all data is numbers. Do you think that story contributed to better legislation? Write a note explaining why! Did you get a flood of positive comments on a particular article? Save them! Not every effect needs to be expressed in numbers, and a variety of fields are coming to the conclusion that narrative descriptions are equally valuable. This is still data, but it’s qualitative (stories) instead of quantitative (numbers). It includes comments, reactions, repercussions, later developments on the story, unique events, related interviews, and many other things that are potentially significant but not easily categorizable. The important thing is to collect this information reliably and systematically, or you won’t be able to make comparisons in the future. (My fellow geeks may here be interested in the various flavors of qualitative data analysis.)

Qualitative data is particularly important when you’re not quite sure what you should be looking for. With the right kind, you can start to look for the patterns that might tell you what you should be counting.

Metrics for better journalism

Can the use of metrics make journalism better? If we can find metrics that show us when “better” happens, then yes, almost by definition. But in truth we know almost nothing about how to do this.

The first challenge may be a shift in thinking, as measuring the effect of journalism is a radical idea. The dominant professional ethos has often been uncomfortable with the idea of having any effect at all, fearing “advocacy” or “activism.” While it’s sometimes relevant to ask about the political choices in an act of journalism, the idea of complete neutrality is a blatant contradiction if journalism is important to democracy. Then there is the assumption, long invisible, that news organizations have done their job when a story is published. That stops far short of the user, and confuses output with effect.

The practical challenges are equally daunting. Some data, like web analytics, is easy to collect but doesn’t necessarily coincide with what a news organization ultimately values. And some things can’t really be counted. But they can still be considered. Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics, comments, inbound links, social media discussion, and other reactions. With that sort of extensive data set, we stand a chance of figuring out not only what the journalism did, but how best to evaluate it in the future. But nothing so elaborate is necessary to get started. Every newsroom has some sort of content analytics, and qualitative effects can be tracked with nothing more than notes in a spreadsheet.

Most importantly, we need to keep asking: Why are we doing this? Sometimes, as I pass someone on the street, I ask myself if the work I am doing will ever have any effect on their life — and if so, what? It’s impossible to evaluate impact if you don’t know what you want to accomplish.

March 02 2011

15:00

Dennis Mortensen: Are news orgs worrying too much about search and not enough about the front page?

Editor’s Note: This is a guest post from Dennis R. Mortensen, former director of data insights at Yahoo and founder of Visual Revenue, a New York firm that sells its predictive-analytics services to news organizations.

Dennis saw my talk at the Canadian Journalism Foundation in January and wanted to comment on the role of a news site’s front page in its success in assembling an audience. He argues that paying too much attention to SEO on current articles could backfire on news orgs.

In Josh’s talk in Toronto, he hypothesized that:

[The front page is] still an enormously powerful engine of traffic. I would say actually that for most American newspapers…it’s probably 70 percent in a lot of cases.

Josh is saying you should view the front page as a traffic channel unto itself, just as you’d think of Facebook or Google — something I wholeheartedly agree with. If you choose to view your front page as a traffic channel, you’ll also sign up for a different kind of data analysis — analysis that mixes external referrers with internal referrers. In other words, a link from the Drudge Report is no different than a link from your own front page, in the sense that they both should be viewed as channels to optimize.

I argue that the front page is the most important engine of traffic for news media publishers today. I would also argue that this whole notion of news media publishers being held hostage by Google — and the slightly suboptimal idea of optimizing your articles for search to begin with — is somewhat misguided. It certainly seems wrong when we look at the data.

In this analysis, it’s important to distinguish between two core segments: current article views and archived article views. To begin, I’ve chosen strict definitions that are unfavorable to my own conclusion. A current article is defined as an item of content that is directly being exposed on the front or section front page right now. Any other content not currently exposed on a front page or section front page is deemed to be an archived article.

We looked at a sample of about 10 billion article views, across a sample of Visual Revenue clients, and found that on any given day, 64 percent of views are on current articles, and 36 percent of views are on archived articles.

So on a typical day, for most if not all news media publishers, the largest portion of article views comes off of their current article segment — stories published today or perhaps yesterday and still being actively promoted. I find this analysis fascinating and almost empowering, for the simple reason that most current news events are initially non-searchable. If a revolution breaks out in Egypt, I won’t know until I’m told about it. Non-searchable translates into a need for the stories to be discoverable by the reader in a different way, such as on a front page, through RSS, or in their social stream — all channels the publisher either owns or can influence.

There is no doubt that search, as a channel, owns the archive. One can discuss the data of why that is and why it is or isn’t optimal — I’ll leave that for a later discussion. But today, let’s focus on the current article segment, by far the bigger of the two. Where do those views come from? Looking at the same dataset from our clients, we get a very clear indication of where one’s focus should lie:

Sources of current article views:

78 percent come from front pages
7 percent come from search
6 percent come from social media
5 percent come from news aggregators
3 percent come from news originators
1 percent come from RSS & email

(We’re defining “news originators” as sites where at least two-thirds of the stories excerpted and promoted on their front page are original pieces generated by the news organization — which includes most traditional media. “News aggregators” are sites where less than two-thirds are original, such as Google News, Techmeme, or Drudge.)

I doubt this front-page dominance is much of a surprise to most editors — but for some reason, it seems like we aren’t taking the appropriate action today. We have 78 percent of all views on current articles coming from the front page — that’s 49 percent of all your article views on any given day — and what do we do to optimize it? And why is it that so many news organizations think immediately of search when we write a new story, when search has minimal initial impact? Even worse, writing an SEO-optimized article excerpt title for your front page probably deoptimizes your results on current articles.
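The arithmetic behind that 49 percent figure is simple to check. A quick sketch, using only the percentages quoted above:

```python
# Shares quoted in the post: 64% of daily article views land on "current"
# articles, and 78% of current-article views arrive via front pages.
current_share = 0.64
front_page_share = 0.78

# Front pages' share of ALL article views on a given day
overall_share = current_share * front_page_share
print(f"{overall_share:.0%}")  # prints 50% (0.4992 before rounding)
```

The exact product is 49.92 percent, which the post rounds down to the "49 percent" cited above.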

The front page is indeed still an enormously powerful engine of traffic. We now know that about half of your article views can be attributed to the primary front page or the section front pages — and with it a huge chunk of any news organization’s online revenue. The question, then, is what kind of processes and optimization methodologies have you applied to take advantage of this fact?

January 19 2011

15:00

Seeking Alpha’s Premium Partnership Program and the evolution of paying for content

When the word dropped this weekend that the finance blog Seeking Alpha would begin paying its contributors, the news was met with both questions about its motives and concern about how the deal shakes out for writers.

The payment plan, called the “Premium Partnership Program,” provides contributors a rate of $10 for every 1,000 pageviews on stories submitted to Seeking Alpha “exclusively.” It’s a formula that makes sense on paper, particularly for a site that gets between 40-45 million pageviews a month: If exclusive stories garner high enough hits, the overall traffic helps the site — and writers get a payday.

The catch, of course — beyond the “exclusivity” clause — is that writers have to find the perfect alchemy of scoops and SEO-friendly subjects to gain a substantial cut. And already a few of Seeking Alpha’s contributors are saying that the math doesn’t add up. Reuters’s Felix Salmon, whose work appears on Seeking Alpha, offered up these numbers:

On average, I’ve been getting just under 48,000 pageviews per month. Which means that if I gave every single one of my blog entries to Seeking Alpha exclusively, then I’d still be earning on average less than $500 a month. And I’m a full-time blogger, unlike most Seeking Alpha contributors.

If most posts on Seeking Alpha get between 3,000 and 4,000 pageviews, that means that, under the partnership program, a writer would get a check for $30 or $40 per post.
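The payout formula is simple enough to sketch. The figures below are the ones quoted above: $10 per 1,000 pageviews, Salmon's roughly 48,000 monthly pageviews, and a typical post at 3,000 to 4,000 views.

```python
RATE_PER_THOUSAND = 10.0  # dollars per 1,000 pageviews on exclusive posts

def payout(pageviews: float) -> float:
    """Dollars earned under the Premium Partnership Program."""
    return pageviews / 1000 * RATE_PER_THOUSAND

print(payout(48_000))  # 480.0 -- Salmon's "less than $500 a month"
print(payout(3_000))   # 30.0  -- low end of a typical post
print(payout(4_000))   # 40.0  -- high end of a typical post
```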

It’s clear we’ve reached Stage 2 in the saga of how sites handle contributor content, as more outlets are trying to find a way to compensate writers. Stage 1 was the period when news sites traded on reputation (and maybe ego) in motivating contributors to submit content (“write for us and your name will be in front of the right people”). But as sites grow, attracting more advertising dollars or at least more funding, the question for a number of writers becomes “how do I get a piece of the action?”

Many sites and writers employ fairly traditional freelancing models of compensation — flat rates per post — while others rely on variations in CPM rates. (Yahoo’s Contributor Network, for example, compensates writers at $2 CPM plus an upfront payment.) We’ve also seen slightly more elaborate schemes, like The Awl’s recent venture in profit-sharing. And of course there’s Demand Media, the subject of many a story about writers’ pay and working conditions.

When I spoke with Seeking Alpha’s CEO, David Jackson, last week, he told me that the site’s contributors were a mix of novice writers with backgrounds in the financial industries as well as established bloggers and newsletter writers. (Seeking Alpha has close to 4,000 contributors all told, including both individuals and other media properties like TechCrunch and Globe Investor, the investment site from Canada’s Globe and Mail.)

Before the partnership program, the payoff for writers was publicity for the work they published elsewhere. “We publish the article; we get traffic and drive leads to your business,” Jackson said in a phone conversation.

While that’s still the case, the money will sweeten the deal for the writers. “If they specialize in a particular sector, they become the authority on it and get lots of readership,” Jackson said. And that, in turn, will “make real money.”

Though he didn’t go into specifics, Jackson noted that writers have the potential to pull in a bigger take from pageviews than the site does from advertisers. Jackson told me they “view how much money [contributors] make as a sign of our success. If they do really well, it means we’re successful.”

The bottom line for the moment, though, is that freelancers and blog contributors are still not likely to pull in heavy dividends for their work — at least, as Salmon suggested, not enough for a full-time gig off any one website. Of course, the elephant in the room is The Huffington Post, which has an extensive network of unpaid contributors, and is in a universe far different than most sites, as Joseph Tartakoff points out. But out on the fringes, we’re seeing more of an evolution in the ways publishers are paying for the content they post online.

November 18 2010

17:30

Crunching Denton’s Ratio: What’s the return on paying sources?

There was a lot of buzz on Twitter yesterday about Paul Farhi’s piece in The Washington Post on checkbook journalism — in particular the way a mishmash of websites, tabloids, and TV news operations put money in the hands of the people they want to interview. (With TV, the money-moving is a little less direct, usually filtered through payments for photos or videos.)

But, just for a moment, let’s set aside the traditional moral issues journalists have with paying sources. (Just for a moment!) Does paying sources make business sense? Financially speaking, the justification given for paying sources is to generate stories that generate an audience — with the hope that the audience can then be monetized. Does it work?

There’s not nearly enough data to draw any real conclusions, but let’s try a little thought experiment with the (rough) data points we do have, because I think it might provide some insight into other means of paying for content. Nick Denton, the head man at Gawker Media and the chief new-media proponent of paying sources, provides some helpful financial context:

With the ability to determine instantly how much traffic an online story is generating, Gawker’s Denton has the pay scale almost down to a science: “Our rule of thumb,” he writes, “is $10 per thousand new visitors,” or $10,000 per million.

What strikes me about those numbers is how low they are. $10K for a million new visitors? There aren’t very many websites out there that wouldn’t consider that an extremely good deal.

Let’s compare Denton’s Ratio to the numbers generated by another money-for-audience scheme in use on the Internet: online advertising. After all, lots of ads are sold using roughly the same language Denton uses: the M in CPM stands for thousand. Except it’s cost per thousand impressions (a.k.a. pageviews), not cost per thousand new visitors, which would be much more valuable. What Denton’s talking about is more like CPC — cost per click, which sells at a much higher rate. (Those new visitors aren’t just looking at an ad for a story; they’re actually reading it, or at least on the web page.) Except it’s even more valuable than that, since there’s no guarantee that the person clicking a CPC ad is actually a “new” visitor. Let’s call what Denton’s talking about CPMNV: cost per thousand new visitors.

CPC rates vary wildly. When I did a little experiment last year running Google AdWords ads for the Lab, I ended up paying 63 cents per click. I ran a similar experiment a few months later with Facebook ads for the Lab, and the rate ended up being 26 cents per click.

What Denton is getting for his $10 CPMNV is one cent per click, one cent per new visitor. It’s just that the click isn’t coming from the most traditional attention-generating tool, an ad — it’s coming from a friend’s tweet, or a blogger’s link, or a mention on ESPN.com that sends someone to Google to search “Brett Favre Jenn Sterger.”
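Put side by side, those per-click costs look like this. (A sketch: the AdWords and Facebook figures are the Lab experiments cited just above, and CPMNV is divided by 1,000 to get a per-visitor cost.)

```python
denton_cpmnv = 10.0                # $10 per 1,000 new visitors
per_new_visitor = denton_cpmnv / 1000  # one cent per new visitor

rates = {
    "Google AdWords (Lab test)": 0.63,
    "Facebook ads (Lab test)": 0.26,
    "Denton's Ratio": per_new_visitor,
}
# Cheapest channel first
for channel, dollars in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{channel}: ${dollars:.2f} per click/visitor")
```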

Doing the pageview math

And that $10 CPMNV that Denton’s willing to pay is actually less than the return he gets for at least some of his source-paid stories. Take the four Gawker Media pieces that the Post story talks about: the original photo of singer Faith Hill from a Redbook cover, to show how doctored the image was for publication; photos and a narrative from a man who hooked up with Senate candidate Christine O’Donnell; the “lost” early version of the iPhone 4 found in a California bar; and voice mails and pictures that allegedly show quarterback Brett Favre flirting with a woman named Jenn Sterger, who is not his wife. Gawker publishes its pageview data alongside each post, so we can start to judge whether Denton’s deals made financial sense. (Again, we’re talking financial sense here, not ethical sense, which is a different question.)

Faith Hill Redbook cover: 1.46 million pageviews on the main story, and about 730,000 pageviews on a number of quick folos in the days after posting. Total: around 2.2 million pageviews, not to mention an ongoing Jezebel franchise. Payment: $10,000.

Christine O’Donnell hookup: 1.26 million pageviews on the main story, 617,000 on the accompanying photo page, 203,000 on O’Donnell’s response to the piece, 274,000 on Gawker’s defense of the piece. Total: around 2.35 million pageviews. Payment: $4,000.

“Lost” iPhone: 13.05 million pageviews on the original story; 6.1 million pageviews on a series of folos. Total: around 19.15 million pageviews. Payment: $5,000.

Brett Favre/Jenn Sterger: 1.73 million pageviews on the first story, 4.82 million on the big reveal, 3.99 million pageviews on a long line of folos. Total: around 10.54 million pageviews. Payment: $12,000.

Let’s say, as a working assumption, that half of all these pageviews came from people new to Gawker Media, people brought in by the stories in question. (That’s just a guess, and I suspect it’s a low one — I’d bet it’s something more like 70-80 percent. But let’s be conservative.)

Expected under the Denton formula:
Faith Hill: 1 million new visitors
O’Donnell: 400,000 new visitors
iPhone: 500,000 new visitors
Favre: 1.2 million new visitors

Guesstimated real numbers:
Faith Hill: 1.1 million new visitors
O’Donnell: 1.17 million new visitors
iPhone: 9.58 million new visitors
Favre: 5.27 million new visitors

Again, these are all ham-fisted estimates, but they seem to indicate at least three of the four stories significantly overperformed Denton’s Ratio.
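Under those stated assumptions ($10 per thousand new visitors on the payment side, and half of each story's pageviews coming from new visitors), the two lists above can be reproduced directly:

```python
# (payment in dollars, total pageviews) for each story, as listed above
stories = {
    "Faith Hill": (10_000, 2_200_000),
    "O'Donnell": (4_000, 2_350_000),
    "iPhone": (5_000, 19_150_000),
    "Favre": (12_000, 10_540_000),
}

NEW_VISITOR_FRACTION = 0.5  # the deliberately conservative guess above

for name, (payment, pageviews) in stories.items():
    expected = payment / 10 * 1000   # visitors "bought" at $10 per 1,000
    estimated = pageviews * NEW_VISITOR_FRACTION
    print(f"{name}: expected {expected:,.0f}, "
          f"estimated {estimated:,.0f} ({estimated / expected:.1f}x)")
```

By this rough measure, the O'Donnell, iPhone, and Favre stories beat the formula by roughly 3x, 19x, and 4x respectively, while Faith Hill about broke even.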

Reaching new audiences

The primary revenue input for Gawker is advertising. They don’t publish a rate card any more, but the last version I could find had most of their ad slots listed at a $10 CPM. Who knows what they’re actually selling at — ad slots get discounted or go unsold all the time, many pages have multiple ads, and lots of online ads get sold on the basis of metrics other than CPM. But with one $10 CPM ad per pageview, the 2.2 million pageviews on the Faith Hill story would drum up $22,000 in ad revenue. (Again, total guesstimate — Denton’s mileage will vary.)
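The revenue guesstimate works the same way, assuming (as above) one $10-CPM ad impression per pageview:

```python
cpm = 10.0               # dollars per 1,000 ad impressions
pageviews = 2_200_000    # Faith Hill story, follow-ups included

revenue = pageviews / 1000 * cpm
print(f"${revenue:,.0f}")  # $22,000
```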

Aside: Denton has said that these paid-for stories are “always money-losers,” and it’s true that pictures of Brett Favre’s manhood can be difficult to sell ads next to. Most (but not all) of those 10.54 million Brett Favre pageviews were served without ads on them. But that has more to do with, er, private parts than the model of paying sources.

But even setting aside the immediate advertising revenue — the most difficult task facing any website is getting noticed. Assuming there are lots of people who would enjoy reading Website X, the question becomes how those people will ever hear of Website X. Having ESPN talk about a Deadspin story during Sportscenter is one way. Having that Redbook cover emailed around to endless lists of friends is another. Gawker wants to create loyal readers, but you can only do that from the raw material of casual readers. Some fraction of each new flood of visitors will, ideally, see they like the place and want to stick around.

Denton publishes up-to-date traffic stats for his sites, and here’s what the four in question look like:

It’s impossible to draw any iron-clad conclusions from these squiggles, but in the case of Jezebel and Deadspin, the initial spike in traffic appears to have been followed by a new, higher plateau of traffic. (The same seems true, but to a lesser extent, for Gizmodo — perhaps in part because it was already much more prominent within the gadget-loving community when the story broke than, for example, 2007-era Jezebel or 2010-era Deadspin were within their target audiences. With Gawker, the O’Donnell story is too recent to see any real trends, and in any event, the impact will probably be lost within the remarkable overall traffic gains the site has seen.)

Fungible content strategies

I’ve purposefully set aside the (very real!) ethics issue here because, when looked at strictly from a business perspective, paying sources can be a marker for paying for content more generally. From Denton’s perspective, there isn’t much difference between paying a source $10,000 for a story and paying a reporter $10,000 for a story. They’re both cost outputs to be balanced against revenue inputs. No matter what one thinks of, say, Demand Media, the way they judge content’s value — how much money can I make off this piece? — isn’t going away.

Let’s put it another way. Let’s say a freelance reporter has written a blockbuster piece, one she’s confident will generate huge traffic numbers. She shops it around to Gawker and says it’ll cost them $10,000 to publish it. That’s a lot of money for an online story, and Denton would probably do some mental calculations: How much attention will this story get? How many new visitors will it bring to the site? What’s it worth? I’m sure there are some stories where the financial return isn’t the top factor — stories an editor just really loves and wants to publish. But just as the Internet has turned advertising into an engine for instantaneous price matching and shopping into an engine for instantaneous price comparison, it breaks down at least some of the financial barrier between journalist-as-cost and source-as-cost.

And that’s why, even beyond the very real ethical issues, it’s worth crunching the numbers on paying sources. Because in the event that Denton’s Ratio spreads and $10 CPMNV becomes a going rate for journalists as well as sources, that means for a writer to “deserve” a $50,000 salary, he’d have to generate 5 million new visitors a year. Five million is a lot of new visitors.
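That math is simple enough to check. Here’s a minimal sketch (the helper functions are ours, purely for illustration; the $10 “CPMNV” rate is the hypothetical going rate from the post):

```python
# Denton's Ratio, hypothetically applied to writers as well as sources:
# a story is worth $10 per 1,000 new visitors it attracts (CPMNV).
CPMNV = 10.0  # dollars per thousand new visitors

def story_value(new_visitors: int) -> float:
    """Dollar value of a story under the $10 CPMNV rate."""
    return new_visitors / 1000 * CPMNV

def visitors_needed(salary: float) -> float:
    """New visitors per year a writer must generate to 'deserve' a salary."""
    return salary / CPMNV * 1000

print(story_value(10_000))      # 100.0 -- a 10,000-visitor story is "worth" $100
print(visitors_needed(50_000))  # 5000000.0 -- five million visitors for a $50k salary
```

The second number is the sobering one: at that rate, every staff writer becomes a traffic quota.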

There’s one other line Denton had in the WaPo piece that stood out to me:

“I’m content for the old journalists not to pay for information. It keeps the price down,” Denton writes in an exchange of electronic messages. “So I’m a big supporter of that journalistic commandment – as it applies to other organizations.”

When we think of the effects of new price competition online, we often think of it driving prices down. When there are only a few people competing for an advertising dollar, they can charge higher rates; when there are lots of new competitors in the market, prices go down. But Denton’s basically arguing the equally logical flipside: I can afford to pay so little because there aren’t enough other news orgs competing for what sources have to offer. Let’s hope we don’t get to that same point with journalists.

October 18 2010

16:00

Move over, LiLo! Public-interest news can be more valuable to publishers than traffic bait

Conventional wisdom: What people really want from their journalism is some combination of celebrity gossip, naked celebrities, and gossip about naked celebrities. That may be a slight exaggeration, but it’s more than an assumption: Through the magic of web analytics, news publishers have access as never before to the collective Id of the people they serve…and again and again, such lowbrow fare as LiLo’s legal troubles and Favre’s photographic adventures rack up the pageviews, while their less sensational counterparts are rewarded for their dignity by being left alone. The more high-minded journalism — the public-interest investigations, the news about the economy and public policy — is still valuable, of course. But it’s also, we’ve assumed, a loss leader.

A study released today provides a hopeful counterpoint to all that (hopeful, that is, if you’re not Lindsay Lohan): For publishers, hard-news-focused, public-interest-oriented reporting might actually be more valuable than celebrity gossip and similarly LiLotastic fare. And not just in a good-for-democracy sense, but in a bottom-line sense. Perfect Market, a firm aimed at helping publishers maximize online revenue from their content, tracked more than 15 million news articles from 21 of its client news sites — including those of the Los Angeles Times, San Francisco Chronicle, and Chicago Tribune — from June 22 to September 21 of this year. And it found that, while the Lohan sentencing and other celebrity coverage drove significant online traffic, articles about public-interest topics — unemployment benefits, the Gulf oil spill, mortgage rates, etc. — were the top-earning news topics of the summer. The latter stories offered their publishers, overall, more advertising revenue per page view (which is to say: more bang for their advertising buck) than their fluffy counterparts.

The caveat: Perfect Market has a vested interest in the financial viability of quality content. (“At Perfect Market we believe that content matters,” the firm says in its press release. “By delivering the right content in the right format to the right user with the right relevancy, Perfect Market has increased the revenue for partners in our program by at least 20-fold.”) That said, though, the study’s holistic scope — moving beyond pageviews to focus on revenue — is an instructive approach. Traffic is notoriously fuzzy as a metric; it’s also notoriously stingy when it comes to return-on-investment. (Take The Huffington Post, which, for all its skill — PHOTOS! VIDEO! SLIDESHOW! — at leveraging our love of scandal, and for all the traffic it brings to its site, has struggled with profitability.)

For publishers struggling to sustain their operations, let alone grow them, it’s revenue that matters. And in Perfect Market’s study, via context-optimized advertising, it was consumer interest — not the casual variety that leads to quick headline-views, but the more engaged variety that leads to high time-on-site numbers and increased chances of ad clicks — that translated to revenue. Articles about social security were the most valuable to news publishers, the analysis found, generating an average of $129 in revenue for every thousand pageviews. Articles about mortgage rates were next, at $93 for every thousand views, followed by Gulf recovery jobs ($34 for every thousand).

LiLo, on the other hand? She generated only $2.50 for every 1,000 pageviews.
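Using the per-thousand figures the study reports, the gap is easy to quantify (a sketch; the dictionary and helper here are ours, not Perfect Market’s):

```python
# Revenue per thousand pageviews (RPM) by topic, per the Perfect Market study.
rpm = {
    "social security": 129.00,
    "mortgage rates": 93.00,
    "Gulf recovery jobs": 34.00,
    "Lindsay Lohan": 2.50,
}

def revenue(topic: str, pageviews: int) -> float:
    """Revenue a publisher earns from a given number of pageviews on a topic."""
    return pageviews / 1000 * rpm[topic]

# A Lohan story needs roughly 52x the traffic of a social-security story
# to earn the same revenue:
print(rpm["social security"] / rpm["Lindsay Lohan"])  # 51.6
```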

(It’s worth noting that the high-paying topics are united less by their hard-news nature than by their proximity to companies interested in hawking their wares. Immigration lawyers want their ads next to immigration stories; mortgage brokers and “Refinance now!” types want to be next to mortgage-rate stories; job sites want their ads on those Gulf-recovery-jobs stories. That makes sense, but it doesn’t do much for the sea of worthy news stories that won’t have an easy e-commerce hook. There aren’t many good contextual ads for Lohan court stories, but there also aren’t many for corruption investigations.)

We talk about the convergence of mediums: TV and print products and the web, video and text and multimedia, collapsing into one mega-medium. What the Perfect Market study suggests, though, is that there’s another type of convergence we would do well to cultivate: the conflation of editorial and commercial content. Earlier this year, Ken Doctor compared the revenues per unique user at the HuffPo and The New York Times; he estimated that, while the Times brought in $1 per unique user per month, the HuffPo brought in only 12 cents per user. And he attributed the discrepancy in large part to the Times’ advertising savvy: Its longtime presence in the ad-sales business means that it “owns key agency relationships.” It’s able to invest in making its ads contextually relevant to its content and, thus, to its users: AdSense, writ large. None of that is to say that the old church/state wall that has separated ads from journalism should be allowed to crumble; it is to say, though, that engagement may be just as important to news sites’ commercial content as to their editorial.

Image by Globovision used under a Creative Commons License.

September 23 2010

18:00

Peter Rojas on how a move away from blogging is a return to its roots

When Peter Rojas deleted his Facebook account a few months ago, it wasn’t because he hates social networks. The evidence? Rojas sees media sites heading in a social direction. At a talk on the future of blogging at Emerson College yesterday, he said that we’re headed for a return to connectivity, civil discussion and a bottom-up approach — the kind of things he says marked the early days of blogging.

Rojas founded the Gawker gadget site Gizmodo and went on to start its rival Engadget. In all, the half dozen properties he’s started since the early 2000s attract about 30 million unique visitors each month. That success at driving traffic is, in part, what inspired his most recent project, a networked gadgets site called gdgt.

Rojas thinks it will be increasingly difficult to build an online content business in an environment where quantity is the primary goal. The constant urge to publish more content and drive pageviews is not doing much for the reader. “[The web] is always trying to drive more clicks,” he said. “When everyone is doing it, it becomes a zero sum game.” When ads are sold on a CPM basis that requires huge pageview numbers to make money, publishers start pushing out more and more content. “It’s sort of a tragedy of the commons where the tragedy is our attention,” he said.

Rojas hopes his latest project will offer a better user experience that sidesteps those pageview demands. Gdgt is a discussion site that connects people to each other via gadgets — the ones they own, the ones they want, the ones they’ve left behind. Users contribute most of the content, in the form of reviews, ratings, and discussions. The site will eventually roll out new tools that will let users build reputations. For an example of what the site is like, check out Rojas’ Gdgt profile. It’s not a news site in the way Gizmodo and Engadget are, and the founders hope that opens up more opportunities for innovative revenue streams.

Site cofounder Ryan Block pointed out that news sites have often rejected the use of affiliate programs like Amazon Associates, which give a small cut to sites that refer purchasers, because they believe they’d create a perverse incentive to write positive reviews. Gdgt is already experimenting with sponsored gadgets, like this Panasonic TV, where the site gets a cut each time a reader clicks through and purchases. Rojas said he can also envision relationships with companies as another source of revenue, along with pro accounts or even a research service.

The site is also running in-person events modeled after trade shows that usually only journalists and buyers get to attend. Despite a lengthy list of event sponsors, Block said he doesn’t see the events as a moneymaker, even in the long run, but an interesting way for the site to offer an “extension of the metaphor” they’re creating online — and another way to make the experience more social.

September 20 2010

14:00

L.A. Times’ controversial teacher database attracted traffic and got funding from a nontraditional source

Not so long ago, a hefty investigative series from the Los Angeles Times might have lived its life in print, starting on a Monday and culminating with a big package in the Sunday paper. But the web creates the potential for long-form and in-depth work not just to live on online, but to do so in a more useful way than a print-only story could. That’s certainly the case for the Times’ “Grading the Teachers,” a series based on the “value-added” performance of individual teachers and schools. On the Times’ site, users can review the value-added scores of 6,000 3rd- through 5th-grade teachers — by name — in the Los Angeles Unified School District as well as individual schools. The decision to run names of individual teachers and their performance was controversial.

The Times calculated the value-added scores from the 2002-2003 school year through 2008-2009 using standardized test data provided by the school district. The paper hired a researcher from RAND Corp. to run the analysis, though RAND itself was not involved as an organization. From there, in-house data expert and long-time reporter Doug Smith figured out how to present the information in a way that was usable for reporters and understandable to readers.

As might be expected, the interactive database has been a big traffic draw. Smith said that since the database went live, more than 150,000 unique visitors have checked it out. Some 50,000 arrived right away, and now the Times is seeing about 4,000 users per day. And those users are engaged. So far the project has generated about 1.4 million page views — which means a typical user is clicking on more than 9 pages. That’s sticky content: Parents want to compare their child’s teacher to the others in that grade, their school against the neighbor’s. (I checked out my elementary school alma mater, which boasts a score of, well, average.)
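The engagement arithmetic checks out; a quick sketch using the figures above:

```python
# Engagement math for the Times' teacher database, from the reported totals.
pageviews = 1_400_000
unique_visitors = 150_000

pages_per_visitor = pageviews / unique_visitors
print(round(pages_per_visitor, 1))  # 9.3 pages per visitor
```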

To try to be fair to teachers, the Times gave their subjects a chance to review the data on their page and respond before publication. But that’s not easy when you’re dealing with thousands of subjects, in a school district where email addresses aren’t standardized. An early story in the series directed interested teachers to a web page where they were asked to prove their identity with a birth date and a district email address to get their data early. About 2,000 teachers did before the data went public. Another 300 submitted responses or comments on their pages.

“We moderate comments,” Smith said. “We didn’t have any problems. Most of them were immediately postable. The level of discourse remained pretty high.”

All in all, it’s one of those great journalism moments at the intersection of important news and reader interest. But that doesn’t make it profitable. Even with the impressive pageviews, the story was costly from the start and required serious resource investment on the part of the Times.

To help cushion the blow, the newspaper accepted a grant from the Hechinger Report, the education nonprofit news organization based at Columbia’s Teachers College. [Disclosure: Lab director Joshua Benton sits on Hechinger's advisory board.] But aside from doing its own independent reporting, Hechinger also works with established news organizations to produce education stories for their own outlets. In the case of the Times, it was a $15,000 grant to help get the difficult data analysis work done.

I spoke with Richard Lee Colvin, editor of the Hechinger Report, about his decision to make the grant. Before Hechinger, Colvin covered education at the Times for seven years, and he was interested in helping the newspaper work with a professional statistician to score the 6,000 teachers using the “value-added” metric that was the basis for the series.

“[The L.A. Times] understood that was not something they had the capacity to do internally,” Colvin said. “They had already had conversations with this researcher, but they needed financial support to finish the project.” (Colvin wanted to be clear that he was not involved in the decision to run individual names of teachers on the Times’ site, just in analyzing the testing data.) In exchange for the grant, the L.A. Times allowed Hechinger to use some of its content and gave them access to the data analysis, which Colvin says could have future uses.

At The Hechinger Report, Colvin is experimenting with how best to carry out its mission of supporting in-depth education coverage — producing content for the Hechinger website, placing its articles with partner news organizations, or direct subsidies as in the L.A. Times series. Hechinger is currently sponsoring a portion of the salary of a blogger at the nonprofit MinnPost whose beat includes education. “We’re very flexible in the ways we’re working with different organizations,” Colvin said. But, to clarify, he said, “we’re not a grant-making organization.”

As for the L.A. Times’ database, will the Times continue to update it every year? Smith says the district has not yet handed over the 2009-10 school year data, which isn’t a good sign for the Times. The district is battling with the union over whether to use value-added measurements in teacher evaluations, which could make it more difficult for the paper to get its hands on the data. “If we get it, we’ll release it,” Smith said.

July 16 2010

16:00

“What the audience wants” isn’t always junk journalism

Should news organizations give the audience what it wants?

Swap out “news organization” for “company” and “audience” for “customers” and the question seems absurd. But journalists have traditionally considered it a core principle that the audience’s taste should not be the sole guiding force behind news judgment. Coverage based on clicks is a race to the bottom, a path to slideshows of Michelle Obama’s arms and celebrity perp walks, right?

Item: Last week, when The New York Times wrote about the new Yahoo blog The Upshot, the reporter focused on the angle that it will use search data to guide editorial decisions:

Yahoo software continuously tracks common words, phrases and topics that are popular among users across its vast online network. To help create content for the blog, called The Upshot, a team of people will analyze those patterns and pass along their findings to Yahoo’s news staff of two editors and six bloggers…The news staff will then use that search data to create articles that — if the process works as intended — will allow them to focus more precisely on readers.

Yahoo staffers were dismayed, saying the search tool is just one piece of their editorial process. Michael Calderone: “NYT obsesses over use of a search tool; ignores boring, traditional stuff (breaking news, analysis, edit meetings, etc.).” Andrew Golis: “Seriously, NYT misses a forest of brilliant old school original reporting & analysis for an acorn of search insights.”

Item: Washington Post ombudsman Andrew Alexander writes that the Post is riven by a divide, with web journalists pushing to use user data. Print reporters, meanwhile, fear that “if traffic ends up guiding coverage, they wonder, will The Post choose not to pursue some important stories because they’re ‘dull’?” Then Alexander noted that the Post’s top-trafficked staff-written story of the past year was about…Crocs. “The Crocs story illustrates a sobering reality about The Post’s site. Often (not always), readers are coming for the offbeat or the unusual. They’re drawn by endearing animal videos or photo galleries of celebrities.” Or rubber shoes.

But what if sometimes “what the audience wants” is more serious than what the news organization is giving them?

Item: A Pew study released Wednesday noted that, while public interest in the Gulf oil spill has dropped a bit — from 57 percent surveyed saying they are following the story closely to 43 percent — coverage of the oil spill has fallen off a cliff, dropping from 44 percent of all news coverage to 15 percent. And the drop in public interest followed the drop in coverage, not the other way around. Meanwhile, news consumers were getting a heavy dose of Lebron James and Lindsay Lohan coverage. (Note: The data is from June 10 to July 10, so before news that BP has tentatively stopped the spew.)

Item: Meanwhile, Mother Jones released its second-quarter traffic stats this week. For unique visitors, they’re up 125 percent year-over-year. Their revenue has increased 61 percent. The timing roughly coincides with the site’s decision to double down on oil spill coverage, though it cites other coverage for the uptick as well. The magazine’s Kate Sheppard follows the spill almost exclusively, filing a lively Twitter feed with links to her own work and others. That could help account for a chunk of the 676 percent jump in traffic from social media year-over-year. (Pew also found recently that the oil spill had slowly entered the social media world, picked up speed and hit a point last month where it was accounting for nearly a quarter of all links on Twitter.)

Could giving readers more of what they want mean both good journalism and a stronger bottom line? The two won’t line up every time, but it’s useful to remember that “what the audience wants” doesn’t always match the stereotype.

May 21 2010

07:17

Metrics: How do you track your audience?

How do you track audience interaction? Is it pageviews, time on site, sharing actions or a combination of all of that and more? Do the metrics you use to evaluate performance within the newsroom differ from what the business side looks at and talks to advertisers about?

April 05 2010

16:43

Is print still king? Has online made a move? Updating a controversial post

A year ago, in a Nieman Journalism Lab post that garnered 88 comments and still has viral life out there, I maintained that just three percent of newspaper content consumption happens online; the rest of it happens the old-fashioned way, by people reading ink on dead trees. Given the continuing attention being paid to that conclusion (it was cited just last month by Hal Varian, Google’s chief economist, in testimony to the Federal Trade Commission), let’s revisit the numbers and see whether anything has changed.

With updates or improved data on at least some of the numbers, the general conclusions still hold: U.S. newspapers have not pushed much of their audience to their websites, nor have they followed the migration of their readership to the web. Their combined print and online readership metrics, whether measured in pageviews or in time spent, show that there’s been significant attrition since last year in the total audience for newspaper content, and that the fraction of that audience consuming newspaper content online remains in the low-to-mid single digits.

Here’s how I arrived at the numbers this year (to follow this more closely, or check my math, you can view my worksheet here):

Point of comparison: Pageviews

First, a comparison of pageviews in print and pageviews online. In print, I projected pageviews for newspaper content by taking the 2008 paid circulation reported by the Newspaper Association of America, adjusting it by the average of the two six-month circulation loss figures reported by the Audit Bureau of Circulations (March, September), and multiplying the resulting 2009 circulation by 2.128 readers per copy for weekdays and 2.477 for Sundays. (This is a 2007 Scarborough Research (PDF link) number I used last year also, but readers per copy has varied little for decades.) This yielded total readership for weekdays and Sundays. I then made the same (discussable) assumption as last year: that the average reader of a newspaper issue looks at 24 pages, which means there is a total of 70.602 billion printed newspaper pageviews per month. That’s down almost 19 percent from last year’s 87.1 billion pages viewed. To be fair, my audience numbers last year were also based on that 2007 Scarborough data, so that’s really a two-year decline. (Jim Conaghan, research director at the NAA, tells me they have no data on the number of printed pages readers look at on average, and that there is no update to the 2007 readers-per-copy study.)

For online pageviews, NAA offers a precise number based on research by Nielsen Online. Nielsen’s methodology changed in June 2009, so I’ve used the average of the nine months from June 2009 to February 2010, which was 3.382 billion online newspaper site pageviews per month. So for print and online combined, we have a total of 73.985 billion pageviews (versus 90.3 billion last year). In other words, as measured in pageviews, 95.43 percent of total readership for newspaper content was in print; 4.57 percent of it was online. So while it appears that the online fraction has grown from 3.5 percent in the previous analysis, the bad news is that the total content exposure has dropped by about one fifth.
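Here is the split computed from the figures above, in billions of pageviews per month (the post’s 73.985 total reflects rounding in the underlying unrounded numbers; summing the rounded figures gives 73.984, which yields the same percentages):

```python
# Print vs. online pageview split, from the monthly figures derived above.
print_pv = 70.602   # billions of printed pageviews per month (projected)
online_pv = 3.382   # billions of online pageviews per month (Nielsen avg.)
total = print_pv + online_pv

online_share = online_pv / total * 100
print(round(online_share, 2))        # 4.57 percent of pageviews online
print(round(100 - online_share, 2))  # 95.43 percent in print
```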

Point of comparison: Time on site

Some commenters to last year’s post maintained that print and online pageviews weren’t comparable. And certainly, the current wisdom says that pageviews and unique visitors don’t count nearly as much as “engagement” as measured by time spent on site as well as interaction with content. So, as I did last year, let’s look at time spent — both in print and online, print engagement versus online engagement with the newspaper content:

For the print side of the ledger, I began with the readership counts derived as above, and assumed average time spent with printed newspapers to be 25 minutes on weekdays and 35 minutes on Sundays. Now, this assumption got considerable comment flak last year, and no doubt will have its doubters this year. For those who say “I don’t know anybody who reads a newspaper at all, so how can the average be 25 minutes?” let me say that more than 40 million newspapers are still sold every day and someone is reading them, whether you know them or not. Anecdotally, half the people I see at Amy’s in Brattleboro are spending more time than that just with the New York Times. But let’s avoid the anecdotal evidence — here’s (PDF link) some U.S. Statistical Abstract data on time spent with various media, sourced from Veronis Suhler. It claims that the average person in 2009 spent 159 hours a year with newspapers (including newspaper websites), which is 26.1 minutes a day. While this tends to support the controversial pass-along factor, it’s for the average (adult) person. Since only about half the population actually reads printed newspapers (on average per day), that would mean newspaper readers spend an average of 52 minutes a day — which just strikes me as way too high. So I’m going to stick with the happy medium of 25 minutes weekdays and 35 on Sundays until someone can improve that data. (As an additional data point: According to an NAA print newspaper “engagement” study (PDF link) presented a few years ago, on weekdays 45 percent of readers spent more than 30 minutes, 34 percent between 16 and 30 minutes, 21 percent under 15 minutes. Higher times were reported for Sunday editions.)
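The Veronis Suhler cross-check above works out like this:

```python
# Cross-check of the Veronis Suhler figure: 159 hours per year with
# newspapers, averaged over the whole adult population.
minutes_per_day = 159 * 60 / 365
print(round(minutes_per_day, 1))  # 26.1 minutes per average adult per day

# If only about half the population actually reads a printed paper on a
# given day, the per-reader average doubles:
per_reader = minutes_per_day * 2
print(round(per_reader))  # 52 minutes -- the figure the post judges too high
```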

That yields total time spent with printed newspapers of 78.471 billion minutes per month. The online side is easy: averaging the last 9 months of NAA data, we get time spent at newspaper websites of 2.535 billion minutes per month. And combining print and online time spent, we have a total of 81.006 billion minutes per month spent with newspaper content. The engagement measure, therefore, says that 96.87 percent of time spent with newspaper content was in print; 3.13 percent of time spent was online. This is almost exactly the same as last year, when I found that 3.0 percent of time spent was online. But printed newspapers have lost a big chunk of total engagement as well: this year’s numbers are down 18.9 percent from last year’s analysis, which, again, really is a two-year drop of about one-fifth, with the loss occurring on the print side.

The conclusion that the overwhelming share of newspapers’ audience remains on the print side of the ledger is supported by Scarborough’s 2008 ratings of what it called the “Integrated Newspaper Audience” (PDF link) in selected markets. Measuring the cumulative 5-day audience rather than daily averages, that data showed that the incremental audience at newspaper websites added only a few percentage points to their print reach.

NAA and Nielsen are clear that their pageviews and time-spent stats since June 2009 can’t be compared with earlier months because of methodology changes, so I’ll refrain from doing that; but clearly the print/online audience split was enormously skewed last year and remains so — and most importantly, the online side is not growing. Back in June, NAA reported 3.469 billion pageviews and 2.701 billion minutes spent; in January (to avoid the short month of February), there were 3.452 billion pageviews and 2.485 billion minutes spent. Time spent per unique visitor has fallen gradually from 38:24 minutes in June to 33:09 minutes in January. In other words, while newspapers are losing readership on the print side, that disappearing audience is not following them online; at best, the online audience for newspaper content is static.

The purpose of this analysis is not to compare all “offline” news consumption with all online news consumption; it is to dissect the newspaper content audience. But as several commenters noted last year, this really means that as the audience moves online, it is getting most of its news from non-newspaper sites.

Beyond examining the split between readers of printed and online newspaper content, I also noted in another post last year that newspaper websites attracted less than one percent of all U.S. web traffic — 0.69 percent of pageviews and 0.56 percent of time spent, to be precise, in June 2009. Updating those stats with February 2010 Nielsen Online data (also detailed in the spreadsheet linked above), over the last nine months newspapers have actually lost share in both pageviews and time spent: pageview share dropped to 0.63 percent, and time spent dropped to 0.50 percent of total web traffic.

Meanwhile at newspapers, much effort and much dialogue continues to focus on getting readers to pay for content and battling aggregators — energy that might better be spent figuring out how not to lose the sizeable remaining audience for newspaper content, not by “protecting print” but by keeping the current print readers in the fold as they, too, gradually migrate to reading news online.

March 31 2010

13:56

Nieman Journalism Lab: Gawker’s new traffic metric measures ‘reader affection’

While others pore over pageviews and underscore uniques, Gawker Media has been quietly working on a new metric, one designed to measure so-called “reader affection”. This new metric is called “branded traffic” and is, according to Nieman Journalism Lab, “both more nebulous and more significant” than traditional forms of measurement.

The idea is to measure the number of visitors that arrive at the site via a direct search for its name or variations on its branding, or by typing in the site URL directly, and distinguish them from more incidental traffic.

The metric comes from a simple compound: direct type-in visits plus branded search queries in Google Analytics. In other words, Gawker Media is bifurcating its visitors in its evaluation of them, splitting them into two groups: the occasional audience, you might call it, and the core audience.

The original Gawker release highlights the value the site places on turning the internet passerby into an affectionate reader:

While distributing content across the web is essential for attracting the interest of internet passersby, courting these wanderers, massaging them into occasional visitors, and finally gaining their affection as daily readers is far more important. This core audience – borne of a compounding of word of mouth, search referrals, article recommendations, and successive enjoyed visits that result in regular readership – drives our rich site cultures and premium advertising products.

Full post at this link…


March 30 2010

18:56

A “reader affection” formula: Gawker creates a metric for branded traffic

Influence, engagement, impact: For goals that are, in journalism, kind of the whole point, they’re notoriously difficult to quantify. How do you measure, measure a year, and so on.

Turns out, though, that Gawker Media, over the past few years, has been attempting to do just that. Denton and crew, we learned in a much-retweeted post this morning, have been “quietly tending” a metric both more nebulous and more significant than pageviews, uniques, and the other more traditional ways of impact assessment: They’ve been measuring branded traffic — or, as the post in question delightfully puts it, “recurring reader affection.” The metric comes from a simple compound: direct type-in visits plus branded search queries in Google Analytics.
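Gawker hasn’t published its implementation, but the compound as described is simple to sketch. In the sketch below, the visit records, field names, and brand-term list are hypothetical illustrations, not Gawker’s actual analytics schema:

```python
# A sketch of "branded traffic" as described: direct type-in visits
# plus search visits where the query names the brand itself.
BRAND_TERMS = {"gawker", "gizmodo", "deadspin", "jezebel"}

def is_branded(visit: dict) -> bool:
    """Classify a visit as branded (core audience) or not (occasional)."""
    if visit["source"] == "direct":   # reader typed the URL in
        return True
    if visit["source"] == "search":   # reader searched for the brand by name
        query = visit.get("query", "").lower()
        return any(term in query for term in BRAND_TERMS)
    return False

visits = [
    {"source": "direct"},
    {"source": "search", "query": "gawker nick denton"},
    {"source": "search", "query": "cheap gadgets"},
    {"source": "referral"},
]
branded = sum(is_branded(v) for v in visits)
print(branded, "of", len(visits), "visits are branded")  # 2 of 4
```

Everything else — referrals, generic search — falls into the occasional-audience bucket.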

In other words, Gawker Media is bifurcating its visitors in its evaluation of them, splitting them into two groups: the occasional audience, you might call it, and the core audience. And it’s banking on the latter. “New visitors are only really valuable if they become regulars,” Denton pointed out in a tweet this morning. (That lines up with Denton’s recent pushing of unique visits over pageviews as a performance metric.)

The goal — as it is for so many things in journalism these days — is to leverage the depth against the breadth. As the post puts it:

While distributing content across the web is essential for attracting the interest of Internet passersby, courting these wanderers, massaging them into occasional visitors, and finally gaining their affection as daily readers is far more important. This core audience — borne of a compounding of word of mouth, search referrals, article recommendations, and successive enjoyed visits that result in regular readership — drives our rich site cultures and premium advertising products.

I spoke with Erin Pettigrew, Gawker Media’s head of marketing and advertising operations — and the author of the post in question — over gChat to learn more about the outlet’s branded-traffic metric.

“The idea came from a few places,” she told me.

First, for so long we concerned ourselves with reach and becoming a significant enough web population such that advertisers would move us into their consideration set for marketing spend. Now that we have attained a certain level of reach and that spend consideration, we’re looking for additional ways to differentiate ourselves against other publisher populations. So branded traffic helps to illuminate our readership’s quality over its quantity, a nuanced benefit over many of the more broadly reaching sites on the web.

Secondly, there’s a myth, especially in advertising, that frequency of visitation is wasteful to ad spend. As far as premium content sites and brand marketers go, however, that myth is untrue. So, the ‘branded traffic’ measure is part of a larger case we’re making that advertising to a core audience (who visits repeatedly) is extremely effective.

Another aspect of that case, she adds, is challenging assumptions about reader engagement. “The wisdom has been that the higher the frequency of ad exposures to a single visitor, the less effective a marketing message becomes to that visitor. To the contrary, the highly engaged reader is actually far more receptive to the publisher’s marketing messaging than the occasional passerby.”

In other words, she says: “Branded traffic is to a free website what a subscriber base is to a paid content site. The psychology behind the intent to visit and engage with the publisher brand in those two instances is very similar.”

The approach’s big x-factor — whether branded traffic will get buy-in, in every sense, from marketers — remains to be determined. “It’s something we’re just beginning to explore,” Pettigrew says. But marketers, she points out, “have always considered front door takeovers or roadblocks as one of the most coveted advertising placements on a publisher website. And they “intuitively understand that the publisher brand’s halo is brightest and strongest for a reader who comes through the front door seeking the publisher’s brand experience” — which is to say, they should realize the value of the core audience. “But we’ve yet to see a metric take hold across the industry that gets at a numerical understanding of this marketer intuition.”

January 07 2010

15:13

To grow, Gawker turns its attention to unique users

Gawker Media’s web measurement of choice is shifting from pageviews to unique users. That’s a pretty big deal for an organization that led the charge in pageview obsession. Gawker founder Nick Denton explained the refocusing in a staff memo:

The target is called “US monthly uniques.” It represents a measure of each site’s domestic audience. This is the figure that journalists cite when judging a site’s competitive position. It’s also the metric by which advertisers decide which sites they will shower with dollars. Finally, a site with plenty of genuine uniques is one that has good growth prospects. Each of those first-time visitors is a potential convert.
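The distinction the memo trades on, uniques versus pageviews, is just distinct visitors versus total hits. A toy illustration with invented log data:

```python
# Toy pageview log: one entry per hit, keyed by visitor ID (invented data).
log = ["u1", "u2", "u1", "u3", "u1", "u2"]

pageviews = len(log)        # every hit counts
uniques = len(set(log))     # each visitor counts once

# The gap between the two is repeat visits: a pageview target rewards
# heavy repeat readers (catnip slideshows), while a uniques target
# rewards reaching new people (scoops that get picked up elsewhere).
repeat_visits = pageviews - uniques
print(pageviews, uniques, repeat_visits)  # 6 3 3
```

Which number you optimize for changes the editorial incentives, which is exactly the behavior shift Denton's memo is trying to produce.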

Gawker wants to expand its audience, and in the web world that often means launching new sites targeting different audiences. That’s not the case here: Gawker has sold properties, rolled others into its flagship and cut staff in recent years.

So how will Gawker grow amidst consolidation? By focusing efforts on scoops and original content: the stuff that spreads like wildfire through Twitter and Digg. “What is new is our feeling that we have tapped out our existing core audiences, and need to incentivize writers to find the next million people,” Denton wrote in an email. And as our colleague Zach Seward pointed out on Twitter a few days ago, the most popular Gawker posts are disproportionately the ones with original reporting.

The memo points out four stories that fit this new mindset:

Think of an exclusive such as Gawker’s embassy hazing pics, Deadspin’s expose of ESPN’s horndoggery, Gizmodo’s first look of the new Microsoft tablet or io9’s Avatar review. An item which gets picked up and draws in new visitors is worth more than a catnip slideshow that our existing readers can’t help but click upon.

Gawker turned a lot of heads when it grew advertising revenue by 35 percent while the rest of the industry was imploding. Other media organizations may scoff at some of Gawker’s methods, but they’d love to have its growth pattern. If a refocusing on unique users keeps Gawker on an upswing, there’ll be a lot of new passengers on the unique-user bandwagon. I don’t believe we’ve reached the “as Gawker goes, so goes the industry” inflection point, but the company is an industry trendsetter.

I see Gawker’s move plugging into a broader evolution where web publishers seek to attract people, not just clicks. Generating an audience is tough work. Original content and exclusives require far more time and energy than excerpting and aggregating. (That’s not a shot at aggregators — just an acknowledgement of reality.) The upside is that all that extra effort can create strong relationships with audiences and advertisers alike. Engagement leads to revenue, which leads to sustainability, which stokes hope and other things in short supply these days. A focus on uniques may or may not yield better journalism, but it could create better businesses.

Update: Denton followed up with a clarification:

“One minor quibble about your piece. We periodically cut staff and sites — more aggressively than usual last year, of course. But we’ve been hiring too and investing in our most successful properties. Edit budget [is] up 20% this year,” he wrote.
