
May 22 2013

15:00

Objectivity and the decades-long shift from “just the facts” to “what does it mean?”


If I had only one short sentence to describe it, I’d say that journalism is factual reports of current events. At least, that’s what I used to say, and I think it’s what most people imagine journalism is. But reports of events have been a shrinking part of American journalism for more than 100 years, as stories have shifted from facts to interpretation.

Interpretation: analysis, explanation, context, or “in-depth” reporting. Journalists are increasingly in the business of supplying meaning and narrative. It no longer makes sense to say that the press only publishes facts.

New research shows this change very clearly. In 1955, stories about events outnumbered other types of front page stories nearly 9 to 1. Now, about half of all stories are something else: a report that tries to explain why, not just what.

[Chart: the rise of contextual stories over conventional event-centered stories, 1955-2003]

This chart is from a paper by Katharine Fink and Michael Schudson of Columbia University, which calls these types of stories “contextual journalism.” (The paper includes an extensive and readable history of all sorts of changes in journalism in the 20th century; recommended for news nerds.) The authors sampled front-page articles from The New York Times, The Washington Post, and the Milwaukee Journal Sentinel in five different years from 1955 to 2003, and handcoded each of 1,891 stories into one of four categories:

  • conventional: a simple report of an event which happened in the last 24 hours
  • contextual: a story containing significant analysis, interpretation, or explanation
  • investigative: extensive accountability or “watchdog” reporting
  • social empathy: a story about the lives of people unfamiliar to the reader
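
The tally behind those proportions is simple to reproduce. Here is a minimal sketch, with hypothetical coded data, of the kind of count Fink and Schudson's handcoding yields: each front-page story gets exactly one of the four labels, and each category's share is its fraction of the sample.

```python
# Sketch of computing category shares from handcoded stories.
# The sample data below is invented for illustration, not the paper's.
from collections import Counter

CATEGORIES = {"conventional", "contextual", "investigative", "social empathy"}

def category_shares(coded_stories):
    """coded_stories: list of (year, category) pairs -> {category: share of sample}."""
    labels = [cat for _, cat in coded_stories]
    assert all(cat in CATEGORIES for cat in labels), "unknown category label"
    counts = Counter(labels)
    total = len(labels)
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

# Illustrative 1955-style sample: event stories outnumber the rest about 9 to 1.
sample = [("1955", "conventional")] * 9 + [("1955", "contextual")]
print(category_shares(sample))
```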

Investigative journalism picks up after the 1960s but is still only a small percentage of all front-page stories. Meanwhile, contextual journalism increases from under 10 percent to nearly half of all articles. The loser is classic “straight” news: event-centered, inverted-pyramid, who-what-when-how-but-not-so-much-why stories, which have become steadily less popular. All this in the decades before the modern Internet. In fact, previous work showed that the transition away from events began at the dawn of the 20th century.

Investigative journalism may have pride of place within the mythology of American news, but that’s not really what journalists have been up to, by and large. Instead, newspaper journalists have been producing ever more of a kind of work that is so little discussed it doesn’t really have a name. Fink and Schudson write:

…there is no standard terminology for this kind of journalism. It has been called interpretative reporting, depth reporting, long-form journalism, explanatory reporting, and analytical reporting. In his extensive interviewing of Washington journalists in the late 1970s, Stephen Hess called it ‘social science journalism’, a mode of reporting with ‘the accent on greater interpretation’ and a clear intention of focusing on causes, not on events as such. Although this category is, in quantitative terms, easily the most important change in reporting in the past half century, it is a form of journalism with no settled name and no hallowed, or even standardized, place in journalism’s understanding of its own recent past.

From this historical look, fast forward to the web era. The last several years have seen a broad conversation about “context” in news. From Matt Thompson’s key observation that a series of chronological updates don’t really inform, to Studio 20′s Explainer project, to a whole series of experiments and speculations around story form, context has been a hot topic for those trying to rethink Internet-era journalism.

I believe this type of contextual journalism is important, and I hope we will get better at understanding and teaching it. The Internet has solved the basic distribution of event-based facts in a variety of ways; no one needs a news organization to know what the White House is saying when all press briefings are posted on YouTube. What we do need is someone to tell us what it means. In other words, journalism must move up the information food chain — as, in fact, it has steadily been doing for five decades!

Why does this type of journalism not even have a name?

I have a suspicion. I think part of the problem is the professional code of “objectivity.” This is a value system for journalism that has many parts: truth seeking, neutrality, ethics, credibility. But all of these things are different when the journalist’s job moves from describing events to creating interpretations.

There are usually multiple plausible ways to interpret any event, so what are our standards for saying which interpretations are right? Journalism has a long, sorry history of professional pundits whose analyses of politics and economics turn out to be no better than guessing. In concrete fields such as election forecasting, it may later be obvious who was right. In other cases, there may not be a “right” answer in the traditional, positivist sense of science. These are the classic problems of framing: Is a 0.3 percent drop in unemployment “small” or is it “better than expected”? True neutrality becomes impossible in such cases, because if something has been politicized, you’re going to piss someone off no matter how you interpret it. (See also: hostile media effect.) There may not be an objectively correct or currently knowable meaning for any particular set of factual events, but that won’t stop the fighting over the narrative.

This seems to be a tricky place for truth in journalism. Much easier to say that there are objective facts, knowably correct facts, and that that is all journalism reports. The messy complexity of providing real narratives in a real world is much less authoritative ground. Nonetheless, we all crave interpretation along with our facts. Explanation and analysis and storytelling have become prevalent in practice. We as audiences continue to demand certain types of experts, even when we can’t tell if what they’re saying is any good. We demand reasons why, even if there can be no singular truth. We demand narrative.

What this latest research says to me is that journalism has added interpretation to its core practice, but we’re not really talking about it. The profession still operates with a “just the facts, ma’am” disclaimer that no longer describes what it actually does. Perhaps this is part of why media credibility has been falling for decades.

Photo of Sol LeWitt’s “Objectivity” (1962) via AP/National Gallery of Art.

March 29 2012

13:06

Comparing apples and oranges in data journalism: a case study

A must-read for any data journalist, aspiring or otherwise, is Simon Rogers’ post on The Guardian Datablog where he compares public and private sector pay.

This is a classic apples-and-oranges situation where politicians and government bodies are comparing two things that, really, are very different. Is a private school teacher really comparable to someone teaching in an unpopular school? What is the private sector equivalent of a director of public health or a social worker?

But if these issues are being discussed, journalists must try to shed some light, and Simon Rogers does a great job in unpicking the comparisons. From pay and hours worked, to qualifications and age (big differences in both), and gender and pay inequality (more women in the public sector, more lower- and higher-paid workers in the private sector), Rogers crunches all the numbers:

“[T]he proportion of low skill jobs in the private sector has increased, and the proportion of high skill jobs in the public sector increased to around 31% of all jobs by 2011, compared [to] 26% of all private sector jobs.

“But, at the same time, people who are most highly qualified actually get paid worse in the public sector.

“… Public sector workers tend to be older … Average mean hourly earnings peak in the early 40s in both sectors. They decline slightly approaching retirement although the decline happens earlier in the private sector than in the public sector, possibly because the higher earners in the private sector are more likely to leave the labour market earlier.

“It also shows that if you’re older in the public sector, you get paid better than in the private sector.

“… [T]he bottom 5% of workers in the public sector earn less than £6.91 per hour, whereas in the private sector, 5% of workers earn less than £5.93 per hour.”
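
Figures like those £6.91 and £5.93 cutoffs are 5th-percentile values. A quick sketch of how such a cutoff is computed, using the nearest-rank method and invented wage samples (the real analysis uses official earnings data, not these numbers):

```python
# Nearest-rank percentile: the smallest value with at least p% of the
# data at or below it. Wage samples below are illustrative only.
def percentile(values, p):
    ordered = sorted(values)
    # ceil(p * n / 100), converted from 1-indexed rank to 0-indexed position
    k = max(0, -(-p * len(ordered) // 100) - 1)
    return ordered[int(k)]

public = [6.50, 6.91, 7.40, 8.10, 9.00, 10.50, 12.00, 14.00, 16.50, 20.00]
private = [5.60, 5.93, 6.40, 7.00, 8.20, 9.80, 11.50, 13.80, 17.00, 24.00]
print(percentile(public, 5), percentile(private, 5))
```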

When you find yourself in an apples-and-oranges situation you can’t avoid, this is the way to do it. Any other examples?



July 25 2011

14:30

Vadim Lavrusik: Five key building blocks to incorporate as we’re rethinking the structure of stories

Editor’s Note: Vadim Lavrusik is Facebook’s first Journalist Program Manager, where he is responsible for, among other things, helping journalists to create new ways to tell stories. (You may remember him from his work at Mashable.) In the article below, he provides a wide-angle overview of the key forces that are re-shaping the news article for the digital age.

If we could re-envision today’s story format — beyond the text, photographs, and occasional multimedia or interactive graphics — what would the story look like? How would the audience consume it?

Today’s web “article” format is in many ways a descendant of the golden age of print. The article is mostly a recreation of print page design applied to the web. Stories, for the most part, are coded with a styled font for the headline, byline, and body — with some divs separating complementary elements such as photographs, share buttons, multimedia items, advertising, and a comments thread, which is often so displaced from the story that it’s hard to find. It is only scratching the surface of the storytelling that is possible on the web.

In the last few years, we’ve seen some progress in new approaches to the story format on the web, but much of it has included widgets and tools tacked on for experimentation. And it doesn’t fully account for changes in user behavior and the proliferation of simple publishing tools and platforms on the web. As the Huffington Post’s Saul Hansell recently put it, “There are a lot more people saying things than there is stuff to say in this world.” Tools like Storify and Storyful enable journalists to curate the conversation that’s taking place on the social web, turning ephemeral comments into enduring narratives. A story, Jeff Jarvis notes, can be the byproduct of the process of newsgathering — the conversation.

And the conversation around the story has become, at this point, almost as important as the story itself. The decisions we make now — of design and of content creation — will inform the evolution of the story itself. So it’s worth stepping back and wondering: How can we hack today’s story into something that reflects the needs of today’s news consumers and publishers, integrates the vast amounts of content and data being created online, and generally leverages the opportunities the web has created? Below are some of the most crucial elements of online storytelling; think of it as a starting point for a conversation about the pieces tomorrow’s story format could include.

1. Context

Context wears many hats in a story. It could mean representing historical context through an interactive timeline or presenting contextualized information that puts the story in perspective. It could be an infographic, a subhead with information — or cumulative bits of information that run through a narrative. When the first American newspaper, Publick Occurrences, was published, many of its stories were only a few sentences in length. Most of its stories were reports that were gathered through word of mouth. But because of the infrequency of the publication and short length of the stories, it failed to provide the reader with adequate context in its stories. Haphazard newsgathering led to a somewhat chaotic experience for readers.

Today, though, with publication happening every millisecond, the overflow of information presents a different kind of challenge: presenting short stories in a way that still provides the consumer with context instead of just disparate pieces of information. We’ve seen a piece of the solution with the use of Storify, which enables journalists to organize the social story puzzle pieces together to suggest a bigger picture. But how can this approach be scaled? How can we provide context in a way that is not only comprehensive, but inclusive?

2. Social

Social platforms have, in short, changed the way we consume news. Over the last decade, we consumers spent a big portion of our time searching for news and seeking it out on portals and news sites. Now news finds us. We discover it from friends, colleagues, and people with whom we share intellectual interests. It’s as if on every corner one of our friends is a 1900s paperboy shouting headlines along with their personal take on the news in question. The news is delivered right to us in our personalized feeds and streams.

Social design makes the web feel more familiar. We tend to refer to readers and viewers as consumers, and that’s not only because they consume the content that is presented or pay for it as customers; it’s also because they’re consumed by the noise that the news creates. Social design adds a layer that acts as a filter for the noise.

Stories have certainly integrated social components so far, whether it’s the ability of a consumer to share a story with friends or contribute her two cents in the comments section. But how can social design be integrated into the structure of a story? Being able to share news or see what your friends have said about the piece is only scratching the surface. More importantly, how can social design play nice with other components discussed here? How do you make stories that are not just social, but also contextual — and, importantly, personal?

3. Personalization

One of the benefits of social layering on the web is the ability to personalize news delivery and provide social context for a user reading a story. A user can be presented with stories based on what their social connections have shared using applications like Flipboard, Zite, Trove, and many others. Those services incorporate social data to learn what it is you may be interested in reading about, adding a layer of customization to news consumption. Based on your personal interests, you are able to get your own version of the news. It’s like being able to customize a newscast with only segments you’re interested in, or to have only the sports section of the local newspaper delivered to your porch…times ten.

How can we serve consumers’ needs by delivering a story in a format they prefer, while avoiding the danger of creating news consumers who only read about things they want to know (and not news they should know)? Those are big questions. One answer could have to do with format: enabling users to consume news in a format or style they prefer, enabling them to create their own personalized article design that suits their needs. Whatever it looks like, personalization is not only important in enabling users to get content in a compelling format. It’s also crucial from the business perspective: It enables publishers to learn more about their audiences to better serve them through forms of advertising, deals, and services that are just as relevant and personalized.

4. Mobile

Tomorrow’s story will be designed for the mobile news consumer. Access to smartphones will only continue to grow, and story design and format will likely cater increasingly to mobile users. Designers will also take into account the features of the platform the consumer is on and their behavior when they are consuming the content — how users interact with stories from their mobile devices, using touch-screen technology and actions. We’re already seeing mobile and tablet design influence web design.

These are challenges not only of design, but of content creation. Journalists may begin to produce more abbreviated pieces for small-screen devices, while enabling longform to thrive on tablet-sized screens. Though journalists have produced content from the field for years, the advancement of mobile technology will continue to streamline this process. Mobile publication is already integrated into content management platforms, and companies like the BBC are working on applications that will enable users to broadcast live from their mobile phones.

5. Participation

Citizens enabled by social platforms are covering revolutions on mobile devices. Users are also able to easily contribute to a story by snapping a picture or video and uploading it with their mobile devices to a platform like iReport. Tomorrow’s article will enable people to be equal participants in the story creation process.

Increasingly, participation will mean far more than simple consumption: more than being cast aside as a passive audience that can contribute to the conversation only by filing a comment below a published story (pending moderator approval). The likes of iReport, The Huffington Post’s “contribute” feature, or The New York Daily News’ recent uPhoto Olapic integration — which enables people to easily upload their photos to a story slideshow and share photos they’ve already uploaded to Facebook, Flickr, and elsewhere — are just the beginning. To harness participatory journalism, these features should no longer be an afterthought in the design, but a core component of it. As Jay Rosen recently put it, “It isn’t true that everyone is a journalist. But a lot more people are involved.”

Image by Holger Zscheyge used under a Creative Commons license.

January 28 2011

17:00

MoJo’s Egypt explainer: future-of-context ideas in action

This week’s unrest in Egypt brings new relevance to an old question: How do you cover an event about which most of your readers have little or no background knowledge?

Mother Jones has found one good way to do that. Its national reporter, Nick Baumann, has produced a kind of on-the-fly topic page about this week’s uprising, featuring a running description of events fleshed out with background explanation, historical context, multimedia features, and analysis. The page breaks itself down into several core categories:

The Basics
What’s Happening?
Why are Egyptians unhappy?
How did this all start?
Why is this more complicated for the US than Tunisia was?
How do I follow what’s happening in real-time?
What’s the latest?

The page also contains, as of this posting, 14 updates informing readers of new developments since the page was first started (at 1 p.m. on Tuesday) and pointing them to particularly helpful and read-worthy pieces of reporting and analysis on other sites.

In all, the MoJo page pretty much takes the Demand Media approach to the production of market-driven content — right down to its content-farm-tastic title: “What’s Happening in Egypt Explained.” The crucial difference, though, is that its content is curated by an expert journalist. In that, the page has a lot in common with the kind of curation done, by Andrew Sullivan and the HuffPost’s Nico Pitney and many others, during 2009’s uprising in Iran. That coverage, though, had an improvised, organic sense to it: We’re figuring this out as we go along. It felt frenzied. The MoJo page, on the other hand, conveys the opposite sensibility: It exudes calmness and control. Here’s what you need to know.

And that’s a significant distinction, because it’s one that can be attributed to something incredibly simple: the page’s layout. The basic design decision MoJo made in creating its Egypt explainer — breaking it down into categories, encyclopedia-style — imposes an order that more traditional attempts at dynamic coverage (liveblogs, Twitter lists, etc.) often lack.

At the same time, the page also extends the scope of traditional coverage. With their space constraints, traditional news narratives have generally had to find artful ways to cater, and appeal, to the widest possible swath of readers. (To wit: that nearly parenthetical explanation of a story’s context, usually tacked onto a text story’s lede or a nut graf.) The web’s limitless space, though, changes the whole narrative proposition of the explainer: The MoJo page rethinks explanation as “information” rather than “narrative.” It’s not trying to be a story so much as a summary. And what’s resulted is a fascinating fusion between a liveblog and a Wikipedia entry.

The MoJo page, of course, isn’t alone in producing creative, context-focused journalism: From topic pages to backgrounders, videos to video games, news organizations are experimenting with lots of exciting approaches to explanation. And it’s certainly not the only admirable explainer detailing the events in Egypt. What’s most noteworthy about MoJo’s Egypt coverage isn’t its novelty so much as its adaptability: It acknowledges, implicitly, that audience members might come into it armed with highly discrepant levels of background information. It’s casually broken down the explainer’s content according to tiers of expertise, as it explains at the top of the page:

This was originally posted at 1:00 p.m. EST on Tuesday. It is being updated and is being kept near the top of the blog. Some of the information near the top of the post may be outdated, and if you’ve been following the story closely, the information at the top will definitely seem very basic. So please scroll to the bottom of the post for the latest.

In a June episode of their “Rebooting the News” podcast, Jay Rosen and Dave Winer discussed the challenge of serving users who come into a story with varying levels of contextual knowledge. One solution they tossed around: a tiered system of news narrative, with Level 1, for example, being aimed at users who come into a story with little to no background knowledge, Level 4 for experts who simply want to learn of new developments in a story.

The MoJo page is a great example of that kind of thinking put to work. The sections Baumann’s used to organize the explainer’s content allow users to have a kind of choose-your-own adventure interaction with the information offered. They convey, overall, a sense of permissiveness. Know only a little about Egyptian politics? Hey, that’s cool. Know nothing at all? That’s cool, too.

And that’s another noteworthy element of MoJo’s Egypt explainer: It’s welcoming. And it doesn’t, you know, judge.

That’s not a minor thing, for the major reason that stories, when you lack the context to understand them, can be incredibly intimidating. If you don’t know much about Egypt’s current political landscape — or, for that matter, about the world financial system or the recent history of Afghanistan or the workings of Congress — you have very little incentive to read, let alone follow, a story about it. In news, one of the biggest barriers to entry can be simple intimidation. We talk a lot about “engagement” in journalism; one of the most fundamental ways to engage an audience, though, is by doing something incredibly simple: producing work that accommodates ignorance.

December 19 2010

18:00

Games, systems and context in journalism at News Rewired

I went to News Rewired on Thursday, along with dozens of other journalists and folk concerned in various ways with news production. Some threads that ran through the day for me were discussions of how we publish our data (and allow others to do the same), how we link our stories together with each other and the rest of the web, and how we can help our readers to explore context around our stories.

One session focused heavily on SEO for specialist organisations, but included a few sharp lessons for all news organisations. Frank Gosch spoke about the importance of ensuring your site’s RSS feeds are up to date and allow other people to easily subscribe to and even republish your content. Instead of clinging tight to content, it’s good for your search rankings to let other people spread it around.

James Lowery echoed this theme, suggesting that publishers, like governments, should look at providing and publishing their data in re-usable, open formats like XML. It’s easy for data journalists to get hung up on how local councils, for instance, are publishing their data in PDFs, but to miss how our own news organisations are putting out our stories, visualisations and even datasets in formats that limit or even prevent re-use and mashup.

Following on from that, in the session on linked data and the semantic web, Martin Belam spoke about the Guardian’s API, which can be queried to return stories on particular subjects and which is starting to use unique identifiers (MusicBrainz IDs and ISBNs, for instance) to allow lists of stories to be pulled out not simply by text string but using a meaningful identification system. He added that publishers have to licence content in a meaningful way, so that it can be reused widely without running into legal issues.
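
To make that concrete, here is a hedged sketch of querying a content API of this kind. The endpoint and parameter names follow the Guardian Open Platform’s public /search interface, but treat the specifics (and the canned response below) as illustrative rather than a guaranteed contract:

```python
# Sketch: build a tag-filtered search URL and pull headlines out of a
# Guardian-style JSON response. The response here is canned so the
# example runs without a network call or a real API key.
import json
from urllib.parse import urlencode

API_ROOT = "https://content.guardianapis.com/search"

def build_query(tag, api_key, page_size=10):
    """Return a /search URL filtering stories by a meaningful tag identifier."""
    params = {"tag": tag, "page-size": page_size, "api-key": api_key}
    return f"{API_ROOT}?{urlencode(params)}"

def story_titles(raw_json):
    """Extract headline fields from a search response."""
    body = json.loads(raw_json)["response"]
    return [item["webTitle"] for item in body["results"]]

url = build_query("music/musicbrainz", "test")
canned = '{"response": {"status": "ok", "results": [{"webTitle": "Example story"}]}}'
print(url)
print(story_titles(canned))
```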

Silver Oliver said that semantically tagged data, linked data, creates opportunities for pulling in contextual information for our stories from all sorts of other sources. And conversely, if we semantically tag our stories and make it possible for other people to re-use them, we’ll start to see our content popping up in unexpected ways and places.
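
One widely used convention for this kind of semantic tagging today is schema.org markup embedded in the page as JSON-LD. The NewsArticle vocabulary below is real; the article itself, and its field values, are invented for illustration:

```python
# Sketch: a schema.org NewsArticle description, serialized as the
# JSON-LD <script> block a publisher would embed in the story page so
# machines can identify the article's subjects, author, and date.
import json

article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2010-12-19",
    "about": [{"@type": "Thing", "name": "linked data"}],
    "author": {"@type": "Person", "name": "Jane Reporter"},
}

snippet = f'<script type="application/ld+json">{json.dumps(article)}</script>'
print(snippet)
```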

And in the long term, he suggested, we’ll start to see people following stories completely independently of platform, medium or brand. Tracking a linked data tag (if that’s the right word) and following what’s new, what’s interesting, and what will work on whatever device I happen to have in my hand right now and whatever connection I’m currently on – images, video, audio, text, interactives; wifi, 3G, EDGE, offline. Regardless of who made it.

And this is part of the ongoing move towards creating a web that understands not only objects but also relationships, a world of meaningful nouns and verbs rather than text strings and many-to-many tables. It’s impossible to predict what will come from these developments, but – as an example – it’s not hard to imagine being able to take a photo of a front page on a newsstand and use it to search online for the story it refers to. And the results of that search might have nothing to do with the newspaper brand.

That’s the down side to all this. News consumption – already massively decentralised thanks to the social web – is likely to drift even further away from the cosy silos of news brands (with the honourable exception of paywalled gardens, perhaps). What can individual journalists and news organisations offer that the cloud can’t?

One exciting answer lies in the last session of the day, which looked at journalism and games. I wrote some time ago about ways news organisations were harnessing games, and could do in the future – and the opportunities are now starting to take shape. With constant calls for news organisations to add context to stories, it’s easy to miss the possibility that, as Philip Trippenbach said at News Rewired, you can’t explain a system with a story:

Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work.

Many of the issues we cover – climate change, government cuts, the deficit – at macro level are systems that could be interestingly and interactively explored with games. (Like this climate change game here, for instance.) Other stories can be articulated and broadened through games in a way that allows for real empathy between the reader/player and the subject because they are experiential rather than intellectual. (Like Escape from Woomera.)

Games allow players to explore systems, scenarios and entire universes in detail, prodding their limits and discovering their flaws and hidden logic. They can be intriguing, tricky, challenging, educational, complex like the best stories can be, but they’re also fun to experience, unlike so much news content that has a tendency to feel like work.

(By the by, this is true not just of computer and console games but also of live, tabletop, board and social games of all sorts – there are rich veins of community journalism that could be developed in these areas too, as the Rochester Democrat and Chronicle is hoping to prove for a second time.)

So the big things to take away from News Rewired, for me?

  • The systems within which we do journalism are changing, and the semantic web will most likely bring another seismic change in news consumption and production.
  • It’s going to be increasingly important for us to produce content that both takes advantage of these new technologies and allows others to use these technologies to take advantage of it.
  • And by tapping into the interactive possibilities of the internet through games, we can help our readers explore complex systems that don’t lend themselves to simple stories.

Oh, and some very decent whisky.

Cross-posted at Metamedia.

December 15 2010

16:55

Introducing Sourcerer: A Context Management System

If you want to follow the news, the World Wide Web has a lot to offer: a wide variety of information sources, powerful search tools, and no shortage of sites where people can voice their opinions.

At the same time, though, the Web can be overwhelming. Hundreds of links turn up in a Google search. Relevant information can be scattered across dozens of sites. Online conversations often generate more heat than light. And if you have a question about a news topic, it's hard to find the answer.

Wouldn't it be nice if there were a website that made it easier to keep up with and understand the news?

Soon, there could be. Let me introduce you to Sourcerer, a website prototype developed this fall by a team of graduate journalism students, including five Knight "programmer-journalist" scholarship winners.

Sourcerer-homepage-withborder.jpg

Sourcerer is a "context management system" designed to help people learn more about a topic by asking questions, answering them, backing up those answers with links, and navigating through previous coverage via a timeline.

Sourcerer emerged out of Medill's Community Media Innovation Project class, which studied the news and information needs of local audiences and the challenges facing online publishers who want to serve them.

Two of the key problems identified by the students:

  • People who don't follow every twist and turn in an ongoing story -- especially one that has deep historical context, such as the achievement gap between white and minority students in public schools -- have difficulty understanding the context of that story. Others have noted this problem as well: Matt Thompson, now of NPR, has written and spoken eloquently about "how journalists might start winning at the context game."
  • At the same time, in every community, there are knowledgeable citizens who dominate discussion boards and comment threads -- often mixing fact with opinion and intimidating those who want to learn more but are afraid of displaying their lack of understanding by asking questions. The Medill team wanted communities to benefit from the expertise of these knowledgeable citizens while creating an environment where discussion could be organized around facts, not just opinions.

Sourcerer seeks to serve people just trying to understand an issue as well as those who already have that understanding. It could be launched as part of an existing news site, or as a collaboration among multiple publishers covering a community or topic.

While the site is not quite ready for a public rollout yet, let me walk you through Sourcerer's key features:

1) Topics

The Medill team concluded that Sourcerer should be organized around topics, rather than stories. Their first challenge was figuring out how to present a complex topic in a way that is not intimidating to someone who hasn't followed the story before. After testing several approaches with users, the students settled on short summaries of key elements, with bold-face highlights and links to external sites providing background.

Sourcerer-topics.jpg

2) Questions

The second key element of Sourcerer is an interface for people to ask questions about the topic. Like many question-and-answer sites, Sourcerer allows users to "upvote" questions they think are particularly good. Questions with the most votes appear at the top, and a Sourcerer site covering multiple topics would highlight the most popular questions.
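Under the hood, that ordering is just a vote-weighted sort. Here's a minimal Python sketch of the idea (the `Question` class and `top_questions` helper are hypothetical names for illustration, not Sourcerer's actual code):

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    votes: int = 0  # incremented each time a user "upvotes"

def top_questions(questions, limit=10):
    """Return the most-upvoted questions first, the ordering
    Sourcerer's question list (and its cross-topic highlights) uses."""
    return sorted(questions, key=lambda q: q.votes, reverse=True)[:limit]
```

A site covering multiple topics could run the same helper over the combined question pool to surface the most popular questions overall.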

Sourcerer-questions-and-clip.jpg

3) Answers and clips

What differentiates Sourcerer from other Q&A sites is the fact that answers can be posted only if the answerer provides a link to source material backing up the answer. A key feature of the site is the News Clipper, which enables users to provide a link and also grab a key excerpt of the linked-to page for insertion into the answer on Sourcerer.
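In other words, the posting path enforces a cite-your-source rule before anything is saved. A small sketch of that validation in Python (Sourcerer's real implementation isn't public, so the function names, the URL check, and the excerpt field standing in for the News Clipper are all assumptions):

```python
import re

# Crude link check for illustration; a real site would validate more carefully.
URL_PATTERN = re.compile(r"https?://\S+$")

def post_answer(text, source_url, excerpt=""):
    """Accept an answer only when it carries a source link, mirroring
    the rule that every answer must be backed by linked source material."""
    if not source_url or not URL_PATTERN.match(source_url):
        raise ValueError("an answer needs a link to source material")
    # The excerpt plays the role of the News Clipper's grabbed passage.
    return {"text": text, "source": source_url, "excerpt": excerpt}
```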

4) Voting and flagging

In addition to "upvoting" questions, Sourcerer users can also render their opinions about the answers. As with questions, users can register a "thumbs up" for answers they approve of. They can also flag answers as opinions rather than facts.

5) The timeline

One of the coolest features of Sourcerer is a timeline constructed out of the articles that are linked from the site. The timeline is built dynamically -- as answerers provide links to source material, the linked-to articles are added to the timeline.

The timeline displays the articles as a series of vertical bars. The higher the bar, the more popular the linked-to article. The timeline also shades the articles based on whether users deem them factual or opinion-based.

Sourcerer-timeline.jpg

The timeline displays the articles in chronological order, left to right. Mousing over the timeline displays the article headline and summary. The beauty of this interface is that it provides an easy way to navigate chronologically through articles published about a particular topic -- even articles published on multiple external sites.
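Put together, the timeline is just the set of clipped links sorted by publication date, with each article's popularity and fact/opinion flag carried along for rendering as bar height and shade. A rough Python sketch of that transformation (the `Clip` class and its field names are assumptions for illustration, not Sourcerer's code):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    headline: str
    date: str      # ISO date, e.g. "2010-11-03", so string sort = date sort
    votes: int     # popularity, rendered as the bar's height
    factual: bool  # users' fact-vs-opinion flag, rendered as shading

def build_timeline(clips):
    """Order linked articles chronologically, left to right, producing
    one bar per article the way Sourcerer's timeline view does."""
    return [
        {
            "headline": c.headline,   # shown on mouseover
            "date": c.date,
            "height": c.votes,
            "shade": "fact" if c.factual else "opinion",
        }
        for c in sorted(clips, key=lambda c: c.date)
    ]
```

Because the timeline is rebuilt from the links themselves, a new bar appears automatically whenever an answerer clips a fresh source, even one from an external site.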

You can get a sense of how Sourcerer works by checking out a screencast prepared by Shane Shifflett of the Sourcerer development team. The other developers were Steven Melendez, Geoffrey Hing and Andrew Paley.

We're looking for sites -- and users -- interested in participating in a beta launch. If you're interested, go to Sourcerer.US and sign up.

If you want to know a lot more about Sourcerer, the class' final report provides much more detail about the site as well as the research that led to its development. The report includes a lot of good advice for hyperlocal publishers about audience research and revenue strategies. The class also produced a separate revenue "cookbook" for hyperlocal publishers.

You can see the students present Sourcerer and their other findings and recommendations here. For even more background and context, check out LocalFourth.com, the blog the students maintained during the class. The "Fourth" is a reference to the press -- the Fourth Estate.

October 29 2010

15:00

Roanoke Times wikifies a series about a highway

When a newspaper decides to dedicate months of a reporter’s time, plus the efforts of the tech team, to a project, there’s usually a whiff of scandal in the air. But in the case of The Roanoke Times’ package on Interstate Highway 81, I-81: Facts, Feat and the Future, it was more or less reporting for the sake of reporting.

“There is a rock in the field, let’s turn over the rock and see what’s under there,” reporter Jeff Sturgeon explained to me, describing how he approached the story. “We simply said, here’s this major community asset — how well is it working?”

The four-lane highway bisects the community of about 300,000 people. Almost 60,000 people drive the route every day, 1 in 4 of whom are truckers — a source of anxiety for lay commuters. “Many people are scared to drive on the Interstate,” Sturgeon said. “Our goal was to find something newsworthy to tell our readers about our road.”

In the end, Sturgeon’s data-driven reporting revealed that, despite fears, the road is statistically safe and truckers are some of the best drivers on it. Where perhaps just a few years ago Sturgeon’s story would have come and gone in the print edition of the paper, it now lives on at its own context-rich online home. The Times created a hub for the series that lets users interact with the data, read all the stories easily, and leave comments. The web component is a finalist for a Knight Public Service Award, presented by the Online News Association. The winner will be announced Saturday at ONA’s annual conference.

The project is a traditional piece of journalism in the sense that it ran as a series in the print edition of the paper, written by a trained reporter. But both the topic and online presentation are ideal for a news environment steeped in the web. Sturgeon’s story isn’t a scandal-driven, passing tale, but a systematic look at an important local topic, updated as new public data is released. Rather than being scattered across the site, the stories’ URLs are grouped alongside video, user comments, and an interactive map of accident data. It’s a nice example of the type of journalism Matt Thompson might call Wikipedia-style journalism.

Building the story over time was valuable for readers, and for Sturgeon as well. “We wanted the site to sort of grow with the series,” he said. “The commenters had several impacts on me. They were sort of cheering me on. They, in part, sustained me, over the almost one year to get this project done. It created a connection that kept me going as I slopped through months and months of research and writing and everything it takes to get a package together.”

As new accident data becomes available, Sturgeon says they’ll continue to publish it on the interactive and let users comment. “This road is in our future,” he said.

August 26 2010

14:00

Project Argo blog is for participants, but an interesting read for outsiders

In the run-up to the launch of the D.C. local site TBD, the editors let future readers peek behind the curtain through a placeholder blog that teased new hires and plans for the project. The blog also did a great job of generating buzz; we tweeted quite a few links to the site.

So when Megan pointed me to a blog from another not-yet-launched project, NPR’s Argo Project, I assumed it would serve a similar marketing end. But this one’s different: The blog’s lead writer, editorial project manager Matt Thompson, is writing directly to the new Argo bloggers at 12 NPR member stations. Argo is a new cross-country network of reported blogs, and many of the journalists hired to run them need some tactical training in how to run a successful Argo site.

Think of it as an in-house blog that just happens to be open to the public; even though the blog is meant for NPR staff, it’s a useful read for anyone interested in the future of news or in best practices for launching a news blog. Here are a few of Thompson’s lessons:

1. You need a plan

One of the best posts on Thompson’s blog is a pre-launch checklist. (He’s since posted a revised version of the checklist on Argo’s impressive and useful docs site.) Thompson lays out a step-by-step guide for Argo participants, but it’s one that anyone about to launch a new site could use (particularly if you’re using WordPress, which Argo is).

Some of the best: do a “photowalk” for your beat (“try to capture images of things you’ll be posting about frequently”); build out your metadata beforehand (define tags and categories before launch to straighten up your taxonomy); and reach out to the best Creative Commons photographers on your beat (to ensure a happy group of free content providers).

2. Follow by example, steal from others

Blogging isn’t new, and Argo isn’t pretending it’s creating a new format. In fact, Thompson is urging bloggers to follow the examples of their best predecessors. He points readers to the work of trailblazers like Marc Ambinder, Nick Denton, and Andrew Sullivan. Ambinder gets a nod for his thoughts on journalism as an industry. A Nick Denton memo pushes for context (one of Thompson’s longstanding interests). And Andrew Sullivan gets praise for his pacing. The three writers certainly have different styles, different content focuses, and different missions, but Thompson has plucked out valuable advice for all of his bloggers.

3. Tactics are teachable

Thompson has a running series of posts called “dark secrets” that offer insight into how successful blogs engage an audience. Use photos. Watch your headlines. Where should you place that hyperlink? He’s got a good post on that. They’re the kinds of insights newspapers, magazines, and radio stations have compiled about their own media over time. But for this new-to-many platform, they make for helpful tips.

4. Blogging is a craft

The category Thompson posts to most frequently is “blogging technique.” His points are great: Find your morning routine, your rhythms, and your pace. Check out his post on “The blogger’s first month.” Blogging isn’t journalism for dummies — it’s a craft with its own set of practices and ways to excel.

August 19 2010

15:00

The kids are alright, part 2: What news organizations can do to attract, and keep, young consumers

[Christopher Sopher is a senior at the University of North Carolina, where he is a Morehead-Cain Scholar and a Truman Scholar. He has been a multimedia editor of the Daily Tar Heel and has worked for the Knight Foundation. His studies have focused on young people's consumption of news and participation in civic life, which have resulted in both a formal report and an ongoing blog, Younger Thinking.

We asked Chris to adapt some of his most relevant findings for the Lab, which he kindly agreed to do. We posted Part 1 yesterday; below is Part 2. Ed.]

Now that I have exhorted all of you to care about young people and their relationship with the news media, it’s worth examining a few of the most pertinent ideas about getting more of my peers engaged: the gap between young people’s reported interest in issues and their interest in news, the need for tools to help organize the information flow, and the crucial role of news in schools and news literacy.

A gap between interest and news consumption

The data seem to suggest that young people are simultaneously interested and uninterested in the world around them. For example, a 2007 Pew survey [pdf] found that 85 percent of 18-to-29-year-olds reported being interested in “keeping up with national affairs” — a significant increase from 1999. Yet in a 2008 study [pdf], just 33 percent of 18-to-24-year-olds (and 47 percent of people aged 25 to 34) said they enjoyed keeping up with news “a lot.” Young people also tend to score lower on surveys of political knowledge — all of which suggests that their information habits are not matching their reported interests.

There are a few compelling explanations for this apparent contradiction (beyond people’s general desire to provide socially agreeable responses). The first is that many young people may not see a consistent connection between regularly “getting the news” and staying informed about the issues that interest them. If we accept that most young people get their news at random intervals (and the overwhelming body of evidence suggests that this is the case), it’s easy to see how reading a particular day’s New York Times story about health care reform, for example, might be rather confusing if you haven’t been following the coverage regularly.

Many young people also report feelings of monotony with day-to-day issue coverage and a distaste for the process focus of most politics coverage. Some share the sentiments (about which Gina Chen has written here at the Lab) of the now-famous, if anonymous, college student who said, “If the news is that important, it will find me.” The cumulative effect of these trends is that young people go elsewhere to “keep up”: to Wikipedia articles, to friends and family, to individual pieces of particularly helpful content shared through social networks.

The “too much information” problem

Several studies have highlighted the fact that many young people feel overwhelmed by the deluge of information presented on news sites. (My two favorite pieces on this are both from the Media Management Center, found here and here [pdf].)

This sentiment is understandable: One day when I counted, the New York Times’ homepage offered 28 stories across four columns above the scroll cutoff and another 95 below it — for a total of 123 stories, along with 66 navigation links on the lefthand bar. CNN.com also had 28 stories on top and 127 total, along with 15 navigation links. Imagine a newspaper with that many choices.

The point is that news sites need to be designed to help users manage and restrict the wealth of information, rather than presenting them with all of it at once. People can and do curate on their own, of course, through iGoogle, Twitter, RSS, and social networks both online and off — but those efforts leave behind the vast majority of news outlets. Better design allows news organizations to include the kind of context and background and explanation — not to mention personalization features — that younger audiences find helpful. That idea isn’t new, but its importance for young people cannot be overstated.

Schools, news, and news literacy

News organizations need to learn from soda and snack producers and systematically infiltrate schools across the country with their products. There’s strong evidence that news-based, experiential, and interactive course design [pdf] — as well as the use of news in classrooms and the presence of strong student-produced publications — can both increase the likelihood that students will continue to seek news regularly in the future.

Many teachers are already using news [pdf] in their classrooms, but face the pressures of standardization and an apparent lack of support from administrations. A 2007 Carnegie-Knight Task Force study [pdf] also found that most teachers who do use news content in their curricula direct their students to online national outlets (such as CNN or NYTimes.com) rather than local sites, which suggests that local news organizations need to focus on building a web-based presence in schools. The Times Learning Network is an excellent model.

And when news media finally fill school halls like so much Pepsi (or, now, fruit juice), young people themselves will also need help to navigate content and become savvy consumers, which is where news literacy programs become important. The Lab’s own Megan Garber has explained their value eloquently in a piece for the Columbia Journalism Review: “The bottom line: news organizations need to make a point of seeking out young people — and of explaining to them what they do and, perhaps even more importantly, why they do it. News literacy offers news organizations the opportunity to essentially re-brand themselves.” The News Literacy Project, started by a Pulitzer-winning former Los Angeles Times reporter, is a leading example.

The point of these ideas is that there are significant but entirely surmountable obstacles to getting more young people engaged with news media — a goal with nearly universal benefits that has received far too little attention from news organizations.

I’ll conclude with a quote from NYU professor Jay Rosen, buried inside the 2005 book Tuned Out: “Students don’t grow up with the religion of journalism; they don’t imbibe it in the same way that students used to. Some do, but a lot don’t.” Changing that is the difficult but urgent challenge. I don’t want to be that guy who says “_____ will save journalism,” so I’ll just say this: It’s really, really, really important.

And I should probably mention that there are hundreds of recent journalism school graduates who would be more than willing to help.

Image by Paul Mayne, used under a Creative Commons license.

August 16 2010

14:30

The Guardian launches governmental pledge-tracking tool

Since it came to office nearly 100 days ago, Britain’s coalition government — a team-up between Conservatives and Liberal Democrats that had the potential to be awkward and ineffective, but has instead (if The Economist’s current cover story is to be believed) emerged as “a radical force” on the world stage — has made 435 pledges, big and small, to its constituents.

In the past, those pledges might have gone the way of so many campaign promises: broken. But no matter, because they would also have been largely forgotten.

The Guardian, though, in keeping with its status as a data journalism pioneer, has released a tool that tries to solve the former problem by way of the latter. Its pledge-tracker, a sortable database of the coalition’s various promises, monitors the myriad pledges made according to their individual status of fulfillment: “In Progress,” “In Trouble,” “Kept,” “Not Kept,” etc. The pledges tracked are sortable by topic (civil liberties, education, transport, security, etc.) as well as by the party that initially proposed them. They’re also sortable — intriguingly, from a future-of-context perspective — according to “difficulty level,” with pledges categorized as “difficult,” “straightforward,” or “vague.”

Status is the key metric, though, and assessments of completion are marked visually as well as in text. The “In Progress” note shows up in green, for example; the “Not Kept” shows up in red. Political accountability, meet traffic-light universality.
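The underlying data model is straightforward: each pledge carries a topic, a party, a difficulty rating, and a status that maps to a colour. Here's a hypothetical Python sketch (The Guardian hasn't published its code, so the field names and the filter helper are assumptions; the amber colour for “In Trouble” is also my guess, since only green and red are specified above):

```python
# Status labels mapped to traffic-light colours.
STATUS_COLOURS = {
    "In Progress": "green",
    "Kept": "green",
    "In Trouble": "amber",  # assumption: article specifies only green and red
    "Not Kept": "red",
}

def filter_pledges(pledges, topic=None, party=None, difficulty=None):
    """Narrow the full pledge list the way the tool's sort controls do:
    by topic, by proposing party, or by difficulty level."""
    def keep(p):
        return ((topic is None or p["topic"] == topic)
                and (party is None or p["party"] == party)
                and (difficulty is None or p["difficulty"] == difficulty))
    return [p for p in pledges if keep(p)]
```

With more than 400 pledges in the set, these filters are what make the “slightly playful” exploration Jeffery describes practical.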

The tool “needs to be slightly playful,” notes Simon Jeffery, The Guardian’s story producer, who oversaw the tool’s design and implementation. “You need to let the person sitting at the computer actually explore it and look at what they’re interested in — because there are over 400 things in there.”

The idea was inspired, Jeffery wrote in a blog post explaining the tool, by PolitiFact’s Obameter, which uses a similar framework for keeping the American president accountable for individual promises made. Jeffery came up with the idea of a British-government version after May’s general election, which not only gave the U.S.’s election a run for its money in terms of political drama, but also occasioned several interactive projects from the paper’s editorial staff. They wanted to keep that multimedia trajectory going. And when the cobbled-together new government’s manifesto for action — a list of promises agreed to and offered by the coalition — was released as a single document, the journalists had, essentially, an instant data set.

“And the idea just came from there,” Jeffery told me. “It seemed almost like a purpose-made opportunity.”

Jeffery began collecting the data for the pledge-tracker at the beginning of June, cutting and pasting from the joint manifesto’s PDF documents. Yes, manually. (“That was…not much fun.”) In a tool like this — which, like PolitiFact’s work, merges subjective and objective approaches to accountability — context is crucial. Which is why the pledge-tracking tool includes with each pledge a “Context” section: “some room to explain what this all means,” Jeffery says. That allows for a bit of gray (or, since we’re talking about The Guardian, grey) to seep, productively, into the normally black-and-white constraints that define so much data journalism. One health care-related pledge, for example — “a 24/7 urgent care service with a single number for every kind of care” — offers this helpful context: “The Department of Health draft structural reform plan says preparations began in July 2010 and a new 111 number for 24/7 care will be operational in April 2012.” It also offers, for more background, a link to the reform plan.

To aggregate that contextual information, Jeffery consulted with colleagues who, by virtue of daily reporting, are experts on immigration, the economy, and the other topics covered by the manifesto’s pledges. “So I was able to work with them and just say, ‘Do you know about this?’ ‘Do you know about that?’ and follow things up.”

The tool isn’t perfect, Jeffery notes; it’s intended to be “an ongoing thing.” The idea is to provide accountability that is, in particular, dynamic: a mechanism that allows journalists and everyone else to “go back to it on a weekly or fortnightly basis and look at what has been done — and what hasn’t been done.” Metrics may change, he says, as the political situation does. In October, for example, the coalition government will conclude an external spending review that will help crystallize its upcoming budget, and thus political, priorities — a perfect occasion for tracker-based follow-up stories. But the goal for the moment is to gather feedback and work out bugs, “rather than having a perfectly finished product,” Jeffery says. “So it’s a living thing.”

August 06 2010

14:30

This Week in Review: Newsweek’s new owner, WikiLeaks and context, and Tumblr’s media trendiness

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

A newbie owner for Newsweek: This week was a big one for Newsweek: After being on the block since May, it was sold to Sidney Harman, a 92-year-old audio equipment mogul who’s married to a Democratic congresswoman and owns no other media properties. The price: $1, plus the responsibility for Newsweek’s liabilities, estimated at about $70 million. The magazine’s editor, Jon Meacham, is leaving with the sale, though he told Yahoo’s Michael Calderone that he had decided in June to leave when Newsweek was sold, no matter who the new owners were. Harman’s age and background and the low sale price made for quite a few biting jokes about the sale on Twitter, dutifully chronicled for us by Slate’s Jack Shafer.

Harman didn’t help himself out much by telling The New York Times he doesn’t have a plan for Newsweek. In a pair of sharp articles, The Daily Beast painted a grim picture of what exactly Harman’s getting himself into: The magazine’s revenue dropped 38 percent from 2007 to 2009, and it’s losing money in all of its core areas. The Beast noted that with no other media properties, Harman doesn’t have the synergy potential that the magazine’s previous owners, The Washington Post Co., said Newsweek would need. So why was he chosen? Apparently, he genuinely cares about the publication, and he’s planning the least number of layoffs. (That, and the other bidders weren’t too attractive, either.) PaidContent reported that his primary goal is to bring the magazine back to stability while he sets up a succession plan.

Everybody has ideas of what Harman should do with his newest plaything: Jack Shafer tells him to treat Newsweek as a magazine to be saved rather than a fun vanity project, and MarketWatch’s Jon Friedman wants to see Newsweek drop the opinion-and-analysis approach that it’s been aping from The Economist, as do several of the observers Politico talked to. (DailyFinance’s Jeff Bercovici just wants Harman to make it a little less excruciatingly dull to read.) Two other Politico sources — new media guru Jeff Jarvis and former Newsweek Tumblr wizard Mark Coatney — want to see Newsweek shift away from a print focus and figure out how to be vital on the web. Media consultant Ken Doctor proposes pushing forward on tablet editions, multimedia and interacting with readers online as the future of the magazine. Jarvis also has some pieces of advice for magazines in general, urging them to resist the iPad’s siren song and get local, among other things.

Poynter’s Rick Edmonds has the most intriguing idea for a new Newsweek — going nonprofit. That would likely require refining its editorial mission to a narrower focus on national and international affairs, with the pop culture analysis getting cut out, Edmonds says, but he believes Harman might actually be considering a nonprofit approach. Ken Doctor suggests that with Harman’s statements about the relative unimportance of turning a profit from the magazine, he’s already blurring the lines between a for-profit and nonprofit organization.

Meanwhile, others were busy speculating about who might be the editor to lead Newsweek into its next incarnation. Names thrown out included Newsweek International editor Fareed Zakaria, Newsweek.com editor Mark Miller, Slate Group editor Jacob Weisberg, and former Time editor and CNN CEO Walter Isaacson, though Isaacson has taken himself out of consideration.

WikiLeaks and the need for context: WikiLeaks continued to see fallout from its unprecedented leak of 92,000 documents about the war in Afghanistan two weekends ago, with more cries for it to be shut down and its founder, Julian Assange, arrested, largely because its leak revealed the names of numerous Afghan informants to the U.S. Assange expressed regret for those disclosures, and WikiLeaks said it’s even asking for the Pentagon’s help in identifying and redacting names of informants in its next document dump, though the Pentagon said they haven’t heard from WikiLeaks yet. Not that the U.S. government hasn’t been trying to make contact — it demanded the documents be returned(!), and agents detained a WikiLeaks researcher at customs and then tried to talk with him again at a hacking conference this week. An Australian TV station gave a fascinating inside look at Assange’s life on the run, and Slate’s Jack Shafer contrasted Assange’s approach to leaking sensitive documents with the more government-friendly tack of traditional media outlets. WikiLeaks also had some news to report on the business-model side: It will begin collecting online micropayment donations through Flattr.

The ongoing discussion around WikiLeaks this week centered on what to do with the data it released. The Tyndall Report provided a thorough roundup of how TV news organizations responded to the leak, and several others pinned the rather ho-hum public reaction to the documents’ contents on a lack of context provided by news organizations. Former Salon editor Scott Rosenberg said the leak provides a new opportunity to shed an antiquated scoop-based definition of news and bring the reality of the war home to people. In a smart post musing on the structure of the modern news story, the Lab’s Megan Garber proposed an outlet dedicated solely to follow-up journalism, arguing that one of the biggest challenges in modern journalism is giving a sense of continuity to long-running stories. “What results is a flattening: the stories of our day, big and small, silly and significant, are leveled to the same plane, occupying the same space, essentially, in the wobbly little IKEA bookshelf that is the modular news bundle,” she wrote in a follow-up post.

Mashable also examined (in nifty infographic form!) how WikiLeaks changes the whistleblower-journalist relationship, while NPR wondered whether WikiLeaks is on the source side or the journalist side of the equation. And PBS’ Idea Lab had something handy for news orgs: A guide to helping them think about how to handle large-scale document releases.

Tumblr trends upward: The social blogging service Tumblr got the New York Times profile treatment this week, as the paper focused on its growing popularity among news organizations who are trying to jump on it as the next big social media trend — a form of communication somewhere between Twitter and blogging. The article noted that several prominent media brands have Tumblr accounts, though many of them aren’t doing much with theirs. Over at Mediaite, Anthony De Rosa, who runs the Tumblr account for the sports blog network SB Nation, said we can expect to see still more media outlets jump on the Tumblr bandwagon, especially because it rewards smart media companies who have a distinctive voice.

New York’s Nitasha Tiku tried to douse the hype, arguing that Mark Coatney’s often-mentioned Tumblr success for Newsweek “wasn’t thanks to the distribution channel on Tumblr, it was his irreverent, conversational style — and that will be difficult for the fresh-faced interns that old-media publications don’t pay to run their Tumblrs.” And Gawker gave us a graded rundown of traditional news orgs’ Tumblr accounts.

Two Internet freedom scares: From The Wall Street Journal and The New York Times this week came two stories that have had many people concerned about issues of freedom and the web. First, the Journal ran a series on the alarming amount of your online data and behavior that companies track on behalf of advertisers. Cluetrain Manifesto co-author Doc Searls argued that while the long-held ideal of intensely personal advertising is getting closer to reality, “the advertising business is going to crash up against a harsh fact: ‘consumers’ are real people, and most real people are creeped out by this stuff.” Jeff Jarvis was much less moved by the Journal’s reporting, mocking it as scaremongering that tells us nothing new. Salon’s Dan Gillmor fell closer to Searls’ outrage than to Jarvis’ nonchalance, and media consultant Judy Sims said this series is a window into a complex future for display advertising, one that media executives need to become familiar with in a hurry.

Second, the Times unleashed an avalanche of commentary in the tech world with a report that Google and Verizon are moving toward an agreement that would allow companies to pay to get their content to web users more quickly, which would effectively end the passionately held open-Internet principle known as net neutrality. The FCC quickly suspended its closed-door net neutrality meetings, and despite denials from Google and Verizon (which Wired picked apart), a whole lot of whither-the-Internet concern ensued. I’m not going to dig too deeply into this story here (I’d rather wait until we have something concrete to opine about), but here are the best quick guides to what this might mean: J-prof Dan Kennedy, Salon’s Dan Gillmor and ProPublica’s Marian Wang.

Reading roundup: Just a couple of quick items this week:

— Thanks to Poynter, we got glimpses of a couple of softer paid-content options being tried out by GlobalPost and The Spokesman-Review of Spokane, Washington, that might be sprouting up soon elsewhere, too. The Lab’s Megan Garber profiled one of the new companies offering that type of porous paywall, MediaPass, and All Things Digital’s Peter Kafka sifted through survey results to try to divine what The New York Times’ paywall might look like.

— Google’s social media platform Google Wave officially died this week, a little more than a year after it was born. Tech pioneer Dave Winer looked at why it never took off and drew a few lessons, too.

— Finally, the Lab’s Jonathan Stray took a look at some very cool things that The Guardian is doing with data journalism using free web-based tools. It’s a great case study in a blossoming area of journalism.

June 30 2010

13:00

ProPublica’s website redesign puts “future of context” ideas to work

Late last night, ProPublica launched a redesign of its website. As most site revamps tend to be, the new propublica.org is sleeker, slicker, and generally more aesthetically pleasing than its previous incarnation. But it’s also more intuitively navigable than the previous version, incorporating what the investigative outfit has learned about its users, its contributors, and its journalism over the past two and a half years. As Scott Klein, the outlet’s editor of News Applications and the site revamp’s chief architect, puts it in his intro to the redesign:

When we first sat down to design our website in early 2008, we had just started as an organization, and we had yet to publish anything. We had only a skeleton staff. We had to create something of a Potemkin village website, guessing at the kinds of coverage we’d be doing and how we’d be presenting it. In the two years since, we’ve constantly tweaked the site, and have bolted on new features that we never imagined we’d be doing.

With this redesign, we’ve tried to take everything we’ve learned, and everything we’ve added, and put it together into one nice, clean site. Our hope is that the level of design sophistication now matches the sophistication of our reporting.

The revamp has been in the works, in earnest, basically since November, Klein told me — with many of the intervening months spent not in designing and coding, but in conversing: explaining to the designers the outlet hired to help with the overhaul (the San Francisco-based firm Mule) what ProPublica does and what it’s about. Before they could design ProPublica’s new website, Mule essentially “needed to get a Masters degree,” Klein says, in the organization itself.

It seems they did. Propublica.org now feels more mission-coherent than the original site. The “Donate” button is more prominent than on the previous version — a not-so-subtle reminder that ProPublica, known as it is for the substantial funding it’s received from the Sandler Foundation, is always looking for more money, from more sources, to sustain its work. (Speaking of, scratch that: It’s “Donate” buttons, plural, that are prominent — three on the front page.)

The site has also added, in its “About Us” section, a list of FAQs — complete with (helpfully, delightfully) an audio-filled name-pronunciation guide: “Some pronounce it Pro-PUB-lica, some Pro-POOB-lica. Most folks here in the newsroom pronounce it Pro-PUB-lica. Of course we’re always happy to be mentioned, using any pronunciation.” (The ProPublica staff were inspired to write FAQs, senior editor Eric Umansky told me, by fellow-online-only-nonprofit Voice of San Diego — which posted its own FAQs last week.)

The new site tries to answer questions in the broader sense, too. In a recent episode of their “Rebooting the News” podcast, Jay Rosen and Dave Winer discussed the systemic challenges of the multi-level crowd: audiences — or users, or readers, or whatever term you prefer — who come into stories with differing amounts of prior knowledge, differing contextual appreciations, differing levels, essentially, of interest and information. One problem news organizations face — and it’s a design issue as much as a strictly editorial one — is how to engage and serve those different users through the same interface: the website.

The ProPublica redesign tries to address that issue by making consumption of the journalism its site contains a choose-your-own-adventure-type proposition. The revamped site, like its previous version, features, at the top of every page, a list of topics that have become focus areas of ProPublica investigations (currently, “Gulf Spill,” “New Orleans Cops,” “Loan Mods,” and six more). Now, though, the landing pages of those topic-based verticals (whose content is generally organized chronologically, river-of-news-style) also feature curated, interactive boxes that incorporate live data from ProPublica’s new applications. Check out the “Calif. Nurses” vertical, above — anchored by “Problem Nurses Remain on Job as Patients Suffer,” a finalist for this year’s Public Service Pulitzer. Scroll down past that top curated box, and there are further options for self-navigation: Users can filter stories according to their general significance (the “Major Stories Only” button), their personal significance (the “Unread Stories Only” button), their author, or their age.

The idea was to give users several paths into, and among, stories and topics, Klein explains. Google’s Living Stories experiment was an inspiration in that respect, he says, as was the filter-focused layout of the website of Washington’s Spokesman-Review. The changes are about making the site a personal, and even somewhat personalized, place — and about making it accessible to new users while still compelling for the old.

June 10 2010

14:00

Linking by the numbers: How news organizations are using links (or not)

In my last post, I reported on the stated linking policies of a number of large news organizations. But nothing speaks like numbers, so I also trawled through the stories on the front pages of a dozen online news outlets, counting links, looking at where they went, and how they were used.

I checked 262 stories in all, and to a certain degree, I found what you’d expect: Online-only publications were typically more likely to make good use of links in their stories. But I also found that use of links often varies wildly within the same publication, and that many organizations link mostly to their own topic pages, which are often of less obvious value.

My survey included several major international news organizations, some online-only outlets, and some more blog-like sites. Given the ongoing discussion about the value of external links, and the evident popularity of topic pages, I sorted links into “internal”, “external”, and “topic page” categories. I included only inline links, excluding “related articles” sections and sidebars.

Twelve hand-picked news outlets hardly make up an unbiased sample of the entire world of online news, nor can data from one day be counted as comprehensive. But call it a data point — or a beginning. For the truly curious, the spreadsheet contains article-level numbers and notes.
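The sorting described above — inline links into “internal,” “external,” and “topic page” buckets — can be sketched in a few lines. This is a hypothetical reconstruction, not the survey’s actual script: the topic-page URL patterns are assumptions for illustration, since each outlet uses its own URL scheme.

```python
from urllib.parse import urlparse

# Illustrative topic-page URL patterns (assumptions, not real outlet schemes).
TOPIC_PATTERNS = ("/topic/", "/topics/", "/persons/", "/organizations/")

def classify_link(href, site_domain):
    """Sort one inline link into 'internal', 'external', or 'topic page'."""
    parsed = urlparse(href)
    host = parsed.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host and host != site_domain:
        return "external"          # points off-site
    if any(pat in parsed.path for pat in TOPIC_PATTERNS):
        return "topic page"        # same site, branded topic page
    return "internal"              # same site, ordinary story link

def tally(hrefs, site_domain):
    """Count a story's inline links by category."""
    counts = {"internal": 0, "external": 0, "topic page": 0}
    for href in hrefs:
        counts[classify_link(href, site_domain)] += 1
    return counts
```

Relative URLs (no hostname) are treated as same-site, which matches how such links behave in a browser.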

Across the dozen online news outlets surveyed, the median of the per-outlet averages was 2.6 links per article. Here’s the average number of links per article for each outlet:

Source                      Internal   External   Topic Page   Total
BBC News                    0          0          0            0
CNN                         0.3        0.2        0.7          1.2
Politico                    0.7        0.2        0.6          1.5
Reuters.com                 0.1        0.2        1.4          1.7
Huffington Post             1.1        1.0        0            2.1
The Guardian                0.5        0.2        1.8          2.4
Seattle Post-Intelligencer  0.9        1.9        0            2.8
Washington Post             1.0        0.3        2.0          3.3
Christian Science Monitor   2.5        1.1        0            3.6
TechCrunch                  1.8        3.6        1.2          6.6
The New York Times          1.0        1.2        4.6          6.8
Nieman Journalism Lab       1.4        13.1       0            14.5

The median number of internal links per article was 0.95, the median number of external links was 0.65, and the median number of topic page links was also 0.65. I had expected that online-only publications would have more links, but that’s not really what we see here. TechCrunch and our own Lab articles rank quite high, but so does The New York Times. Conversely, the BBC, Reuters, CNN, and The Huffington Post don’t have a print mindset to convert from, so I would have expected them to be more web native — but they rank at the bottom.
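For readers who want to check the arithmetic, these medians fall straight out of the table. A quick sketch, with the per-outlet figures copied from the table above:

```python
from statistics import median

# Per-outlet averages from the table: (internal, external, topic page, total).
rows = [
    (0.0, 0.0, 0.0, 0.0),    # BBC News
    (0.3, 0.2, 0.7, 1.2),    # CNN
    (0.7, 0.2, 0.6, 1.5),    # Politico
    (0.1, 0.2, 1.4, 1.7),    # Reuters.com
    (1.1, 1.0, 0.0, 2.1),    # Huffington Post
    (0.5, 0.2, 1.8, 2.4),    # The Guardian
    (0.9, 1.9, 0.0, 2.8),    # Seattle Post-Intelligencer
    (1.0, 0.3, 2.0, 3.3),    # Washington Post
    (2.5, 1.1, 0.0, 3.6),    # Christian Science Monitor
    (1.8, 3.6, 1.2, 6.6),    # TechCrunch
    (1.0, 1.2, 4.6, 6.8),    # The New York Times
    (1.4, 13.1, 0.0, 14.5),  # Nieman Journalism Lab
]

internal, external, topic, total = zip(*rows)
print(round(median(internal), 2))  # 0.95
print(round(median(external), 2))  # 0.65
print(round(median(topic), 2))     # 0.65
print(round(median(total), 2))     # 2.6
```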

What’s going on here? In short, we’re seeing lots of automatically generated links to topic pages. Many organizations are using topic pages as their primary linking strategy. The majority of links from The New York Times, The Washington Post, Reuters.com, CNN, and Politico — and for some of these outlets the vast majority — were to branded topic pages.

Topic pages can be a really good idea, providing much needed context and background material for readers. But as Steve Yelvington has noted, topic pages aren’t worth much if they’re not taken seriously. He singles out “misplaced trust in automation” as a pitfall. Like many topic pages, this CNN page is nothing more than a pile of links to related stories.

It doesn’t seem very useful to spend such a high percentage of a story’s links directing readers to such pages, and I wonder about the value of heavy linking to broad topic pages in general. How much is the New York Times reader really served by a link to the HBO topic page from every story about the cable industry, or the Washington Post reader served by links on mentions of the “GOP”?

I suspect that links to topic pages are flourishing because such links can be generated by automated tools and because topic pages can be an SEO strategy, not because topic page links add great journalistic value. My suspicion is that most of the topic page links we are seeing here are automatically or semi-automatically inserted. Nothing wrong with automation — but with present technology it’s not as relevant as hand-coded links.

So what do we see when we exclude topic page links?

Excluding links to topic pages — counting only definitely hand-written links — the median number of links per article drops to 1.7. The implication is that roughly a third of the links in online news articles go to topic pages, which certainly matches my reading experience. Sorting the outlets by internal-plus-external links also shows an interesting shift in the linking leaderboard.
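The drop is easy to verify from the per-outlet figures. In this sketch, `hand_written` is each outlet’s internal-plus-external average; the share of links implied to go to topic pages works out to about 0.35:

```python
from statistics import median

# Per-outlet link averages: with topic-page links and without (from the tables).
all_links    = [0, 1.2, 1.5, 1.7, 2.1, 2.4, 2.8, 3.3, 3.6, 6.6, 6.8, 14.5]
hand_written = [0, 0.3, 0.5, 0.7, 0.9, 1.3, 2.1, 2.2, 2.8, 3.6, 5.4, 14.5]

m_all = round(median(all_links), 2)       # 2.6
m_hand = round(median(hand_written), 2)   # 1.7
print(m_all, m_hand)
print(round(1 - m_hand / m_all, 2))       # 0.35 -- about a third go to topic pages
```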

Source                      Internal   External   Total
BBC News                    0          0          0
Reuters.com                 0.1        0.2        0.3
CNN                         0.3        0.2        0.5
The Guardian                0.5        0.2        0.7
Politico                    0.7        0.2        0.9
Washington Post             1.0        0.3        1.3
Huffington Post             1.1        1.0        2.1
The New York Times          1.0        1.2        2.2
Seattle Post-Intelligencer  0.9        1.9        2.8
Christian Science Monitor   2.5        1.1        3.6
TechCrunch                  1.8        3.6        5.4
Nieman Journalism Lab       1.4        13.1       14.5

The Times and the Post have moved down, and online-only outlets Seattle Post-Intelligencer and Christian Science Monitor have moved up. TechCrunch still ranks high with a lot of linking any way you slice it, and the Lab is still the linkiest because we’re weird like that. (To prevent cheating, I didn’t tell anyone at the Lab, or elsewhere, that I was doing this survey.) But the BBC, CNN, and Reuters are still at the bottom.

Linking is unevenly executed, even within the same publication. The number of links per article depended on who was writing it, the topic, the section of the publication, and probably also the phase of the moon. Even obviously linkable material, such as an obscure politician’s name or a reference to comments on Sarah Palin’s Facebook page, was inconsistently linked. Meanwhile, one anomalous Reuters story linked to the iPad topic page on every single reference to “iPad” — 16 times in one story. (I’m going to have to side with the Wikipedia linking style guide here, which says link on first reference only.)
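The Wikipedia convention mentioned above — link on first reference only — is simple to enforce mechanically. Here is a toy sketch; the regex only handles plain `<a href="…">` anchors, an assumption made for brevity:

```python
import re

# Keep the first link to each URL; unwrap later repeats into plain text.
LINK = re.compile(r'<a href="([^"]+)">([^<]*)</a>')

def first_reference_only(html):
    seen = set()
    def repl(match):
        href, text = match.group(1), match.group(2)
        if href in seen:
            return text           # already linked once: keep only the text
        seen.add(href)
        return match.group(0)     # first reference: keep the link intact
    return LINK.sub(repl, html)

story = ('<a href="/topic/ipad">iPad</a> sales rose again; analysts say the '
         '<a href="/topic/ipad">iPad</a> is redefining the tablet market.')
print(first_reference_only(story))
```

Run on the example, only the first “iPad” stays linked; the second becomes plain text.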

Whether or not an article contains good links seems to depend largely on the whim of the reporter at most publications. This suggests a paucity of top-down guidance on linking, which is in line with the rather boilerplate answers I got to my questions about linking policy.

Articles posted to the “blog” section of a publication generally made heavier use of links, especially external links. The average number of external links per page at The New York Times drops from 1.2 to 0.8 if the single blog post in the sample is excluded — it had ten external links! Whatever news outlets mean by the word “blog,” they are evidently producing their “blogs” differently, because the blogs have more links.

The wire services don’t link. Stories on Reuters.com — as distinguished from stories delivered on Reuters professional products — had an average of 1.7 links per article. But only 0.3 of these links were not to topic pages, and only blog posts had any external links at all. Stories read on Reuters professional products sometimes contain links to source financial documents or other Reuters stories, though it’s not clear to me whether these systems use or support ordinary URLs. The Associated Press has no hub news website of its own, so I couldn’t include it in my survey, but stories pushed to customers through its standard feed do not include inline links, though they sometimes include links in an “On the Net” section at the end of the story.

As I wrote previously, Reuters and AP told me that the reason they don’t include inline hyperlinks is that many of their customers publish on paper only and use content management systems that don’t support HTML.

What does this all mean? The link has yet to make it into the mainstream of journalistic routine. Not all stories need links, of course, but my survey showed lots of examples where links would have provided valuable backstory, context, or transparency. Several large organizations are diligent about linking to their own topic pages, probably with the assistance of automation, but are wildly inconsistent about linking to anything else. The cultural divide between “journalists” and “bloggers” is evident in the way that writers use links (or don’t), even within the same newsroom. The major wire services don’t yet offer integrated hypertext products for their online customers. And when automatically generated links are excluded, online-only publications tend to take links more seriously.

June 08 2010

13:30

Why link out? Four journalistic purposes of the noble hyperlink

[To link or not to link? It's about as ancient as questions get in online journalism; Nick Carr's links-as-distraction argument is only the latest incarnation. Yesterday, Jason Fry tried to contextualize the linking debate around credibility, readability, and connectivity. Here, Jonathan Stray tries out his own, more pragmatically focused four-part division. Tomorrow, we'll have the result of Jonathan's analysis of how major news organizations link out and talk about linking out. —Josh]

You don’t need links for great journalism — the profession got along fine for hundreds of years without them. And yet most news outlets have at least a website, which means that links are now (in theory, at least) available to the majority of working journalists. What can links give to online journalism? I see four main answers.

Links are good for storytelling.

Links give journalists a way to tell complex stories concisely.

In print, readers can’t click elsewhere for background. They can’t look up an unfamiliar term or check another source. That means print stories must be self-contained, which leads to conventions such as context paragraphs and mini-definitions (“Goldman Sachs, the embattled American investment bank.”) The entire world of the story has to be packed into one linear narrative.

This verbosity doesn’t translate well to digital, and arguments rage over the viability of “long form” journalism online. Most web writing guides suggest that online writing needs to be shorter, sharper, and snappier than print, while others argue that good long form work still kills in any medium.

Links can sidestep this debate by seamlessly offering context and depth. The journalist can break a complex story into a non-linear narrative, with links to important sub-stories and background. Readers who are already familiar with certain material, or simply not interested, can skip lightly over the story. Readers who want more can dive deeper at any point. That ability can open up new modes of storytelling unavailable in a linear, start-to-finish medium.

Links keep the audience informed.

Professional journalists are paid to know what is going on in their beat. Writing stories isn’t the only way they can pass this knowledge to their audience.

Although discussions of journalism usually center around original reporting, working journalists have always depended heavily on the reporting of others. Some newsrooms feel that verifying stories is part of the value they add, and require reporters to “call and confirm” before they re-report a fact. But lots of newsrooms simply rewrite copy without adding anything.

Rewriting is required for print, where copyright prevents direct use of someone else’s words. Online, no such waste is necessary: A link is a magnificently efficient way for a journalist to pass a good story to the audience. Picking and choosing the best content from other places has become fashionably known as “curation,” but it’s a core part of what journalists have always done.

Some publishers are reluctant to “send readers away” to other work. But readers will always prefer a comprehensive source, and as the quantity of available information explodes, the relative value of filtering it increases.

Links are a currency of collaboration.

When journalists use links to “pay” people for their useful contributions to a story, they encourage and coordinate the production of journalism.

Anyone who’s seen their traffic spike from a mention on a high-profile site knows that links can have immediate monetary impact. But links also have subtler long-term value, both tangible (search rankings) and intangible (reputation and status). One way or another, a link is generally valuable to the receiver.

A complex, ongoing, non-linear story doesn’t have to be told by a single organization. In line with the theory of comparative advantage, it probably shouldn’t be. Of course journalists can (and should) collaborate formally. But links are an irresistible glue that can coordinate journalistic production across newsrooms and bloggers alike.

This is an economy that is interwoven with the cash economy in complex ways. It may not make business sense to pay another news organization for publishing a crucial sub-story or a useful tip, but a link gives credit where credit is due — and traffic. Along this line, I wonder if the BBC’s policy of not always linking to users who supply content is misguided.

Links enable transparency.

In theory, every statement in news writing needs to be attributed. “According to documents” or “as reported by” may have been as far as print could go, but that’s not good enough when the sources are online.

I can’t see any reason why readers shouldn’t demand, and journalists shouldn’t supply, links to all online resources used in writing a story. Government documents and corporate financial disclosures are increasingly online, but too rarely linked. There are some issues with links to pages behind paywalls and within academic journals, but nothing that seems insurmountable.

Opinion and analysis pieces can also benefit from transparency. It’s unfair — and suspect — to critique someone’s position without linking to it.

Of course, reporters must also rely on sources that don’t have a URL, such as people and paper documents. But even here I would like to see more links, for transparency and context: If the journalist conducted a phone interview, can we listen to the recording? If they went to city hall and saw the records, can they scan them for us? There is already infrastructure for journalists who want to do this. A link is the simplest, most comprehensive, and most transparent method of attribution.

Photo by Wendell used under a Creative Commons license.

May 12 2010

15:00

Location, location, etc: What does the WSJ’s Foursquare check-in say about the future of location in news?

It was the Foursquare check-in heard ’round the world. Or, at least, ’round the future-of-news Twitterverse. On Friday, the Wall Street Journal checked in to the platform’s Times Square venue with some breaking news:

The headline that screenshot-taker dpstyles appended to the image is correct: the real-time, geo-targeted news update was, really, a pretty amazing use of the Foursquare platform.

It wasn’t the first time that the Journal, via its Greater New York section, has leveraged Foursquare’s location-based infrastructure for news delivery purposes. The outlet has done more with Foursquare than the much-discussed implementation of its branded badges; it has also been making regular use of the Tips function of Foursquare, which allows users to send short, location-based updates — including links — to their followers. The posts range from the food-recommendation stuff that’s a common component of Tips (“@Tournesol: The distinctively French brunches here feature croques madames and monsieurs and steak frites. After dining, check out the Manhattan skyline in Gantry State Park”) to more serious, newsy fare:

@ The middle of the Hudson River: Remember the Miracle on the Hudson? Well, investigators aren’t saying that Captain “Sully” shouldn’t have landed in the river, but he probably didn’t need to. [Link]

@ George Washington Bridge: Police were told to stop and search would-be subway bomber Najibullah Zazi’s car in Sept. 2009 as he drove up to the bridge — but waved him across without finding two pounds of explosives hidden inside. [Link]

@ Old Homestead Steakhouse: Kobe beef, one of the restaurant’s most popular dishes, was pulled from the menu after Japanese cows tested positive for foot-and-mouth disease. [Link]

The general idea is, essentially, curation by way of location: geo-targeting, news dissemination edition. “You get these tips because you’re nearby,” Zach Seward, the WSJ’s outreach editor (and, of course, the late, great Lab-er) told me. “So at least in theory, that’s when you’re most interested in knowing about them.”

The Journal’s use of the check-in feature for a breaking-news story, however, suggests a shift in the platform/content relationship implied in the info-pegged-to-places structure. “Times Square evacuated” is a legitimate news item, of course, in most any context; but it’s particularly legitimate to people who happen to be in Times Square at the moment the news breaks. The Journal’s check-in acknowledged and then leveraged that fact — and, in that, changed the value proposition of location in the context of news delivery. In its previous tips, the location had been pretty much incidental to the information — a clever excuse, basically, to share a piece of news about a particular place (see, again: “The middle of the Hudson River”). In the Times Square check-in, though, the information shared was vitally connected to the physical space it referred to. Location wasn’t merely a conduit for information; it was the information. Proximity’s previously weak tie to content became a strong one.

In other words, as Seward explains of the Times Square check-in: “If you’re following the Journal, and you’re in New York, you’re going to see this at the top of your timeline on your Foursquare app. And if you’re not in New York, you’re not going to see it — or you’re not going to see it at the top. And that makes perfect sense.” Because, again: “That idea that you want to be informed about what’s around you is the fundamental principle that Foursquare is operating on.”

Whether Foursquare itself is an effective venue for a news outlet’s realization of that principle is a different issue. There are certainly advantages to Foursquare for location-aware news — its built-in user base, for one. Its curatorial power, for another. (“One really specific way in which it’s ideal,” Seward says, “is that the whole platform is designed around only telling you what’s in your vicinity.” It focuses, myopically and straightforwardly, on the near — filtering out the far.) For users, then, Foursquare-as-news-platform suggests a river whose width is narrow, whose content is familiar — and whose current is as such readily navigable.

And for news organizations, it offers a relatively organic approach to the problem of content presentation. “Generally, whether it’s in print or online, or any platform for a news organization, you have one opportunity to decide how important a story is — and give it a huge, assassinations-sized banner headline, or a small little bullet, or whatever in between,” Seward points out. “But with local news in particular, the relevancy, and the importance, varies widely, often based on where you are, and/or where you live.” A location-based infrastructure for news delivery provides, among other things, “an opportunity to make that adjustment.”

Which is not to say that Foursquare itself is ideal for those purposes. The Journal’s use of tips and, now, check-ins, Seward says, “is a bit of a hack of a system that wasn’t created for brands.” The Journal is still, according to Foursquare, located in Times Square — via a check-in clarifying that Friday’s bomb scare had been a false alarm — and until the outlet’s editors decide that there’s another story worth checking in to, it will remain that way. “Because that’s just how Foursquare works.”

There’s also the “how Foursquare works” in the more ephemeral sense. Whether you’re a badge-laden multi-mayor or find the platform to be an unholy union of the mobile web and Troop Beverly Hills, Foursquare has defined its identity, at least in its early existence, by a feature that has been both its key limitation and its key asset: the purity of its socialness. Foursquare is fun. It’s peer-to-peer. Even more importantly, it’s pal-to-pal.

The Journal’s presence on Foursquare — and, further, its leveraging of the platform for purposes of news dissemination and (oof!) branded information — adds some tension to that freewheeling spirit. (As Adam Clark Estes, The Huffington Post’s citizen journalism editor, put it: “Does @WSJ sending news alerts via @foursquare clog the utility/fun? Or challenge Twitter?”) News content, almost by inertia, has a way of infiltrating nearly every major social media platform; there’s an “is nothing sacred?” aspect to the criticisms of outlets’ imposition of themselves on the board-game-writ-real that is Foursquare.

That’s something Seward is well aware of. “Perhaps more than Twitter, people use Foursquare in a really personal way,” he says. “They limit who they’re following to people they actually know, and they’re expecting to see their friends there.” So it might well be jarring to find a news organization’s tips and check-ins mixed into a timeline with the personal ones. (Then again, he points out, users “can choose or not choose to have those updates pushed to them.” So that mixture, like the updates themselves, is an opt-in scenario.) And then there’s the issue of a check-in suggesting a reporter’s physical presence on the scene of a news story: How should news outlets navigate that implication? They’re “good questions,” Seward says — even as he downplays the check-in’s significance in the greater scheme of things. (“Foursquare just announced that it now has over 40 million check-ins,” Seward notes, making the Times Square update “just one of 40 million check-ins in its history.”) Still, one little check-in can suggest a lot. Location-based news — its potential and its pitfalls — is something that the Journal and, now, other outlets will likely continue to grapple with as they find their own place in the new media landscape.

May 07 2010

14:00

An involuntary Facebook for reporters and their work: Martin Moore on the U.K.’s Journalisted

In the era of big media, our conceptions of trust were tied up in news organizations. If a story was on page 1 of The New York Times, that fact alone conjured up different associations of quality, truthfulness, and trustworthiness than if it were on page 1 of The National Enquirer. Those associations weren’t consistent — many Fox News viewers would have different views on the trustworthiness of the Times than I do — but they still largely lived at the level of the news organization.

But in an era of big-media regression and splintered news — when news can be delivered online by someone you hadn’t even heard of 10 seconds ago — how does trust evolve? Does it trickle down to the individual journalist: Do we decide who to trust not based on the news organization they work for but on the reporter? Are there ways to build metadata around those long-faceless bylines that can help us through the trust thicket?

It’s a question that’s getting poked at by Journalisted, the project of the U.K.’s Media Standards Trust. You can think of Journalisted as an involuntary Facebook for British reporters — at the moment, those who work for the national newspapers and the BBC, but with hopes to expand. It tracks their work across news organizations, cataloging it and drawing what data-based conclusions it can.

So if you run across an article by Richard Norton-Taylor and have pangs of doubt about his work, you can go see what else he’s written — about the subject at hand, or anything else. There’s also a bit of metadata around his journalism: A tag cloud tells you he writes more about MI5 than anything else, although lately he’s been more focused on NATO. You can see what U.K. bloggers wrote about each of his stories, and you can find other journalists who write about similar topics. And for journalists who choose to provide it, you can learn biographical information, like the fact that Simon Rothstein is an award-winning writer about professional wrestling, so maybe his WWE stories are more worth your time.

It is very much a first step — Journalisted is not yet the vaunted distributed trust network that will help us decide who to pay attention to and who we can safely ignore. The journalist-matching metadata is really interesting, but it still doesn’t go very far in determining merit: No one’s built those tools yet. But it’s a significant initiative toward placing journalists in the context of their work and their peers, and in the new splintered world, that context is going to be important.
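The lightweight content analysis behind a feature like Journalisted’s tag clouds — figuring out what a journalist writes about most — can be sketched as a word-frequency count over their recent headlines. Everything here (the headlines, the stop-word list) is invented for illustration; Journalisted’s actual analysis is surely more sophisticated.

```python
import re
from collections import Counter

# A made-up stop-word list; a real system would use a much longer one.
STOP = {"the", "a", "an", "of", "in", "on", "to", "and", "for", "over", "says"}

def topic_counts(headlines):
    """Count content words across a journalist's recent headlines."""
    words = (w for h in headlines for w in re.findall(r"[a-z0-9']+", h.lower()))
    return Counter(w for w in words if w not in STOP)

headlines = [
    "MI5 faces questions over detainee treatment",
    "NATO commanders review Afghan strategy",
    "MI5 chief defends agency record",
]
print(topic_counts(headlines).most_common(1))  # [('mi5', 2)]
```

Scaling word frequencies to font sizes gives the familiar tag-cloud display.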

Our friend Martin Moore of the Media Standards Trust dropped by our spare-shelved office not long ago and I asked him to talk about Journalisted. Video above, transcript below.

Journalisted is essentially a directory of all the journalists who are published in the UK national press and on the BBC, and in the future other sites as well. Each journalist has their own profile page, a little bit like Facebook or LinkedIn, but the difference being that that page is automatically updated with links to their most recent articles. It has some basic analysis of the content of those articles, so what they write an awful lot about, and what they don’t. And, it has links to other information to give context to the journalist, so if they have a profile in the paper, or if they have a Wikipedia page, or if they have their own personal blog or website. And as of a couple of weeks ago, they can add further information themselves if they’d like to.

[...]

If you’re interested in a particular journalist and you want to know more about what they write about, again to give you context, then obviously that’s a very good way of doing it. It tells you if they come from a particular perspective, it tells you if they’ve written an awful lot about a subject. If you, for example, read a piece strongly recommending against multiple vaccinations, you might want to know if this person has a history of being anti-multiple vaccinations, or if they have particular qualifications in science that make them very good reporting on this issue, etc. So, it gives you that context.

It also, on a simpler level, can give you contact details. So, where a journalist has published their email address, we automatically serve it up. But equally they can themselves put in further contact information, if you want to follow up on a story. And we also have some interesting analytics which lead you on to journalists who write about similar topics, or if you read an article, similar articles on the same topic. So again, it’s to contextualize the news and to help you to navigate and have more reason to trust a piece.
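The two analytics Moore describes — tallying what a journalist writes about most, and surfacing other journalists who cover similar topics — can be sketched very simply. This is a minimal illustration, not Journalisted's actual implementation: the article structure, field names, and overlap-based scoring are all assumptions.

```python
from collections import Counter

def topic_frequencies(articles):
    """Count how often each tagged topic appears across a journalist's
    articles -- a stand-in for 'what they write an awful lot about'."""
    counts = Counter()
    for article in articles:
        counts.update(article.get("topics", []))
    return counts

def related_journalists(profiles, name, top_n=3):
    """Rank other journalists by how many topics they share with the
    named journalist -- a crude version of 'writes about similar topics'."""
    target = set(topic_frequencies(profiles[name]))
    scores = {
        other: len(target & set(topic_frequencies(arts)))
        for other, arts in profiles.items()
        if other != name
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In practice a system like this would extract topics from article text rather than rely on hand-applied tags, but the shape of the computation is the same.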

[...]

Initially, there was a bit of shock, I think. An awful lot of journalists don’t expect the spotlight to be turned around and put on them, so we had some very interesting exchanges. Since then, it’s now been around long enough that a lot of journalists have actually started to almost use it as their online CV. They’re adding their own stuff, they’re asking us to add stuff on their behalf, and they’re seeing that it can be of benefit to them, either with sources, so that they can allow sources to contact them, and to engage with them, or, equally, with employers. Quite a number of journalists have told us that editors have looked at their Journalisted profile and made a decision as to whether to offer them some work.

[...]

There are a number of goals. The initial one that we’re working on now is to flesh out the profiles much more. So to give people much more depth around the person so that they can have a much better impression as to who this journalist is, what they write about, their qualifications, the awards that they’ve won, and the books that they’ve written, etc. So, really flesh out the individual profile.

Following on from that, we’d love to expand it. We’d love to bring in more journalists, more publications — if possible, even go international. Our hope is that in the future, it will start to become a central resource, if you like, a junction point, a linked data resource, so that it will be the place you’ll come to from either the news site, from a blog, from wherever, in order to find out more about a journalist.

April 30 2010

14:30

This Week in Review: Gizmodo and the shield law, making sense of social data, and the WSJ’s local push

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

Apple and Gizmodo’s shield law test: The biggest tech story of the last couple of weeks has undoubtedly been the gadget blog Gizmodo’s photos of a prototype of Apple’s next iPhone that was allegedly left in a bar by an Apple employee. That story got a lot more interesting for journalism- and media-oriented folks this week, when we found out that police raided a Gizmodo blogger’s apartment based on a search warrant for theft.

What had been a leaked-gadget story turned into a case study on web journalism and the shield law. Mashable and Poynter did a fine job of laying out the facts of the case and the legal principles at stake: Was Gizmodo engaged in acts of journalism when it paid for the lost iPhone and published information about it? Social media consultant Simon Owens has a good roundup of opinions on the issue, including whether the situation would be different if Gizmodo hadn’t bought the iPhone.

The Electronic Frontier Foundation, a digital rights group, came out most strongly against the raid, arguing to Wired and Laptop magazine and in its own post that California law is clear that the Gizmodo blogger was acting as a reporter. The Citizen Media Law Project’s Sam Bayard agreed, backing the point up with a bit more case history. Not everyone had Gizmodo’s back, though: In a piece written before the raid, media critic Jeff Bercovici of Daily Finance said that Gizmodo was guilty of straight-up theft, journalistic motives or no.

J-prof Jay Rosen added a helpful clarification to the “are bloggers journalists” debate (it’s actually about whether Gizmodo was engaged in an act of journalism, he says) and ex-Saloner Scott Rosenberg reached back to a piece he wrote five years ago to explain why that debate frustrates him so much. Meanwhile, the Columbia Journalism Review noted that the Gizmodo incident was just one in a long line of examples of Apple’s anti-press behavior.

Bridging the newsroom-academy gap: Texas j-prof Rosental Alves held his annual International Symposium on Online Journalism last weekend, and thanks to a lot of people’s work in documenting the conference, we have access to much of what was presented and discussed there. The conference site and Canadian professor Alfred Hermida devoted about 20 posts each to the event’s sessions and guests, so there’s loads of great stuff to peruse if you have time.

The conference included presentations on all kinds of stuff like Wikipedia, news site design, online comments, micropayments, and news innovation, but I want to highlight two sessions in particular. The first is the keynote by Demand Media’s Steven Kydd, who defended the company’s content and business model from criticism that it’s a harmful “content farm.” Kydd described Demand Media as “service journalism,” providing content on subjects that people want to know about while giving freelancers another market. You can check out summaries of his talk at the official site, Hermida’s blog, and in a live blog by Matt Thompson. The conference site also has video of the Q&A session and reflections on Kydd’s charisma and a disappointing audience reaction. The other session worth taking a closer look at was a panel on nonprofit journalism, which, judging from Hermida and the conference’s roundups, seemed especially rich with insight into particular organizations’ approaches.

The conference got Matt Thompson, a veteran of both the newsroom and the academy who’s currently working for NPR, thinking about what researchers can do to bring the two arenas closer together. “I saw a number of studies this weekend that working journalists would find fascinating and helpful,” he wrote. “Yet they’re not available in forms I’d feel comfortable sending around the newsroom.” He has some practical, doable tips that should be required reading for journalism researchers.

Making sense of social data: Most of the commentary on Facebook’s recent big announcements came out last week, but there’s still been plenty of good stuff since then. The tech blog ReadWriteWeb published the best explanation yet of what these moves mean, questioning whether publishers will be willing to give up ownership of their comments and ratings to Facebook. Writers at ReadWriteWeb and O’Reilly Radar also defended Facebook’s expansion against last week’s privacy concerns.

Three other folks did a little bit of thinking about the social effects of Facebook’s spread across the web: New media prof Jeff Jarvis said Facebook isn’t just identifying us throughout the web, it’s adding a valuable layer of data on places, things, ideas, everything. But, he cautions, that data isn’t worth much if it’s controlled by a company and the crowd isn’t able to create meaning out of it. Columbia grad student Vadim Lavrusik made the case for a “social nut graph” that gives context to this flood of data and allows people to do something more substantive than “like” things. PR blogger Paul Seaman wondered about how much people will trust Facebook with their data while knowing that they’re giving up some of their privacy rights for Facebook’s basic services. And social media researcher danah boyd had some insightful thoughts about the deeper issue of privacy in a world of “big data.”

The Wall Street Journal goes local: The Wall Street Journal made the big move in its war with The New York Times this week, launching its long-expected New York edition. The Times’ media columnist, David Carr, took a pretty thorough look at the first day’s offering and the fight in general, and Columbia j-prof Sree Sreenivasan liked what he saw from the Journal on day one.

Slate media critic Jack Shafer said the struggle between the Journal and the Times is a personal one for the Journal’s owner, Rupert Murdoch — he wants to own Manhattan, and he wants to see the Times go down in flames there. Meanwhile, Jeff Jarvis stifled a yawn, calling it “two dinosaurs fighting over a dodo bird.”

Along with its local edition, the Journal also announced a partnership with the geolocation site Foursquare that gives users news tips or factoids when they check in at certain places around New York — a bit more of a hard-news angle than Foursquare’s other news partnerships so far. Over at GigaOm, Mathew Ingram applauded the Journal’s innovation but questioned whether it would help the paper much.

Apple and app control: The fury over Pulitzer-winning cartoonist Mark Fiore’s proposed iPhone app has largely died down, but there were a few more app-censorship developments this week to note. MSNBC.com cartoonist Daryl Cagle pointed out that despite Apple’s letup in Fiore’s case, they’re not reconsidering their rejection of his “Tiger Woods cartoons” app. Political satirist Daniel Kurtzman had two of his apps rejected, too, and an app of Michael Wolff’s Newser column — which frequently mocks Apple’s Steve Jobs — was nixed as well. Asked about the iPad at the aforementioned International Symposium on Online Journalism, renowned web scholar Ethan Zuckerman said Apple’s control over apps makes him “very nervous.”

The New Yorker’s Ken Auletta also went deep into the iPad’s implications for publishers this week in a piece on the iPad, the Kindle and the book industry. You can hear him delve into those issues in interviews with Charlie Rose and Fresh Air’s Terry Gross.

Reading roundup: We had some great smaller conversations on a handful of news-related topics this week.

— Long-form journalism has been getting a lot of attention lately. Slate’s Jack Shafer wrote about longform.org, an effort to collect and link to the best narrative journalism on the web. Several journalistic heavyweights — Gay Talese, Buzz Bissinger, Bill Keller — sang the praises of narrative journalism during a Boston University conference on the subject.

Nieman Storyboard focused on Keller’s message, in which he expressed optimism that long-form journalism could thrive in the age of the web. Jason Fry agreed with Keller’s main thrust but took issue with the points he made to get there. Meanwhile, Jonathan Stray argued that “the web is more amenable to journalism of different levels of quality and completeness” and urged journalists not to cut on the web what they’re used to leaving out in print.

— FEED co-founder Steven Johnson gave a lecture at Columbia last week about the future of text, especially as it relates to tablets and e-readers. You can check it out here as an essay and here on video. Johnson criticizes the New York Times and Wall Street Journal for creating iPad apps that don’t let users manipulate text. The American Prospect’s Nancy Scola appreciates the argument, but says Johnson ignored the significant cultural impact of a closed app process.

— Two intriguing sets of ideas for news design online: Belgian designer Stijn Debrouwere has spent the last three weeks writing a thoughtful series of posts exploring a new set of principles for news design, and French media consultant Frederic Filloux argues that most news sites are an ineffective, restrictive funnel that cut users off from their most interesting content. Instead, he proposes a “serendipity test” for news sites.

— Finally, if you have 40 free minutes sometime, I highly recommend watching the Lab editor Joshua Benton’s recent lecture at Harvard’s Berkman Center on aggregation and journalism. Benton makes a compelling argument from history that all journalism is aggregation and says that if journalists don’t like the aggregation they’re seeing online, they need to do it better. It makes for a great introductory piece on journalism practices in transition on the web.

April 16 2010

13:20

This Week in Review: News talk and tips at ASNE, iPad’s ‘walled garden,’ and news execs look for revenue

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

Schmidt and Huffington’s advice for news execs: This week wasn’t a terribly eventful one in the future-of-journalism world, but a decent amount of the interesting stuff that was said came out of Washington D.C., site of the annual American Society of News Editors conference. The most talked-about session there was Sunday night’s keynote address by Google CEO Eric Schmidt, who told the news execs there that their industry is in trouble because it hasn’t found a way to sustain itself financially, not because its way of producing or delivering news is broken. “We have a business-model problem, we don’t have a news problem,” Schmidt said.

After buttering the crowd up a bit, Schmidt urged them to produce news for an environment that’s driven largely by mobile devices, immediacy, and personalization, and he gave them a glimpse of what those priorities look like at Google. Politico and the Lab’s Megan Garber have summaries of the talk, and paidContent has video.

There were bunches more sessions and panels (American Journalism Review’s Rem Rieder really liked them), but two I want to highlight in particular. One was a panel with New York Times media critic David Carr, new-media titan Arianna Huffington and the Orlando Sentinel’s Mark Russell on the “24/7 news cycle.” The Lab’s report on the session focused on four themes, with one emerging most prominently — the need for context to make sense out of the modern stream of news. St. Petersburg Times media critic Eric Deggans and University of Maryland student Adam Kerlin also zeroed in on the panelists’ call to develop deeper trust and participation among readers.

The second was a presentation by Allbritton’s Steve Buttry that provides a perfect fleshing-out of the mobile-centric vision Schmidt gave in his keynote. Poynter’s Damon Kiesow had a short preview, and Buttry has a longer one that includes a good list of practical suggestions for newsrooms to start a mobile transformation. (He also has slides from his talk, and he posted a comprehensive mobile strategy for news orgs back in November, if you want to dive in deep.)

There was plenty of other food for thought, too: Joel Kramer of the Twin Cities nonprofit news org MinnPost shared his experiences with building community, and one “where do we go from here?” panel seemed to capture news execs’ ambivalence about the future of their industry. Students from local universities also put together a blog on the conference with a Twitter stream and short recaps of just about every session, and it’s worth a look-through. Two panels of particular interest: One on government subsidies for news and another with Kelly McBride of Poynter’s thoughts on the “fifth estate” of citizen journalists, bloggers, nonprofits and others.

Is a closed iPad bad for news?: In the second week after the iPad’s release, much of the commentary centered once again on Apple’s control over the device. In a long, thoughtful post, media watcher Dan Gillmor focused on Apple’s close relationship with The New York Times, posing a couple of arresting questions for news orgs creating iPad apps: Does Apple have the unilateral right to remove your app for any reason it wants, and why are you OK with that kind of control?

On Thursday he got a perfect example, when the Lab’s Laura McGann reported that Pulitzer-winning cartoonist Mark Fiore’s iPhone app was rejected in December because it “contains content that ridicules public figures.” Several other folks echoed Gillmor’s alarm, with pomo blogger Terry Heaton asserting that the iPad is a move by the status quo to retake what it believes is its rightful place in the culture. O’Reilly Radar’s Jim Stogdill says that if you bought an iPad, you aren’t really getting a computer so much as “a 16GB Walmart store shelf that fits on your lap … and Apple got you to pay for the building.” And blogging/RSS/podcasting pioneer Dave Winer says the iPad doesn’t change much for news because it’s so difficult to create media with.

But in a column for The New York Times, web thinker Steven Johnson adds an important caveat: While he’s long been an advocate of open systems, he notes that the iPhone software platform has been the most innovative in the history of computing, despite being closed. He attributes that to simpler use for its consumers, as well as simpler tasks for developers. While Johnson still has serious misgivings about Apple’s closed policy from a control standpoint, he concludes that “sometimes, if you get the conditions right, a walled garden can turn into a rain forest.”

In related iPad issues, DigitalBeat’s Subrahmanyam KVJ takes a step back and looks at control issues with Apple, Facebook, Twitter and Google. Florida j-prof Mindy McAdams has a detailed examination of the future of HTML5 and Flash in light of Adobe’s battle with Apple over the iPad. Oh yeah, and to the surprise of no one, a bunch of companies, including Google, are developing iPad competitors.

News editors’ pessimism: A survey released Monday by the Pew Research Center’s Project for Excellence in Journalism presented a striking glimpse into the minds of America’s news executives. Perhaps most arresting (and depressing) was the finding that nearly half of the editors surveyed said that without a significant new revenue stream, their news orgs would go under within a decade, and nearly a third gave their org five years or less.

While some editors are looking at putting up paywalls online as that new revenue source, the nation’s news execs aren’t exactly overwhelmed at that prospect: 10 percent are actively working on building paywalls, and 32 percent are considering it. Much higher percentages of execs are working on online advertising, non-news products, local search and niche products as revenue sources.

One form of revenue that most news heads are definitely not crazy about is government subsidy: Three quarters of them, including nearly 90 percent of newspaper editors, had “serious reservations” about that kind of funding (the highest level of concern they could choose). The numbers were lower for tax subsidies, but even then, only 19 percent said they’d be open to it.

The report itself makes for a pretty fascinating read, and The New York Times has a good summary, too. The St. Pete Times’ Eric Deggans wonders how bad things would have to get before execs would be willing to accept government subsidies (pretty bad), and the Knight Digital Media Center’s Amy Gahran highlights the statistics on editors’ thoughts on what went wrong in their industry.

Twitter rolls out paid search: This week was a big one for Twitter: We finally found out some of the key stats about the microblogging service, including how many users it has (105,779,710), and the U.S. Library of Congress announced it’s archiving all of everyone’s tweets, ever.

But the biggest news was Twitter’s announcement that it will implement what it calls Promoted Tweets — its first major step toward its long-anticipated sustainable revenue plan. As The New York Times explains, Promoted Tweets are paid advertisements that will show up first when you search on Twitter and, down the road, as part of your regular stream if they’re contextually relevant. Or, in Search Engine Land’s words, it’s paid search, at least initially.

Search blogger John Battelle has some initial thoughts on the move: He thinks Twitter seems to be going about things the right way, but the key shift is that this “will mark the first time, ever, that users of the service will see a tweet from someone they have not explicitly decided to follow.” Alex Wilhelm of The Next Web gives us a helpful roadmap of where Twitter’s heading with all of its developments.

Anonymity and comments: A quick addendum to last month’s discussion about anonymous comments on news sites (which really has been ongoing since then, just very slowly): The New York Times’ Richard Perez-Pena wrote about many news organizations’ debates over whether to allow anonymous comments, and The Guardian’s Nigel Willmott explained why his paper’s site will still include anonymous commenting.

Meanwhile, former Salon-er Scott Rosenberg told media companies that they’d better treat it like a valuable conversation if they want it to be one (that means managing and directing it), rather than wondering what the heck’s the problem with those crazy commenters. And here at The Lab, Joshua Benton found that when the blogging empire Gawker made its comments a tiered system, their quality and quantity improved.

Reading roundup: This week I have three handy resources, three ideas worth pondering, and one final thought.

Three resources: If you’re looking for a zoomed-out perspective on the last year or two in journalism in transition, Daniel Bachhuber’s “canonical” reading list is a fine place to start. PaidContent has a nifty list of local newspapers that charge for news online, and Twitter went public with Twitter Media, a new blog to help media folks use Twitter to its fullest.

Three ideas worth pondering: Scott Lewis of the nonprofit news org Voice of San Diego talks to the Lab about how “explainers” for concepts and big news stories could be part of their business model, analysts Frederic Filloux and Alan Mutter take a close look at online news audiences and advertising, and Journal Register Co. head John Paton details his company’s plan to have one newspaper produce one day’s paper with only free web tools. (Jeff Jarvis, an adviser, shows how it might work and why he’s excited.)

One final thought: British j-prof Paul Bradshaw decries the “zero-sum game” attitude by professional journalists toward user-generated content that views any gain for UGC as a loss for the pros. He concludes with a wonderful piece of advice: “If you think the web is useless, make it useful. … Along the way, you might just find that there are hundreds of thousands of people doing exactly the same thing.”
