
July 25 2011

14:30

Vadim Lavrusik: Five key building blocks to incorporate as we’re rethinking the structure of stories

Editor’s Note: Vadim Lavrusik is Facebook’s first Journalist Program Manager, where he is responsible for, among other things, helping journalists to create new ways to tell stories. (You may remember him from his work at Mashable.) In the article below, he provides a wide-angle overview of the key forces that are re-shaping the news article for the digital age.

If we could re-envision today’s story format — beyond the text, photographs, and occasional multimedia or interactive graphics — what would the story look like? How would the audience consume it?

Today’s web “article” format is in many ways a descendant of the golden age of print. The article is mostly a recreation of print page design applied to the web. Stories, for the most part, are coded with a styled font for the headline, byline, and body — with some divs separating complementary elements such as photographs, share buttons, multimedia items, advertising, and a comments thread, which is often so displaced from the story that it’s hard to find. That format only scratches the surface of the storytelling that’s possible on the web.

In the last few years, we’ve seen some progress in new approaches to the story format on the web, but much of it has included widgets and tools tacked on for experimentation. And it doesn’t fully account for changes in user behavior and the proliferation of simple publishing tools and platforms on the web. As the Huffington Post’s Saul Hansell recently put it, “There are a lot more people saying things than there is stuff to say in this world.” Tools like Storify and Storyful enable journalists to curate the conversation that’s taking place on the social web, turning ephemeral comments into enduring narratives. A story, Jeff Jarvis notes, can be the byproduct of the process of newsgathering — the conversation.

And the conversation around the story has become, at this point, almost as important as the story itself. The decisions we make now — of design and of content creation — will inform the evolution of the story itself. So it’s worth stepping back and wondering: How can we hack today’s story into something that reflects the needs of today’s news consumers and publishers, integrates the vast amounts of content and data being created online, and generally leverages the opportunities the web has created? Below are some of the most crucial elements of online storytelling; think of it as a starting point for a conversation about the pieces tomorrow’s story format could include.

1. Context

Context wears many hats in a story. It could mean representing historical context through an interactive timeline or presenting contextualized information that puts the story in perspective. It could be an infographic, a subhead with information — or cumulative bits of information that run through a narrative. When the first American newspaper, Publick Occurrences, was published, many of its stories ran only a few sentences, most of them reports gathered through word of mouth. Because of the publication’s infrequency and its stories’ brevity, it failed to provide readers with adequate context. Haphazard newsgathering led to a somewhat chaotic experience for readers.

Today, though, with publication happening every millisecond, the overflow of information presents a different kind of challenge: presenting short stories in a way that still provides the consumer with context instead of just disparate pieces of information. We’ve seen a piece of the solution with the use of Storify, which enables journalists to fit the social story’s puzzle pieces together to suggest a bigger picture. But how can this approach be scaled? How can we provide context in a way that is not only comprehensive, but inclusive?

2. Social

Social platforms have, in short, changed the way we consume news. Over the last decade, we consumers spent a big portion of our time searching for news and seeking it out on portals and news sites. Now news finds us. We discover it from friends, colleagues, and people with whom we share intellectual interests. It’s as if on every corner one of our friends is a 1900s paperboy shouting headlines along with their personal take on the news in question. The news is delivered right to us in our personalized feeds and streams.

Social design makes the web feel more familiar. We tend to refer to readers and viewers as consumers, and that’s not only because they consume the content that is presented or pay for it as customers; it’s also because they’re consumed by the noise that the news creates. Social design adds a layer that acts as a filter for the noise.

Stories have certainly integrated social components so far, whether it’s the ability of a consumer to share a story with friends or contribute her two cents in the comments section. But how can social design be integrated into the structure of a story? Being able to share news or see what your friends have said about the piece is only scratching the surface. More importantly, how can social design play nice with other components discussed here? How do you make stories that are not just social, but also contextual — and, importantly, personal?

3. Personalization

One of the benefits of social layering on the web is the ability to personalize news delivery and provide social context for a user reading a story. A user can be presented with stories based on what their social connections have shared using applications like Flipboard, Zite, Trove, and many others. Those services incorporate social data to learn what it is you may be interested in reading about, adding a layer of customization to news consumption. Based on your personal interests, you are able to get your own version of the news. It’s like being able to customize a newscast with only segments you’re interested in, or having only the sports section of the local newspaper delivered to your porch…times ten.

How can we serve consumers’ needs by delivering a story in a format they prefer, while avoiding the danger of creating news consumers who only read about things they want to know (and not news they should know)? Those are big questions. One answer could have to do with format: enabling users to consume news in a format or style they prefer, letting them create their own personalized article design that suits their needs. Whatever it looks like, personalization is not only important in enabling users to get content in a compelling format. It’s also crucial from the business perspective: It enables publishers to learn more about their audiences to better serve them through forms of advertising, deals, and services that are just as relevant and personalized.

4. Mobile

Tomorrow’s story will be designed for the mobile news consumer. Access to smartphones is only going to grow, and story design and formats will likely cater increasingly to mobile users. They will also take into account the features of the platform the consumer is on and the consumer’s behavior there. The design will reflect how users interact with stories from their mobile devices, using touch-screen technology and actions. We’re already seeing mobile and tablet design influence web design.

These are challenges not only of design, but of content creation. Journalists may begin to produce more abbreviated pieces for small-screen devices, while enabling longform to thrive on tablet-sized screens. Though journalists have produced content from the field for years, the advancement of mobile technology will continue to streamline this process. Mobile publication is already integrated into content management platforms, and companies like the BBC are working on applications that will enable users to broadcast live from their mobile phones.

5. Participation

Citizens enabled by social platforms are covering revolutions on mobile devices. Users are also able to easily contribute to a story by snapping a picture or video and uploading it with their mobile devices to a platform like iReport. Tomorrow’s article will enable people to be equal participants in the story creation process.

Increasingly, participation will mean far more than simple consumption by a passive audience that can contribute to the conversation only by filing a comment below a published story (pending moderator approval). The likes of iReport, The Huffington Post’s “contribute” feature, or The New York Daily News’ recent uPhoto Olapic integration — which enables people to easily upload their photos to a story slideshow and share photos they’ve already uploaded to Facebook, Flickr, and elsewhere — are just the beginning. To harness participatory journalism, these features should no longer be an afterthought in the design, but a core component of it. As Jay Rosen recently put it, “It isn’t true that everyone is a journalist. But a lot more people are involved.”

Image by Holger Zscheyge used under a Creative Commons license.

June 03 2011

16:00

The news/analysis divorce: Who gets custody of the cash?

Editor’s Note: Lab contributor Lois Beckett is a freelance journalist who focuses on meta-media reporting and the future of long-form journalism. Here, in a response to a much-discussed article predicting a “divorce” between news and analysis, she considers the economic aspects of longform. 

One of the must-read articles of the week is “The News Article is Breaking Up,” by Sulia CEO Jonathan Glick.

Glick makes the pretty standard evaluation that the traditional news article is an outdated medium for conveying information. Consumers, he argues, want either a quick, tweet-sized update — something that they can take in as part of the stream, particularly on increasingly ubiquitous smartphone platforms — or an immersive longform experience that puts those bits of information into context.

Unlike Jeff Jarvis, who argues that “the most precious resource in news is reporting,” Glick contends that “the news” should be given away for free, as a loss leader for analysis. “Long-form writing will survive and will do so by abandoning news nuggets,” he writes. Furthermore: “The good news for writers is that this dovetails with their financial and intellectual interests.”

Via a variety of social-mobile platforms, they will pass along facts and pictures as soon as they obtain them — or verify them, depending on the writer’s journalistic standards. Writers who are especially good at doing this real-time reporting will develop audiences who are attentive to their mobile alerts. News nuggets are highly viral, so successful reporters will very quickly be introduced to huge numbers of readers.

Through this loss-leading channel, writers will then be able to notify their readers about longer-form articles they have created…. These pieces will be written to be saved to read later — for that time when the reader takes a moment to relax, learn, and enjoy resting by the side of the stream. Social and mobile platforms make payment much easier, so it will be practical to charge a small fee. Fifty cents for thoughtful analysis is inexpensive, and yet it is the cost of an entire newspaper today.

Let’s put aside the question of whether charging “fifty cents for thoughtful analysis” is a realistic price point. The problem with Glick’s proposed business model is that it misrepresents the relationship between explanation and appetite.

Consumers have an appetite for updates about stories they’re already following (industry news, celebrity relationships) or for big events whose importance is easy to grasp (tornadoes, sex scandals, revolutions). But for many issues, consumers develop an interest in “news nuggets” about a topic only after reading a long-form story about it. This can be true of investigative journalism, and of almost every long-form story that isn’t about a celebrity or a piece of major breaking news. Explanatory journalism creates the appetite for news updates on many subjects, not the other way around.

For that kind of longform, Glick’s business model is nonsensical. The pieces of many stories — the chronologically gathered details — have little value, economically or otherwise, without relevant context. As a reporter, how can I tweet observations about a source my readers don’t know about, or new wrinkles in an investigation that is still a mass of contradictory evidence?

Glick also creates a dichotomy between Twitter’s raw “nuggets” of news and highly crafted long-form stories. But Twitter, as a reportorial form, is actually much richer and more flexible than that either/or framing would suggest.

Take Mother Jones human rights reporter Mac McClelland, who gained a larger audience through her vividly tweeted coverage of the BP oil spill. Her tweets aren’t exactly “highly viral news nuggets.” Some of them have a breaking-news quality, but the central appeal of her Twitter feed is cumulative. It’s a crafted narrative, with McClelland as the questing, sometimes outraged, protagonist. Her feed is a long-form story that lives inside the news stream. To break it down to atomic elements seems, somehow, to miss the point.

That’s not to say that Glick’s notion of reporters propelling themselves to long-form stardom might not work for certain types of reporting — about big political campaigns, or revolutions, or natural disasters. These are situations in which readers have enough of a grasp of what’s going on to want real-time “news nuggets,” and enough questions to be willing (maybe) to pay for a well-crafted explainer that puts the updates together. In these cases, the loss-leaders might be tweets, or they might be the long-form stories themselves, which, in turn, might attract readers to pay for access to journalists’ live updates. Or, as Gerry Marzorati suggested earlier this year, nonfiction writing itself might become a loss-leader for another form of economic sustainment: book tours, events, and other direct encounters with the public.

Glick may not arrive at the right answer, but he is asking the right question: If short articles, once the journalist’s daily bread, can indeed be replaced in part by snappier, tweeted updates, how will reporters make money?

Image by Jez used under a Creative Commons license.

May 26 2011

14:00

Topolsky and Bankoff on Engadget, SB Nation, and the new tech site that’s bringing them together

There can be a very real “through the looking glass” feel to working on a site that covers technology, especially when you start contemplating the technology of publishing. At least, that’s the situation Joshua Topolsky and his group of Engadget expats are finding themselves in as they ramp up to the fall unveiling of a new technology site that will live under the SB Nation flag.

“What we’re building and what we write about are the same thing in many ways,” Topolsky told me. “And for us that provides an incredibly unique point of observation.”

It says a lot about Topolsky, as well as his fellow Engadget-ites Nilay Patel, Ross Miller, Joanna Stern, Chris Ziegler, and Paul Miller, that while they could have spent the intervening time developing their new site in a bunker, they’ve instead decided to get out front and do what they do best, which is covering tech. They’ve been doing that on This is my next, their placeholder blog.

In migrating away from Engadget — and, in that, from the AOL/Huffington Post empire — Topolsky and company were drawn to SB Nation, as Topolsky has written, by the company’s publishing philosophy as much as its evolving publishing technology. As purveyors, chroniclers, and users of technology, Topolsky and his team are now in a unique position to develop a phenomenal tech site. It’s a scenario with Willy Wonka-ian overtones: They’ve been set loose in a candy store.

And yet, Topolsky told me, their aspirations are more modest than fantastical. If anything, they’re not looking to re-invent the blog or news site as we know them. They just want something that’s more adaptive both to the way stories are written and published, and to how audiences actually read them.

“We’re not trying to be Twitter or Facebook, as in this new thing people are using,” he said. “We want to be something that is just the evolved version of what we have been doing.”

The point, he said, is this: Reading on the web is an ever-changing thing, and publishers need to develop or embrace the technology that can respond to its evolution.

Topolsky isn’t releasing much information about the new site at this point, but in terms of his team’s coverage of the tech industry, he told me, they won’t be straying far from their Engadget roots. In many ways, what their Project X represents is an experiment in publishing and engagement technology, which fits in well with SB Nation’s M.O. One of the things they’re likely to be using on the site, for example, is SB Nation’s story streams, which provide constantly updated information on stories while also offering different access points to readers.

Though the site will also need to accommodate things like multimedia (Topolsky said they might use something similar to The Engadget Show for that), he thinks that dynamic approach to narrative will work well for covering the latest updates on Google’s Android OS, say, or the tribulations of a phone maker like BlackBerry. “You write the news as seems appropriate and connect it automatically to a larger story, encompassing the narrative,” he said.

But what’s just as important as the tech, Topolsky pointed out, is an understanding between the editorial people and the developers, so when you need a new module or feature on the site both sides understand why — and how — it could work. In some of the more frustrating moments at Engadget, Topolsky said, he found himself having to plead his case to AOL developers in order to get site changes made.

That likely won’t be the case at SB Nation, which, as we’ve written about before, is more than willing to experiment with the blog format. It also helps that they’ve secured a healthy dose of new funding. When I spoke with SB Nation CEO Jim Bankoff, he noted that publishing companies are only as successful as the technology and people that comprise them.

“The foundation of our company is the marriage of editorial talent and technology — sometimes I say people and platform,” he said. “We really believe that to be a new media-first company you have to be based on people who understand how to craft stories online.”

But other than trying to build inventive publishing systems out of the box, what makes the difference for SB Nation is its habit of addressing regular feedback from readers, Bankoff said. The developers at SB Nation, he noted, constantly update the sites based on comments from readers and contributors. If something’s in the service of making a better product, they’ll try it, he said.

Though the audiences for sports news and tech news have their own vagaries, there are some elements — cast of players, data points, and healthy competition — that they have in common. And those will go a long way towards helping to adapt and grow SB Nation’s publishing platform, Bankoff said. “Just like sports, there is an arc to every tech story — and we’re going to be able to really convey the various milestones across any big news item.”

March 04 2011

19:00

Mother Jones web traffic up 400+ percent, partly thanks to explainers

February was a record-breaking traffic month for Mother Jones. Three million unique users visited the site — a 420 percent increase from February 2010’s numbers. And MotherJones.com posted 6.6 million pageviews overall — a 275 percent increase.

The investigative magazine credits the traffic burst partly to a month of exceptional work in investigations, essays, and exposés, its editorial bread and butter: real-time coverage of the Wisconsin protests, a Kevin Drum essay on the consequences of wealth inequality in America, the first national media coverage of that infamous prank call to Wisconsin governor Scott Walker. The mag also credits the traffic, though, to its extended presence on social media: Mother Jones’ Twitter followers increased 28 percent in February, to more than 43,000; its Facebook fan base grew 20 percent, to nearly 40,000; and its Tumblr fan base grew 200 percent, to nearly 3,000 followers.

In all, the mag estimates, a cumulative 29 percent of traffic to MotherJones.com came from social media sites.

But Mother Jones also attributes the traffic explosion to a new kind of news content: its series of explainers detailing and unpacking the complexities of the situations in Tunisia, Egypt, Bahrain, Libya, and Wisconsin. We wrote about MoJo’s Egypt explainer in January, pointing out the feature’s particular ability to accommodate disparate levels of reader background knowledge; that format, Adam Weinstein, a MoJo copy editor and blogger, told me, has become the standard one for the mag’s explainers. “It was a great resource for the reader, but it also helped us to focus our coverage,” Weinstein notes. “When something momentous happens, it can be hard for a small staff to focus their energies, I think. And this was an ideal way to do that.”

The magazine started its explainer series with a debrief on Tunisia; with the Egypt explainer, written by MoJo reporter Nick Baumann, the form became a format. The explainers were “a collaborative effort,” Weinstein says — “everybody pitched in.” And the explainer layout, with the implicit permission it gives to the reader to pick and choose among the content it contains, “just became this thing where we could stockpile the information as it was coming in, and also be responsive to people responding via social media with questions, with interests, with inquiries that they didn’t see answers to in other media outlets.”

It was a format that proved particularly useful, Weinstein notes, during the weekend after Mubarak resigned in Egypt, as protests gathered strength in Libya and, stateside, Wisconsin. “All of this was happening at the same time,” he says — “none of us were getting a lot of sleep that weekend” — and “our social media just exploded.” But because MoJo’s Twitter and Tumblr and Facebook pages became, collectively, such an important interface for conversation, “we needed a really efficient way of organizing our content,” and in one convenient place. So the explainer format became, essentially, “a great landing page.”

The success of that format could offer an insight for any outlet trying to navigate between the Scylla and Charybdis of content and context. Explainers represent something of a tension for news organizations; on the one hand, they can be hugely valuable, both to readers and to orgs’ ability to create community around particular topics and news events; on the other, they can be redundant and, worse, off-mission. (“We’re not Wikipedia,” as one editor puts it.)

It’s worth noting, though, that MoJo explainers aren’t, strictly, topic pages; rather, they’re topical pages. Their content isn’t reference material catered to readers’ general interests; it’s news material catered to readers’ immediate need for context and understanding when it comes to complex, and current, situations. The pages’ currency, in other words, is currency itself.

That’s likely why the explainers have been so successful for MoJo’s traffic (and, given the outlet’s employment of digital advertising, its bottom line); it’s also why, though, the format requires strategic thinking when it comes to the resources demanded by reporting and aggregation — particularly for outlets of a small staff size, like MoJo. Explainers, as valuable as they can be, aren’t always the best way for a news outlet to add value. “We still do the long-form stories,” Weinstein notes, “and this has just given us a place to have a clearinghouse for that.” For MoJo, he says, the explainer “is a way of stitching together all the work that everyone’s been doing. And we’re thrilled that readers have responded.”

February 23 2011

17:00

The context-based news cycle: editor John O’Neil on the future of The New York Times’ Topics Pages

“There are a lot of people in the news industry who are very skeptical of anything that isn’t news,” says The New York Times’ John O’Neil. As the editor of the Times’ Topic Pages, which he calls a “current events encyclopedia,” O’Neil oversees 25,000 topic pages, half of which — about 12,000 or so — include some human curation.

While the rest of the newsroom is caught up in the 24-hour news cycle, constantly churning out articles, O’Neil and his team are on a parallel cycle, “harvesting the reference material every day out of what the news cycle produces.” This means updating existing topic pages, and creating new ones, based on each morning’s news. (The most pressing criterion for what gets updated first, O’Neil said, is whether “we would feel stupid not having it there.”) A few of the Times’ most highly curated topics include coffee (curated by coffee reporter Oliver Strand with additional updates by Mike White) and Wikipedia (curated by media reporter Noam Cohen), as well as more predictably prominent topics like Wikileaks and Egypt.

The Topics team includes three editors and two news assistants, who work with Times reporters. “People give us links to studies they’ve used for stories or articles they’ve looked at, and this is something that we do hope to expand,” O’Neil said.

But half of the topic pages are “purely automated,” O’Neil said. And O’Neil is even contemplating contracting the number of curated topic pages, as people and events drop out of relevance. (The Topic pages garner 2.5 percent of the Times’ total pageviews.) O’Neil said he had read a statistic that roughly a third of Wikipedia’s traffic came from only about 3,000 of its now more than 17 million pages. “We’re concentrating more on that upper end of the spectrum.”

In a phone conversation, I talked with O’Neil about why the Times has ventured into Wikipedia territory, how the Times’ model might be scalable for local news organizations, and why creating a “current events encyclopedia” turns out to be easier than you might think. A condensed and edited version of that conversation is below.

LB: How did the topic pages develop?

JO: Topic pages began as part of the redesign in 2006. Folks up in tech and the website realized they could combine the indexing that has actually gone on for decades with the ability to roll up a set of electronic tags. The first topic pages were just a list of stories by tag, taking advantage of the fact that we had human beings looking at stories every day and deciding if they were about this, or were they about that. Just that simple layer of human curation created lists that are much more useful than a keyword search, and they proved to be pretty popular — more popular than expected at the time.

LB: What’s the philosophy at the Times behind the topic page idea?

JO: Jill Abramson’s point of view when she started looking at this: When she was a reporter, she would work on a story for days on end, weeks on end, and pile up more and more material. You end up with a stack of manila folders full of material, and she would take all of that and boil it down to a 1,200-word story. It was a lot of knowledge that was gained in the process, and it didn’t make it to the reader. The question was: How can we try to share some of that with the reader?

My impression is that people find these pages terrifically useful. Not everybody comes to a news story with the knowledge you would have if you’d been following the story closely all along. News organizations are set up to deal with the expectation [that people] have read the paper yesterday and the day before.

LB: How do you go about transforming news stories into reference material? What does the process look like?

JO: What we found, as we did this, is that the Times is actually publishing reference material every day. It’s buried within stories. In a given day, with 200 articles in the paper, about 10 percent represent extremely significant developments in the field. Now we can take a small number of subjects, like Tunisia or Egypt or Lebanon or the Arizona shootings, and keep on top of everything, set the bar higher. We can really keep up with what the daily paper’s doing on the biggest stories.

LB: As you note, there’s a lot of wariness among “news” folks around putting effort into topic pages. For instance, when I talked with Jonathan Weber of The Bay Citizen, the Times’ San Francisco local news partner, about topic pages, he told me: “people are looking for news from a news site….We’re not Wikipedia. You don’t really go [to a topic page] for a backgrounder, you go there for a story.” How would you respond to that?

JO: Our experience has been that that’s never been entirely true, and it’s becoming less true all the time. Look at the pound-and-a-half print New York Times, and think how much of that is about things other than what happened yesterday. Even in the print era, that was a pretty big chunk.

Then again, it makes sense for folks at a place like The Bay Citizen to be more skeptical about topic pages. A blog, after all, is all about keeping the items coming. And a site focused on local news would feel less need to explain background — hey, all our readers live here and know all that! — than if they were covering the Muslim Brotherhood, for instance.

LB: So what about the Wikipedia factor? Why should the Times be getting into the online encyclopedia business?

JO: I think Wikipedia is an amazing phenomenon. I use it. But there’s no field of information in which people would find there to be only one source. On Wikipedia, there’s the uncertainty principle: It’s all pretty good, but you’re never sure with any specific thing you’re looking at, how specific you can be about it. You have to be an expert to know which are the good pages and which are the not-so-good ones.

Our topic pages — and other newspaper-based pages — bring, for one thing, a level of authority or certainty. We’re officially re-purposing copy that has been edited and re-edited to the standards of The New York Times. It’s not always perfect, but people know what they can expect.

LB: What’s the business-side justification for the Topic Pages?

JO: We know that the people who come to the topic page are more likely than people who come to an article page to continue on and look at other parts of the site. It helps bring people to the site from search engines.

It’s also brand-building; it’s another way people can form an attachment. People can also subscribe to topic pages. (Every page produces an RSS feed.) We’ve begun to do some experimenting with social media. There are lots of people who want to like or follow or friend The New York Times, but a topic-page feed gives you a way of looking at a slice of this audience. It turns the supermarket into a series of niche food stands, so to speak.

LB: The Times obviously has a lot more resources than most local news outlets. Is developing topic pages something of a luxury, or is it something that makes sense to pursue on a more grass-roots level?

JO: At the Times, less than one half of one percent of the newsroom staff is re-purposing the copy. That makes it of lasting value, and makes it more accessible to people who are searching. If you think about a small regional paper, three editors would be a huge commitment. On the other hand, the array of topics on which they produce a significant amount of information that other people don’t is small. There’s a relatively small number of subjects where they feel like, “We really own this, this is key to our readership and important to our coverage.”

If people think of topic pages as the creation of original content on original subjects, it never looks feasible. If you think about it as re-purposing copy on your key subjects, I think it’s something more and more people will do.

February 18 2011

19:30

Chattarati wants to change how we talk about schools

Last month, the state of Tennessee released its comprehensive report card on pre-K-12 education for 2010.

The news wasn’t good. In Hamilton County, the seat of Chattanooga, not only did schools as a group fail to make Adequate Yearly Progress (AYP) goals; not even half of the county’s elementary school students were able to demonstrate grade-level proficiency in math and reading. Overall, the data suggested, 37 percent of Hamilton’s K-12 schools aren’t meeting the (not-terribly-ambitious) education standards set by the federal government.

That’s a problem for Tennessee’s education system. But it’s also, argues one news publisher, a problem for journalism. Chattarati, a community news site for Chattanooga, is trying to do its part to improve its community’s public education system by making the data about that system comprehensible to readers. The broad goal: to change how we talk about schools.

“We wanted to have productive conversations about how the schools and students were performing here in our local county system,” John Hawbaker, Chattarati’s editor, told me. “It’s really easy to look at [the data] and say, ‘Okay, our county system got a D overall.’ You could bemoan it for a few days, and then move on.”

“But that doesn’t help anybody solve the problem. And we all have a vested interest in how the schools perform,” he says. “So it was really important for us to take a deeper look. We wanted to change the conversation.”

To do that, Chattarati’s education editor, Aaron Collier, put together an interactive, graphic depiction of the state report card results. (Chattarati started with math scores at Hamilton County elementary schools, but plans to break the data down further by subject: one graphic for science, another for reading, another for social studies, and so on. The plan is to produce a new graphic, in the same style, every week.) The journalists employed a local freelance designer, DJ Trischler, to design the graphic — it was inspired, Hawbaker told me, by the clean images and bold colors of the graphics in GOOD magazine — and worked together on it over the course of a couple weeks. In their spare time.

“What we knew from the beginning,” Hawbaker says, “is that we wanted to find a visual way to represent the two different measures that schools and students are graded on”: achievement (that is, how much a student learned over a year in relation to an external, set goal) and value-added (that is, year-over-year progress). Of those two, achievement tends to get the most attention, Hawbaker notes; “but I think it paints a really interesting picture — and there’s a lot more you can learn — if you’re able to look at both of them, side by side. So we wanted to represent that visually.”

That led to a grid design that puts the low-achieving, low-value-added schools at the bottom left, and the high-achieving, high-value-added schools at the top right. So you have both overall learning and relative improvement tracked on the same chart. “There’s a lot of data there; you can’t get around it,” Hawbaker notes. “But we tried to present it in a way that was easy to understand.”
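To make that layout concrete, here is a minimal sketch, in Python with matplotlib, of how such an achievement-versus-value-added grid could be drawn. The school names and scores below are invented stand-ins; Chattarati’s actual graphic was a custom design, not generated this way.

```python
import matplotlib.pyplot as plt

# Hypothetical schools with (achievement, value-added) scores;
# the real numbers live in Tennessee's state report card data.
schools = {
    "School A": (35, -2.0),
    "School B": (55, 1.5),
    "School C": (72, 3.2),
    "School D": (48, -0.5),
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (achievement, value_added) in schools.items():
    ax.scatter(achievement, value_added)
    ax.annotate(name, (achievement, value_added),
                textcoords="offset points", xytext=(5, 5))

# Quadrant lines: low/low schools land at the bottom left,
# high/high schools at the top right, as in Chattarati's grid.
ax.axvline(50, color="gray", linewidth=0.8)
ax.axhline(0, color="gray", linewidth=0.8)
ax.set_xlabel("Achievement (share of students at grade level)")
ax.set_ylabel("Value-added (year-over-year progress)")
ax.set_title("Achievement vs. value-added, one point per school")
plt.tight_layout()
plt.show()
```

The virtue of the quadrant layout is that a school’s position tells the story at a glance, before a reader ever parses a number.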

That easy-to-understand aspect is key: Often, challenges in the education system — or, for that matter, problems in any huge, complex bureaucracy — can be amplified by their intimidation factor alone: When we can’t wrap our heads around the problems in the first place, how can we hope to solve them? Complexity fatigue can be one of the biggest, broadest impediments to finding solutions to common problems. The charts Chattarati is building, like their dataviz counterparts at The New York Times and the Chicago Tribune and elsewhere, offer a micro solution to the macro problem: They try to take the “data” out of “dataset,” making sense out of the information they contain. And making that information, overall, less cognitively intimidating.

“We’ve gotten so many private comments: emails, people talking to us,” Hawbaker says. “I had a teacher at my daughter’s school stop me and tell me how much she liked it. It’s been gratifying.”

As Hawbaker and Collier put it in a post announcing the experiment: “The temptation, of course, is to resign ourselves to disparaging talk and absolve ourselves of the school system with the coming of hard news. But with Tennessee’s dramatic shift toward tougher curriculum standards, the success of our schools will depend on an informed, community-wide dialog on some of the challenges they face.”

The site’s experiment is a small but meaningful way to get beyond the statistics — which, they hope, will help empowerment to win out over resignation. “Every step of the way,” the journalists note, “our goal is to equip you to participate in a conversation addressing this question: How can we better serve our students?”

February 16 2011

19:00

Dataviz, democratized: Google opens Public Data Explorer

In 2007, Google acquired Trendalyzer, the data-animation software developed by Gapminder, the Swedish foundation whose specialty is representing data over time. (You may recall Gapminder from this awesome and much-circulated TED talk from 2006.) Since the acquisition, Google has built out the Trendalyzer software to create its Public Data Explorer, a tool that makes large datasets easy to visualize — and, for consumers, to play with. The Explorer has created interactive and dynamic data visualizations of information about traditionally hard-to-grasp concepts like unemployment figures, income statistics, world development indicators, and more. It’s a future-of-context dream.

“It’s about not just looking at data, but really understanding and exploring it visually,” Benjamin Yolken, Google Public Data’s product manager, told me. The project’s overall mission, it’s worth noting, is a kind of macro-meets-meta version of journalism’s: “to make the world’s public data sets accessible and useful.”

The big catch, though, as far as journalism goes, has been that users haven’t been able to do much with the tool besides look at it. If you’ve gathered public data sets that would lend themselves to visualization on the Explorer, you’ve had to contact Google and ask them to visualize it for you. (“While we won’t be able to individually reply to everyone who fills out this form,” a contact form noted, “we may be in touch to learn more about your data.”)

Today, though, that’s changing: Google is opening up its Explorer tool. Yolken and Omar Benjelloun, Google Public Data’s tech lead, have developed a new data format, the Dataset Publishing Language (DSPL), designed specifically to support dynamic dataviz. “DSPL is an XML-based format designed from the ground up to support rich, interactive visualizations like those in the Public Data Explorer,” Benjelloun notes in a blog post announcing the opening. (It’s the same language that the Public Data team had been using internally to produce its datasets and visualizations.) Today, that language — and an interface facilitating data upload — are available for anyone to use, putting the “public” in “public data.”
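For a sense of what the format looks like in practice, here is a rough sketch of a minimal DSPL dataset description, written out from Python so it can sit alongside the CSV file it points to. The element names follow Google’s public DSPL tutorial as best as memory serves; treat the file as illustrative rather than a verified schema, and check the official documentation before relying on any of it.

```python
# A hypothetical, minimal DSPL metadata file: dataset info, one concept
# (the metric), one slice binding it to a time dimension, and one table
# pointing at the CSV that holds the actual numbers.
MINIMAL_DSPL = """\
<?xml version="1.0" encoding="UTF-8"?>
<dspl xmlns="http://schemas.google.com/dspl/2010">
  <import namespace="http://www.google.com/publicdata/dataset/google/time"/>
  <info>
    <name><value>Unemployment rate by year (illustrative)</value></name>
  </info>
  <provider>
    <name><value>Example News Organization</value></name>
  </provider>
  <concepts>
    <concept id="unemployment_rate">
      <info><name><value>Unemployment rate</value></name></info>
      <type ref="float"/>
    </concept>
  </concepts>
  <slices>
    <slice id="rate_by_year">
      <dimension concept="time:year"/>
      <metric concept="unemployment_rate"/>
      <table ref="rate_by_year_table"/>
    </slice>
  </slices>
  <tables>
    <table id="rate_by_year_table">
      <column id="year" type="date" format="yyyy"/>
      <column id="unemployment_rate" type="float"/>
      <data><file format="csv" encoding="utf-8">rate_by_year.csv</file></data>
    </table>
  </tables>
</dspl>
"""

with open("dataset.xml", "w", encoding="utf-8") as f:
    f.write(MINIMAL_DSPL)
```

The division of labor is the point: the XML describes what the data means (concepts, dimensions, metrics), while plain CSV carries the values; that separation is what lets a generic tool like the Explorer animate any dataset it’s handed.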

It’s an experimental feature that, like the Public Data Explorer itself — not to mention some of Google’s most fun features (Google Scribe, Google Body, Google Books’ Ngrams viewer, etc.) — lives under the Google Labs umbrella. And, importantly, it’s a feature, Yolken notes, that “allows users who may or may not have technical expertise to explore, visually, a number of public data sets.”

The newly open tool could be particularly useful for news organizations that would like to get into the dataviz game, but that don’t have the resources — of time, of talent, of money — to invest in proprietary systems. (The papers of the Journal Register Company, a news organization that has made a point of experimenting with free, web-based journalistic tools, comes to mind here — though any news outfit, big or small, could benefit.) The Public Data team had two main goals in opening up the Explorer tool to users, Yolken notes: Increasing the datasets available to be visualized and, then, distributing them. “First, we want to have lots of data sets available that are credible and useful and interesting,” he says. Second, the hope is that the tool’s embedding capabilities will allow for easy sharing of those data sets.

Though the Explorer platform is now open to anyone — and though Yolken and Benjelloun mention teachers and students as groups who might do some interesting experiments with it — they hope that journalists, in particular, will make use of the tool. Even more particularly: “data-driven journalists.”

That said, the tool isn’t as intuitively understandable as, say, the awesomely easy Ngrams book viewer tool — “we realized that, in order to show the data properly, to make the data understandable, you really needed to describe the metadata,” Benjelloun notes — but nor does it require special expertise to use. “This format doesn’t require engineering skills,” Yolken says; then again, “it’s not as easy as a spreadsheet.” It’s somewhere in the middle — akin to learning, say, basic HTML. (Here’s more on how to use it.)

But if journos can get beyond the initial learning curve (one that, for data-driven journos, in particular, won’t be especially steep), they, and their readers, could benefit doubly. The Explorer tool allows users not just to create dynamic data visualizations, but also to avail themselves of a new way to understand those data in the first place. In other words: The tool could prove useful from both the presentation and the production ends of the journalistic spectrum. There’s something about watching data move over time, Yolken notes, that changes your perspective as a consumer of those data. “It makes you start asking questions that you wouldn’t have asked before.”

February 10 2011

19:00

On an embargo-driven beat, science reporters aim to build for context

The events that science journalists publish about most frequently are themselves acts of publishing: the appearance of research papers in peer-reviewed journals. Most journals embargo papers before publication, granting reporters access to unpublished work in exchange for an agreement not to report until the embargo is lifted. Embargoes give reporters time to study new research and seek out commentary from authoritative voices; they also allow journals to exercise power over reporters and to guard their control over the flow of scientific information. Reporters who break embargoes risk losing access to information about new findings, emerging technologies, and exciting discoveries — along with the chance to process and vet those findings to determine whether excitement is warranted.

John Rennie, the former editor-in-chief of Scientific American, is hardly alone in his frustration with the fickle and ever-shifting embargo practices of scientific journals. In a January 26 column in the Guardian, Rennie argues that embargoes encourage superficial and premature reporting on new science. “Out of fear of being scooped,” he writes, news outlets rush their coverage, “publish[ing] stories on the same research papers at the moment the embargo ends. In that stampede of coverage, opportunities for distinctive reporting are few.” As a kind of thought experiment, Rennie suggests that science journalism could answer with self-imposed embargoes, in which news outlets would agree not to report on new journal papers until six months after publication.

As Rennie admits, that isn’t going to happen. Instead, he encourages journalists to experiment with new ways of enriching reporting between embargoes, shooting the gaps with coverage that offers nuance and a broadened perspective from which to judge the significance of new findings.

Consciously looking for context

Having seen John Rennie speak about the problems of embargo-driven journalism at the ScienceOnline 2011 conference last month, British science writer Ed Yong cast about for a way to add context to his coverage of stem cell research, a beat he covers frequently for Wired, Discover, and New Scientist, among other venues. As Paul Raeburn reports at MIT’s Knight Science Journalism Tracker, Yong crafted a timeline to document the field’s major stories from the last few years. Using a free web-based timeline creator he found at Dipity.com, Yong assembled articles from major journals and coverage from science news outlets into an annotated history of the discoveries that have shaped the field. Yong calls his timeline a tool for “looking at the stories that lead up to new discoveries, rather than focusing on every new paper in isolation.” Posted at Yong’s Discover-hosted blog Not Exactly Rocket Science, the timeline is a rich and engaging piece of analysis. It also serves Yong as a resource for further reporting, giving him a baseline from which to judge the significance of emerging science before it comes out from behind its embargoes.
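Dipity is a hosted, point-and-click tool, but the underlying structure of such a timeline is simple: dated entries, each pointing to a paper or a piece of coverage, sorted chronologically. Here is a minimal sketch of that data shape in Python; the entries and example.org links are invented placeholders, not items from Yong’s actual timeline.

```python
import json
from datetime import date

# Hypothetical timeline entries; a real version would link to journal
# papers and news coverage of stem cell research.
events = [
    {"date": date(2009, 1, 23),
     "title": "Regulators clear a first embryonic stem cell trial",
     "link": "https://example.org/first-trial"},
    {"date": date(2007, 11, 20),
     "title": "Skin cells reprogrammed into stem-like cells",
     "link": "https://example.org/reprogramming"},
]

# Sort chronologically, so each new paper can be read against
# everything that came before it rather than in isolation.
events.sort(key=lambda e: e["date"])

# Serialize to JSON for whatever timeline widget will render it.
print(json.dumps(
    [{**e, "date": e["date"].isoformat()} for e in events],
    indent=2,
))
```

The sort is the whole trick: ordering discoveries in time, rather than by embargo schedule, is what turns a pile of papers into a narrative.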

Yong’s tool offers another example of the future-of-context ideas we write about often here at the Lab — like explainer pages and building background into stories, issues that apply across all beats and topics. Similarly, Yong turns a tried and true model of information visualization — the timeline — into a tool for putting any given story in stem cell research into its proper light. And rather neatly, he does it with time as the axis — for time, after all, is precisely what embargoes are all about. It’s just one example, but it’s a conscious attempt to break out of the imposed news cycle of embargo-driven reporting.

The Ingelfinger Rule

In fact, science journalists are squeezed at both ends of the journals’ publishing cycles. In addition to levying embargoes, many journals also observe the so-called Ingelfinger Rule, refusing to publish research that has been reported or commented on elsewhere. Named for former New England Journal of Medicine editor Franz Ingelfinger, the rule was formulated to keep untested health-science findings from making their way into the public sphere before being submitted to the peer-review process — what some call “science by press conference.” But the rule more obviously helps journals protect their revenue sources — and it is for this reason that it has been widely adopted by most science publishers, even those who operate in fields with no public-health ramifications. (The cost of those journals — $27,465 for a year’s subscription to The Journal of Comparative Neurology! — has even the most resource-rich libraries up in arms.)

Ivan Oransky agrees that Yong’s tool is a simple and effective answer to the challenge presented by the journals’ squeeze tactics, calling it “terrific” and “scalable.” Oransky, who is executive editor of Reuters Health and an MD on the faculty at the NYU School of Medicine, runs the blog Embargo Watch, where he covers the uses and misuses of embargo practices in careful detail (and which John Rennie praised in his remarks at the ScienceOnline meeting). And he echoes Rennie’s call for finding ways to do science reporting outside the restrictions imposed by journals. “Journals serve a purpose,” Oransky told me in an email, “by applying the imperfect but valuable filter of peer review. We’d all like to get away from such heavy reliance on them.” With embargoes and the Ingelfinger Rule, he argues, journals exercise a “chilling effect on communication between scientists — many publicly funded — and journalists,” frustrating reporters who try “to move science reporting upstream to cover science before it’s in one of the journals.”

In science journalism’s crowded ecosystem, the double-barreled threat of embargo and the Ingelfinger Rule can have a deranging effect, pressuring serious news outlets to compete for scoops with online aggregators and casual bloggers. And as scientists themselves join the fray in blogs or through social media, the veneer of decorum and collegiality imposed by embargoes is becoming increasingly illusory. On its own, an analytic tool like Ed Yong’s won’t break the deranging control that journals exercise over science coverage. But in striving to report on the practice of science as well as published results, Yong’s combination of web-based publishing tools and knowledgeable reporting makes for a node on a promising timeline.

January 28 2011

17:00

MoJo’s Egypt explainer: future-of-context ideas in action

This week’s unrest in Egypt brings new relevance to an old question: How do you cover an event about which most of your readers have little or no background knowledge?

Mother Jones has found one good way to do that. Its national reporter, Nick Baumann, has produced a kind of on-the-fly topic page about this week’s uprising, featuring a running description of events fleshed out with background explanation, historical context, multimedia features, and analysis. The page breaks itself down into several core categories:

The Basics
What’s Happening?
Why are Egyptians unhappy?
How did this all start?
Why is this more complicated for the US than Tunisia was?
How do I follow what’s happening in real-time?
What’s the latest?

The page also contains, as of this posting, 14 updates informing readers of new developments since the page was first started (at 1 p.m. on Tuesday) and pointing them to particularly helpful and read-worthy pieces of reporting and analysis on other sites.

In all, the MoJo page pretty much takes the Demand Media approach to the production of market-driven content — right down to its content-farm-tastic title: “What’s Happening in Egypt Explained.” The crucial difference, though, is that its content is curated by an expert journalist. In that, the page has a lot in common with the kind of curation done, by Andrew Sullivan and the HuffPost’s Nico Pitney and many others, during 2009’s uprising in Iran. That coverage, though, had an improvised, organic sense to it: We’re figuring this out as we go along. It felt frenzied. The MoJo page, on the other hand, conveys the opposite sensibility: It exudes calmness and control. Here’s what you need to know.

And that’s a significant distinction, because it’s one that can be attributed to something incredibly simple: the page’s layout. The basic design decision MoJo made in creating its Egypt explainer — breaking it down into categories, encyclopedia-style — imposes an order that more traditional attempts at dynamic coverage (liveblogs, Twitter lists, etc.) often lack.

At the same time, the page also extends the scope of traditional coverage. With their space constraints, traditional news narratives have generally had to find artful ways to cater, and appeal, to the widest possible swath of readers. (To wit: that nearly parenthetical explanation of a story’s context, usually tacked onto a text story’s lede or a nut graf.) The web’s limitless space, though, changes the whole narrative proposition of the explainer: The MoJo page rethinks explanation as “information” rather than “narrative.” It’s not trying to be a story so much as a summary. And what’s resulted is a fascinating fusion between a liveblog and a Wikipedia entry.

The MoJo page, of course, isn’t alone in producing creative, context-focused journalism: From topic pages to backgrounders, videos to video games, news organizations are experimenting with lots of exciting approaches to explanation. And it’s certainly not the only admirable explainer detailing the events in Egypt. What’s most noteworthy about MoJo’s Egypt coverage isn’t its novelty so much as its adaptability: It acknowledges, implicitly, that audience members might come into it armed with highly discrepant levels of background information. It’s casually broken down the explainer’s content according to tiers of expertise, as it explains at the top of the page:

This was originally posted at 1:00 p.m. EST on Tuesday. It is being updated and is being kept near the top of the blog. Some of the information near the top of the post may be outdated, and if you’ve been following the story closely, the information at the top will definitely seem very basic. So please scroll to the bottom of the post for the latest.

In a June episode of their “Rebooting the News” podcast, Jay Rosen and Dave Winer discussed the challenge of serving users who come into a story with varying levels of contextual knowledge. One solution they tossed around: a tiered system of news narrative, with Level 1, for example, being aimed at users who come into a story with little to no background knowledge, Level 4 for experts who simply want to learn of new developments in a story.
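One way to picture that tiered system is as a simple data structure: each block of an explainer declares the background level it assumes, and a reader’s self-declared level filters what gets shown. The sketch below is a thought experiment in Python; Rosen and Winer specified nothing like this, and the levels and content are invented.

```python
# Hypothetical tiered explainer: each block declares the background
# level it assumes (1 = total newcomer, 4 = expert following updates).
EXPLAINER = [
    {"level": 1, "heading": "The Basics", "body": "Who, what, where."},
    {"level": 2, "heading": "Why are people protesting?", "body": "..."},
    {"level": 3, "heading": "How the key players line up", "body": "..."},
    {"level": 4, "heading": "The latest developments", "body": "..."},
]

def render_for(reader_level: int) -> str:
    """Newcomers see everything; experts see only what's new to them."""
    blocks = [b for b in EXPLAINER if b["level"] >= reader_level]
    return "\n\n".join(f"{b['heading']}\n{b['body']}" for b in blocks)

print(render_for(1))  # a newcomer gets the full tour
print(render_for(4))  # an expert sees only the newest developments
```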

The MoJo page is a great example of that kind of thinking put to work. The sections Baumann’s used to organize the explainer’s content allow users to have a kind of choose-your-own-adventure interaction with the information offered. They convey, overall, a sense of permissiveness. Know only a little about Egyptian politics? Hey, that’s cool. Know nothing at all? That’s cool, too.

And that’s another noteworthy element of MoJo’s Egypt explainer: It’s welcoming. And it doesn’t, you know, judge.

That’s not a minor thing, for the major reason that stories, when you lack the context to understand them, can be incredibly intimidating. If you don’t know much about Egypt’s current political landscape — or, for that matter, about the world financial system or the recent history of Afghanistan or the workings of Congress — you have very little incentive to read, let alone follow, a story about it. In news, one of the biggest barriers to entry can be simple intimidation. We talk a lot about “engagement” in journalism; one of the most fundamental ways to engage an audience, though, is by doing something incredibly simple: producing work that accommodates ignorance.

December 01 2010

12:30

ProPublica and Jay Rosen’s Studio 20 class at NYU team up to build — and share — “a better explainer”

NYU media guru Jay Rosen is announcing a new partnership between his Studio 20 graduate students and ProPublica. Their goal is to research the most effective ways to unravel complex problems for an online audience, and then build new kinds of explainers to illuminate ProPublica’s research into issues like the foreclosure crisis, finance, healthcare, and the BP oil spill.

It’s an ambitious project, and one that fits Rosen’s goal of transforming journalism schools into the R&D labs of the media industry. As part of the project, the students have launched a website, Explainer.net, that will grow into a database of the best and worst “explainer” techniques from within the news business and beyond. (One of their research projects, for example, was to analyze which media outlets explained the WikiLeaks cable story in the most helpful and compelling ways.)

I sat in on a Studio 20 class on Monday and talked with several of the 16 first-semester graduate students involved in the project. The metaphor they all used, drawn from Rosen’s SXSW panel speech on “the future of context,” was that reading daily news articles can often feel like receiving updates to software that you haven’t actually downloaded to your computer. Without some basic understanding of the larger, ongoing story, the “news” doesn’t actually make much sense. As NYU and ProPublica put it in today’s press release:

Bringing clarity to complex systems so that non-specialists can understand them is the “art” of the explainer. For instance, an explainer for the Irish debt crisis would make clear why a weakness in one country’s banks could threaten the European financial system and possibly the global recovery. A different kind of explainer might show how Medicare billing is designed to work and where the opportunities for fraud lie.

Rosen has been calling for a rigorous rethinking of how media outlets provide context since 2008, and, as Megan has noted previously here at the Lab, ProPublica has put itself at the forefront of explanatory, public-interest reporting. This summer, they redesigned their website with the goal of making it easier for users with different amounts of knowledge about a subject area to teach themselves more about a topic. (They’ve also created a Broadway song about complex financial instruments.) Rosen, who brought several students to pitch the project to ProPublica in late September, said the investigative outfit was immediately enthusiastic about the partnership, which will run through the rest of this academic year.

The Explainer project will approach the problem of understanding complex systems both from the perspective of users trying to gain context on an issue, and that of journalists who need new mediums for telling background stories and sharing data that might not fit into an article format.

For that, the students will divide into three groups tasked with exploring different elements of explanation. One group is interviewing the members of ProPublica’s news team, from reporters to news app builders to the managing editor, in order to understand the organization’s workflow, what it does with the data it collects, and how its reporters explain what they’re learning to themselves as they report a story.

Journalists “love starting from zero and gaining mastery,” Rosen said. “What they disgorge by way of story is quite inadequate to what they learned. Creating containers, formats, genres, tricks, tools to make that knowledge available is part of the project.”

Another group is building Explainer.net’s WordPress website, which sometimes means teaching themselves and each other skills on an ad hoc basis. (Studio 20 is designed to be a learn-as-you-go program, in which a group of students with different specialties share their skills and pick up new ones.)

A third group is researching the different “explainer” genres. They’re starting with examples of good and bad explanatory journalism, from maps and timelines to more specific visualizations like The National Post’s chilling illustration of how a stoning is carried out in Iran. But they’ll also be reaching far outside the media world to research techniques used in many different fields. Rosen suggested that they focus on situations where people “can’t afford to fail,” like crews fixing combat aircraft or NFL teams explaining complicated plays. The students are also looking at the “For Dummies” book franchise and the language-learning software Rosetta Stone.

When I spoke with Rachel Slaff, who’s leading the research group, she said they have found many more examples of failed explainers than background reporting that’s actually working well. It’s not just that some videos are boring, or that a timeline is clunky or a graphic too text-heavy. “The overwhelming theme is: This isn’t actually explaining anything to me. I watched this video or I looked at this chart and I left more confused than I came in,” she said. The major exception has been the BBC, which she said produces consistently effective explainers. The other group favorite has been the RSA Animate video series, in which a hand cheekily illustrates a topic as it’s being explained.

Part of the reason the project is going public so early is to connect with journalists interested in explainer or “future of context” issues. The Studio 20 group will be producing a periodic newsletter with updates on their progress, as well as building a Twitter feed — all ways to broaden the reach of the project, as well as give the graduate students practice in using social media tools.

The “build a better explainer” project is just a first step in figuring out how context-focused reporting will evolve online, Rosen said. Google’s Living Stories and newspaper topic pages are both aimed at a larger, more complicated problem: “Where does the news accumulate as understanding?”

You can think about the accumulation of understanding in terms of a body of text, a URL where different stories are gathered together, or the way that knowledge builds in a single user, Rosen told me. Whatever the potential model, the next question is, “how do we join to the stream-of-updates part of the news system, a second part of the news system, which gives people a sense of mastery over a big story?”

In other words: Once you’ve built a better explainer, the next challenge is building it a place to live.

November 19 2010

17:30

Covering a crisis more like molasses than quicksand

How do you cover a crisis that is not a crisis in the way we generally think of one — sudden, frenzied, tragic — but rather a tragedy that builds, slowly, over time? From the BP oil spill to Haiti’s pre- and post-earthquake heartbreaks to the world financial crisis to the war in Afghanistan — the latest conflict to be nicknamed, with only slight hyperbole, “the forever war” — to the even more insidious crises that are wounded social and political institutions: Some of the most important stories journalists can cover are not singular stories at all, but phenomena that stretch and wind through time. And doing them justice, in every sense, requires not only attention to context and nuance and explanation, but also patience: keeping up with them, unpacking them, and finding ways to sustain reader interest in and outrage about them, over long — sometimes years-long — stretches.

So how do you do it?

In a panel discussion at MIT yesterday evening, co-sponsored by the school’s Communications Forum and its Center for Future Civic Media, four experts tackled that question, considering, from their areas of expertise, the idea of continuity as it relates to journalistic narratives. MIT technology historian Rosalind Williams discussed theoretical approaches to history as a function of human impact; investigative reporter Abrahm Lustgarten discussed his experience covering slow-moving stories for ProPublica; and our own Andrea Pitzer, editor of our sister site, Nieman Storyboard, discussed the role that narrative itself can play in unpacking stories and sustaining consumer interest in them over time.

“Journalism itself is in the midst of a slow-moving crisis,” the panel’s moderator, MIT writing instructor and science journalist Thomas Levenson, noted by way of context for the event. The web allows not only for a new immediacy in news coverage, but also for, of course, a new democratization of it. “There is this lovely possibility here,” he said. But our new tools also necessitate rethinking what “news coverage” is in the first place — and how we think about representing crises that are fluid, rather than solid.

“‘Slow-moving crisis’ is a contradiction,” Williams pointed out; when it comes to what we think of as crises, “our language has not caught up with the events” it tries to describe. The intricacies of the institutions we’ve developed for ourselves lead, she said, to what another MIT historian, Leo Marx, has called a “semantic void”: a state of affairs for which we lack language — words, concepts, frameworks — that can adequately convey import and magnitude.

The word “crisis” itself, Williams pointed out, comes from the Greek krinein (“to separate, to decide”) and originally had the sense of a discrete event: a singular, sudden rip in the continuity of human events. It’s since evolved into something much more amorphous and, thus, hard to capture — the result in part, she said, of a changing attitude toward humans’ relationship with the wider world.

It’s a problem we’ve seen throughout the history of technology, Williams noted, the result of an evolving recognition that the world is not a constant in the great equation of human experience, but rather another variable. In the past, she said, our general concept of history was rooted in the idea that history itself “consists of deeds and words that take place on the stable stage that is the world” — and the stage itself was predictable and solid, in contrast to the frailty of the human condition. Now, though, we generally recognize the universality of movement: Our context moves with us.

What that can lead to, though, when it comes to narrative, is a kind of reductive continuity. “If the sun never sets on history, then historians are really challenged,” she said — as are, of course, journalists. When there’s no arc to maintain, no ending to know, there’s no conclusion by which to calibrate context. There’s nothing to root our narratives. “If it’s a never-ending story, then you don’t understand the world — you don’t understand human life,” Williams said.

What we can understand, though, is the present moment. So it’s incumbent on us, Williams concluded, to try to match our rhetoric to the realities of the movements of history. “The point is to join up the crisis-feeling,” she said, echoing William Empson, with the realities of lived experience.

And, for that, simple storytelling — the ancient art of weaving together characters and plots and excitement — can be crucial, Pitzer said. Narrative “is really how people understand public crisis,” she noted; “it’s how they understand public policy issues.” Study after study has suggested the power of story not just as an artistic product, but as a cognitive function. Narrative buys people’s attention and allows them to retain complex information longer, she said. It is a teaching tool as much as an aesthetic feature. Indeed, if we journalists don’t provide narratives in our work, Pitzer noted — if we don’t consciously weave disparate facts into a recognizable arc of action — “we are in some way denying them the ability to understand.”

Lustgarten applied that idea to his own coverage of the recent BP oil spill — a series of related reports that he and his ProPublica colleagues tackled (and are still tackling) over a long stretch of time. ProPublica’s aim with its stories, he said, is to capture reader interest with “a bit of a drumbeat of communication”: “rather than have one critical climax,” the idea is to “publish again and again, in incremental bits,” to help readers “find a pathway through the clamor that’s so distracting.”

In the outlet’s BP reporting, realizing that idea “was an exercise in commitment,” he said — a challenge of setting, and then sticking to, a vision for contextual, continual coverage rather than discrete reports. There were certainly moments, Lustgarten noted, when smaller scoops threatened to distract them; “it was very difficult to stay focused on what we had decided to do,” he said. “It was very difficult to stay disciplined.”

As to the broad question of completeness — how do you define an “event” against other events? How do you know when your reporting is finished? — Lustgarten gave a nod to Williams’ dissatisfaction with our current framings. ProPublica’s unique setup allows its reporters to see stories through “until their organic conclusion,” he noted; but determining that end point is a matter of serendipity and sensibility as much as anything else. Often, the conclusion comes down to reporter interest, he said. There are no clear borders between story and not-story.

So it was more fitting than ironic that the panel didn’t reach a conclusion in its own discussions. How could it? It highlighted, though, an idea that any journalist can put into practice: the crucial necessity of the long-view mindset, the insistence on placing even the most seemingly isolated events into the broader context of history. Always asking, in other words, “Why does this matter?” And that may involve being selective about the this we share. As Williams put it: “Our biggest responsibility is to determine which facts are worthy of being discovered.”

November 17 2010

19:30

The neverending broadcast: Frontline looks to expand its docs into a continual conversation

Frontline, PBS’s public affairs documentary series, has one of the best reputations in the business for the things that journalism values most highly: courageous reporting, artful storytelling, the kind of context-heavy narrative that treats stories not simply as stories, but as vehicles of wisdom. It’s a “news magazine” in the most meaningful sense of the term.

But even an institution like Frontline isn’t immune to the disruptions of the web. Which is to say, even an institution like Frontline stands to benefit from smart leveraging of the web. The program’s leadership team is rethinking its identity to marry what it’s always done well — produce fantastic broadcasts — with something that represents new territory: joining the continuous conversation of the web. To that end, Frontline will supplement its long-form documentaries with shorter, magazine-style pieces — which require a shorter turnaround time to produce — and with online-only investigations. (The site’s motto: “Thought-provoking journalism on air and online.”)

But it’s also expanding its editorial efforts beyond packaged investigations, hoping to shift its content in a more discursive direction. Which leads to a familiar question, but one that each organization has to tackle in its own way: How do you preserve your brand and your value while expanding your presence in the online world?

One tool Frontline is hoping can help answer that question: Twitter. And not just Twitter, the conversational medium — though “we really want to be part of the journalism conversation,” Frontline’s senior producer, Raney Aronson-Rath, told me — but also Twitter, the aggregator. This afternoon, Frontline rolled out four topic-focused Twitter accounts — “micro-feeds,” it’s calling them:

Conflict Zones & Critical Threats (@FrontlineCZCT), which covers national security and shares the series’ conflict-zone reporting;

Media Watchers (@FrontlineMW), which tracks news innovation and the changing landscape of journalism;

Investigations (@FrontlineINVSTG), which covers true crime, corruption, and justice — spotlighting the best investigative reporting by Frontline and other outlets; and

World (@FrontlineWRLD), which covers international affairs.

The topic-focused feeds are basically a beat system, applied to Twitter. They’re a way of leveraging one of the core strengths of Frontline’s journalism: its depth. Which is something that would be almost impossible for Frontline, Aronson-Rath notes, to achieve with a single feed. So “we decided that the best thing for us was to be really intentional about who we were going to reach out to and what kind of topics we were going to tweet about — and not just have it be a promotional tool.”

Each feed will be run by a two-person team, one member from the editorial side and the other from the promotional — under the broad logic, Aronson-Rath notes, that those two broad fields are increasingly collapsing into each other. And, even more importantly, that “all the work that we do in the social media landscape is, by its very essence, editorial.” Even something as simple as a retweet is the result of an editorial decision — and one that requires the kind of contextual judgment that comes from deep knowledge of a given topic.

So Frontline’s feed runners, Aronson-Rath notes, “are also the people who have, historically, been working in those beats in Frontline’s broadcast work.” (Frontline communications manager Jessica Smith, for example, who’ll be helping to run the “Conflict Zones” feed, covered that beat previously, cultivating the conversation between Frontline and the national security blogosphere as part of the program’s earlier web efforts.) In other words: “These guys know what they’re doing on these beats.”

To that end, the teams’ members will be charged with leveraging their knowledge to curate content from the collective resources of all of Frontline’s contributors — from reporters to producers, public media partners to internal staff — and, of course, from contributors across the web. The teams will work collaboratively to produce their tweets (they’ll even sit next to each other to maximize the teamwork). And some feeds will contain not just curated content, but original reporting as well. Frontline reporters Stephen Grey and Murray Smith are about to head to Afghanistan; while they’re there, they’ll tweet from @FrontlineCZCT. (They’ll tweet from personal feeds as well, which @FrontlineCZCT curators will pull into the Frontline-branded feed.)

The broad idea behind the new approach is that audiences identify with topics as much as they do with brands. And there’s also, of course, the recognition of the sea of material out there which is of interest to consumers, but which ends up, documentary filmmaking being what it is, on the cutting-room floor. The new approach, it’s hoped, will give Frontline fans a behind-the-scenes look into the film production process. “You wouldn’t actually know where Frontline’s reporting teams are right now,” Aronson-Rath points out. “You only know when we show up.” Now, though, “when a team goes into Afghanistan, we’re going to let you know where they are. We’re going to give you some intelligence about what they’re doing. And it’ll be a completely different level of a conversation, we’re hoping.”

It’ll also be a different level of engagement — for Frontline’s producers and its consumers. It’s a small way of expanding the idea of what a public affairs documentary is, and can be, in the digital world: a process, indeed, as much as a product. “We think,” Aronson-Rath says, “that this is going to help keep our stories alive.”

September 15 2010

17:00

Twitter as broadcast: What #newtwitter might mean for networked journalism

So Twitter.com’s updated interface — #newtwitter, as the Twittery hashtag goes — is upon us. (Well, upon some of us.)

The most obvious, and noteworthy, changes involved in #newtwitter are (a) the two-panel interface, which — like Tweetdeck and Seesmic and other third-party apps — emphasizes the interactive aspects of Twitter; and (b) the embeddable media elements: YouTube videos, Flickr photos (and entire streams!), Twitpics, etc. And the most obvious implications of those changes are (a) the nice little stage for advertising that the interface builds up; and (b) the threat that #newtwitter represents to third-party apps.

Taken together, those point to a broader implication: Twitter.com as an increasingly centralized space for information. And even, for our more specific purposes, news. Twitter itself, as Ev Williams put it during the company’s announcement of @anywhere, is “an information network that helps people understand what’s going on in the world that they care about.” And #newtwitter, likely, will help further that understanding. From the point of view of consumption, contextual tweets — with images! and videos! — will certainly create a richer experience for users, from both a future-of-context perspective and a more pragmatic usability-oriented one. But what about from the point of view of production — the people and organizations who feed Twitter?

The benefits of restriction

We commonly call Twitter a “platform,” the better to emphasize its emptiness, its openness, its agnosticism. More properly, though, Twitter is a medium, with all the McLuhanesque implications that term suggests. The architecture of Twitter as an interface necessarily affects the content its users produce and distribute.

And one of the key benefits of Twitter has been the fact of its constraint — which has also been the fact of its restraint. The medium’s character limitation has meant that everyone, from the user with two friends following her to the million-follower-strong media organizations, has had the same space, the same tools, to work with. Twitter has democratized narrative even more than blogs have, you could argue, because its interface — your 140 characters next to my 140 characters next to Justin Bieber’s 140 characters, all sharing the space of the screen — has been not only universal, but universally restricted. The sameness of tweets’ structures, and the resulting leveling of narrative authority, has played a big part in Twitter’s evolution into the medium we know today: throngs of users, relatively unconcerned with presentation, relatively un-self-conscious, reporting and sharing and producing the buzzing, evolving resource we call “news.” Freed of the need to present information “journalistically,” they have instead presented it organically. Liberation by way of limitation.

So what will happen when Twitter, the organism, grows in complexity? What will take place when Twitter becomes a bit more like Tumblr, with a bit of its productive limitation — text, link, publish — taken away?

The changes Twitter’s rolling out are not just cosmetic; embedded images and videos, in particular, are far more than mere adornment. A link is fundamentally, architecturally, different from an image or a video. Links are bridges: structures unto themselves, sure, but more significantly routes to other places — they’re both conversation and content, endings and beginnings at once. An image or a video, on the other hand, from a purely architectural perspective, is an end point, nothing more. It leads nowhere but itself.

For a Twitter interface newly focused on image-based content, that distinction matters. Up until now, the only contextual components of a tweet — aside from the peripheral metadata like “time sent,” “retweeted by,” etc. — have been the text and the link. The link may have led to more text or images or videos; but it also would have led to a different platform. Now, though, within Twitter itself, we’re seeing a shift from text-and-link toward text-and-image — which is to say, away from conversation and toward pure information. Which is also to say, away from communication…and toward something more traditionally journalistic. Tweets have always been little nuggets of narrative; with #newtwitter, though, individual tweets get closer to news articles.

We’ve established already that Twitter is, effectively if not officially, a news platform unto itself. #Newtwitter solidifies that fact, and then doubles down on it: It moves the news proposition away from a text-based framework…and toward an image-based one. If #twitterclassic established itself as a news platform, in other words, #newtwitter suggests that the news in question may increasingly be of the broadcast variety.

“What are you doing?” to “What’s happening?”

“Twttr” began as a pure communications platform: text messages, web-ified. The idea was simply to take the ephemeral interactions of SMS and send them to — capture them in — the cloud. The point was simplicity, casualness. (Even its name celebrated that idea: “The definition [of Twitter] was ‘a short burst of inconsequential information,’ and ‘chirps from birds,’” Jack Dorsey told the Los Angeles Times. “And that’s exactly what the product was.”)

The interface that rolled out last night — and that will continue rolling out over the next couple of weeks to users around the world — bears little resemblance to that initial vision of Twitter as captured inconsequence. Since its launch (okay, okay: its hatch), Twitter has undergone a gradual, but steady, evolution — from ephemeral conversations to more consequential information. (Recall the change in the web interface’s prompt late last year, from “What are you doing?” to “What’s happening?” That little semantic shift — from an individual frame to a universal one — marked a major shift in how Twitter shapes its users’ conception, and therefore use, of the platform. In its way, that move foreshadowed today’s new interface.) Infrastructural innovations like Lists have heightened people’s awareness of their status not simply as communicators, but as broadcasters. The frenzy of breaking-news events — from natural disasters like Haiti’s earthquake to political events like last summer’s Iranian “revolution” — has highlighted Twitter’s value as a platform for information dissemination that transcends divisions of state. It has also reinforced users’ conception of their own tweets: visible to your followers, but visible, also, to the world. It’s always been the case, but it’s increasingly apparent: Each tweet is its own little piece of broadcast journalism.

What all that will mean for tweets’ production, and consumption, remains to be seen; Twitterers, end-user innovation-style, have a way of deciding for themselves how the medium’s interface will, and will not, be put into practice. And Twitter is still, you know, Twitter; it’s still, finally and fundamentally, about communication. But the smallness, the spareness, the convivial conversation that used to define it against other media platforms is giving way — perhaps — to the more comprehensive sensibility of the networked news organization. The Twitter.com of today, as compared to the Twitter.com of yesterday, is much more about information that’s meaningful and contextual and impactful. Which is to say, it’s much more about journalism.

August 19 2010

15:00

The kids are alright, part 2: What news organizations can do to attract, and keep, young consumers

[Christopher Sopher is a senior at the University of North Carolina, where he is a Morehead-Cain Scholar and a Truman Scholar. He has been a multimedia editor of the Daily Tar Heel and has worked for the Knight Foundation. His studies have focused on young people’s consumption of news and participation in civic life, which have resulted in both a formal report and an ongoing blog, Younger Thinking.

We asked Chris to adapt some of his most relevant findings for the Lab, which he kindly agreed to do. We posted Part 1 yesterday; below is Part 2. Ed.]

Now that I have exhorted all of you to care about young people and their relationship with the news media, it’s worth examining a few of the most pertinent ideas about getting more of my peers engaged: the gap between young people’s reported interest in issues and their interest in news, the need for tools to help organize the information flow, and the crucial role of news in schools and news literacy.

A gap between interest and news consumption

The data seem to suggest that young people are simultaneously interested and uninterested in the world around them. For example, a 2007 Pew survey [pdf] found that 85 percent of 18-to-29-year-olds reported being interested in “keeping up with national affairs” — a significant increase from 1999. Yet in a 2008 study [pdf], just 33 percent of 18-to-24-year-olds (and 47 percent of people aged 25 to 34) said they enjoyed keeping up with news “a lot.” Young people also tend to score lower on surveys of political knowledge — all of which suggests that their information habits are not matching their reported interests.

There are a few compelling explanations for this apparent contradiction (beyond people’s general desire to provide socially agreeable responses). The first is that many young people may not see a consistent connection between regularly “getting the news” and staying informed about the issues that interest them. If we accept that most young people get their news at random intervals (and the overwhelming body of evidence suggests that this is the case), it’s easy to see how reading a particular day’s New York Times story about health care reform, for example, might be rather confusing if you haven’t been following the coverage regularly.

Many young people also report feelings of monotony with day-to-day issue coverage and a distaste for the process focus of most politics coverage. Some share the sentiments (about which Gina Chen has written here at the Lab) of the now-famous, if anonymous, college student who said, “If the news is that important, it will find me.” The cumulative effect of these trends is that young people go elsewhere to “keep up”: to Wikipedia articles, to friends and family, to individual pieces of particularly helpful content shared through social networks.

The “too much information” problem

Several studies have highlighted the fact that many young people feel overwhelmed by the deluge of information presented on news sites. (My two favorite pieces on this are both from the Media Management Center, found here and here [pdf].)

This sentiment is understandable: On one day I counted, the New York Times’ homepage offered 28 stories across four columns above the scroll cutoff and another 95 below it — for a total of 123 stories, along with 66 navigation links on the lefthand bar. CNN.com also had 28 stories on top and 127 total, along with 15 navigation links. Imagine a newspaper with that many choices.

The point is that news sites need to be designed to help users manage and restrict the wealth of information, rather than presenting them with all of it at once. People can do, and are doing, the work of “curation” on their own, of course, through iGoogle, Twitter, RSS, and social networks both online and off — but those efforts leave behind the vast majority of news outlets. Better design allows news organizations to include the kind of context and background and explanation — not to mention personalization features — that younger audiences find helpful. That idea isn’t new, but its importance for young people cannot be overstated.
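
To make that design argument concrete, here is a minimal sketch, in Python, of the kind of topic-based personalization described above; the Story and ReaderProfile structures and the ranking rule are hypothetical illustrations of the idea, not any outlet’s actual system:

    from dataclasses import dataclass, field

    @dataclass
    class Story:
        headline: str
        topic: str                   # e.g. "health care", "education"
        is_background: bool = False  # an explainer piece rather than a daily update

    @dataclass
    class ReaderProfile:
        interests: set = field(default_factory=set)

    def personalized_front_page(stories, reader, limit=10):
        """Restrict the flood of stories to the reader's interests,
        surfacing background pieces next to the day's updates
        instead of presenting all 123 links at once."""
        matched = [s for s in stories if s.topic in reader.interests]
        matched.sort(key=lambda s: s.is_background)  # updates first, explainers after
        return matched[:limit]

A reader who lists only “health care,” say, would see at most ten stories on that topic, with the relevant explainers close at hand, rather than a wall of a hundred-plus links.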

Schools, news, and news literacy

News organizations need to learn from soda and snack producers and systematically infiltrate schools across the country with their products. There’s strong evidence that news-based, experiential, and interactive course design [pdf] — as well as the use of news in classrooms and the presence of strong student-produced publications — can increase the likelihood that students will continue to seek news regularly in the future.

Many teachers are already using news [pdf] in their classrooms, but face the pressures of standardization and an apparent lack of support from administrations. A 2007 Carnegie-Knight Task Force study [pdf] also found that most teachers who do use news content in their curricula direct their students to online national outlets (such as CNN or NYTimes.com) rather than local sites, which suggests that local news organizations need to focus on building a web-based presence in schools. The Times Learning Network is an excellent model.

And when news media finally fill school halls like so much Pepsi (or, now, fruit juice), young people themselves will also need help to navigate content and become savvy consumers, which is where news literacy programs become important. The Lab’s own Megan Garber has explained their value eloquently in a piece for the Columbia Journalism Review: “The bottom line: news organizations need to make a point of seeking out young people — and of explaining to them what they do and, perhaps even more importantly, why they do it. News literacy offers news organizations the opportunity to essentially re-brand themselves.” The News Literacy Project, started by a Pulitzer-winning former Los Angeles Times reporter, is a leading example.

The point of these ideas is that there are significant but entirely surmountable obstacles to getting more young people engaged with news media — a goal with nearly universal benefits that has received far too little attention from news organizations.

I’ll conclude with a quote from NYU professor Jay Rosen, buried inside the 2005 book Tuned Out: “Students don’t grow up with the religion of journalism; they don’t imbibe it in the same way that students used to. Some do, but a lot don’t.” Changing that is the difficult but urgent challenge. I don’t want to be that guy who says “_____ will save journalism,” so I’ll just say this: It’s really, really, really important.

And I should probably mention that there are hundreds of recent journalism school graduates who would be more than willing to help.

Image by Paul Mayne, used under a Creative Commons license.

August 16 2010

14:30

The Guardian launches governmental pledge-tracking tool

Since it came to office nearly 100 days ago, Britain’s coalition government — a team-up between Conservatives and Liberal Democrats that had the potential to be awkward and ineffective, but has instead (if The Economist’s current cover story is to be believed) emerged as “a radical force” on the world stage — has made 435 pledges, big and small, to its constituents.

In the past, those pledges might have gone the way of so many campaign promises: broken. But no matter — because they would also have been largely forgotten.

The Guardian, though, in keeping with its status as a data journalism pioneer, has released a tool that tries to solve the former problem by way of the latter. Its pledge-tracker, a sortable database of the coalition’s various promises, monitors the myriad pledges made according to their individual status of fulfillment: “In Progress,” “In Trouble,” “Kept,” “Not Kept,” etc. The pledges tracked are sortable by topic (civil liberties, education, transport, security, etc.) as well as by the party that initially proposed them. They’re also sortable — intriguingly, from a future-of-context perspective — according to “difficulty level,” with pledges categorized as “difficult,” “straightforward,” or “vague.”

Status is the key metric, though, and assessments of completion are marked visually as well as in text. The “In Progress” note shows up in green, for example; the “Not Kept” shows up in red. Political accountability, meet traffic-light universality.
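
As a rough illustration of the data model such a tracker implies (the field names, and the amber color for “In Trouble,” are my assumptions; only the green and red mappings are described above), a pledge record and its traffic-light rendering might look like this in Python:

    from dataclasses import dataclass

    # Hypothetical status vocabulary; The Guardian's actual schema isn't public here.
    STATUS_COLORS = {
        "Kept": "green",
        "In Progress": "green",
        "In Trouble": "amber",   # assumed; the article names only green and red
        "Not Kept": "red",
    }

    @dataclass
    class Pledge:
        text: str
        topic: str        # e.g. "civil liberties", "education", "transport"
        party: str        # which coalition partner proposed it
        difficulty: str   # "difficult", "straightforward", or "vague"
        status: str       # one of STATUS_COLORS' keys
        context: str = "" # room to explain what the pledge actually means

    def by_topic(pledges, topic):
        """Sortable-database behaviour: filter the 435 pledges down to one beat."""
        return [p for p in pledges if p.topic == topic]

    def render_status(pledge):
        """Mark completion in text and in traffic-light color."""
        return f"{pledge.status} ({STATUS_COLORS[pledge.status]})"

Sorting by party or difficulty reduces to the same kind of predicate over the full list, which is essentially what the tool’s front end exposes.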

The tool “needs to be slightly playful,” notes Simon Jeffery, The Guardian’s story producer, who oversaw the tool’s design and implementation. “You need to let the person sitting at the computer actually explore it and look at what they’re interested in — because there are over 400 things in there.”

The idea was inspired, Jeffery wrote in a blog post explaining the tool, by PolitiFact’s Obameter, which uses a similar framework for keeping the American president accountable for individual promises made. Jeffery came up with the idea of a British-government version after May’s general election, which not only gave the U.S.’s election a run for its money in terms of political drama, but also occasioned several interactive projects from the paper’s editorial staff. They wanted to keep that multimedia trajectory going. And when the cobbled-together new government’s manifesto for action — a list of promises agreed to and offered by the coalition — was released as a single document, the journalists had, essentially, an instant data set.

“And the idea just came from there,” Jeffery told me. “It seemed almost like a purpose-made opportunity.”

Jeffery began collecting the data for the pledge-tracker at the beginning of June, cutting and pasting from the joint manifesto’s PDF documents. Yes, manually. (“That was…not much fun.”) In a tool like this — which, like PolitiFact’s work, merges subjective and objective approaches to accountability — context is crucial. Which is why the pledge-tracking tool includes with each pledge a “Context” section: “some room to explain what this all means,” Jeffery says. That allows for a bit of gray (or, since we’re talking about The Guardian, grey) to seep, productively, into the normally black-and-white constraints that define so much data journalism. One health care-related pledge, for example — “a 24/7 urgent care service with a single number for every kind of care” — offers this helpful context: “The Department of Health draft structural reform plan says preparations began in July 2010 and a new 111 number for 24/7 care will be operational in April 2012.” It also offers, for more background, a link to the reform plan.

To aggregate that contextual information, Jeffery consulted with colleagues who, by virtue of daily reporting, are experts on immigration, the economy, and the other topics covered by the manifesto’s pledges. “So I was able to work with them and just say, ‘Do you know about this?’ ‘Do you know about that?’ and follow things up.”

The tool isn’t perfect, Jeffery notes; it’s intended to be “an ongoing thing.” The idea is to provide accountability that is, in particular, dynamic: a mechanism that allows journalists and everyone else to “go back to it on a weekly or fortnightly basis and look at what has been done — and what hasn’t been done.” Metrics may change, he says, as the political situation does. In October, for example, the coalition government will conclude an external spending review that will help crystallize its upcoming budget, and thus political, priorities — a perfect occasion for tracker-based follow-up stories. But the goal for the moment is to gather feedback and work out bugs, “rather than having a perfectly finished product,” Jeffery says. “So it’s a living thing.”
