
June 26 2013

16:48

What’s New in Digital Scholarship: A generation gap in online news, and does The Daily Show discourage tolerance?

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

We’re at the halfway mark in our year-long odyssey tracking all things digital media and academic. Below are studies that continue to advance understanding of various hot topics: drone journalism; surveillance and the public; Twitter in conflict zones; Big Data and its limits; crowdsourced information platforms; remix culture; and much more. We also suggest some further “beach reads” at the bottom. Enjoy the deep dive.

“Reuters Institute Digital News Report 2013: Tracking the Future of News”: Paper from University of Oxford Reuters Institute for the Study of Journalism, edited by Nic Newman and David A. L. Levy.

This new report provides tremendous comparative perspective on how different countries and news ecosystems are developing in both symmetrical and divergent ways (see the Lab’s write-up of the national differences and similarities highlighted). But it also provides some interesting hard numbers relating to the U.S. media landscape; it surveys the news habits of a sample of more than 2,000 Americans.

Key U.S. data points include: the number of Americans reporting accessing news by tablet in the past week rose from 11 percent in 2012 to 16 percent in 2013; 28 percent said they accessed news on a smartphone in the last week; 75 percent of Americans reported accessing news online in the past week, while 72 percent said they got news through television and 47 percent reported having read a print publication; TV (43 percent) and online (39 percent) were Americans’ preferred platforms for accessing news. Further, a yawning divide exists between the preferences of those ages 18 to 24 and those over 55: among the younger cohort, 64 percent say the Web is their main source for news, versus only 25 percent among the older group; as for TV, however, 54 percent of older Americans report it as their main source, versus only 20 percent among those 18 to 24. Finally, 12 percent of American respondents overall reported paying for digital news in 2013, compared to 9 percent in 2012.

“The Rise and Fall of a Citizen Reporter”: Study from Wellesley College, for the WebScience 2013 conference. By Panagiotis Metaxas and Eni Mustafaraj.

This study looks at a network of anonymous Twitter citizen reporters around Monterrey, Mexico, covering the drug wars. It provides new insights into conflict zone journalism and information ecosystems in the age of digital media, as well as the limits of raw data. The researchers, both computer scientists, analyze a dataset focused on the hashtag #MTYfollow, consisting of “258,734 tweets written by 29,671 unique Twitter accounts, covering 286 days in the time interval November 2010-August 2011.” They drill down on the account @trackmty, run under the pseudonym Melissa Lotzer, which is the largest of the accounts involved.

The scholars reconstruct a sequence in which a wild Twitter “game” breaks out — obviously, with life-and-death stakes — involving accusations about cartel informants (“hawks,” or “halcones”) and citizen watchdogs (“eagles,” or “aguilas”), with counter-accusations flying that certain citizen reporters were actually working for the Zetas drug cartel; indeed, @trackmty ends up being accused of working for the cartels. Online trolls attack her on Twitter and in blogs.

“The original Melissa @trackmty is slow to react,” the study notes, “and when she does, she tries to point to her past accomplishments, in particular the creation of [a group of other media accounts] and the interviews she has given to several reporters from the US and Spain (REF). But the frequency of her tweeting decreases, along with the community’s retweets. Finally, at the end of June, she stops tweeting altogether.” It turns out that the real @trackmty had been exposed — “her real identity, her photograph, friends and home address.”

Little of this drama was obvious from the data. Ultimately, the researchers were able to interview the real @trackmty and members of the #MTYfollow community. The big lessons, they realize, are the “limits of Big Data analysis.” The data visualizations showing influence patterns and spikes in tweet frequency revealed all kinds of interesting dynamics. But they were insufficient to make inferences of value about the community affected: “In analyzing the tweets around a popular hashtag used by users who worry about their personal safety in a Mexican city we found that one must go back and forth between collecting and analyzing many times while formulating the proper research questions to ask. Further, one must have a method of establishing the ground truth, which is particularly tricky in a community of — mostly — anonymous users.”

“Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory”: Study from Ohio State University, published in the Journal of Communication. By R. Kelly Garrett, Erik C. Nisbet, and Emily K. Lynch.

As the political fact-checking movement — the FactChecks and Politifacts, along with their various lesser-known cousins — has arisen, so too has a more hard-headed social science effort to get to the root causes of persistent lies and rumors, a situation made all the worse on the web. Of course, journalists hope truth can have a “corrective” effect, but the literature in this area suggests that blasting more facts at people often doesn’t work — hence, the “information deficit fallacy.” Thus, a cottage psych-media research industry has grown up, exploring “motivated reasoning,” “biased assimilation,” “confirmation bias,” “cultural cognition,” and other such concepts.

This study tries to advance understanding of how peripheral cues such as accompanying graphics and biographical information can affect how citizens receive and accept corrective information. In experiments, the researchers ask subjects to respond to claims about the proposed Islamic cultural center near Ground Zero and the disposition of its imam. It turns out that contextual information — what the imam has said, what he looks like, and anything that challenges dominant cultural norms — often erodes the positive intentions of the fact-checking message.

The authors conclude that the “most straightforward method of maximizing the corrective effect of a fact-checking article is to avoid including information that activates stereotypes or generalizations…which make related cognitions more accessible and misperceptions more plausible.” The findings have a grim quality: “The unfortunate conclusion that we draw from this work is that contextual information so often included in fact-checking messages by professional news outlets in order to provide depth and avoid bias can undermine a message’s corrective effects. We suggest that this occurs when the factually accurate information (which has only peripheral bearing on the misperception) brings to mind” mental shortcuts that contain generalizations or stereotypes about people or things — so-called “naïve theories.”

“Crowdsourcing CCTV surveillance on the Internet”: Paper from the University of Westminster, published in Information, Communication & Society. By Daniel Trottier.

A timely look at the implications of a society more deeply pervaded by surveillance technologies, this paper analyzes various web-based efforts in Britain that involve the identification of suspicious persons or activity. (The controversies around Reddit and the Boston Marathon bombing suspects come to mind here.) The researcher examines Facewatch, CrimeStoppers UK, Internet Eyes, and Shoreditch Digital Bridge, all of which had commercial elements attached to crowdsourcing projects where participants monitored feeds from surveillance cameras of public spaces. He points out that these “developments contribute to a normalization of participatory surveillance for entertainment, socialization, and commerce,” and that the “risks of compromised privacy, false accusations and social sorting are offloaded onto citizen-watchers and citizen-suspects.” Further, the study highlights the perils inherent in the “‘gamification’ of surveillance-based labour.”

“New Perspectives from the Sky: Unmanned aerial vehicles and journalism”: Paper from the University of Texas at Arlington, published in Digital Journalism. By Mark Tremayne and Andrew Clark.

The use of unmanned aerial vehicles (UAVs, or “drones”) in journalism is an area of growing interest, and this exploration provides some context and research-based perspective. Drones in the service of the media have already been used for everything from snapping pictures of Paris Hilton and surveying tornado-damaged areas in Alabama to filming secret government facilities in Australia and protester clashes in Poland. In all, the researchers found “eight instances of drone technology being put to use for journalistic purposes from late 2010 through early 2012.”

This practice will inevitably raise issues about the extent to which it goes too far. “It is not hard to imagine how the news media, using drones to gather information, could be subject to privacy lawsuits,” the authors write. “What the news media can do to potentially ward off the threat of lawsuits is to ensure that drones are used in an ethical manner consistent with appropriate news practices. News directors and editors and professional associations can establish codes of conduct for the use of such devices in much the same way they already do with the use of hidden cameras and other technology.”

“Connecting with the user-generated Web: how group identification impacts online information sharing and evaluation”: Study from University of California, Santa Barbara, published in Information, Communication & Society. By Andrew J. Flanagin, Kristin Page Hocevar, and Siriphan Nancy Samahito.

Whether it’s Wikipedia, Yelp, TripAdvisor, or some other giant pool of user-generated “wisdom,” these platforms convene large, disaggregated audiences who form loose memberships based around apparent common interests. But what makes certain communities bond and stick together, keeping online information environments fresh, passionate, and lively (and possibly accurate)?

The researchers perform experiments with undergraduates to see how adding small bits of personal information — university, major, gender, and the like — to informational posts changes viewers’ perceptions. Perhaps predictably, the results show that “potential contributors had more positive attitudes (manifested in the form of increased motivation) about contribution to an online information pool when they experienced shared group identification with others.”

For editors and online community designers and organizers, the takeaway is that information pools “may actually form and sustain themselves best as communities comprising similar people with similar views.” Not exactly an antidote to “filter bubble” fears, but it’s worth knowing if you’re an admin for an online army.

“Selective Exposure, Tolerance, and Satirical News”: Study from University of Texas at Austin and University of Wyoming, published in the International Journal of Public Opinion Research. By Natalie J. Stroud and Ashley Muddiman.

While not the first study to focus on the rise of satirical news — after all, a 2005 study in Political Communication on “The Daily Show with Jon Stewart” now has 230 subsequent academic citations, according to Google Scholar — this new study looks at satirical news viewed specifically in a web context.

It suggests the dark side of snark, at least in terms of promoting open-mindedness and deliberative democracy. The conclusion is blunt: “The evidence from this study suggests that satirical news does not encourage democratic virtues like exposure to diverse perspectives and tolerance. On the contrary, the results show that, if anything, comedic news makes people more likely to engage in partisan selective exposure. Further, those viewing comedic news became less, not more, tolerant of those with political views unlike their own.” Knowing Colbert and Stewart, the study’s authors can expect an invitation soon to atone for this study.

“The hidden demography of new media ethics”: Study from Rutgers and USC, published in Information, Communication & Society. By Mark Latonero and Aram Sinnreich.

The study leverages 2006 and 2010 survey data, both domestic and international, to take an analytical look at how notions of intellectual property and ethical Web culture are evolving, particularly as they relate to ideas such as remixing, mashups and repurposing of content. The researchers find a complex tapestry of behavioral norms, some of them correlated with certain age, gender, race or national traits. New technologies are “giving rise to new configurable cultural practices that fall into the expanding gray area between traditional patterns of production and consumption. The data suggest that these practices have the potential to grow in prevalence in the United States across every age group, and have the potential to become common throughout the dozens of industrialized nations sampled in this study.”

Further, rules of the road have formed organically, as technology has outstripped legal strictures: “Most significantly, despite (or because of) the inadequacy of present-day copyright laws to address issues of ownership, attribution, and cultural validity in regard to emerging digital practices, everyday people are developing their own ethical frameworks to distinguish between legitimate and illegitimate uses of reappropriated work in their cultural environments.”

Beach reads:

Here are some further academic paper honorable mentions this month — all from the culture and society desk:

Photo by Anna Creech used under a Creative Commons license.

January 18 2012

15:00

Digging deeper into The New York Times’ fact-checking faux pas

Once in a while the cultural fault lines in American journalism come into unexpectedly sharp relief. Jon Stewart’s now-legendary star turn on “Crossfire” was one of those moments; the uproar over NPR’s refusal (along with most major news outlets) to call waterboarding torture was another. The New York Times may have added another clash to this canon with public editor Arthur Brisbane’s blog post on fact-checking last week.

For anyone who missed it (or the ensuing analysis, rounded up here), the exchange can be summed up in two lines of dialogue:

Times to Internet: Should we fact-check the things politicians say?

Internet to Times: Are you freakin’ kidding?

That was an actual response, and a popular refrain: More than a dozen comments included some variant of, “This is a joke, right?” Several readers compared the column to an Onion piece. By far the most common reaction, which shows up in scores of comments, was to express dismay at the question or to say it captures the abysmal state of journalism today. A typical example, from “Fed Up” in Brooklyn: “The fact that this is even a question shows us how far mainstream journalism has fallen.”

The stunning unanimity of reader responses was undoubtedly the big story, as the news intelligentsia pointed out right away. It underscores the yawning gulf that separates professional reporters’ and everyday readers’ basic understandings of what journalism is supposed to do. Of the 265 comments logged in the three hours before the Times turned off commenting, exactly two (discounting obvious sarcasm) disagreed with the proposition that reporters should challenge suspect claims made by politicians. (More on the dissenters, below.) Brisbane’s follow-up, suggesting many readers had missed the nuance by assuming the question was just whether the paper should “check facts and print the truth,” seems off base. A few may have made that mistake, but most clearly have in mind what is sometimes called “political fact-checking” — calling out distortions in political speech.

If anything, what’s striking in reading through the comments is the robust and stable critical vocabulary readers share for talking about the failings of conventional journalism. More than a dozen take issue with the definition of journalistic objectivity implied by Brisbane’s wondering whether reporting that calls out untruths can also be “objective and fair.” As a reader from Chicago wrote,

I see this formulation as a problem. Objective sometimes isn’t fair. Some of the people reported on are objectively less truthful, less forthcoming, and less believable than others.

False “balance” in the news is a common trope in the comments, which readers refer to both directly (at least eight times) and via now-standard illustrations of “he said/she said” reporting, like the climate-change debate (brought up by five readers) or the parodic headline “Shape of the Earth? Views differ” (mentioned by another nine). Journalism-as-stenography also comes up frequently — at least 20 of the responses make the comparison specifically, while 16 declare that the Times may as well run press releases if it isn’t going to challenge political claims.

The disconnect between reporters and readers, and the paradox at the center of “objective” journalism, comes through most clearly in Brisbane’s rendering of the division of labor between the news and opinion pages. Pointing to a column in which Paul Krugman debunked Mitt Romney’s claim that the President travels the globe “apologizing for America,” Brisbane explains that,

As an Op-Ed columnist, Mr. Krugman clearly has the freedom to call out what he thinks is a lie. My question for readers is: should news reporters do the same?

To anyone not steeped in the codes and practices of professional journalism, this sounds pretty odd: Testing facts is the province of opinion writers? What happens in the rest of the paper? As JohnInArizona commented,

Mr. Brisbane’s view of the job of op-ed columnist vs that of reporters seems skewed.

It is the job of columnist to present opinion and viewpoint, and to persuade. It is the job of reporters to present facts, as best as they can determine them.

As others have pointed out, uncritically reprinting politicians’ statements is not what a good reporter, or a good newspaper, should be doing. There is no choosing between competing facts — a statement is factual, or is not…

Journalism has a ready response for this line of critique: The truth is not always black and white, and reporters run the risk of “imposing their judgement on what is false and what is true” (Brisbane’s phrase) by weighing in on factual questions more complicated than the shape of the earth. Politicians are experts at misleading without lying, and people may genuinely disagree about what the facts are — based on different notions of what constitutes a presidential apology, for instance.

Even in these cases though, a reporter can add context to help readers assess a claim. Brisbane himself suggests that,

Perhaps the next time Mr. Romney says the President has a habit of apologizing for his country, the reporter should insert a paragraph saying, more or less: “The president has never used the word ‘apologize’ in a speech about U.S. policy or history. Any assertion that he has apologized for U.S. actions rests on a misleading interpretation of the president’s words.”

A few readers responded that the second sentence is superfluous. Several others suggested doing additional reporting around the question, along these lines: “A real reporter might try to find those speeches. A real reporter might request that the Romney campaign provide examples of times where Obama has apologized for America…”

That sort of reporting is exactly what fact-checkers at PolitiFact and the Washington Post did to refute the claim, reconstructing the “apology tour” meme as developed in various Republican documents (Romney’s 2010 book “No Apology,” a Karl Rove op-ed in The Wall Street Journal, a Heritage Foundation report, etc.) and digging into the actual text of Obama’s speeches as well as comparable ones by previous presidents. PolitiFact went so far as to interview several experts on diplomacy and political apologies. Reading the public editor’s letter, though, you’d have no idea that the key example he uses to illustrate his column already has been checked and found wanting.

More to the point, you’d have no clue about what AJR has called the “fact-checking explosion” in American journalism — a movement that is at least a decade old (the short-lived Spinsanity launched in 2001, followed by FactCheck.org in 2003) and now spans dedicated fact-checking groups as well as newspapers, TV networks, radio outlets, and even journalism schools. (Full disclosure: I’m writing a dissertation, and eventually a book, about this movement.) Fact-checking has been very much in the ether lately, with news gurus weighing in on the limits of this kind of journalism, especially during the controversy over PolitiFact’s latest “Lie of the Year” selection.

The fact-checking movement is part of a larger ongoing conversation about journalistic objectivity that began with the news media’s failures in the lead-up to the Iraq war. (See Brent Cunningham’s “Rethinking Objectivity,” Jay Rosen’s “The View from Nowhere,” Michael Massing’s “Now They Tell Us.”) Most fact-checking groups don’t spend a lot of time tweaking their peers in the press, even though the claims they check usually go unchallenged in news accounts. But they don’t have to — their entire project is a critique of mainstream journalism, a self-conscious experiment in “rethinking objectivity.”

That sense of mission — of fact-checking as a kind of reform movement — is unmistakable when fact-checkers get together, as at a New America Foundation conference on the subject last December, and one at CUNY the month before. (Here’s a write-up of the two conferences.) In a report written for the New America meeting, the Post’s original “Fact-Checker” columnist, Michael Dobbs, placed fact-checking in a long tradition of “truth-seeking” journalism that rejects the false balance practiced by political reporters today. (Three reports from that conference will be published over the next month.)

From the precincts of this emerging reformist consensus, Brisbane’s column seemed surprisingly out of touch. Still, the public editor raises questions that haven’t been answered very well in the conversation about fact-checking. It’s easy to declare, as Brooke Gladstone did in a 2008 interview with Bill Moyers, that reporters should “Fact check incessantly. Whenever a false assertion is asserted, it has to be corrected in the same paragraph, not in a box of analysis on the side.” (I agree.) But why, exactly, don’t they do that today? Why has fact-checking evolved into a specialized form of journalism relegated to a sidebar or a separate site? Are there any good reasons for it to stay that way?

Answering those questions has to begin with a better understanding of why so many traditional “objective” journalists are wary of the fact-checking upstarts. Michael Schudson, a historian of journalism (and my graduate-school advisor), has written that the “objectivity norm guides journalists to separate facts from values, and report only the facts.” In practice, though, the aversion to values becomes an aversion to evaluation. Hence the traditional rule against “drawing conclusions” (discussed here) in news reports. Brisbane doesn’t flesh out this rationale, but one of his readers captured it perfectly, and is worth quoting at length:

I cannot claim to be a regular reader of the New York Times, and I cannot claim to have ever been to journalism school. Finally, I also cannot claim to know what “the truth” is. I do not understand why so many readers are presenting such unequivocal opinions as commentary here.

If a candidate for US president says something — anything — I would like to know what he or she said. That’s reporting, and that’s “the truth” in reporting: a presentation of the facts, as objectively as possible.

Whether a candidate was coy about something, exaggerating something else, using misleading language, leaving something out of his or her public statements… all of these things are analysis. …

Finally, it is the responsibility of the reader, of the informed citizen, to take all of this in and think for himself or herself, to decide where he or she stands on issues, on phenomena in society. Neither the New York Times nor any other newspaper ought to have the privilege of taking that final step for anyone.

This reads like a more thoughtful version of David Gregory’s infamous response when asked why he doesn’t fact-check his on-air guests: “People can fact-check ‘Meet the Press’ every week on their own terms.” It rests on the concern — elaborated in this Politico post and in a Journal editorial — that fact-checking tends to shade into opinion, glossing over genuine differences in political ideology. The WSJ decried a “journalistic trend that seeks to recast all political debates as matters of lies, misinformation and ‘facts,’ rather than differences of world view or principles.”

The only problem with the “don’t draw conclusions” standard is that reporters draw conclusions all of the time, and now more than ever. The decades-long trend toward more analytical reporting, probably self-evident to any news junkie, has been thoroughly documented by communications scholars (who, following Kevin Barnhurst, sometimes call this “the new long journalism” or the “decline of event-centered reporting”). Reporters are of course especially comfortable drawing conclusions about political strategy, liberally dispensing their analysis of what a candidate or officeholder hopes to gain from particular “messaging” and whether the strategy is likely to work.

So objective journalism applies the ban on drawing conclusions very selectively. What seems to make reporters uncomfortable is not analysis per se but criticism, especially criticism that can be seen as taking sides on a controversial question — which they will avoid even at the risk of profound inconsistency. Here’s Bill Keller’s much-ridiculed rationale (quoted in a Media Decoder post) for refusing to describe waterboarding as torture in the pages of the Times, though the paper had often referred to it that way before the U.S. took up the practice:

When using a word amounts to taking sides in a political dispute, our general practice is to supply the readers with the information to decide for themselves.

The result revealed the awkward gap between common sense and journalistic sense. Common sense says, if it was torture then, it’s torture now. Journalistic sense says, this is really controversial! It’s not our job to accuse a sitting president of authorizing an illegal global regime of torture! (Conversely, it was profoundly uncontroversial to apply the label to waterboarding in a country such as China. Journalists could do so unthinkingly.)

This political risk aversion is nothing new. One of the most cited pieces of research on journalism is Gaye Tuchman’s ethnographic look at the “strategic ritual” of objectivity as practiced in a newspaper newsroom in the 1960s. Tuchman stressed the essentially defensive nature of the claim to be objective, and of the reporting routines it produced. Her “newsmen” shied away from criticisms of public figures or public policies — or found someone else to voice them — because they were deeply, institutionally afraid of drawing attacks or even lawsuits from the people they reported on. (They used the “news analysis” label to set off reports that were less “objective,” though Tuchman found reporters and editors could not supply a coherent rationale for distinguishing between the two kinds of stories.)

The new, professional fact-checkers are specialized in ways that mitigate (but don’t eliminate) these concerns. They dedicate pages to analyzing even a simple claim, showing all of their work, so that someone who dislikes the result might still agree it was reached fairly. Full-time fact-checkers don’t have to worry about losing “access” to a public figure, because they don’t rely on inside information. (For the same reason, fact-checkers don’t use anonymous sources.) And they obviously — to occasional criticism — make an effort to check politicians from both sides of the aisle.

And still, fact-checkers constantly come in for vehement attacks from political figures and from the reading public. It’d be hard to prove that this is more pronounced than what traditional news outlets weather; my anecdotal sense is that it might be. (They manage this feedback in interesting ways; for instance, neither FactCheck.org nor PolitiFact allows comments on the same page as their fact-checks, though they do often run selections from reader mail.) Without validating the view that “if everybody’s mad at us, we must be doing something right” — a journalistic reflex one also hears from fact-checkers — it has to be acknowledged that this is a deeply polarizing activity. Managing that polarization is part of what fact-checkers have to do in the effort to stay relevant and make a difference in public discourse.

The hope for building fact-checks into everyday news reports is that it would push political reporters to be more thoughtful and reflexive about their own work — to leave out quotable-but-dubious claims, to resist political conflict as the default frame, and in general to avoid the pat formulations that are so ably managed by political actors. But inevitably, all of us will be disappointed, even pissed off, by some of these routine fact-checks — and perhaps all the more so when they’re woven into the story itself.

To take Brisbane’s question seriously and think about how this might be put into practice, we have to consider how reporters will manage the new set of pressures this work will expose them to. And we have to confront the paradox that trying to create a platform for a more fact-based and reasonable public discourse may also, at the same time, promote further fragmentation and politicization of that discourse.

Lucas Graves is a PhD candidate in communications at Columbia University and a research fellow at the Media Policy Initiative of the New America Foundation.

January 13 2012

16:00

Craig Newmark: Fact-checking should be part of how news organizations earn trust

Okay, I’m not in the news business, and I’m not going to tell anyone how to do their job. However, it’d be good to have news reporting that I could trust again, and there’s evidence that fact checking is an idea whose time has come.

This results from smart people making smart observations at two recent conferences about fact checking, one run by Jeff Jarvis at CUNY (with me involved) and a more recent one at the New America Foundation. I’ve surfaced the issue further by carefully circulating a prior version of this paper.

Restoring trust to the news business via fact checking might be an idea whose time has come. It won’t be easy, but we need to try.

Fact checking is difficult, time-consuming, and expensive, and it’s hard to make that work in current newsrooms, with their Wall Street-required profit margins and the intensity of the 24×7 news cycle. The lack of fact checking becomes obvious even to guys like me who aren’t real smart.

It’s worse when, say, a cable news reporter interviews a public figure, and that figure openly lies, and the reporter is visibly conflicted but can’t challenge the public figure. That’s what Jon Stewart calls the “CNN leaves it there” problem, which may have become the norm. When such interviews are run again and quoted, that reinforces the lie, and that’s real bad for the country.

Turns out that The New York Times just asked “Should The Times Be a Truth Vigilante?” That’s a much more pointed version of the question I’ve previously posed. The comments are overwhelming, like “isn’t that what journalists do?” and the more succinct “duh.”

For sure, there are news professionals trying to address the problem, like the folks at Politifact and Factcheck.org. We also see great potential at American Public Media’s Public Insight Network; with training in fact checking, their engaged specialist citizens might become a very effective citizen fact-checking network. (This list is far from complete.)

My guess is that we’ll be seeing networks of networks of fact checkers come into being. They’ll provide easily available results using multiple tools like the truth-goggles effort coming from MIT, or maybe simple search tools that can be used in TV interviews in real time.

Seems like a number of people in journalism have similar views. Here’s Craig Silverman from Poynter reporting on the recent conferences. Silverman and Ethan Zuckerman had a really interesting discussion regarding the consequences of deception:

That brings me to the final interesting discussion point: the idea of consequences. Can fact checking be a deterrent to, or punishment for, lying to the public?

“I’m surprised we’re not talking about how fact checking could reduce misinformation in the long term by creating consequences, creating punishment,” said Harvard’s Ethan Zuckerman at the DC event.

I’m an optimist, and hope that an apparent surge of interest in fact checking is real. Folks, including myself, have been pushing the return of fact checking for some months now, and recently it’s become a more prominent issue in the election.

Again, this is really difficult, but necessary. I feel that the news outlets making a strong effort to fact-check will be acting in good faith, and will prove trustworthy and, over time, profitable. In any case, this seems like a good way to start restoring trust to the news business.

Craig Newmark is the founder of craigslist, the network of classified ad sites, and craigconnects, an organization to connect and protect organizations doing good in the world.

January 05 2012

19:30

Hacking consensus: How we can build better arguments online

In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.

It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.

To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Defining an argument

We’re all familiar with arguments, both in media and in everyday life. But it’s worth briefly reviewing what an argument actually is, as doing so can inform how we might better structure arguments online. “The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt,” writes Douglas Walton in his book Fundamentals of Critical Argumentation. “An argument is made up of statements called premises and a conclusion. The premises give a reason (or reasons) to support the conclusion.”

So an argument can be broken up into discrete claims, unified by a structure that ties them together. But our typical conceptions of online content ignore all that. Why not design content to more easily assess each claim in an argument individually? UI designer Bret Victor is working on doing just that through a series of experiments he collectively calls “Explorable Explanations.”

Writes Victor:

A typical reading tool, such as a book or website, displays the author’s argument, and nothing else. The reader’s line of thought remains internal and invisible, vague and speculative. We form questions, but can’t answer them. We consider alternatives, but can’t explore them. We question assumptions, but can’t verify them. And so, in the end, we blindly trust, or blindly don’t, and we miss the deep understanding that comes from dialogue and exploration.

The alternative is what he calls a “reactive document” that imposes some structure onto content so that the reader can “play with the premise and assumptions of various claims, and see the consequences update immediately.”

Although Victor’s first prototype, Ten Brighter Ideas, is a list of recommendations rather than a formal argument, it gives a feel of how such a document could work. But the specific look, feel and design of his example aren’t important. The point is simply that breaking up the contents of an argument beyond the level of just a post or column makes it possible for authors, editors or the community to deeply analyze each claim individually, while not losing sight of its place in the argument’s structure.
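To make that claim-level structure concrete, here is a minimal sketch, in Python, of an argument decomposed into individually assessable claims, following Walton’s premises-and-conclusion definition quoted above. The class names, fields, and the example claims paraphrasing Krugman’s column are illustrative assumptions, not anything drawn from Victor’s prototypes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    """A single checkable statement within an argument."""
    text: str
    evidence: List[str] = field(default_factory=list)  # supporting or contextual links
    assessment: Optional[str] = None                    # e.g. "widely accepted", "disputed"

@dataclass
class Argument:
    """Premises offered in support of a conclusion, per Walton's definition."""
    premises: List[Claim]
    conclusion: Claim

# Illustrative only: a loose paraphrase of the Krugman argument discussed above.
transaction_tax = Argument(
    premises=[
        Claim("Budget deficits need to be reduced."),
        Claim("Much financial trading is of dubious economic value."),
    ],
    conclusion=Claim("A tax on financial transactions should be imposed."),
)

# Each claim can now be rendered, linked, and assessed on its own,
# without losing its place in the argument's structure.
for premise in transaction_tax.premises:
    print(premise.text, "-", premise.assessment or "not yet assessed")
```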

Show me the evidence (and the conversation)

Victor’s prototype suggests a more interesting way to structure and display arguments by breaking them up into individual claims, but it doesn’t tell us anything about what sort of content should be displayed alongside each claim. To start with, each claim could be accompanied by relevant links that help the reader make sense of that claim, either by providing evidence, counterpoints, context, or even just a sense of who does and does not agree.

At multiple points in his column, Krugman references “the evidence,” presumably referring to parts of the economics literature that support his argument. But what is the evidence? Why can’t it be cited alongside the column? And, while we’re at it, why not link to countervailing evidence as well? For an idea of how this might work, it’s helpful to look at a crowd-sourced fact-checking experiment run by the nonprofit NewsTrust. The “TruthSquad” pilot has ended, but the content is still online. One thing that NewsTrust recognized was that rather than just being useful for comment or opinion, the crowd can be a powerful tool for sourcing claims. For each fact that TruthSquad assessed, readers were invited to submit relevant links and mark them as For, Against, or Neutral.

The links that the crowd identified in the NewsTrust experiment went beyond direct evidence, and that’s fine. It’s also interesting for the reader to see what other writers are saying, who agrees, who disagrees, etc. The point is that a curated or crowd-sourced collection of links directly relevant to a specific claim can help a reader interested in learning more to save time. And allowing space for links both for and against an assertion is much more interesting than just having the author include a single link in support of his or her claim.
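As a rough sketch of that kind of evidence aggregation, the snippet below shows how For/Against/Neutral link submissions around a single claim might be stored and tallied. The data model and field names are hypothetical; they mirror the TruthSquad labels but are not based on NewsTrust’s actual implementation.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List

# Hypothetical stance labels, mirroring TruthSquad's For / Against / Neutral.
STANCES = ("for", "against", "neutral")

@dataclass
class EvidenceLink:
    url: str
    stance: str        # one of STANCES
    submitted_by: str  # e.g. "reader", "editor", "algorithm"

def tally_stances(links: List[EvidenceLink]) -> Counter:
    """Summarize how submitted links line up around a single claim."""
    return Counter(link.stance for link in links if link.stance in STANCES)

# Illustrative submissions for one claim; the URLs are placeholders.
links = [
    EvidenceLink("https://example.org/study", "for", "reader"),
    EvidenceLink("https://example.org/rebuttal", "against", "reader"),
    EvidenceLink("https://example.org/background", "neutral", "editor"),
]
print(tally_stances(links))  # Counter({'for': 1, 'against': 1, 'neutral': 1})
```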

Community efforts to aggregate relevant links along the lines of the TruthSquad could easily be supplemented both by editor-curators (which NewsTrust relied on) and by algorithms which, if not yet good enough to do the job on their own, can at least lessen the effort required by readers and editors. The nonprofit ProPublica is also experimenting with a more limited but promising effort to source claims in their stories. (To get a sense of the usefulness of good evidence aggregation on a really thorny topic, try this post collecting studies of the stimulus bill’s impact on the economy.)

Truth, reliability, and acceptance

While curating relevant links allows the reader to get a sense of the debate around a claim and makes it easier for him or her to learn more, making sense of evidence still takes considerable time. What if a brief assessment of the claim’s truth, reliability or acceptance were included as well? This piece is arguably the hardest of those I have described. In particular, it would require editors to abandon the view from nowhere to publish a judgment about complicated statements well beyond traditional fact-checking. And yet doing so would provide huge value to the reader and could be accomplished in a number of ways.

Imagine that as you read Krugman’s column, each claim he makes is highlighted in a shade between green and red to communicate its truth or reliability. This sort of user interface is part of the idea behind “Truth Goggles,” a master’s project by Dan Shultz, an MIT Media Lab student and Mozilla-Knight Fellow. Shultz proposes to use an algorithm to check articles against a database of claims that have previously been fact-checked by Politifact. Shultz’s layer would highlight a claim and offer an assessment (perhaps by shading the text) based on the work of the fact checkers.

The beauty of using color is the speed and ease with which the reader is able to absorb an assessment of what he or she is reading. The verdict on the statement’s truthfulness is seamlessly integrated into the original content. As Shultz describes the central problem:

The basic premise is that we, as readers, are inherently lazy… It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything?
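To illustrate the shading idea, here is a minimal sketch that maps a numeric reliability score onto a red-to-green shade. The zero-to-one scale and the linear color mapping are assumptions made for illustration; Truth Goggles itself works from fact-checkers’ published verdicts rather than from a score invented here.

```python
def reliability_to_color(score: float) -> str:
    """Map a 0.0-1.0 reliability score to a hex shade between red and green.

    The numeric scale is an assumption for illustration; a real layer would
    derive it from a fact-checker's verdict rather than invent it.
    """
    score = max(0.0, min(1.0, score))
    red = int(255 * (1 - score))
    green = int(255 * score)
    return f"#{red:02x}{green:02x}00"

# A mostly false claim shades toward red; a well-supported one toward green.
print(reliability_to_color(0.2))  # "#cc3300"
print(reliability_to_color(0.9))  # "#19e500"
```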

Still, the number of statements that PolitiFact has checked is relatively small, and what I’m describing requires the evaluation of messy empirical claims that stretch the limits of traditional fact-checking. So how might a publication arrive at such an assessment? In any number of ways. For starters, there’s good, old-fashioned editorial judgment. Journalists can provide assessments, so long as they resist the view from nowhere. (Since we’re rethinking the opinion pages here, why not task the editorial board with such a role?)

Publications could also rely on other experts. Rather than asking six experts to contribute to a “Room for Debate”-style forum, why not ask one to write a lead argument and the others not merely to “respond,” but to directly assess the lead author’s claims? Universities may be uniquely positioned to help in this, as some are already experimenting with polling their own experts on questions of public interest. Or what if a Quora-like commenting mechanism was included for each claim, as Dave Winer has suggested, so that readers could offer assessments, with the best ones rising to the top?

Ultimately, how to assess a claim is a process question, and a difficult one. But numerous relevant experiments exist in other formats. One new effort, Hypothes.is, is aiming to add a layer of peer review to the web, reliant in part on experts. While the project is in its early stages, its founder Dan Whaley is thinking hard about many of these same questions.

Better arguments

What I’ve described so far may seem elaborate or resource-intensive. Few publications these days have the staff and the time to experiment in these directions. But my contention is that the kind of content I am describing would be of dramatically higher value to the reader than the content currently available. And while Victor’s UI points towards a more aggressive restructuring of content, much could be done with existing tools. By breaking up an argument into discrete claims, curating evidence and relevant links, and providing credible assessments of those claims, publishers would equip readers to form opinions on merit and evidence rather than merely on trust, intuition, or bias. Aggregation sites like The Atlantic Wire may be especially well-positioned to experiment in this direction.

I have avoided a number of issues in this explanation. Notably, I have neglected to discuss counter-arguments (which I believe could be easily integrated) and haven’t discussed the tension between empirical claims and value claims (I have assumed a focus on the former). And I’ve ignored the tricky psychology surrounding bias and belief formation. Furthermore, some might cite the recent PolitiFact Lie of the Year controversy as evidence that this sort of journalism is too difficult. In my mind, that incident further illustrates the need for credible, honest referees.

Returning once more to Krugman’s argument, imagine the color of the text signaling whether his claims about financial transactions and economic growth are widely accepted. Or mousing over his point about reducing deficits to quickly see links providing background on the issue. What if it turned out that all of Krugman’s premises were assessed as compelling, but his conclusion was not? It would then be obvious that something was missing. Perhaps more interestingly, what if his conclusion was rated compelling but his claims were weak? Might he be trying to convince you of his case using popular arguments that don’t hold up, rather than the actual merits of the case? All of this would finally be apparent in such a setup.

In rethinking how we structure and assess arguments online, I’ve undoubtedly raised more questions than I’ve answered. But hopefully I’ve convinced you that better presentation of arguments online is at least possible. Not only that, but numerous hackers, designers, and journalists — and many who blur the lines between those roles — are embarking on experiments to challenge how we think about content, argument, truth, and credibility. It is in their work that the answers will be found.

Image by rhys_kiwi used under a Creative Commons license.

October 27 2010

19:00

MediaBugs revamps its site with a new national focus

When it launched in public beta earlier this year, MediaBugs, Scott Rosenberg’s Knight News Challenge-winning fact-checking project, was focused on correcting errors found in publications in the Bay Area. Today, though, Mediabugs.org has undergone a redesign — not just in its interface (“just the usual iterative improvements,” Rosenberg notes), but in its scope. Overnight, MediaBugs has gone national.

Part of the site’s initial keep-it-local logic was that, as a Knight winner, the project had to be small in scope. (The News Challenge stipulates that projects focus on “geographically defined communities,” although this year they’ve loosened up that rule a bit.) But part of it was also an assumption that community is about more than geography. “My original thesis was that, first of all, it would be valuable to work on a small scale in a specific metropolitan area,” Rosenberg told me — valuable not only in terms of developing personal relationships with editors who oversee their publications’ correction efforts, but also as a way to avoid becoming “this faceless entity: yet another thing on the web that was criticizing people in the newsrooms.”

And while the community aspect has paid off when it comes to newsroom dealings — Rosenberg and his associate director, Mark Follman, have indeed developed relationships that have helped them grow the project and the cause — MediaBugs has faced challenges when it comes to “community” in the broader sense. “It’s been an uphill battle just getting people to participate,” Rosenberg notes. Part of that is just a matter of people being busy, and MediaBugs being new, and all that. But another part of it is that so much of the stuff typical users consume each day is regional or national, rather than local, in scope. When he describes MediaBugs to people, Rosenberg notes, a typical response will be: “Great idea. Just the other day, I saw this story in the paper, or I heard this broadcast, where they got X or Y wrong.” And “invariably,” he says, “the X or Y in question is on a national political story or an international story” — not, that is, a local one.

Hence, MediaBugs’ new focus on national news outlets. “I thought, if that’s what people are more worked up about, and if that’s what they want to file errors for,” Rosenberg says, “we shouldn’t stand in their way.”

The newly broadened project will work pretty much like the local version did: The site is pre-seeded (with regional and national papers, magazines, and even the websites of cable news channels), and it will rely on users to report errors found in those outlets and others — expanding, in the process, the MediaBugs database. (Its current data set includes not only a list of media organizations, their errors, and those errors’ correction status, but also, helpfully, information about outlets’ error-correction practices and processes.)

For now, Rosenberg says, the feedback loop informing news organizations of users’ bug reports, which currently involves Rosenberg or Follman contacting be-bugged organizations directly, will remain intact. But it could — and, Rosenberg hopes, it will — evolve to become a more automated, self-service system, via an RSS feed, email feed, or the like. “There isn’t really that much of a reason for us to be in the loop personally — except that, at the moment, we’re introducing this strange new concept to people,” Rosenberg notes. “But ultimately, what this platform should really be is a direct feedback loop where the editors and the people who are filing bug reports can just resolve them themselves.” One of the inspirations for MediaBugs is the consumer-community site Get Satisfaction, which acts as a meeting mechanism for businesses and the customers they serve. The site provides a forum, and it moderates conversations; ultimately, though, its role is to be a shared space for dialogue. And the companies themselves — which have a vested interest in maintaining their consumers’ trust — do the monitoring. For MediaBugs, Rosenberg says, “that’s the model that we would ultimately like.”

To get to that point — a point, Rosenberg emphasizes, that at the moment is a distant goal — the MediaBugs infrastructure will need to evolve beyond MediaBugs.org. “As long as we’re functioning as this website that people have to go to, that’s a limiting factor,” Rosenberg notes. “We definitely want to be more distributed out at the point where the content is.” For that, the project’s widget — check it out in action on Rosenberg’s Wordyard and on (fellow Knight grantee site) Spot.us — will be key. Rosenberg is in talks with some additional media outlets about integrating the widget into their sites (along the lines, for example, of the Connecticut Register-Citizen’s incorporation of a fact-checking mechanism into its stories), but the discussions have been slow-going. “I’m still pretty confident that, sooner or later, we’ll start to see the MediaBugs widget planted on more of these sites,” Rosenberg says. “But it’s not anything that’s happening at any great speed.”

For now, though, Rosenberg will have his hands full with expanding the site’s scope — and with finding new ways to realize the old idea that, as he notes, “shining any kind of light on a subject creates its own kind of accountability.” And it’ll be fascinating to see what happens when that light shifts its gaze to the national media landscape. “That dynamic alone, I think, will help some of the publications whose sites are doing a less thorough job with this stuff to get their act together.”

June 08 2010

13:30

Why link out? Four journalistic purposes of the noble hyperlink

[To link or not to link? It's about as ancient as questions get in online journalism; Nick Carr's links-as-distraction argument is only the latest incarnation. Yesterday, Jason Fry tried to contextualize the linking debate around credibility, readability, and connectivity. Here, Jonathan Stray tries out his own, more pragmatically focused four-part division. Tomorrow, we'll have the result of Jonathan's analysis of how major news organizations link out and talk about linking out. —Josh]

You don’t need links for great journalism — the profession got along fine for hundreds of years without them. And yet most news outlets have at least a website, which means that links are now (in theory, at least) available to the majority of working journalists. What can links give to online journalism? I see four main answers.

Links are good for storytelling.

Links give journalists a way to tell complex stories concisely.

In print, readers can’t click elsewhere for background. They can’t look up an unfamiliar term or check another source. That means print stories must be self-contained, which leads to conventions such as context paragraphs and mini-definitions (“Goldman Sachs, the embattled American investment bank.”) The entire world of the story has to be packed into one linear narrative.

This verbosity doesn’t translate well to digital, and arguments rage over the viability of “long form” journalism online. Most web writing guides suggest that online writing needs to be shorter, sharper, and snappier than print, while others argue that good long form work still kills in any medium.

Links can sidestep this debate by seamlessly offering context and depth. The journalist can break a complex story into a non-linear narrative, with links to important sub-stories and background. Readers who are already familiar with certain material, or simply not interested, can skip lightly over the story. Readers who want more can dive deeper at any point. That ability can open up new modes of storytelling unavailable in a linear, start-to-finish medium.

Links keep the audience informed.

Professional journalists are paid to know what is going on in their beat. Writing stories isn’t the only way they can pass this knowledge to their audience.

Although discussions of journalism usually center around original reporting, working journalists have always depended heavily on the reporting of others. Some newsrooms feel that verifying stories is part of the value they add, and require reporters to “call and confirm” before they re-report a fact. But lots of newsrooms simply rewrite copy without adding anything.

Rewriting is required for print, where copyright prevents direct use of someone else’s words. Online, no such waste is necessary: A link is a magnificently efficient way for a journalist to pass a good story to the audience. Picking and choosing the best content from other places has become fashionably known as “curation,” but it’s a core part of what journalists have always done.

Some publishers are reluctant to “send readers away” to other work. But readers will always prefer a comprehensive source, and as the quantity of available information explodes, the relative value of filtering it increases.

Links are a currency of collaboration.

When journalists use links to “pay” people for their useful contributions to a story, they encourage and coordinate the production of journalism.

Anyone who’s seen their traffic spike from a mention on a high-profile site knows that links can have immediate monetary impact. But links also have subtler long-term value, both tangible (search rankings) and intangible (reputation and status). One way or another, a link is generally valuable to the receiver.

A complex, ongoing, non-linear story doesn’t have to be told by a single organization. In line with the theory of comparative advantage, it probably shouldn’t be. Of course journalists can (and should) collaborate formally. But links are an irresistible glue that can coordinate journalistic production across newsrooms and bloggers alike.

This is an economy that is interwoven with the cash economy in complex ways. It may not make business sense to pay another news organization for publishing a crucial sub-story or a useful tip, but a link gives credit where credit is due — and traffic. Along this line, I wonder if the BBC’s policy of not always linking to users who supply content is misguided.

Links enable transparency.

In theory, every statement in news writing needs to be attributed. “According to documents” or “as reported by” may have been as far as print could go, but that’s not good enough when the sources are online.

I can’t see any reason why readers shouldn’t demand, and journalists shouldn’t supply, links to all online resources used in writing a story. Government documents and corporate financial disclosures are increasingly online, but too rarely linked. There are some issues with links to pages behind paywalls and within academic journals, but nothing that seems insurmountable.

Opinion and analysis pieces can also benefit from transparency. It’s unfair — and suspect — to critique someone’s position without linking to it.

Of course, reporters must also rely on sources that don’t have a URL, such as people and paper documents. But even here I would like to see more links, for transparency and context: If the journalist conducted a phone interview, can we listen to the recording? If they went to city hall and saw the records, can they scan them for us? There is already infrastructure for journalists who want to do this. A link is the simplest, most comprehensive, and most transparent method of attribution.

Photo by Wendell used under a Creative Commons license.

May 21 2010

14:30

This Week in Review: Facebook circles the wagons, leaky paywalls, and digital publishing immersion

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

Should Facebook be regulated?: It’s been almost a month since Facebook’s expansion of Open Graph and Instant Personalization, and the concerns about the company’s invasion of privacy continue to roll in. This week’s telling example of how much Facebook information is public comes courtesy of Openbook, a new site that uses Facebook’s API to allow you to search all public Facebook updates. (Of course, you’ll find similarly embarrassing revelations via a Twitter search, but the point is that many of these people don’t know that what they’re posting is public.)

We also got another anti-Facebook diatribe (two, actually) from a web luminary: danah boyd, the Microsoft researcher and social media expert. Boyd, who spends a lot of time talking to young people about social media, noted two observations in her first post: Many users’ mental model of who can see their information doesn’t match up with reality, and people have invested so much time and resources into Facebook that they feel trapped by its changes. In the second post, Boyd proposes that if Facebook is going to refer to itself as a “social utility” (and it’s becoming a utility like water, power or the Internet, she argues), then it needs to be ready to be regulated like other utilities.

The social media blog Mashable has chimed in with a couple of defenses of Facebook (the web is all about sharing information; Facebook has normalized sharing in a way that users want to embrace), but the din has reached Facebook’s ears. The Wall Street Journal reported that the issue has prompted deep disagreements and several days of discussions at Facebook headquarters, and a Facebook spokesman said the company is going to simplify privacy controls soon.

Meanwhile, tech investor and entrepreneur Chris Dixon posited that Facebook is going to use its web-wide Like button to corner the market on online display ads, similar to the way Google did with text ads. Facebook also launched 0.facebook.com, a simple mobile-only site that’s free on some carriers — leading Poynter’s Steve Myers to wonder whether it’s going to become the default mobile web for feature phones (a.k.a. “dumb” phones). But The New York Times argued that when it comes to social data, Facebook still can’t hold a candle to the good, old-fashioned open web.

Are iPad apps worth it?: The iPad’s sales haven’t slowed down yet — it’s been projected to outsell the Mac, and one in five Americans say they might get one — but there are still conflicting opinions over how deeply publishers should get involved with it. Slate Group head Jacob Weisberg was the latest to weigh in, arguing that iPad apps won’t help magazines and newspapers as much as they think they will. He makes a couple of arguments we’ve seen several times over the past month or two: App producers are entering an Apple-controlled marketplace that’s been characterized by censorship, and apps are retrograde attempts to replicate the print experience.

“They’re claustrophobic walled gardens within Apple’s walled garden, lacking the basic functionality we now expect with electronic journalism: the opportunity to comment, the integration of social media, the ability to select text and paste it elsewhere, and finally the most basic function of all: links to other sources,” Weisberg says. GQ magazine didn’t get off to a particularly encouraging start with its iPad offerings, selling just 365 copies of its $2.99 Men of the Year iPad issue.

A few other folks are saying that the iPad is ushering in fundamental changes in the way we consume personal media: At Ars Technica, Forrester analyst Sarah Rotman Epps notes that the iPad is radically different from what people say they want in a PC, but they’re still more than willing to buy it because it makes complex computing simple. (The term Forrester is using to describe the tablet era, curated computing, seems like a stretch, though.) Norwegian digital journalist John Einar Sandvand offers a similar take, saying that tablets’ distinctive convenience will further weaken print newspapers’ position. And the Lab’s Josh Benton says the iPad could have an effect on the way we write, too.

Slipping through the Times’ and WSJ’s paywalls: New York Times editor Bill Keller gave an update late last week on the plans for his paper’s much-anticipated paywall — he didn’t really tell us anything new, but he did indicate that the Times has solidified its plans for the wall’s implementation. In reiterating that he wasn’t breaking any news, though, Keller gave Media Matters’ Joe Strupp a bit of a clearer picture of how loose the Times’ metered model will be: “Those who mainly come to the website via search engines or links from blogs, and those who only come sporadically — in short, the bulk of our traffic — may never be asked to pay at all,” Keller wrote.

In the meantime, digital media consultant Mark Potts found another leaky paywall at The Wall Street Journal. Potts canceled his WSJ.com subscription (after 15 years!) and found that he’s still able to access for free almost everything he had previously paid for with only a few URL changes and the most basic of Google skills. And even much of that information, he argues, is readily available from other sources for free, damaging the value of the venerable Journal paywall. “Even the Journal can’t enforce the kind of exclusivity that would make it worth paying for — it’s too easy to look elsewhere,” Potts writes.

Another Times-related story to note: The paper’s managing editor for news, Jill Abramson, will leave her position for six months to become immersed in the digital side of the Times’ operation. The New York Observer tries out a few possible explanations for the move.

Going all-in on digital publishing: Speaking of immersion, two publishers in the past two weeks have tried a fascinating experiment: producing an issue entirely through new-media tools. The first was 48 Hour, a new San Francisco-based magazine that puts together each issue from beginning to end in two days. The magazine’s editors announced a theme, solicited submissions via email and Twitter, received 1,500 submissions, then put together the magazine, all in 48 hours. Several who saw the finished product were fairly impressed, but CBS’s lawyers were a little less pleased about the whole ‘48 Hour’ name. Gizmodo had a Q&A with the mag’s editors (all webzine vets) and PBS MediaShift and the BBC took a closer look at the editorial process.

Second, the Journal Register Company newspaper chain finished the Ben Franklin Project, an experiment in producing a daily and weekly newspaper and website using only free, web-based tools. Two small Ohio newspapers accomplished the feat this week, and Poynter’s Mallary Jean Tenore took a look inside the effort. What she uncovered should be an inspiration for people looking to implement change in newsrooms, especially ones that might be resistant to digital media. A quote from the daily paper’s managing editor sums it up: “When we started out, we said, ‘We’re going to do what? How are we going to do this?’ Now we’re showing ourselves that we can operate in a world that, even six months ago, used to be foreign to us.”

Reading roundup: This week, I’ve got two developments and a handful of other pieces to think on:

— Yahoo bought the online content producer Associated Content for $100 million this week. News business analyst Ken Doctor examined what this deal means for Yahoo (it’s big, he says) and considered the demand-and-advertising-driven model employed by Associated Content and others like Demand Media.

— If you follow NYU professor Jay Rosen on Twitter, you’ve heard a ton about fact-checking over the past couple of months. A couple more interesting tidbits on the subject this week: Fact-checks are consistently the AP’s most popular pieces online, and Minnesota Public Radio has unveiled PoliGraph, its own fact-checking effort.

— Poynter’s Rick Edmonds compares two of the more talked-about local news startups launching this summer, Washington D.C.’s TBD and Hawaii’s Honolulu Civil Beat. He’s got some great details on both. Poynter also put together a list of 200 moments over the last decade that transformed journalism.

— If you’re up for a quick, deep thought, the Lab’s Josh Benton muses on the need for news to structure and shrink its users’ world. “I think it’s journalists who need to take up that challenge,” he says, “to learn how to spin something coherent and absorbing and contained and in-the-moment and satisfying from the chaos of the world around us.”

— And once you’re done with that, head into the weekend laughing at The Onion’s parody of newspapers’ coverage of social media startups.

March 05 2010

16:00

This Week in Review: Surveying the online news scene, web-first mags, and Facebook patents its feed

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

The online news landscape defined: Much of the discussion about journalism this week revolved around two survey-based studies. I’ll give you an overview on both and the conversation that surrounded them.

The first was a behemoth of a study by the Pew Research Center’s Internet & American Life Project and Project for Excellence in Journalism. (Here’s Pew’s overview and the full report.) The report, called “Understanding the Participatory News Consumer,” is a treasure trove of fascinating statistics and thought-provoking nuggets on a variety of aspects of the world of online news. It breaks down into five basic parts: 1) The news environment in America; 2) How people use and feel about news; 3) News and the Internet; 4) Wireless news access; and 5) Personal, social and participatory news.

I’d suggest taking some time to browse a few of those sections to see what tidbits interest you, but to whet your appetite, the Lab’s Laura McGann has a few that jumped out at her — few people exclusively rely on the Internet for news, only half prefer “objective” news, and so on.

Several of the sections spurred their own discussions, led by the one focusing on the social nature of online news. GigaOM’s Mathew Ingram has a good summary of the study’s social-news findings, and Micah Sifry of techPresident highlights the sociological angle of news participation. Tech startup guy Dave Pell calls us “Curation Nation” and notes that for all our sharing, we don’t share much of what’s going on in our own backyards. And Steve Yelvington has a short but smart take, noting that the sociality of news online is actually a return to normalcy, and the broadcast age was the weird intermission: “The one-way flow that is characteristic of print and electronic broadcasting is at odds with our nature. The Internet ends that directional tyranny.”

The other section of the study to get significant attention was the one on mobile news. PBS’ Idea Lab has the summary, and Poynter’s Mobile Media blog notes that an FCC study found similar results not long ago. Finally, Jason Fry has some hints for news organizations based on the study (people love weather news, and curation and social media have some value), and Ed Cafasso has some implications for marketing and PR folks.

A web-first philosophy for magazine sites: The Columbia Journalism Review also released another comprehensive, if not quite so sprawling, study on magazines and the web. (Here’s the full report and the CJR feature based on it.) The feature is a great overview of the study’s findings on such subjects as magazines’ missions on the web, their decision-making, their business models, editing, and use of social media and blogs. It’s a long read, but quite engaging for an article on an academic survey.

One of the more surprising (and encouraging) findings of the study is that magazine execs have a truly web-centric view of their online operation. Instead of just using the Internet as an extension of their print product, many execs are seeing the web as a valuable arena in itself. As one respondent put it, “We migrated from a print publication supplemented with online articles to an online publication supplemented with print editions.” That’s a seriously seismic shift in philosophy.

CJR also put up another brief post highlighting the finding that magazine websites on which the print editor makes most of the decisions tend to be less profitable. The New York Times’ report on the study centers on the far lower editing standards that magazines exercise online, and the editing-and-corrections guru Craig Silverman gives a few thoughts on the study’s editing and fact-checking findings.

Facebook patents the news feed: One significant story left over from last week: Facebook was granted a patent for its news feed. All Facebook broke the news, and included the key parts of Facebook’s description of exactly what it’s patenting about the feed. As the tech blog ReadWriteWeb notes, this news could be huge — the news feed is a central concept within the social web and particularly Twitter, which is a news feed. But both blogs came to the tentative conclusion that the patent covers a stream of user activity updates within a social network, not status updates, leaving Twitter unaffected. (ReadWriteWeb’s summary is the best description of the situation.)

The patent still wasn’t popular. NYU news entrepreneur Cody Brown cautioned that patents like this could move innovation overseas, and New York venture capitalist Fred Wilson called the patent “lunacy,” making the case that software patents almost always reward derivative work. Facebook, Wilson says, dominates the world of social news feeds “because they out executed everyone else. But not because they invented the idea.” Meanwhile, The Big Money’s Caitlin McDevitt points out an interesting fact: When Facebook rolled out its news feed in 2006, it was ripped by its users. Now, the feed is a big part of the foundation of the social web.

What’s j-schools’ role in local news? Last week’s conversation about the newly announced local news partnership between The New York Times and New York University spilled over into a broader discussion about j-schools’ role in preserving local journalism. NYU professor Jay Rosen chatted with the Lab’s Seth Lewis about what the project might mean for other j-schools, and made an interesting connection between journalism education and pragmatism, arguing that “our knowledge develops not when we have the most magnificent theory or the best data but when we have a really, really good problem,” which is where j-schools should start.

An Inside Higher Ed article outlines several of the issues in play in j-school local news partnerships like this one, and Memphis j-prof Carrie Brown-Smith pushes back against the idea that j-schools are exploiting students by keeping enrollment high while the industry contracts. She argues that the skills picked up in a journalism education — thinking critically about information, checking its accuracy, communicating ideas clearly, and so on — are applicable to a wide variety of fields, as well as good old active citizenship itself. News business expert Alan Mutter comes from a similar perspective on the exploitation question, saying that hands-on experience through projects like NYU’s new one is the best thing j-schools can do for their students.

This week in iPad tidbits: Not a heck of a lot happened in the world of the iPad this week, but there’ll be enough regular developments and opinions that I should probably include a short update every week to keep you up to speed. This week, the Associated Press announced plans to create a paid service on the iPad, and the book publisher Penguin gave us a sneak peek at their iPad app and strategy.

Wired editor-in-chief Chris Anderson and tech writer James Kendrick both opined on whether the iPad will save magazines: Anderson said yes, and Kendrick said no. John Battelle, one of Wired’s founders, told us why he doesn’t like the iPad: “It’s an old school, locked in distribution channel that doesn’t want to play by the new rules of search+social.”

Reading roundup: I’ve got an abnormally large amount of miscellaneous journalism reading for you this week. Let’s start with two conversations to keep an eye on: First, in the last month or so, we’ve been seeing a lot of discussion on science journalism, sparked in part by a couple of major science conferences. This is a robust, ongoing conversation, and it’s worth diving into for anyone interested in the intersection of those two fields. NYU professor Ivan Oransky made his own splash last week by launching a blog about embargoes in science journalism.

Second, the Lab’s resident nonprofit guru Jim Barnett published a set of criteria for determining whether a nonprofit journalism outfit is legitimate. Jay Rosen objected to the professionalism requirement and created his own list. Some great nuts-and-bolts-of-journalism talk here.

Also at the Lab, Martin Langeveld came out with the second part of his analysis on newspapers’ quarterly filings, with info on the Washington Post Co., Scripps, Belo, and Journal Communications. The Columbia Journalism Review’s Ryan Chittum drills a bit deeper into the question of how much of online advertising comes from print “upsells.”

The Online Journalism Review’s Robert Niles has a provocative post contending that the distinction between creation and aggregation of news content is a false one — all journalism is aggregation, he says. I don’t necessarily agree with the assertion, but it’s a valid challenge to the anti-aggregation mentality of many newspaper execs. And I can certainly get behind Niles’ larger point, that news organizations can learn a lot from online news aggregation.

Finally, two great guides to Twitter: One, a comprehensive list of Twitter resources for journalists from former newspaper exec Steve Buttry, and two, some great tips on using Twitter effectively even if you have nothing to say, courtesy of The New York Times. Enjoy.
