
June 26 2013

16:48

What’s New in Digital Scholarship: A generation gap in online news, and does The Daily Show discourage tolerance?

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

We’re at the halfway mark in our year-long odyssey tracking all things digital media and academic. Below are studies that continue to advance understanding of various hot topics: drone journalism; surveillance and the public; Twitter in conflict zones; Big Data and its limits; crowdsourced information platforms; remix culture; and much more. We also suggest some further “beach reads” at the bottom. Enjoy the deep dive.

“Reuters Institute Digital News Report 2013: Tracking the Future of News”: Paper from University of Oxford Reuters Institute for the Study of Journalism, edited by Nic Newman and David A. L. Levy.

This new report provides tremendous comparative perspective on how different countries and news ecosystems are developing in both symmetrical and divergent ways (see the Lab’s write-up of the national differences and similarities highlighted). But it also provides some interesting hard numbers relating to the U.S. media landscape; it surveys the news habits of a sample of more than 2,000 Americans.

Key U.S. data points include: the number of Americans reporting accessing news by tablet in the past week rose from 11 percent in 2012 to 16 percent in 2013; 28 percent said they accessed news on a smartphone in the last week; 75 percent of Americans reported accessing news online in the past week, while 72 percent said they got news through television and 47 percent reported having read a print publication; TV (43 percent) and online (39 percent) were Americans’ preferred platforms for accessing news. Further, a yawning divide exists between the preferences of those ages 18 to 24 and those over 55: among the younger cohort, 64 percent say the web is their main source for news, versus only 25 percent among the older group; as for TV, the pattern reverses, with 54 percent of older Americans reporting it as their main source, versus only 20 percent of those 18 to 24. Finally, 12 percent of American respondents overall reported paying for digital news in 2013, up from 9 percent in 2012.

“The Rise and Fall of a Citizen Reporter”: Study from Wellesley College, for the WebScience 2013 conference. By Panagiotis Metaxas and Eni Mustafaraj.

This study looks at a network of anonymous Twitter citizen reporters around Monterrey, Mexico, covering the drug wars. It provides new insights into conflict-zone journalism and information ecosystems in the age of digital media, as well as the limits of raw data. The researchers, both computer scientists, analyze a dataset focused on the hashtag #MTYfollow, consisting of “258,734 tweets written by 29,671 unique Twitter accounts, covering 286 days in the time interval November 2010-August 2011.” They drill down on @trackmty, the largest of the accounts involved, run under the pseudonym Melissa Lotzer.

The scholars reconstruct a sequence in which a wild Twitter “game” breaks out — obviously, with life-and-death stakes — involving accusations about cartel informants (“hawks,” or “halcones”) and citizen watchdogs (“eagles,” or “aguilas”), with counter-accusations flying that certain citizen reporters were actually working for the Zetas drug cartel; indeed, @trackmty ends up being accused of working for the cartels. Online trolls attack her on Twitter and in blogs.

“The original Melissa @trackmty is slow to react,” the study notes, “and when she does, she tries to point to her past accomplishments, in particular the creation of [a group of other media accounts] and the interviews she has given to several reporters from the US and Spain (REF). But the frequency of her tweeting decreases, along with the community’s retweets. Finally, at the end of June, she stops tweeting altogether.” It turns out that the real @trackmty had been exposed — “her real identity, her photograph, friends and home address.”

Little of this drama was obvious from the data. Ultimately, the researchers were able to interview the real @trackmty and members of the #MTYfollow community. The big lessons, they realize, are the “limits of Big Data analysis.” The data visualizations showing influence patterns and spikes in tweet frequency showed all kinds of interesting dynamics. But they were insufficient to make inferences of value about the community affected: “In analyzing the tweets around a popular hashtag used by users who worry about their personal safely in a Mexican city we found that one must go back and forth between collecting and analyzing many times while formulating the proper research questions to ask. Further, one must have a method of establishing the ground truth, which is particularly tricky in a community of — mostly — anonymous users.”
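To make that point concrete, here is a minimal sketch (not the authors' code) of the kind of first-pass analysis the study describes, assuming a hypothetical CSV of #MTYfollow tweets with "user" and "created_at" columns. It surfaces the most active accounts and the daily volume spikes; as the researchers found, such views reveal interesting dynamics but cannot, on their own, establish ground truth about an anonymous community.

    # Illustrative sketch only; the file and column names are assumptions.
    import pandas as pd

    tweets = pd.read_csv("mtyfollow_tweets.csv", parse_dates=["created_at"])

    # Tweets per account: how an account like @trackmty shows up
    # as the most active node in the dataset.
    top_accounts = tweets.groupby("user").size().sort_values(ascending=False)
    print(top_accounts.head(10))

    # Daily tweet volume: spikes mark bursts of activity, but say
    # nothing about who is trustworthy -- the "limits of Big Data."
    daily = tweets.set_index("created_at").resample("D").size()
    print(daily.idxmax(), daily.max())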

“Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory”: Study from Ohio State University, published in the Journal of Communication. By R. Kelly Garrett, Erik C. Nisbet, and Emily K. Lynch.

As the political fact-checking movement — the FactChecks and Politifacts, along with their various lesser-known cousins — has arisen, so too has a more hard-headed social science effort to get to the root causes of persistent lies and rumors, a situation made all the worse on the web. Of course, journalists hope truth can have a “corrective” effect, but the literature in this area suggests that blasting more facts at people often doesn’t work — hence, the “information deficit fallacy.” Thus, a cottage psych-media research industry has grown up, exploring “motivated reasoning,” “biased assimilation,” “confirmation bias,” “cultural cognition,” and other such concepts.

This study tries to advance understanding of how peripheral cues such as accompanying graphics and biographical information can affect how citizens receive and accept corrective information. In experiments, the researchers ask subjects to respond to claims about the proposed Islamic cultural center near Ground Zero and the disposition of its imam. It turns out that contextual information — what the imam has said, what he looks like and anything that challenges dominant cultural norms — often erodes the positive intentions of the fact-checking message.

The authors conclude that the “most straightforward method of maximizing the corrective effect of a fact-checking article is to avoid including information that activates stereotypes or generalizations…which make related cognitions more accessible and misperceptions more plausible.” The findings have a grim quality: “The unfortunate conclusion that we draw from this work is that contextual information so often included in fact-checking messages by professional news outlets in order to provide depth and avoid bias can undermine a message’s corrective effects. We suggest that this occurs when the factually accurate information (which has only peripheral bearing on the misperception) brings to mind” mental shortcuts that contain generalizations or stereotypes about people or things — so-called “naïve theories.”

“Crowdsourcing CCTV surveillance on the Internet”: Paper from the University of Westminster, published in Information, Communication & Society. By Daniel Trottier.

A timely look at the implications of a society more deeply pervaded by surveillance technologies, this paper analyzes various web-based efforts in Britain that involve the identification of suspicious persons or activity. (The controversies around Reddit and the Boston Marathon bombing suspects come to mind here.) The researcher examines Facewatch, CrimeStoppers UK, Internet Eyes, and Shoreditch Digital Bridge, all of which attached commercial elements to crowdsourcing projects in which participants monitored feeds from surveillance cameras of public spaces. He points out that these “developments contribute to a normalization of participatory surveillance for entertainment, socialization, and commerce,” and that the “risks of compromised privacy, false accusations and social sorting are offloaded onto citizen-watchers and citizen-suspects.” Further, the study highlights the perils inherent in the “‘gamification’ of surveillance-based labour.”

“New Perspectives from the Sky: Unmanned aerial vehicles and journalism”: Paper from the University of Texas at Arlington, published in Digital Journalism. By Mark Tremayne and Andrew Clark.

The use of unmanned aerial vehicles (UAVs, or “drones”) in journalism is an area of growing interest, and this exploration provides some context and research-based perspective. Drones in the service of the media have already been used for everything from snapping pictures of Paris Hilton and surveying tornado-damaged areas in Alabama to filming secret government facilities in Australia and protester clashes in Poland. In all, the researchers found “eight instances of drone technology being put to use for journalistic purposes from late 2010 through early 2012.”

This practice will inevitably raise issues about the extent to which it goes too far. “It is not hard to imagine how the news media, using drones to gather information, could be subject to privacy lawsuits,” the authors write. “What the news media can do to potentially ward off the threat of lawsuits is to ensure that drones are used in an ethical manner consistent with appropriate news practices. News directors and editors and professional associations can establish codes of conduct for the use of such devices in much the same way they already do with the use of hidden cameras and other technology.”

“Connecting with the user-generated Web: how group identification impacts online information sharing and evaluation”: Study from University of California, Santa Barbara, published in Information, Communication & Society. By Andrew J. Flanagin, Kristin Page Hocevar, and Siriphan Nancy Samahito.

Whether it’s Wikipedia, Yelp, TripAdvisor, or some other giant pool of user-generated “wisdom,” these platforms convene large, disaggregated audiences who form loose memberships based around apparent common interests. But what makes certain communities bond and stick together, keeping online information environments fresh, passionate, and lively (and possibly accurate)?

The researchers involved in this study perform some experiments with undergraduates to see how adding small bits of personal information — the university, major, gender, or other piece of information — to informational posts changed perceptions by viewers. Perhaps predictably, the results show that “potential contributors had more positive attitudes (manifested in the form of increased motivation) about contribution to an online information pool when they experienced shared group identification with others.”

For editors and online community designers and organizers, the takeaway is that information pools “may actually form and sustain themselves best as communities comprising similar people with similar views.” Not exactly an antidote to “filter bubble” fears, but it’s worth knowing if you’re an admin for an online army.

“Selective Exposure, Tolerance, and Satirical News”: Study from University of Texas at Austin and University of Wyoming, published in the International Journal of Public Opinion Research. By Natalie J. Stroud and Ashley Muddiman.

While not the first study to focus on the rise of satirical news — after all, a 2005 study in Political Communication on “The Daily Show with Jon Stewart” now has 230 subsequent academic citations, according to Google Scholar — this new study looks at satirical news viewed specifically in a web context.

It suggests the dark side of snark, at least in terms of promoting open-mindedness and deliberative democracy. The conclusion is blunt: “The evidence from this study suggests that satirical news does not encourage democratic virtues like exposure to diverse perspectives and tolerance. On the contrary, the results show that, if anything, comedic news makes people more likely to engage in partisan selective exposure. Further, those viewing comedic news became less, not more, tolerant of those with political views unlike their own.” Knowing Colbert and Stewart, the study’s authors can expect an invitation soon to atone for this study.

“The hidden demography of new media ethics”: Study from Rutgers and USC, published in Information, Communication & Society. By Mark Latonero and Aram Sinnreich.

The study leverages 2006 and 2010 survey data, both domestic and international, to take an analytical look at how notions of intellectual property and ethical Web culture are evolving, particularly as they relate to ideas such as remixing, mashups and repurposing of content. The researchers find a complex tapestry of behavioral norms, some of them correlated with certain age, gender, race or national traits. New technologies are “giving rise to new configurable cultural practices that fall into the expanding gray area between traditional patterns of production and consumption. The data suggest that these practices have the potential to grow in prevalence in the United States across every age group, and have the potential to become common throughout the dozens of industrialized nations sampled in this study.”

Further, rules of the road have formed organically, as technology has outstripped legal strictures: “Most significantly, despite (or because of) the inadequacy of present-day copyright laws to address issues of ownership, attribution, and cultural validity in regard to emerging digital practices, everyday people are developing their own ethical frameworks to distinguish between legitimate and illegitimate uses of reappropriated work in their cultural environments.”

Beach reads:

Here are some further academic paper honorable mentions this month — all from the culture and society desk:

Photo by Anna Creech used under a Creative Commons license.

January 18 2012

15:00

Digging deeper into The New York Times’ fact-checking faux pas

Once in a while the cultural fault lines in American journalism come into unexpectedly sharp relief. Jon Stewart’s now-legendary star turn on “Crossfire” was one of those moments; the uproar over NPR’s refusal (along with most major news outlets) to call waterboarding torture was another. The New York Times may have added another clash to this canon with public editor Arthur Brisbane’s blog post on fact-checking last week.

For anyone who missed it (or the ensuing analysis, rounded up here) the exchange can be summed up in two lines of dialogue:

Times to Internet: Should we fact-check the things politicians say?

Internet to Times: Are you freakin’ kidding?

That was an actual response, and a popular refrain: More than a dozen comments included some variant of, “This is a joke, right?” Several readers compared the column to an Onion piece. By far the most common reaction, which shows up in scores of comments, was to express dismay at the question or to say it captures the abysmal state of journalism today. A typical example, from “Fed Up” in Brooklyn: “The fact that this is even a question shows us how far mainstream journalism has fallen.”

The stunning unanimity of reader responses was undoubtedly the big story, as the news intelligentsia pointed out right away. It underscores the yawning gulf that separates professional reporters’ and everyday readers’ basic understandings of what journalism is supposed to do. Of the 265 comments logged in the three hours before the Times turned off commenting, exactly two (discounting obvious sarcasm) disagreed with the proposition that reporters should challenge suspect claims made by politicians. (More on the dissenters, below.) Brisbane’s follow-up, suggesting many readers had missed the nuance by assuming the question was just whether the paper should “check facts and print the truth,” seems off base. A few may have made that mistake, but most clearly have in mind what is sometimes called “political fact-checking” — calling out distortions in political speech.

If anything, what’s striking in reading through the comments is the robust and stable critical vocabulary readers share for talking about the failings of conventional journalism. More than a dozen take issue with the definition of journalistic objectivity implied by Brisbane’s wondering whether reporting that calls out untruths can also be “objective and fair.” As a reader from Chicago wrote,

I see this formulation as a problem. Objective sometimes isn’t fair. Some of the people reported on are objectively less truthful, less forthcoming, and less believable than others.

False “balance” in the news is a common trope in the comments, which readers refer to both directly (at least eight times) and via now-standard illustrations of “he said/she said” reporting, like the climate-change debate (brought up by five readers) or the parodic headline “Shape of the Earth? Views differ” (mentioned by another nine). Journalism-as-stenography also comes up frequently — at least 20 of the responses make the comparison specifically, while 16 declare that the Times may as well run press releases if it isn’t going to challenge political claims.

The disconnect between reporters and readers, and the paradox at the center of “objective” journalism, comes through most clearly in Brisbane’s rendering of the division of labor between the news and opinion pages. Pointing to a column in which Paul Krugman debunked Mitt Romney’s claim that the President travels the globe “apologizing for America,” Brisbane explains that,

As an Op-Ed columnist, Mr Krugman clearly has the freedom to call out what he thinks is a lie. My question for readers is: should news reporters do the same?

To anyone not steeped in the codes and practices of professional journalism, this sounds pretty odd: Testing facts is the province of opinion writers? What happens in the rest of the paper? As JohnInArizona commented,

Mr. Brisbane’s view of the job of op-ed columnist vs that of reporters seems skewed.

It is the job of columnist to present opinion and viewpoint, and to persuade. It is the job of reporters to present facts, as best as they can determine them.

As others have pointed out, uncritically reprinting politicians’ statements is not what a good reporter, or a good newspaper, should be doing. There is no choosing between competing facts — a statement is factual, or is not…

Journalism has a ready response for this line of critique: The truth is not always black and white, and reporters run the risk of “imposing their judgement on what is false and what is true” (Brisbane’s phrase) by weighing in on factual questions more complicated than the shape of the earth. Politicians are expert in misleading without lying, and people may genuinely disagree about what the facts are — based on different notions of what constitutes a presidential apology, for instance.

Even in these cases though, a reporter can add context to help readers assess a claim. Brisbane himself suggests that,

Perhaps the next time Mr. Romney says the President has a habit of apologizing for his country, the reporter should insert a paragraph saying, more or less: “The president has never used the word ‘apologize’ in a speech about U.S. policy or history. Any assertion that he has apologized for U.S. actions rests on a misleading interpretation of the president’s words.”

A few readers responded that the second sentence is superfluous. Several others suggested doing additional reporting around the question, along these lines: “A real reporter might try to find those speeches. A real reporter might request that the Romney campaign provide examples of times where Obama has apologized for America…”

That sort of reporting is exactly what fact-checkers at PolitiFact and the Washington Post did to refute the claim, reconstructing the “apology tour” meme as developed in various Republican documents (Romney’s 2010 book “No Apology,” a Karl Rove op-ed in The Wall Street Journal, a Heritage Foundation report, etc.) and digging into the actual text of Obama’s speeches as well as comparable ones by previous presidents. PolitiFact went so far as to interview several experts on diplomacy and political apologies. Reading the public editor’s letter, though, you’d have no idea that the key example he uses to illustrate his column already has been checked and found wanting.

More to the point, you’d have no clue about what AJR has called the “fact-checking explosion” in American journalism — a movement that is at least a decade old (the short-lived Spinsanity launched in 2001, followed by FactCheck.org in 2003) and now spans dedicated fact-checking groups as well as newspapers, TV networks, radio outlets, and even journalism schools. (Full disclosure: I’m writing a dissertation, and eventually a book, about this movement.) Fact-checking has been very much in the ether lately, with news gurus weighing in on the limits of this kind of journalism, especially during the controversy over PolitiFact’s latest “Lie of the Year” selection.

The fact-checking movement is part of a larger ongoing conversation about journalistic objectivity that began with the news media’s failures in the lead-up to the Iraq war. (See Brent Cunningham’s “Rethinking Objectivity,” Jay Rosen’s “The View from Nowhere,” Michael Massing’s “Now They Tell Us.”) Most fact-checking groups don’t spend a lot of time tweaking their peers in the press, even though the claims they check usually go unchallenged in news accounts. But they don’t have to — their entire project is a critique of mainstream journalism, a self-conscious experiment in “rethinking objectivity.”

That sense of mission — of fact-checking as a kind of reform movement — is unmistakable when fact-checkers get together, as at a New America Foundation conference on the subject last December, and one at CUNY the month before. (Here’s a write-up of the two conferences.) In a report written for the New America meeting, the Post’s original “Fact-Checker” columnist, Michael Dobbs, placed fact-checking in a long tradition of “truth-seeking” journalism that rejects the false balance practiced by political reporters today. (Three reports from that conference will be published over the next month.)

From the precincts of this emerging reformist consensus, Brisbane’s column seemed surprisingly out of touch. Still, the public editor raises questions that haven’t been answered very well in the conversation about fact-checking. It’s easy to declare, as Brooke Gladstone did in a 2008 interview with Bill Moyers, that reporters should “Fact check incessantly. Whenever a false assertion is asserted, it has to be corrected in the same paragraph, not in a box of analysis on the side.” (I agree.) But why, exactly, don’t they do that today? Why has fact-checking evolved into a specialized form of journalism relegated to a sidebar or a separate site? Are there any good reasons for it to stay that way?

Answering those questions has to begin with a better understanding of why so many traditional “objective” journalists are wary of the fact-checking upstarts. Michael Schudson, a historian of journalism (and my graduate-school advisor), has written that the “objectivity norm guides journalists to separate facts from values, and report only the facts.” In practice, though, the aversion to values becomes an aversion to evaluation. Hence the traditional rule against “drawing conclusions” (discussed here) in news reports. Brisbane doesn’t flesh out this rationale, but one of his readers captured it perfectly, and is worth quoting at length:

I cannot claim to be a regular reader of the New York Times, and I cannot claim to have ever been to journalism school. Finally, I also cannot claim to know what “the truth” is. I do not understand why so many readers are presenting such unequivocal opinions as commentary here.

If a candidate for US president says something — anything — I would like to know what he or she said. That’s reporting, and that’s “the truth” in reporting: a presentation of the facts, as objectively as possible.

Whether a candidate was coy about something, exaggerating something else, using misleading language, leaving something out of his or her public statements… all of these things are analysis. …

Finally, it is the responsibility of the reader, of the informed citizen, to take all of this in and think for himself or herself, to decide where he or she stands on issues, on phenomena in society. Neither the New York Times nor any other newspaper ought to have the privilege of taking that final step for anyone.

This reads like a more thoughtful version of David Gregory’s infamous response when asked why he doesn’t fact-check his on-air guests: “People can fact-check ‘Meet the Press’ every week on their own terms.” It rests on the concern — elaborated in this Politico post and in a Journal editorial — that fact-checking tends to shade into opinion, glossing over genuine differences in political ideology. The WSJ decried a “journalistic trend that seeks to recast all political debates as matters of lies, misinformation and ‘facts,’ rather than differences of world view or principles.”

The only problem with the “don’t draw conclusions” standard is that reporters draw conclusions all of the time, and now more than ever. The decades-long trend toward more analytical reporting, probably self-evident to any news junkie, has been thoroughly documented by communications scholars (who, following Kevin Barnhurst, sometimes call this “the new long journalism” or the “decline of event-centered reporting”). Reporters are of course especially comfortable drawing conclusions about political strategy, liberally dispensing their analysis of what a candidate or officeholder hopes to gain from particular “messaging” and whether the strategy is likely to work.

So objective journalism applies the ban on drawing conclusions very selectively. What seems to make reporters uncomfortable is not analysis per se but criticism, especially criticism that can be seen as taking sides on a controversial question — which they will avoid even at the risk of profound inconsistency. Here’s Bill Keller’s much-ridiculed rationale (quoted in a Media Decoder post) for refusing to describe waterboarding as torture in the pages of the Times, though the paper had often referred to it that way before the U.S. took up the practice:

When using a word amounts to taking sides in a political dispute, our general practice is to supply the readers with the information to decide for themselves.

The result revealed the awkward gap between common sense and journalistic sense. Common sense says, if it was torture then, it’s torture now. Journalistic sense says, this is really controversial! It’s not our job to accuse a sitting president of authorizing an illegal global regime of torture! (Conversely, it was profoundly uncontroversial to apply the label to waterboarding in a country such as China. Journalists could do so unthinkingly.)

This political risk aversion is nothing new. One of the most cited pieces of research on journalism is Gaye Tuchman’s ethnographic look at the “strategic ritual” of objectivity as practiced in a newspaper newsroom in the 1960s. Tuchman stressed the essentially defensive nature of the claim to be objective, and of the reporting routines it produced. Her “newsmen” shied away from criticisms of public figures or public policies — or found someone else to voice them — because they were deeply, institutionally afraid of drawing attacks or even lawsuits from the people they reported on. (They used the “news analysis” label to set off reports that were less “objective,” though Tuchman found reporters and editors could not supply a coherent rationale for distinguishing between the two kinds of stories.)

The new, professional fact-checkers are specialized in ways that mitigate (but don’t eliminate) these concerns. They dedicate pages to analyzing even a simple claim, showing all of their work, so that someone who dislikes the result might still agree it was reached fairly. Full-time fact-checkers don’t have to worry about losing “access” to a public figure, because they don’t rely on inside information. (For the same reason, fact-checkers don’t use anonymous sources.) And they obviously — to occasional criticism — make an effort to check politicians from both sides of the aisle.

And still, fact-checkers constantly come in for vehement attacks from political figures and from the reading public. It’d be hard to prove that this is more pronounced than what traditional news outlets weather; my anecdotal sense is that it might be. (They manage this feedback in interesting ways; for instance, neither FactCheck.org nor PolitiFact allows comments on the same page as its fact-checks, though both often run selections from reader mail.) Without validating the view that “if everybody’s mad at us, we must be doing something right” — a journalistic reflex one also hears from fact-checkers — it has to be acknowledged that this is a deeply polarizing activity. Managing that polarization is part of what fact-checkers have to do in the effort to stay relevant and make a difference in public discourse.

The hope for building fact-checks into everyday news reports is that it would push political reporters to be more thoughtful and reflexive about their own work — to leave out quotable-but-dubious claims, to resist political conflict as the default frame, and in general to avoid the pat formulations that are so ably managed by political actors. But inevitably, all of us will be disappointed, even pissed off, by some of these routine fact-checks — and perhaps all the more so when they’re woven into the story itself.

To take Brisbane’s question seriously and think about how this might be put into practice, we have to consider how reporters will manage the new set of pressures this work will expose them to. And we have to confront the paradox that trying to create a platform for a more fact-based and reasonable public discourse may also, at the same time, promote further fragmentation and politicization of that discourse.

Lucas Graves is a PhD candidate in communications at Columbia University and a research fellow at the Media Policy Initiative of the New America Foundation.

May 26 2011

18:00

Sarah Palin’s 2009 “death panel” claims: How the media handled them, and why that matters

Editor’s Note: This article was originally published on the journalism-and-policy blog Lippmann Would Roll. Written by Matthew L. Schafer, the piece is a distillation of an academic study by Schafer and Dr. Regina G. Lawrence, the Kevin P. Reilly Sr. chair of LSU’s Manship School of Mass Communication. They have kindly given us permission to republish the piece here.

It’s been almost two years now since Sarah Palin published to Facebook a post about “death panels.” In a study to be presented this week at the 61st Annual International Communication Association Conference, we analyzed over 700 stories published in the top 50 newspapers around the country.

“The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s ‘death panel’ so his bureaucrats can decide…whether they are worthy of health care,” Palin wrote at the time.

Only three days later, PolitiFact, an arm of the St. Petersburg Times, published its appraisal of Palin’s comment, stating, “We agree with Palin that such a system would be evil. But it’s definitely not what President Barack Obama or any other Democrat has proposed.”

FactCheck.org, a project of the Annenberg Public Policy Center, would also debunk the claim, and PolitiFact users would later vote the death panel claim to the top spot of PolitiFact’s Lie of the Year ballot.

Despite this initial dismissal of the claim by non-partisan fact-checkers, a cursory Google search turns up 1,410,000 results, showing just how powerful social media is in a fractured media climate.

Yet, the death panel claim — as we’re sure many will remember — lived not only online, but also in the newspapers and on cable and network television. In the current study, which ran from August 8 (the day after Palin made the claim) to September 13 (the day of the last national poll about death panels), the top 50 newspapers in the country published over 700 articles about the claims, while the nightly network news ran about 20 stories on the topic.

At the time, many commentators both in and outside of the industry offered their views on the media’s performance in debunking the death panel claim. Some lauded the media for coming out and debunking the claim, while others questioned whether it was the media’s “job” to debunk the myth at all.

“The crackling, often angry debate over health-care reform has severely tested the media’s ability to untangle a story of immense complexity,” Howard Kurtz, who was then at the Washington Post, said. “In many ways, news organizations have risen to the occasion….”

Yet, Media Matters was less impressed, at times pointing out, for example, that “the New York Times portrayed the [death panel] issue as a he said/she said debate, noting that health care reform supporters ‘deny’ this charge and call the claim ‘a myth.’ But the Times did not note, as its own reporters and columnists have previously, that such claims are indeed a myth…”

So, who was right? Did the media debunk the claim? And, if so, did they sway public opinion in the process?

Strong debunking, but confused readers

Our data indicate that the mainstream news, particularly newspapers, debunked death panels early, fairly often, and in a variety of ways, though some were more direct than others. Nevertheless, a significant portion of the public accepted the claim as true or, perhaps, as “true enough.”

Initially, we viewed the data from 30,000 feet and found that about 40 percent of the time, journalists would call the death panel claim false in their own voice — especially surprising considering many journalists’ conception of themselves as neutral arbiters.

For example, on August 9, 2009, Ceci Connolly of the Washington Post said, “There are no such ‘death panels’ mentioned in any of the House bills.”

“[The death panel] charge, which has been widely disseminated, has no basis in any of the provisions of the legislative proposals under consideration,” The New York Times’ Helene Cooper wrote a few days after Connolly.

“The White House is letting Congress come up with the bill and that vacuum of information is getting filled by misinformation, such as those death panels,” Anne Thompson of NBC News said on August 11.

Nonetheless, in more than 60 percent of the cases, newspapers abstained from calling the death panels claim false. (We also looked at hundreds of editorials and letters to the editor, and it’s worth noting that almost 60 percent of those debunked the claim, while the rest abstained — and just about 2 percent supported the claim.)

Additionally, among the articles that did debunk the claim, almost 75 percent contained no explanation of why the claim was being labeled false. It was very much a “you either believe me, or you don’t” situation, without contextual support.

As shown below, whether or not journalists debunked the claim, they often approached the controversy by quoting one side of the debate, quoting the other, and then letting the reader dissect the validity of each side’s stance. Thus, even in the 30 percent of cases where journalists reported in their own words that the claim was false, they nonetheless included each side’s arguments as to why it was right. This often just confuses the reader.

This chart shows that whether journalists abstained from debunking the death panels claim or not, they still proceeded to give equal time to each side’s supporters.
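For readers curious about the mechanics, a tally like the ones reported above could be computed along these lines; this is a hypothetical sketch, and the field names and coding categories are invented for illustration, not drawn from the study's actual codebook.

    # Hypothetical sketch; categories and field names are invented.
    articles = [
        {"debunked_in_own_voice": True,  "quoted_both_sides": True},
        {"debunked_in_own_voice": False, "quoted_both_sides": True},
        {"debunked_in_own_voice": True,  "quoted_both_sides": False},
        # ... one record per coded article (over 700 in the study)
    ]

    n = len(articles)
    debunked = [a for a in articles if a["debunked_in_own_voice"]]
    both_despite_debunk = [a for a in debunked if a["quoted_both_sides"]]

    print(f"Called false in journalist's own voice: {len(debunked) / n:.0%}")
    print(f"Debunked yet still quoted both sides: "
          f"{len(both_despite_debunk) / len(debunked):.0%}")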

Most important is the light that this study sheds on the age-old debate over the practical limitations surrounding objectivity. Indeed, questions are continually raised about whether journalists can be objective. Most recently, this led to a controversy at TechCrunch where founder Michael Arrington was left defending his disclosure policy.

“But the really important thing to remember, as a reader, is that there is no objectivity in journalism,” Arrington wrote to critics. “The guys that say they’re objective are just pretending.”

This view, however, is not entirely true. Indeed, in the study of death panels, we found two trends that could each fit under the broad banner of objectivity.

Objectivity: procedural and substantive

First, there is procedural objectivity — mentioned above — where journalists do their due diligence and quote competing sides. Second, there is substantive objectivity, where journalists actually go beyond reflexively reporting what key political actors say and engage in verifying the accuracy of those claims for their readers or viewers.

Of course, every journalist is — to some extent — influenced by their experiences, predilections, and political preferences, but these traits do not necessarily interfere with objectively reporting verifiable fact. Indeed, it seems that journalists could practice either form of objectivity without being biased. Nonetheless, questions and worries still abound.

“The fear seems to be that going deeper—checking out the facts behind the posturing and trying to sort out who’s right and who’s wrong—is somehow not ‘objective,’ not ‘straight down the middle,’” Rem Rieder of the American Journalism Review wrote in 2007.

Perhaps because of this, journalists in our sample attempted to practice both types of objectivity at the same time: one which, arguably, serves the public interest by presenting the facts of the matter, and one which allows the journalist a sliver of plausible deniability, because he follows the insular journalistic norm of presenting both sides of the debate.

As such, we question New York University educator and critic Jay Rosen, who has argued that “neutrality and objectivity carry no instructions for how to react” to the rise of false but popular claims. We contend that the story is more complicated: Mainstream journalists’ figurative instruction manual contains contradictory “rules” for arbitrating the legitimacy of claims.

The consequences of these contradictory rules show up in public opinion polls taken during the August and September health care debates. Indeed, one poll released August 20 reported that 30 percent believed that proposed health care legislation would “create death panels.” Belief in this extreme type of government rationing of health care remained impressively high (41 percent) into mid-September.

More troubling, one survey found that the percentage calling the claim true (39 percent) among those who said they were paying very close attention to the health care debate was significantly higher than among those reporting they were following the debate fairly closely (23 percent) or not too closely (18 percent).

Yet, of course, our data do not allow us to say that these numbers are a direct result of the mainstream media’s death panel coverage. Nonetheless, because mainstream media content still powers so many websites’ and news organizations’ offerings, perhaps this coverage did have an impact on public opinion to some indeterminable degree.

Conclusion

One way of looking at the resilience of the death panels claim is as evidence that the mainstream media’s role in contemporary political discourse has been attenuated. But another way of looking at the controversy is as evidence that the mainstream media themselves bore some responsibility for the claim’s persistence.

Palin’s Facebook post, which popularized the “death panel” catchphrase, said nothing about any specific legislative provision. News outlets and fact-checkers could examine the language of the bills then under debate to debunk the claim — and many did, as our data demonstrate. Nevertheless, it appears the nebulous “death panel bomb” reached its target in part because the mainstream media so often repeated it.

Thus, the dilemma for reporters playing by the rules of procedural objectivity is that repeating a claim reinforces a sense of its validity — or at least, enshrines its place as an important topic of public debate. Moreover, there is no clear evidence that journalism can correct misinformation once it has been widely publicized. Indeed, it didn’t seem to correct the death panels misinformation in our study.

Yet, there is promise in substantive objectivity. Indeed, today more than ever, journalists are having to act as curators. The only way they can do so effectively is by critically examining the surfeit of social media messages, and debunking or refusing to reinforce those that are verifiably false. Indeed, as more politicians use the Internet to circumvent traditional media, this type of critical curation will become increasingly important.

This is — or should be — journalists’ new focus. Journalists should verify information. Moreover, they should do so without including quotations from those taking a stance that is demonstrably false. Including such quotations creates a factual jigsaw puzzle that the reader must untangle: on the one hand, the journalist is calling the claim false, and on the other, he is giving column inches to someone who insists it’s true.

Putting aside the raucous debates about objectivity for a moment, it is clear that journalists in many circumstances can research and relay to their readers information about verifiable fact. If we don’t see a greater degree of this substantive objectivity, the public is left largely at the mercy of the savviest online communicator. Indeed, if journalists refuse to critically curate new media, they leave both the public and themselves worse off.

Image of Sarah Palin by Tom Prete used under a Creative Commons license.

November 16 2010

19:40

Crowdsourced Fact-Checking? What We Learned from Truthsquad

In June, U.S. Senator Orrin Hatch made the statement that "87 million Americans will be forced out of their coverage" by President Obama's health care plan.

It was quite a claim. But was it true?

That's a common, and important, question -- and it can often be hard to quickly nail down the real facts in the information-overloaded world we live in. Professional fact-checking organizations like PolitiFact and FactCheck.org have taken up the charge to verify the claims of politicians, pundits and newsmakers, and they provide a great service to the public. But I believe there's also a role for the average person in the fact-checking process. By actively researching and verifying what we hear in the news, we can become more informed citizens, and more discriminating news consumers. These are essential skills for all of us to develop.

With that in mind, we at NewsTrust, a non-profit social news network, have been working on Truthsquad, a community fact-checking service that helps people verify the news online, with professional guidance.

Our first pilot for Truthsquad took place in August 2010, with the help of our partners at the Poynter Institute, our advisors at FactCheck.org and our funders at Omidyar Network. That pilot was well received by our community, partners and advisors, as noted in our first report, and by third-party observers such as GigaOm. We've since hosted a variety of weekly Truthsquads, and are starting a second pilot with MediaBugs.org and RegretTheError.com to identify and correct errors in the news media. (Disclosure: MediaShift managing editor Craig Silverman runs RegretTheError.com.)

Our first test project was by our standards a success; more importantly, it revealed several important lessons about the best ways to manage crowdsourced fact-checking, and about why people participate in this activity. Here are our key takeaways from this first pilot, which I'll elaborate on below:

  • A game-like experience makes fact-checking more engaging.
  • A professional-amateur (pro-am) collaboration delivers reliable results and a civil conversation.
  • Crowd contributions are limited, requiring editorial oversight and better rewards.
  • Community fact-checking fills a gap between traditional journalism and social media.

What is Truthsquad?

Truthsquad.com features controversial quotes from politicians or pundits and asks the community whether they think they are true or false. Community members are welcome to make a first guess, then check our answers and research links to see if they are correct. They can change their answer anytime, as they come across new facts.

To help participants find the right answer, we invite them to review and/or post links to factual evidence that supports or opposes each statement. A professional journalist leads each "truthsquad" to guide participants in this interactive quest. This "squad leader" then writes an in-depth verdict based on our collaborative research. That verdict is emailed to all participants, with a request for comments. (It can be revised as needed.)
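As a rough sketch of that workflow in code (all class and field names here are invented for illustration, not NewsTrust's actual schema), a Truthsquad item might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        url: str
        supports_claim: bool  # does the link support or oppose the quote?

    @dataclass
    class Truthsquad:
        quote: str
        speaker: str
        answers: dict = field(default_factory=dict)   # member -> "true"/"false"
        evidence: list = field(default_factory=list)  # Evidence posted by members
        verdict: str = ""                             # written by the squad leader

        def record_answer(self, member, answer):
            # Members can change their answer as new facts surface.
            self.answers[member] = answer

        def tally(self):
            counts = {}
            for answer in self.answers.values():
                counts[answer] = counts.get(answer, 0) + 1
            return counts

On the Hatch quote that opens this post, tally() would have come back roughly {"false": 138, "true": 11}, the community consensus reported in the final section below.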

Finding #1: Game-Like Experience Makes Fact-Checking Engaging

We noted a significant increase in community participation for Truthsquad compared to other NewsTrust services we tested this year. Some data from our first pilot:

  • This pilot generated twice as much participation as other 2010 pilots.
  • Users gave ten times more answers per quote than reviews per story on our site.
  • Over half of the participants read linked stories, and a third answered a Truthsquad quote.
  • One in six participants reviewed the stories linked as factual evidence.

We think this high level of engagement is partly due to the game-like quality of our user experience, which starts by inviting people to guess whether a statement is true or false -- an easy task that anyone can do in under a minute.

After their first guess, people are more likely to participate as researchers, because their curiosity has been piqued and they want to know the answer. As a result, participants often take the time to review linked stories and post more evidence on their own. Without realizing it, they are becoming fact-checkers.

Finding #2: Pro-Am Collaboration Delivers Reliable Results

We decided early on that professionals needed to guide this collaborative investigation. We wanted to avoid some of the pitfalls of pure crowdsourcing initiatives, which can turn into mob scenes -- particularly around politically charged issues. At the start of this experiment, we asked experienced journalists at FactCheck.org and the Poynter Institute to coach us and our community and help write and edit some of our first verdicts.

We think the pro-am approach paid off in a number of ways:

  • Amateurs learned valuable fact-checking skills by interacting with professionals.
  • A few community members posted links that were critical to reaching our verdicts.
  • Answers from our community generally matched final verdicts from our editors.
  • We came to the same conclusions as FactCheck.org in side-by-side blind tests.
  • Comments from participants were generally civil and focused on facts.


The results of our first pilot led our advisor Brooks Jackson, director at FactCheck.org, to comment, "So far I would say the experiment is off to a solid start. The verdicts of the Truthsquad editors seem to me to be reasonable and based on good research."

This collaboration between journalists and citizens made us all more productive. The professionals shared helpful information-gathering tips, and the citizens extended that expertise on a larger scale, with multiple checks and balances between our community and our editors. Our editors spearheaded this investigation, but the community made important contributions through comments and links to factual evidence (some of which were invaluable). On a couple occasions, we even revised our verdicts based on new evidence from our community. This focus on facts also helped set the tone for our conversations, which were generally civil and informative.

Finding #3: Crowd Contributions Are Limited, Requiring Better Rewards

Despite high levels of participation, we didn't get as many useful links and reviews from our community as we had hoped. Our editorial team did much of the hard work to research factual evidence. (Two-thirds of story reviews and most links were posted by our staff.) Each quote represented up to two days of work from our editors, from start to finish. So this project turned out to be more labor-intensive than we thought, and a daily fact-checking service will require a dedicated editorial team to guarantee reliable results.

Managing our community and responding thoughtfully to their posts also takes additional time, and is an important part of this process. In future releases, we would like to provide more coaching and educational services, as well as better rewards for our contributors.

Training citizens to separate fact from spin is perhaps the greatest benefit of our initiative, but keeping them engaged will require ingenuity and tender loving care on our part.

"It seems based on this pilot that citizens can learn fact-checking skills quite easily," said Kelly McBride of the Poynter Institute. "The challenge is to motivate them to do this occasionally."

To address this issue, future versions of Truthsquad could reward members who take the time to fact-check the news in order to get them to do it more often. We would like to give them extra points for reading, reviewing or posting stories, as well as special badges, redeemable credits and/or prizes. We can also feature high scores on leaderboards, and give monthly awards to the most deserving contributors.
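One hypothetical way to structure such a point scheme (the action names and weights below are assumptions, not a NewsTrust specification):

    # Hypothetical reward weights; not an actual NewsTrust spec.
    POINTS = {"read": 1, "answer": 2, "review": 3, "post_link": 5}

    def score(actions):
        """Total points for a member's logged actions, e.g. ["read", "answer"]."""
        return sum(POINTS.get(action, 0) for action in actions)

    def leaderboard(members):
        """members maps name -> list of actions; returns top scorers first."""
        return sorted(((name, score(acts)) for name, acts in members.items()),
                      key=lambda pair: pair[1], reverse=True)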

Finding #4: Community Fact-Checking Fills a Need

Every survey we have done in recent years has consistently shown fact-checking as a top priority for our community, and this was confirmed by the results of this pilot.

Here are some key observations from our recent survey about NewsTrust's 2010 pilots:

  • A majority of survey respondents (61 percent) found Truthsquad useful or very useful.
  • Half of survey respondents wanted a new quote every day -- or several quotes a day.
  • Half of survey respondents said they could fact-check quotes several times per week.
  • One in seven survey respondents were prepared to donate for this service.


We think the generally favorable response to Truthsquad is due to two factors: a growing demand for fact-checking services, combined with a desire to contribute to this civic process. Fact-checking is still the best way to verify the accuracy of what people hear in the news, and it is perceived as an effective remedy to expose politicians or pundits who propagate misinformation.

At the same time, the explosion of social media makes people more likely to participate in investigations like these. They want this civil watchdog network, and expect to have a voice in it.

Next steps

Based on the lessons from this experiment, we would like to offer Truthsquad on an ongoing basis, with a goal to fact-check one quote a day, year-round -- as well as to feature the work of other trusted research organizations on Truthsquad.com.

We also want to let members post their own quotes for fact-checking and reward them for their contributions through both a game-like interface and more educational benefits. We have an opportunity to track the expertise of participants based on their answers, which could allow us to measure their progress on core news literacy skills, their understanding of important public issues, and the overall impact of our service.

Over time, we hope to provide more training and certification services, to build lasting research skills that could help students and adults alike play a more active role in our growing information economy. If this appeals to you, we invite you to sign up here and join our experiment.

As for that Orrin Hatch quote? In the end, 163 participants helped us fact-check his statement about health care. Our final verdict was that the Senator's claim was false. That finding was based on factual evidence provided by one of our NewsTrust members, who dug up the right set of regulations and pointed out that Hatch had misstated them. Our editor's verdict was confirmed by similar findings from FactCheck.org, and also matched our community's consensus: 138 participants answered that this statement was false (versus 11 who thought it was true).

More importantly, we as a community learned how to separate fact from fiction -- and became more engaged as citizens.

Fabrice Florin is executive director and founder of NewsTrust, where he manages creative and business development for this next-generation social news network. NewsTrust helps people find and share good journalism online, so they can make more informed decisions as citizens. With a 30-year track record in new media and technology, Fabrice has developed a wide range of leading-edge entertainment, education and software products. Fabrice's previous wireless venture, Handtap, was a leading provider of multimedia content for mobile phones. Fabrice was recently elected an Ashoka Fellow for his work as a social entrepreneur in journalism.


August 16 2010

12:00

Truth-o-Meter, 2G: Andrew Lih wants to wikify fact-checking

Epic fact: We are living at the dawn of the Information Age. Less-epic fact: Our historical moment is engendering doubt. The more bits of information we have out there, and the more sources we have providing them, the more wary we need to be of their accuracy. So we’ve created a host of media platforms dedicated to fact-checking: We have PolitiFact over here, FactCheck over there, Meet the Facts over there, @TBDFactsMachine over there, Voice of San Diego’s Fact Check blog over there, NewsTrust’s crowdsourced Truthsquad over there (and, even farther afield, source verifiers like Sunlight’s new Poligraft platform)…each with a different scope of interest, and each with different methods and metrics of verification. (Compare, for example, PolitiFact’s Truth-o-Meter to FactCheck.org’s narrative assessments of veracity.) The efforts are admirable; they’re also, however, atomized.

“The problem, if you look at what’s being done right now, is often a lack of completeness,” says Andrew Lih, a visiting professor of new media at USC’s Annenberg School of Communication & Journalism. The disparate outlets have to be selective about the scope of their fact-checking; they simply don’t have the manpower to be comprehensive about verifying all the claims — political, economic, medical, sociological — pinging like pinballs around the Internet.

But what if the current fact-checking operations could be greater than the sum of their parts? What if there were a centralized spot where consumers of news could obtain — and offer — verification?

Enter WikiFactCheck, the new project that aims to do exactly what its name suggests: bring the sensibility — and the scope — of the wiki to the systemic challenges of fact-checking. The platform’s been in the works for about two years now, says Lih (who, in addition to creating the wiki, is a veteran Wikipedian and the author of The Wikipedia Revolution). He dreamed it up while working on WikiNews; though that project never reached the scope of its sister site — largely because its premise of discrete news narratives isn’t ideal for the wiki platform — a news-focused wiki that could succeed, Lih thought, was one that focused on the core unit of news: facts themselves. When Jay Rosen drew attention to the need for systematic fact-checking of news content — most notably, through his campaign to fact-check the infamously info-miscuous Sunday shows — it became even more clear, Lih told me: This could be a job for a wiki.

WikiFactCheck wants not only to crowdsource, but also to centralize, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche. Wikipedia, he points out, is ultimately “a great aggregator”; and much of WikiFactCheck’s value could similarly be, he says, to catalog the results of other fact-checking outfits “and just be a meta-site.” Think Rotten Tomatoes — simple, summative, unapologetically derivative — for truth-claims.

If the grandeur implicit in that proposition sounds familiar, it’s because the idea for WikiFactCheck is pretty much identical to the one that guided the development of Wikipedia: to become a centralized repository of information shaped by, and limited only by the commitment of, the crowd. A place where the veracity of information is arbitrated discursively — among people who are motivated by the desire for veracity itself.

Which is idealistic, yes — unicornslollipopsrainbows idealistic, even — but, then again, so is Wikipedia. “In 2000, before Wikipedia started, the idea that you would have an online encyclopedia that was updated within seconds of something happening was preposterous,” Lih points out. Today, though, not only do we take Wikipedia for granted; we become indignant in those rare cases when entries fail to offer us up-to-the-minute updates on our topics of interest. Thus, the premise of WikiFactCheck: What’s to say that Wikipedia contributors’ famous commitment — of time, of enthusiasm, of Shirkian surplus — can’t be applied to verifying information as well as aggregating it?

What such a platform would look like, once populated, remains to be seen; the beauty of a wiki being its flexibility, users will make of the site what they will, with the crowd determining which claims/episodes/topics deserve to be checked in the first place. Ideally, “an experienced community of folks who are used to cataloging and tracking these kinds of things” — seasoned Wikipedians — will guide that process, Lih says. As he imagines it, though, the ideal structure of the site would filter truth-claims by episode, or “module” — one episode of “Meet the Press,” say, or one political campaign ad. “I think that’s pretty much what you’d want: one page per media item,” Lih says. “Whether that item is one show or one ad, we’ll have to figure out.”
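As a speculative illustration of that structure (nothing here is Lih's actual design), a "one page per media item" module with a Rotten Tomatoes-style rollup of outside verdicts might reduce to something like:

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        verdicts: dict = field(default_factory=dict)  # outlet -> "true"/"false"

    @dataclass
    class MediaItem:
        title: str  # one show, one ad, one speech
        claims: list = field(default_factory=list)

        def scorecard(self):
            # Meta-site rollup: count claims that most outlets call false.
            checked = [c for c in self.claims if c.verdicts]
            mostly_false = sum(
                1 for c in checked
                if list(c.verdicts.values()).count("false")
                   > list(c.verdicts.values()).count("true"))
            return {"claims": len(self.claims),
                    "checked": len(checked),
                    "mostly_false": mostly_false}

The aggregation step is where the "meta-site" idea lives: each claim can carry verdicts from PolitiFact, FactCheck.org, and others, and the module page simply summarizes them.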

Another thing to figure out will be how a wiki that will likely rely on publishing comprehensive documents — transcripts, articles, etc. — to verify their contents will dance around copyright issues. But “if there ever were a slam-dunk case for meeting all the attributes of the Fair Use Doctrine,” Lih says, “this is it.” Fact-checking is criticism and comment; it has an educational component (particularly if it operates under the auspices of USC Annenberg); and it doesn’t detract from content’s commercial value. In fact: “I can’t imagine another project that could be so strong in meeting the standards for fair use,” Lih says.

And what about the most common concern when it comes to informational wikis — that people with less-than-noble agendas will try to game the system and codify baseless versions of the truth? “In the Wikipedia universe, what has shaken out is that a lot of those folks who are not interested in the truth wind up going somewhere else,” Lih points out. (See: Conservapedia.) “They find that the community that is concerned with neutrality and with getting verifiable information into Wikipedia is going to dominate.” Majority rules — in a good way.

At the same time, though, “I welcome die-hard Fox viewers,” Lih says. “I welcome people who think Accuracy in Media is the last word. Because if you can cite from a reliable source — from a congressional record, from the Census Bureau, from the Geological Survey, from CIA Factbook, from something — then by all means, I don’t really care what your political stripes are. Because the facts should win out in the end.”

Photo of Andrew Lih by Kat Walsh, used under a GNU Free Documentation License.
