
June 26 2013

16:48

What’s New in Digital Scholarship: A generation gap in online news, and does The Daily Show discourage tolerance?

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

We’re at the halfway mark in our year-long odyssey tracking all things digital media and academic. Below are studies that continue to advance understanding of various hot topics: drone journalism; surveillance and the public; Twitter in conflict zones; Big Data and its limits; crowdsourced information platforms; remix culture; and much more. We also suggest some further “beach reads” at the bottom. Enjoy the deep dive.

“Reuters Institute Digital News Report 2013: Tracking the Future of News”: Paper from University of Oxford Reuters Institute for the Study of Journalism, edited by Nic Newman and David A. L. Levy.

This new report provides tremendous comparative perspective on how different countries and news ecosystems are developing in both symmetrical and divergent ways (see the Lab’s write-up of the national differences and similarities it highlights). But it also provides some interesting hard numbers on the U.S. media landscape, drawn from a survey of more than 2,000 Americans.

Key U.S. data points include: the share of Americans reporting they accessed news on a tablet in the past week rose from 11 percent in 2012 to 16 percent in 2013; 28 percent said they accessed news on a smartphone in the past week; 75 percent of Americans reported accessing news online in the past week, while 72 percent said they got news through television and 47 percent reported having read a print publication; TV (43 percent) and online (39 percent) were Americans’ preferred platforms for accessing news. Further, a yawning divide exists between the preferences of those ages 18 to 24 and those over 55: among the younger cohort, 64 percent say the web is their main source for news, versus only 25 percent among the older group; as for TV, however, 54 percent of older Americans report it as their main source, versus only 20 percent among those 18 to 24. Finally, 12 percent of American respondents overall reported paying for digital news in 2013, compared to 9 percent in 2012.

“The Rise and Fall of a Citizen Reporter”: Study from Wellesley College, for the WebScience 2013 conference. By Panagiotis Metaxas and Eni Mustafaraj.

This study looks at a network of anonymous Twitter citizen reporters around Monterrey, Mexico, covering the drug wars. It provides new insights into conflict-zone journalism and information ecosystems in the age of digital media, as well as the limits of raw data. The researchers, both computer scientists, analyze a dataset focused on the hashtag #MTYfollow, consisting of “258,734 tweets written by 29,671 unique Twitter accounts, covering 286 days in the time interval November 2010-August 2011.” They drill down on @trackmty, the largest of the accounts involved, run under the pseudonym Melissa Lotzer.
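
For a sense of what this kind of hashtag-level analysis involves, here is a minimal Python sketch of the aggregates such a study starts from: tweet volume per day and the most active accounts. The CSV layout and column names here are hypothetical, not the authors’ actual pipeline.

    import csv
    from collections import Counter
    from datetime import datetime

    def summarize_hashtag_dataset(path):
        """Tally tweets per day and find the most active accounts.

        Assumes a CSV export with 'user' and 'created_at' (ISO 8601)
        columns; both the layout and the column names are hypothetical.
        """
        tweets_per_day = Counter()
        tweets_per_user = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                day = datetime.fromisoformat(row["created_at"]).date()
                tweets_per_day[day] += 1
                tweets_per_user[row["user"]] += 1
        # Surfaces spikes in daily volume and dominant accounts
        # (such as @trackmty), but says nothing about who they are.
        return tweets_per_day, tweets_per_user.most_common(10)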

The scholars reconstruct a sequence in which a wild Twitter “game” breaks out — obviously, with life-and-death stakes — involving accusations about cartel informants (“hawks,” or “halcones”) and citizen watchdogs (“eagles,” or “aguilas”), with counter-accusations flying that certain citizen reporters were actually working for the Zetas drug cartel; indeed, @trackmty ends up being accused of working for the cartels. Online trolls attack her on Twitter and in blogs.

“The original Melissa @trackmty is slow to react,” the study notes, “and when she does, she tries to point to her past accomplishments, in particular the creation of [a group of other media accounts] and the interviews she has given to several reporters from the US and Spain (REF). But the frequency of her tweeting decreases, along with the community’s retweets. Finally, at the end of June, she stops tweeting altogether.” It turns out that the real @trackmty had been exposed — “her real identity, her photograph, friends and home address.”

Little of this drama was obvious from the data. Ultimately, the researchers were able to interview the real @trackmty and members of the #MTYfollow community. The big lessons, they realize, are the “limits of Big Data analysis.” The data visualizations showing influence patterns and spikes in tweet frequency showed all kinds of interesting dynamics. But they were insufficient to make inferences of value about the community affected: “In analyzing the tweets around a popular hashtag used by users who worry about their personal safety in a Mexican city we found that one must go back and forth between collecting and analyzing many times while formulating the proper research questions to ask. Further, one must have a method of establishing the ground truth, which is particularly tricky in a community of — mostly — anonymous users.”

“Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory”: Study from Ohio State University, published in the Journal of Communication. By R. Kelly Garrett, Erik C. Nisbet, and Emily K. Lynch.

As the political fact-checking movement — the FactCheck.orgs and PolitiFacts, along with their various lesser-known cousins — has arisen, so too has a more hard-headed social science effort to get to the root causes of persistent lies and rumors, a problem made all the worse on the web. Of course, journalists hope truth can have a “corrective” effect, but the literature in this area suggests that blasting more facts at people often doesn’t work — hence talk of the “information deficit fallacy.” Thus, a cottage psych-media research industry has grown up, exploring “motivated reasoning,” “biased assimilation,” “confirmation bias,” “cultural cognition,” and other such concepts.

This study tries to advance understanding of how peripheral cues such as accompanying graphics and biographical information affect how citizens receive and accept corrective information. In experiments, the researchers asked subjects to respond to claims about the proposed Islamic cultural center near Ground Zero and the disposition of its imam. It turns out that contextual information — what the imam has said, what he looks like, anything that challenges dominant cultural norms — often erodes the intended corrective effect of the fact-checking message.

The authors conclude that the “most straightforward method of maximizing the corrective effect of a fact-checking article is to avoid including information that activates stereotypes or generalizations…which make related cognitions more accessible and misperceptions more plausible.” The findings have a grim quality: “The unfortunate conclusion that we draw from this work is that contextual information so often included in fact-checking messages by professional news outlets in order to provide depth and avoid bias can undermine a message’s corrective effects. We suggest that this occurs when the factually accurate information (which has only peripheral bearing on the misperception) brings to mind” mental shortcuts that contain generalizations or stereotypes about people or things — so-called “naïve theories.”

“Crowdsourcing CCTV surveillance on the Internet”: Paper from the University of Westminster, published in Information, Communication & Society. By Daniel Trottier.

A timely look at the implications of a society more deeply pervaded by surveillance technologies, this paper analyzes various web-based efforts in Britain that involve the identification of suspicious persons or activity. (The controversies around Reddit and the Boston Marathon bombing suspects come to mind here.) The researcher examines Facewatch, CrimeStoppers UK, Internet Eyes, and Shoreditch Digital Bridge, all of which attached commercial elements to crowdsourcing projects in which participants monitored feeds from surveillance cameras in public spaces. He points out that these “developments contribute to a normalization of participatory surveillance for entertainment, socialization, and commerce,” and that the “risks of compromised privacy, false accusations and social sorting are offloaded onto citizen-watchers and citizen-suspects.” Further, the study highlights the perils inherent in the “‘gamification’ of surveillance-based labour.”

“New Perspectives from the Sky: Unmanned aerial vehicles and journalism”: Paper from the University of Texas at Arlington, published in Digital Journalism. By Mark Tremayne and Andrew Clark.

The use of unmanned aerial vehicles (UAVs, or “drones”) in journalism is an area of growing interest, and this exploration provides some context and research-based perspective. Drones in the service of the media have already been used for everything from snapping pictures of Paris Hilton and surveying tornado-damaged areas in Alabama to filming secret government facilities in Australia and protester clashes in Poland. In all, the researchers found “eight instances of drone technology being put to use for journalistic purposes from late 2010 through early 2012.”

This practice will inevitably raise issues about the extent to which it goes too far. “It is not hard to imagine how the news media, using drones to gather information, could be subject to privacy lawsuits,” the authors write. “What the news media can do to potentially ward off the threat of lawsuits is to ensure that drones are used in an ethical manner consistent with appropriate news practices. News directors and editors and professional associations can establish codes of conduct for the use of such devices in much the same way they already do with the use of hidden cameras and other technology.”

“Connecting with the user-generated Web: how group identification impacts online information sharing and evaluation”: Study from University of California, Santa Barbara, published in Information, Communication & Society. By Andrew J. Flanagin, Kristin Page Hocevar, and Siriphan Nancy Samahito.

Whether it’s Wikipedia, Yelp, TripAdvisor, or some other giant pool of user-generated “wisdom,” these platforms convene large, disaggregated audiences that form loose memberships based around apparent common interests. But what makes certain communities bond and stick together, keeping online information environments fresh, passionate, and lively (and possibly accurate)?

The researchers performed experiments with undergraduates to see how adding small bits of personal information — university, major, gender — to informational posts changed viewers’ perceptions. Perhaps predictably, the results show that “potential contributors had more positive attitudes (manifested in the form of increased motivation) about contribution to an online information pool when they experienced shared group identification with others.”

For editors and online community designers and organizers, the takeaway is that information pools “may actually form and sustain themselves best as communities comprising similar people with similar views.” Not exactly an antidote to “filter bubble” fears, but it’s worth knowing if you’re an admin for an online army.

“Selective Exposure, Tolerance, and Satirical News”: Study from University of Texas at Austin and University of Wyoming, published in the International Journal of Public Opinion Research. By Natalie J. Stroud and Ashley Muddiman.

While not the first study to focus on the rise of satirical news — after all, a 2005 study in Political Communication on “The Daily Show with Jon Stewart” now has 230 subsequent academic citations, according to Google Scholar — this new study looks at satirical news viewed specifically in a web context.

It suggests the dark side of snark, at least in terms of promoting open-mindedness and deliberative democracy. The conclusion is blunt: “The evidence from this study suggests that satirical news does not encourage democratic virtues like exposure to diverse perspectives and tolerance. On the contrary, the results show that, if anything, comedic news makes people more likely to engage in partisan selective exposure. Further, those viewing comedic news became less, not more, tolerant of those with political views unlike their own.” Knowing Colbert and Stewart, the study’s authors can expect an invitation soon to atone for this study.

“The hidden demography of new media ethics”: Study from Rutgers and USC, published in Information, Communication & Society. By Mark Latonero and Aram Sinnreich.

The study leverages 2006 and 2010 survey data, both domestic and international, to take an analytical look at how notions of intellectual property and ethical Web culture are evolving, particularly as they relate to ideas such as remixing, mashups and repurposing of content. The researchers find a complex tapestry of behavioral norms, some of them correlated with certain age, gender, race or national traits. New technologies are “giving rise to new configurable cultural practices that fall into the expanding gray area between traditional patterns of production and consumption. The data suggest that these practices have the potential to grow in prevalence in the United States across every age group, and have the potential to become common throughout the dozens of industrialized nations sampled in this study.”

Further, rules of the road have formed organically, as technology has outstripped legal strictures: “Most significantly, despite (or because of) the inadequacy of present-day copyright laws to address issues of ownership, attribution, and cultural validity in regard to emerging digital practices, everyday people are developing their own ethical frameworks to distinguish between legitimate and illegitimate uses of reappropriated work in their cultural environments.”

Beach reads:

Here are some further academic paper honorable mentions this month — all from the culture and society desk:

Photo by Anna Creech used under a Creative Commons license.

August 21 2012

14:30

Inside the Star Chamber: How PolitiFact tries to find truth in a world of make-believe

PolitiFact editor Bill Adair in the Star Chamber

WASHINGTON — PolitiFact’s “Star Chamber” is like Air Force One: It’s not an actual room, just the name of wherever Bill Adair happens to be sitting when it’s time to break out the Truth-O-Meter and pass judgment on the words of politicians. Today it’s his office.

Three judges preside, usually the same three: Adair, Washington bureau chief of the Tampa Bay (née St. Petersburg) Times; Angie Drobnic Holan, his deputy; and Amy Hollyfield, his boss.

For this ruling — one of four I sat in on over two days last month — Holan and Hollyfield are on the phone. Staff writer Louis Jacobson is sitting in. He is recommending a rating of False for this claim, from Rep. Jeff Duncan (R-S.C.), but Hollyfield wants to at least consider something stronger:

83% of doctors have considered leaving the profession because of #Obamacare #repealandreplace

— Jeff Duncan (@Duncan4Congress) July 10, 2012

Hollyfield: Is there any movement for a Pants on Fire?

Adair: I thought about it, but I didn’t feel like it was far enough off to be a Pants on Fire. What did you think, Lou?

Jacobson: I would agree. Basically it was a case I think of his staff blindly taking basically what was in Drudge and Daily Caller. Should they have been more diligent about checking the fine print of the poll? Yes, they should have. Were they being really reckless in what they did? No. It was pretty garden-variety sloppiness, I would say. I don’t think it rises to the level of flagrancy that I would think of a Pants on Fire.

Adair: It’s just not quite ridiculous. It’s definitely false, but I don’t think it’s ridiculous.

This scene has played out 6,000 times before, but not in public view. Like the original Court of Star Chamber, PolitiFact’s Truth-O-Meter rulings have always been secret. The Star Chamber was a symbol of Tudor power, a 15th-century invention of Henry VII to try people he didn’t much care for. While the history is fuzzy, Wikipedia’s synopsis fits the chamber’s present-day reputation: “Court sessions were held in secret, with no indictments, no right of appeal, no juries, and no witnesses.”

PolitiFact turns five on Wednesday. Adair founded the site to cover the 2008 election, but the inspiration came one cycle earlier, when a turncoat Democrat named Zell Miller told tens of thousands of Republicans that Sen. John Kerry had voted to weaken the U.S. military. “Miller was really distorting his record,” Adair says, “and yet I didn’t do anything about it.”

The team won a Pulitzer Prize for the election coverage. The site’s basic idea — rate the veracity of political statements on a six-point scale — has modernized and mainstreamed the old art of fact-checking. The PolitiFact national team just hired its fourth full-time fact checker, and 36 journalists work for PolitiFact’s 11 licensed state sites. This week PolitiFact launches its second mobile app for iPhone and Android, the free “Settle It!,” which provides a clever keyword-based interface to help resolve arguments at the dinner table. (PolitiFact’s original mobile app, at $1.99, has sold more than 24,000 copies.) The site attracts about 100,000 pageviews per day, Adair told me, and that number will certainly rise as the election draws closer and politicians get weirder.

PolitiFact's "I Brake for Pants on Fire" bumper sticker

If your job is to call people liars, and you’re on a roll doing it, you can expect a steady barrage of criticism. PolitiFact has been under fire practically as long as it has existed, but things intensified earlier this year, when Rachel Maddow criticized PolitiFact for, in her view, botching a series of rulings.

In public, Adair responded coolly: “We don’t expect our readers to agree with every ruling we make” is his refrain. In private, the criticism struck a nerve.

“I think the criticism in January and February, added to some of the criticism we’ve gotten from conservatives over the months, persuaded us that we needed to make some improvements in our process,” Adair told me. “We directed our reporters to slow down and not try to rush fact-checks. We directed all of our reporters and editors to make sure that [they're] clear in the ruling statement.”

Adair made a series of small changes to tighten up the journalism. And for the first time he invited a reporter — me — to watch the truth sausage get made.

The paradox of fact-checking

To understand fact-checking is to accept a paradox: “Words matter,” as PolitiFact’s core principles go, and “context matters.”

Consider this incident recently all over the news: Harry Reid says some guy told him Mitt Romney didn’t pay taxes for 10 years. It’s probably true. Some guy probably did say that to Harry Reid. But we can’t know for sure. To evaluate that statement is almost impossible without cooperative witnesses to the conversation.

Now, is Reid’s implication true? We can’t know that, either, not until someone produces evidence. So how does a fact checker handle this claim?

The Truth-O-Meter gave Reid its harshest ruling, “Pants on Fire,” a PolitiFact trademark reserved for claims it considers not only false but absurd. In the Star Chamber, judges ruled that Reid had no evidence to back up his claim.

“It is now possible to get called a liar by PolitiFact for saying something true,” complained James Poniewozik and others. But True certainly would not have sufficed here; neither would Half True.

Maybe the Truth-O-Meter needs an “Unsubstantiated” rating. PolitiFact considered one but decided against it, Adair told me, “because of fears that we’d end up rating many, many things ‘unsubstantiated.’”

Whereas truth is complicated, elastic, subjective… the Truth-O-Meter is simple, fixed, unambiguous. In a way, this overly simplistic device embodies the problem PolitiFact is trying to solve.

“The fundamental irony is that the same technological changes and changes in the media system that make organizations like PolitiFact and FactCheck.org possible also make their work less effective, in that we do have this highly fragmented media environment,” said Lucas Graves, who recently defended his dissertation on fact-checking at Columbia University.

So the Truth-O-Meter is the ultimate webby invention: bite-sized, viral-ready. Whether that Pants on Fire for Reid was warranted or not, 4,300 shares on Facebook is pretty good. PolitiFact is not the only fact checker in town, but the Truth-O-Meter is everywhere; the same simplicity in its rating system that opens it to so much criticism also helps it spread, tweet by tweet.

“PolitiFact exists to be cited. It exists to be quoted,” Graves said. “Every Truth-O-Meter piece packages really easily and neatly into a five-minute broadcast segment for CNN or for MSNBC.” (In fact, Adair told me, he has appeared on CNN alone at least 300 times.)

PolitiFact political cartoon

Stories get “chambered,” in PolitiFact parlance, 10-15 times a week. Adair begins by reading the ruling statement — that is, the precise phrase or claim being evaluated — aloud. Then — and this is new, post-criticism — Adair asks four questions, highlighted in bold. (“Sounds like something from Passover, but the four questions really helps get us focused,” he says.)

Adair: We are ready to rule on the Jeff Duncan item. So the ruling statement is: “83 percent of doctors have considered leaving the profession because of ObamaCare.” Lou is recommending a False. Let’s go through the questions.

Is the claim literally true?

Adair: No.

Jacobson: No, using Obamacare.

Is the claim open to interpretation? Is there another way to read the claim?

Jacobson: I don’t think so.

Adair: I don’t think so.

Does the speaker prove the claim to be true?

Adair: No. Did you get in touch with Duncan?

Jacobson: Yes, and his office declined to speak. Politely declined.

Did we check to see how we handled similar claims in the past?

Adair: Yes, we looked at the — and this didn’t actually get included in the item…

Jacobson: The Glenn Beck item.

Adair: Was it Glenn Beck?

Jacobson: Two years ago.

Adair: I thought it was the editorial in the Financial Times or whatever. What was that?

Jacobson: Well, Beck was quoted citing a poll by Investor’s Business Daily.

Adair: Investor’s Business Daily, right.

Jacobson: We gave that a False too, I think. But similar issues, basically.

Adair: Okay. So we have checked how we handled similar things in the past. Lou is recommending a False. How do we feel about False?

Holan: I feel good.

Hollyfield: Yup.

Adair: Good. All right, not a lot of discussion on this one!

After briefly considering Pants on Fire, they agree on False.

Question No. 3 — Does the speaker prove the claim to be true? — ensures the reporter always talks to the person who made the statement. Among Maddow’s complaints was that she was never contacted for a False ruling on one of her claims.

Another change in the last year has created a lot of grief for PolitiFact: Fact checkers now lean more heavily on context when politicians appear to take credit or give blame. Which brings us to Rachel Maddow’s complaint. In his 2012 State of the Union address, President Obama said:

In the last 22 months, businesses have created more than 3 million jobs. Last year, they created the most jobs since 2005.

PolitiFact rated that Half True, saying an executive can only take so much credit for job creation. But did he take credit? Would the claim have been 100 percent true if not for the speaker? Under criticism, PolitiFact revised the ruling up to Mostly True. Maddow was not satisfied:

You are a mess! You are fired! You are undermining the definition of the word “fact” in the English language by pretending to it in your name. The English language wants its word back. You are an embarrassment. You sully the reputation of anyone who cites you as an authority on “factishness,” let alone fact. You are fired.

Maddow (in addition to many, many liberals) was already mad about PolitiFact’s pick for 2011 Lie of the Year, that Republicans had voted, through the Ryan budget, to end Medicare. Of course, her criticism then was that PolitiFact was too literal.

“Forget about right or wrong,” Graves said. “There’s no right answer if you define ‘right’ as coming up with a ruling that everybody will agree with, especially when it comes to the question of interpreting things literally or taking an account out of context.” Damned if they do, damned if they don’t.

Graves, who identifies himself as falling “pretty left” on the spectrum, has observed PolitiFact twice: for a week last year and again for a three-day training session with one of PolitiFact’s state sites.

“One of the things that comes through clearest when you spend time with fact checkers…is that they have a very healthy sense that these are imperfect judgments that they’re making, but at the same time they’re going to strive to do them as fairly as possible. It’s a human endeavor. And like all human endeavors, it’s not infallible.”

A real live Truth-O-Meter

The truth is that fact-checking, and fact checkers, are kinda boring. What I witnessed was fair and fastidious; methodical, not mercurial. (The other three rulings I sat in on were similarly uneventful.) I could uncover no evidence of PolitiFact’s evil scheme to slander either Republicans or Democrats. Adair says he’s a registered independent. He won’t tell me which candidate he voted for last election, and he protects his staff members’ privacy in the voting booth. In Virginia, where he lives, Adair abstains from open primary elections. Revealing his own politics would “suggest a bias that I don’t think is there,” Adair says.

“In a hyper-partisan world, that information would get distorted, and it would obscure the reality, which is that I think political journalists do a good job of leaving their personal beliefs at home and doing impartial journalism,” he says.

Does all of this effort make a dent in the net truth of the universe? Is moving from he-said-she-said to some form of judgment, simplified as it may be, “working”? Last month, David Brooks wrote:

A few years ago, newspapers and nonprofits set up fact-checking squads, rating campaign statements with Pinocchios and such. The hope was that if nonpartisan outfits exposed campaign deception, the campaigns would be too ashamed to lie so much.

This hope was naive. As John Dickerson of Slate has said, the campaigns want the Pinocchios. They want to show how tough they are.

“I don’t think we were naive. I’ve always said anyone who imagines we can change the behavior of candidates is bound to be disappointed,” said Brooks Jackson, director of FactCheck.org. He was a pioneer of modern political fact-checking for CNN in the 1990s. “I suspect it is a fact that the junior woodchucks on the campaign staffs have now perversely come to value our criticism as some sort of merit badge, as though lying is a virtue, and a recognized lie is a bigger virtue.”

Rarely is there a high political cost to lying. All the explainers in the world couldn’t completely blunt the impact of the Swift Boat Veterans for Truth’s campaign to denigrate John Kerry’s military service. More recently, in July, the Democratic Congressional Campaign Committee claimed Chinese prostitution money helped finance the campaign of a Republican congressman in Ohio. PolitiFact rated it Pants on Fire.

That didn’t stop the DCCC from rolling out identical claims in Wisconsin and Tennessee. The DCCC eventually apologized. But which made more of an impression on voters, the original lie or the eventual apology from an amorphous nationwide organization?

Brendan Nyhan, a political science professor at Dartmouth College, has done a lot of research on the effects of fact-checking on the public. As he wrote for CJR:

It is true that corrective information may not change readers’ minds. My research with Georgia State’s Jason Reifler finds that corrections frequently fail to reduce misperceptions among the most vulnerable ideological group and can even make them worse (PDF). Other research has reached similarly discouraging conclusions — at this point, we know much more about what journalists should not do than how they can respond effectively to false statements (PDF).

If the objective of fact-checking is to get politicians to stop lying, then no, fact-checking is not working. “My goal is not to get politicians to stop lying,” is another of Adair’s refrains. “Our goal is…to give people the information they need to make decisions.”

Unlike The Washington Post’s Glenn Kessler, who awards Pinocchios for lies, or PolitiFact, which rates claims on a Truth-O-Meter, Jackson’s FactCheck.org doesn’t reduce its findings to a simple measurement. “I think you are telling people we can tell the difference between something that is 45 percent true and 57 percent true — and some negative number,” he said, referring to Pants on Fire. “There isn’t any scientific objective way to measure the degree of mendacity to any particular statement.”

“I think it’s fascinating that they chose to call it a Truth-O-Meter instead of a Truth Meter,” Graves said. Truth-O-Meter sounds like a kitchen gadget, or a toy. “That ‘O’ is sort of acknowledging that this is a human endeavor. There’s no such thing as a machine for perfectly and accurately making judgments of truth.”

Political cartoon by Chip Bok used with permission.

January 06 2012

15:30

This Week in Review: Lessons from Murdoch on Twitter, and paywalls’ role in 2011-12

Murdoch, Twitter, and identity: News Corp.’s Rupert Murdoch had a pretty horrible 2011, but he ended it with a curious decision, joining Twitter on New Year’s Eve. The account was quickly verified and introduced as real by Twitter chairman Jack Dorsey, dousing some of the skepticism about its legitimacy. His Twitter stream so far has consisted of a strange mix of News Corp. promotion and seemingly unfiltered personal opinions: He voiced his support for presidential candidate Rick Santorum (a former paid analyst for News Corp.’s Fox News) and ripped former Fox News host Glenn Beck.

But the biggest development in Murdoch’s Twitter immersion concerned his wife, Wendi Deng, who appeared to join Twitter a day after he did and was also quickly verified as legitimate by Twitter. (The account even urged Murdoch to delete a tweet, which he did.) As it turned out, though, the account was not actually Deng’s, but a fake run by a British man. He said Twitter verified the account without contacting him.

This, understandably, raised a few questions about the reliability of identity online: If we couldn’t trust Twitter to tell us who on its service was who they said they were, the issue of online identity was about to become even more thorny. GigaOM’s Mathew Ingram chastised Twitter for its lack of transparency about the process, and The Washington Post’s Erik Wemple urged Twitter to get out of the verification business altogether: “The notion of a central authority — the Twitterburo, so to speak — sitting in judgment of authentic identities grinds against the identity of Twitter to begin with.” (Twitter has begun phasing out verification, limiting it to a case-by-case basis.)

Eric Deggans of the Tampa Bay Times argued that the whole episode proved that regardless of what Twitter chooses to do, “the Internet is always the ultimate verification system for much of what appears on it.” Kara Swisher of All Things Digital unearthed the cause of the faulty verification in this particular case: a punctuation mixup in communication with Deng’s assistant.

Columbia’s Emily Bell drew a valuable lesson from the Rupert-joins-Twitter episode: As they wade into the social web, news organizations, she argued, need to do some serious thinking about how much control they’re giving up to third-party groups who may not have journalism among their primary interests. Elsewhere in Twitter, NPR Twitter savant Andy Carvin and NYU prof Clay Shirky spent an hour on WBUR’s On Point discussing Twitter’s impact on the world.

Trend-spotting for 2011 and 2012: I caught the front end of year-in-review season in my last review before the holidays, after the Lab’s deluge of 2012 predictions. But 2011 reviews and 2012 previews kept rolling in over the past two weeks, giving us a pretty thoroughly drawn picture of the year that was and the year to come. We’ll start with 2011.

Nielsen released its list of the most-visited sites and most-used devices of the year, with familiar names — Google, Facebook, Apple, YouTube — at the top. And Pew tallied the most-talked-about subjects on social media: Osama bin Laden on Facebook and Egypt’s Hosni Mubarak on Twitter topped the lists, and Pew noted that many of the top topics were oriented around specific people and led by the traditional media.

The Next Web’s Anna Heim and Mashable’s Meghan Peters reviewed the year in digital media trends, touching on social sharing, personal branding, paywalls, and longform sharing, among other ideas. At PBS MediaShift, Jeff Hermes and Andy Sellars authored one of the most interesting and informative year-end media reviews, looking at an eventful year in media law. As media analyst Alan Mutter pointed out, though, 2011 wasn’t so great for newspapers: Their shares dropped 27 percent on the year.

One of the flashpoints in this discussion of 2011 was the role of paywalls in the development of news last year: Mashable’s Peters called it “the year the paywall worked,” and J-Source’s Belinda Alzner said the initial signs of success for paywalls are great news for the financial future of serious journalism. Mathew Ingram of GigaOM pushed back against those assertions, arguing that paywalls are only working in specific situations, and media prof Clay Shirky reflected on the ways paywalls are leading news orgs to focus on their most dedicated users, which may not necessarily be a bad thing. “The most promising experiment in user support means forgoing mass in favor of passion; this may be the year where we see how papers figure out how to reward the people most committed to their long-term survival,” he wrote.

Which leads us to 2012, and sets of media/tech predictions from the Guardian’s Dan Gillmor, j-prof Alfred Hermida, Mediaite’s Rachel Sklar, Poynter’s Jeff Sonderman, and Sulia’s Joshua Young. Sklar and Sonderman both asserted that news is going to move the needle online (especially on Facebook, according to Sonderman), and while Hermida said social media is going to start to just become part of the background, he argued that that’s a good thing — we’re going to start to find the really interesting uses for it, as Gillmor also said. J-prof Adam Glenn also chimed in at PBS MediaShift with his review of six trends in journalism education, including journo-programming and increased involvement in community news.

SOPA’s generation gap: The debate over Internet censorship and SOPA will continue unabated into the new year, and we’re continuing to see groups standing up for and against the bill, with the Online News Association and dozens of major Internet companies voicing their opposition. One web company that notoriously came out in favor of the bill, GoDaddy, faced the wrath of the rest of the web, with some 37,000 domains pulled in two days. The web host quickly withdrew its support for SOPA, though it isn’t opposing the bill, either.

New York Times media critic David Carr also made the case against the bill, noting that it’s gaining support because many members of Congress are on the other side of a cultural/generational divide from those on the web. He quoted Kickstarter co-founder Yancey Strickler: “It’s people who grew up on the Web versus people who still don’t use it. In Washington, they simply don’t see the way that the Web has completely reconfigured society across classes, education and race. The Internet isn’t real to them yet.”

Forbes’ Paul Tassi wrote about the fact that many major traditional media companies have slyly promoted some forms of piracy over the past decade, and GigaOM’s Derrick Harris highlighted an idea to have those companies put some of their own money into piracy enforcement.

Tough times for the Times: It’s been a rough couple of weeks for The New York Times: Hundreds of staffers signed an open letter to Publisher Arthur Sulzberger Jr. expressing their frustration over various compensation and benefits issues. The Huffington Post’s Michael Calderone reported that the staffers’ union had also considered storming Sulzberger’s office or walking out, and Politico’s Dylan Byers noted that the signers covered a broad swath of the Times’ newsroom, cutting across generational lines.

The Atlantic’s Adam Clark Estes gave some of the details behind the union’s concerns about the inequity of the paper’s buyouts. But media consultant Terry Heaton didn’t have much sympathy: He said the union’s pleas represented an outmoded faith in the collective, and that Times staffers need to take more of an everyone-for-themselves approach.

The Times also announced it would sell its 16 regional newspapers for $143 million to Halifax Media Group, a deal that had been rumored for a week or two, and told Jim Romenesko it would drop most of its podcasts this year. To make matters worse, the paper mistakenly sent an email to more than 8 million followers telling them their print subscriptions had been canceled.

Reading roundup: Here’s what else you might have missed over the holidays:

— A few thoughtful postscripts in the debate over PolitiFact and fact-checking operations: Slate’s Dave Weigel and Forbes’ John McQuaid dissected PolitiFact’s defense, and Poynter’s Craig Silverman offered some ideas for improving fact-checking from a recent roundtable. And Greg Marx of the Columbia Journalism Review argued that fact-checkers are over-reaching beyond the bounds of the bold language they use.

— A couple of good pieces on tech and the culture of dissent from Wired: A Sean Captain feature on the efforts to meet the social information needs of the Occupy movement, and the second part of Quinn Norton’s series going inside Anonymous.

— For Wikipedia watchers, a good look at where the site is now and how it’s trying to survive and thrive from The American Prospect.

— Finally, a deep thought about journalism for this weekend: Researcher Nick Diakopoulos’ post reconceiving journalism in terms of information science.

Crystal ball photo by Melanie Cook used under a Creative Commons license.

January 05 2012

19:30

Hacking consensus: How we can build better arguments online

In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.

It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.

To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Defining an argument

We’re all familiar with arguments, both in media and in everyday life. But it’s worth briefly reviewing what an argument actually is, as doing so can inform how we might better structure arguments online. “The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt,” writes Douglas Walton in his book Fundamentals of Critical Argumentation. “An argument is made up of statements called premises and a conclusion. The premises give a reason (or reasons) to support the conclusion.”
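
Walton’s definition maps naturally onto structured data. As a minimal sketch in Python (the class names are ours, purely illustrative, not any existing system’s schema), an argument stops being a wall of prose and becomes a set of discrete, addressable claims:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Claim:
        text: str                 # a single statement, open to assessment

    @dataclass
    class Argument:
        premises: List[Claim]     # the reasons offered in support
        conclusion: Claim         # the claim subject to doubt

    # Krugman's column, reduced to its skeleton:
    tax_argument = Argument(
        premises=[
            Claim("Budget deficits need to be reduced."),
            Claim("Much financial trading is of dubious value."),
        ],
        conclusion=Claim("We should tax financial transactions."),
    )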

So an argument can be broken up into discrete claims, unified by a structure that ties them together. But our typical conceptions of online content ignore all that. Why not design content to more easily assess each claim in an argument individually? UI designer Bret Victor is working on doing just that through a series of experiments he collectively calls “Explorable Explanations.”

Writes Victor:

A typical reading tool, such as a book or website, displays the author’s argument, and nothing else. The reader’s line of thought remains internal and invisible, vague and speculative. We form questions, but can’t answer them. We consider alternatives, but can’t explore them. We question assumptions, but can’t verify them. And so, in the end, we blindly trust, or blindly don’t, and we miss the deep understanding that comes from dialogue and exploration.

The alternative is what he calls a “reactive document” that imposes some structure onto content so that the reader can “play with the premise and assumptions of various claims, and see the consequences update immediately.”

Although Victor’s first prototype, Ten Brighter Ideas, is a list of recommendations rather than a formal argument, it gives a feel of how such a document could work. But the specific look, feel and design of his example aren’t important. The point is simply that breaking up the contents of an argument beyond the level of just a post or column makes it possible for authors, editors or the community to deeply analyze each claim individually, while not losing sight of its place in the argument’s structure.

Show me the evidence (and the conversation)

Victor’s prototype suggests a more interesting way to structure and display arguments by breaking them up into individual claims, but it doesn’t tell us anything about what sort of content should be displayed alongside each claim. To start with, each claim could be accompanied by relevant links that help the reader make sense of that claim, either by providing evidence, counterpoints, context, or even just a sense of who does and does not agree.

At multiple points in his column, Krugman references “the evidence,” presumably referring to parts of the economics literature that support his argument. But what is the evidence? Why can’t it be cited alongside the column? And, while we’re at it, why not link to countervailing evidence as well? For an idea of how this might work, it’s helpful to look at a crowd-sourced fact-checking experiment run by the nonprofit NewsTrust. The “TruthSquad” pilot has ended, but the content is still online. One thing that NewsTrust recognized was that rather than just being useful for comment or opinion, the crowd can be a powerful tool for sourcing claims. For each fact that TruthSquad assessed, readers were invited to submit relevant links and mark them as For, Against, or Neutral.

The links that the crowd identified in the NewsTrust experiment went beyond direct evidence, and that’s fine. It’s also interesting for the reader to see what other writers are saying, who agrees, who disagrees, etc. The point is that a curated or crowd-sourced collection of links directly relevant to a specific claim can help a reader interested in learning more to save time. And allowing space for links both for and against an assertion is much more interesting than just having the author include a single link in support of his or her claim.
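
Extending the earlier sketch, a TruthSquad-style link pool is straightforward to model: each claim carries links tagged For, Against, or Neutral, and a simple tally shows the reader the balance at a glance. (Again, the names are illustrative, not NewsTrust’s actual schema.)

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List

    class Stance(Enum):
        FOR = "for"
        AGAINST = "against"
        NEUTRAL = "neutral"

    @dataclass
    class EvidenceLink:
        url: str
        stance: Stance

    @dataclass
    class SourcedClaim:
        text: str
        links: List[EvidenceLink] = field(default_factory=list)

        def tally(self) -> Dict[Stance, int]:
            """Count submitted links by stance, for display beside the claim."""
            counts = {stance: 0 for stance in Stance}
            for link in self.links:
                counts[link.stance] += 1
            return counts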

Community efforts to aggregate relevant links along the lines of the TruthSquad could easily be supplemented both by editor-curators (which NewsTrust relied on) and by algorithms which, if not yet good enough to do the job on their own, can at least lessen the effort required by readers and editors. The nonprofit ProPublica is also experimenting with a more limited but promising effort to source claims in their stories. (To get a sense of the usefulness of good evidence aggregation on a really thorny topic, try this post collecting studies of the stimulus bill’s impact on the economy.)

Truth, reliability, and acceptance

While curating relevant links allows the reader to get a sense of the debate around a claim and makes it easier for him or her to learn more, making sense of evidence still takes considerable time. What if a brief assessment of the claim’s truth, reliability or acceptance were included as well? This piece is arguably the hardest of those I have described. In particular, it would require editors to abandon the view from nowhere to publish a judgment about complicated statements well beyond traditional fact-checking. And yet doing so would provide huge value to the reader and could be accomplished in a number of ways.

Imagine that as you read Krugman’s column, each claim he makes is highlighted in a shade between green and red to communicate its truth or reliability. This sort of user interface is part of the idea behind “Truth Goggles,” a master’s project by Dan Shultz, an MIT Media Lab student and Mozilla-Knight Fellow. Shultz proposes to use an algorithm to check articles against a database of claims that have previously been fact-checked by PolitiFact. Shultz’s layer would highlight a claim and offer an assessment (perhaps by shading the text) based on the work of the fact checkers.
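
The shading itself is easy to mock up. Here is one way it could work (a sketch, not Shultz’s actual implementation), interpolating between red and green according to a claim’s truth score:

    def truth_color(score):
        """Map a truth score in [0.0, 1.0] to a hex color.

        0.0 (rated false) renders red, 1.0 (rated true) renders green,
        and values in between blend linearly.
        """
        score = max(0.0, min(1.0, score))
        red = int(255 * (1.0 - score))
        green = int(255 * score)
        return "#{:02x}{:02x}00".format(red, green)

    # truth_color(0.0) == "#ff0000"; truth_color(1.0) == "#00ff00"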

The beauty of using color is the speed and ease with which the reader is able to absorb an assessment of what he or she is reading. The verdict on the statement’s truthfulness is seamlessly integrated into the original content. As Shultz describes the central problem:

The basic premise is that we, as readers, are inherently lazy… It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything?

Still, the number of statements that PolitiFact has checked is relatively small, and what I’m describing requires the evaluation of messy empirical claims that stretch the limits of traditional fact-checking. So how might a publication arrive at such an assessment? In any number of ways. For starters, there’s good, old-fashioned editorial judgment. Journalists can provide assessments, so long as they resist the view from nowhere. (Since we’re rethinking the opinion pages here, why not task the editorial board with such a role?)

Publications could also rely on other experts. Rather than asking six experts to contribute to a “Room for Debate”-style forum, why not ask one to write a lead argument and the others not merely to “respond,” but to directly assess the lead author’s claims? Universities may be uniquely positioned to help in this, as some are already experimenting with polling their own experts on questions of public interest. Or what if a Quora-like commenting mechanism was included for each claim, as Dave Winer has suggested, so that readers could offer assessments, with the best ones rising to the top?

Ultimately, how to assess a claim is a process question, and a difficult one. But numerous relevant experiments exist in other formats. One new effort, Hypothes.is, is aiming to add a layer of peer review to the web, reliant in part on experts. While the project is in its early stages, its founder Dan Whaley is thinking hard about many of these same questions.

Better arguments

What I’ve described so far may seem elaborate or resource-intensive. Few publications these days have the staff and the time to experiment in these directions. But my contention is that the kind of content I am describing would be of dramatically higher value to the reader than the content currently available. And while Victor’s UI points towards a more aggressive restructuring of content, much could be done with existing tools. By breaking up an argument into discrete claims, curating evidence and relevant links, and providing credible assessments of those claims, publishers would equip readers to form opinions on merit and evidence rather than merely on trust, intuition, or bias. Aggregation sites like The Atlantic Wire may be especially well-positioned to experiment in this direction.

I have avoided a number of issues in this explanation. Notably, I have neglected to discuss counter-arguments (which I believe could be easily integrated) and haven’t discussed the tension between empirical claims and value claims (I have assumed a focus on the former). And I’ve ignored the tricky psychology surrounding bias and belief formation. Furthermore, some might cite the recent PolitiFact Lie of the Year controversy as evidence that this sort of journalism is too difficult. In my mind, that incident further illustrates the need for credible, honest referees.

Returning once more to Krugman’s argument, imagine the color of the text signaling whether his claims about financial transactions and economic growth are widely accepted. Or mousing over his point about reducing deficits to quickly see links providing background on the issue. What if it turned out that all of Krugman’s premises were assessed as compelling, but his conclusion was not? It would then be obvious that something was missing. Perhaps more interestingly, what if his conclusion was rated compelling but his claims were weak? Might he be trying to convince you of his case using popular arguments that don’t hold up, rather than the actual merits of the case? All of this would finally be apparent in such a setup.

In rethinking how we structure and assess arguments online, I’ve undoubtedly raised more questions than I’ve answered. But hopefully I’ve convinced you that better presentation of arguments online is at least possible. Not only that, but numerous hackers, designers, and journalists — and many who blur the lines between those roles — are embarking on experiments to challenge how we think about content, argument, truth, and credibility. It is in their work that the answers will be found.

Image by rhys_kiwi used under a Creative Commons license.

May 26 2011

18:00

Sarah Palin’s 2009 “death panel” claims: How the media handled them, and why that matters

Editor’s Note: This article was originally published on the journalism-and-policy blog Lippmann Would Roll. Written by Matthew L. Schafer, the piece is a distillation of an academic study by Schafer and Dr. Regina G. Lawrence, the Kevin P. Reilly Sr. chair of LSU’s Manship School of Mass Communication. They have kindly given us permission to republish the piece here.

It’s been almost two years now since Sarah Palin published a Facebook post about “death panels.” In a study to be presented this week at the 61st Annual International Communication Association Conference, we analyzed over 700 stories published in the top 50 newspapers around the country.

“The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s ‘death panel’ so his bureaucrats can decide…whether they are worthy of health care,” Palin wrote at the time.

Only three days later, PolitiFact, an arm of the St. Petersburg Times, published its appraisal of Palin’s comment, stating, “We agree with Palin that such a system would be evil. But it’s definitely not what President Barack Obama or any other Democrat has proposed.”

FactCheck.org, a project of the Annenberg Public Policy Center, would also debunk the claim, and PolitiFact users would later vote the death panel claim to the top spot of PolitiFact’s Lie of the Year ballot.

Despite this initial dismissal of the claim by non-partisan fact checkers, a cursory Google search turns up 1,410,000 results, showing just how powerful social media is in a fractured media climate.

Yet, the death panel claim — as we’re sure many will remember — lived not only online, but also in the newspapers and on cable and network television. In the current study, which ran from August 8 (the day after Palin made the claim) to September 13 (the day of the last national poll about death panels), the top 50 newspapers in the country published over 700 articles about the claims, while the nightly network news ran about 20 stories on the topic.

At the time, many commentators both in and outside of the industry offered their views on the media’s performance in debunking the death panel claim. Some lauded the media for coming out and debunking the claim, while others questioned whether it was the media’s “job” to debunk the myth at all.

“The crackling, often angry debate over health-care reform has severely tested the media’s ability to untangle a story of immense complexity,” Howard Kurtz, who was then at The Washington Post, said. “In many ways, news organizations have risen to the occasion….”

Yet, Media Matters was less impressed, at times pointing out, for example, that “the New York Times portrayed the [death panel] issue as a he said/she said debate, noting that health care reform supporters ‘deny’ this charge and call the claim ‘a myth.’ But the Times did not note, as its own reporters and columnists have previously, that such claims are indeed a myth…”

So, who was right? Did the media debunk the claim? And, if so, did they sway public opinion in the process?

Strong debunking, but confused readers

Our data indicate that the mainstream news, particularly newspapers, debunked death panels early, fairly often, and in a variety of ways, though some were more direct than others. Nevertheless, a significant portion of the public accepted the claim as true or, perhaps, as “true enough.”

Initially, we viewed the data from 30,000 feet and found that journalists called the death panel claim false in their own voice about 40 percent of the time, which was especially surprising considering many journalists’ conception of themselves as neutral arbiters.

For example, on August 9, 2009, Ceci Connolly of the Washington Post said, “There are no such ‘death panels’ mentioned in any of the House bills.”

“[The death panel] charge, which has been widely disseminated, has no basis in any of the provisions of the legislative proposals under consideration,” The New York Times’ Helene Cooper wrote a few days after Connolly.

“The White House is letting Congress come up with the bill and that vacuum of information is getting filled by misinformation, such as those death panels,” Anne Thompson of NBC News said on August 11.

Nonetheless, in more than 60 percent of cases, newspapers abstained from calling the death panels claim false. (We also looked at hundreds of editorials and letters to the editor; almost 60 percent of those debunked the claim, about 2 percent supported it, and the rest abstained.)

Additionally, of the articles that did debunk the claim, almost 75 percent contained no explanation of why the claim was being labeled false. Indeed, it was very much a “you either believe me, or you don’t” situation, without contextual support.

As shown in the chart below, whether or not journalists debunked the claim, they often approached the controversy by quoting one side of the debate, quoting the other, and then letting the reader dissect the validity of each side’s stance. Thus, in 30 percent of the cases where journalists reported in their own words that the claim was false, they nonetheless included each side’s arguments as to why it was right. This often just confuses the reader.

This chart shows that whether journalists abstained from debunking the death panels claim or not, they still proceeded to give equal time to each side’s supporters.

Most important is the light that this study sheds on the age-old debate over the practical limitations surrounding objectivity. Indeed, questions are continually raised about whether journalists can be objective. Most recently, this led to a controversy at TechCrunch where founder Michael Arrington was left defending his disclosure policy.

“But the really important thing to remember, as a reader, is that there is no objectivity in journalism,” Arrington wrote to critics. “The guys that say they’re objective are just pretending.”

This view, however, is not entirely true. Indeed, in the study of death panels, we found two trends that could each fit under the broad banner of objectivity.

Objectivity: procedural and substantive

First, there is procedural objectivity — mentioned above — where journalists do their due diligence and quote competing sides. Second, there is substantive objectivity, where journalists go beyond reflexively reporting what key political actors say and engage in verifying the accuracy of those claims for their readers or viewers.

Of course, every journalist is — to some extent — influenced by their experiences, predilections, and political preferences, but these traits do not necessarily interfere with objectively reporting verifiable fact. Indeed, it seems that journalists could practice either form of objectivity without being biased. Nonetheless, questions and worries still abound.

“The fear seems to be that going deeper—checking out the facts behind the posturing and trying to sort out who’s right and who’s wrong—is somehow not ‘objective,’ not ‘straight down the middle,’” Rem Rieder of the American Journalism Review wrote in 2007.

Perhaps because of this, journalists in our sample attempted to practice both types of objectivity at the same time: one which, arguably, serves the public interest by presenting the facts of the matter, and one which allows the journalist a sliver of plausible deniability, because he follows the insular journalistic norm of presenting both sides of the debate.

As such, we question New York University educator and critic Jay Rosen, who has argued that “neutrality and objectivity carry no instructions for how to react” to the rise of false but popular claims. We contend that the story is more complicated: Mainstream journalists’ figurative instruction manual contains contradictory “rules” for arbitrating the legitimacy of claims.

The consequences of these contradictory rules are suggested by public opinion polls taken during the August and September health care debates. One poll released August 20 reported that 30 percent of respondents believed the proposed health care legislation would “create death panels.” Belief in this extreme form of government rationing of health care remained impressively high (41 percent) into mid-September.

More troubling, one survey found that the percentage calling the claim true was significantly higher among those who said they were paying very close attention to the health care debate (39 percent) than among those following it fairly closely (23 percent) or not too closely (18 percent).

Yet, of course, our data do not allow us to say that these numbers are a direct result of the mainstream media’s death panel coverage. Nonetheless, because mainstream media content still powers so much of what websites and news organizations publish, this coverage may well have shaped public opinion to some indeterminable degree.

Conclusion

One way of looking at the resilience of the death panels claim is as evidence that the mainstream media’s role in contemporary political discourse has been attenuated. But another way of looking at the controversy is as a demonstration that the mainstream media themselves bore some responsibility for the claim’s persistence.

Palin’s Facebook post, which popularized the “death panel” catchphrase, said nothing about any specific legislative provision. News outlets and fact-checkers could examine the language of the bills then under debate to debunk the claim — and many did, as our data demonstrate. Nevertheless, it appears the nebulous “death panel bomb” reached its target in part because the mainstream media so often repeated it.

Thus, the dilemma for reporters playing by the rules of procedural objectivity is that repeating a claim reinforces a sense of its validity — or at least, enshrines its place as an important topic of public debate. Moreover, there is no clear evidence that journalism can correct misinformation once it has been widely publicized. Indeed, it didn’t seem to correct the death panels misinformation in our study.

Yet there is promise in substantive objectivity. Today, more than ever, journalists must act as curators, and the only way they can do so effectively is by critically examining the flood of social media messages and debunking, or refusing to reinforce, those that are verifiably false. As more politicians use the Internet to circumvent traditional media, this kind of critical curation will become increasingly important.

This is — or should be — journalists’ new focus: verifying information. Moreover, they should do so without including quotations from those taking a stance that is demonstrably false. Quoting both sides while debunking creates a factual jigsaw puzzle the reader must piece together: on one hand, the journalist calls the claim false; on the other, he gives column inches to someone who insists it’s true.

Putting aside the raucous debates about objectivity for a moment, it is clear that journalists can, in many circumstances, research and relay to their readers information about verifiable facts. If we don’t see a greater degree of this substantive objectivity, the public is left largely at the mercy of the savviest online communicator. If journalists refuse to critically curate new media, they leave both the public and themselves worse off.

Image of Sarah Palin by Tom Prete used under a Creative Commons license.

April 18 2011

20:14

Another online milestone for the Pulitzer Prize

It’s prize season for journalists, and today came the biggest of them all: the Pulitzer Prizes. And the trend toward online-only news organizations playing a part in what has traditionally been a newspaper game continues.

In the journalism categories, of the 1,097 total entries, about 100 came from online-only outlets, according to Pulitzer officials. Those entries came from 60 different news organizations. That’s a healthy growth curve, considering that in 2009, the first year online entries were welcomed, 37 organizations submitted 65 entries.

In the winner’s circle again is ProPublica, which took home its second Pulitzer this year. But unlike the nonprofit’s last prize, which was for a story published in The New York Times Magazine, this year’s prize (for reporters Jesse Eisinger and Jake Bernstein) was for work that didn’t move through a partner newspaper (although they did partner with radio’s Planet Money and This American Life). As ProPublica chief Paul Steiger wrote, “This year’s Prize is the first for a group of stories not published in print.”

ProPublica’s win follows on the heels of last year’s Pulitzer for Mark Fiore and his animated editorial cartoons, which had a home on SFGate, not in the San Francisco Chronicle, and the ground-breaking Pulitzer for PolitiFact in 2009.

At the same time that more online-only content is receiving a nod from the Pulitzer committee, more entries include a digital component. Nearly a third of this year’s journalism entries featured online content, up from about a fourth last year. Digital content also featured in seven of the winning entries, including the Milwaukee Journal Sentinel’s Explanatory Reporting award and the Sarasota Herald-Tribune’s Investigative Reporting prize.

A hearty congratulations to all the winners and finalists — especially the three past or present Nieman Fellows to be honored. They are current Nieman Fellow Tony Bartelme (finalist in Feature Writing), 2005 Nieman Fellow Amy Ellis Nutt (winner in Feature Writing), and Mary Schmich (finalist in Commentary). Nutt’s win is the 107th Pulitzer (if our quick count is right) to be won by a Nieman Fellow.

November 16 2010

19:40

Crowdsourced Fact-Checking? What We Learned from Truthsquad

In June, U.S. Senator Orrin Hatch claimed that "87 million Americans will be forced out of their coverage" by President Obama's health care plan.

It was quite a claim. But was it true?

That's a common, and important, question -- and it can often be hard to quickly nail down the real facts in the information-overloaded world we live in. Professional fact-checking organizations like PolitiFact and FactCheck.org have taken up the charge to verify the claims of politicians, pundits and newsmakers, and they provide a great service to the public. But I believe there's also a role for the average person in the fact-checking process. By actively researching and verifying what we hear in the news, we can become more informed citizens, and more discriminating news consumers. These are essential skills for all of us to develop.

With that in mind, we at NewsTrust, a non-profit social news network, have been working on Truthsquad, a community fact-checking service that helps people verify the news online, with professional guidance.

Our first pilot for Truthsquad took place in August 2010, with the help of our partners at the Poynter Institute, our advisors at FactCheck.org and our funders at Omidyar Network. That pilot was well received by our community, partners and advisors, as noted in our first report, and by third-party observers such as GigaOm. We've since hosted a variety of weekly Truthsquads, and are starting a second pilot with MediaBugs.org and RegretTheError.com to identify and correct errors in the news media. (Disclosure: MediaShift managing editor Craig Silverman runs RegretTheError.com.)

Our first test project was by our standards a success; more importantly, it revealed several important lessons about the best ways to manage crowdsourced fact-checking, and about why people participate in this activity. Here are our key takeaways from this first pilot, which I'll elaborate on below:

  • A game-like experience makes fact-checking more engaging.
  • A professional-amateur (pro-am) collaboration delivers reliable results and a civil conversation.
  • Crowd contributions are limited, requiring editorial oversight and better rewards.
  • Community fact-checking fills a gap between traditional journalism and social media.

What is Truthsquad?

Truthsquad.com features controversial quotes from politicians or pundits and asks the community whether they think each one is true or false. Community members are welcome to make a first guess, then check our answers and research links to see if they are correct. They can change their answer anytime, as they come across new facts.

To help participants find the right answer, we invite them to review and/or post links to factual evidence that supports or opposes each statement. A professional journalist leads each "truthsquad" to guide participants in this interactive quest. This "squad leader" then writes an in-depth verdict based on our collaborative research. That verdict is emailed to all participants with a request for comments. (It can be revised as needed.)
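To make that workflow concrete, here is a minimal sketch of how a quote, its community answers, and its verdict might be modeled. The class and field names are invented for illustration; this is not NewsTrust's actual schema.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Quote:
    """One controversial quote under community fact-check (hypothetical model)."""
    text: str
    speaker: str
    answers: List[str] = field(default_factory=list)   # "true" / "false" / "not sure"
    evidence: List[str] = field(default_factory=list)  # links posted as supporting or opposing
    verdict: Optional[str] = None                      # in-depth verdict written by the squad leader

    def consensus(self) -> Counter:
        # Tally the community's answers. Since members may change their
        # answer at any time, a real system would count only each
        # member's latest answer.
        return Counter(self.answers)

q = Quote(text="87 million Americans will be forced out of their coverage",
          speaker="Sen. Orrin Hatch")
q.answers += ["false"] * 138 + ["true"] * 11   # the pilot's final tally, reported below
print(q.consensus())   # Counter({'false': 138, 'true': 11})
```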

Finding #1: Game-Like Experience Makes Fact-Checking Engaging

We noted a significant increase in community participation for Truthsquad compared to other NewsTrust services we tested this year. Some data from our first pilot:

  • This pilot generated twice as much participation as other 2010 pilots.
  • Users gave ten times more answers per quote than reviews per story on our site.
  • Over half of the participants read linked stories, and a third answered a Truthsquad quote.
  • One in six participants reviewed the stories linked as factual evidence.

We think this high level of engagement is partly due to the game-like quality of our user experience, which starts by inviting people to guess whether a statement is true or false -- an easy task that anyone can do in under a minute.

After their first guess, people are more likely to participate as researchers, because their curiosity has been piqued and they want to know the answer. As a result, participants often take the time to review linked stories and post more evidence on their own. Without realizing it, they are becoming fact-checkers.

Finding #2: Pro-Am Collaboration Delivers Reliable Results

We decided early on that professionals needed to guide this collaborative investigation. We wanted to avoid some of the pitfalls of pure crowdsourcing initiatives, which can turn into mob scenes -- particularly around politically charged issues. At the start of this experiment, we asked experienced journalists at FactCheck.org and the Poynter Institute to coach us and our community and help write and edit some of our first verdicts.

We think the pro-am approach paid off in a number of ways:

  • Amateurs learned valuable fact-checking skills by interacting with professionals.
  • A few community members posted links that were critical to reaching our verdicts.
  • Answers from our community generally matched final verdicts from our editors.
  • We came to the same conclusions as FactCheck.org in side-by-side blind tests.
  • Comments from participants were generally civil and focused on facts.

[Image: Truthsquad's verdict on the Hatch quote]

The results of our first pilot led our advisor Brooks Jackson, director at FactCheck.org, to comment, "So far I would say the experiment is off to a solid start. The verdicts of the Truthsquad editors seem to me to be reasonable and based on good research."

This collaboration between journalists and citizens made us all more productive. The professionals shared helpful information-gathering tips, and the citizens extended that expertise on a larger scale, with multiple checks and balances between our community and our editors. Our editors spearheaded this investigation, but the community made important contributions through comments and links to factual evidence (some of which were invaluable). On a couple occasions, we even revised our verdicts based on new evidence from our community. This focus on facts also helped set the tone for our conversations, which were generally civil and informative.

Finding #3: Crowd Contributions Are Limited, Requiring Better Rewards

Despite high levels of participation, we didn't get as many useful links and reviews from our community as we had hoped. Our editorial team did much of the hard work to research factual evidence. (Two-thirds of story reviews and most links were posted by our staff.) Each quote represented up to two days of work from our editors, from start to finish. So this project turned out to be more labor-intensive than we thought, and a daily fact-checking service will require a dedicated editorial team to guarantee reliable results.

Managing our community and responding thoughtfully to their posts also takes additional time, and is an important part of this process. In future releases, we would like to provide more coaching and educational services, as well as better rewards for our contributors.

Training citizens to separate fact from spin is perhaps the greatest benefit of our initiative, but keeping them engaged will require ingenuity and tender loving care on our part.

"It seems based on this pilot that citizens can learn fact-checking skills quite easily," said Kelly McBride of the Poynter Institute. "The challenge is to motivate them to do this occasionally."

To address this issue, future versions of Truthsquad could reward members who take the time to fact-check the news in order to get them to do it more often. We would like to give them extra points for reading, reviewing or posting stories, as well as special badges, redeemable credits and/or prizes. We can also feature high scores on leaderboards, and give monthly awards to the most deserving contributors.
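As a rough illustration of how such rewards might be tallied, here is a toy sketch; the point values and action names are invented, not NewsTrust's design.

```python
# Hypothetical point values for the contributor actions mentioned above.
POINTS = {"read": 1, "answer": 1, "review": 3, "post_link": 5}

def score(actions):
    """Sum the points earned for one member's list of actions."""
    return sum(POINTS.get(action, 0) for action in actions)

member_actions = ["answer", "read", "post_link", "review"]
print(score(member_actions))   # 10
# A leaderboard would rank members by score; badges could unlock at thresholds.
```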

Finding #4: Community Fact-Checking Fills a Need

Every survey we have done in recent years has consistently shown fact-checking as a top priority for our community, and this was confirmed by the results of this pilot.

Here are some key observations from our recent survey about NewsTrust's 2010 pilots:

  • A majority of survey respondents (61 percent) found Truthsquad useful or very useful.
  • Half of survey respondents wanted a new quote every day -- or several quotes a day.
  • Half of survey respondents said they could fact-check quotes several times per week.
  • One in seven survey respondents were prepared to donate for this service.


We think the generally favorable response to Truthsquad is due to two factors: a growing demand for fact-checking services, combined with a desire to contribute to this civic process. Fact-checking is still the best way to verify the accuracy of what people hear in the news, and it is perceived as an effective remedy to expose politicians or pundits who propagate misinformation.

At the same time, the explosion of social media makes people more likely to participate in investigations like these. They want this civic watchdog network, and expect to have a voice in it.

Next steps

Based on the lessons from this experiment, we would like to offer Truthsquad on an ongoing basis, with a goal to fact-check one quote a day, year-round -- as well as to feature the work of other trusted research organizations on Truthsquad.com.

We also want to let members post their own quotes for fact-checking and reward them for their contributions, through both a game-like interface and more educational benefits. We also have an opportunity to track the expertise of participants based on their answers, which could allow us to measure their progress on core news literacy skills, their understanding of important public issues, and the overall impact of our service.
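One plausible way to track that expertise, sketched below under assumed data shapes (quote IDs mapping to answers), is to measure how often a member's answers match the editors' final verdicts.

```python
def accuracy(member_answers, final_verdicts):
    """member_answers and final_verdicts map quote_id -> "true"/"false".
    Returns the member's agreement rate with the editors' verdicts,
    or None if the member hasn't answered any decided quotes."""
    decided = [q for q in member_answers if q in final_verdicts]
    if not decided:
        return None
    hits = sum(member_answers[q] == final_verdicts[q] for q in decided)
    return hits / len(decided)

# Hypothetical member history: agreed on the Hatch quote, missed another.
print(accuracy({"hatch-87m": "false", "quote-2": "true"},
               {"hatch-87m": "false", "quote-2": "false"}))   # 0.5
```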

Over time, we hope to provide more training and certification services, to build lasting research skills that could help students and adults alike play a more active role in our growing information economy. If this appeals to you, we invite you to sign up here and join our experiment.

As for that Orrin Hatch quote? In the end, 163 participants helped us fact-check his statement about health care. Our final verdict was that the Senator's claim was false. That finding was based on factual evidence provided by one of our NewsTrust members, who dug up the right set of regulations, and pointed out they had been misstated by Hatch. Our editor's verdict was confirmed by similar findings from FactCheck.org, and also matched our community's consensus: 138 participants answered that this statement was false (versus 11 who thought it was true).

More importantly, we as a community learned how to separate fact from fiction -- and became more engaged as citizens.

Fabrice Florin is executive director and founder of NewsTrust, where he manages creative and business development for this next-generation social news network. NewsTrust helps people find and share good journalism online, so they can make more informed decisions as citizens. With a 30-year track record in new media and technology, Fabrice has developed a wide range of leading-edge entertainment, education and software products. Fabrice's previous wireless venture, Handtap, was a leading provider of multimedia content for mobile phones. Fabrice was recently elected an Ashoka Fellow for his work as a social entrepreneur in journalism.


August 16 2010

14:30

The Guardian launches governmental pledge-tracking tool

Since it came to office nearly 100 days ago, Britain’s coalition government — a team-up between Conservatives and Liberal Democrats that had the potential to be awkward and ineffective, but has instead (if The Economist’s current cover story is to be believed) emerged as “a radical force” on the world stage — has made 435 pledges, big and small, to its constituents.

In the past, those pledges might have gone the way of so many campaign promises: broken. But no matter — because they were also largely forgotten.

The Guardian, though, in keeping with its status as a data journalism pioneer, has released a tool that tries to solve the former problem by way of the latter. Its pledge-tracker, a sortable database of the coalition’s various promises, monitors the myriad pledges made according to their individual status of fulfillment: “In Progress,” “In Trouble,” “Kept,” “Not Kept,” etc. The pledges tracked are sortable by topic (civil liberties, education, transport, security, etc.) as well as by the party that initially proposed them. They’re also sortable — intriguingly, from a future-of-context perspective — according to “difficulty level,” with pledges categorized as “difficult,” “straightforward,” or “vague.”

Status is the key metric, though, and assessments of completion are marked visually as well as in text. The “In Progress” note shows up in green, for example; the “Not Kept” shows up in red. Political accountability, meet traffic-light universality.
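For illustration, here is a minimal sketch of what the tracker's underlying records and filtering might look like. The field names are invented, the sample record's details beyond what this post describes are placeholders, and only the "In Progress" (green) and "Not Kept" (red) colors come from the post; the rest are guesses.

```python
# Assumed traffic-light mapping: only "In Progress" -> green and
# "Not Kept" -> red are described above; the others are guesses.
STATUS_COLORS = {"In Progress": "green", "Kept": "green",
                 "In Trouble": "amber", "Not Kept": "red"}

pledges = [{
    "text": "A 24/7 urgent care service with a single number for every kind of care",
    "topic": "health",
    "party": "coalition",                 # placeholder; the tool records the proposing party
    "difficulty": "straightforward",      # "difficult" / "straightforward" / "vague"
    "status": "In Progress",
    "context": "A new 111 number for 24/7 care is due to be operational in April 2012.",
}]

def filter_pledges(records, **criteria):
    """Return records matching every given field, e.g. topic='health'."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

for p in filter_pledges(pledges, topic="health"):
    print(p["text"], "->", p["status"], STATUS_COLORS[p["status"]])
```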

The tool “needs to be slightly playful,” notes Simon Jeffery, The Guardian’s story producer, who oversaw the tool’s design and implementation. “You need to let the person sitting at the computer actually explore it and look at what they’re interested in — because there are over 400 things in there.”

The idea was inspired, Jeffery wrote in a blog post explaining the tool, by PolitiFact’s Obameter, which uses a similar framework for keeping the American president accountable for individual promises made. Jeffery came up with the idea of a British-government version after May’s general election, which not only gave the U.S.’s election a run for its money in terms of political drama, but also occasioned several interactive projects from the paper’s editorial staff. They wanted to keep that multimedia trajectory going. And when the cobbled-together new government’s manifesto for action — a list of promises agreed to and offered by the coalition — was released as a single document, the journalists had, essentially, an instant data set.

“And the idea just came from there,” Jeffery told me. “It seemed almost like a purpose-made opportunity.”

Jeffery began collecting the data for the pledge-tracker at the beginning of June, cutting and pasting from the joint manifesto’s PDF documents. Yes, manually. (“That was…not much fun.”) In a tool like this — which, like PolitiFact’s work, merges subjective and objective approaches to accountability — context is crucial. Which is why the pledge-tracking tool includes with each pledge a “Context” section: “some room to explain what this all means,” Jeffery says. That allows for a bit of gray (or, since we’re talking about The Guardian, grey) to seep, productively, into the normally black-and-white constraints that define so much data journalism. One health care-related pledge, for example — “a 24/7 urgent care service with a single number for every kind of care” — offers this helpful context: “The Department of Health draft structural reform plan says preparations began in July 2010 and a new 111 number for 24/7 care will be operational in April 2012.” It also offers, for more background, a link to the reform plan.

To aggregate that contextual information, Jeffery consulted with colleagues who, by virtue of daily reporting, are experts on immigration, the economy, and the other topics covered by the manifesto’s pledges. “So I was able to work with them and just say, ‘Do you know about this?’ ‘Do you know about that?’ and follow things up.”

The tool isn’t perfect, Jeffery notes; it’s intended to be “an ongoing thing.” The idea is to provide accountability that is, in particular, dynamic: a mechanism that allows journalists and everyone else to “go back to it on a weekly or fortnightly basis and look at what has been done — and what hasn’t been done.” Metrics may change, he says, as the political situation does. In October, for example, the coalition government will conclude an external spending review that will help crystallize its upcoming budget, and thus political, priorities — a perfect occasion for tracker-based follow-up stories. But the goal for the moment is to gather feedback and work out bugs, “rather than having a perfectly finished product,” Jeffery says. “So it’s a living thing.”

12:00

Truth-o-Meter, 2G: Andrew Lih wants to wikify fact-checking

Epic fact: We are living at the dawn of the Information Age. Less-epic fact: Our historical moment is engendering doubt. The more bits of information we have out there, and the more sources we have providing them, the more wary we need to be of their accuracy. So we’ve created a host of media platforms dedicated to fact-checking: We have PolitiFact over here, FactCheck over there, Meet the Facts over there, @TBDFactsMachine over there, Voice of San Diego’s Fact Check blog over there, NewsTrust’s crowdsourced Truthsquad over there (and, even farther afield, source verifiers like Sunlight’s new Poligraft platform)…each with a different scope of interest, and each with different methods and metrics of verification. (Compare, for example, PolitiFact’s Truth-o-Meter to FactCheck.org’s narrative assessments of veracity.) The efforts are admirable; they’re also, however, atomized.

“The problem, if you look at what’s being done right now, is often a lack of completeness,” says Andrew Lih, a visiting professor of new media at USC’s Annenberg School of Communication & Journalism. The disparate outlets have to be selective about the scope of their fact-checking; they simply don’t have the manpower to be comprehensive about verifying all the claims — political, economic, medical, sociological — pinging like pinballs around the Internet.

But what if the current fact-checking operations could be greater than the sum of their parts? What if there were a centralized spot where consumers of news could obtain — and offer — verification?

Enter WikiFactCheck, the new project that aims to do exactly what its name suggests: bring the sensibility — and the scope — of the wiki to the systemic challenges of fact-checking. The platform’s been in the works for about two years now, says Lih (who, in addition to creating the wiki, is a veteran Wikipedian and the author of The Wikipedia Revolution). He dreamed it up while working on WikiNews; though that project never reached the scope of its sister site — largely because its premise of discrete news narratives isn’t ideal for the wiki platform — a news-focused wiki that could succeed, Lih thought, was one that focused on the core unit of news: facts themselves. When Jay Rosen drew attention to the need for systematic fact-checking of news content — most notably, through his campaign to fact-check the infamously info-miscuous Sunday shows — it became even more clear, Lih told me: This could be a job for a wiki.

WikiFactCheck wants not only to crowdsource, but also to centralize, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche. Wikipedia, he points out, is ultimately “a great aggregator”; and much of WikiFactCheck’s value could similarly be, he says, to catalog the results of other fact-checking outfits “and just be a meta-site.” Think Rotten Tomatoes — simple, summative, unapologetically derivative — for truth-claims.
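As a sketch of that meta-site idea, one could normalize each outlet's rating vocabulary onto a common scale and summarize. The outlets, ratings, and normalization values below are illustrative only (FactCheck.org, for instance, issues narrative assessments rather than scaled ratings), and none of this is WikiFactCheck's actual design.

```python
# Invented normalization of rating vocabularies onto [0, 1].
NORMALIZE = {"True": 1.0, "Mostly True": 0.75, "Half True": 0.5,
             "Mostly False": 0.25, "False": 0.0, "Pants on Fire": 0.0}

def aggregate(verdicts):
    """verdicts: list of (outlet, rating) pairs for a single claim.
    Returns a Rotten Tomatoes-style summary score, or None."""
    scores = [NORMALIZE[rating] for _, rating in verdicts if rating in NORMALIZE]
    return sum(scores) / len(scores) if scores else None

claim_verdicts = [("PolitiFact", "Pants on Fire"),
                  ("FactCheck.org", "False"),      # illustrative rating only
                  ("Truthsquad", "False")]
print(aggregate(claim_verdicts))   # 0.0 -- the meta-site's summary for this claim
```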

If the grandeur implicit in that proposition sounds familiar, it’s because the idea for WikiFactCheck is pretty much identical to the one that guided the development of Wikipedia: to become a centralized repository of information shaped by, and limited only by the commitment of, the crowd. A place where the veracity of information is arbitrated discursively — among people who are motivated by the desire for veracity itself.

Which is idealistic, yes — unicornslollipopsrainbows idealistic, even — but, then again, so is Wikipedia. “In 2000, before Wikipedia started, the idea that you would have an online encyclopedia that was updated within seconds of something happening was preposterous,” Lih points out. Today, though, not only do we take Wikipedia for granted; we become indignant in those rare cases when entries fail to offer us up-to-the-minute updates on our topics of interest. Thus, the premise of WikiFactCheck: What’s to say that Wikipedia contributors’ famous commitment — of time, of enthusiasm, of Shirkian surplus — can’t be applied to verifying information as well as aggregating it?

What such a platform would look like, once populated, remains to be seen; the beauty of a wiki being its flexibility, users will make of the site what they will, with the crowd determining which claims/episodes/topics deserve to be checked in the first place. Ideally, “an experienced community of folks who are used to cataloging and tracking these kinds of things” — seasoned Wikipedians — will guide that process, Lih says. As he imagines it, though, the ideal structure of the site would filter truth-claims by episode, or “module” — one episode of “Meet the Press,” say, or one political campaign ad. “I think that’s pretty much what you’d want: one page per media item,” Lih says. “Whether that item is one show or one ad, we’ll have to figure out.”

Another thing to figure out will be how a wiki that will likely rely on publishing comprehensive documents — transcripts, articles, etc. — to verify their contents will dance around copyright issues. But “if there ever were a slam-dunk case for meeting all the attributes of the Fair Use Doctrine,” Lih says, “this is it.” Fact-checking is criticism and comment; it has an educational component (particularly if it operates under the auspices of USC Annenberg); and it doesn’t detract from content’s commercial value. In fact: “I can’t imagine another project that could be so strong in meeting the standards for fair use,” Lih says.

And what about the most common concern when it comes to informational wikis — that people with less-than-noble agendas will try to game the system and codify baseless versions of the truth? “In the Wikipedia universe, what has shaken out is that a lot of those folks who are not interested in the truth wind up going somewhere else,” Lih points out. (See: Conservapedia.) “They find that the community that is concerned with neutrality and with getting verifiable information into Wikipedia is going to dominate.” Majority rules — in a good way.

At the same time, though, “I welcome die-hard Fox viewers,” Lih says. “I welcome people who think Accuracy in Media is the last word. Because if you can cite from a reliable source — from a congressional record, from the Census Bureau, from the Geological Survey, from CIA Factbook, from something — then by all means, I don’t really care what your political stripes are. Because the facts should win out in the end.”

Photo of Andrew Lih by Kat Walsh, used under a GNU Free Documentation License.

March 23 2010

16:00

Poynter’s hiring. What will their writer/curator be up to?

For the past few days, a job posting has been making its way around the web: the Poynter Institute, it announces, is looking to hire a writer/curator for its Sense-Making Project. Which is a job title that — out of context, anyway — doesn’t itself seem to make much sense (A what for the what?). But it’s also one that’s intriguing. Writing? Curating? Sense-making? Can’t argue with that.

I asked Kelly McBride, Poynter’s Ethics Group Leader and lead faculty for the program, about the project and its new position. The Sense-Making Project itself, she told me, is a pilot effort funded by Ford and focused on the intersection between journalism and citizen engagement — and closely related to news literacy, the movement that aims to educate citizens to be savvy consumers of news. “We started with the central question of how citizens will make sense of the universe,” McBride says. And the curator position is in part predicated on one clear answer to that question: “They’re going to need some help.”

One of the project’s aims, McBride says, is to cater to the expanding group Poynter refers to as “the fifth estate”: the broad network of people, journalists both professional and non-, who are now participating in the newsgathering process. The project wants to “create a place where people who are motivated to develop new skills about consuming information can go to do that, to be in conversation and to share their ideas,” she says. And the person who takes on the writer/curator job will guide and, yes, curate that conversation.

In some ways, the position is one that requires the skills of (pardon a slight oxymoron) the classic blogger: “gathering and writing and reassembling and helping us look through all of this information that’s out there, putting a magnifying glass on certain parts of the virtual world and saying, ‘Here’s something to look at.’” But the new role will combine curation with a slightly more academic approach: one that considers the contextual aspects of information. The writer/curator will be taking, if all goes according to plan, an archaeological — and in some senses anthropological — approach to news and the social capital it engenders: a kind of Putnam-meets-Wasik-meets-Foucault-style sensibility toward social knowledge. “The whole idea of the project is, ‘What if you had someone whose only job it was, every day, to be looking at information?’” McBride says. “And this person gets the new world and the old world, and isn’t writing to an audience of professional journalists, and is writing to Joe Citizen, saying, ‘Hey, this is kind of interesting.’”

It’s meme-tracking, essentially — tracing the movement of ideas though our social spaces — except with information, rather than notions, as the core proposition. “If you think about what PolitiFact does for political facts,” McBride says, “we’re thinking similar to that, only for the rest of the universe.”

It’s an intriguing idea — and one that suggests a subtle shift in the atomic structure of journalism itself: from the article as the core unit of news, and even from the blog post as that unit, to something more discrete and, yet, tantalizingly ephemeral: the fact itself, the assertion itself, the piece of information itself. Propositions that are solid and fluid at the same time. “What we’ve found,” McBride says, “is that when you start taking a single piece of information, you can actually look at the history — where it came from, who linked to what, who transformed it, and how it got to you. And then you can look at how it went out from there.” The analysis might require “diagnosing language,” she says, or “asking about the motivation of the person who delivered the information.” It also might require “asking about the setting in which the information was delivered, because things on Facebook are different from things on Twitter.”

Either way, the analysis will focus on a goal that’s quickly gaining traction in journalism: the provision of context as a means of adding value to information. With the project and its expansion, “I hope to create a body of work that reveals trends and pressure points that have yet to be revealed,” McBride says. Because “it’s in those trends that you start to say: ‘Oh, okay, here’s a tool people need.’”

December 11 2009

09:30

Politico: US local papers to syndicate fact-checking site PolitiFact

PolitiFact, the fact-checking website developed by US paper the St Petersburg Times and used during last year’s US presidential campaigns, will reportedly announce a major syndication deal with local newspapers.

The Pulitzer Prize-winning site uses reporters and editors from the Times to fact-check statements made by senior politicians, lobbyists and interest groups in the US and rank them on a Truth-O-Meter. Barack Obama’s campaign promises are also being measured.

Full story at this link…
