
August 16 2010

12:00

Truth-o-Meter, 2G: Andrew Lih wants to wikify fact-checking

Epic fact: We are living at the dawn of the Information Age. Less-epic fact: Our historical moment is engendering doubt. The more bits of information we have out there, and the more sources we have providing them, the more wary we need to be of their accuracy. So we’ve created a host of media platforms dedicated to fact-checking: We have PolitiFact over here, FactCheck over there, Meet the Facts over there, @TBDFactsMachine over there, Voice of San Diego’s Fact Check blog over there, NewsTrust’s crowdsourced Truthsquad over there (and, even farther afield, source verifiers like Sunlight’s new Poligraft platform)…each with a different scope of interest, and each with different methods and metrics of verification. (Compare, for example, PolitiFact’s Truth-o-Meter to FactCheck.org’s narrative assessments of veracity.) The efforts are admirable; they’re also, however, atomized.

“The problem, if you look at what’s being done right now, is often a lack of completeness,” says Andrew Lih, a visiting professor of new media at USC’s Annenberg School for Communication & Journalism. The disparate outlets have to be selective about the scope of their fact-checking; they simply don’t have the manpower to be comprehensive about verifying all the claims — political, economic, medical, sociological — pinging like pinballs around the Internet.

But what if the current fact-checking operations could be greater than the sum of their parts? What if there were a centralized spot where consumers of news could obtain — and offer — verification?

Enter WikiFactCheck, the new project that aims to do exactly what its name suggests: bring the sensibility — and the scope — of the wiki to the systemic challenges of fact-checking. The platform’s been in the works for about two years now, says Lih (who, in addition to creating the wiki, is a veteran Wikipedian and the author of The Wikipedia Revolution). He dreamed it up while working on Wikinews; though that project never reached the scope of its sister site — largely because its premise of discrete news narratives isn’t ideal for the wiki platform — a news-focused wiki that could succeed, Lih thought, would be one that focused on the core unit of news: facts themselves. When Jay Rosen brought added attention to the need for systematic fact-checking of news content — most notably, through his campaign to fact-check the infamously info-miscuous Sunday shows — it became even clearer, Lih told me: This could be a job for a wiki.

WikiFactCheck wants not only to crowdsource, but also to centralize, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche. Wikipedia, he points out, is ultimately “a great aggregator”; and much of WikiFactCheck’s value could similarly be, he says, to catalog the results of other fact-checking outfits “and just be a meta-site.” Think Rotten Tomatoes — simple, summative, unapologetically derivative — for truth-claims.

If the grandeur implicit in that proposition sounds familiar, it’s because the idea for WikiFactCheck is pretty much identical to the one that guided the development of Wikipedia: to become a centralized repository of information shaped by, and limited only by the commitment of, the crowd. A place where the veracity of information is arbitrated discursively — among people who are motivated by the desire for veracity itself.

Which is idealistic, yes — unicornslollipopsrainbows idealistic, even — but, then again, so is Wikipedia. “In 2000, before Wikipedia started, the idea that you would have an online encyclopedia that was updated within seconds of something happening was preposterous,” Lih points out. Today, though, not only do we take Wikipedia for granted; we become indignant in those rare cases when entries fail to offer us up-to-the-minute updates on our topics of interest. Thus, the premise of WikiFactCheck: What’s to say that Wikipedia contributors’ famous commitment — of time, of enthusiasm, of Shirkian surplus — can’t be applied to verifying information as well as aggregating it?

What such a platform would look like, once populated, remains to be seen; the beauty of a wiki being its flexibility, users will make of the site what they will, with the crowd determining which claims/episodes/topics deserve to be checked in the first place. Ideally, “an experienced community of folks who are used to cataloging and tracking these kinds of things” — seasoned Wikipedians — will guide that process, Lih says. As he imagines it, though, the ideal structure of the site would filter truth-claims by episode, or “module” — one episode of “Meet the Press,” say, or one political campaign ad. “I think that’s pretty much what you’d want: one page per media item,” Lih says. “Whether that item is one show or one ad, we’ll have to figure out.”

Another thing to figure out will be how a wiki that will likely rely on publishing full documents — transcripts, articles, etc. — in order to verify their contents can dance around copyright issues. But “if there ever were a slam-dunk case for meeting all the attributes of the Fair Use Doctrine,” Lih says, “this is it.” Fact-checking is criticism and comment; it has an educational component (particularly if it operates under the auspices of USC Annenberg); and it doesn’t detract from content’s commercial value. In fact: “I can’t imagine another project that could be so strong in meeting the standards for fair use,” Lih says.

And what about the most common concern when it comes to informational wikis — that people with less-than-noble agendas will try to game the system and codify baseless versions of the truth? “In the Wikipedia universe, what has shaken out is that a lot of those folks who are not interested in the truth wind up going somewhere else,” Lih points out. (See: Conservapedia.) “They find that the community that is concerned with neutrality and with getting verifiable information into Wikipedia is going to dominate.” Majority rules — in a good way.

At the same time, though, “I welcome die-hard Fox viewers,” Lih says. “I welcome people who think Accuracy in Media is the last word. Because if you can cite from a reliable source — from a congressional record, from the Census Bureau, from the Geological Survey, from CIA Factbook, from something — then by all means, I don’t really care what your political stripes are. Because the facts should win out in the end.”

Photo of Andrew Lih by Kat Walsh, used under a GNU Free Documentation License.

August 10 2010

14:00

All Our Ideas facilitates crowdsourcing — of opinions

What do readers want from the news?

It’s a hard question to answer, and not only because we don’t often know what we like until we find ourselves liking it. To figure it out, news outlets have traffic patterns on the one hand, and, if they choose, user surveys on the other; each is effective and unsatisfying in its own way. But what about the middle ground — an analytic approach to creative user feedback?

Meet All Our Ideas, the “suggestion box for the digital age”: a platform designed to crowdsource concepts and opinions rather than facts alone. The platform was designed by a team at Princeton under the leadership of sociology professor Matt Salganik — initially, to create a web-native platform for sociological research. (The platform is funded in part by Google’s Research Awards program.) But its potential uses extend far beyond sociology — and, for that matter, far beyond academia. “The idea is to provide a direct idea-sharing platform where people can be heard in their own voices,” Salganik told me; for news outlets trying to figure out the best ways to harness the wisdom and creativity and affection of their users, a platform that mingles commenting and crowdsourcing could be a welcome combination.

The platform’s user interface is deceptively simple: at each iteration, it asks you to choose between two options, as you would at the optometrist’s office: “Is A better…or B?” (In fact, Salganik told me, All Our Ideas’ structure was inspired by the kitten-cuteness-comparison site Kittenwar, which aims to find images of the “winningest” kittens (and — oof — the “losingest”) through a similar A/B selection framework.) But the platform also gives you the option — and here’s the “crowdsourcing” part — of adding your own idea into the mix. Not as a narrative addition — the open-ended “Additional Comments” box of traditional surveys — but as a contribution that will be added into the survey’s marketplace and voted up or down by the other survey-takers. (The open-ended responses are limited in length — to, natch, 140 characters — thus preventing modern-day Montaignes from gumming up the works.) You can vote on as many pairings — or as few — as you wish, and contribute as many or as few ideas as you wish.
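To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python) of how pairwise votes like these could be aggregated into a ranking, scoring each idea by the share of matchups it has won. The idea names and vote data are invented, and the approach illustrates the general technique rather than All Our Ideas’ actual scoring algorithm.

# Hypothetical sketch: turning pairwise "A vs. B" votes into a ranking.
# Not All Our Ideas' real algorithm; the ideas and votes are invented.
from collections import defaultdict
import random

votes = [
    ("later library hours", "more bike racks"),   # (winner, loser)
    ("more bike racks", "cheaper coffee"),
    ("later library hours", "cheaper coffee"),
    ("later library hours", "more bike racks"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Score each idea by the share of its matchups it has won so far.
scores = {idea: wins[idea] / appearances[idea] for idea in appearances}
for idea, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{idea}: won {score:.0%} of its matchups")

# To show a voter the next pairing, one simple approach is to pick two
# distinct ideas at random (in practice, favoring less-seen ideas).
next_pair = random.sample(list(appearances), 2)
print("Next matchup:", next_pair)

In a real deployment the scoring would need to account for how often each idea has been shown; new user-submitted ideas start with few matchups, which is one reason a more careful statistical estimate is preferable to a raw win rate.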

That contribution aspect is a small, but significant, shift. (Think of All Our Ideas, in fact, like Google Moderator — with a cleaner interface and, more significantly, hidden results that prevent users from being influenced by others’ feedback.) Because, it should be said: in general, from the user perspective, traditional surveys suck. With their pre-populated, multiple-choice framework, with those “Additional Comments” boxes (whose contents, one assumes, won’t be counted as “data” proper and so likely won’t be counted at all), they tend to preclude creativity on the part of the people taking them. They fall victim to a paradox: the larger the population of survey-takers — and thus, ostensibly, the more rigorous the data they can provide — the less incentive individual users have to take them. Or to take them seriously.

But All Our Ideas, with the invitation to creativity implicit in its “Add your own idea” button, adjusts that dynamic. The point is to inspire participation — meaningful participation — via a simple interface with practically no barriers to entry. The whole thing was designed, Salganik says, “to be very light and easy.”

Here’s what it looks like in practice: you can test it out for yourself using All Our Ideas’ sample interface, a survey issued by Princeton’s student government asking undergrads what improvements it should make to campus life.

The ease of use extends to survey-issuers, as well: All Our Ideas is available for sites to use via an API and, for the less tech-savvy or more time-pressed, an embeddable widget. (Which is also to say: it’s free.) Surveyors can tailor the platform to the particular survey they want to run, seeding it with initial ideas and deciding whether the survey will be entirely algorithmic or human-moderated. For the latter option, each surveyor designates a moderator, charged with approving user-generated ideas before they become part of a survey’s idea marketplace; for both options, users themselves can flag ideas as inappropriate.
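For a sense of how the human-moderated option might hang together, here is a small hypothetical sketch of that workflow: seeded ideas go straight into the voting pool, user submissions wait for a moderator’s approval, and live ideas can be flagged. The class and method names are invented for illustration; they are not All Our Ideas’ actual API, just one way the flow described above could be modeled.

# Hypothetical sketch of the moderation flow described above. These
# names are invented for illustration, not All Our Ideas' real API.
from dataclasses import dataclass, field

@dataclass
class Survey:
    question: str
    max_length: int = 140                 # the character cap noted earlier
    moderated: bool = True
    live_ideas: list = field(default_factory=list)
    pending: list = field(default_factory=list)
    flagged: list = field(default_factory=list)

    def seed(self, ideas):
        """Surveyor-provided starting ideas go straight into the marketplace."""
        self.live_ideas.extend(ideas)

    def submit(self, idea: str):
        """User submission: queued for review if the survey is moderated."""
        if len(idea) > self.max_length:
            raise ValueError("idea exceeds the length limit")
        (self.pending if self.moderated else self.live_ideas).append(idea)

    def approve(self, idea: str):
        """Moderator action: move a pending idea into the voting pool."""
        self.pending.remove(idea)
        self.live_ideas.append(idea)

    def flag(self, idea: str):
        """Voter action: mark a live idea as inappropriate for review."""
        if idea in self.live_ideas:
            self.flagged.append(idea)

survey = Survey("Which improvement matters most to you?")
survey.seed(["later library hours", "more bike racks"])
survey.submit("cheaper coffee")
survey.approve("cheaper coffee")
print(survey.live_ideas)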

So far, the platform has been used by organizations like Catholic Relief Services in Baltimore, which surveyed more than 4,000 employees — based out of 150 offices worldwide and speaking several different languages — about what makes an ideal relief worker. Columbia Law School’s student government used it to find the best idea for improving campus life (that survey got 15,000 votes, Salganik told me, with 200 new ideas uploaded in the first 48 hours). And the Princeton student government survey got more than 2,000 students to contribute 40,000 votes and 100 new ideas in the space of a few weeks.

A new way to survey

All Our Ideas, Salganik says, “deals with a fundamental problem that exists in the social sciences in terms of how we aggregate information.” Traditionally, academics can gather feedback either using pre-populated surveys, which are good at quantifying huge swaths of information, but also limited in the scope of the data they can gather…or, on the other hand, using focus groups and interviews, which are great for gathering open, unstructured information — information that’s “unfiltered by any pre-existing biases that you might have,” Salganik points out — but that are also difficult to analyze. Not to mention inefficient and, often, expensive.

And from the surveyors’ perspective, as well, surveys can be a blunt instrument: their general inability to quantify narrative feedback has forced survey-writers to rely on pre-determined questions. Which is to say, on pre-determined answers. “I’ve actually designed some surveys before, and had the suspicion that I’d left something out,” Salganik says. It’s a guessing game — educated guessing, yes, but guessing all the same. “You only get out what you put in,” he points out. And you don’t know what you don’t know.

But “one of the patterns we see consistently is that ideas that are uploaded by users sometimes score better than the best ideas that started it off,” Salganik says. “Because no matter how hard you try, there are just ideas out there that you don’t know.” But other people do.

Conceptual crowdsourcing

That utility easily translates to news organizations, which might use All Our Ideas to crowdsource thoughts on anything from news articles to opinion pieces to particular areas of editorial focus. “Let’s say you’re a newspaper,” Salganik says. “You could have one of these [surveys] set up for each neighborhood in a city. You could have twenty of them.”

The platform could also be used to conduct internal surveys — particularly useful at larger organizations, where the lower-level reporters, editors, and producers who man the trenches of daily journalism might have the most meaningful ideas about organizational priorities…but where those workers’ voices might also have the least chance of being heard. News outlets both mammoth and slightly less so have been trying to rectify that asymmetry; an org-wide survey, where every contribution exists on equal footing with every other, could bring structure to the ideal of an idea marketplace that is — yes — truly democratic.

But perhaps the most significant use of the platform could be broad-scale and systemic: surveying users about, yes, what they want. (See, for example, ProPublica’s employment of an editorially focused reader survey a couple of months ago.) Pose one basic question — broad (“What kinds of stories are you most interested in knowing about?”) or narrow (“Whom should we bring on as our next columnist?”) — and see what results. That’s a way of giving more agency to users than traditional surveys have; it’s also a way of letting them know that you value their opinions in the first place.
