
October 22 2010

11:00

Help Me Investigate – anatomy of an investigation

Earlier this year Andy Brightwell and I conducted research into one of the successful investigations on my crowdsourcing platform Help Me Investigate. I wanted to know what had made the investigation successful – and how (or if) we might replicate those conditions for other investigations.

I presented the findings at the Journalism’s Next Top Model conference in June. This post sums up those findings.

The investigation in question was ‘What do you know about The London Weekly?’ – an investigation into a free newspaper that was about to launch in London (or so those behind it claimed – part of the investigation was to establish whether it was a hoax).

The people behind the paper had made a number of claims about planned circulation, staffing and investment that most of the media reported uncritically. Martin Stabe, James Ball and Judith Townend, however, wanted to dig deeper. So, after an exchange on Twitter, Judith logged onto Help Me Investigate and started an investigation.

A month later members of the investigation had unearthed a wealth of detail about the people behind The London Weekly and the facts behind their claims. Some of the information was reported in MediaWeek and The Media Guardian podcast Media Talk; some formed the basis for posts on James Ball’s blog, Journalism.co.uk and the Online Journalism Blog. Some has, for legal reasons, remained unpublished.

A note on methodology

Andrew conducted a number of semi-structured interviews with contributors to the investigation. The sample was randomly selected but representative of the mix of contributors, who were categorised as ‘alpha’ contributors (more than six contributions), ‘active’ contributors (two to six contributions) or ‘lurkers’ (whose only contribution was to join the investigation). These interviews formed the qualitative basis for the research.

Complementing this data was quantitative information about users of the site as a whole. This was taken from two user surveys – one when the site was three months old and another at 12 months – and from analysis of the investigation’s analytics (the number and types of actions, their frequency, and so on).

What are the characteristics of a crowdsourced investigation?

One of the first things I wanted to analyse was whether the investigation data matched up to patterns observed elsewhere in crowdsourcing and online activity. An analysis of the number of actions by each user, for example, showed a clear ‘power law’ distribution, where a minority of users accounted for the majority of activity.

This power law, however, did not translate into a breakdown approaching the 90-9-1 ‘law of participation inequality’ observed by Jakob Nielsen. Instead, the balance between those who made a couple of contributions (normally the 9% of the 90-9-1 split) and those who made none (the 90%) was roughly equal. This may have been because the design of the site meant it was not possible to ‘lurk’ without already being a member of the site, or being invited and signing up.

Adding in data on those looking at the investigation page who were not members may have shed further light on this.
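As an illustration of this kind of breakdown, here is a minimal sketch – not the study’s actual analysis, and using made-up contribution counts – of how members might be bucketed into the ‘alpha’, ‘active’ and ‘lurker’ categories used in the research and compared against the 90-9-1 split:

```python
# Minimal illustrative sketch (not the study's code): bucket members by the
# categories used in the research – 'alpha' (>6 contributions), 'active'
# (2-6) and 'lurker' (joined but contributed nothing else) – and print the
# shares for comparison with Nielsen's 90-9-1 split.
from collections import Counter

# Hypothetical contribution counts, one entry per member of the investigation.
contributions = [0, 0, 1, 0, 2, 3, 0, 7, 12, 0, 1, 4, 0, 25, 0, 2]

def bucket(n: int) -> str:
    if n > 6:
        return "alpha"
    if n >= 2:
        return "active"
    return "lurker"

shares = Counter(bucket(n) for n in contributions)
total = len(contributions)
for label in ("lurker", "active", "alpha"):
    print(f"{label:>6}: {shares[label]:2d} members ({100 * shares[label] / total:.0f}%)")
```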

What made the crowdsourcing successful?

Clearly, it is worth making a distinction between what made the investigation successful as a series of outcomes, and what made crowdsourcing successful as a method.

What made the community gather, and continue to return? One hypothesis was that the nature of the investigation provided a natural cue to interested parties: The London Weekly was published on Fridays and Saturdays, and there was a build-up of expectation to see whether a new issue would indeed appear.

I was curious to see if the investigation had any ‘rhythm’. Would there be peaks of interest correlating to the expected publication?

The data threw up something else entirely. There was indeed a rhythm, but Wednesday was the most popular day for contributions to the investigation.

Why? Well, it turned out that one of the investigation’s ‘alpha’ contributors – James Ball – set himself a task to blog about the investigation every week. His blog posts appeared on a Wednesday.

That this turned out to be a significant factor in driving activity tells us one important lesson: talking publicly and regularly about the investigation’s progress is key.

This data was backed up by the interviews. One respondent mentioned the “weekly cue” explicitly.
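To show how such a weekly rhythm might be surfaced from the analytics, here is a minimal sketch – using invented timestamps rather than the investigation’s actual data – counting contributions by day of the week:

```python
# Minimal illustrative sketch (invented data): count contributions per weekday
# to look for the kind of weekly rhythm described above.
from collections import Counter
from datetime import datetime

# Hypothetical contribution timestamps.
timestamps = [
    "2010-02-03 10:15", "2010-02-03 14:02", "2010-02-05 09:30",
    "2010-02-10 11:45", "2010-02-10 16:20", "2010-02-13 08:05",
    "2010-02-17 12:10", "2010-02-17 15:40", "2010-02-19 10:55",
]

weekday_counts = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%A") for ts in timestamps
)

for day, count in weekday_counts.most_common():
    print(f"{day}: {count} contributions")
```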

More broadly, it seems that the site helped keep track of a number of discussions taking place around the web. The investigation was born from a discussion on Twitter, and further conversations there resulted in more people signing up, as did comment threads and other online discussion. This fitted the way the site was designed culturally – to be part of a network rather than asking people to do everything on-site.

But the planned technical connectivity of the site with the rest of the web (being able to pull in related tweets or bookmarks, for example) had been dropped during development as we focused on core functionality. This was not a bad thing, I should emphasise, as it prevented us becoming distracted by ‘bells and whistles’ and allowed us to iterate in reaction to user activity rather than our own assumptions of what users would want. This research captures that user activity and informs future development accordingly.

The presence of ‘alpha’ users like James and Judith was crucial in driving activity on the site – a pattern observed in other successful investigations. They picked up the threads contributed by others and not only wove them together into a coherent narrative that allowed others to enter more easily, but also set the new challenges that provided ways for people to contribute. The fact that they brought with them a strong social network presence is probably also a factor – but one that needs further research.

The site has always been designed to emphasise the role of the user in driving investigations. The agenda is not owned by a central publisher, but by the person posing the question – and therefore the responsibility is theirs as well. In this sense it draws on Jenkins’ argument that “Consumers will be more powerful within convergence culture – but only if they recognise and use that power.” This cultural hurdle may be the biggest one that the site has to address.

Indeed, the site is also designed to offer “Failure for free”, allowing users to learn what works and what doesn’t, and begin to take on that responsibility where required.

The investigation also suited crowdsourcing well, as it could be broken down into separate parts and paths – most of which could be completed online: “Where does this claim come from?”, “Can you find out about this person?”, “What can you discover about this company?”. One person, for example, used Google Street View to establish that the registered address of the company was a postbox.

Other investigations that are less easily broken down may be less suitable for crowdsourcing – or require more effort to ensure success.

A regular supply of updates provided the investigation with momentum. The accumulation of discoveries provided valuable feedback to users, who then returned for more. In his book on Wikipedia, Andrew Lih (2009, p. 82) notes a similar pattern – ‘stigmergy’ – that is observed in the natural world: “The situation in which the product of previous work, rather than direct communication [induces and directs] additional labour”. An investigation without these ‘small pieces, loosely joined’ might not suit crowdsourcing so well.

One problem, however, was that those paths led to a range of potential avenues of enquiry. In the end, although the core questions were answered (was the publication a hoax, and what was the basis for its claims?), the investigation raised many more questions.

These remained largely unanswered once the majority of users felt that their questions had been answered. Like any investigation, there came a point at which those involved had to make a judgement about whether they wished to invest any more time in it.

Finally, the investigation benefited from a diverse group of contributors who contributed specialist knowledge or access. Some physically visited stations where the newspaper was claiming distribution to see how many copies were being handed out. Others used advanced search techniques to track down details on the people involved and the claims being made, or to make contact with people who had had previous experiences with those behind the newspaper.

The visibility of the investigation online led to more than one ‘whistleblower’ approach providing inside information.

What can be done to make it better?

Looking at the reasons that users of the site as a whole gave for not contributing to an investigation, the majority attributed this to ‘not having enough time’. Although at least one interviewee, in contrast, highlighted how simple and easy contributing was, contributing needs to be made as easy as possible in order to lower the perceived effort and time required.

Notably, the second biggest reason for not contributing was a ‘lack of personal connection with an investigation’, demonstrating the importance of the individual and social dimension of crowdsourcing. Likewise, a ‘personal interest in the issue’ was the single largest factor in someone contributing. A ‘Why should I contribute?’ feature on each investigation may be worth considering.

Others mentioned the social dimension of crowdsourcing – the “sense of being involved in something together” – what Jenkins (2006) would refer to as “consumption as a networked practice”.

This motivation is also identified by Yochai Benkler in his work on networks. Looking at non-financial reasons why people contribute their time to online projects, he refers to “socio-psychological reward”. He also identifies the importance of “hedonic personal gratification”. In other words, fun. (Interestingly, these match two of the three traditional reasons for consuming news: because it is socially valuable, and because it is entertaining. The third – because it is financially valuable – neatly matches the third reason for working).

While it is easy to talk about “Failure for free”, more could be done to identify and support failing investigations. We are currently developing a monthly update feature that would remind users of recent activity and – more importantly – the lack of activity. The investigators in a group might be asked whether they wish to terminate the investigation in those cases, emphasising their role in its progress and helping ‘clean up’ the investigations listed on the first page of the site.

That said, there is also a danger in interfering too much to reduce failure. This is a natural instinct, and I have to continually remind myself that I started the project expecting 95-99% of investigations to ‘fail’ through a lack of motivation on the part of the instigator. That was part of the design: it was the 1-5% of questions that gained traction that would be the focus of the site. (This is how Meetup works, for example – most groups ‘fail’, but there is no way to predict which ones. As it happens, the ‘success’ rate of investigations has been much higher than expected.) One analogy is a news conference where members throw out ideas – only a few are chosen for an investment of time and energy; the rest ‘fail’.

In the end, it is the management of that tension between interfering to ensure everything succeeds – and so removing the incentive for users to be self-motivated – and not interfering at all – leaving users feeling unsupported and unmotivated – that is likely to be the key to a successful crowdsourcing project. More than a year into the project, this is still a skill that I am learning.

August 16 2010

16:34

Nieman: Exploring a niche for non-niche fact-checking

There are a number of fact-checking platforms online, including PolitiFact, FactCheck and Meet the Facts. “The efforts are admirable. They’re also, however, atomised,” writes Nieman Journalism Lab’s Megan Garber.

Now Andrew Lih, associate professor of new media at USC’s Annenberg School of Communication & Journalism and author of The Wikipedia Revolution, has plans to bring the scope of the wiki format to the world of fact-checking with WikiFactCheck.

WikiFactCheck wants not only to crowdsource, but also to centralise, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche.

Full story at this link…



12:00

Truth-o-Meter, 2G: Andrew Lih wants to wikify fact-checking

Epic fact: We are living at the dawn of the Information Age. Less-epic fact: Our historical moment is engendering doubt. The more bits of information we have out there, and the more sources we have providing them, the more wary we need to be of their accuracy. So we’ve created a host of media platforms dedicated to fact-checking: We have PolitiFact over here, FactCheck over there, Meet the Facts over there, @TBDFactsMachine over there, Voice of San Diego’s Fact Check blog over there, NewsTrust’s crowdsourced Truthsquad over there (and, even farther afield, source verifiers like Sunlight’s new Poligraft platform)…each with a different scope of interest, and each with different methods and metrics of verification. (Compare, for example, PolitiFact’s Truth-o-Meter to FactCheck.org’s narrative assessments of veracity.) The efforts are admirable; they’re also, however, atomized.

“The problem, if you look at what’s being done right now, is often a lack of completeness,” says Andrew Lih, a visiting professor of new media at USC’s Annenberg School of Communication & Journalism. The disparate outlets have to be selective about the scope of their fact-checking; they simply don’t have the manpower to be comprehensive about verifying all the claims — political, economic, medical, sociological — pinging like pinballs around the Internet.

But what if the current fact-checking operations could be greater than the sum of their parts? What if there were a centralized spot where consumers of news could obtain — and offer — verification?

Enter WikiFactCheck, the new project that aims to do exactly what its name suggests: bring the sensibility — and the scope — of the wiki to the systemic challenges of fact-checking. The platform’s been in the works for about two years now, says Lih (who, in addition to creating the wiki, is a veteran Wikipedian and the author of The Wikipedia Revolution). He dreamed it up while working on WikiNews; though that project never reached the scope of its sister site — largely because its premise of discrete news narratives isn’t ideal for the wiki platform — a news-focused wiki that could succeed, Lih thought, was one that focused on the core unit of news: facts themselves. When Jay Rosen added attention to the need for systematic fact-checking of news content — most notably, through his campaign to fact-check the infamously info-miscuous Sunday shows — it became even more clear, Lih told me: This could be a job for a wiki.

WikiFactCheck wants not only to crowdsource, but also to centralize, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche. Wikipedia, he points out, is ultimately “a great aggregator”; and much of WikiFactCheck’s value could similarly be, he says, to catalog the results of other fact-checking outfits “and just be a meta-site.” Think Rotten Tomatoes — simple, summative, unapologetically derivative — for truth-claims.

If the grandeur implicit in that proposition sounds familiar, it’s because the idea for WikiFactCheck is pretty much identical to the one that guided the development of Wikipedia: to become a centralized repository of information shaped by, and limited only by the commitment of, the crowd. A place where the veracity of information is arbitrated discursively — among people who are motivated by the desire for veracity itself.

Which is idealistic, yes — unicornslollipopsrainbows idealistic, even — but, then again, so is Wikipedia. “In 2000, before Wikipedia started, the idea that you would have an online encyclopedia that was updated within seconds of something happening was preposterous,” Lih points out. Today, though, not only do we take Wikipedia for granted; we become indignant in those rare cases when entries fail to offer us up-to-the-minute updates on our topics of interest. Thus, the premise of WikiFactCheck: What’s to say that Wikipedia contributors’ famous commitment — of time, of enthusiasm, of Shirkian surplus — can’t be applied to verifying information as well as aggregating it?

What such a platform would look like, once populated, remains to be seen; the beauty of a wiki being its flexibility, users will make of the site what they will, with the crowd determining which claims/episodes/topics deserve to be checked in the first place. Ideally, “an experienced community of folks who are used to cataloging and tracking these kinds of things” — seasoned Wikipedians — will guide that process, Lih says. As he imagines it, though, the ideal structure of the site would filter truth-claims by episode, or “module” — one episode of “Meet the Press,” say, or one political campaign ad. “I think that’s pretty much what you’d want: one page per media item,” Lih says. “Whether that item is one show or one ad, we’ll have to figure out.”

Another thing to figure out will be how a wiki that will likely rely on publishing comprehensive documents — transcripts, articles, etc. — to verify their contents will dance around copyright issues. But “if there ever were a slam-dunk case for meeting all the attributes of the Fair Use Doctrine,” Lih says, “this is it.” Fact-checking is criticism and comment; it has an educational component (particularly if it operates under the auspices of USC Annenberg); and it doesn’t detract from content’s commercial value. In fact: “I can’t imagine another project that could be so strong in meeting the standards for fair use,” Lih says.

And what about the most common concern when it comes to informational wikis — that people with less-than-noble agendas will try to game the system and codify baseless versions of the truth? “In the Wikipedia universe, what has shaken out is that a lot of those folks who are not interested in the truth wind up going somewhere else,” Lih points out. (See: Conservapedia.) “They find that the community that is concerned with neutrality and with getting verifiable information into Wikipedia is going to dominate.” Majority rules — in a good way.

At the same time, though, “I welcome die-hard Fox viewers,” Lih says. “I welcome people who think Accuracy in Media is the last word. Because if you can cite from a reliable source — from a congressional record, from the Census Bureau, from the Geological Survey, from CIA Factbook, from something — then by all means, I don’t really care what your political stripes are. Because the facts should win out in the end.”

Photo of Andrew Lih by Kat Walsh, used under a GNU Free Documentation License.

April 24 2010

21:16

Questioning the health of Wikipedia

At the final session of ISOJ 2010, Andrew Lih of the University of Southern California presented his research into the health of Wikipedia (PDF).

His interest is prompted by talk about Wikipedia reaching its limits and a slowdown in the growth of the site.

Lih noted that Wikipedia had grown so quickly that there was no data available for 2006-2009, until a massive data dump towards the end of 2009.

Stats from 2009 started to show a leveling off of edits to Wikipedia, with new article production flattening out from 2007.

Lih showed that edits leveled off across many languages, aside from Russian.

Some reasons for this decline could be the arcane wiki editing system and more rules about submissions and edits.

Stats show that between an eighth and a quarter of users create an entry but never hit save. To prove the point, Lih showed videos of users expressing confusion about how to edit entries. Basically, everyone found it very difficult to add and edit.

When Wikipedia started, it had very simple rules, such as the neutral point of view. Now there is a stack of rules that creates barriers to entry into the community.

The Wikimedia Foundation argues that the community is stabilising. But Lih questioned whether this would be enough to sustain the site and ensure a steady stream of new additions.

He suggested that one possible scenario was a slow, steady decline in quality or a lack of timely content, and noted that spam content was slowly trickling into Wikipedia.
