
August 16 2012

14:00

Did Global Voices Use Diverse Sources on Twitter for Arab Spring Coverage?

Citizen journalism and social media have become major sources for the news, especially after the Arab uprisings of early 2011. From Al Jazeera Stream and NPR's Andy Carvin to the Guardian's "Three Pigs" advertisement, news organizations recognize that journalism is just one part of a broader ecosystem of online conversation. At the most basic level, journalists are following social media for breaking news and citizen perspectives. As a result, designers are rushing to build systems like Ushahidi's SwiftRiver to filter and verify citizen media.

Audience analytics and source verification only paint part of the picture. While upcoming technologies will help newsrooms understand their readers and better use citizen sources, we remain blind to the way the news is used in turn by citizen sources to gain credibility and spread ideas. That's a loss for two reasons. First, it opens newsrooms up to embarrassing forms of media manipulation. Second, and more importantly, we're analytically blind to one of bloggers' and citizen journalists' greatest incentives: attention.

Re-imagining media representation

For my MIT Media Lab master's thesis, I'm trying to re-imagine how we think about media representation in online media ecosystems. Over the next year, my main focus will be gender in the media. But this summer, for a talk at the Global Voices Summit in Nairobi, I developed a visualization of media representation in Global Voices, which has been reporting on citizen media far longer than most news organizations.

(I'm hoping the following analysis of Global Voices convinces you that tracking media representation is exciting and important. If your news organization is interested in developing these kinds of metrics, or if you're a Global Voices editor trying to understand whose voices you amplify, I would love to hear from you. Contact me on Twitter at @natematias or at natematias@gmail.com.)

Media Representation in Global Voices: Egypt and Libya

My starting questions were simple: Whose voices (from Twitter) were most cited in Global Voices' coverage of the Arab uprisings, and how diverse were those voices? Was Global Voices just amplifying the ideas of a few people, or were they including a broad range of perspectives? Global Voices was generous enough to share its entire English archive going back to 2004, and I built a data visualization tool for exploring those questions across time and sections:

[Image: Global Voices Twitter citation visualization]

Let's start with Egypt. (Click to load the Egypt visualization.) Global Voices has been covering Egypt since its early days. The first major spike in coverage occurred in February 2007 when blogger Kareem Amer was sentenced to prison for things he said on his blog. The next spike in coverage, in February 2009, occurred in response to the Cairo bombing. The largest spike in Egypt coverage starts at the end of January 2011 in response to protests in Tahrir Square and is sustained over the next few weeks. Notice that while Global Voices did quote Twitter from time to time (citing 68 unique Twitter accounts the week of the Cairo bombing), the diversity of Twitter citation grew dramatically during the Egyptian uprising -- and actually remained consistently higher thereafter.
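
To make that concrete, here is a minimal sketch (in Python, and not the actual viewer code) of the counting behind a chart like this: for each week, how many unique Twitter accounts does the coverage cite? It assumes posts are available as simple records with a publication date and body text, and it treats any @handle pattern in the text as a citation.

    import re
    from collections import defaultdict
    from datetime import date

    HANDLE = re.compile(r"@([A-Za-z0-9_]{1,15})")  # rough Twitter-handle pattern

    def weekly_citation_diversity(posts):
        """Count the unique Twitter accounts cited in each ISO week."""
        weeks = defaultdict(set)
        for post in posts:
            year, week, _ = post["published"].isocalendar()
            weeks[(year, week)].update(h.lower() for h in HANDLE.findall(post["text"]))
        return {wk: len(handles) for wk, handles in sorted(weeks.items())}

    # Toy usage with made-up posts:
    posts = [
        {"published": date(2011, 1, 28), "text": "From Tahrir, via @alaa and @sultanalqassemi"},
        {"published": date(2011, 1, 30), "text": "More reports from @alaa"},
    ]
    print(weekly_citation_diversity(posts))  # {(2011, 4): 2}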

Tracking Twitter citations

Why was Global Voices citing Twitter? By sorting articles by Twitter citation in my visualization, it's possible to look at the posts which cite the greatest number of unique Twitter accounts. Some posts reported breaking news from Tahrir, quoting sources from Twitter. Others reported on viral political hashtag jokes, a popular format for Global Voices posts. Not all posts cite Egyptian sources: this post on the global response to the Egyptian uprising shares tweets from around the world.

[Image: posts citing the greatest number of unique Twitter accounts]
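
Ranking posts that way is a one-line sort once the handles have been extracted. The sketch below assumes each post already carries its list of cited handles; the post records are illustrative, not the real archive.

    def most_cited_posts(posts, top=5):
        """Posts sorted by the number of distinct Twitter handles they cite."""
        return sorted(posts, key=lambda p: len(set(p["handles"])), reverse=True)[:top]

    posts = [
        {"title": "Reactions from Tahrir", "handles": ["alaa", "another_account", "a_third_account"]},
        {"title": "A hashtag joke goes viral", "handles": ["alaa"]},
    ]
    print([p["title"] for p in most_cited_posts(posts)])
    # ['Reactions from Tahrir', 'A hashtag joke goes viral']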

By tracking Twitter citation in Global Voices, we're also able to ask: Whose voices was Global Voices amplifying? Citation in blogs and the news can give a source exposure, credibility, and a growing audience.

In the Egypt section, the most cited Twitter source was Alaa Abd El Fattah, an Egyptian blogger, software developer, and activist. One of the last times he was cited in Global Voices was in reference to his month-long imprisonment in November 2011.

Although Alaa is prominent, Global Voices relied on hundreds of other sources. The Egypt section cites 1,646 Twitter accounts, and @alaa himself appears alongside 368 other accounts.

One of those accounts is that of Sultan al-Qassemi, who lives in Sharjah in the UAE and who translated Arabic tweets into English throughout the Arab uprisings. @sultanalqassemi is the fourth most cited account in Global Voices' Egypt section, but Egypt accounts for only 28 of the 65 posts in which he is mentioned. This is very different from Alaa, who is cited primarily within the Egypt section.

[Image: Sultan al-Qassemi's citations across Global Voices sections]
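
A rough sketch of that section-by-section breakdown: for each handle, count the posts citing it in each section. The post records and section names below are illustrative stand-ins for the actual Global Voices data.

    from collections import Counter, defaultdict

    def citations_by_section(posts):
        """Map each handle to a count of citing posts, broken down by section."""
        table = defaultdict(Counter)
        for post in posts:
            for handle in set(post["handles"]):   # count each post once per handle
                for section in post["sections"]:
                    table[handle][section] += 1
        return table

    posts = [
        {"sections": ["Egypt"], "handles": ["alaa", "sultanalqassemi"]},
        {"sections": ["Libya"], "handles": ["sultanalqassemi", "changeinlibya"]},
        {"sections": ["Bahrain"], "handles": ["sultanalqassemi"]},
    ]
    table = citations_by_section(posts)
    print(table["alaa"])             # Counter({'Egypt': 1})  -- concentrated in one section
    print(table["sultanalqassemi"])  # spread across Egypt, Libya, and Bahrain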

Let's look at other sections where Sultan al-Qassemi is cited in Global Voices. Consider, for example, the Libya section, where he appears in 18 posts. (Click to load the Libya visualization.) Qassemi is cited exactly the same number of times as @ChangeInLibya, a more Libya-focused Twitter account. Here, non-Libyan voices have been more prominent: three of the five most cited Twitter accounts (Sultan al-Qassemi, NPR's Andy Carvin, and the Dubai-based Iyad El-Baghdadi) aren't Libyan. Nevertheless, all three were providing useful information: Qassemi reported on sources in Libya, Carvin quoted and retweeted other sources, and El-Baghdadi created situation maps and posted them online. With Libya's Internet mostly shut down from March to August, it's unsurprising to see more outside commentary than we saw in the Egypt section.

[Image: Global Voices Libya citation visualization]

Where Do We Go From Here?

This very simple demo shows the power of tracking source diversity, source popularity, and the breadth of topics that a single source is quoted on. I'm excited about taking the project further, to look at:

  • Comparing sources used by different media outlets
  • Auto-following sources quoted by a publication, as a way for journalists to find experts, and for audiences to connect with voices mentioned in the media
  • Tracking and detecting media manipulators
  • Developing metrics for source diversity, and developing tools to help journalists find the right variety of sources
  • Journalist and news bias detection, through source analysis
  • Comparing the effectiveness of closed source databases like the Public Insight Network and Help a Reporter Out to open ecosystems like Twitter, Facebook, and online comments. Do source databases genuinely broaden the conversation, or are they just a faster pipeline for PR machines?
  • Tracking the role of media exposure on the popularity and readership of social media accounts

Still Interested?

I'm sure you can think of another dozen ideas. If you're interested in continuing the conversation, try out my Global Voices Twitter Citation Viewer (tutorial here), add a comment below, and email me at natematias@gmail.com.

Nathan develops technologies for media analytics, community information, and creative learning at the MIT Center for Civic Media, where he is a Research Assistant. Before MIT, Nathan worked in UK startups, developing technologies used by millions of people worldwide. He also helped start the Ministry of Stories, a creative writing center in East London. Nathan was a Davies-Jackson Scholar at the University of Cambridge from 2006-2008.

This post originally appeared on the MIT Center for Civic Media blog.

January 05 2012

19:30

Hacking consensus: How we can build better arguments online

In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.

It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.

To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Defining an argument

We’re all familiar with arguments, both in media and in everyday life. But it’s worth briefly reviewing what an argument actually is, as doing so can inform how we might better structure arguments online. “The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt,” writes Douglas Walton in his book Fundamentals of Critical Argumentation. “An argument is made up of statements called premises and a conclusion. The premises give a reason (or reasons) to support the conclusion.”

So an argument can be broken up into discrete claims, unified by a structure that ties them together. But our typical conceptions of online content ignore all that. Why not design content to more easily assess each claim in an argument individually? UI designer Bret Victor is working on doing just that through a series of experiments he collectively calls “Explorable Explanations.”

Writes Victor:

A typical reading tool, such as a book or website, displays the author’s argument, and nothing else. The reader’s line of thought remains internal and invisible, vague and speculative. We form questions, but can’t answer them. We consider alternatives, but can’t explore them. We question assumptions, but can’t verify them. And so, in the end, we blindly trust, or blindly don’t, and we miss the deep understanding that comes from dialogue and exploration.

The alternative is what he calls a “reactive document” that imposes some structure onto content so that the reader can “play with the premise and assumptions of various claims, and see the consequences update immediately.”

Although Victor’s first prototype, Ten Brighter Ideas, is a list of recommendations rather than a formal argument, it gives a feel for how such a document could work. But the specific look, feel, and design of his example aren’t important. The point is simply that breaking an argument into units smaller than a post or column makes it possible for authors, editors, or the community to deeply analyze each claim individually, while not losing sight of its place in the argument’s structure.
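
To make that concrete, here is a small sketch (mine, not Victor’s) of what it might look like to store an argument as addressable claims rather than a single block of text, using Krugman’s column as the example.

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        role: str                                   # "premise" or "conclusion"
        links: list = field(default_factory=list)   # evidence for/against, added later

    @dataclass
    class Argument:
        premises: list
        conclusion: Claim

    krugman = Argument(
        premises=[
            Claim("Budget deficits need to be reduced", "premise"),
            Claim("Much financial trading is of dubious social value", "premise"),
        ],
        conclusion=Claim("We should tax financial transactions", "conclusion"),
    )
    # Each claim is now addressable on its own: it can be displayed, linked,
    # and assessed individually without losing its place in the argument.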

Show me the evidence (and the conversation)

Victor’s prototype suggests a more interesting way to structure and display arguments by breaking them up into individual claims, but it doesn’t tell us anything about what sort of content should be displayed alongside each claim. To start with, each claim could be accompanied by relevant links that help the reader make sense of that claim, either by providing evidence, counterpoints, context, or even just a sense of who does and does not agree.


At multiple points in his column, Krugman references “the evidence,” presumably referring to parts of the economics literature that support his argument. But what is the evidence? Why can’t it be cited alongside the column? And, while we’re at it, why not link to countervailing evidence as well? For an idea of how this might work, it’s helpful to look at a crowd-sourced fact-checking experiment run by the nonprofit NewsTrust. The “TruthSquad” pilot has ended, but the content is still online. NewsTrust recognized that the crowd can be a powerful tool not just for comment or opinion, but for sourcing claims: for each fact that TruthSquad assessed, readers were invited to submit relevant links and mark them as For, Against, or Neutral.

The links that the crowd identified in the NewsTrust experiment went beyond direct evidence, and that’s fine. It’s also interesting for the reader to see what other writers are saying, who agrees, who disagrees, etc. The point is that a curated or crowd-sourced collection of links directly relevant to a specific claim can help a reader interested in learning more to save time. And allowing space for links both for and against an assertion is much more interesting than just having the author include a single link in support of his or her claim.
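
Here is a minimal sketch of that TruthSquad-style mechanism, with illustrative field names and placeholder URLs: readers attach links to a claim, tag each one For, Against, or Neutral, and the tallies give a quick sense of the debate around it.

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class SourcedClaim:
        text: str
        links: list = field(default_factory=list)   # (url, stance) pairs

        def add_link(self, url, stance):
            assert stance in ("for", "against", "neutral")
            self.links.append((url, stance))

        def tally(self):
            return Counter(stance for _, stance in self.links)

    claim = SourcedClaim("A financial transaction tax would curb harmful trading")
    claim.add_link("https://example.org/imf-study", "for")
    claim.add_link("https://example.org/critique", "against")
    claim.add_link("https://example.org/background", "neutral")
    print(claim.tally())   # Counter({'for': 1, 'against': 1, 'neutral': 1})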

Community efforts to aggregate relevant links along the lines of the TruthSquad could easily be supplemented both by editor-curators (which NewsTrust relied on) and by algorithms which, if not yet good enough to do the job on their own, can at least lessen the effort required by readers and editors. The nonprofit ProPublica is also experimenting with a more limited but promising effort to source claims in their stories. (To get a sense of the usefulness of good evidence aggregation on a really thorny topic, try this post collecting studies of the stimulus bill’s impact on the economy.)

Truth, reliability, and acceptance

While curating relevant links allows the reader to get a sense of the debate around a claim and makes it easier for him or her to learn more, making sense of evidence still takes considerable time. What if a brief assessment of the claim’s truth, reliability or acceptance were included as well? This piece is arguably the hardest of those I have described. In particular, it would require editors to abandon the view from nowhere to publish a judgment about complicated statements well beyond traditional fact-checking. And yet doing so would provide huge value to the reader and could be accomplished in a number of ways.

Imagine that as you read Krugman’s column, each claim he makes is highlighted in a shade between green and red to communicate its truth or reliability. This sort of user interface is part of the idea behind “Truth Goggles,” a master’s project by Dan Shultz, an MIT Media Lab student and Mozilla-Knight Fellow. Shultz proposes to use an algorithm to check articles against a database of claims that have previously been fact-checked by PolitiFact. Shultz’s layer would highlight a claim and offer an assessment (perhaps by shading the text) based on the work of the fact checkers.
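
Here is a loose sketch of that idea (not Shultz’s code): match a sentence against a tiny stand-in database of already-checked claims and return a highlight color on a green-to-red scale. The claims, ratings, and matching threshold are all illustrative.

    from difflib import SequenceMatcher

    # Toy stand-in for a PolitiFact-style corpus of already-checked claims.
    FACT_CHECKS = {
        "a financial transaction tax would raise significant revenue": "mostly-true",
        "wall street trading adds nothing to the economy": "half-true",
    }
    RATING_COLORS = {"true": "#2e7d32", "mostly-true": "#7cb342", "half-true": "#fdd835",
                     "mostly-false": "#fb8c00", "false": "#c62828"}

    def highlight(sentence, threshold=0.6):
        """Return (matched claim, highlight color) for the closest fact-check, if any."""
        best, score = None, 0.0
        for checked in FACT_CHECKS:
            ratio = SequenceMatcher(None, sentence.lower(), checked).ratio()
            if ratio > score:
                best, score = checked, ratio
        if score < threshold:
            return None
        return best, RATING_COLORS[FACT_CHECKS[best]]

    print(highlight("A financial transactions tax would raise significant revenue."))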

The beauty of using color is the speed and ease with which the reader is able to absorb an assessment of what he or she is reading. The verdict on the statement’s truthfulness is seamlessly integrated into the original content. As Shultz describes the central problem:

The basic premise is that we, as readers, are inherently lazy… It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything?

Still, the number of statements that PolitiFact has checked is relatively small, and what I’m describing requires the evaluation of messy empirical claims that stretch the limits of traditional fact-checking. So how might a publication arrive at such an assessment? In any number of ways. For starters, there’s good, old-fashioned editorial judgment. Journalists can provide assessments, so long as they resist the view from nowhere. (Since we’re rethinking the opinion pages here, why not task the editorial board with such a role?)

Publications could also rely on other experts. Rather than asking six experts to contribute to a “Room for Debate”-style forum, why not ask one to write a lead argument and the others not merely to “respond,” but to directly assess the lead author’s claims? Universities may be uniquely positioned to help here, as some are already experimenting with polling their own experts on questions of public interest. Or what if a Quora-like commenting mechanism were included for each claim, as Dave Winer has suggested, so that readers could offer assessments, with the best ones rising to the top?

Ultimately, how to assess a claim is a process question, and a difficult one. But numerous relevant experiments exist in other formats. One new effort, Hypothes.is, is aiming to add a layer of peer review to the web, reliant in part on experts. While the project is in its early stages, its founder Dan Whaley is thinking hard about many of these same questions.

Better arguments

What I’ve described so far may seem elaborate or resource-intensive. Few publications these days have the staff and the time to experiment in these directions. But my contention is that the kind of content I am describing would be of dramatically higher value to the reader than the content currently available. And while Victor’s UI points towards a more aggressive restructuring of content, much could be done with existing tools. By breaking up an argument into discrete claims, curating evidence and relevant links, and providing credible assessments of those claims, publishers would equip readers to form opinions on merit and evidence rather than merely on trust, intuition, or bias. Aggregation sites like The Atlantic Wire may be especially well-positioned to experiment in this direction.

I have avoided a number of issues in this explanation. Notably, I have neglected to discuss counter-arguments (which I believe could be easily integrated) and haven’t discussed the tension between empirical claims and value claims (I have assumed a focus on the former). And I’ve ignored the tricky psychology surrounding bias and belief formation. Furthermore, some might cite the recent PolitiFact Lie of the Year controversy as evidence that this sort of journalism is too difficult. In my mind, that incident further illustrates the need for credible, honest referees.


Returning once more to Krugman’s argument, imagine the color of the text signaling whether his claims about financial transactions and economic growth are widely accepted. Or mousing over his point about reducing deficits to quickly see links providing background on the issue. What if it turned out that all of Krugman’s premises were assessed as compelling, but his conclusion was not? It would then be obvious that something was missing. Perhaps more interestingly, what if his conclusion was rated compelling but his claims were weak? Might he be trying to convince you of his case using popular arguments that don’t hold up, rather than the actual merits of the case? All of this would finally be apparent in such a setup.

In rethinking how we structure and assess arguments online, I’ve undoubtedly raised more questions than I’ve answered. But hopefully I’ve convinced you that better presentation of arguments online is at least possible. Not only that, but numerous hackers, designers, and journalists — and many who blur the lines between those roles — are embarking on experiments to challenge how we think about content, argument, truth, and credibility. It is in their work that the answers will be found.

Image by rhys_kiwi used under a Creative Commons license.

November 26 2011

18:05

@wblau - The Future of News: What to learn from Fukushima and the Arab Spring?

Check out Wolfgang Blau's Google+ page (link below). Readers are invited to send their questions.

Wolfgang Blau @wblau | Google+ :: Debate coming up: "The Future of News: What to learn from Fukushima and the Arab Spring?" Wolfgang Blau will chair a debate between Mohamed Nanabhay, Director for New Media at Al Jazeera English, and Joichi Ito, the Japanese internet pioneer and new director of the MIT Media Lab. The debate will take place at the News World Summit in Hong Kong.

[Wolfgang Blau:] Which tools and methods would you both recommend for verifying social media sources in crisis reporting?

Continue reading at plus.google.com

Visit the News World Summit (Hong Kong) site: www.news-worldsummit.org

January 04 2011

17:02

Winning a Golden Ticket to the MIT Media Lab

I'm a graduate student at the MIT Media Lab. I guess I'm old now. I started writing this post three months ago and in the blink of an eye an entire semester whizzed past my head. Or perhaps into my head would be more accurate; it's just that kind of place.

I want to share a little bit about how the Lab works from a student's perspective, along with some first impressions from my first semester. It should be worthwhile for anyone interested in media labs. For everyone else I'll be sure to touch on where civic and community media fit into the operation.

The Lab: A Newcomer's Guide

If you don't know much about the Lab, here is my go-to description: imagine Charlie and the Chocolate Factory. Replace some of the Oompa Loompas with grad students (the rest are robots), and most of the candy wonders with technological ones. This isn't as far off as you might think; we even have a glass elevator.

Now that you have the big picture, I'll explain some of the inner workings.

Research Groups

The Lab is organized into entities called research groups, which accept students. Each group has its own focus, and is led by a faculty member. Group sizes vary, but as of this writing there are about 24 groups and 139 students in the Lab, so you can do the math.

The groups' focuses fall across a wide spectrum. For example, New Media Medicine aims to improve the way healthcare is practiced around the world, while Opera of the Future is redefining music for the modern age. My group, Information Ecology, hopes to incorporate interactions with digital information more naturally into our day-to-day lives.

Sponsors

All of this research is funded by a consortium of sponsors. These companies help foot the bill and in return they get VIP treatment and licenses to any IP generated during the time of their sponsorship. Hey Washington Post, where are you? Or other major news organizations, for that matter.

Every year there are two huge celebrations called sponsor weeks, in which all of the students and most of the faculty hustle and bustle without sleep to prepare all of their demos and show off everything the Lab has been working on since the last get-together. There is no cramming involved at all, I swear...

Classes

In addition to being researchers, everyone is still a student. Master's students take five classes over two years. The courses can be from anywhere at MIT, although many first-year students start with ones from the Media Lab. The same faculty members who lead the research groups lead Media Lab courses.

The best courses are often lottery-based. For instance, How to Make Almost Anything is in incredibly high demand because after taking it you know how to make almost anything.

The Center for Future Civic Media

About five years ago, the Knight Foundation gave a sizable grant to the Media Lab. The grant funded a "center" -- namely, the Center for Future Civic Media, a safe haven for anyone interested in pursuing projects related to information and physical community.

Centers are different from research groups because they don't accept new students; instead they sneakily lure current students into their clutches. There are a few centers besides C4FCM, such as the Center for Future Storytelling and the now defunct Center for Future Banking. They all provide direct support for research that fits into their theme.

The C4FCM leverages the Media Lab in a way that research groups can't because it has potential access to everyone. It can attack a problem from dozens of angles at once. There is also a new research group that starts fresh next year called Civic Media, so really they have it all going for them.

For obvious reasons I have a second home in the Center.

So, is it any good?

Earlier this semester I found myself in trouble. I was working on my composites project for How to Make Almost Anything -- I was making a fabricated pet rock. A silly project, maybe, but I had to make something out of composite and I wasn't prepared to make an airplane. That isn't the point. The point is that I needed googly eyes, and it was 3:00 in the morning.

I sent an email to msgs, the Lab-wide mailing list, with few expectations. Within five minutes I had half a dozen replies from people who were still awake, still working, and had access to a stash of eyes that I could use. What a place!

The plethora of available eyes at 3:00 a.m. reflects one of the most important characteristics of the Media Lab: An almost universal appreciation for fun. This spirit makes the Lab one of a kind, and without it people would have a much harder time breaking away and trying new things. They definitely wouldn't work as hard to attempt the impossible -- you need to have fun if you're going to do something as stupid as that.

Before I started my time here I was warned by several students not to fall into the all-too-common trap of putting too much energy into projects that are just silly, goofy, and don't have real impact on the world. So far I have been too busy learning to sink much time into projects at all, but I understand the temptation.

There are many merits to this place -- it has more thought diversity, skill sets, and resources than you can shake a stick at -- but what sets it apart is the need for that warning. It is the fine line that everyone here walks. To do the best work you have to think like a kid living in a crazy person's body, but you can't forget your calling.

Oh, and the other thing that separates it from other institutions is Food Cam.
