
June 27 2013

16:27

Sensor journalism, storytelling with Vine, fighting gender bias and more: Takeaways from the 2013 Civic Media Conference

Are there lessons journalists can learn from Airbnb? What can sensors tell us about the state of New York City’s public housing stock? How can nonprofits, governments, and for-profit companies collaborate to create places for public engagement online?

Those were just a few of the questions asked at the annual Civic Media Conference hosted by MIT and the Knight Foundation in Cambridge this week. It covered a diverse mix of topics, ranging from government transparency and media innovation to disaster relief and technology’s influence on immigration issues. (For a helpful summary of the event’s broader themes, check out Knight VP of journalism and innovation Michael Maness’s wrap-up talk.)

There was a decided bent towards pragmatism in the presentations, underscored by Knight president Alberto Ibargüen’s measured, even questioning introduction to the News Challenge winners. “I ask myself what we have actually achieved,” he said of the previous cycles of the News Challenge. “And I ask myself how we can take this forward.”

While the big news was the announcement of this year’s winners and the fate of the program going forward, there were plenty of discussions and presentations that caught our attention.

Panelists and speakers — from Republican Congressman Darrell Issa and WNYC’s John Keefe to Columbia’s Emily Bell and recent MIT grads — offered insights on engagement (both online and off), data structure and visualization, communicating with government, the role of editors, and more. In the words of The Boston Globe’s Adrienne Debigare, “We may not be able to predict the future, but at least we can show up for the present.”

One more News Challenge

Though Ibargüen spoke about the future of the News Challenge in uncertain terms, Knight hasn’t put the competition on the shelf quite yet. Maness announced that there will indeed be one more round of the challenge this fall, with a focus on health. That’s about all we know about the next cycle: Maness said Knight is still planning both this round and whatever will follow it, but that they want the challenge to address questions about tools, data, and technology around health care.

Opening up the newsroom

One of the more lively discussions at the conference focused on how news outlets can identify and harness the experience of outsiders. Jennifer Brandel, senior producer for WBEZ’s Curious City, said one way to “hack” newsrooms was to open them up to stories from freelance writers, but also to more input from the community itself. Brandel said journalists could also look beyond traditional news for inspiration for storytelling, mentioning projects like Zeega and the work of the National Film Board of Canada.

Laura Ramos, vice president of innovation and design for Gannett, said news companies can learn lessons about user design from companies like Airbnb and Square, and in particular from how those companies discover, and then address, the specific needs of their users.


Bell, director of the Tow Center for Digital Journalism at Columbia University, said one solution for innovation at many companies has been creating research and development departments. But with R&D labs, the challenge is integrating their experiments, which are often removed from day-to-day activity, with the needs of the newsroom and other departments. Bell said many media companies need leadership that is open to experimentation and can juggle the immediate needs of the business with big-picture planning. Too often in newsrooms, and around the industry, people follow old processes or old ideas and are unable to change, something Bell compared to “watching six-year-olds playing soccer,” with everyone running to the ball rather than performing their role.

Former Knight-Mozilla fellow Dan Schultz said the issue of innovation comes down to how newsrooms allocate their attention and resources. Schultz, who was embedded at The Boston Globe during his fellowship, said newsrooms need to better allocate their developer and coding talent between day-to-day operations like dealing with the CMS and experimenting on tools that could be used in the future. Schultz said he supports the idea of R&D labs because “good technology needs planning,” but the immediate needs of the newsroom don’t always align with long-range needs on the tech side.

Ramos and Schultz both said one of the biggest threats to change in newsrooms can be those inflexible content management systems. Ramos said the sometimes rigid nature of a CMS can force people to make editorial decisions based on where stories should go, rather than what’s most important to the reader.

Vine, Drunk C-SPAN, and gender bias

!nstant: There was Nieman Foundation/Center for Civic Media crossover at this year’s conference: 2013 Nieman Fellows Borja Echevarría de la Gándara, Alex Garcia, Paula Molina, and Ludovic Blecher presented a proposal for a breaking news app called !nstant. The fellows created a wireframe of the app after taking Ethan Zuckerman’s News and Participatory Media class.

The app, which would combine elements of liveblogging and aggregation around breaking news events, was inspired by the coverage of the Boston Marathon bombing and manhunt. The app would pull news and other information from a variety of sources, “the best from participatory media and traditional journalism,” Molina said. Rather than being a simple aggregator, !nstant would use a team of editors to curate information and add context to current stories when needed. “The legacy media we come from is not yet good at organizing the news in a social environment,” said Echevarría de la Gándara.

Drunk C-SPAN and Opened Captions: Schultz also presented a project — or really, an idea — that seems especially timely when more Americans than usual are glued to news coming out of the Capitol. When Schultz was at the Globe, he realized it would be both valuable and simple to create an API that pulls closed-captioning text from C-SPAN’s video files, a project he called Opened Captions, which we wrote about in December. “I wanted to create a service people could subscribe to whenever certain words were spoken on C-SPAN,” said Schultz. “But the whole point is [the browser] doesn’t know when to ask the questions. Luckily, there’s a good technology out there called WebSocket that most browsers support that allows the server and the browser to talk to each other.”
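To make the subscription idea concrete, here’s a minimal sketch of such a client in TypeScript, assuming a hypothetical WebSocket endpoint that emits caption text as plain strings; Opened Captions’ actual interface lives in its GitHub repo and may well differ.

```typescript
// Minimal sketch: listening to a live caption stream over WebSocket and
// flagging watched words. The endpoint URL and message format here are
// hypothetical, not Opened Captions' actual API.
import WebSocket from "ws"; // npm install ws

const WATCH_WORDS = ["filibuster", "cloture", "sequester"];

// Hypothetical caption-stream endpoint.
const socket = new WebSocket("wss://captions.example.org/c-span1");

socket.on("message", (data) => {
  const text = data.toString().toLowerCase();
  for (const word of WATCH_WORDS) {
    if (text.includes(word)) {
      // A real app might push a notification or fire a webhook here.
      console.log(`Heard "${word}" on C-SPAN: ${text.trim()}`);
    }
  }
});

socket.on("error", (err) => console.error("Caption stream error:", err));
```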

To draw attention to the possibilities of this technology, Schultz began experimenting with a project called Drunk C-SPAN, in which he aimed to track key terms used by candidates in a televised debate. The more the pols repeat themselves, the more bored the audience gets and the “drunker” the program makes the candidates sound.

But while Drunk C-SPAN was topical and funny, Schultz says the tool should be less about what people are watching and more about what they could be watching. (Especially since almost nobody in the general population is watching C-SPAN regularly.) Specifically, he envisions a system in which Opened Captions could send you data about what you’re missing on C-SPAN, translate transcripts live, or alert you when issues you’ve indicated an interest in are being discussed. For the nerds in the house, there could even be a badge system based on how much you’ve watched.

Schultz says Opened Captions is fully operational and available on GitHub, and he’s eager to hear any suggestions around scaling it and putting it to work.

Follow Bias is a Twitter plugin that calculates and visualizes the gender diversity of the accounts you follow. When you sign in to the app, it graphs how many of the people you follow are men, women, brands, or bots. Created by Nathan Matias and Sarah Szalavitz of the MIT Media Lab, Follow Bias is built to counteract a pernicious feature of social media: it allows us to indulge our unconscious biases and pass them along to others, contributing to gender disparity in the media rather than counteracting it.

The app is still in private beta, but a demo, which gives a good summary of gender bias in the media, is online here. “The heroes we share are the heroes we have,” it reads. “Among lives celebrated by mainstream media and sites like Wikipedia, women are a small minority, limiting everyone’s belief in what’s possible.” The Follow Bias server updates every six hours, so the hope is that users will correct their biases by broadening the diversity of their Twitter feeds. Eventually, Follow Bias will offer metrics and follow recommendations, and will let users compare themselves to their friends.
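As a rough illustration of the tally Follow Bias visualizes, here’s a sketch in the same vein; the category labels are supplied by hand and the field names are guesses, since the app’s real data model isn’t described here.

```typescript
// Sketch of a Follow Bias-style breakdown. Labels are hand-supplied for
// illustration; the real app infers them, and these names are guesses
// rather than Follow Bias's actual schema.
type Category = "woman" | "man" | "brand" | "bot";

interface Account {
  handle: string;
  category: Category;
}

// Tally followed accounts by category and convert to percentages.
function breakdown(follows: Account[]): Record<Category, number> {
  const counts: Record<Category, number> = { woman: 0, man: 0, brand: 0, bot: 0 };
  for (const a of follows) counts[a.category]++;
  const total = follows.length || 1; // avoid dividing by zero
  for (const key of Object.keys(counts) as Category[]) {
    counts[key] = Math.round((counts[key] / total) * 100);
  }
  return counts;
}

// Example: two of three followed accounts are men -- a skew worth seeing.
console.log(breakdown([
  { handle: "@reporterA", category: "man" },
  { handle: "@reporterB", category: "man" },
  { handle: "@newsroomC", category: "brand" },
]));
```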

LazyTruth: Last fall, we wrote about Media Lab grad student Matt Stempeck’s LazyTruth, the Gmail extension that helps fact-check emails, particularly chain letters and phishing scams. Stempeck told the audience at the Civic Media Conference that since its launch, the tool has attracted around 7,000 users. He said the format of LazyTruth may have capped its growth: “We’ve realized the limits of Chrome extensions, and browser extensions in general, in that a lot of people who need this tool are never going to install browser extensions.”

Stempeck and his collaborators have created an email reply service for LazyTruth that lets users forward suspicious messages to ask@lazytruth.com and get an answer back. Stempeck said they’ve also expanded their misinformation database with information from Snopes, Hoax-Slayer, and Sophos, an antivirus and computer security company.

LazyTruth is now also open source, with the code available on GitHub. Stempeck said he hopes to find funding to expand the fact-checking into social media platforms.

Vine Toolkit: Recent MIT graduate Joanna Kao is working on a set of tools that would allow journalists or anyone else to use Vine in storytelling. The Vine Toolkit would provide several options to add context around the six-second video clips.

Kao said Vine offers journalists several strengths: the short length, the ease of use, and the built-in social distribution network around the videos. But the length is also a weakness, she said, because six seconds provides little context for viewers. (Instagram’s moving in on this turf.) One part of the Vine Toolkit, Vineyard, would let users string together several Vines that could be captioned and annotated, Kao said. Another tool, VineChatter, would let users see conversations and other information being shared about specific Vine videos.
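To give a feel for what Vineyard implies under the hood, here’s a sketch of one possible structure; the field names and render function are hypothetical, not Kao’s actual design.

```typescript
// Sketch: an ordered run of six-second clips, each given the caption a
// bare Vine lacks. Field names are hypothetical, not Kao's design.
interface AnnotatedVine {
  vineUrl: string;
  caption: string; // the context the clip itself can't carry
}

interface VineStory {
  headline: string;
  clips: AnnotatedVine[];
}

// Render the story as a simple captioned list of embeds.
function render(story: VineStory): string {
  const items = story.clips
    .map(
      (c) =>
        `<figure><iframe src="${c.vineUrl}/embed"></iframe>` +
        `<figcaption>${c.caption}</figcaption></figure>`
    )
    .join("\n");
  return `<article><h1>${story.headline}</h1>\n${items}\n</article>`;
}
```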

Open Space & Place: Of algorithms and sensor journalism

WNYC: We also heard from WNYC’s John Keefe during the Open Space & Place discussion. Keefe shared the work WNYC did around tracking Hurricane Sandy and, of course, the Lab’s beloved Cicada Project. (Here’s our most recent check-in on the cicada invasion.)


As Keefe has told the Lab in the past, the next big step in data journalism will be figuring out what kinds of stories can come out of asking questions of data. To demonstrate that idea, Keefe said WNYC is working on a new project measuring air quality in New York City by strapping sensors to bikes. This summer, they’ll be collaborating with the Mailman School of Public Health to do measurement runs across New York. Keefe said the goal is to fill in gaps in government data supplied by particulate measurement stations in Brooklyn and the Bronx. WNYC is also interested in filling in data gaps around NYC’s housing authority, Keefe said. After Hurricane Sandy, some families living in public housing went weeks without power and longer without heat or hot water. Asked Keefe: “How can we use sensors or texting platforms to help these people inform us about what government is or isn’t doing in these buildings?”
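One way bike-mounted readings could fill the gaps between fixed stations is to bucket samples into grid cells and average them. The sketch below is my own illustration, not the WNYC/Mailman methodology; the PM2.5 field and cell size are assumptions.

```typescript
// Sketch: averaging mobile sensor samples into ~1km grid cells so they
// can fill space between fixed monitoring stations. Illustrative only;
// not the WNYC/Mailman School project's actual method.
interface Reading {
  lat: number;
  lon: number;
  pm25: number; // fine-particulate reading, assumed unit µg/m³
}

function gridAverages(readings: Reading[]): Map<string, number> {
  const cells = new Map<string, { total: number; n: number }>();
  for (const r of readings) {
    // Two decimal places of lat/lon is roughly a 1km cell around NYC.
    const key = `${r.lat.toFixed(2)},${r.lon.toFixed(2)}`;
    const cell = cells.get(key) ?? { total: 0, n: 0 };
    cell.total += r.pm25;
    cell.n += 1;
    cells.set(key, cell);
  }
  const averages = new Map<string, number>();
  for (const [key, { total, n }] of cells) averages.set(key, total / n);
  return averages;
}
```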

With the next round of the Knight News Challenge focusing on health, keep an eye on these data-centric, sensor-driven public health projects, because they’re likely to be going places.

Mapping the Globe: Another way to visualize the news, Mapping the Globe lets you see geographic patterns in coverage by mapping The Boston Globe’s stories. The project’s creator, Lab researcher Catherine D’Ignazio, used the geo-tagged locations already attached to more than 20,000 articles published since November 2011 to show how many of them relate to specific Boston neighborhoods — and by zooming out, how many stories relate to places across the state and worldwide. Since the map also displays population and income data, it’s one way to see what areas might be undercovered relative to who lives there — a geographical accountability system of sorts.

This post includes good screenshots of the prototype interactive map. The patterns raise lots of questions about why certain areas receive more attention than others: Is the disparity tied to race, poverty, unemployment, or the location of Globe readers? But D’Ignazio also points out that there are few conclusive correlations or clear answers to her central question: “When does repeated newsworthiness in a particular place become a systemic bias?”
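The undercoverage comparison reduces to a simple rate. Here’s a sketch with invented neighborhoods and figures; Mapping the Globe’s actual data and scoring are richer.

```typescript
// Sketch: stories per 10,000 residents, a crude "attention" rate that
// makes dense and sparse neighborhoods comparable. All figures invented.
interface Neighborhood {
  name: string;
  population: number;
  storyCount: number; // geotagged stories located here
}

function coverageRate(n: Neighborhood): number {
  return (n.storyCount / n.population) * 10_000;
}

const hoods: Neighborhood[] = [
  { name: "Back Bay", population: 18_000, storyCount: 420 },
  { name: "Mattapan", population: 23_000, storyCount: 60 },
];

// Sort ascending so potentially undercovered areas surface first.
for (const n of [...hoods].sort((a, b) => coverageRate(a) - coverageRate(b))) {
  console.log(`${n.name}: ${coverageRate(n).toFixed(1)} stories per 10k residents`);
}
```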

January 05 2012

19:30

Hacking consensus: How we can build better arguments online

In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.

It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.

To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Defining an argument

We’re all familiar with arguments, both in media and in everyday life. But it’s worth briefly reviewing what an argument actually is, as doing so can inform how we might better structure arguments online. “The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt,” writes Douglas Walton in his book Fundamentals of Critical Argumentation. “An argument is made up of statements called premises and a conclusion. The premises give a reason (or reasons) to support the conclusion.”

So an argument can be broken up into discrete claims, unified by a structure that ties them together. But our typical conceptions of online content ignore all that. Why not design content to more easily assess each claim in an argument individually? UI designer Bret Victor is working on doing just that through a series of experiments he collectively calls “Explorable Explanations.”

Writes Victor:

A typical reading tool, such as a book or website, displays the author’s argument, and nothing else. The reader’s line of thought remains internal and invisible, vague and speculative. We form questions, but can’t answer them. We consider alternatives, but can’t explore them. We question assumptions, but can’t verify them. And so, in the end, we blindly trust, or blindly don’t, and we miss the deep understanding that comes from dialogue and exploration.

The alternative is what he calls a “reactive document” that imposes some structure onto content so that the reader can “play with the premise and assumptions of various claims, and see the consequences update immediately.”

Although Victor’s first prototype, Ten Brighter Ideas, is a list of recommendations rather than a formal argument, it gives a feel for how such a document could work. But the specific look, feel, and design of his example aren’t important. The point is simply that breaking up the contents of an argument beyond the level of a post or column makes it possible for authors, editors, or the community to deeply analyze each claim individually, while not losing sight of its place in the argument’s structure.
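Stripped of interface polish, the core mechanic of a reactive document is that conclusions are computed from premises rather than hand-written, so changing a premise updates everything downstream. A minimal sketch, with invented numbers keyed to the Krugman example:

```typescript
// Sketch of the reactive-document mechanic: the conclusion is derived
// from the premises, so it can never drift out of sync when the reader
// plays with an assumption. All numbers are invented.
interface Premises {
  taxRatePct: number;      // proposed financial-transaction tax rate
  annualTradesUsd: number; // taxable trading volume per year
}

function projectedRevenueUsd(p: Premises): number {
  return p.annualTradesUsd * (p.taxRatePct / 100);
}

const show = (p: Premises) =>
  console.log(`At ${p.taxRatePct}%: $${(projectedRevenueUsd(p) / 1e9).toFixed(0)}B/year`);

const premises: Premises = { taxRatePct: 0.03, annualTradesUsd: 500e12 };
show(premises); // the document's default claim

// The reader drags the tax-rate "slider" and the consequence re-renders.
show({ ...premises, taxRatePct: 0.01 });
```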

Show me the evidence (and the conversation)

Victor’s prototype suggests a more interesting way to structure and display arguments by breaking them up into individual claims, but it doesn’t tell us anything about what sort of content should be displayed alongside each claim. To start with, each claim could be accompanied by relevant links that help the reader make sense of that claim, either by providing evidence, counterpoints, context, or even just a sense of who does and does not agree.


At multiple points in his column, Krugman references “the evidence,” presumably referring to parts of the economics literature that support his argument. But what is the evidence? Why can’t it be cited alongside the column? And, while we’re at it, why not link to countervailing evidence as well? For an idea of how this might work, it’s helpful to look at a crowd-sourced fact-checking experiment run by the nonprofit NewsTrust. The “TruthSquad” pilot has ended, but the content is still online. One thing NewsTrust recognized is that the crowd can be a powerful tool not just for comment or opinion but for sourcing claims. For each fact that TruthSquad assessed, readers were invited to submit relevant links and mark them as For, Against, or Neutral.

The links that the crowd identified in the NewsTrust experiment went beyond direct evidence, and that’s fine. It’s also interesting for the reader to see what other writers are saying, who agrees, who disagrees, etc. The point is that a curated or crowd-sourced collection of links directly relevant to a specific claim can help a reader interested in learning more to save time. And allowing space for links both for and against an assertion is much more interesting than just having the author include a single link in support of his or her claim.
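A TruthSquad-style feature implies a simple data model: each claim accumulates stance-tagged links that can be tallied at a glance. The schema below is an assumption for illustration, not NewsTrust’s actual design.

```typescript
// Sketch: stance-tagged evidence accumulating on a single claim.
// Field names are assumptions, not NewsTrust's actual schema.
type Stance = "for" | "against" | "neutral";

interface EvidenceLink {
  url: string;
  stance: Stance;
  submittedBy: string;
}

interface Claim {
  text: string;
  evidence: EvidenceLink[];
}

// A quick tally shows readers where the sourcing stands at a glance.
function tally(claim: Claim): Record<Stance, number> {
  const t: Record<Stance, number> = { for: 0, against: 0, neutral: 0 };
  for (const e of claim.evidence) t[e.stance]++;
  return t;
}

const claim: Claim = {
  text: "A financial-transaction tax would meaningfully reduce the deficit.",
  evidence: [
    { url: "https://example.org/cbo-note", stance: "for", submittedBy: "reader1" },
    { url: "https://example.org/critique", stance: "against", submittedBy: "reader2" },
  ],
};
console.log(tally(claim)); // { for: 1, against: 1, neutral: 0 }
```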

Community efforts to aggregate relevant links along the lines of the TruthSquad could easily be supplemented both by editor-curators (which NewsTrust relied on) and by algorithms which, if not yet good enough to do the job on their own, can at least lessen the effort required by readers and editors. The nonprofit ProPublica is also experimenting with a more limited but promising effort to source claims in their stories. (To get a sense of the usefulness of good evidence aggregation on a really thorny topic, try this post collecting studies of the stimulus bill’s impact on the economy.)

Truth, reliability, and acceptance

While curating relevant links allows the reader to get a sense of the debate around a claim and makes it easier for him or her to learn more, making sense of evidence still takes considerable time. What if a brief assessment of each claim’s truth, reliability, or acceptance were included as well? This piece is arguably the hardest of those I have described. In particular, it would require editors to abandon the view from nowhere and publish judgments about complicated statements that go well beyond traditional fact-checking. And yet doing so would provide huge value to the reader, and it could be accomplished in a number of ways.

Imagine that as you read Krugman’s column, each claim he makes is highlighted in a shade between green and red to communicate its truth or reliability. This sort of user interface is part of the idea behind “Truth Goggles,” a master’s project by Dan Schultz, an MIT Media Lab student and Knight-Mozilla Fellow. Schultz proposes using an algorithm to check articles against a database of claims that have previously been fact-checked by PolitiFact. Schultz’s layer would highlight a claim and offer an assessment (perhaps by shading the text) based on the work of the fact-checkers.
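In code, the core loop might look something like the following; the substring matching and color scale are stand-ins for illustration, not Schultz’s actual algorithm or PolitiFact’s data model.

```typescript
// Sketch of the Truth Goggles loop: match text against previously
// fact-checked claims, then shade matches by verdict. Naive substring
// matching here; the real project's matching is more sophisticated.
type Verdict = "true" | "mostly-true" | "half-true" | "mostly-false" | "false";

interface CheckedClaim {
  pattern: string; // a phrasing to look for, assumed pre-extracted
  verdict: Verdict;
}

// Green-to-red scale, per the shading idea described above.
const VERDICT_COLOR: Record<Verdict, string> = {
  "true": "#2e7d32",
  "mostly-true": "#7cb342",
  "half-true": "#fbc02d",
  "mostly-false": "#ef6c00",
  "false": "#c62828",
};

function annotate(articleHtml: string, db: CheckedClaim[]): string {
  let html = articleHtml;
  for (const claim of db) {
    if (html.toLowerCase().includes(claim.pattern.toLowerCase())) {
      html = html.replace(
        new RegExp(claim.pattern, "i"),
        (match) =>
          `<span style="background:${VERDICT_COLOR[claim.verdict]}">${match}</span>`
      );
    }
  }
  return html;
}
```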

The beauty of using color is the speed and ease with which the reader is able to absorb an assessment of what he or she is reading. The verdict on the statement’s truthfulness is seamlessly integrated into the original content. As Schultz describes the central problem:

The basic premise is that we, as readers, are inherently lazy… It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything?

Still, the number of statements that PolitiFact has checked is relatively small, and what I’m describing requires the evaluation of messy empirical claims that stretch the limits of traditional fact-checking. So how might a publication arrive at such an assessment? In any number of ways. For starters, there’s good, old-fashioned editorial judgment. Journalists can provide assessments, so long as they resist the view from nowhere. (Since we’re rethinking the opinion pages here, why not task the editorial board with such a role?)

Publications could also rely on other experts. Rather than asking six experts to contribute to a “Room for Debate”-style forum, why not ask one to write a lead argument and the others not merely to “respond,” but to directly assess the lead author’s claims? Universities may be uniquely positioned to help in this, as some are already experimenting with polling their own experts on questions of public interest. Or what if a Quora-like commenting mechanism were included for each claim, as Dave Winer has suggested, so that readers could offer assessments, with the best ones rising to the top?

Ultimately, how to assess a claim is a process question, and a difficult one. But numerous relevant experiments exist in other formats. One new effort, Hypothes.is, is aiming to add a layer of peer review to the web, reliant in part on experts. While the project is in its early stages, its founder Dan Whaley is thinking hard about many of these same questions.

Better arguments

What I’ve described so far may seem elaborate or resource-intensive. Few publications these days have the staff and the time to experiment in these directions. But my contention is that the kind of content I am describing would be of dramatically higher value to the reader than the content currently available. And while Victor’s UI points towards a more aggressive restructuring of content, much could be done with existing tools. By breaking up an argument into discrete claims, curating evidence and relevant links, and providing credible assessments of those claims, publishers would equip readers to form opinions on merit and evidence rather than merely on trust, intuition, or bias. Aggregation sites like The Atlantic Wire may be especially well-positioned to experiment in this direction.

I have avoided a number of issues in this explanation. Notably, I have neglected to discuss counter-arguments (which I believe could be easily integrated) and haven’t discussed the tension between empirical claims and value claims (I have assumed a focus on the former). And I’ve ignored the tricky psychology surrounding bias and belief formation. Furthermore, some might cite the recent PolitiFact Lie of the Year controversy as evidence that this sort of journalism is too difficult. In my mind, that incident further illustrates the need for credible, honest referees.


Returning once more to Krugman’s argument, imagine the color of the text signaling whether his claims about financial transactions and economic growth are widely accepted. Or mousing over his point about reducing deficits to quickly see links providing background on the issue. What if it turned out that all of Krugman’s premises were assessed as compelling, but his conclusion was not? It would then be obvious that something was missing. Perhaps more interestingly, what if his conclusion was rated compelling but his claims were weak? Might he be trying to convince you of his case using popular arguments that don’t hold up, rather than the actual merits of the case? All of this would finally be apparent in such a setup.

In rethinking how we structure and assess arguments online, I’ve undoubtedly raised more questions than I’ve answered. But hopefully I’ve convinced you that better presentation of arguments online is at least possible. Not only that, but numerous hackers, designers, and journalists — and many who blur the lines between those roles — are embarking on experiments to challenge how we think about content, argument, truth, and credibility. It is in their work that the answers will be found.

Image by rhys_kiwi used under a Creative Commons license.
