Almost a year ago, I was hired by Ushahidi to work as an ethnographic researcher on a project to understand how Wikipedians managed sources during breaking news events.
Ushahidi cares a great deal about this kind of work because of a new project called SwiftRiver that seeks to collect and enable the collaborative curation of streams of data from the real-time web about a particular issue or event. If another Haiti earthquake happened, for example, would there be a way for us to filter out the irrelevant and the misinformation, and build a stream of relevant, meaningful and accurate content about what was happening for those who needed it? And on Wikipedia's side, could the same tools be used to help editors curate a stream of relevant sources as a team rather than as individuals?
When we first started thinking about the problem of filtering the web, we naturally thought of a ranking system that would rank sources according to their reliability or veracity. The algorithm would consider a variety of variables involved in determining accuracy, as well as whether sources have been chosen, voted up or down by users in the past, and eventually be able to suggest sources according to the subject at hand. My job would be to determine what those variables are -- i.e., what were editors looking at when deciding whether or not to use a source?
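The ranking system as originally conceived can be sketched in a few lines. This is purely a hypothetical illustration: the variable names, weights, and scoring formula below are my assumptions, not part of SwiftRiver's actual design.

```python
# Hypothetical sketch of the envisioned ranking system: each source
# gets a score combining accuracy-related variables with community
# votes. All names and weights here are illustrative assumptions.

def rank_source(accuracy_history, times_cited, upvotes, downvotes,
                weights=(0.5, 0.2, 0.3)):
    """Combine per-source signals into a single reliability score in [0, 1]."""
    w_acc, w_cited, w_votes = weights
    total_votes = upvotes + downvotes
    # Neutral 0.5 when a source has never been voted on
    vote_ratio = upvotes / total_votes if total_votes else 0.5
    # Damp citation counts so heavily cited sources don't dominate
    cited_score = min(times_cited / 100.0, 1.0)
    return w_acc * accuracy_history + w_cited * cited_score + w_votes * vote_ratio

sources = {
    "bbc.co.uk": rank_source(0.9, 250, 40, 5),
    "random-blog.example": rank_source(0.4, 3, 2, 6),
}
best = max(sources, key=sources.get)
```

A system like this could then suggest the highest-scoring sources for a given subject; as the report goes on to argue, however, the score alone turns out to be insufficient.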
I started the research by talking to as many people as possible. Originally I was expecting that I would be able to conduct 10 to 20 interviews as the focus of the research, finding out how those editors went about managing sources individually and collaboratively. The initial interviews enabled me to hone my interview guide. One of my key informants urged me to ask questions about sources not cited as well as those cited, leading me to one of the key findings of the report: that the citation is often not the actual source of information, and is often provided to appease editors who may complain about sources located outside the accepted Western media sphere. But I soon realized that the editors I spoke with came from such a wide variety of experience, work areas and subjects that I needed to restrict my focus to a particular article in order to get a comprehensive picture of how editors were working. I chose the Wikipedia article on the 2011 Egyptian revolution because I wanted a globally relevant breaking news event that would have editors from different parts of the world working together on an issue with local expertise located in a language other than English.
Using Kathy Charmaz's grounded theory method, I chose to focus on editing activity (in the form of talk pages, edits, statistics and interviews with editors) from January 25, 2011, when the article was first created (within hours of the first protests in Tahrir Square), to February 12, when Mubarak resigned and the article changed its name from "2011 Egyptian protests" to "2011 Egyptian revolution." After reviewing big-picture analyses of the article using Wikipedia statistics on top editors, locations of anonymous editors, and so on, I started work with an initial coding of the actions taking place in the text, asking the question, "What is happening here?"
I then developed a more limited codebook using the most frequent and significant codes, and proceeded to compare different events with the same code (looking up the relevant edits of the article in order to get the full story) and to look for the tacit assumptions behind the actions. I did all of this coding in Evernote because it seemed the easiest (and cheapest) way of importing large amounts of textual and multimedia data from the web, but it wasn't ideal: talk pages need to be re-formatted when imported, and I ended up coding all of the data in a single column, since putting each talk-page conversation in its own cell would have been too time-consuming.
I then moved to writing a series of thematic notes on what I was seeing, trying to understand, through writing, what the common actions might mean. I finally moved to the report writing, bringing together what I believed were the most salient themes into a description and analysis of what was happening according to the two key questions that the study was trying to ask: How do Wikipedia editors, working together, often geographically distributed and far from where an event is taking place, piece together what is happening on the ground and then present it in a reliable way? And how could this process be improved?
Ethnography Matters has a great post by Tricia Wang that talks about how ethnographers contribute (often invisible) value to organizations by showing what shouldn't be built, rather than necessarily improving a product that already has a host of assumptions built into it.
And so it was with this research project that I realized early on that a ranking system conceptualized this way would be inappropriate -- for the single reason that alongside characteristics for determining whether a source is accurate (such as whether the author has a history of producing accurate news articles), a number of important variables are independent of the source itself. On Wikipedia, these include the number of secondary sources in the article (Wikipedia policy calls for editors to use a majority of secondary sources), whether the article covers a breaking news story (in which case the majority of sources may have to be primary, eyewitness sources), and whether the source is notable in the context of the article. (Misinformation can also be relevant if it is widely reported and significant to the course of events, as Judith Miller's New York Times stories were for the Iraq War.)
This means that you could have an algorithm for determining how accurate the source has been in the past, but whether you make use of the source or not depends on factors relevant to the context of the article that have little to do with the reliability of the source itself.
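That separation can be made concrete with a small sketch. The context fields and thresholds here are illustrative assumptions loosely based on the policies mentioned above (secondary-source majority, breaking-news exceptions), not actual Wikipedia tooling.

```python
# Sketch of the point above: a source's historical accuracy and the
# decision to cite it are separate questions. The context checks below
# are illustrative assumptions, not real Wikipedia policy code.

def should_use_source(source, article):
    """Contextual checks that no per-source reliability score captures."""
    if article["is_breaking_news"] and source["type"] == "primary":
        return True  # eyewitness/primary sources acceptable early on
    if source["type"] == "secondary":
        return True
    # Otherwise, only allow a primary source if the article already
    # has a majority of secondary sources
    return article["secondary_ratio"] > 0.5

reliable = {"type": "primary", "accuracy": 0.95}
article = {"is_breaking_news": False, "secondary_ratio": 0.3}
use = should_use_source(reliable, article)  # False despite high accuracy
```

Even a source with a near-perfect accuracy history can be the wrong choice for a given article, which is exactly why a single invisible score falls short.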
Another key finding recommending against source ranking is that Wikipedia's authority rests on its requirement that every potentially disputed phrase be backed up by reliable sources that readers can check, whereas source ranking necessarily requires that the calculation be invisible in order to prevent gaming. It is already a source of potential weakness that Wikipedia citations are not the original source of information (since editors often choose citations that will be deemed more acceptable to other editors), so further hiding how sources are chosen would undermine this important value.
On the other hand, having editors provide a rationale behind the choice of particular sources, as well as showing the variety of sources rather than those chosen because of loading time constraints may be useful -- especially since these discussions do often take place on talk pages but are practically invisible because they are difficult to find.
Analyzing the talk pages of the 2011 Egyptian revolution article enabled me to understand how Wikipedia editors set about the tasks of discovering, choosing, verifying, summarizing and adding information, and editing the article. Through the rather painstaking study of hundreds of talk pages, it became clear that editors were following a consistent work process.
It was important to discover the work process that editors were following because any tool that assisted with source management would have to accord as closely as possible with the way that editors like to do things on Wikipedia. Since the process is managed by volunteers and because volunteers decide which tools to use, this becomes really critical to the acceptance of new tools.
After developing a typology of sources and isolating different types of Wikipedia source work, I made two sets of recommendations.
Regarding a ranking system for sources, I'd argue that a descriptive repository of major media sources from different countries would be incredibly beneficial, but that a system for determining which sources are ranked highest according to usage would yield really limited results. (We know, for example, that the BBC is the most used source on Wikipedia by a high margin, but that doesn't necessarily help editors in choosing a source for a breaking news story.) Exposing the variables used to determine relevancy (rather than adding them up in invisible amounts to come up with a magical number) and showing the progression of sources over time offers some opportunities for innovation. But this requires developers to think out of the box in terms of what sources (beyond static texts) look like, where such sources and expertise are located, and how trust is garnered in the age of Twitter. The full report provides details of the recommendations and the findings and will be available soon.
This is my first comprehensive ethnographic project, and one of the things I've noticed, compared with design and research projects using other methodologies, is that although the process can seem painstaking, and it can prove difficult to turn hundreds of small observations into findings that are actionable and meaningful to designers, getting close to the experience of editors is extremely valuable work that is rare in Wikipedia research. I realize now that, until I actually studied an article in detail, I knew very little about how Wikipedia works in practice. And this is only the beginning!
Heather Ford is a budding ethnographer who studies how online communities get together to learn, play and deliberate. She currently works for Ushahidi and is studying how online communities like Wikipedia work together to verify information collected from the web and how new technology might be designed to help them do this better. Heather recently graduated from the UC Berkeley iSchool where she studied the social life of information in schools, educational privacy and Africans on Wikipedia. She is a former Wikimedia Foundation Advisory Board member and the former Executive Director of iCommons - an international organization started by Creative Commons to connect the open education, access to knowledge, free software, open access publishing and free culture communities around the world. She was a co-founder of Creative Commons South Africa and of the South African nonprofit, The African Commons Project as well as a community-building initiative called the GeekRetreat - bringing together South Africa's top web entrepreneurs to talk about how to make the local Internet better. At night she dreams about writing books and finding time to draw.
This article also appeared at Ushahidi.com and Ethnography Matters. Get the full report at Scribd.com.
I agree with Dave Winer when he writes that you no longer have to wait for journalists to publish the news. But I'm also quite sure that journalists will still be around, even two generations from now, though their role will certainly have changed.
Scripting News :: Dave Winer writes: "Journalism itself is becoming obsolete. I know the reporters don't want to hear this, and they're likely to blast me, even try to get me 'fired' (it's happened before) because at least for the next few months I hang my hat at a J-school. I happen to think journalism was a response to publishing being expensive. It cost a lot of money to push bits around the net before there was a net. They had to have huge capital-intensive printing plants, fleets of trucks and delivery boys with paper routes."
[Dave Winer:] Now we can hear directly from the sources and build our own news networks. It's still early days for this, and it wasn't that long ago that we depended on journalists for the news. But in a generation or two we won't be employing people to gather news for us. It'll work differently.
Continue to read Dave Winer, scripting.com
Chicago Tribune :: Reading news articles just became "gamified." With new features on Google News, reading news articles has turned into a game where readers are rewarded with news badges based on their reading habits. Where do you stand? On Google News, the average reader of political news has read 20 articles about politics in the last six months. According to the Official Google Blog, the site has access to more than 50,000 sources.
Continue to read www.chicagotribune.com
Every Friday, Mark Coddington sums up the week’s top stories about the future of news.
The short, happy-ish life of TBD: Just six months after it launched and two weeks after a reorganization was announced, the Washington, D.C., local news site was effectively shuttered this week, when its corporate parent, Allbritton Communications (it’s owned by Robert Allbritton and includes Politico), cut most of its jobs, leaving only an arts and entertainment operation within the website of Allbritton’s WJLA-TV.
TBD had been seen by many as a bellwether in online-only local news, as Poynter’s Mallary Jean Tenore documented in her historical roundup of links about the site, so it was quite a shock and a disappointment to many future-of-newsies that it was closed so quickly. The response — aptly compiled by TBDer Jeff Sonderman — was largely sympathetic to TBD’s staff (former TBD manager Jim Brady even wrote a pitch to prospective employers on behalf of the newly laid off community engagement team). Many observers on Twitter (and Terry Heaton on his blog) pointed squarely at Allbritton for the site’s demise, with The Batavian’s Howard Owens drawing out a short, thoughtful lesson: “Legacy managers will nearly always sabotage innovation. Wall of separation necessary between innovators and legacy.”
Blogger Mike Clark pointed out that TBD’s traffic was beating each of the other D.C. TV news sites and growing as well. The Washington Post reported that while traffic wasn’t a problem, turning it into revenue was — though the fact that TBD’s ads were handled by WJLA staffers might have contributed to that.
Mallary Jean Tenore wrote an insightful article talking to some TBD folks about whether their company gave them a chance to fail. Lehigh j-prof Jeremy Littau was unequivocal on the subject: “Some of us have been talking today on Twitter about whether TBD failed. Nonsense. TBD wasn’t given enough time to fail.”
While CUNY j-prof Jeff Jarvis lamented that “TBD will be painted as a failure of local news online when it’s a failure of its company, nothing more,” others saw some larger implications for other online local news projects. Media analyst Alan Mutter concluded that TBD’s plight is “further evidence that hyperlocal journalism is more hype than hope for the news business,” and Poynter’s Rick Edmonds gave six business lessons for similar projects from TBD’s struggles. Journal Register Co. CEO John Paton ripped Edmonds’ analysis, arguing that Allbritton “can’t pretend to have seriously tried the hyperlocal business space after a six-month experiment it derailed half-way in.”
Applying Apple’s new rules: Publishers’ consternation over Apple’s new subscription plan for mobile devices continued this week, with Frederic Filloux at Monday Note laying out many publishers’ frustrations with Apple’s proposal. The New York Times’ David Carr and The Guardian’s Josh Halliday both covered publishers’ Apple subscription conundrum, and one expert told Carr, “If you are a publisher, it puts things into a tailspin: The business model you have been working with for many years just lost 30 percent off the top.”
At paidContent, James McQuivey made the case for a lower revenue share for Apple, and Dan Gillmor wondered whether publishers will stand up to Apple. The company may also be facing scrutiny from the U.S. Justice Department and Federal Trade Commission for possible antitrust violations, The Wall Street Journal reported.
The fresh issue regarding Apple’s subscription policy this week, though, was the distinction between publishing apps and more service-oriented apps. The topic came to the fore when the folks from Readability, an app that allows users to read articles in an advertising-free environment, wrote an open letter ripping Apple for rejecting their app, saying their new policy “smacks of greed.” Ars Technica’s Chris Foresman and Apple blogger John Gruber noted, though, that Readability’s 30%-off-the-top business model is a lot like Apple’s.
Then Apple’s Steve Jobs sent a short, cryptic email to a developer saying that Apple’s new policy applies only to publishing apps, not service apps. This, of course, raised the question, in TechCrunch’s words, “What’s a publishing app?” That’s a very complex question, and as Instapaper founder Marco Arment wrote, one that will be difficult for Apple to answer consistently. Arment also briefly noted that Jobs’ statement seems to contradict the language of Apple’s new guidelines.
Giving voice to new sources of news: This month’s Carnival of Journalism, posted late last week, focused on ways to increase the number of news sources. It’s a broad question, and it drew a broad variety of answers, which were ably summarized by Courtney Shove. I’m not going to try to duplicate her work here, but I do want to highlight a few of the themes that showed up.
David Cohn, the Carnival’s organizer, gave a great big-picture perspective to the issue, putting it in the context of power and the web. Kim Bui and Dan Fenster defended the community-driven vision for news, with Bui calling journalists to go further: “Let’s admit it, we’ve never trusted the public.” There were several calls for journalists to include more underrepresented voices, with reports and ideas like a refugee news initiative, digital news bus, youth journalism projects, and initiatives for youth in foreign-language families.
The J-Lab’s Jan Schaffer gave 10 good ideas to the cause, and Drury j-prof Jonathan Groves and Gannett’s Ryan Sholin shared their ideas for local citizen news projects, while TheUpTake’s Jason Barnett endorsed a new citizen-journalism app called iBreakNews.
Three bloggers, however, objected to the Carnival’s premise in the first place. Daniel Bachhuber of CUNY argued that improving journalism doesn’t necessarily mean adding more sources: “Instead of increasing the number of news sources, we should focus on producing durable data and the equivalent tools for remixing it.” Lauren Rabaino warned against news oversaturation, and the University of Colorado’s Steve Outing said that more than new sources, we need better filters and hubs for them.
Blogging’s continued evolution: The “blogging is dead” argument has popped up from time to time, and it was revived again this week in the form of a New York Times story about how young people are leaving blogs for social networking sites like Facebook and Twitter. Several people countered the argument, led by GigaOM’s Mathew Ingram, who said that blogging isn’t declining, but is instead evolving into more of a continuum that includes microblogging services like Twitter, traditional blog formats like WordPress, and the hybrid that is Tumblr. He and WordPress founding developer Matt Mullenweg shared the same view — that “people of all ages are becoming more and more comfortable publishing online,” no matter the form.
Scott Rosenberg, who’s written a history of blogging, looked at statistics to make the point, noting that 14 percent of online adults keep a blog, a number he called astounding, even if it starts to decline. “As the online population becomes closer to universal, that is an extraordinary thing: One in ten people writing in public. Our civilization has never seen anything like it.” In addition, Reuters’ Anthony DeRosa argued that longer-form blogging has always been a pursuit of older Internet users.
Reading roundup: I’ve got a few ongoing stories to update you on, and a sampling of an unusually rich week in thoughtful pieces.
— A couple of sites took a peek at Gawker’s traffic statistics to try to determine the effectiveness of its recent redesign. TechCrunch saw an ugly picture; Business Insider was cautiously optimistic based on the same data. Gawker disputed TechCrunch’s numbers, and Terry Heaton tried to sort through the claims.
— A couple of Middle East/North Africa protest notes: The New York Times told us about the response to Egypt’s Internet blackout and the role of mobile technology in documenting the protests. And Amy Gahran of the Knight Digital Media Center gave some lessons from the incredible Twitter journalism of NPR’s Andy Carvin.
— The Daily is coming to Android tablets this spring, and its free trial run has been extended beyond the initial two weeks.
— Matt DeRienzo of the Journal Register Co. wrote about an intriguing idea for a news org/j-school merger.
— Alan Mutter made the case for ending federal funding for public journalism.
— At 10,000 Words, Lauren Rabaino had some awesome things news organizations can learn from tech startups, including thinking of news as software and embracing transparency.
— And here at the Lab, Northwestern prof Pablo Boczkowski gave some quick thoughts on how we tend to associate online news with work, and what that means. He sheds some light on an under-considered aspect of news — the social environments in which we consume it.
Karen Markey had a fairly straightforward idea: Teach students to steer clear of unreliable sources of information through the use of a game.
What the University of Michigan professor wants her students to focus on navigating is academic research. But instead of citing credible references on the rise of the Medici family, what if we could apply a similar game to distinguishing the credibility of news sources?
“The problem is today’s students still don’t know where to go for authoritative, good information that is trustworthy,” said Markey. “But they sure do know how to go to the web.”
If we swapped out “students” for “readers,” you’d have the basis of an argument for media literacy and the importance of finding a way for readers (and journalists themselves) to find good information.
The game Markey created, BiblioBouts, could potentially be an example to educators, j-schools or nonprofits on how to teach media literacy. It’s an idea that’s getting investment, like the Knight Foundation’s funding of the expansion of a civics and news literacy program in West Virginia called Globaloria.
In BiblioBouts, students gather citations from library databases or online sources and rank them against each other based on credibility, content, and relevance to assigned topics. The game is built off Zotero, an open-source online citation tool that lets users organize and share research. In a way, the game is a little like the academic equivalent of Final Fantasy or World of Warcraft: You assemble the best team possible and hope to come out on top. Though maybe it’s a little like the Legend of Zelda in a “gather the tools you’ll need for the journey” way. (Then again, I may just be a big nerd.)
Through rating and tagging each other’s citations, students evaluate what makes a good source, with (hopefully) the more thorough and useful sources rising to the top. If competitiveness is any kind of factor, students will look at the winning sources and want to emulate that process, Markey said. “It puts people in situations where the game-like features encourage them to continue playing,” she said. “And if they continue playing, hopefully they’ll learn more.”
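The rating-and-ranking mechanic described above can be sketched in a few lines. The data model and scoring here are illustrative assumptions on my part, not BiblioBouts' actual implementation.

```python
# Illustrative sketch of a BiblioBouts-style peer-rating round:
# players rate each other's citations on credibility, content and
# relevance, and sources rise to the top by their combined score.
from collections import defaultdict

# citation -> list of (credibility, content, relevance) ratings, each 1-5
ratings = defaultdict(list)

def rate(citation, credibility, content, relevance):
    """Record one player's rating of a citation on the three criteria."""
    ratings[citation].append((credibility, content, relevance))

def leaderboard():
    """Rank citations by mean score across all ratings and criteria."""
    def mean_score(scores):
        return sum(sum(s) for s in scores) / (3 * len(scores))
    return sorted(ratings, key=lambda c: mean_score(ratings[c]), reverse=True)

# Example round (the citations themselves are made up)
rate("Medici banking archive (JSTOR)", 5, 4, 5)
rate("Medici banking archive (JSTOR)", 4, 5, 4)
rate("anonymous history blog", 2, 3, 2)
top = leaderboard()[0]
```

The point of the mechanic is visible even in this toy version: the ranking emerges from many small peer judgments rather than from any single authority.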
It’s arguable that doing research has never been easier, thanks to the likes of Google and Wikipedia. Markey said professors aren’t surprised by studies saying students lend too much credence to search rankings in Google rather than relevance or authority. But Markey is clear that she’s not entrenched in an anti-Internet camp when it comes to research. She said there are plenty of good tools (Google Scholar, for instance), as well as sources for surfacing information — but students need to learn to be more discerning and know when to look deeper.
BiblioBouts may seem like a technology solution to a technology problem, in that you’re using one system to try to bring order to another (solving the “there’s too much information” problem, or perhaps the filter-failure problem). But Markey thinks making readers more critical is the answer, and in that way BiblioBouts is just a tool.
“I think we need to teach people methodologies,” she said. “When you retrieve something on the web, you need to ask questions about what I am looking at and whether the information can be trusted.”
Markey can see a ready analog in journalism and the idea of media literacy. A similar game, call it truth-squadding or BS-detecting, could be used either to train would-be journalists to ferret out information or to create shrewder news consumers. “We need to be critical consumers of information to make decisions that impact our lives,” she said.
Image by Kimli used under a Creative Commons license.
Juliet Shaw writes in a guest post on No Sleep ‘Til Brooklands about her experience of fighting The Daily Mail through the courts after they published an apparently fabricated article (her dissection of the article and its fictions is both painstaking and painful).
There is no happy ending, but there are almost 100 comments. And once again you are struck by the power of sources to tell their side of the story. For Juliet Shaw you could just as well read Melanie Schregardus, or the Dunblane Facebook Group.
Among the comments is Mail reader Elaine, who says
“I have always taken their stance and opinions with a large doze of salt. It will be even larger now. Thank goodness for the internet – as a balance to the Mail I can access the Guardian and the Independent to see their take on a particular world/UK event.”
But also in the comments are others who say they have suffered from being the subject of fabricated articles in the Mail – first Catherine Hughes:
“The article was so damaging to my freelance career that editors I was working with now no longer answer my emails. ‘Heartbroken, devastated and gutted’ doesn’t even come close to how I feel. It happened in September and I am still distraught.”
Then Pomona:
“[I have] been a victim of the Daily Fail’s “journalism” on two occasions: once when my first marriage broke up and they printed a lurid and utterly innaccurate story about me (I’m no celeb, just Jo Public), and more recently when one of their journalists lifted and printed a Facebook reply to their request for information (leaving out the bit where I told them I did not permit them to use or reprint any part of my post)”
And Anonymous:
“The Daily Mail said they were looking for a real life example of a similar case of teachers exploiting trust to complement a news story. They promised to protect my anonymity, use only a very small picture and as one of a number of case studies. A week later a double page spread – taken up mostly with a picture of me – bore the headline ‘Dear Sir, I think I Love you’. The quotes bore no resemblance to what I said and made it sound like I liked the teacher?! Instead of what really happened – a drunken shuffle in the back of a car and a feeling of abuse of trust and sadness the next day.”
“When the article was published, my role as welfare officer was never mentioned, the average overdraft had become *my* overdraft, and I was apparently on the verge of jacking in my studies in despair.”
“I applied as a case study, the photoshoot, the invasive questions. Took months to get my expenses after dozens of ignored emails. Thankfully the article never went to print. At the time I was annoyed but now I am thankful. I also work in PR and would feel extremely uncomfortable offering anyone as a case study for a client. No matter how large the exposure.”
“I complained to the editor. He insisted that all journalists identify themselves as such every time. And that his employee had done no wrong. In short, he was calling ME a liar. And as all interviews are recorded he could prove it. I said, Okay, listen to the recording then! He replied, No, I don’t need to. I stand by my writers.”
Other comments mention similar experiences, some with other newspapers. It’s a small point, driven home over and over again: power has shifted.
As the story of the shooting of Arizona Congresswoman Gabrielle Giffords continues to unfold, we’re seeing another example of Twitter in motion and the different approaches news organizations take to using social media.
Twitter has proven its usefulness to the media in breaking news as a real-time search tool, an instantaneous publisher, and a source discovery service. It’s that last point that is often of most use — and interest — to reporters on Twitter, finding and talking to people who could be useful in a story. But making that approach can be difficult — if not downright awkward. How does Twitter etiquette work when approaching a potential source, particularly when that approach plays out in the open?
NYC The Blog tracked the media requests of Caitie Parker, a woman who tweeted that the shooting took place near her house and that she was a former classmate of the alleged shooter. And that’s when the stampede for interviews began, with more than 30 interview requests coming in on Twitter from The New York Times, CNN, the Associated Press, and more. (Not to mention a similar number by email and Facebook.) So what were their approaches?
Anthony De Rosa of Reuters seems to have been the first to find Parker, and through a series of tweets he conducted something between an interview and standard fact-checking. De Rosa’s discovery also seems to be what opened the floodgates on Parker.
Reporters from local outlets like the Arizona Daily Star, the Arizona Republic, and KTAR radio in Phoenix played up their local ties as they tried to compete with the national media parachuting in to cover the story. At least one reporter from the Los Angeles Times did the same, telling Parker he “went to school at UofA.”
Sometimes you have to roll the dice on name recognition and hope it has a little sway. The New York Times wants to talk to you! PBS NewsHour wants to talk to you!
At least a few reporters cut to the chase and asked Parker a question outright, or sought to verify new information about the shooter.
Another approach uses a little empathy — as in, “I know yr overwhelmed,” or “sorry to add to circus.”
ABC News White House correspondent Jake Tapper is known for being savvy when it comes to using social media in his reporting. Tapper apparently decided to cut to the chase and use the parlance of Twitter when reaching out to Parker: “how can abc news get in touch w you? I will follow u so u can DM me”
The end result of all this attention from journalists?
A brief treat for sports fans and future-of-media junkies: Bill Simmons’ column at ESPN.com about his accidental tweeting last week about Patriots wide receiver Randy Moss’ trade to the Minnesota Vikings. Simmons heard a rumor about the trade from a source and meant to send a direct message (“moss Vikings”) to ESPN reporter Adam Schefter. Instead, Simmons accidentally tweeted it to the world, which made the story blow up from private rumor to public discussion in record time.
The column talks about how that happened, but more interestingly, it also gets into how journalists think about Twitter today, as an outlet for breaking news, as a source, and as a forum for speculation. It’s worth a read, but here are a couple excerpts:
Twitter, which exacerbates the demands of immediacy, blurs the line between reporting and postulating, and forces writers to chase too many bum steers. With every media company unabashedly playing the “We Had It First!” game, reporters’ salary and credibility hinges directly on how many stories they break. That entices reporters to become enslaved to certain sources (almost always agents or general managers), push transparent agendas (almost always from those same agents or GMs) and “break” news before there’s anything to officially break. It also swings the source/reporter dynamic heavily toward the source. Take care of me and I will take care of you.
[...]
On the surface, this annoys me to no end. Who cares? It’s not like we have some giant scoreboard keeping track of everything. But my reporter friends all say the same thing: It’s not about one scoop but the entire body of scoops (not just for the reporter, but the company that employs them). Think of Ichiro grinding out 200 hits every season. Yeah, most of them are mundane singles … but they add up. For readers, that volume turns it into a “feel” thing.
I feel like that guy breaks his share of stories, hence, I trust him. Or flipping that around: I don’t trust that guy, he throws stuff out there left and right and half of it’s not true.
So yeah, there’s no official scoreboard for scoops. We just subconsciously keep score. As do editors. As do media companies. Some will do whatever it takes to pad their stats, whether it’s pimping every decision someone makes to get repaid with information later, playing the odds by reporting something they hope is true (and if it is, they look like a stud), spinning every angle against someone who once butted heads with a favored source, whatever. The best reporters maintain relationships, avoid agendas, craft good narratives, never stop cultivating new sources and — occasionally — break news simply because it’s an outcome of being good at their jobs. That’s what should matter. And that’s how they should be judged. I wish that were always the case.
[...]
In the Twitter era, we see writers repeatedly toss out nuggets of information without taking full ownership. It’s my least favorite thing about Twitter (because it’s wishy-washy) and one of my favorite things about Twitter (because nonstop conjecture is so much fun for sports fans). We saw it happen during the LeBron saga, the baseball trade deadline, Favre’s latest round of “I Might Come Back” — it’s just part of following sports in 2010. Call it “pseudo-reporting”: telling your audience that you think something happened or that you heard something happened, and somehow that sentiment becomes actual news.
Simmons also gives a window into the source development process, detailing how an NBA-exec candidate tried to get Simmons to promote him for a job in exchange for the promise of later scoops. Overall, it’s a great, self-aware piece useful for any journalist thinking about how Twitter fits into new workflows.
Last week, I wrote about the Guardian’s new network of science blogs, which — in a first for the paper — is allowing its (growing) cadre of bloggers to publish directly to the Guardian’s site. The effort, though new for the Guardian, isn’t necessarily new for media organizations in general. In 2008, Eric Berger, a science reporter at the Houston Chronicle — and author of the paper’s SciGuy blog — assembled a team of scientists to contribute to a network of blogs whose topics include climate change, the environment, astronomy, and more. The goal: “to provide a neutral space for scientists and the general public to meet and speak on the issues of the day.”
The “.sphere” experiment — the blogs had titles like Atmo.sphere, Cosmo.sphere, and Evo.sphere — “had some successes and failures,” Berger noted in a later blog post. Some of the blogs fizzled; new ones were born. And one of the biggest determinants of success was, unsurprisingly, the dynamics of authorship: the people at the blogs’ helm. As the project evolved, the focus went from group contributions — several scientists, and some volunteer lay people, writing the content and guiding discussions — to blogs that are written “mostly by individuals.”
I spoke with Berger about that shift. We focused on science blogs; the lessons, though, are relevant to any news organization looking to extend its reach through tapping the talents and expertise of independent bloggers.
Blogging requires passion — about the subject matter and about communication itself. Dave Winer’s notion of a “natural born blogger” is instructive not just for amateur bloggers, but for those networked with professional sites, as well. “People have to want to do it; they have to be interested in it,” Berger says. “And if they like doing it, then they’ll do it more, and they’ll do it better. Because if you’re writing about stuff that you’re interested in and enjoying what you’re doing, it’s going to come through in your writing. It’s going to show your readers that you’re engaged — and going to make them more prone to be engaged, as well.”
The common conception of the scientist locked in academia’s ivory tower is one held not only by many members of the public, but by some scientists, as well. There’s an occasional tendency, Berger points out, for scientists to see themselves and their work as isolated from the rest of the world. (That’s a tendency, I’d add, that can afflict journalism, as well.) Success in blogging, though, requires getting down to solid ground. “You’ve got to have someone who wants to have a conversation with the public about topics that the public is interested in,” Berger says. And, when it comes to guiding a blog, “a big part of it is convincing the scientists that it’s worth their time not only to write blog entries, but also to interact with people in the comments.” Many scientists have no interest in that, he notes — so the trick is finding the ones who are willing to join the fray.
“You’ve got to find the right scientist” – someone who understands the public with whom they’re conversing. Scientists in particular are used to communicating with peers, Berger notes. But “it’s different with a newspaper — it’s an audience of lay people. A lot of people are looking at the website when they’re at work – and so they’re looking to amuse and to educate themselves.” A good blog network will be populated by writers who strike a balance between those two goals.
In addition to looking for Winer’s “natural born bloggers,” you want scientists who are able to marry the expertise of their fields with the ability to connect with the public. “Generally, it’s the people who write more to a general level” who are most successful at blogging, Berger says. “People are not going to read a blog that is primarily educational,” he notes. And “most people aren’t spending their free time on the web to get astronomy lectures, I hate to say.” Instead, in general, “people want stuff either that’s related to the news of what’s happening or that has some kind of popular hook. It’s difficult for science as a topic to compete with things like sports or religion — or politics, of course — which are some of the most popular blog subjects here and elsewhere.” To make it compete, you need writers who are able to refashion science from a niche topic into one of general interest — by moderating content and by writing with, for lack of a better word, flair.
Since communication is so important to the blogging equation (see point one), experts who make good sources might also make good bloggers, Berger notes. “If I’ve interviewed someone in the past, and they’ve been really helpful, or have explained things in a good way, or been willing to return calls quickly, then that person would be a good candidate – or at least someone to suggest” as a blogger, Berger says. Often, he points out, the PR people at universities have a good sense of their faculty’s comfort with external communication; they can be a great resource in finding academics who’d have both the interest and the ability to become good bloggers.
A good blog network, Berger says, depends in large part on a willingness to experiment — not only on the part of the bloggers themselves, but of the network leaders, as well. Perhaps the primary principle is trial-and-error. “I had some hits and I had some misses,” he notes of his two years of network-ing, but by being open to trying out different bloggers and formats and content areas, the network is also open to unexpected successes.
“You kind of have to let people do what they do, when they can,” Berger says. “Different people are going to write different things. Some people are doing it because they want to write, and they’re interested in saying their piece on things; other people are interested in educating. You just kind of let people do what’s to their strength.”
The International Forum for Responsible Media blog has posted details of an interesting judgement this week by the Grand Chamber of the European Court of Human Rights, which centres on the rights of journalists to protect confidential sources.
In the case of Sanoma Uitgevers BV v Netherlands, the court held unanimously that the order requiring the applicant company to surrender material to the public prosecutor was not “prescribed by law” and therefore violated Article 10 of the European Convention on Human Rights.
The case refers to journalists from a car magazine who had attended an illegal car race and taken photographs in 2002. The authorities had demanded the journalists hand over their images to police.
Following ongoing legal disputes, which led to the material being surrendered and then later returned to the magazine, the case came before the European Court of Human Rights. The magazine challenged the legality of the order to disclose information to the police, on the grounds that the disclosure would have revealed its journalists’ sources. In its original judgement, dated 2009, the court found that “the information contained on the CD-ROM had been relevant and capable of identifying the perpetrators of other crimes investigated by the police and the authorities had only used that information for those purposes”.
But following the referral of the case to the Grand Chamber this week, which included a media intervention by bodies including Guardian News and Media and the Committee to Protect Journalists, the court held that Article 10 of the European Convention on Human Rights had been violated and awarded the claimants €35,000 in costs and expenses.
The Inforrm blog has more background information on the case and a link to the judgement in full.
The Icelandic parliament has voted unanimously to create what are intended to be the strongest media freedom laws in the world. And Iceland intends these measures to have international impact, by creating a safe haven for publishers worldwide — and their servers.
The proposal, known as the Icelandic Modern Media Initiative, requires changes to Icelandic law to strengthen journalistic source protection, freedom of speech, and government transparency.
“The Prime Minister voted for it, and the Minister of Finance, and everybody present,” says Icelandic Member of Parliament Birgitta Jónsdóttir, who has been the proposal’s chief sponsor. Her point is that Iceland is serious about this. The country is in the mood for openness after a small group of bankers saddled it with crippling debt, and the proposal ties neatly into the country’s strategy to become prime server real estate.
But although the legislative package sounds very encouraging from a freedom of expression point of view, it’s not clear what the practical benefits will be to organizations outside Iceland. In his analysis of the proposal, Arthur Bright of the Citizen Media Law Project has noted that, in one major test case of cross-border online libel law, “publication” was deemed to occur at the point of download — meaning that serving a controversial page from Iceland won’t keep you from getting sued in other countries. But if nothing else, it would probably prevent your servers from being forcibly shut down.
There might be other benefits too. Wikileaks says that it routes all submissions through Sweden, where investigations into the identity of an anonymous source are illegal. Wikileaks was heavily involved in drafting and promoting the Icelandic package, and whatever your opinion of their current controversies, they’ve proven remarkably immune to legal prosecution in their short history. Conceivably, other journalism organizations could gain some measure of legal protection for anonymous sources if all communications were routed through Iceland.
All of which is to say that issues of press censorship have long since passed the point of globalization. When an aggrieved party in country A can sue a publisher in country B through the courts of country C (as in these examples), press freedom must be understood — and fought for — at an international level.
“It has not only an impact here, but in changing the dialog in Europe,” Jónsdóttir told me.
But it will be some time before the full repercussions of Iceland’s move are felt. For a start, the new laws are not yet written. Icelandic lawyer Elfa Ýir of the Ministry of Culture is leading the drafting effort, and expects to have the help of volunteer legal experts and law students. (“Iceland is still suffering from the financial meltdown,” says Jónsdóttir.) The complex legislative changes will be passed in several parts, possibly beginning late this year.
“It should be done in about a year,” Jónsdóttir said. “I’ll be following this very closely.”
And then it may be further years before we understand, from case law, exactly what an “offshore freedom of expression haven” means to journalists worldwide. Nonetheless, I hope to get a discussion started among the high-powered media law types at the Annenberg-Oxford Summer Institute next month, and we’ll see if we can get a more precise understanding of the practical consequences of Iceland’s move — and how journalists might use it to protect their work. If you have some insight, do drop the Lab a line.
Photo of Iceland by Trey Ratcliff used under a Creative Commons license.
Drawing out the audience: Inside BBC’s User-Generated Content Hub
The hub sits in the “heart” of the BBC’s newsroom in London, and has been operating 24/7 since last fall with a staff of about twenty people. Journalism student Caroline Beavon posted a tantalizing video interview with unit head Matthew Eltringham earlier this year, but there was so much more I wanted to know. How does one find sources for stories happening overseas? Why centralize all social media interactions within one unit at the BBC? To what extent does audience reaction and suggestion drive the news agenda?
So when I bumped into one of the hub’s journalists at a talk in Hong Kong recently, I fairly pounced on her for an interview. Silvia Costeloe, a broadcast journalist at the UGC Hub, very kindly sat down with me to explain that the purpose of the hub is to find and connect with the people around news stories, wherever they are in the world and whatever tools or sites they use to communicate.
Hub journalists scour the Internet for pictures, videos, and other content that might contribute to a story, which they then verify and clear for use. But they also find people, sources who can be contacted by reporters in other departments within the BBC.
What sorts of specialized skills does this demand? “Well, you need to be a journalist, really,” said Costeloe. But the job is also about filtering the enormous amount of noise on the Internet for that one original tweet by an eyewitness. Costeloe said that finding those gems is mostly a matter of persistence and organization. Still, she offered a few practical hints, such as searching for people with a specific location listed in their Twitter profile, or putting “pix” or “vid” in your search to find multimedia content, or watching who local news organizations are watching.
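The search tactics Costeloe describes can be sketched as simple query construction. The snippet below is an illustrative mock-up, not a BBC tool: the `near:` operator and the “pix”/“vid” keyword trick follow Twitter’s public search syntax of the era, and the function name and parameters are my own invention for the example.

```python
def build_eyewitness_queries(topic, location=None, media_hints=("pix", "vid")):
    """Build search strings biased toward eyewitness multimedia.

    Combines a topic keyword with an optional location hint and the
    "pix"/"vid" markers people often put in tweets linking to photos
    or video, as Costeloe suggests.
    """
    base = topic if location is None else f'{topic} near:"{location}"'
    queries = [base]  # plain topic (+ location) search
    for hint in media_hints:
        queries.append(f"{base} {hint}")  # bias toward photo/video posts
    return queries

# Example: looking for eyewitness photos/video of an earthquake
print(build_eyewitness_queries("earthquake", location="Christchurch"))
# → ['earthquake near:"Christchurch"',
#    'earthquake near:"Christchurch" pix',
#    'earthquake near:"Christchurch" vid']
```

The real work, as Costeloe notes, is persistence: running variations of these queries and watching who local news organizations follow, rather than any single clever search string.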
But the hub does more than collect what’s already out there: it uses the BBC’s own website to solicit content, sources, and stories. Costeloe told me that much of their most interesting news gathering comes from comment forms at the bottom of stories, asking for feedback.
The hub’s journalists answer emails generated by stories and read the comments. This makes them the primary back-channel from the BBC’s audience to its journalists. There was a fascinating and comprehensive 2008 study on the impact of “user-generated content” at the BBC, which found that “journalists and audiences display markedly different attitudes towards…audience material,” among many other things. So I asked Costeloe to what degree user feedback shapes the news agenda today.
Keeping track of what’s happening online. Finding sources close to the story. Paying attention to audience feedback. Aren’t those things every journalist should be doing in the Internet era? Yes, says Costeloe, but there is still a strong argument for a specialized unit.
We also discussed the BBC’s comment moderation approach, the working relationship between the hub and the developers of the BBC web site, how stories are updated based on user feedback, and other good stuff. Listen to the 20-minute interview in the player below, download the MP3 here, or read the full transcript which follows.
[See post to listen to audio]