
July 01 2013

14:57

Monday Q&A: Denise Malan on the new data-driven collaboration between INN and IRE

Every news organization wishes it could have more reporters with data skills on staff. But not every news organization can afford to make data a priority — and even those that do can have a hard time finding the right candidates.

A new collaboration between two journalism nonprofits — the Investigative News Network and Investigative Reporters and Editors — aims to address this allocation issue. Denise Malan, formerly an investigative and data reporter at the Corpus Christi Caller-Times, will fill the new role of INN director of data services, offering “dedicated data-analysis services to INN’s membership of more than 80 nonprofit investigative news organizations,” many of them three- or four-person teams that can’t find room or funding for a dedicated data reporter.

It’s a development that could both strengthen the investigative work being done by these institutions and advance skill building around data analysis in journalism. Malan has experience training journalists to procure, clean, and analyze data, and she has high hopes for the kinds of stories and networked reporting this collaboration will produce. We talked about IRE’s underutilized data library, potentially disruptive Supreme Court decisions around freedom of information, the unfortunate end for wildlife wandering onto airplane runways, and what it means to translate numbers into stories.

O’Donovan: How does someone end up majoring in physics and journalism?
Malan: My freshman year they started a program to do a bachelor of arts in physics. Physics Lite. And you could pair that with business or journalism or English — something that was really your major focus of study, but the B.A. in physics would give you a good science background. So you take physics, you take calculus, you take statistics, and that really gives you the good critical thinking and data background to pair with something else — in my case, journalism.
O’Donovan: I guess it’s kind of easy to see how that led into what you’re doing now. But did you always see them going hand in hand? Or is that something that came later?
Malan: In college, I thought I was going to be a science writer. That was the main reason I paired those. When I got into news and started going down the path of data journalism, I was very glad to have that background, for sure. But I started getting more into the data journalism world when the Caller-Times in Corpus Christi sent me to the IRE bootcamp, a weeklong intensive where you concentrate on learning Excel and Access and the different pitfalls you can face in data — some basic cleaning skills. That’s really what got me started in the data journalism realm. And then the newspaper continued to send me to training — to the CAR conferences every year and local community college classes to beef up my skills.
O’Donovan: So, how long were you at the Caller-Times?
Malan: I was there seven years. I started as a reporter in June 2006, and then moved up into editing in May of 2010.
O’Donovan: And in the time that you were there as their data person, what are some stories that you were particularly proud of, or made you feel like this was a burgeoning field?
Malan: We focused on intensely local projects at the Caller-Times. One of the ones that I was really proud of, I worked on with our city hall reporter, Jessica Savage. She found out that the city streets are a huge issue in Corpus Christi. If you’ve ever driven here, you know they are just horrible — a disaster. And the city is trying to find a billion dollars to fix them.

So our city hall reporter found out that the city keeps a database of scores called the Pavement Condition Index. Basically, it’s the condition of your street. So we got that database and we merged it with a file of streets and color-coded it so people could see what the condition of their street was, and we put it in a database for people to find their exact block. This was something the city did not want to give us at first, because if people know their street’s condition scores, they’re going to demand that we do something about it. We’re like, “Yeah, that’s kind of the idea.” But that database became the basis for an entire special section on our streets. We used it to find people on streets that scored a 0, and talked about how it affects their lives — how often they have to repair their cars, how often they walk through giant puddles.

And then we paired it with a breakout box of every city council member and their score. We did a map online, which, for over a year, actually, has been a big hit while the city is discussing how they’re going to find this money. People have been using it as a basis for the debate that they’re having, which, to me, is really kind of how we make a difference. Using this data that the city had, bringing it to light, making it accessible, I think, has really just changed the debate here for people. So that’s one thing I’m really proud of — that we can give people information to make informed decisions.
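
To make the mechanics concrete, here is a minimal sketch of that kind of merge and color-coding in Python/pandas. The file names, column names, and score thresholds are invented for illustration; this is not the Caller-Times’ actual code or data.

```python
# A minimal sketch of the merge and color-coding described above, assuming two
# hypothetical CSV exports. File names, column names, and score thresholds are
# invented for illustration; this is not the Caller-Times' actual code or data.
import pandas as pd

scores = pd.read_csv("pci_scores.csv")        # columns: segment_id, pci_score
streets = pd.read_csv("street_segments.csv")  # columns: segment_id, street_name, block_range

merged = streets.merge(scores, on="segment_id", how="left")

def color_band(score):
    """Bucket a Pavement Condition Index score into a map color (thresholds assumed)."""
    if pd.isna(score):
        return "no data"
    if score >= 70:
        return "green (good)"
    if score >= 40:
        return "yellow (fair)"
    return "red (failing)"

merged["condition"] = merged["pci_score"].apply(color_band)

# The block-level lookup readers used online: type a street, see its scores.
def lookup(street_name):
    hits = merged[merged["street_name"].str.contains(street_name, case=False, na=False)]
    return hits[["street_name", "block_range", "pci_score", "condition"]]

print(lookup("Ocean Dr"))
```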

O’Donovan: Part of your new position is going to be facilitating and assisting other journalists in starting to understand how to do this kind of work. How do you tell reporters that this isn’t scary — that it’s something they can do or they can learn? How do you begin that conversation?
Malan: [At the Caller-Times] we adopted the philosophy that data journalism isn’t just something that one nerdy person in the office does, but something that everyone in the newsroom should have in their toolbox. It really enhances every beat at the newspaper.

I would do training sessions occasionally on Excel, Google Fusion Tables, and Caspio to show everyone in the newsroom, “Here’s what’s possible.” Some people really pick up on it and take it and run with it. Some people are not as math oriented and are not going to be able to take it and run with it themselves, but at least they know those tools are available and what it’s possible to do with them.

So some of the reporters would be just aware of how we could analyze data and they would keep their eyes open for databases on their beats, and other reporters would run with it. That philosophy is very important in any newsroom today. A lot of what I’m going to be doing with IRE and INN is working with the INN members in helping them to gather the data and analyze it and inform their local reporting. So a lot of the same roles, but in a broader context.

O’Donovan: So a lot of it is understanding that everyone is going to come at it with a different skill level.
Malan: Yes, absolutely. All our members have different levels of skills. Some of our members have very highly skilled data teams, like ProPublica, Center for Public Integrity — they’re really at the forefront of data journalism. Other members are maybe one- or two-person newsrooms that may not have the training and don’t have any reporters with those skills. So the skill sets are all over the board. But it will be my job to help members, especially smaller newsrooms, plug into those resources — especially the resources at IRE, with the data library and the training available there — the best they can. We’ll help them bring up their own skills and enhance their own reporting.
O’Donovan: When a reporter comes to you and says, “I just found this dataset or I just got access to it” — how do you dive into that information when it comes to looking for stories? How do you take all of that and start to look for what could turn into something interesting?
Malan: A lot of it depends on the data set. Just approach every set of data as a source that you’re interviewing. What is available there? What might be missing from the data is something you want to think about, too. And you definitely want to narrow it down: A lot of data sets are huge, especially these federal data sets that might have records containing, I don’t know, 120 fields, but maybe you’re only interested in three of them. So you want to get to know the data set, and what is interesting in it, and you want to really narrow your focus.

One collaboration that INN did was using data gathered by NASA for the FAA, and it was essentially near misses — incidents at airports like hitting deer on the runway, and all these little things that can happen but aren’t necessarily reported. They all get compiled in this database, and pilots write these narratives about it, so that field is very interesting to them. There were four or five INN members who collaborated on that, and they all came away with different stories because they all found something else that was interesting for them locally.
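
For readers who want to see what that first pass at a wide data set looks like, here is a minimal sketch in Python/pandas. The file name and field names are invented for illustration (loosely echoing the FAA/NASA example above), not any actual INN or IRE data.

```python
# A minimal sketch of "interviewing" a wide data set, assuming a hypothetical
# CSV extract; the file name and field names are invented for illustration.
import pandas as pd

df = pd.read_csv("runway_incidents.csv", low_memory=False)

# First questions for the "source": what's here, and what's missing?
print(df.shape)                # how many records and how many fields (e.g., 120)
print(df.columns.tolist())     # every field name, so you can spot the interesting ones
print(df.isna().mean().sort_values(ascending=False).head(10))  # the emptiest fields

# Narrow 120 fields down to the three you actually care about.
focus = df[["state", "incident_date", "narrative"]]
print(focus.describe(include="all"))

# Keyword search inside the free-text narrative field reporters found so rich.
deer = focus[focus["narrative"].str.contains("deer", case=False, na=False)]
print(len(deer), "incidents mention deer")
```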

O’Donovan: This position you’ll hold is about bringing the work of INN and IRE together. What’s that going to look like? We talk all the time about how journalism is moving in a more networked direction — where do you see this fitting into that?
Malan: IRE and INN have always had a very close relationship, and I think that this position just kind of formalizes that. I will be helping INN members plug into the resources of IRE, especially the data library. I’ll be working closely with Liz Lucas, the database director at IRE, and I’m actually going to be living near IRE so I can work more closely with them. Some of the data there is very underutilized, and it’s really interesting and maybe hasn’t been used in any projects, especially on a national level.

So we can take that data and I can kind of help analyze it, help slice it for the various regions we might be looking at, and help the INN members use that data for their stories. I’ll basically be acting as almost a translator to get this data from the IRE and help the INN members use it.

Going the other way, INN members might come up with some project idea where the data isn’t available from the data library, or it might be something where we have to gather data from every state individually. We might compile that, and whatever we end up with will be sent back to the IRE library and made available to other IRE members. So it’s a two-way relationship.

O’Donovan: So in terms of managing this collaboration, what are the challenges? Are you thinking of building an interface for sharing data or documents?
Malan: We’re going to be setting up a kind of committee of data people with INN to have probably monthly calls and just discuss what they’re working on and brainstorm possible ideas. I want it to be a very organic, ground-up process — I don’t want to be dictating what the projects should be. I want the members to come up with their own ideas. So we’ll be brainstorming and coming up with things, and we’ll be managing the group through Basecamp. A lot of the members are already on Basecamp and communicate that way through INN.

We’ll be communicating through this committee and coming up with ideas, and I’ll be reaching out to other members. If we come up with an idea that deals with health care, for example, I might reach out to some of the members that are especially focused on health care and try to bring them in on it.

O’Donovan: Do you foresee collaborations between members, like shared reporting and that kind of thing?
Malan: Yeah, depending on the project. Some of it might be shared reporting; some of it might be someone doing a main interview. If we’re doing a crime story dealing with the FBI’s Uniform Crime Report, maybe, rather than having one reporter from every property do the same interview, we nominate one person to do the interview with the FBI that everyone can use in their own story, which they localize with their own data. So, yeah, depending on the project, we’ll have to kind of see how the reporting would shake out.
O’Donovan: Do you have any specific goals or types of stories you want to tell, or even just specific data sets you’re eager to get a look at?
Malan: I think there are several interesting sets in the IRE data library that we might go after at first. There are really interesting health sets, for example, from the FDA — one of them is a database of adverse effects from drugs, complaints that people make that drugs have harmed them. So yeah, some of those are ready right off the bat to parse and analyze.

Some other data sets we might be looking at will be a little harder to get — they’ll take some FOIs and some time. There are several major areas that our members focus on and that we’ll be looking at projects for. Environment, for example — fracking is a large issue, and how the environment affects public health. Health care, especially with the Affordable Care Act coming into effect next year, is going to be a large one. Politics and government — how money influences politicians is a huge area as we come up on the 2014 midterms and the 2016 elections. And education is another issue, with achievement gaps, graduation rates, charter schools — those are all large issues that our members follow. Finding those commonalities in the data sets and digging into them is going to be my first priority.

O’Donovan: The health question is interesting. Knight announced its next round of News Challenge grants is going to be all around health.
Malan: I’m excited about that. We have several members that are really specifically focused on health, so I feel like we might be able to get something good with that.
O’Donovan: Health care stuff or more public health stuff?
Malan: It’s a mix, but a lot of stuff is geared toward the Affordable Care Act now.
O’Donovan: Gathering these data sets must often involve a lot of coordination across states and jurisdictions.
Malan: Yeah, absolutely. One thing I am a little nervous about is the Supreme Court’s recent ruling in the Virginia case, which means a state can now require you to live there to put in an FOI request. That might complicate things a little bit. I know there are several groups working on lists of people who will put in an FOI for you in various states. But that can slow down the process, put a little kink in it, and add to the timeline. I’m concerned, of course, that now that it’s been ruled constitutional, every state might make that the law. It could be a huge thing. A management nightmare.
O’Donovan: What kind of advice do you normally give to reporters who are struggling to get information that they know they should be allowed to have?
Malan: That’s something we encountered a lot here, especially getting data in the proper format, too. Laws on that can vary from state to state. A lot of governments will give you paper or PDF format, instead of the Excel or text file that you asked for. It’s always a struggle.

The advice is to know the law as best you can, know what exceptions are allowed under your state law, be able to quote — you don’t have to have the law memorized, but be able to quote specific sections that you know are on your side. Be prepared with your requests, and be prepared to fight for it. And in a lot of cases, it is a fight.

O’Donovan: That’s an interesting intersection of technical and legal skill. That’s a lot of education dollars right there.
Malan: Yeah, no kidding.
O’Donovan: When you do things like attend the NICAR conference and assess the scene more broadly, where do you see the most urgent gaps in the data journalism field? Is it that we need more data analysts? More computer scientists? More reporters with fluency in communicating with government? More legal aid? If you could allocate more resources, where would you put them right now?
Malan: There’s always going to be a need for more very highly skilled data journalists who can gather these national sets, analyze them, clean them, get them into a digestible format, visualize them online, and inform readers. I would like to see more general beat reporters interested in data and at least getting skills in Excel and even Access — because the beat reporters are the ones on the ground, using their sources, finding these data sets — or missing them if they’re not aware of what data is out there. I would really like there to be a bigger push to at least educate most general beat reporters to a certain level.
O’Donovan: Where do you see the data journalism movement headed over the next couple years? What would your next big hope for the field be?
Malan: Well, of course I hope for it to go kind of mainstream, and that all reporters will have some sort of data skills. It’s of course harder with fewer and fewer resources, and reporters are learning how to tweet and Instagram, and there are demands on their time that were never there before.

But I would hope it would become just a normal part of journalism, that there would be no more “data journalism” — that it just becomes part of what we do, because it’s invaluable to reporting and to really helping ferret out the truth and to give context to stories.

June 28 2013

14:00

ProPublica introduces a magazine to reach new readers on mobile

ProPublica wants to get in the magazine business.

The investigative news nonprofit is launching a monthly digital magazine for iOS devices that will collect the best of its reporting on current topics in the news. The first issue of ProPublica The Magazine, “In the Crosshairs,” is focused on war and gun violence, with stories on drone strikes and the Guatemalan civil war.

ProPublica The Magazine is free and will be delivered via Apple’s Newsstand. And that, more than developing a new line of revenue, is the point for ProPublica: finding a new avenue to reach readers. Specifically, as ProPublica president Dick Tofel told me, to get mobile readers.

“The real point is this puts us in the Newsstand, that pushes us to people, which we hope is a big plus,” he said.

As a news organization, ProPublica has always used partnerships with others to spread its work to new readers. But as the site has matured, staffers have invested more time in building their own audience. A big area of desired growth, Tofel told me, is in mobile, and on iOS devices in particular.

The way Tofel sees it, the magazine is like a monthly version of ProPublica’s work packaging stories for ebooks. But the magazine will allow ProPublica to be a little more timely, while also being thematic around issues that are important to readers. Or, as Tofel put it another way: “It’s a little like This American Life, where he does those multi-story episodes.”

ProPublica is not alone in wanting to develop a product that can repackage reporting and is a good fit for mobile devices. Earlier in June, The Atlantic introduced The Atlantic Weekly, which collects the work of The Atlantic, The Atlantic Wire, and The Atlantic Cities for $2.99 a month. ProPublica partnered with 29th Street Publishing to create the magazine. The company, which has also helped publishers like The Awl create magazines for iOS, uses a relatively lightweight CMS that makes it easy for publishers to transform existing stories into mobile-friendly reads.

Since ProPublica isn’t bringing on additional staff to produce the monthly magazine, they needed something easy to use, said Krista Kjellman Schmidt, ProPublica’s deputy news apps editor. Schmidt will be responsible for preparing the magazine each month, working with other editors to identify a theme and combing through ProPublica’s archive to select the best stories. Schmidt said she’s already at work on the second issue, which looks at race and housing in America. “These stories we’re trying to patch together in a new way so readers can see the long arc of an investigation,” she said.

Schmidt said the magazine is an experiment for ProPublica. While they have an iPhone app, many readers also prefer reading the site on a mobile browser. The magazine puts ProPublica into another venue on iOS devices in Newsstand, setting it up to be discovered by new readers. The richer magazine-like design encourages publishers to find new ways to curate stories and push users to read deeply, she said. Schmidt said they decided to deliver the magazine monthly to gauge reader interest and how the production process fits into their other routines. She said they’ll evaluate the project over the course of the next year.

June 27 2013

16:27

Sensor journalism, storytelling with Vine, fighting gender bias and more: Takeaways from the 2013 Civic Media Conference

Are there lessons journalists can learn from Airbnb? What can sensors tell us about the state of New York City’s public housing stock? How can nonprofits, governments, and for-profit companies collaborate to create places for public engagement online?

Those were just a few of the questions asked at the annual Civic Media Conference hosted by MIT and the Knight Foundation in Cambridge this week. It covered a diverse mix of topics, ranging from government transparency and media innovation to disaster relief and technology’s influence on immigration issues. (For a helpful summary of the event’s broader themes, check out Knight VP of journalism and innovation Michael Maness’s wrap-up talk.)

There was a decided bent towards pragmatism in the presentations, underscored by Knight president Alberto Ibargüen’s measured, even questioning introduction to the News Challenge winners. “I ask myself what we have actually achieved,” he said of the previous cycles of the News Challenge. “And I ask myself how we can take this forward.”

While the big news was the announcement of this year’s winners and the fate of the program going forward, there were plenty of discussions and presentations that caught our attention.

Panelists and speakers — from Republican Congressman Darrell Issa and WNYC’s John Keefe to Columbia’s Emily Bell and recent MIT grads — offered insights on engagement (both online and off), data structure and visualization, communicating with government, the role of editors, and more. In the words of The Boston Globe’s Adrienne Debigare, “We may not be able to predict the future, but at least we can show up for the present.”

One more News Challenge

Though Ibargüen spoke about the future of the News Challenge in uncertain terms, Knight hasn’t put the competition on the shelf quite yet. Maness announced that there would indeed be one more round of the challenge this fall, with a focus on health. That’s about all we know about the next challenge; Maness said Knight is still in the planning stages of the cycle and whatever will follow it. Maness said they want the challenge to address questions about tools, data, and technology around health care.

Opening up the newsroom

One of the more lively discussions at the conference focused on how news outlets can identify and harness the experience of outsiders. Jennifer Brandel, senior producer for WBEZ’s Curious City, said one way to “hack” newsrooms was to open them up to stories from freelance writers, but also to more input from the community itself. Brandel said journalists could also look beyond traditional news for inspiration for storytelling, mentioning projects like Zeega and the work of the National Film Board of Canada.

Laura Ramos, vice president of innovation and design for Gannett, said news companies can learn lessons on user design and meeting user needs from companies like Airbnb and Square. Ramos said another lesson to take from tech companies is discovering, and addressing, specific needs of users.

Bell, director of the Tow Center for Digital Journalism at Columbia University, said one solution for innovation at many companies has been creating research and development departments. But with R&D labs, the challenge is integrating the experiments of the labs, which are often removed from day-to-day activity, with the needs of the newsroom or other departments. Bell said many media companies need leadership that is open to experimentation and can juggle the immediate needs of the business with big-picture planning. Too often in newsrooms, or around the industry, people follow old processes or old ideas and are unable to change, something Bell compared to “watching six-year-olds playing soccer,” with everyone running to the ball rather than performing their role.

Former Knight-Mozilla fellow Dan Schultz said the issue of innovation comes down to how newsrooms allocate their attention and resources. Schultz, who was embedded at The Boston Globe during his fellowship, said newsrooms need to better allocate their developer and coding talent between day-to-day operations like dealing with the CMS and experimenting on tools that could be used in the future. Schultz said he supports the idea of R&D labs because “good technology needs planning,” but the needs of the newsroom don’t always mesh with long-range needs on the tech side.

Ramos and Schultz both said one of the biggest threats to change in newsrooms can be those inflexible content management systems. Ramos said the sometimes rigid nature of a CMS can force people to make editorial decisions based on where stories should go, rather than what’s most important to the reader.

Vine, Drunk C-SPAN, and gender bias

!nstant: There was Nieman Foundation/Center for Civic Media crossover at this year’s conference: 2013 Nieman Fellows Borja Echevarría de la Gándara, Alex Garcia, Paula Molina, and Ludovic Blecher presented a proposal for a breaking news app called !nstant. The fellows created a wireframe of the app after taking Ethan Zuckerman’s News and Participatory Media class.

The app, which would combine elements of liveblogging and aggregation around breaking news events, was inspired by the coverage of the Boston marathon bombing and manhunt. The app would pull news and other information from a variety of sources, “the best from participatory media and traditional journalism,” Molina said. Rather than being a simple aggregator, !nstant would use a team of editors to curate information and add context to current stories when needed. “The legacy media we come from is not yet good at organizing the news in a social environment,” said Echevarría de la Gándara.

Drunk C-SPAN and Opened Captions: Schultz also presented a project — or really, an idea — that seems especially timely when more Americans than usual are glued to news coming out of the capitol. When Schultz was at the Globe, he realized it would be both valuable and simple to create an API that pulls closed captioning text from C-SPAN’s video files, a project he called Opened Captions, which we wrote about in December. “I wanted to create a service people could subscribe to whenever certain words were spoken on C-SPAN,” said Schultz. “But the whole point is [the browser] doesn’t know when to ask the questions. Luckily, there’s a good technology out there called WebSocket that most browsers support that allows the server and the browser to talk to each other.”

To draw attention to the possibilities of this technology, Schultz began experimenting with a project called Drunk C-SPAN, in which he aimed to track key terms used by candidates in a televised debate. The more the pols repeat themselves, the more bored the audience gets and the “drunker” the program makes the candidates sound.

But while Drunk C-SPAN was topical and funny, Schultz says the tool should be less about what people are watching and more about what they could be watching. (Especially since almost nobody in the gen pop is watching C-SPAN regularly.) Specifically, he envisions a system in which Opened Captions could send you data about what you’re missing on C-SPAN, translate transcripts live, or alert you when issues you’ve indicated an interest in are being discussed. For the nerds in the house, there could even be a badge system based on how much you’ve watched.

Schultz says Opened Captions is fully operational and available on GitHub, and he’s eager to hear any suggestions around scaling it and putting it to work.
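
To make the subscription idea concrete, here is a minimal sketch in Python using the websockets library. The endpoint URL and message format are placeholders, not Opened Captions’ actual interface — check the project’s GitHub repo for that.

```python
# A sketch of the keyword-alert idea: subscribe to a caption stream over a
# WebSocket and flag watched terms. The URL and message format are placeholders,
# not Opened Captions' real API.
import asyncio
import websockets

WATCH_WORDS = {"filibuster", "cloture", "appropriations"}

async def watch(url="ws://example.org/opened-captions"):  # placeholder endpoint
    async with websockets.connect(url) as ws:
        async for message in ws:  # assume each message is a chunk of caption text
            text = message.decode() if isinstance(message, bytes) else message
            hits = WATCH_WORDS & set(text.lower().split())
            if hits:
                print(f"Heard {sorted(hits)} on C-SPAN: {text.strip()}")

if __name__ == "__main__":
    asyncio.run(watch())
```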

Follow Bias is a Twitter plugin that calculates and visualizes the gender diversity of the accounts you follow. When you sign in to the app, it graphs how many of the people you follow are male, female, brands, or bots. Created by Nathan Matias and Sarah Szalavitz of the MIT Media Lab, Follow Bias is built to counteract the pernicious function of social media that allows us to indulge our unconscious biases and pass them along to others, contributing to gender disparity in the media rather than counteracting it.

The app is still in private beta, but a demo, which gives a good summary of gender bias in the media, is online here. “The heroes we share are the heroes we have,” it reads. “Among lives celebrated by mainstream media and sites like Wikipedia, women are a small minority, limiting everyone’s belief in what’s possible.” The Follow Bias server updates every six hours, so the hope is that users will try to correct their biases by broadening the diversity of their Twitter feed. Eventually, Follow Bias will offer metrics and follow recommendations, and will allow users to compare themselves to their friends.

LazyTruth: Last fall, we wrote about Media Lab grad student Matt Stempeck’s LazyTruth, the Gmail extension that helps factcheck emails, particularly chain letters and phishing scams. After launching LazyTruth last fall, Stempeck told the audience at the Civic Media conference that the tool has around 7,000 users. He said the format of LazyTruth may have capped its growth: “We’ve realized the limits of Chrome extensions, and browser extensions in general, in that a lot of people who need this tool are never going to install browser extensions.”

Stempeck and his collaborators have created an email reply service for LazyTruth that lets users send suspicious messages to ask@lazytruth.com to get an answer. Stempeck said they’ve also expanded their misinformation database with information from Snopes, Hoax-Slayer, and Sophos, an antivirus and computer security company.

LazyTruth is now also open source, with the code available on GitHub. Stempeck said he hopes to find funding to expand the fact-checking into social media platforms.

Vine Toolkit: Recent MIT graduate Joanna Kao is working on a set of tools that would allow journalists or anyone else to use Vine in storytelling. The Vine Toolkit would provide several options to add context around the six-second video clips.

Kao said Vine offers journalists several strengths: the short length, ease of use, and the built-in social distribution network around the videos. But the length is also problematic, she said, because it doesn’t provide context for readers. (Instagram’s moving in on this turf.) One part of the Vine Toolkit, Vineyard, would let users string together several Vines that could be captioned and annotated, Kao said. Another tool, VineChatter, would allow a user to see conversations and other information being shared about specific Vine videos.

Open Space & Place: Of algorithms and sensor journalism

WNYC: We also heard from WNYC’s John Keefe during the Open Space & Place discussion. Keefe shared the work WNYC did around tracking Hurricane Sandy, and, of course, the Lab’s beloved Cicada Project. (Here’s our most recent check-in on that invasion topic.)

As Keefe has told the Lab in the past, the next big step in data journalism will be figuring out what kind of stories can come out of asking questions of data. To demonstrate that idea, Keefe said WNYC is working on a new project measuring air quality in New York City by strapping sensors to bikers. This summer, they’ll be collaborating with the Mailman School of Public Health to do measurement runs across New York. Keefe said the goal would be to fill in gaps in government data supplied by particulate measurement stations in Brooklyn and the Bronx. WNYC is also interested in filling in data gaps around NYC’s housing authority, says Keefe. After Hurricane Sandy, some families living in public housing went weeks without power and longer without heat or hot water. Asked Keefe: “How can we use sensors or texting platforms to help these people inform us about what government is or isn’t doing in these buildings?”
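
As a rough illustration of how such crowd-collected readings might be summarized, here is a minimal sketch in Python/pandas that bins GPS-tagged particulate readings into a coarse grid. The file and column names are invented; this is not WNYC’s actual pipeline.

```python
# A rough sketch of summarizing GPS-tagged air-quality readings on a coarse
# grid. File and column names are invented; this is not WNYC's actual pipeline.
import pandas as pd

readings = pd.read_csv("bike_sensor_readings.csv")  # columns: lat, lon, pm25, timestamp

# Bin readings into ~0.01-degree cells (roughly a kilometer) and average them.
readings["cell"] = (readings["lat"].round(2).astype(str) + "," +
                    readings["lon"].round(2).astype(str))
grid = readings.groupby("cell")["pm25"].agg(["mean", "count"])

# Cells with few samples deserve less trust -- or another bike run.
print(grid.sort_values("mean", ascending=False).head(10))
print(grid[grid["count"] < 5].shape[0], "cells still need more readings")
```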

With the next round of the Knight News Challenge focusing on health, keep an eye on these data-centric, sensor-driven, public health projects, because they’re likely to be going places.

Mapping the Globe: Another way to visualize the news, Mapping the Globe lets you see geographic patterns in coverage by mapping The Boston Globe’s stories. The project’s creator, Lab researcher Catherine D’Ignazio, used the geo-tagged locations already attached to more than 20,000 articles published since November 2011 to show how many of them relate to specific Boston neighborhoods — and by zooming out, how many stories relate to places across the state and worldwide. Since the map also displays population and income data, it’s one way to see what areas might be undercovered relative to who lives there — a geographical accountability system of sorts.

This post includes good screenshots of the prototype interactive map. The patterns raise lots of questions about why certain areas receive more attention than others: Is the disparity tied to race, poverty, unemployment, the location of Globe readers? But D’Ignazio also points out that there are few conclusive correlations or clear answers to her central question — “When does repeated newsworthiness in a particular place become a systemic bias?”
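
For a sense of the arithmetic behind such a map, here is a minimal sketch in Python/pandas computing stories per 10,000 residents by neighborhood. The inputs and column names are hypothetical, not Mapping the Globe’s actual data.

```python
# A minimal sketch of the coverage-per-capita idea, assuming hypothetical
# inputs: a table of geotagged stories and a table of neighborhood populations.
import pandas as pd

stories = pd.read_csv("globe_stories.csv")     # columns: story_id, neighborhood
population = pd.read_csv("neighborhoods.csv")  # columns: neighborhood, population, median_income

counts = (stories.groupby("neighborhood").size()
          .rename("story_count").reset_index())
coverage = population.merge(counts, on="neighborhood", how="left").fillna({"story_count": 0})

# Stories per 10,000 residents: one rough lens on which places may be undercovered.
coverage["stories_per_10k"] = 10_000 * coverage["story_count"] / coverage["population"]
print(coverage.sort_values("stories_per_10k").head(10))
```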

June 26 2013

20:57

With gay marriage sure to spark emotional responses, The Washington Post and New York Times try structuring comments

Back in March, we wrote about a New York Times experiment to add more structure to reader comments on big stories. In that case, the story was the election of Pope Francis; the Times asked readers to annotate their responses with whether they were Catholic, whether they were surprised by the appointment, and whether they approved of it. That added structure allowed other readers to view comments through those lenses, putting a filter on what could have been, on another platform, an overwhelming “Read 5,295 comments” link.

Today brought some more big news: the Supreme Court’s ruling that the Defense of Marriage Act, which prevented federal recognition of same-sex marriages, was unconstitutional. And today both the Times and The Washington Post brought structure to reader response.

First the Post:

The interactive — credited to Katie Park, Leslie Passante, Ryan Kellett, and Masuma Ahuja — steps past the pro-/anti-gay marriage debate and instead asks why readers care: “Why do the Supreme Court’s decisions on gay marriage matter to you?” The given choices — “It engages my moral or religious beliefs,” “It impacts someone I know,” and the like — then provide the raw data for a lovely flower-like Venn-diagram data visualization. (With colors sufficiently muted to avoid immediate rainbow associations.)

The open response question also tries to steer clear of pro/con by asking: “Now, in your own words, use your experience to tell us how these decisions resonate with you.” It’s generated over 2,800 responses at this writing, and you can sort through them all via the structured filters.
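
As a toy illustration of the kind of tallying such a visualization needs, here is a minimal Python sketch that counts every combination of reasons a reader selects. Two of the labels are shortened from the article; the third label and the sample responses are invented.

```python
# A toy sketch of tallying overlaps among multi-select reasons, the raw counts
# a Venn-style view needs. Two labels are shortened from the article; the third
# label and the sample responses are invented for illustration.
from itertools import combinations
from collections import Counter

responses = [
    {"moral or religious beliefs", "impacts someone I know"},
    {"impacts someone I know"},
    {"moral or religious beliefs", "impacts someone I know", "follows the Court closely"},
]

overlap = Counter()
for picked in responses:
    # Count every non-empty combination the reader selected.
    for r in range(1, len(picked) + 1):
        for combo in combinations(sorted(picked), r):
            overlap[combo] += 1

for combo, n in overlap.most_common():
    print(n, " & ".join(combo))
```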

Now the Times:

The Times’ interactive was built by Lexi Mainland, Michael Strickland, Scott Blumenthal, John Niedermeyer, and Tyson Evans. They selected six key excerpts from today’s opinions — four from Anthony Kennedy and one each from Antonin Scalia and Samuel Alito — and asked readers whether they agree or disagree with each. (There’s also an “unsure” option for those who don’t fancy themselves constitutional scholars.)

Along with that quantifiable response, readers were asked to leave a brief comment explaining their position. The excerpts appear in random order on each load. And, just as the pope experiment separated out responses from Catholics, this Times interactive pulls out comments from people who identify as gay. Like the Post, the Times uses a non-standard call for responses: Rather than responding to a news story, they’re asked to “Respond to the Justices.”

(The responses so far don’t do much to change the stereotype of Times readers as liberal. Justice Kennedy’s four excerpts — from the majority opinion, striking down DOMA — have been agreed with 130 times and disagreed with just four times. In contrast, Scalia and Alito’s pro-DOMA comments are losing Times readers 76 to 7 and 73 to 6, respectively.)

As news organizations try to figure out better ways to benefit from their audiences — ways that go beyond an unstructured “What do you think?” — these efforts from the Post and the Times are welcome. Big stories that generate big emotion deserve custom treatment if you want to get the most out of your readers. Comments are just another kind of content, and as content becomes more intelligently structured, comments should follow suit.
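
To make “structured comments” concrete, here is a minimal Python sketch of what such a record might look like. The field names are an assumption for illustration, not either paper’s actual schema.

```python
# A minimal sketch of a "structured comment" record, generalizing from the two
# interactives above. Field names are an assumption, not either paper's schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StructuredComment:
    story_id: str
    prompt_id: str                       # which excerpt or question was shown
    stance: Optional[str] = None         # "agree" / "disagree" / "unsure"
    facets: dict = field(default_factory=dict)  # e.g., {"identifies_as_gay": True}
    text: str = ""

def by_stance(comments, stance):
    """Filtering is trivial once the stance lives in a field, not buried in prose."""
    return [c for c in comments if c.stance == stance]
```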

16:48

What’s New in Digital Scholarship: A generation gap in online news, and does The Daily Show discourage tolerance?

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

We’re at the halfway mark in our year-long odyssey tracking all things digital media and academic. Below are studies that continue to advance understanding among various hot topics: drone journalism; surveillance and the public; Twitter in conflict zones; Big Data and its limits; crowdsourced information platforms; remix culture; and much more. We also suggest some further “beach reads” at bottom. Enjoy the deep dive.

“Reuters Institute Digital News Report 2013: Tracking the Future of News”: Paper from University of Oxford Reuters Institute for the Study of Journalism, edited by Nic Newman and David A. L. Levy.

This new report provides tremendous comparative perspective on how different countries and news ecosystems are developing both in symmetrical and divergent ways (see the Lab’s write-up of the national differences/similarities highlighted.) But it also provides some interesting hard numbers relating to the U.S. media landscape; it surveys news habits of a sample of more than 2,000 Americans.

Key U.S. data points include: the number of Americans reporting accessing news by tablet in the past week rose, from 11 percent in 2012 to 16 percent in 2013; 28 percent said they accessed news on a smartphone in the last week; 75 percent of Americans reported accessing news online in the past week, while 72 percent said they got news through television and 47 percent reported having read a print publication; TV (43 percent) and online (39 percent) were Americans’ preferred platforms for accessing news. Further, a yawning divide exists between the preferences of those ages 18 to 24 and those over 55: among the younger cohort, 64 percent say the Web is their main source for news, versus only 25 percent among the older group; as for TV, however, 54 percent of older Americans report it as their main source, versus only 20 percent among those 18 to 24. Finally, 12 percent of American respondents overall reported paying for digital news in 2013, compared to 9 percent in 2012.

“The Rise and Fall of a Citizen Reporter”: Study from Wellesley College, for the WebScience 2013 conference. By Panagiotis Metaxas and Eni Mustafaraj.

This study looks at a network of anonymous Twitter citizen reporters around Monterrey, Mexico, covering the drug wars. It provides new insights into conflict zone journalism and information ecosystems in the age of digital media, as well as the limits of raw data. The researchers, both computer scientists, analyze a dataset focused on the hashtag #MTYfollow, consisting of “258,734 tweets written by 29,671 unique Twitter accounts, covering 286 days in the time interval November 2010-August 2011.” They drill down on the account @trackmty, run by the pseudonym Melissa Lotzer, which is the largest of the accounts involved.

The scholars reconstruct a sequence in which a wild Twitter “game” breaks out — obviously, with life-and-death stakes — involving accusations about cartel informants (“hawks,” or “halcones”) and citizen watchdogs (“eagles,” or “aguilas”), with counter-accusations flying that certain citizen reporters were actually working for the Zetas drug cartel; indeed, @trackmty ends up being accused of working for the cartels. Online trolls attack her on Twitter and in blogs.

“The original Melissa @trackmty is slow to react,” the study notes, “and when she does, she tries to point to her past accomplishments, in particular the creation of [a group of other media accounts] and the interviews she has given to several reporters from the US and Spain (REF). But the frequency of her tweeting decreases, along with the community’s retweets. Finally, at the end of June, she stops tweeting altogether.” It turns out that the real @trackmty had been exposed — “her real identity, her photograph, friends and home address.”

Little of this drama was obvious from the data. Ultimately, the researchers were able to interview the real @trackmty and members of the #MTYfollow community. The big lessons, they realize, are the “limits of Big Data analysis.” The data visualizations showing influence patterns and spikes in tweet frequency showed all kinds of interesting dynamics. But they were insufficient to make inferences of value about the community affected: “In analyzing the tweets around a popular hashtag used by users who worry about their personal safety in a Mexican city we found that one must go back and forth between collecting and analyzing many times while formulating the proper research questions to ask. Further, one must have a method of establishing the ground truth, which is particularly tricky in a community of — mostly — anonymous users.”
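
For context, the descriptive pass the researchers describe might look something like this minimal Python/pandas sketch — the kind of counts and curves that, on their own, missed the real story. The file and column names are hypothetical; the #MTYfollow dataset is not reproduced here.

```python
# A sketch of a descriptive first pass over a hashtag archive -- the kind of
# counts that, alone, missed the story. File and column names are hypothetical.
import pandas as pd

tweets = pd.read_csv("mtyfollow_tweets.csv", parse_dates=["created_at"])
# columns assumed: account, created_at, text

# Most active accounts in the hashtag community.
print(tweets["account"].value_counts().head(10))

# Monthly tweet volume for the most active account -- the curve that showed
# @trackmty going quiet, without explaining why.
top = tweets["account"].value_counts().index[0]
monthly = (tweets[tweets["account"] == top]
           .set_index("created_at")
           .resample("M")
           .size())
print(monthly)
```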

“Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory”: Study from Ohio State University, published in the Journal of Communication. By R. Kelly Garrett, Erik C. Nisbet, and Emily K. Lynch.

As the political fact-checking movement — the FactChecks and Politifacts, along with their various lesser-known cousins — has arisen, so too has a more hard-headed social science effort to get to the root causes of persistent lies and rumors, a situation made all the worse on the web. Of course, journalists hope truth can have a “corrective” effect, but the literature in this area suggests that blasting more facts at people often doesn’t work — hence, the “information deficit fallacy.” Thus, a cottage psych-media research industry has grown up, exploring “motivated reasoning,” “biased assimilation,” “confirmation bias,” “cultural cognition,” and other such concepts.

This study tries to advance understanding of how peripheral cues such as accompanying graphics and biographical information can affect how citizens receive and accept corrective information. In experiments, the researchers ask subjects to respond to claims about the proposed Islamic cultural center near Ground Zero and the disposition of its imam. It turns out that contextual information — what the imam has said, what he looks like and anything that challenges dominant cultural norms — often erodes the positive intentions of the fact-checking message.

The authors conclude that the “most straightforward method of maximizing the corrective effect of a fact-checking article is to avoid including information that activates stereotypes or generalizations…which make related cognitions more accessible and misperceptions more plausible.” The findings have a grim quality: “The unfortunate conclusion that we draw from this work is that contextual information so often included in fact-checking messages by professional news outlets in order to provide depth and avoid bias can undermine a message’s corrective effects. We suggest that this occurs when the factually accurate information (which has only peripheral bearing on the misperception) brings to mind” mental shortcuts that contain generalizations or stereotypes about people or things — so-called “naïve theories.”

“Crowdsourcing CCTV surveillance on the Internet”: Paper from the University of Westminster, published in Information, Communication & Society. By Daniel Trottier.

A timely look at the implications of a society more deeply pervaded by surveillance technologies, this paper analyzes various web-based efforts in Britain that involve the identification of suspicious persons or activity. (The controversies around Reddit and the Boston Marathon bombing suspects come to mind here.) The researcher examines Facewatch, CrimeStoppers UK, Internet Eyes, and Shoreditch Digital Bridge, all of which had commercial elements attached to crowdsourcing projects where participants monitored feeds from surveillance cameras of public spaces. He points out that these “developments contribute to a normalization of participatory surveillance for entertainment, socialization, and commerce,” and that the “risks of compromised privacy, false accusations and social sorting are offloaded onto citizen-watchers and citizen-suspects.” Further, the study highlights the perils inherent in the “‘gamification’ of surveillance-based labour.”

“New Perspectives from the Sky: Unmanned aerial vehicles and journalism”: Paper from the University of Texas at Arlington, published in Digital Journalism. By Mark Tremayne and Andrew Clark.

The use of unmanned aerial vehicles (UAVs, or “drones”) in journalism is an area of growing interest, and this exploration provides some context and research-based perspective. Drones in the service of the media have already been used for everything from snapping pictures of Paris Hilton and surveying tornado-damaged areas in Alabama to filming secret government facilities in Australia and protestor clashes in Poland. In all, the researchers found “eight instances of drone technology being put to use for journalistic purposes from late 2010 through early 2012.”

This practice will inevitably raise issues about the extent to which it goes too far. “It is not hard to imagine how the news media, using drones to gather information, could be subject to privacy lawsuits,” the authors write. “What the news media can do to potentially ward off the threat of lawsuits is to ensure that drones are used in an ethical manner consistent with appropriate news practices. News directors and editors and professional associations can establish codes of conduct for the use of such devices in much the same way they already do with the use of hidden cameras and other technology.”

“Connecting with the user-generated Web: how group identification impacts online information sharing and evaluation”: Study from University of California, Santa Barbara, published in Information, Communication & Society. By Andrew J. Flanagin, Kristin Page Hocevar, and Siriphan Nancy Samahito.

Whether it’s Wikipedia, Yelp, TripAdvisor, or some other giant pool of user-generated “wisdom,” these platforms convene large, disaggregated audiences who form loose memberships based around apparent common interests. But what makes certain communities bond and stick together, keeping online information environments fresh, passionate, and lively (and possibly accurate)?

The researchers involved in this study perform some experiments with undergraduates to see how adding small bits of personal information — the university, major, gender, or other piece of information — to informational posts changed perceptions by viewers. Perhaps predictably, the results show that “potential contributors had more positive attitudes (manifested in the form of increased motivation) about contribution to an online information pool when they experienced shared group identification with others.”

For editors and online community designers and organizers, the takeaway is that information pools “may actually form and sustain themselves best as communities comprising similar people with similar views.” Not exactly an antidote to “filter bubble” fears, but it’s worth knowing if you’re an admin for an online army.

“Selective Exposure, Tolerance, and Satirical News”: Study from University of Texas at Austin and University of Wyoming, published in the International Journal of Public Opinion Research. By Natalie J. Stroud and Ashley Muddiman.

While not the first study to focus on the rise of satirical news — after all, a 2005 study in Political Communication on “The Daily Show with Jon Stewart” now has 230 subsequent academic citations, according to Google Scholar — this new study looks at satirical news viewed specifically in a web context.

It suggests the dark side of snark, at least in terms of promoting open-mindedness and deliberative democracy. The conclusion is blunt: “The evidence from this study suggests that satirical news does not encourage democratic virtues like exposure to diverse perspectives and tolerance. On the contrary, the results show that, if anything, comedic news makes people more likely to engage in partisan selective exposure. Further, those viewing comedic news became less, not more, tolerant of those with political views unlike their own.” Knowing Colbert and Stewart, the study’s authors can expect an invitation soon to atone for this study.

“The hidden demography of new media ethics”: Study from Rutgers and USC, published in Information, Communication & Society. By Mark Latonero and Aram Sinnreich.

The study leverages 2006 and 2010 survey data, both domestic and international, to take an analytical look at how notions of intellectual property and ethical Web culture are evolving, particularly as they relate to ideas such as remixing, mashups and repurposing of content. The researchers find a complex tapestry of behavioral norms, some of them correlated with certain age, gender, race or national traits. New technologies are “giving rise to new configurable cultural practices that fall into the expanding gray area between traditional patterns of production and consumption. The data suggest that these practices have the potential to grow in prevalence in the United States across every age group, and have the potential to become common throughout the dozens of industrialized nations sampled in this study.”

Further, rules of the road have formed organically, as technology has outstripped legal strictures: “Most significantly, despite (or because of) the inadequacy of present-day copyright laws to address issues of ownership, attribution, and cultural validity in regard to emerging digital practices, everyday people are developing their own ethical frameworks to distinguish between legitimate and illegitimate uses of reappropriated work in their cultural environments.”

Beach reads:

Here are some further academic paper honorable mentions this month — all from the culture and society desk:

Photo by Anna Creech used under a Creative Commons license.

June 24 2013

15:44

Opening up government: A new round of Knight News Challenge winners aims to get citizens and governments better information

Moments ago, the Knight Foundation announced its latest round of winners in the Knight News Challenge, its currently semiannual competition to identify fundable ideas that advance the interests of journalism and the information needs of communities. This round focused on the open government movement, and its eight winners all fit squarely into that box. More about them below.

But the big news is what Knight Foundation CEO Alberto Ibargüen just said here in Cambridge at the opening morning of the 2013 MIT-Knight Civic Media Conference. He asked openly for ideas on what the future of the News Challenge should be, because, as he put it, “It may be finished. It may be that, as a device for doing something, it may be that we’ve gone as far as we can take it.”

#civicmedia @ibarguen @knightfdn asks for ideas on how to take #newschallenge to the next level. He asks is #newschallenge dead? Send ideas

— Damian Thorman (@dthorman) June 24, 2013

The six-year-old News Challenge is probably the highest-profile effort to fund innovation in journalism and media. It has funded many dozens of projects over the years, and beyond that, its application process has forced thousands of people to turn fuzzy ideas into concrete proposals. Knight devotes $5 million a year to the News Challenge, which has evolved from a single annual open call to a series of smaller, faster, more focused contests, with a significant reboot leading into 2012.

With more than a half decade in the rearview, Ibargüen asked what had been accomplished: “What have we actually achieved? How have we changed the way people receive their information? How have we affected the existing news community? … They take, I think, comparatively little notice of the things people in this room do.”

To be clear, he gave no sign of stepping away from funding journalism innovation, which remains a core Knight mission. But he noted that the foundation had maximum flexibility in how to accomplish that goal: “We have a huge luxury: We can do whatever we want to do. We can use whatever process we want to use.”

Which was behind his question to the assembled crowd: “What would you do if you had decided to invest $5 million a year in figuring out how to best get news and information to communities? What would you do?”

There will be at least one more round of the News Challenge later this year (topic TBA), but beyond that, Knight’s thinking about where to take the broader idea. Ibargüen said he expected the foundation would make these decisions over the next four to five months. If you’ve got an idea, get in touch with Knight.

But that’s the future. How about the brand new round of winners? Civic Insight promises to create better databases of vacant properties so activists can better connect land to opportunities. OpenCounter wants to make it easier for small businesses to navigate local regulation. Outline.com aims to build public policy simulators, estimating the impact of legislative decisions on people’s circumstances. The Oyez Project will offer clear case summaries of the suits before American appellate courts. GitMachines wants to make it easier for governments to add servers quickly.

As I wrote in January for the last round of announcements, the “News” in Knight News Challenge seems to be moving out of the spotlight in favor of a broader concept of connecting civic information to people who can use it. In the classical American 20th century news model, that was a role that typically involved journalists as intermediaries. Today, though, those communities of self-interest can organize in ways more efficient than a newspaper’s subscriber list. While a few of the projects funded could be of use to journalists — making data available to the general public also makes it available to reporters, who can then approach it with a different set of interests — they’re not the primary target. (That growing disconnect, I imagine, is something that will be addressed in whatever new form the News Challenge takes.)

Civic Insight

Award: $220,000
Organization: Civic Industries
Project leads: Alex Pandel, Eddie Tejeda and Amir Reavis-Bey
Twitter: @CivicInsight, @alexpandel, @maromba, @eddietejeda

Neighbors, cities, nonprofits and businesses all have an interest in seeing vacant properties become productive again. However, a lack of public access to information about these properties makes it difficult for groups to work together on solutions. By plugging directly into government databases, Civic Insight provides real-time information on vacant and underutilized properties, enabling more collaborative, data-driven community development. With Civic Insight, journalists and residents can search for a property on a map and learn about its ownership, inspection and permitting history, and subscribe to receive real-time notifications about changes. Civic Insight grew out of a successful pilot in New Orleans called BlightStatus, which was created during the team’s 2012 Code for America fellowship. It is now available for licensed use by cities nationwide. Knight Foundation’s support will help the team expand the software and test new use cases in more communities.
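
As a rough sketch of the “subscribe to changes” idea, here is a minimal Python example that polls a hypothetical open-data endpoint and reports new permit or inspection activity. The URL, fields, and polling approach are assumptions for illustration — Civic Insight’s real interface is not documented here.

```python
# A sketch of the "subscribe to changes" idea: poll a hypothetical open-data
# feed for a property and print records that weren't seen before. The URL and
# field names are invented; this is not Civic Insight's actual API.
import time
import requests

ENDPOINT = "https://data.example.gov/properties/{pid}/activity"  # placeholder URL

def watch_property(pid, interval_seconds=3600):
    """Poll a (hypothetical) open-data feed and print newly appeared records."""
    seen = set()
    while True:
        records = requests.get(ENDPOINT.format(pid=pid), timeout=30).json()
        for rec in records:
            key = (rec.get("record_id"), rec.get("status"))
            if key not in seen:
                seen.add(key)
                print(f"New activity on {pid}: {rec.get('type')} -> {rec.get('status')}")
        time.sleep(interval_seconds)

# Example (runs indefinitely): watch_property("123-main-st")
```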

Team: Eddie Tejeda is a web developer and former Code for America fellow who brings 10 years of experience working on open-source civic projects such as Digress.it and Littlesis.org. Tejeda is engaged in the Open Gov movement in his home city of Oakland, where he co-founded OpenOakland and serves as a mayoral appointee to the city’s Public Ethics Commission, which oversees government transparency.

Alex Pandel is a designer, communicator and community organizer. Before her 2012 Code for America fellowship with the City of New Orleans, Pandel was engaged in public-interest advocacy work with CalPIRG, as well as designing print and web solutions for organizations like New York Magazine and The Future Project.

Amir Reavis-Bey is a software engineer with experience building client-server applications for investment bank equities trading. He also has web development experience helping non-profits to collaborate and share resources online to promote human rights activism. He spent 2012 partnering with the City of New Orleans as a Code for America fellow.

GitMachines

Award: $500,000
Organization: GitMachines
Project leads: Greg Elin, Rodney Cobb, Ikjae Park, Terence Rose, Blaine Whited and John Lancaster
Twitter: @gregelin

Governments are often reluctant to adopt new software and technology because of security and compliance concerns. GitMachines offers developers doing civic innovation a grab-and-go depot of accreditation-ready servers, so they can more easily build new technology that governments can adopt faster. Unlike traditional servers that can take hours or days to set up, GitMachines can be up and running in minutes and come pre-configured to meet government guidelines. This makes it easier for governments to adopt open-source software and will help agencies take on new technology more quickly in the future.

Team: Rodney Cobb is a mobile developer and data analyst working in Washington, D.C. Through his previous work with Campus Compact, Cobb has worked on several projects combining civic engagement/service learning and virtual interaction. Cobb received a bachelor’s in political science from Clark Atlanta University and his master’s in politics from New York University.

Greg Elin has spent 20 years developing easy-to-use information tools and helping organizations embrace disruptive technologies. In 2006, Elin created the Sunlight Foundation’s Sunlight Labs. He later served as chief technology officer at United Cerebral Palsy before entering the civil service in 2010 as one of the first chief data officers in the federal government. Elin has been leading the Federal Communications Commission’s efforts to lower data-collection burden and improve data sharing with modern web service APIs. He was a member of the White House Task Force on Smart Disclosure, exploring machine-readable data as a policy tool and citizen aid. Elin has a master’s in interactive telecommunications from New York University’s Tisch School of the Arts.

John Lancaster has a bachelor’s degree in computer science and a minor in studio art, and is studying for his master’s in information systems technology. He has worked as a technology consultant for the past four years at the Department of State, where he builds mission-critical websites that reach a global audience in over 60 languages and manages the server infrastructure that supports the entire operation.

Ikjae Park is an expert in software development and system administration working for a government contractor, and he has developed enterprise Java applications at Salesforce.com, among others. He is passionate about development and about creating simple workflow processes for the community.

Terence Rose is a senior business analyst with MIL Corp., currently leading content development and user experience for high-profile Department of Commerce projects. He previously worked as a technologist on contract for the Office of Head Start.

Blaine Whited is a programmer and systems administrator with a bachelor’s in computer science.

OpenCounter

Award: $450,000
Organization: OpenCounter
Project leads: Peter Koht, Joel Mahoney
Twitter: @opencounter, @yurialeks, @joelmahoney

While entrepreneurs may have market-moving ideas, very few can expertly navigate the local government permitting process that allows them to open and operate. Whether it’s a startup, boutique or restaurant, OpenCounter helps to simplify this interaction with city government. It collects and sorts data on existing regulations while providing running totals of the costs and time involved in setting up shop. A team of Code for America fellows developed and piloted OpenCounter in Santa Cruz, Calif., during 2012. Knight Foundation funds will support OpenCounter’s expansion to new communities, including several 2013 Code for America cities.

Team: Peter Koht, a self-described civics nerd, is an experienced economic development professional who most recently worked for the City of Santa Cruz. Koht worked on a number of issues at the city, including leading a regional broadband policy group, opening up city data and spearheading policy initiatives that lowered administrative barriers to job creation. Before his public-sector role, he worked in technology and media.

Joel Mahoney is a civic technologist and serial entrepreneur. He was an inaugural fellow at Code for America, and served as a technical advisor to the organization. Before Code for America, Mahoney founded several startups, including an online travel site, a genetics visualization tool and an m-health platform for diabetics. His work has been featured in The Washington Post, The Boston Globe and The New York Times.

Open Gov for the Rest of Us

Award: $350,000
Organization: LISC Chicago
Project leads: Susana Vasquez, Dionne Baux, Demond Drummer, Elizabeth Rosas-Landa
Twitter: @liscchicago

Open Gov for the Rest of Us is seeking to engage neighborhoods on Chicago’s South Side in the Open Government movement. The three-stage campaign will connect more residents to the Internet, promote the use of open government tools and develop neighborhood-driven requests for new data that address residents’ needs. Building on the success of LISC Chicago’s Smart Communities program and Data Friday series, the project aims to spread a culture of data and improved use of digital tools in low-income neighborhoods by directly involving their residents.

Team: Susana Vasquez is LISC Chicago’s executive director. Vasquez joined LISC in 2003 as a program officer and soon became director of the office’s most ambitious effort – the New Communities Program, a 10-year initiative to support comprehensive community development in 16 neighborhoods. She has a bachelor’s degree in history from the University of Illinois and a master’s from Harvard University’s Kennedy School of Government.

Dionne Baux, a LISC Chicago program officer who works on economic development and technology programs, has worked in city government and for nonprofits for more than seven years. Baux leads LISC’s Smart Communities program, which is designed to increase digital access and use by youth, families, businesses and other institutions. She has a master’s degree in public administration, with a focus in government, from Roosevelt University.

Demond Drummer is tech organizer for Teamwork Englewood, an organization formed in 2003 as part of LISC Chicago’s New Communities Program. Its goal is to strengthen the Englewood neighborhood on Chicago’s South Side. Drummer joined Webitects, a web design firm, in summer 2009. Previously, he coordinated a youth leadership and civic engagement initiative in Chicago. A graduate of Morehouse College, he is completing a master’s degree at the University of Chicago Divinity School.

Elizabeth Rosas-Landa is the Smart Communities program manager at The Resurrection Project in Chicago’s Pilsen neighborhood. A Mexico City native, she received a bachelor’s degree in information technology from Insurgentes University and later joined the Marketing and Promotion Company in Mexico. In 2008, she moved to the United States to work with community organizations on technology issues. At The Resurrection Project, Rosas-Landa has implemented computer literacy programs for residents and businesses.

Outline.com

Award: unspecified, through Knight Enterprise Fund
Organization: Outline.com
Project leads: Nikita Bier, Jeremy Blalock, Erik Hazzard, Ray Kluender
Twitter: @OutlineUSA

Outline.com is developing an online public policy simulator that allows citizens and journalists to visualize the impact that particular policies might have on people and their communities. For instance, with Outline.com, a household can measure how a tax cut or an increase in education spending will affect their income. The project builds on the team’s award-winning app Politify, which simulated the impacts of the Obama and Romney economic plans during the 2012 campaign. The Outline.com simulator uses models developed by a team of economists, backed by open data on American households from the IRS, the Census Bureau and other sources. The Commonwealth of Massachusetts has hired Outline.com to develop an official pilot. The team is a part of the accelerator TechStars Boston.

Team: Nikita Bier, CEO, recently graduated from the University of California at Berkeley with honors and degrees in business administration and political economy. During his college years, he researched higher education finance, receiving recognition for his insights from the president of the university. While a student, he founded Politify.us, an award-winning election application that received national coverage. Before that, he worked in business development at 1000memories, a Greylock and YCombinator-backed startup.

Jeremy Blalock, CPO, led design and development for Politify.us. He is currently on leave from UC Berkeley, where he studied electrical engineering and computer science.

Erik Hazzard, CTO, is an active member of the data visualization and mapping communities. He was formerly lead developer at Visual.ly. He is the author of OpenLayers 2.10 Beginner’s Guide. He graduated from Florida State University with a bachelor’s degree in information science.

Ray Kluender graduated with honors from the University of Wisconsin with majors in economics, mathematics and political science. His extensive research experience includes involvement in developing value-added models of teacher effectiveness for Atlanta, New York City and Los Angeles public schools, election forecasting for Pollster.com and studying optimal health insurance design and government intervention in health care at the National Bureau of Economic Research. He will be starting his Ph.D. in economics at MIT this August.

Note: Outline.com is receiving funds through the Knight Enterprise Fund, an early stage venture fund that invests in for-profit ventures aligned with Knight’s mission of fostering informed and engaged communities. In line with standard venture-capital practices, the funding amounts are not being disclosed.

Oyez Project

Award: $600,000
Organization: University of Chicago, Kent School of Law
Project lead: Jerry Goldman
Twitter: @oyez

The activities of courts across the country are often hard to access and understand. For the past 20 years, the Oyez Project has worked to open the U.S. Supreme Court by offering clear case summaries, opinions and free access to audio recordings and transcripts. With Knight Foundation funding, Oyez will expand to state supreme and federal appellate courts, offering information to the public about the work of these vital but largely anonymous institutions. Beginning in the five largest states that serve over one-third of the American public, Oyez will work with courts to catalog materials and reformat them following open standards practices. In conjunction with local partners, Oyez will annotate the materials, adding data and concise summaries that make the content more accessible for a non-legal audience. Oyez will release this information under a Creative Commons license and make it available online and through a mobile application.

Team: Professor Jerry Goldman of the IIT Chicago-Kent College of Law has brought the U.S. Supreme Court closer to everyone through the Oyez Project. He has collaborated with experts in linguistics, psychology, computer science and political science with major financial support from the National Science Foundation, the National Endowment for the Humanities, Google and a select group of national law firms to create an archive of 58 years of Supreme Court audio. In recent years, Oyez has put the Supreme Court in your pocket with mobile apps, iSCOTUSnow and PocketJustice.

Plan in a Box

Award: $620,000
Organization: OpenPlans
Project leads: Frank Hebbert, Ellen McDermott, Aaron Ogle, Andy Cochran, Mjumbe Poe
Twitter: @OpenPlans

Local planning decisions can shape everything about a community — from how residents get around, to how they interact with their neighbors and experience daily life. Yet information on projects — from new plans for downtown centers to bridge replacements — is often difficult to obtain. This project will be an open-source web-publishing tool that makes it easy to engage people in the planning process. With minimal effort, city employees will be able to create and maintain a useful website that provides information that citizens and journalists need while integrating with social media and allowing for public input.

Team: Aaron Ogle is an OpenPlans software developer. Prior to OpenPlans, he was a fellow at Code for America where he partnered with the City of Philadelphia to build solutions to help foster civic engagement. He specializes in JavaScript and GIS development and has contributed to such applications as reroute.it, septa.mobi, changeby.us, walkshed.org and phillystormwater.org.

Andy Cochran, creative director, provides design vision for OpenPlans’ projects, building user interfaces for tools that enable people to be better informed and stay engaged in local issues. Cochran has a bachelor’s degree from the Maryland Institute College of Art, and he has over a decade of experience in print and web design.

Ellen McDermott leads OpenPlans’ outreach to community organizations and cities, to help them be effective in using digital and in-person engagement tools. She also manages operations for OpenPlans. Previously, McDermott was the director of finance and administration for Honeybee Robotics, a technology supplier to the NASA Mars programs. She is a graduate of Amherst College and King’s College London.

Frank Hebbert leads the software team at OpenPlans. Outside of work, he volunteers with Planning Corps, a network of planners providing assistance to non-profit and community groups. Hebbert holds a master’s degree in city planning from MIT.

Mjumbe Poe is a software developer for OpenPlans. Previously, Poe was a fellow at Code for America, and before that a research programmer at the University of Pennsylvania working on modeling and simulation tools for the social sciences.

Procure.io

Award: $460,000
Organization: Department of Better Technology
Project leads: Clay Johnson and Adam Becker
Twitter: @cjoh, @AdamJacobBecker

The government procurement process can be both highly complicated and time-consuming — making it difficult for small businesses to discover and bid on contracts and for journalists and transparency advocates to see where public money is going. As White House Presidential Innovation Fellows, Clay Johnson and Adam Becker built a simple tool for governments to easily post requests for proposals, or RFPs. Based on its early success at the federal level, the team is planning to expand the software to help states and cities. In addition, they will build a library of statements of work that any agency can adapt to their needs. The goal is to bring more competition into government bidding, as a way to both reduce costs and ensure that the most qualified team gets the job.

Team: Clay Johnson may be best known as the author of The Information Diet: A Case for Conscious Consumption. Johnson was also one of the founders of Blue State Digital, the firm that built and managed Barack Obama’s online campaign for the presidency in 2008. Since 2008, Johnson has worked on opening government, as the director of Sunlight Labs until 2010, and as a director of Expert Labs until 2012. He was named the Google/O’Reilly Open Source Organizer of the Year in 2009, was one of Federal Computing Week’s Fed 100 in 2010, and won the CampaignTech Innovator award in 2011. In 2012, he was appointed as an inaugural Presidential Innovation Fellow and led the RFP-EZ project, a federal experiment in procurement innovation.

Adam Becker is a software developer and entrepreneur. He co-founded and served as chief technology officer of GovHub, a civic-oriented startup that was the first to provide users a comprehensive, geographically calculated list of their government officials. In 2012, he was appointed alongside Johnson as an inaugural Presidential Innovation Fellow and led the development of RFP-EZ.

June 20 2013

19:05

Seeking an ocean of audience: Honolulu Civil Beat partners with Huffington Post to seek new revenue streams

When Honolulu Civil Beat launched three years ago, it took some contrarian stands. At a time when many civic-minded journalism startups were filing for nonprofit status, Civil Beat bet on succeeding as a for-profit. When many thought digital advertising would be the key driver of revenue growth, Civil Beat didn’t take ads. And when most news startups were trying to build an audience by giving away their content, Civil Beat was betting on subscriptions — and pricy ones, at that.

The news site’s latest move — partnering with The Huffington Post to launch HuffPost Hawaii this fall — is an attempt to balance out some of those bets in a quest for greater revenue diversity. HuffPost is, of course, dedicated to free content with wide reach, and its business is built around the kind of ads that Civil Beat ignores.

“Civil Beat is a model with a focus of trying to build something new — not just in how we write stories and deliver them, but how we pay for them,” site general manager Jayson Harper said. “Huffington Post in some sense provides us with a megaphone to give that to a larger population within the state who will hear and see who we are.”

Founded by eBay founder Pierre Omidyar and Randy Ching, Civil Beat focuses on politics, government, and investigations, and it charges a comparatively steep subscription price to read and comment on the site — $20 per month, higher than even The New York Times. That will remain. The two sites will run in parallel; Civil Beat will look and operate essentially the same way it does now, with some HuffPost Hawaii stories running off of its homepage and subscription prices unchanged. HuffPost Hawaii will exist as a separate site creating most of its own content, with Civil Beat stories excerpted there as well.

Civil Beat says the two sites will also maintain “separate staffs,” though that applies only to the writer-reporters, since editor Patti Epler and general manager Harper will be in charge of both sites.

The partnership is not a comment on Civil Beat’s commitment to subscriptions, Harper said, and the site is not in financial trouble. Still, “the subscription model is a very tough model to create complete financial sustainability,” he said.

Unlike Civil Beat, HuffPost Hawaii will have traditional advertising displayed alongside quick takes on Hawaii news and, according to HuffPost’s announcement, content like “slideshows of Hawaii beaches.” Harper said the Civil Beat organization will “absolutely” benefit from that revenue, though a confidentiality agreement barred him from releasing specifics on whether and how that money will flow to Civil Beat.

“The real reason we’re doing this is because we do see ways to grow revenue and it makes sense for both parties,” Harper said, referring to potential new Civil Beat subscribers, revenue from HuffPost Hawaii ads, and the additional brand awareness that may make their sponsorships more valuable.

Civil Beat, which currently operates with six reporters and two editors, will indirectly benefit from the collaboration because it will allow Epler to hire three new reporters for the HuffPost side. “I hope that the Huffington Post staff can be covering things like the governor’s press conference, or, say, a helicopter that goes down in downtown Honolulu — they’ll do that, and our staff won’t have to do that anymore,” Epler said. “That will free up some of our beat writers to do more in-depth things,” like a recent multi-part investigation into oversight of a polluted local waterway.

In search of revenue diversity

In Hawaii, as elsewhere, the media business is in flux. Financial troubles forced Honolulu Weekly this month to announce it was publishing its final issue (though its editor has now said she is attempting a revival). Three local TV news stations merged in 2009. The remaining Honolulu daily, the Star-Advertiser, also operates with a partial paywall (the site’s front page, breaking news, and blogs remain free). But there is still some audience loyalty: Ad Age recently reported that Hawaiians are paying attention, with 47 percent of Honolulu adults saying they read a daily newspaper, one of the highest numbers in the country.

As his model for diversifying Civil Beat’s revenue, Harper pointed to the Texas Tribune, which is grant-supported but also makes significant money from events and other sponsorships. It’s not an apples-to-apples comparison — the Tribune is a nonprofit, and Texas’ population is 19 times the size of Hawaii’s. Still, Harper is working to organize sponsored events and potentially allow for sponsors to claim parts of the Civil Beat site itself. “It’s not the only way to build a sustainable revenue model for online news organizations, but it’s a good start,” he said.

Epler and Harper recognize that The Huffington Post’s model is built around traffic and Civil Beat’s is not. But they hope their collaboration with The Huffington Post helps them with those sponsorship efforts, too. “To increase the share in the market of the stories we’re doing has tangible benefits — the more we can talk to our partners and see people talking about those stories, the better,” Harper said.

For The Huffington Post, the Civil Beat collaboration is more like its international partnerships — which include agreements with Le Monde for its French edition, Gruppo Espresso in Italy, and The Asahi Shimbun in Japan — than its other U.S. city verticals. Those international partnerships excerpt content from those news organizations, whereas verticals like HuffPost Chicago, Detroit, and Miami simply collect content related to those metro areas. In explaining the Huffington Post’s interest in Hawaii, Arianna Huffington cited her relationship with Omidyar and seemed to view the site as a chance to learn from the Hawaiian culture.

“As the world’s oasis for unplugging and recharging — and the home of the Aloha spirit — Hawaii is an ideal place to explore all these themes and to engage the community,” Huffington said in an email.

On Civil Beat itself, reaction to the partnership has been largely, well, civil — minus a few Facebook comments. “Some people were like, ‘This is the end of Civil Beat, nice knowing ya, the Huffington Post is going to take over,’” Epler said. But their model hasn’t changed, she insisted. “I wrote a column maybe two weeks ago saying, we’re not getting eaten by the HuffPost monster. That’s just not what’s happening.”

Photo of downtown Honolulu from Diamond Head by John Fowler used under a Creative Commons license.

15:00

“If you’re not feeling it, don’t write it”: Upworthy’s social success depends on gut-checking “regular people”

Back in November, the Lab’s own Adrienne LaFrance wrote a number of words about Upworthy, a social packaging and not-quite-news site that has become remarkably successful at making “meaningful content” go viral. She delved into their obsession with testing headlines, their commitment to things that matter, their aggressive pushes across social media, and their commitment to finding stories with emotional resonance.

Things have continued to go well for Upworthy — they’re up to 10 million monthly uniques from 7.5 million. At the Personal Democracy Forum in New York, editorial director Sara Critchfield shared what she sees as Upworthy’s secret sauce for shareability, namely, seeking out content that generates a significant emotional response from both the reader and the writer.


A slide from Critchfield’s PDF presentation.

Critchfield emphasized that using emotional input in editorial planning isn’t about making ad hoc decisions, it’s about making space for that data in the workflow, or “making it a bullet point.”


When I spoke with Critchfield after her talk, she underscored the way in which packaging content is Upworthy’s bread and butter (most likely WonderBread and Land o’ Lakes [Sorry, Don Draper]).

“If you watch people shop in a grocery store, 95% of the time they are scanning the shelves for the packaging, making the choices on that before they turn the bottle around and look at the nutrition information. People choose their media that way too. So you can have a piece of media with the exact same nutritional value in it with different packaging and the consumer is going to choose the one that appeals to them most,” she said.

But before you can package content, you have to create it — or at least, select it from out of the vastness of the Internet. The people who do that are Critchfield’s handpicked team of curators.

“Of the things we curate at Upworthy, I think our editorial staff is what we pride ourselves the most on curating. We really focus on regular people. We reject the idea that the media elite or people who have been trained in a certain way somehow have the monopoly on editorial judgement, what matters or should matter. So we focus almost exclusively on hiring non-professionally trained writers,” she says. “To be honest, it’s sometimes difficult for folks who have professional background to come into Upworthy and have success.”

In other words, Critchfield builds the element of genuine emotional response into her team by hiring people who were never trained to worry about what’s news, and what isn’t.

“I tell my writers, ‘If you’re not feeling it, don’t write it,’” says Critchfield. “We don’t really force people; we don’t let an editorial calendar dictate what we do. There will be big current events, and if someone on staff feels really passionate about it, then we cover it. And if there aren’t, then we don’t.”

The vast majority of Upworthy’s traffic comes from social media sites, where Critchfield says conversation is more valuable to the reader anyway. Some of their biggest hits have been about the economy, bullying and, recently, as displayed in her talk, funding cancer research after a young musician died of pancreatic cancer.

Critchfield says she encourages her curators to have huge vision for their posts. If they don’t expect it to get millions of views, then it’s not worth posting. Adam Mordecai is a great example of that kind of intuition, she says. He’s the guy who posted “This Kid Just Died. What He Left Behind is Wondtacular,” the video about cancer that ended up raising tens of thousands of dollars. (The original YouTube video got 433,000 Facebook shares; Upworthy’s got 2.5 million.)

Trained journalists are often rubbed the wrong way by the idea of writing headlines like that, or being asked to spend so much time on them. (Critchfield says instead of spending 58 minutes writing a story and 2 minutes on a headline, most journalists would be better served by spending 30 or 40 minutes on their piece and 20 to 30 on their headline. “People look at me and say that’s crazy, I don’t have time, I would never do that,” she says, “and they walk away all sad. That’s happened to me over and over again.”)

“I have a broadcast journalist who just came in and said, ‘Sara, I just can’t get over it. Every time I write ‘wanna’ in a headline, I feel like I’m going to hell,’” she says. “You have to match appropriately to the context. You’re competing — people on Facebook are at a party. They’re around friends, they’re trying to define themselves, they’re trying to look at baby pictures. You have to join the party, but be the cooler kid at that party. You’re not going to do it by speaking formally to people who are there to have fun.”

Fighting that training can be hard, which is why Critchfield has so carefully assembled a team of “normal people.” “In the curation of the staff, I look for heart. What moves this person? There are people on staff — I have an improv comedian, I have a professional poker player, I have someone who works for the Harlem Children’s Zone, I have a person who used to be a software developer,” says Critchfield. “What they’re trained in isn’t as important as the compilation of a group of people with various hearts and passions.”

Or at least mostly normal people. Femi Oke was a radio producer when she decided to apply for a job at Upworthy. Oke says she was looking for a side gig that would give her experience with social media when she saw an ad for the job. “In typical Upworthy fashion,” she says, “it wasn’t a normal ad. It was a crazy ad — it was really intriguing.”

Oke describes going through an intensive training process at a retreat in Colorado where the curators learned to “speak Upworthy.” At first, she was surprised that the majority of the staff weren’t journalists, but soon the strategy of broadening the audience through diverse hires started to make more sense. But as the site’s popularity grew, Oke says it became increasingly important for curators to embrace traditional media tasks, like fact-checking. “As people started to see them as news, they started doing things news organizations would do,” says Oke. “They have such a fantastic reputation, they don’t want to ruin it.”

Since starting at Upworthy, Oke’s been hired to host The Stream, Al-Jazeera’s social media-centric daily online TV show, a concept born out of the Arab Spring. “At the end of each show, we have a teaser for what we’re doing on the next show. It would be a really heavy, intense, stodgy but accurate breakdown of what the next day’s subject is. I walked in and said, if we can’t make it a one-liner where I’m going to watch the show tomorrow, we shouldn’t be writing that,” Oke remembers. “My producers said, ‘Oh my god, she’s crazy.’”

So for a show on the 50th anniversary of the African Union, she might say “Happy 50th birthday, African Union! Are you looking good — or do you need a makeover?”

“That’s me anyway, but Upworthy made me even more certain that that was the style of broadcast that works for all media. It’s about being inclusive, accepting, and inviting people in.”

The one thing Critchfield says brings all the curators together is their competitive spirit and obsession with metrics. All Upworthy curators have direct access to the analytics for their work, and she says they are obsessed with testing different tricks. (How many more people will click this story if there’s a curse word in the headline?) But Critchfield says no post gets published without gut-checking its author to see how committed they are to the larger cause it’s meant to represent.

“We’ve really clarified internally that we can’t separate data analytics from human editorial judgment. Working to combine those two together is sometimes difficult,” she says. “What makes a thing viral can have just as much to do with how the person writing the piece up or working with the piece feels about it as it does with big data or listening tools.”

Photo by Esty Stein / Personal Democracy Media used under a Creative Commons license.

June 19 2013

23:48

Respond to this: The Boston Globe wants to offer iPhone users a native app and a cheap price

In the debate over native apps versus mobile websites, The Boston Globe is officially hedging its bets. And in the how-much-to-charge paywall debate, it’s going surprisingly low.

Today the newspaper is releasing a new native iPhone app as an extension of the subscription-based BostonGlobe.com. Given that the launch of the well-reviewed BostonGlobe.com two and a half years ago was considered a landmark in responsive design — meaning it reflowed readily from desktop to tablet to smartphone without the need for a native app — it’s an interesting move.

As is the price: A full subscription to the Boston Globe iPhone app will cost just $3.99 a month. That’s $47.88 a year. Compare that to the alternatives: At full freight, a seven-day print-plus-digital subscription runs $727 a year, while a digital-only subscription costs $207 a year. All for the same content.
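For readers who want to check the math on those price points (and on the roughly 75 percent discount mentioned further down), here is a minimal arithmetic sketch. The dollar figures come from the article; the variable names and rounding are mine, and the exact discount against the digital-only rate works out to about 77 percent.

```python
# Quick check of the Boston Globe price points quoted in this piece.
# Dollar figures are from the article; rounding is mine.

APP_MONTHLY = 3.99          # new iPhone app subscription, per month
DIGITAL_ONLY = 207.0        # BostonGlobe.com digital-only, per year
PRINT_PLUS_DIGITAL = 727.0  # seven-day print-plus-digital, per year

app_yearly = APP_MONTHLY * 12
print(f"App subscription per year: ${app_yearly:.2f}")           # $47.88

digital_discount = 1 - app_yearly / DIGITAL_ONLY
print(f"Discount vs. digital-only: {digital_discount:.0%}")      # ~77%

print_discount = 1 - app_yearly / PRINT_PLUS_DIGITAL
print(f"Discount vs. print-plus-digital: {print_discount:.0%}")  # ~93%
```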

“A year-and-a-half in, we’ve been able to grow the subscriber base with our own systems and relationship with the customer. But this gives us access to another group of people we think we haven’t been able to get as well,” said Jeff Moriarty, the Globe’s vice president of digital products and general manager of Boston.com.

That audience, Moriarty said, is smartphone users — in this case iOS users who enjoy reading in the app environment, like discovering material through Newsstand, and who take advantage of the simplicity of the app store’s one-click purchasing.

A supplement to responsive design, not a replacement

Screenshots of the new Boston Globe iPhone app.

The Globe is, like other smart news organizations, recognizing that mobile is the future of news consumption. But its big bet was on responsive design — in a sense, a bet on mobile news being consumed in the browser rather than in a dedicated app — even though there were plenty of discussions within the Globe at the time about the wisdom of having a separate iPhone app to supplement its new web strategy.

Moriarty said the core of the newspaper’s two-site strategy remains the same: Boston.com will be the destination for free news, entertainment, and information, while BostonGlobe.com will be the home to the Globe’s reporting. But the new app also acknowledges that there are some things responsive sites and mobile browsers can’t do. As HTML5 evolves, fewer and fewer of those things are about technological constraints. But apps do still have some advantages in discovery and attention — being there to be found in the App Store, having a default position on the user’s home screen, and in the case of Apple’s Newsstand, some advantages in terms of automated issue delivery. (Although some of those advantages are changing.)

But Moriarty said going native shouldn’t be interpreted as a step away from responsive design. Taking the app route opens up users to a familiar set of gestures for reading and navigating, enables push notifications, and allows for a higher degree of customization, Moriarty said, noting that he couldn’t think of anyone “who has been as aggressive with responsive web design as we have and come back to the app market to take advantage of that as a niche play.” And newspapers can use all the niches they can assemble these days.

The Globe app echoes the newspaper typography and general feel of BostonGlobe.com. It offers up all the main sections of the Globe, but also lets readers create a customizable feed of headlines or scan a selection of trending stories. Two additional features, weather and traffic, are likely to add some utility to the app for readers in the Boston metro area.

“We focused on making it feel very mobile-native as opposed to porting an existing presentation over,” said Michael Manning, the Globe’s director for emerging products.

The Globe built the app over several months in conjunction with digital design company Mobiquity. The overall goal, Manning told me, was to create a reading experience that puts efficiency and utility front and center. App users are able to browse sections at will, or just check in on their preferences and the latest trending stories, Manning said. “We picture it as allowing people to pull out sections of the paper,” he said.

It’s the first time the paper has experimented with offering readers a broader degree of control over what they want to read. Personalization is a way of providing additional value to mobile readers, particularly those who may only have a few minutes to read at any time, Manning said. Pulling in that data on readers can also be useful to the Globe. “For us it’s really about what are the right ways to nudge people towards customization and personalization without making it a core requirement to experience the app,” Manning said.

Aiming at price-sensitive readers

Maybe even more interesting is the pricing, which would seem to undercut the substantially higher rates the paper is charging elsewhere. For any digital subscriber who does all their BostonGlobe.com reading on an iPhone, it seems like a no-brainer to get the same product in a native wrapper for 75 percent off. The bet here is that the low pricing will attract more revenue from new iPhone-addicted subscribers than it will chase off from digital and print subscribers downgrading. (As of April 1, the Globe reported roughly 32,000 digital subscribers, which includes replica editions and e-reader subscribers.)

The app even offers something BostonGlobe.com doesn’t — zero advertising for paying customers. (Non-paying app users can read five chosen-by-the-Globe articles a day, with advertising.)

I asked Moriarty about that risk, and he said it was a possibility they’ve considered. He thinks more readers would be reluctant to give up the perks and mobility of the higher-priced bundles. In order for the Globe to succeed, it has to meet readers at different levels, whether it’s for free on Boston.com, within the Boston Globe app, or in print, Moriarty told me. The hope is that the app could be a doorway into a broader connection to the Globe, he said.

“We don’t anticipate a lot of switching there,” Moriarty said. “We hope it’s a place where people will step into the Globe products and appreciate it and want it in other places as well.”

The Globe’s move could be the first of a number of similar shifts to seek out new products at lower price points. The New York Times Co., the Globe’s parent company (for at least a little while longer), announced in April that it would debut new cheaper and more expensive digital products to complement its existing packages.

Those moves come amid some industry-wide concerns that digital paywalls may be proving more effective at keeping some traditional newspaper readers than at attracting younger ones, who might be priced out by higher rates. The Times Co.’s announcements were specifically put in the context of The New York Times itself, not the Globe, but it seems that similar ideas are at work just up I-95.

June 18 2013

15:37

The Times of London, navigating audience with a strict paywall, retires its opinion Tumblr

When you bet on a strict, un-leaky paywall as The Times of London has, you’re forced to get creative about how to put your work in front of new audiences — particularly if you’re trying to influence their opinions. Unlike its fellow Times across the Atlantic, the U.K. paper has chosen not to allow a set number of free articles per month or a number of free routes around the paywall.

So a year ago, The Times set up a Tumblr for its opinion content, with the aim of giving “a flavour of what our columnists and leader writers do, how they think, and what influences their writing.”

After an initial pace of 80 or more posts a month, activity fell off, and earlier this month the Times Opinion Tumblr was shut down, with editors announcing they would be moving all opinion content back to its original home on the newspaper’s main site.

“We wanted to see if it attracted new readers to The Times and were very clear, with ourselves and our readers, that it was an experiment to see how it could work for us. It flourished in parts, but we’ve come to the conclusion that it wasn’t quite right for us,” communities editor Ben Whitelaw wrote in a post that also appeared on the Times Digital Development blog.

The Times reactivated its Comment Central opinion blog — behind the paywall — on the same day that the Tumblr blog was shuttered. Whitelaw wrote that posts to the blog would occasionally be free-to-access.

Nick Petrie, The Times’ social media and campaigns editor, told me that the Tumblr page was part of an effort to draw in new digital subscribers to TheTimes.co.uk. Regular Times columnists like Oliver Kamm and Daniel Finkelstein posted shorter “off-the-cuff” pieces on the page, which were freely viewable to all visitors. Times Opinion had amassed 66,000 followers since its launch, Petrie said, “but it wasn’t driving traffic back to the site.”

“Tumblr seemed like a good, light, easy-to-use platform that we could use to give people a taste of our comment and opinion, which is obviously the type of journalism that the Times is renowned for,” Petrie explained. “There was a hope that pushing out a small amount of original journalism, of original comment and opinion, would further enhance the idea of giving people a taste of what’s on offer if they became a subscriber.”

Reaching an audience to influence

What to do about opinion writing behind a paywall is a question newspapers have dealt with as long as there have been paywalls. Opinions, after all, are meant to influence, and influence would seem to grow along with the audience reading them. The Wall Street Journal, a paywall early adopter, committed early on to posting many of its opinion pieces online for free even while most news content was subscriber-only. Meanwhile, The New York Times took the opposite approach in the mid-2000s with TimesSelect, which kept the news free but put the newspaper’s columnists behind a paywall.

(The Wall Street Journal also began posting pieces from its editorial page on an Opinion Journal Tumblr, but back in 2007; like the U.K.’s Times, the Journal also stopped updating the page about a year after its debut.)

Petrie said that The Times had not specifically set up analytics for the Times Opinion Tumblr, so the editors aren’t sure what kind of traffic the page generated. According to comScore data, The Times has seen a substantial increase in traffic over the past year, from 748,000 unique worldwide visitors in April 2012 to nearly 1.5 million in April 2013 — but that’s still far behind other British newspapers without strict paywalls such as The Guardian, which has over 18 million monthly uniques in the United States alone and well over 30 million worldwide.

The Times, owned by the soon-to-split News Corp., remains on shaky financial ground; last week, acting editor John Witherow announced that the paper would be cutting 20 editorial jobs as a result of the parent company’s decision to separate its newspaper and entertainment holdings, The Guardian reported. The Times has seen a major decline in online readership since erecting the paywall in 2010.

“The idea is that everything that we publish is worth being paid for,” Petrie said.

Teaser pages, which allow readers to view the first 100 words of every article, were integrated into the Times site in October 2012 and may be a driver of The Times’ increased traffic. Only 881,000 unique visitors came to the site in October 2012, according to comScore — a modest increase from the previous spring.

After the 100-word previews became a standard part of the site, Petrie said that the opinion Tumblr “became slightly defunct in that moment… We’re pursuing a strategy that essentially, we want to bring people in to see our journalism, rather than take our journalism out of our space — that’s why we’ve relaunched the Comment Central blog, which had been incredibly popular before we started charging.” That blog will soon feature podcasts on opinion topics, and Petrie noted that The Times is developing new strategies to attract paying subscribers to the site.

“That’s something we’re working on at the moment, but we’re not ready to talk about that yet,” he said.

June 17 2013

09:48

Monday Q&A: Designer David Wright, departing NPR for Twitter, has just one favor to ask

David Wright is an award-winning designer who, in his time at NPR, worked on everything from their mobile music platform to NPR’s homepage design. Wright has spent a lot of time sharing his design philosophy with the news world, trying to explain how he built what he built, but also trying to make news managers understand the importance of making design a priority early on.

But now, Wright is leaving the news world behind — sort of. He’ll be moving over to Twitter, to work with what he considers an all-star team of web platform designers. (He joins API whiz Daniel Jacobson, now at Netflix, as NPR talent to move to prominent positions in the technology world — not the most common path.) Though not entirely sure what projects he’ll be working on, Wright says he has a lot of big ideas for simplifying Twitter and making it a bigger part of a variety of websites. And while he won’t be working in a newsroom anymore, Wright predicts he’ll learn a lot about how people are sharing and consuming information that could, down the road, be of great value to publishers.

Days before his final departure, we chatted about building platforms for distribution of audio, narrowcasting, Twitter on steroids, and World War II-era telephone operators.

O’Donovan: So! Obviously, there’s a lot of exciting stuff going on for you. How does it feel to be packing up and heading out of NPR?
Wright: I think that I have not yet realized how hard it will be for me to walk out of this building on Friday afternoon.

There’s a lot of really amazing stuff going on here, and it’s bittersweet because I’ve been really excited about what we’ve done here, and I don’t know if I really realize what it will be like to leave some of this amazing work unfinished.

O’Donovan: What are some of the bigger projects that are hanging out there, that you’re passing on? How do you and with whom do you hope to see them completed?
Wright: Well, that they will be completed is not really a question. They will be. The big ones that are going on right now are — the most obvious one is we’re in the midst of a fully responsive redesign of NPR.org that we began last fall and are picking away at, section by section. We’ve recently launched the small-screen version of a redesigned and rethought homepage, and soon we’ll be releasing that to more viewports and moving on to the rest of the important pages, not just NPR.org. So that’s exciting. And I think in many ways the team is moving at a breakneck pace, and I’ll be excited when the whole site is finally launched in this unified, rethought visual vocabulary. It’s amazing work, I’m super proud of it and the team that’s in the trenches right now.

The other one that’s cool is one we haven’t really launched publicly yet, but we’re thinking a lot about what a reimagined kind of radio experience would feel like — taking some of the best of on-demand pieces that we know and love from services like Rdio and Spotify and Pandora and thinking about how public radio fits into that picture. So a lot of experiments that we’re taking small steps with. It’ll make me sad to not really be as involved with those anymore.

O’Donovan: The last couple years, you’ve spoken a lot about how you think about design and the fact that it’s different from storytelling. Can you talk a little about how — and I know it’s obviously been a dynamic experience — how your philosophy there has changed over time and if there are any big elements of it that have changed since you started?
Wright: I think that obviously the more that I’ve worked with what has become here, over the time that I’ve worked at NPR, the product team has really become a — I thought we were pretty high performing when I got here, but the kind of talent that we’ve been able to add to the team, and the methods and the processes that we use to create digital products has become more and more refined over time. I think that it’s fair to say that any of my thoughts that I’ve shared publicly are certainly the synthesis of my ideas, but they’re so greatly informed by the amazing people that I’ve worked with here.

So I think as we’ve gotten better at making stuff, my thoughts and how we could refine the process has really gotten a bit more sharp. But I think what’s most fascinating, and maybe what I’m most proud of leaving this building, is to be able to look back and see what an important part design has been able to play in the making of products here. I think it’s easy for lots of people to recognize that it is an important ingredient, but sometimes it’s hard to get an organization to understand why that is, and I’m really fortunate to have had a lot of really willing people here at NPR who’ve heard that message and have really embraced it. I think that’s changed a lot as we’ve sort of matured on our own, and our own understanding of what makes good products, we’ve really been able to convince a lot of folks here that design is really at the core of helping us figure out what problems to solve and how to solve them and most importantly how to solve them well.

O’Donovan: One of the questions I had for you was: How do you explain to management or people who are really editorially focused the importance of design? You’ve said that it’s the number one problem journalism is facing. Do you still feel that way? And for people who are struggling to make that clear, how would you advise them?
Wright: Without calling out any specific products that are out there, I think we can all say that we’ve used and experienced journalism on platforms and in situations that were less than satisfying. It’s about being pretty unbiased about what we think makes a good experience, a great experience, and a terrible experience. They’re easy cases to show. Do we want to be more like this, or do we want to be like this? Nobody argues with the fact that everybody wants to create a good experience, but I think the most important thing that we as design thinkers can do is help explain how design is not really something that is an option.

If you want to create a good product, it can’t be a condition — it has to be something that is an absolute. You must include it in your process, in order to create efficiencies and make things that are fantastic and meaningful to people and beautiful and useful. It’s really about calling out these examples and saying: Here’s a perfect case where design was involved from the beginning and it made this product better. And whether you’re editorial, or you’re a manager or a coder or a designer, you can look at those and from a pretty unbiased point of view say, Yes, you’re right, that’s better because design was involved.

O’Donovan: So, and you can explain to me the extent to which this is accurate, but assuming that you’re taking a step away from designing for news, as you look farther down the road, for people who are still in it, what are the next major hurdles? If you were still working for NPR, what would be the next areas you wanted to tackle?
Wright: I think NPR is a bit of an interesting animal, only because the kind of content that we deal with is a little bit different given how audio-centric much of what we do is. But I think it’s going to be really interesting for organizations who are wrestling with really getting their content to appear how they want it to on many different platforms. I think that’s crucial. If a news organization is really not thinking beyond — I’m stating the obvious here — if a news organization is not thinking beyond the desktop browser, I think that’s going to become much more problematic in the coming years.

We’ve been really good at building stories and trying to express what I like to call editorial intention in one viewport size on the desktop. We can go to any one of our home pages; anybody in the news business can go to a homepage when it’s a papal conclave story or a Boston Marathon or an election night. We know those patterns and we’re good at them. We can reflect them well on the desktop.

I think we have a harder time thinking about how to take what editors can do, what news professionals do, how they express themselves in other places, and separating them from the desktop page. So figuring out ways — news professionals need to express hierarchy and importance of stories and that this one is louder than this one — figuring out ways to make sure that works everywhere, I think is going to be a really big challenge for a lot of organizations, but a very important one to solve.

O’Donovan: You said at one point, I think, that you expected to see NPR rather quickly have a larger audience on mobile than they did for desktop. How close are you to that?
Wright: I think, as far as numbers go, we’re not there yet. But a little bit of that is from the hip, and it will be interesting to see if history agrees with my proclamation there. I think this is true for more and more organizations; I can’t really name any off the top of my head, but I’ve definitely read anecdotally about how mobile traffic is catching up for everyone a lot faster than anyone really would have thought. There’s really no surprise in that. I think that should really be expected. If anyone’s paying any attention to any of our competitors in this space, which they all should be, that’s not a big surprise.

But yeah, I think we’re close — I think that every month we’re really seeing traffic growth across the board on most of our platforms. On the desktop, for sure, it’s incremental, but it’s really quite a bit more pronounced on the mobile web for us. And I think it can only continue, given the number of devices and potential people that we can reach.

O’Donovan: You mentioned how NPR has a unique situation because of being predominantly audio-focused. But I’ve heard from at least a handful of people who really think audio is going to become a much more important part of everyone’s strategy, in the same way that we talk about video. Do you think there are, in your mind, any big design takeaways that people who might be looking to get more into the audio game would want to know about?
Wright: Yeah, I think so. I would certainly not claim that NPR has solved every problem in this space, and in fact, I think if you asked a lot of people here, we still have some pretty major problems to solve in terms of how we distribute audio effectively.

I think that SoundCloud is doing a great job of creating some really interesting innovations in the space. Anybody could look at them and say that seems like a really solid platform for audio distribution. I think, for us, the most important thing is to think about why — the distribution could be anybody’s game and I think that, especially for organizations who aren’t very well resourced, nobody wants to. Just like we don’t all want to rebuild the exact same CMS at great expense and very little gain, I don’t know that everybody just wants to invest in building audio delivery platforms.

But I would say that I think it’s so important to understand why people gravitate toward audio, the same way they gravitate toward video or photography. What is the recipe, or the formula, that goes into creating compelling audio that matters? I think it has much less to do with design of the experience right now and much more to do with what makes great audio great audio. I think we’ve all been fans of podcasts that are amazing, and there are certainly lots and lots of things not produced by the public radio community that are amazing and that we love. There are lots of things that public radio creates that are amazing and we love. But that has a whole lot more to do with the content than the presentation and delivery. There’s lots of room for innovation there. My best advice to anybody who wants to get into that game is to really think a lot about why people love it.

O’Donovan: I was looking over some presentations you’ve given in the past, and one of the things that you emphasize is that, while you haven’t served as a storyteller or journalist at NPR, you have taken away from working around journalists not only the importance but the ease with which it is possible to ask questions and get out there and talk to people about the things you want to know about. So I want to move into talking about your next step at Twitter, and how you see that playing into the rest of your career.
Wright: Anybody who wants to be successful at making products has to be able to draw on all of the places where we are when we’re making things. Often the most important voice on your team is the user’s. We have a lot of really great ethnologists and design thinkers and people who are creating content, but the people who are consuming it are often absent from that conversation. So it’s less about whether this is a good idea or not. I think it’s almost always a good idea, at some stage in the process, to really incorporate users and listeners into what we’re making.

Journalists, if they’re not doing it, they should be, because they’re already really good at that. They’re good at talking to people and asking hard questions and figuring out how to get somebody to say something meaningful, even if it’s hard to coax it out of them. It’s a natural fit. So if you’re trying to convince journalists and news organizations that talking to people about what you make is a good idea, it’s easy to tell them by saying, you’ve already done this. You already do this.

As far as the next phase in my career, I’ve fortunately had the great pleasure of working shoulder to shoulder with some really amazing journalists, storytellers, and reporters, and people who know how to chase stories, and I think I’ll always feel in some way like I’d like to try to harness a lot of what I’ve witnessed these talented people do. In my own practice, whether it’s outside of a newsroom or in a newsroom, I’d always like to channel that as best I can.

You know, What would David Gilkey do in this scenario? He wouldn’t be shy about approaching this curmudgeonly user? No, he wouldn’t.

O’Donovan: How do you approach the curmudgeonly user? How do you go about getting that opinion?
Wright: There are a million ways that organizations do it, but at NPR we use a mix of what I’d consider three major kinds of audience feedback.

The first one is really kind of faceless: field surveys. We do these the most infrequently, but basically we’ll say we really need to get a handle right now on how we think people are consuming digital news of this sort. We’ll put together a fairly lengthy survey and field it with the understanding that by the time we process the results, they’ll already be a little bit dated. But it gives us a really good snapshot in time of how many people really listen to NPR, how many people are looking at news sites, how many people are reading newspapers, how many people are watching TV news, along with demographic information about all of that. We do it pretty infrequently, but it gives us pretty low-resolution blocks of people we need to be thinking about, and it helps us figure out where opportunities are.

The second kind of testing or conversation that we have really has to do with our ability to have long-term conversations with people about stuff that we make. So existing users of products — we have opportunities that are much more regular than large surveys — opportunities to reach out to people that are on our listener panel, or other people that we can intercept or make callouts in social media and say, “Take five minutes and tell us what you think of this particular feature.” We can ask questions that way. It certainly puts more of a face on things and we can get very specific about product features.

And then the most specific thing we try to do is actually bring real life users into our world and sit down with them and lead them through testing that says, “Hey, we made this thing, it’s kind of half baked, and we want you to use it and tell us what you think about it.” Pretty standard user testing.

O’Donovan: I do want to talk about what’s coming up for you. How much do you know about what you’re doing there, what are you really excited about, what do you hope to see come out of it?
Wright: I wish that I had a lot more information — well, no, I’m fine having a vague sense of what’s going on, but, you know, we know what Twitter is and what Twitter does and it was really exciting for me to be able to visit with that team and learn about the kinds of things that they’re tackling and the things they’ve built and some of the plans they have for the future.

To the extent that I can be specific about what I’ll be working on, what I do know is that I’ll be focused at first on a platform team. I’ll be working with the Twitter for Websites team, figuring out interesting ways to make Twitter appear in more places than it does today. The thing I’m most excited about is that I work with a very talented team at NPR, and I know the same will be true of the team I’ll be working with at Twitter — just a lot of really smart people that, as a designer, I’ve really respected and followed a lot in my design career.

To have so many of those people — like Doug Bowman and Mike Davidson, founder of Newsvine — just really smart, smart thinkers who, like I said, are respected web design heroes of mine. I’m really excited to be able to go and solve problems with them, to get up in the morning and go to work and try to figure out how to make Twitter better, which is great. I think that the other part that is a huge selling point for me is, there’s lots of reasons that my family wanted to get to California and be there and be a part of the amazing community that’s happening, and the amazing design community that is part and parcel of the Bay Area.

But for me to leave news is hard. It’s really hard. But what’s interesting is that while I feel like I’ll be leaving journalism, going to work at Twitter, I don’t have to squint very hard to see how involved I still will be in news. I think it’s a really fascinating platform for me, as a user and an observer and a guy who spent a lot of time in newsrooms, to watch and see what I think it could do.

O’Donovan: In the future, how do you see people, specifically journalists but also more broadly, using Twitter differently than they do now? What would you like to build for, for what kind of usage?
Wright: That’s something I’m really looking forward to learning more about as a person on the inside, because I think that even though I’ve spent a considerable amount of time talking with folks there about what they’re trying to build and what they need help building, I still don’t think I have a full enough picture to really know what is even in the realm of possibility. I have lots of sort of fantastic dreams about what could happen, what a sort of Twitter-on-steroids might look like — and maybe even a simpler version of Twitter. I think there are many things. It’s probably too much for me to speculate right now.
O’Donovan: Well, give us one fantasy. What’s the craziest Twitter-on-steroids fantasy?
Wright: Wow.

I think that there’s something fascinating to me about the range of information that you can get from Twitter. We often talk a lot at NPR about making sure that people get their vegetables and also get really delicious pieces of candy that we can distribute through the radio. What I love about Twitter is that, in a combined stream, I can see this heartbreaking story or updates from people who are literally fighting for their lives in Egypt as they are in the midst of the revolution, and then, you know, followed by an update from somebody who’s, you know, explaining how hungover they are because they were at their favorite bar. I think that, as a medium, like, what else does that? That is never part of the presentation that happens on our broadcast news.

This is certainly not a Twitter-on-steroids idea, but in terms of harnessing the ability to narrowcast to people in really specific ways, I think the potential reach is enormous. My mom can find value in it in a way that manifests itself very differently from the way it does for me, but we can get the same value out of it with very different content. The challenge worth wrestling with is: How do you create an experience that will be as useful for my mom as it will be for me, using the same basic parts and concepts but obviously delivering very different content? That’s a fascinating problem to solve, and I’m excited to roll my sleeves up and give it a go.

O’Donovan: Was it you who was tweeting about explaining Twitter to your grandmother recently?
Wright: It was. Absolutely it was. I believe the thing I said after that was if she gets sudden onset dementia, I’m going to feel partially responsible.

Yeah, the thing that’s cool, though, is that her story is great. She was a telephone operator back in the day — we’ve all seen the photos of operators plugging all the lines into one of those boards. One of my favorite stories she told me was that she was manning a board on the night of V-E Day, and as soon as word got to the United States via the phones that victory in Europe had happened, she said the board lit up!

So I was trying to tell her: This is what we do now. We don’t use phones anymore — this is what we do. The fact that Twitter would blow up with this information is how we know it. She said, “I think I understand.” I said, “All right, well, it’s the same thing.”

O’Donovan: I guess it’s not entirely indigestible.
Wright: Yeah, there was part of it she thought was pretty cool.
O’Donovan: We talked a little bit at the beginning about designing for audio, and I’d be curious to know if you think those skills will come into play.
Wright: We haven’t talked specifically about audio, but we had a lot of really great conversations about the experiences that I’ve had designing for news organizations and bringing with me information about how publishers specifically are using lots of different tools, Twitter included.

I’ve built really — I wouldn’t call them large muscles, I’d call them interesting muscles — over the last 12 years, thinking a lot about these problems. I think it would be really hard for me not to apply some of those skills. It’s become such a part of my DNA that I’m interested in seeing: How do those skills fit and work? How are those patterns applied outside of a newsroom?

O’Donovan: I would imagine you also bring some insight about what publishers want or what they think they want from ads. I don’t know if you have thoughts about Twitter’s different advertising strategies lately, but you’ve said in the past that you think publishers need to be thinking really differently about advertising.
Wright: What I said in my conversations with people at Twitter was, if you’re looking for a client services manager to go to newsrooms and write down the requirements of what would make Twitter great on news websites — I’ve been pretty vocal about some of the things news organizations aren’t doing as well as I wish they were, or things I wish they were doing differently — but I was pretty clear that I might not be the best guy to come in and take those kinds of requirements down.

So I’ve been critical about some of the things publishers have done and ads are chief among them. I think display advertising as we know it is in many cases a race to the bottom and is in many ways unsustainable. What I’m excited to learn more about is advertising models that are so native to the platform — and Twitter is a good example of that, I think the best is obviously Google’s AdWords, AdSense — and just feel like part of the experience. I won’t have a ton of good things to bring along with me from the point of view of what I think digital news design is doing in general. I think it will be more of the other way around, where I eventually might be able to bring some of the information and things I’ve learned working with a company like Twitter to help publishers figure out how to think differently about how they’re doing things. I expect it might go the other way.

O’Donovan: So, let’s say you’re leaving NPR and the whole news world throws you a party. And they say, What’s the one thing we can do for you while you’re gone? What would you want them to do?
Wright: Oh! This is easy. My one wish is for every news org to really understand the importance of having design thinkers — it doesn’t have to be a visual designer — with a seat at the table. It’s not about deciding what work gets done or how it gets done; it’s really about making design as important an ingredient in what we make as editorial: as reporters, writers, photographers, and technologists. Designers bring such unique value to the table and can help solve so many interesting problems in really thoughtful ways. The best present the news community could give me is to say: Everybody, designers are always at the table.

Photo by Casey Capachi via the ONA.

June 11 2013

17:48

Privacy versus transparency: Connecticut bans access to many homicide records post-Newtown

Editor’s note: Our friends at Harvard’s Digital Media Law Project wrote this interesting post on the new, Newtown-inspired limits on public access to information about homicides in Connecticut. We thought it was worth amplifying, so we’re republishing it here.

At a time when citizens increasingly call for government transparency, the Connecticut legislature recently passed a bill to withhold graphic information depicting homicides from the public in response to records from last December’s devastation at Sandy Hook Elementary School.

Though secret discussions drafting this bill reportedly date back to at least early April, the bill did not become public knowledge until an email was leaked to the Hartford Courant on May 21. The initial draft of what became Senate Bill 1149 offered wide protection specifically for families of victims of the December 14 shootings, preventing disclosure of public photographs, videos, 911 audio recordings, death certificates, and more.

Since then, there has been a whirlwind of activity in Connecticut. After a Fox reporter brought to the attention of Newtown families a blog post by Michael Moore suggesting the gruesome photos should be released, parents of children lost in the terrible shooting banded together to write a petition to “keep Sandy Hook crime scene information private.” The petition, which received over 100,000 signatures in a matter of days, aimed to “urge the Connecticut legislature to pass a law that would keep sensitive information, including photos and audio, about this tragic day private and out of the hands of people who’d like to misuse it for political gain.”

As this petition was clearly concerned about exploitation by Moore and others, Moore later clarified his position, emphasizing that the photos should not be released without the parents’ permission. Rather, he spoke about the potential significance of these photos if used voluntarily to resolve the gun control debate, in the same manner that Emmett Till’s mother’s decision to release a photo of her murdered son influenced the civil rights movement.

Like the petitioners, members of the Connecticut legislature responded with overwhelming support for SB 1149. Working into the early hours of June 4, the last day of the legislative session, the state Senate and House approved the bill 33-2 and 130-2, respectively. The bill as approved exempts photographs, film, video, digital or other images depicting a homicide victim from being part of the public record “to the extent that such record could reasonably be expected to constitute an unwarranted invasion of the personal privacy of the victim or the victim’s surviving family members.” The bill particularly protects child victims, exempting from disclosure the names of victims and witnesses under 18 years old. It would also limit disclosure of audio records of emergency and other law enforcement calls as public records, such that portions describing the homicide victim’s condition would not have to be released, though this provision will be reevaluated by a 17-member task force by May 2014.

Though more limited in scope than the original draft with respect to the types of materials that may not be disclosed, this final bill addresses all homicides committed in the state, not only the massacre in Newtown. It was signed by Governor Dannel Malloy within twelve hours of the legislature’s vote and took effect immediately.

From the beginning, this topic has raised concerns with respect to Connecticut’s Freedom of Information Act and government transparency. In addition to being drafted in secrecy, the bill was not subjected to the traditional public hearing process. All four representatives who voted against SB 1149 raised these democratic concerns, challenging the process and scope of this FOI exemption. This blogger agrees that in its rush to appropriately protect the grieving families of Newtown before the session ended, Connecticut’s legislature went too far in promoting privacy over public access to records, namely with respect to the broad extension of the bill to all homicides and limitations on releasing 911 calls.

Though influenced primarily by the plight of those in Newtown, SB 1149 makes no distinction based on the gravity or brutality of the homicide, or any other factor that may relate to the strength of the privacy interest. Instead, it restricts access to traditionally public records for all homicides in the state, reaching far beyond the massacre at Sandy Hook. As the Chief State’s Attorney Kevin Kane said with respect to photographs depicting injuries to victims and recordings of their distress, “it seems to me that the intrusion of the privacy of the individuals outweighs any public interest in seeing these.” Pressure to expand the bill as Kane desired came primarily from advocates of the legislature’s Black and Puerto Rican Caucus. They criticized the fairness of differentiating between the protection owed to Newtown families and that due the families of homicide victims in urban areas, where homicides occur more frequently.

This fairness- and equality-based argument raises valid concerns about how the legislature is drawing the line between protected and unprotected records: If limited to the shootings at Sandy Hook, then in the future, what level of severity would make visual records of a killing “worthy” of exemption from disclosure? But an all-inclusive exemption like the one Connecticut passed goes too far in restricting the public’s access to important public records. It restricts public access to information so long as a minimal privacy interest is established, regardless of the strength of the interest in disclosure. While restricting the release of photos of the young children who lost their lives this past December is based in a strong privacy interest that far outweighs the public or governmental interest, the same cannot be said for every homicide that has occurred or will occur in the state. The potential lasting consequences of this substantial exemption from the FOIA should not be overlooked or minimized in the face of today’s tragedy.

SB 1149 is also problematic in that it extends to recordings of emergency calls. While there is some precedent for restricting access to gruesome photos and video after a tragedy, this is far more limited with respect to audio recordings. Recordings have been made available to the public after many of our nation’s tragic shootings, including the recordings from the first responders to Aurora, 911 calls and surveillance video footage from Columbine, as well as 911 calls from the Hartford Distributors and Trayvon Martin shootings. While a compromise was reached in permitting the general release of these recordings, the bill includes a provision that prevents disclosure of audio segments describing the victim’s condition. Although there is a stronger interest in limiting access to the full descriptions of the child victims at Sandy Hook, weighing in favor of nondisclosure in that limited circumstance, emergency response recordings should be released in their entirety in the majority of homicide cases.

This aspect of the law in particular may have grave consequences for the future of the state’s transparency. Records of emergency calls traditionally become public records and are used by the media and ordinary citizens alike to evaluate law enforcement and their response to emergencies. The condition of the victim is an essential element of evaluating law enforcement response. As the president of the Society of Professional Journalists, Sonny Albarado, noted, “If you hide away documents from the public, then the public has no way of knowing whether police…have done their jobs correctly.” In other words, these calls serve as an essential check on government. As a nation which strives for an informed and engaged citizenry, making otherwise public records unavailable is rarely a good thing and should be done with more public discussion and caution than recently afforded by Connecticut’s legislature.

Connecticut’s bill demonstrates a frightening trend away from access and transparency. Colleen Murphy, the executive director of the Connecticut Freedom of Information Commission, has observed a gradual shift “toward more people asking questions about why should the public have access to information instead of why shouldn’t they.” It has never been easy to balance privacy rights with the freedom of information, and this is undoubtedly more difficult in today’s digital age where materials uploaded to the Internet exist forever. Still, our commitment to self-regulation, progress, and the First Amendment weighs in favor of disclosure. Exceptions should be limited to circumstances, like the Newtown shooting, where the privacy interest strongly outweighs the public’s interest in accessing information. As the Connecticut Council on Freedom of Information wrote in a letter to Governor Malloy, “History has demonstrated repeatedly that governments must favor disclosure. Only an informed society can make informed judgments on issues of great moment.”

Kristin Bergman is an intern at the Digital Media Law Project and a rising 3L at William & Mary Law School. Republished from the Digital Media Law Project blog.

Photo of Connecticut state capitol by Jimmy Emerson used under a Creative Commons license.

17:00

OpenData Latinoamérica: Driving the demand side of data and scraping towards transparency

“There’s a saying here, and I’ll translate, because it’s very much how we work,” Miguel Paz said to me over a Skype call from Chile. “It’s better to ask forgiveness than to ask permission. But that doesn’t mean that it’s illegal.”

Paz is a veteran of the digital news business. The saying has to do with his approach to scraping public data from governments that may be slow to share it. He’s also a Knight International Journalism Fellow, the founder of Hacks/Hackers Chile, and a recent Knight News Challenge winner. A few years ago, he founded Poderopedia, a database of Chilean politicians and their many connections to political organizations, government offices, and businesses.

But freeing, organizing, and publishing data in Chile alone is not enough for Paz, which is why his next project, in partnership with Mariano Blejman of Argentina’s Hacks/Hackers network, is aimed at freeing data from across Latin America. Their project is called OpenData Latinoamérica. Paz and Blejman hope to build a centralized home where all regional public data can be stored and shared.

Their mutual connection through Hacks/Hackers is key to the development of OpenData Latinoamérica. The network will make itself available, to whatever extent possible, for troubleshooting and training as the project gets off the ground and civic hackers and media types learn both how to upload datasets and how to make use of the information they find there.

Another key partnership helping make OpenData Latinoamérica possible is with the World Bank Institute’s Global Media Development program, which is run by Craig Hammer. Hammer believes the data age is revolutionizing government, non-government social projects, and how we make decisions about everyday life.

“The question for us is: What are we gonna do with the data? Data for what? Bridging that space between opening the data and how it translates into improving the quality of people’s lives around the world requires a lot of time and attention,” he says. “That’s really where the World Bank Institute and our programmatic work is focused.”

A model across the Atlantic

Under Hammer, the World Bank helped organize and fund Africa Open Data, a similar project launched by another Knight fellow, Justin Arenstein. “The bank’s own access-to-information policy provides for a really robust opportunity to open its own data,” Hammer says, “and in so doing, provide support to countries across regions to open their own data.”

Africa Open Data is still in beta, but bringing together hackers, journalists, and information in training bootcamps has already led to reform-producing journalism. In a post about the importance of equipping the public for the data age, Hammer tells the story of Irene Choge, a journalist from Kenya who attended a training session hosted by the World Bank in conjunction with Africa Open Data.

She…examined county-level expenditures on education infrastructure — specifically, on the number of toilets per primary school…Funding allocated for children’s toilet facilities had disappeared, resulting in high levels of open defecation (in the same spaces where they played and ate). This increased their risk of contracting cholera, giardiasis, hepatitis, and rotavirus, and accounted for low attendance, in particular among girls, who also had no facilities during their menstruation cycles. The end result: poor student performance on exams…Through Choge’s analysis and story, open data became actionable intelligence. As a result, government is acting: ministry resources are being allocated to correct the toilet deficiency across the most underserved primary schools and to identify the source of the misallocation at the root of the problem.

Hammer calls Africa Open Data a useful “stress test” for OpenData Latinoamérica, but Paz says the database was also a natural next step in a series of frustrations he and Blejman had encountered in their other work.

“Usually, the problem you have is: Everything is cool before the hackathon, and during the hackathon,” says Paz. “But after, it’s like, who are the people who are working on the project? What’s the status of the project? Can I follow the project? Can I be a part of the project?” The solution to this problem ended up being Hackdash, which was actually Blejman’s brainchild — an interface that helps hackers keep abreast of the answers to those questions and thereby shore up the legacy of various projects.

So thinking about ways that international hackers can organize and communicate across the region is nothing new to Paz and Blejman. “One hackathon, we would do something, and another person who didn’t know about that would do something else. So when we saw the Open Data Africa platform, we thought it was a really great idea to do in Latin America,” he says.

Blejman says the contributions of the World Bank have been essential to the founding of OpenData Latinoamérica, especially in organizing the data bootcamps. Hammer says he sees the role of the bank as building a bridge between civic hackers and media. “More than a platform,” he says, it’s “an institution in and of itself to help connect sources of information to government and help transform that data into knowledge and that knowledge into action.”

Giving people the tools to understand the power of data is an important tenet of Hammer’s open data philosophy. He believes the next step for Big Data is global data literacy, which he says is most immediately important for “very specific and arguably strategic public constituencies — journalists, media, civic hackers, and civil society.” Getting institutions, like newspapers, to embrace the importance of data literacy rather than relying on individual interest is just one goal Hammer has in mind.

“I’m not talking about data visualization skills for planet Earth,” he says. “I’m saying, it’s possible — or it should be possible — for anybody that wants to have these skills to have them. If we’re talking about data as the real democratizer — open data as meaningful democratization of information — then it has to be digestible and accessible and consumable by everyone and everybody who wants to access and digest and consume it.”

Increasing the desire of the public for more, freer data is what Hammer calls stoking the demand side. He says it’s great if governments are willingly making information accessible, but for it to be useful, people have to understand its power and seek to unleash it.

“What’s great about OpenData Latinoamérica is it’s in every way a demand-side initiative, where the public is liberating its own data — it’s scraping data, it’s cleaning it,” he says. “Open data is not solely the purview of the government. It’s something that can be inaugurated by public constituencies.”

For example, in Argentina, where the government came late to the open data game, Blejman says he saw a powerful demand for information spring up in hackers and journalists around him. When they saw what other neighboring countries had and what they could do with that information, they demanded the same, and Argentina’s government began to release some of that data.

“We need to think about open data as a service, because no matter how much advocacy from NGOs, people don’t care about ‘open data’” per se, Paz says. “They care about data because it affects their life, in a good or bad way.”

Another advantage Blejman and Paz had when heading into OpenData Latinoamérica was the existence of Junar, a Chilean software platform founded by Javier Pajaro, who was a frustrated analyst when he decided to embrace open data platforms and help others do the same. Blejman said that, while Africa Open Data opted for CKAN, using a local, Spanish-language company that was already familiar to members of the Hacks/Hackers network has strengthened the project, making it easier to troubleshoot problems as they arise. He also said Junar’s ability to give participating organizations more control fit nicely into their hands-off, crowd-managed vision for future day-to-day operation of the database.

Organizing efforts

Paz and Blejman have high hopes for the stories and growth that will come from OpenData Latinoamérica. “What we expect from these events is for people to start using data, encourage newspapers to organize around data themes, and have the central hub for what they want to consume,” Blejman said.

They hope to one day bring in data from every country in Latin America, but they acknowledge that some will be harder to reach than others. “Usually, the federated governments, it’s harder to get standardized data. So, in a country like Argentina, which is a federated state with different authorities on different levels, it’s harder to get standardized data than in a republic where there’s one state and no federated government,” says Paz. “But then again, in Chile, we have really great open data and open government and transparency laws, but we don’t have great data journalism.” (Chile is a republic.)

Down the road, they’d also like to provide a secure way for anonymous sources to dump data to the site. Paz says in his experience as a news editor, 20–25 percent of scoops come from anonymous tips. But despite developments like The New Yorker’s recent release of Strongbox, OpenData Latinoamérica is still working out a secure method that doesn’t require downloading Tor, but is more secure than email. Blejman also added that, for now, whatever oversight they have over the quality and accuracy of the original data they’re working with is minimal: “At the end, we cannot control the original sources, and we are just trusting the organizations.”

But more than anything, Paz is excited about seeing the beginnings of the stories they’ll be able to tell. He plans to use documents about public purchases made by Chile’s government to build an app that allows citizens to track what their government is spending money on, and which companies are being contracted with those dollars.

Another budding story exemplifies the extent to which Paz has taken to heart Craig Hammer’s emphasis on building demand. In Chile, there is currently a significant outcry from students over the rising cost of education. Protests in favor of free education are ongoing. In response, Paz decided to harness this focus, energy, and frustration into a scrape-a-thon (or #scrapaton) to be held June 29 in Santiago. They will focus on scraping data on the owners of universities, companies that contract with universities, and who owns private and subsidized schools.

“There’s a joke that says if you put five gringos — and I don’t mean gringos in a disrespectful way — if you put five U.S. people in a room, they’re probably going to invent a rocket,” says Paz. “If you put five Chileans in a room, they’re probably going to fight each other. So one of the things — we’re not just building tools, we’re also building ways of working together, and making people trust each other.” Blejman added that he hopes the recent release of a Spanish-language version of the Open Data Handbook (El manual de Open Data) will further facilitate collaboration between hackers in various Latin American countries.

With a project of this size and scope, there are also some ambitious designs around measurement. Paz hopes to track how many stories and projects originate with datasets from OpenData Latinoamérica. Craig Hammer wants to quantify the social good of open data, a project he says is already underway via the World Wide Web Foundation’s collaboration with the Open Data for Development Camp.

“If there is a cognizable and evidentiary link between open data and boosting shared prosperity,” Hammer says, “then I think that would be, in many cases, the catalytic moment for open data, and would enable broad recognition of why it’s important and why it’s a worthwhile investment, and broad diffusion of data literacy would really explode.”

Hammer wants people to take ownership of data and realize it can help inform decisions at all levels, even for individuals and families. Once that advantage is made clear to the majority of the population, he says, the demand will kick in, and all kinds of organizations will feel pressured to share their information.

“There’s this visceral sense that data is important, and that it’s good. There’s recognition that opening information and making it broadly accessible is in and of itself a global public good. But it doesn’t stop there, right? That’s not the end,” he says. “That’s the beginning.”

Photo of Santiago student protesters walking as police fire water cannons and tear gas fills the air, Aug. 8, 2012, by AP/Luis Hidalgo.

May 29 2013

16:51

What’s New in Digital Scholarship: Teen sharing on Facebook, how Al Jazeera uses metrics, and the tie between better cellphone coverage and violence


Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

This month’s edition of What’s New In Digital Scholarship is an abbreviated installment — we’re just posting our curated list of interesting new papers and their abstracts. We’ll provide a fuller analysis at the half-year mark, in our June edition. Until then, happy geeking out!

“Mapping the global Twitter heartbeat: The geography of Twitter.” Study from the University of Illinois Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, published in First Monday. By Kalev Leetaru, Shaowen Wang, Guofeng Cao, Anand Padmanabhan, and Eric Shook.

Summary: “In just under seven years, Twitter has grown to count nearly three percent of the entire global population among its active users who have sent more than 170 billion 140-character messages. Today the service plays such a significant role in American culture that the Library of Congress has assembled a permanent archive of the site back to its first tweet, updated daily. With its open API, Twitter has become one of the most popular data sources for social research, yet the majority of the literature has focused on it as a text or network graph source, with only limited efforts to date focusing exclusively on the geography of Twitter, assessing the various sources of geographic information on the service and their accuracy. More than three percent of all tweets are found to have native location information available, while a naive geocoder based on a simple major cities gazetteer and relying on the user-provided Location and Profile fields is able to geolocate more than a third of all tweets with high accuracy when measured against the GPS-based baseline. Geographic proximity is found to play a minimal role both in who users communicate with and what they communicate about, providing evidence that social media is shifting the communicative landscape.”
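
The “naive geocoder” described in that abstract, which matches the free-text, user-provided Location and Profile fields against a gazetteer of major city names, is simple enough to sketch. What follows is a minimal, hypothetical Python illustration of that approach, not the authors’ code; the tiny gazetteer and the tweet field names are invented for the example.

    # Minimal sketch of a gazetteer-based geocoder of the kind the study describes.
    # The gazetteer and the tweet field names here are invented for illustration.
    GAZETTEER = {
        "new york": (40.7128, -74.0060),
        "london": (51.5074, -0.1278),
        "chicago": (41.8781, -87.6298),
        "santiago": (-33.4489, -70.6693),
    }

    def geolocate(tweet):
        """Return (lat, lon) if a known city name appears in the user's
        free-text location or profile fields, otherwise None."""
        for field in ("location", "profile_description"):
            text = (tweet.get(field) or "").lower()
            for city, coords in GAZETTEER.items():
                if city in text:
                    return coords
        return None

    # Example: a profile that says "Journalist in Chicago" resolves to Chicago.
    print(geolocate({"location": "Journalist in Chicago", "profile_description": ""}))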

“Predicting Dissemination of News Content in Social Media: A Focus on Reception, Friending, and Partisanship.” Study from Ohio State, published in Journalism & Mass Communication Quarterly. By Brian E. Weeks and R. Lance Holbert.

Summary: “Social media are an emerging news source, but questions remain regarding how citizens engage news content in this environment. This study focuses on social media news reception and friending a journalist/news organization as predictors of social media news dissemination. Secondary analysis of 2010 Pew data (N = 1,264) reveals reception and friending to be positive predictors of dissemination, and a reception-by-friending interaction is also evident. Partisanship moderates these relationships such that reception is a stronger predictor of dissemination among partisans, while the friending-dissemination link is evident for nonpartisans only. These results provide novel insights into citizens’ social media news experiences.”

“Al Jazeera English Online: Understanding Web metrics and news production when a quantified audience is not a commodified audience.” Study from George Washington University, published in Digital Journalism. By Nikki Usher.

Summary: “Al Jazeera English is the Arab world’s largest purveyor of English language news to an international audience. This article provides an in-depth examination of how its website employs Web metrics for tracking and understanding audience behavior. The Al Jazeera Network remains sheltered from the general economic concerns around the news industry, providing a unique setting in which to understand how these tools influence newsroom production and knowledge creation. Through interviews and observations, findings reveal that the news organization’s institutional culture plays a tremendous role in shaping how journalists use and understand metrics. The findings are interpreted through an analysis of news norms studies of the social construction of technology.”

“Teens, Social Media and Privacy.” Report from the Pew Internet & American Life Project and Harvard’s Berkman Center for Internet & Society. By Mary Madden, Amanda Lenhart, Sandra Cortesi, Urs Gasser, Maeve Duggan, and Aaron Smith.

Summary: “Teens are sharing more information about themselves on social media sites than they have in the past, but they are also taking a variety of technical and non-technical steps to manage the privacy of that information. Despite taking these privacy-protective actions, teen social media users do not express a high level of concern about third parties (such as businesses or advertisers) accessing their data; just 9% say they are ‘very’ concerned. Key findings include: Teens are sharing more information about themselves on their social media profiles than they did when we last surveyed in 2006: 91% post a photo of themselves, up from 79% in 2006; 71% post their school name, up from 49%; 71% post the city or town where they live, up from 61%; 53% post their email address, up from 29%; 20% post their cell phone number, up from 2%. 60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings: 56% of teen Facebook users say it’s ‘not difficult at all’ to manage the privacy controls on their Facebook profile; 33% of Facebook-using teens say it’s ‘not too difficult’; 8% of teen Facebook users say that managing their privacy controls is ‘somewhat difficult,’ while less than 1% describe the process as ‘very difficult.’”

“Historicizing New Media: A Content Analysis of Twitter.” Study from Cornell, Stony Brook University, and AT&T Labs Research, published in the Journal of Communication. By Lee Humphreys, Phillipa Gill, Balachander Krishnamurthy, and Elizabeth Newbury.

Summary: “This paper seeks to historicize Twitter within a longer historical framework of diaries to better understand Twitter and broader communication practices and patterns. Based on a review of historical literature regarding 18th and 19th century diaries, we created a content analysis coding scheme to analyze a random sample of publicly available Twitter messages according to themes in the diaries. Findings suggest commentary and accounting styles are the most popular narrative styles on Twitter. Despite important differences between the historical diaries and Twitter, this analysis reveals long-standing social needs to account, reflect, communicate, and share with others using media of the times.” (See also.)

“Page flipping vs. clicking: The impact of naturally mapped interaction technique on user learning and attitudes.” Study from Penn State and Ohio State, published in Computers in Human Behavior. By Jeeyun Oh, Harold R. Robinson, and Ji Young Lee.

Summary: “Newer interaction techniques enable users to explore interfaces in a more natural and intuitive way. However, we do not yet have a scientific understanding of their contribution to user experience and theoretical mechanisms underlying the impact. This study examines how a naturally mapped interface, page-flipping interface, can influence user learning and attitudes. An online experiment with two conditions (page flipping vs. clicking) tests the impact of this naturally mapped interaction technique on user learning and attitudes. The result shows that the page-flipping feature creates more positive evaluations of the website in terms of usability and engagement, as well as greater behavioral intention towards the website by evoking greater perception of natural mapping and greater feeling of presence. In terms of learning outcomes, however, participants who flip through the online magazine show less recall and recognition memory, unless they perceive page flipping as more natural and intuitive to interact with. Participants perceive the same content as more credible when they flip through the content, but only if they appreciate the coolness of the medium. Theoretical and practical implications will be discussed.”

“Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits.” Study from the University of Alabama, Tuscaloosa, and the University of Texas at Austin, published in the Journal of Communication. By Yonghwan Kim, Shih-Hsien Hsu, and Homero Gil de Zuniga.

Summary: “Using original national survey data, we examine how social media use affects individuals’ discussion network heterogeneity and their level of civic engagement. We also investigate the moderating role of personality traits (i.e., extraversion and openness to experiences) in this association. Results support the notion that use of social media contributes to heterogeneity of discussion networks and activities in civic life. More importantly, personality traits such as extraversion and openness to experiences were found to moderate the influence of social media on discussion network heterogeneity and civic participation, indicating that the contributing role of social media in increasing network heterogeneity and civic engagement is greater for introverted and less open individuals.”

“Virtual research assistants: Replacing human interviewers by automated avatars in virtual worlds.” Study from Sammy Ofer School of Communications, Interdisciplinary Center Herzliya (Israel), published in Computers in Human Behavior. By Béatrice S. Hasler, Peleg Tuchman, and Doron Friedman.

Summary: “We conducted an experiment to evaluate the use of embodied survey bots (i.e., software-controlled avatars) as a novel method for automated data collection in 3D virtual worlds. A bot and a human-controlled avatar carried out a survey interview within the virtual world, Second Life, asking participants about their religion. In addition to interviewer agency (bot vs. human), we tested participants’ virtual age, that is, the time passed since the person behind the avatar joined Second Life, as a predictor for response rate and quality. The human interviewer achieved a higher response rate than the bot. Participants with younger avatars were more willing to disclose information about their real life than those with older avatars. Surprisingly, the human interviewer received more negative responses than the bot. Affective reactions of older avatars were also more negative than those of younger avatars. The findings provide support for the utility of bots as virtual research assistants but raise ethical questions that need to be considered carefully.”

“Technology and Collective Action: The Effect of Cell Phone Coverage on Political Violence in Africa.” Study from Duke and German Institute of Global and Area Studies (GIGA), published in the American Political Science Review. By Jan H. Pierskalla and Florian M. Hollenbach.

Summary: “The spread of cell phone technology across Africa has transforming effects on the economic and political sphere of the continent. In this paper, we investigate the impact of cell phone technology on violent collective action. We contend that the availability of cell phones as a communication technology allows political groups to overcome collective action problems more easily and improve in-group cooperation, and coordination. Utilizing novel, spatially disaggregated data on cell phone coverage and the location of organized violent events in Africa, we are able to show that the availability of cell phone coverage significantly and substantially increases the probability of violent conflict. Our findings hold across numerous different model specifications and robustness checks, including cross-sectional models, instrumental variable techniques, and panel data methods.”

Photo by Anna Creech used under a Creative Commons license.

May 22 2013

14:00

Jaron Lanier wants to build a new middle class on micropayments


“We’re used to treating information as ‘free,’” writes Jaron Lanier in his latest book Who Owns the Future?, “but the price we pay for the illusion of ‘free’ is only workable so long as most of the overall economy isn’t about information.”

Lanier argues that a free-culture mindset is dismantling the middle-class economy. In his estimation, the idea “that mankind’s information should be free is idealistic, and understandably popular, but information wouldn’t need to be free if no one were impoverished.”

Who Owns the Future?, like his 2010 book You Are Not a Gadget, is another manifesto attempting to rebuff what he sees as the contemporary ethos of the web. But the followup also refreshingly attempts to pose a solution, one where all participants in this information-based world are paid for what they do and distribute on the web. Throughout, it places particular emphasis on the ways digital technology has unsettled the so-called “creative class” — journalists, musicians, photographers, and the like. As he sees it, the tribulations of those working in such fields may be a premonition for the middle class as a whole. It’s “urgent,” he writes, “to determine if the felling of creative-class careers was an anomaly or an early warning of what is to happen to immeasurably more middle-class jobs later in this century.”

I recently spoke with Lanier and we discussed the ways he sees digital networking disrupting the media, why he thinks advertising can no longer sustain paid journalism, and why he misses the future. Lightly edited and condensed, here’s a transcript of our conversation.

Eric Allen Been: You were one of the early advocates of the notion that “information wants to be free.” An idea most media companies initially embraced when it came to the web, and one that now some seem to regret. Could you talk a little bit about why you changed your mind on this line of thinking?
Jaron Lanier: Sure. It was based on empirical results. The idea sounded wonderful 30 years ago. It sounded wonderful in the way that perfect libertarianism or perfect socialism can. It sounds right, but with all these attempts to make a perfect system, it doesn’t work out so well. Empirically, what I’ve seen is the hollowing out of middle-class opportunities and that there is an absurdity to the way it’s going. I think we’re not getting the benefits that I initially anticipated.
Been: When it came to journalism, what were some of those benefits that you originally expected? I imagine you then thought it would be a largely positive thing.
Lanier: Yeah. To use the terminology of the time, we — that is, me and others who were behind a lot of the ideas behind the Web 2.0 ethos or whatever — wanted to “supplant” or “make obsolete” the existing channels of journalism and the existing types of jobs in journalism. But what would come instead would be better — more open and all of that — and less intermediated. What happened instead was a little bit of what we anticipated. In a sense, the vision came true. Yes, anybody can blog and all that — and I still like that stuff — but the bigger problem is that an incredible inequity developed where the people with big computers who were routing what journalists did were getting all the formal benefits. Mainly the money, the power. And the people who were doing the work were so often just getting informal benefits, like reputation and the ability to promote themselves. That isn’t enough. The thing that we missed was how much power would accrue to the people with the biggest computers. That was the thing we didn’t really think through.
Been: Historically, technological advances have caused disruptions to industries, but they’ve also tended to provide new jobs to replace the wiped-out ones. There seems to be some optimism in a lot of quarters that journalism can eventually get on the right track, economically speaking, within the digital world. But you don’t think so.
Lanier: The system is slowly destroying itself. I’ll give you an example of how this might work out. Let’s suppose you say in the future, journalists will figure out how to attach themselves to advertising more directly so they’re not left out of the loop. Right now, a lot of journalism is aggregated in various services that create aggregate feeds of one kind or another and those things sell advertising for the final-stop aggregator. And the people doing the real work only get a pittance. A few journalists do well but it’s very few — it’s a winner-take-all world where only a minority does well. Yes, there are a few people, for instance, who have blogs with their own ads and that can bring in some money. You can say, “Well, isn’t that a good model and shouldn’t that be emulated”? The problem is that they’re dependent on the health of the ad servers that place ads. Very few people can handle that directly. And the problem with that is the whole business of using advertising to fund communication on the Internet is inherently self-destructive, because the only stuff that can be advertised on Google or Facebook is stuff that Google hasn’t already forced to be free.

As an example, you might have a company that makes toys and you advertise the toys on Google, and that might show up in journalism about toy safety or something. So journalists can eke out some money from people who sell toys. That’s kind of like the traditional model of advertising-supported journalism.

But every type of business that might advertise on Google is gradually being automated and turning into more of an information business. In the case of toys, there’s a 3-D printer where people print out toys. At some point, that will become better and better and more common, and whenever that happens, what happened to music with Napster will happen to toys. It’ll be all about the files and the machines that actually print out the toys. If the files that print out the toys can be made free, the only big business will be the routing of those files, which might be Google or Facebook handling that, and there will be nobody left to advertise on Google.

That’ll happen with everything else — pharmaceuticals, transportation, natural resources — every single area will be subject to more and more automation, which doesn’t have to put people out of work. The only reason automation leads to unemployment is the idea of information being free. It’s a totally artificial problem, but if journalists are counting on the Google model to live on, it won’t work. Google is undermining itself, and there will be no one left to buy advertisements.

Been: Speaking of advertising, I’m interested in hearing what you think about a lot of people currently lauding BuzzFeed and its use of native advertising. There’s a lot of talk about it solving “the problems of both journalism and advertising at once”, or it being some sort of guiding light for a “future of paid journalism.”
Lanier: Advertising, in whatever form, just can’t be the only possible business plan for information. It forces everybody to ultimately compete for the same small pool of advertisers. How much of the economy can advertising really be? It can’t be the whole market. Why on earth are Google and Facebook competing for the same customers when they actually do totally different things? It’s a peculiar problem. You’re saying that there’s only one business plan, one customer set, and everybody has to dive after that. It becomes a very narrow game — there’s not enough there for everybody. It could work out locally a little bit, but it’s not an overall solution.
Been: And your solution is what you call a “humanistic information economy.” Could you talk a little bit about how such a system would work?
Lanier: There are some theoretical reasons that lead me to believe that if you monetized a deeply connected open network, the distribution of benefits to people would look like a middle class. In other words, there would be a lot of wealth in a lot of people’s hands that could outspend any elite, which is critical for democracy and a market economy to survive. So one benefit is you could get a consistent middle class even when the economy gets really automated. It becomes a real information economy.

A humanistic economy would create a middle class in a new way, instead of through unions and other ad hoc mechanisms. It would create a middle class by compensating people for their value in terms of references to the network. It would create an expanding economy instead of a static one, which is also important. It’s built around the people instead of the machines. It would be a change in paradigm.

Been: In the book, you write: “If we demand that everyone turn into a freelancer, then we will all eventually pay an untenable price in heartbreak.” But a lot of what you’re proposing strikes me, in some senses, as a freelance economy.
Lanier: That’s right. What I’m proposing is actually a freelance economy, but it’s a freelance economy where freelancing earns you not just income but also wealth. That’s an important distinction to make. What I think should happen is as you start providing information to the network, it then will become a part of other services that grow over time.

So, for instance, let’s suppose you translate between languages, and some of your translations provide example phrase translations that are used in automatic translators. You would keep getting dribbles of royalties from having done that, and you start accumulating a lot of little ways that you’re getting royalties — not in the sense of contractual royalties, just little payments from people that are doing things that benefited from information you provided. If you look at people’s interest in social networking, you see a middle-class distribution of interest. A lot of people would get a lot of little dribs and drabs, and it would accumulate over a lifetime so you’d start to have more and more established information that had been referenced by you that people are using. What should happen is you should start accumulating wealth, some money that shows up because of your past as well as your present moment.

Been: So if I simply shared a link to a New York Times article on Twitter, for instance, would there be a payment exchange? If so, who would it go to?
Lanier: It would be person-to-person payments. Right now, we’re used to a system where you earn money in blocks, like a salary check, and you’re spending on little things like coffee or something. And in this system, you’d be earning lots of little micropayments all the time. But you would be spending less often. That terrifies people, but it’s a macroeconomic thing. I believe the economy would actually grow if information were monetized, and overall your chances would get a lot better than they are now.
Been: You say in the book that this person-to-person payment system is partly inspired by the early work of the sociologist and information technology pioneer Ted Nelson — in particular, his thoughts about two-way linking over a network. Could you talk a little bit about why you think this is a better way to exchange information?
Lanier: The original concept of digital networking that predated the actual existence of digital networking is Ted Nelson’s work from the 1960s. It was different from the networks we know today in a few key ways. All the links were two-way, for one. You would always know who was linking to your website — there would always be backlinks. If you have universal backlinks, you have a basis for micropayments from somebody’s information that’s useful to somebody else. If the government camera on a corner catches you walking by, and it matches against you, you’d be owed some money because you contributed information. Every backlink would be monetized. Monetizing actually decentralizes power rather than centralizing it. Demonetizing a network actually concentrates power around anyone who has the biggest computer analyzing it.
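Lanier is describing an architecture rather than an implementation, but the bookkeeping he has in mind — every use of a piece of information creates a backlink and routes a tiny payment to whoever contributed it — can be sketched in a few lines of Python. The class, the per-use rate, and the translation example below are illustrative assumptions, not anything Nelson or Lanier specified.

    # A rough, hypothetical sketch of Ted Nelson-style two-way links with
    # micropayments, as Lanier describes the idea. The class, the per-use
    # rate, and the example are invented for illustration only.
    from collections import defaultdict

    class TwoWayNetwork:
        def __init__(self, rate_per_use=0.001):
            self.rate = rate_per_use             # tiny payment per reference
            self.provenance = {}                 # resource id -> contributor
            self.backlinks = defaultdict(list)   # resource id -> who referenced it
            self.balances = defaultdict(float)   # person -> accumulated royalties

        def contribute(self, resource_id, contributor):
            """Record who provided a piece of information."""
            self.provenance[resource_id] = contributor

        def reference(self, resource_id, user):
            """Every use creates a backlink and a micropayment to the contributor."""
            contributor = self.provenance[resource_id]
            self.backlinks[resource_id].append(user)   # the two-way link
            self.balances[contributor] += self.rate    # the monetized backlink
            self.balances[user] -= self.rate

    # A translator's phrase pair keeps earning as automatic translators reuse it.
    net = TwoWayNetwork()
    net.contribute("phrase-pair-42", "translator_alice")
    for _ in range(10000):
        net.reference("phrase-pair-42", "translation_service")
    print(round(net.balances["translator_alice"], 2))   # 10.0

The point of the toy is only that provenance plus universal backlinks is enough bookkeeping for the dribs and drabs Lanier describes to accumulate over time.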
Been: Let’s talk about that last point. This is an example of what you call in the book a “Siren Server.” That is, computers on a network that gather data without conceding that money is owed to the individuals whose information is mined.
Lanier: That’s right. It’s my name for one of the biggest, best, most effective, connected computers on the network. A Siren Server is a big server farm — a remote unmarked building somewhere in the countryside near a river so it can get cooled. It has tons of computers that run as one. It gathers data from the world for free and does more processing of that data than normal computers can do. What it does with the processing is calculate several moves that the owners can make that put them at an advantage based on a global perspective.

If you’re Amazon, it means you keep track of everybody else’s prices in the world, including little local independent stores, so you can never be outsold. If a store wants to give a book away, Amazon will also do that, so nobody gets a local advantage. If you’re Google, it gives advertisers a way to use a behavioral model of the world to predict which options in front of you are most likely to steer you. If you’re a finance company, it’s a way of bundling derivatives in such a way that somebody else is holding the risk. It’s almost a cryptographic effort. If you’re an insurance company, it’s a way of calculating how to divide populations so you insure the people who least need to be insured. In all these cases, a giant computer calculates an advantage for its owner, and that global perspective overwhelms the local advantage that participants in the market might have had before.
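The Amazon example boils down to a simple rule applied at global scale: observe every competitor’s price and never let yourself be undercut. A toy version of that rule — with invented store names and prices, not anything drawn from Amazon’s actual systems — looks like this:

    # Toy illustration of the "never be outsold" rule Lanier attributes to a
    # Siren Server. Store names and prices are invented for illustration.
    def siren_price(own_price, competitor_prices):
        """Return a price at or below every observed competitor price."""
        lowest = min(competitor_prices.values(), default=own_price)
        return min(own_price, lowest)   # if a store gives the book away, match that too

    observed = {"local_indie_store": 14.99, "big_box_chain": 12.50, "giveaway_promo": 0.0}
    print(siren_price(15.99, observed))   # 0.0 -- the local advantage is gone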

Been: In the book, you call Craigslist a Siren Server, one that “created a service that has greatly increased convenience for ordinary people, while causing a crisis in local journalism that once relied on paid classified ads.” You write that it “has a tragic quality, since it is as modest and ethical as it can be, eschewing available spying opportunities, and yet it still functions as a Siren Server despite that.” So a Siren Server, in your mind, isn’t necessarily always a malevolent construction.
Lanier: That’s true. I don’t think there’s much in the way of evil or competitive intent. It’s the power of having one of the biggest computers. When you suddenly get power by surprise, it’s a seduction. You don’t realize that other people are being hurt. But if it wasn’t Craigslist, it would have been something else. Some computer gets a global perspective on everything and the local advantage goes away. Craigslist calculated away the local advantage that newspapers used to have.
Been: So far, the reviews of Who Owns the Future? have been largely positive. But in The Washington Post, Evgeny Morozov criticized it by saying “Lanier’s proposal raises two questions that he never fully confronts.” One being whether a nanopayment system would actually help the middle class once automation hits its tipping point. He cites cab drivers being replaced by self-driving cars and says: “Unless cabdrivers have directly contributed to the making of maps used by self-driving cars, it’s hard to see how a royalty-like system can be justified.”
Lanier: This has to do with the value of information. In the book I ask this very question — in the future, in the case of self-driving cars, it’s certainly true that once you’ve been through the streets once, why do it again? The reason is that they’re changing. There might be potholes, or there might be changes to local traffic laws and traffic patterns. The world is dynamic. So over time, the maps of the streets that cars drive on will need to be updated. The way self-driving cars work is big data. It’s not some brilliant artificial brain that knows how to drive a car. It’s that the streets are digitized in great detail.

So where does the data come from? To a degree, from automated cameras. But no matter where it comes from, at the bottom of the chain there will be someone operating it. It’s not really automated. Whoever that is — maybe somebody wearing Google Glass on their head that sees a new pothole, or somebody on their bike that sees it — only a few people will pick up that data. At that point, when the data becomes rarified, the value should go up. The updating of the input that is needed is more valuable, per bit, than we imagine it would be today. Cabbies themselves, that’s irrelevant. There won’t be cabbies. They’ll have to be doing other things.

Been: His other question is “how many [online] services would survive his proposed reforms?” Morozov brings up Wikipedia and says the “introduction of monetary incentives would probably affect authors’ motivation. Wikipedia the nonprofit attracts far more of them than would Wikipedia the startup.”
Lanier: But in what I’m proposing, Wikipedia would not pay you — it would be a person-to-person thing. I’m proposing that there’s no shop and people are paying each other when they create things like Wikipedia. Which is very different. If it’s going through a central hub, it creates a very narrow range of winners. If it’s not, it’s a whole different story.

The online services that would survive would be the ones that can add value to the data that people are providing anyway. Instagram could perhaps charge to do cool effects on your pictures, but the mere connections between you and other people would not be billable — that would just be normal. People would pay each other for that. The services would have to do more than they do now. A lot of services are just gatekeepers; they would not survive, and they shouldn’t. It would force people to up their game.

Been: Speaking of upping one’s game, you get a strong sense throughout the book that you think society is no longer future-minded. Towards the end, you write that you “miss the future.” What do you mean by that statement?
Lanier: It seems that there’s a loss of ambition or a lowering of standards for what we should expect from the future. We hyped up things like being able to network — and we understood it was a step on a path — but these days I call the open-source idea the MSG of journalism.

An example would be this: Take some story that would be totally boring, like garbage bags being left on the street. But if you say, “open-source software is being used to track garbage bags on the street,” there’s something about it that makes it seem interesting. And that sets a low bar for what seems interesting — a very unambitious idea of what innovation can be.

Photo of Jaron Lanier by Dan Farber used under a Creative Commons license.

May 21 2013

15:00

Tuesday Q&A: CEO Baba Shetty talks Newsweek’s relaunch, user-first design, magazineness, and the business model

A brand guru. That’s what they called Baba Shetty when he was hired away from advertising agency Hill Holliday by The Daily Beast to be the new CEO of The Newsweek Daily Beast Company.

Less than a month later, the company announced that Newsweek was putting an end to its print edition and going all-digital. Last week, Shetty released the beta version of the relaunched website, a simple, colorful, responsive, and easily navigable new home for the decades-old news brand.

Shetty began working with the magazine on a “Mad Men”-themed issue on retro advertising back in March 2012. So maybe it’s not surprising that the new site’s first feature article is an exploration of what makes contemporary television so addictive. Shetty has big plans for capitalizing on the historically respected Newsweek name, blending a New York Times-like metered paywall approach with an ambitious sponsorship model that will see a lot of creative ad work coming off the Newsweek desk.

On Monday, Shetty and I spoke about how he sees that plan unfolding, as well as some of his favorite new design features, bringing classic Newsweek covers into the digital space, and why ad agencies should act more like newsrooms. Here’s our conversation:

O’Donovan: So let’s start with the redesign! Congrats, first of all — very exciting.
Shetty: Oh, thank you.
O’Donovan: I’m curious, first, who you were looking to for inspiration with the redesign and what your major goals were.
Shetty: The audience is a combination of the people who’ve always looked to Newsweek for its sense of authority, its sense of editorial authority and its stature — its ability to offer perspective on the happenings in the world. But we also wanted to really innovate around the narrative formats for longform publishing on the web.

The real story of the Newsweek relaunch is that it allowed us to think about innovation in a way that really hasn’t happened much for professional journalism. Actually, there’s been a ton of innovation in microblogging and other formats — look at the Tumblr news from the last couple days. Enormous value from thinking about beautiful user experience for content consumption.

But really, a lot of the professional editorial products kind of slavishly follow a set of conventions that are all about maximizing pageviews. You look at a long article that might require seven clicks and page reloads to get through — and then there’s a lot of display advertising that is competing for attention with the actual content. We thought there was an opportunity to do for professional journalism what Tumblr and Pinterest and Flipboard, so many of the other innovative new startups, have done for other kinds of content.

So what we see with Newsweek is the user first. I’ve been talking about it as user-first publishing. The idea is, let’s deconstruct the sense of magazineness — not as a physical thing, but as a concept. The sense of magazineness is about a beautiful user experience. You think about your favorite magazine and sitting in your favorite chair at home and reading it — there’s a sense of editorial coherence. You know — the cover communicates a sense of editorial priority, there’s a table of contents that lends a sense of coherence to the issue. It’s a beautiful package that results.

But when magazines go digital, so much of that’s lost because of the conventions I talked about before — you slice and dice content into the slivers that we call pageviews, and it’s not a very satisfying experience to read professional journalism on the web.

So we really wanted to take a leap forward with Newsweek. In addition to the idea of the editorial stature and credibility of Newsweek, also creating a radically creative user experience around that content. I can talk about a few of the features if you think that would be useful.

O’Donovan: Yes, but I’m still curious about other projects, other sites, other redesigns, that you might have taken something from, or tried to emulate at all. Or maybe this is a ground zero thing. But for example, The New Republic’s redesign, or maybe Quartz — is there a trend?
Shetty: There really weren’t — we didn’t really emulate anything. What we were trying to do was stay true to Newsweek and what the ideal user experience would be.

The cover — there actually is a cover, and it was static in the first issue, and in future issues it will be interactive, video-based multimedia. It’s this idea of drawing a reader in to something that has great editorial prominence and priority, and we’re going to explore what the cover could be in the digital age. There is a persistent table of contents which is available to you at any part of the experience, and that lends a sense of completeness and coherence to this experience.

O’Donovan: Yeah, the table of contents gives an element of navigability — it helps you understand the fullness of the product.
Shetty: Exactly. It’s persistent. No matter where you are, in an article or on a page, when you mouse over the window, the table of contents dissolves into view, and you can access it. So there’s a sense of, again, an ideal concept of magazineness, and part of it is this sense of complete control over the content consumption experience. So we thought, we’d love to make that real in a natively digital format.

Of course, we took account of all the devices that people read on now, so the site is fully responsive and looks beautiful on a handset or tablet screen or — you should really try it on a 23-inch monitor. It’s gorgeous in large format screens. It gracefully apportions itself to whatever the screen happens to be.

O’Donovan: What would you say, right now, the focus is on in mobile, in building apps? I feel like there’s this turn back towards building cross-platform websites and away from apps. Where did apps fall into your priorities when you started compared to where you are now?
Shetty: Yes, you’re exactly right. I think 18 months ago, everybody was talking about native apps as absolutely the way to go. But there’s a lot of friction in the app experience, and what I mean by that is apps have to be downloaded, apps have to be used and accessed on a regular basis, apps sometimes make it a little more difficult to share content. People are sometimes not as adept at sharing content via apps as they are across the open web. So for us, it’s about giving consumers a choice. We’re going to parallel-path for a while — we’ll also have a Newsweek app available. But the open web launch we did last week we think is actually a beautiful experience across devices. It’s friction-free — there’s nothing to download, there’s nothing that prevents easy sharing. So it’s designed to kind of be — I don’t want to say post-app, but it’s post- the initial way of publishing thinking, that native apps are the only way to go. I think a well designed, thoughtfully engineered open web experience can be terrific for the user.
O’Donovan: You mentioned building an interactive cover page earlier — I’d be interested in knowing what other kinds of engagement you’re interested in building across the site. How did you think about structuring comments? How do you want people to respond to the site?
Shetty: We thought a lot about socially driven content, and if you actually look at an article called “The Way They Hook Us — For 13 Hours Straight,” which is about longform, binge-viewing, addictive TV shows — you know, “Breaking Bad,” “Game of Thrones,” et cetera — if you look at that story, you can see how we handle social. Instead of having commentary being a thing that is relegated to the bottom of the page, there’s a set of functionality on the left side margin that moves along with the story. Right now, there are 2,100 opinions listed — it’s a way to kind of over time have the idea that engagement opportunities are persistently available, no matter where you are reading these stories — it’s not just a thing that’s relegated to the bottom of a page. There’s a tray that actually slides out to reveal the social features. And there’s a lot of innovation we have planned in that area as well.

And while we’re talking about a long article page, you can kind of see the ability to use multimedia — photography, video, infographics — to help the journalistic storytelling of a longform piece. That’s another, I think, terrific step forward. It’s not the tyranny of the pageview, it’s not the conventions that are going to deliver more advertising properties — it’s thinking about the user first. What’s going to make for a great reading experience? In that way, I think it differs from a lot of the conventions that are in play across the web.

O’Donovan: So this is my understanding having read a couple things, so correct me if I’m wrong — but your strategy is first to build this product that people are going to want, and then slowly to introduce a paywall, and then later this sponsored content component. Can you explain how you see that unfolding and over what kind of timeline?
Shetty: I can talk a little bit about it — I probably can’t talk about all of our plans right now.

The metered access is going to be rolled out fairly soon, and that’s just the simple idea that, look, anybody can read any article on Newsweek, and initially that’s completely open and completely free. But only subscribers will be able to consume content over a certain number of articles. So it’s very similar to what The New York Times and others have done. Open access — we want a lot of social sharing, we want a lot of visibility of the content across the open web. But what we’re asking is, if people consume over a certain amount of content, that they subscribe. And that’s going to take place fairly soon.

The second question is how brands can participate. We have the same principles we’ve been talking about — thinking about the user first — applied to brand participation. What we’re going to do is limit the clutter — relatively few units, but really high impact — but stay with the design aesthetic of the site overall. They’re going to be beautiful, unignorable, but the value exchange with the reader is going to be very appropriate.

When you listen to a program on NPR, and there’s a sponsorship message before the program starts, you can kind of say, okay, well, I get that. I get how that works. It’s a reasonable exchange between the audience and the brand that sponsors the content. That’s really the model. It’s not as much about the standards of display advertising that have dominated the discussion on the web. It’s a sponsorship model — a different direction.
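Shetty doesn’t get into the mechanics of the meter he mentions above, but a metered paywall of the kind he’s describing usually comes down to counting how many articles a reader opens in a month and gating access past a threshold. Here is a generic sketch of that logic — the quota, the in-memory counter, and the function are assumptions for illustration, not a description of Newsweek’s actual system:

    # Generic metered-paywall logic: free up to a monthly quota, subscribe beyond it.
    # The quota of 10 and the in-memory counter are illustrative assumptions.
    from collections import defaultdict

    FREE_ARTICLES_PER_MONTH = 10
    reads = defaultdict(int)   # (reader_id, "YYYY-MM") -> articles read so far

    def can_read(reader_id, month, is_subscriber):
        if is_subscriber:
            return True                     # subscribers are never metered
        if reads[(reader_id, month)] < FREE_ARTICLES_PER_MONTH:
            reads[(reader_id, month)] += 1  # count this article against the quota
            return True
        return False                        # over the meter: prompt to subscribe

    # A casual reader hits the meter on the 11th article of the month.
    for i in range(12):
        print(i + 1, can_read("reader-1", "2013-05", is_subscriber=False))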

O’Donovan: From a structural standpoint, in terms of building the sponsorship and how closely married they may be to the content you have, I’m curious if it’s going to be an internal team and how closely they’ll work with the editorial team, or if it’s someone from outside. How does that all work?
Shetty: Oh, it’s all part of one organization in our company, and it’s a close partnership between the editorial and business sides.
O’Donovan: I was just reading earlier, you wrote, along with someone else, a piece for the Harvard Business Review about how advertising companies should act more like newsrooms. I was hoping you could explain that theory and maybe, I’d be curious to know if that was an idea that started to percolate for you having been in a newsroom for a little while.
Shetty: It actually started percolating for me well before I came into a newsroom. I think it’s actually a pretty clear direction that has been well represented by a lot of people. There’s a real opportunity for smart brands to publish content that’s useful, interesting, engaging, and helpful to their audience. It’s not a new idea — in fact I always talk about the fact that it’s an idea that’s been around for a very long time.

But what’s changed is all the tools that are available for content creation, distribution, measurement and all the channels that are available to brands. I think it’s a very powerful idea. I don’t think it’s one of these trend-of-the-season ideas. I think it’s a dramatic industry shift that we’re going to be tracking for years to come, through various iterations.

That was something I did with Jerry Wind, head of the Future of Advertising Program at Wharton. It was really based on the Wharton 2020 Project, which was asking a lot of advertisers about what they think about the future of advertising, and it was such a consistent theme — that it’s going to be less and less about what we think of advertising today, and more content that is voluntarily consumed by people because they view it as in some way useful or interesting.

O’Donovan: As we continue to see this trend toward sponsored content and cooperation between advertisers and news brands, I’m curious what your advice might be to other people who are following a path similar to yours — coming from the ad side and moving into a newsroom, operating as the person who is trying to bring those two things together. Are there any specific challenges or surprises there? How would you tell someone to pursue that?
Shetty: I would just say think about the user first, and by the way, think about editorial standards. It doesn’t serve anyone to have editorial standards compromised. Users don’t want that, the consumer doesn’t want that, and certainly it doesn’t benefit the editorial side of things either. Nobody wants that. I think full transparency and good judgment are critical here.
O’Donovan: How do you telegraph that to the reader?
Shetty: Well, we don’t really — we haven’t really had any issues with telegraphing that. It’s just kind of clearly indicating where, what the source of a particular piece of content is. I think as long as you maintain these kind of standards, there really aren’t issues.
O’Donovan: And in terms of the user-centric experience you’re trying to build — you’re talking about how modern newsrooms have so many different kinds of metrics available to them now — when I hear people talk about building new products like this, they talk about building something light and flexible, and prototyping it so you can really respond to the audience’s initial reaction to it. I’d be curious to know how you’re tracking that, how you’re listening to the reader, and what kind of flexibility you’ve been able to build into the product.
Shetty: Absolutely. The iterative nature of web design development — or I should say, digital design development — is a terrific kind of approach for designing something that users really love and respond to. For us, it’s tools like Chartbeat, which we love, and other kind of leading-edge ways of getting real moment-to-moment feedback from not only what people are reading, but how they’re spending time with it, where they’re coming from, what kind of engagement they have with it. It’s all fed right back to the design and development process.

It’s a long way from the days of just building it and they will come. It’s really paying such close attention to what people actually respond to.

14:00

At The Miami Herald, tweeting’s about breaking news in the a.m. and conversation in the p.m.

Have you ever tried tweeting at a major news organization? How often have they responded or retweeted? Probably not often — and that corresponds to the findings offered by a GW/Pew study of 13 major news organizations which found “limited use of the institution’s public Twitter identity, one that generally takes less advantage of the interactive and reportorial nature of the Twitter.”

So when I went to The Miami Herald as part of a much larger project looking at newsrooms and news buildings, I was pleasantly surprised to find it, like some other newspapers, has actual people manning Twitter — breaking news “by hand,” interacting with readers, and having a genuine public conversation over the main @miamiherald Twitter account, with its 98,000 followers. (Aside from Twitter, The Miami Herald is making ample use of its Facebook account, posting new stories once an hour and relying on feedback from the 46,000-plus audience for stories and tips — and as an extension of the Public Insight Network pioneered by American Public Media.)

In Miami, Twitter takes on two distinct modes during the day — in the morning as headline service and in the afternoon as conversation. “In the morning, we try to get the audience between 6 and 8 a.m. on Twitter and on the website,” says continuous news editor/day editor Jeff Kleinman, who says he wakes up at 4:30 to begin monitoring the news.

Kleinman uses Twitter to break news — whether or not it’s on the paper’s website. “We want to be first,” he noted, as he quickly dashed off a tweet about a boat fire in front of me. More often than not, though, there will be a link to a short two-paragraph story begun on the website. But not always.

Miami still remains a vibrant and competitive news marketplace with three local TV stations chasing breaking news, the Sun Sentinel, and even blogs getting in on niche action. So in the breaking-news morning environment, “If something happens, I’ll put it up on Twitter, I’ll write or have the reporter write two quick grafs on the homepage with italics that say ‘More to come,’” he said. “We’re constantly updating over Twitter and on the website as news comes in.”

There’s less time for conversation, but Kleinman is especially careful to do one thing: retweet what his reporters are offering from the field to the wider audience. “We’re not there, but they are, and Twitter is often the fastest way to say what’s going on,” he noted. So while the reporters have their own followings, their work gets amplified to a larger audience.

Take this example of breaking news:

BREAKING: RT @waltermichot: Neighbors gather at scene of one shot and transported 5644 NW 4th Ave. twitter.com/WalterMichot/s…

— The Miami Herald (@MiamiHerald) May 2, 2013

Walter Michot, a former photographer who prowls the city with an iPhone (another story), has frequently broken news on his Twitter account, which has then been retweeted by @miamiherald. The mantra in the newsroom is to tweet, write, tweet, write, perhaps blog, and then write a takeout for the web and perhaps the paper.

Later on in the afternoon, Twitter and Facebook take on a more conversational tone. Luisa Yanez runs the @miamiherald account then. She focuses on three key things: curating incoming reporters’ work and retweeting it — adding additional substance if necessary; offering updates from the website; and responding to readers. The Miami Herald also offers updates about traffic and weather “as a public service and because people want to know,” Kleinman said, so followers might see something like this.

#Weather alert: Severe thunderstorm warning issued for the #Keys until 1:30 p.m.

— The Miami Herald (@MiamiHerald) May 2, 2013

And then Yanez will retweet a reader who happens to chime in with a photo, in this case, Marven The Martian (@DaReelMJ), who offers a twitpic of the nasty weather brewing.

@miamiherald Even from the balcony it doesn’t get any better as it has started to rain in Sunny Isles. #Weather twitter.com/DaReelMJ/statu…

— Marven The Martian (@DaReelMJ) May 2, 2013

The Herald also uses Twitter as a direct way to ask its readers to pitch in for story help:

The Herald is writing about ruling that would allow teens to obtain the “morning after” pill. Please contact aburch@MiamiHerald.com.

— The Miami Herald (@MiamiHerald) May 2, 2013

The main Twitter feed doesn’t shy away from letting reporters show off their spunk. For instance, on Evan Benn’s first story for the paper (yes, they hired someone):

MT @evanbenn: My first @miamiherald story. Can’t beat ‘em? Eat ‘em. Smoked python at invasive-species meal hrld.us/11EXFcO

— The Miami Herald (@MiamiHerald) May 2, 2013

That, of course, is what they call in the newsroom an only-in-Miami story. And it prompted some only-in-Miami community conversation:

@miamiherald @evanbenn I would think it would taste like smoked eel but maybe more like gator? Either way, great idea hrld.us/11EXFcO

— Jackie Blue (@JackieBlue4u) May 2, 2013

@jackieblue4u Closer to gator, and smoking it really did make it taste like bacon, or prosciutto.

— Evan Benn (@EvanBenn) May 2, 2013

Kleinman and others acknowledge that the tweet-to-web traffic conversion isn’t what they’d like it to be. But for them, Twitter is a way to build an audience, establish their continued brand prominence, and carry on a conversation. And while The Miami Herald newsroom might be losing the best view in journalism for a new home by the airport, location might not matter as much as it once did, because their conversation with their audience is virtual.

Those who doubt that a newsroom that is struggling with staff and budget problems can handle putting the time and energy into social media should look at Miami and see a case of what’s going well. And those who think that community conversation is too hard to handle should also pause and consider the possibilities that do exist when a newsroom engages with its community. Especially if it’s about eating python.

Photo of outgoing Miami Herald building by Phillip Pessar used under a Creative Commons license.

May 09 2013

17:24

Diaries, the original social media: How our obsession with documenting (and sharing) our own lives is nothing new

If you’ve ever kept a diary, chances are you probably considered that document private. As in,

MOM I’VE TOLD YOU A MILLION TIMES MY DIARY IS PRIVATE SO DON’T FUCKING READ IT AGAIN PS THANKS FOR CLEANING MY ROOM IT LOOKS NICE

— Luke (@StereotypeLuke) March 24, 2013

But that wasn’t always the case when it came to personal journals. At least, not according to Lee Humphreys, a communications and media researcher at Cornell.

Humphreys led a conversation this week with Microsoft Research’s Social Media Collective on historicizing social media practices. Humphreys argues that, through journals and diaries, people have been recounting their daily activities and reflecting on them for much longer than Twitter and other social media platforms have been around.

But through her research, Humphreys found that it’s only been in the last hundred years that journaling has come to be considered a private practice. In the late 19th century, she says, visiting friends and relatives would gather together and read each other’s diaries as a way of keeping up to date and sharing their lives. Journals were also kept in early American towns to mark and record important events: weddings, births, deaths, and other events of community-wide importance.

“You don’t get a real sense of personal, individual self until the end of the 19th century,” Humphreys told the Cornell Chronicle in 2010, “so it makes perfect sense that diaries or journals prior to that time were much more social in nature.”

At Humphreys’ talk on Tuesday, some suggested that the advent of Freudian psychology — or perhaps the mass popularization of the novel — had contributed to this inward turn by America’s diarists. As the profession of journalism began to rise at the beginning of the 20th century, the independent writer was becoming increasingly self-reflective, creating the expectation of privacy that we were familiar with prior to the arrival of the Internet. But Humphreys is arguing that before we had a mass media, there was a system of personal writing that looked like a slower, more loosely networked version of Twitter.

people want to make twitter their diary but isn’t a diary suppose to be private?

— #slick (@rickstayslick) May 7, 2013

The similarities between Twitter and historic trends in diary keeping don’t stop there, according to Humphreys. She points to a surge in the popularity of pocket diaries, which, like Twitter, restricted the number of words you could write due to their small size — and which also made the practice mobile. With 60 percent of tweets now being written on mobile devices, according to Humphreys, compared to around 14 percent when she conducted her study in 2008, trends in Twitter behavior are in fact reflecting historical trends in self-reporting. So even the practice of making notes about your daily activities as they are happening isn’t a new behavior.

A second study Humphreys conducted revealed even more lessons about our drive to create personal records. Using the diary entries of a soldier in the Civil War, which he dutifully copied and turned into letters home, and the personal blog of an Iraq War soldier, Humphreys explored the reasons people feel compelled to record the events of their lives.

Primarily, she says, people journal as a way of strengthening “kin and friend” relationships. The soldier in Iraq, referred to as DadManly, originally began his blog as a way of keeping in touch with all of his family members at once. Charlie Mac, the Civil War soldier, exhibits a similar desire for communication and relationship maintenance by sending home a faithfully transcribed (we assume) copy of his diary. Both men, Humphreys says, described experiencing profound frustration and anxiety when the medium through which they communicated was disrupted, whether by an Internet blackout or a rainstorm that dissolved parchment and delayed the post.

The writings of Charlie Mac and DadManly shared another important similarity: Although both were writing for ostensibly private audiences, there was an implicit understanding that their words might someday reach a wider audience. When DadManly saw web traffic from strangers, he began to increasingly write about his political views on the war, providing what he believed to be a unique perspective of support at a time when very few journalists in the traditional media felt the same way.

Charlie Mac also had reason to believe his diary letters were being shared with an audience larger than the one he was directly addressing. In fact, he sometimes included parenthetical addresses to specific individuals, should they happen to come across the documents. But there was also a real possibility that his war correspondence would be picked up and reprinted by newspapers. (Or, as it happened, compiled, archived, and read by researchers a century and a half later.) After the war, he ended up becoming a journalist at The Boston Globe. What more apt analogue to the media of today than a world in which one’s personal commentary on current events is so appreciated that it can be transformed into a lifelong career?

Over the course of Charlie Mac’s budding career, he would have observed the emergence of what we consider the traditional media hierarchy. Information would increasingly begin to flow from the top down, rather than be gathered voraciously from amateurs in the field. He would see news brands begin to shape and control narratives, and come to exist in an information system with less and less emphasis on personal interactions.

Of course, what we’ve seen in the decades since the dawn of the digital age is just the opposite. Humphreys said one of the early conclusions from her research is the possibility that the mass media of the 20th century was in fact a blip, a historical aberration, and that, through platforms like Twitter, we are gradually returning to a communication network that indulges, without guilt, the individual’s desire to record his existence.

Personal diarists are not only comforted by recording and sharing their experience, Humphreys says, but they are empowered by claiming their own narrative. She suspects it was for this reason that so many 19th-century women kept journals — in the hopes that they and their families would be remembered. Her point takes on contemporary significance when she points out that Twitter is more popular among African-American and Hispanic youths than among whites.

The most powerful argument for Twitter as a force of erosion of the public media is not, as we hear so often lately, that it feeds the fires of rumor and speculation. The argument that Twitter is facile is much more potent — that Twitter users are self-obsessed, that a minute spent tweeting is a minute wasted, that Twitter is the digital embodiment of the general degradation of intellectual society — many of the same arguments made a decade ago about blogging.

This is what I ate for breakfast… Greek potatoes, orzo, Greek salad, dolmades, and OJ. #Hmm yfrog.com/nxfktoej

— Miss Illinois (@StaciJoee) March 28, 2012

I’m going to be a total blogger today. This is what I ate for breakfast. LOL http://yfrog.com/h2tovdrj

— Holly Becker (@decor8) February 20, 2011

What Humphreys has found, instead, is that if we are all navel-gazers, it’s not Twitter that made us that way. And further, that we are tighter-networked, faster-responding, further-reaching navel-gazers, with a richer media experience, than ever before.

Image by Barnaby Dorfman used under a Creative Commons license.

April 08 2013

16:55

Emerging spaces for storytelling: Journalistic lessons from social media in the Delhi gang rape case

DELHI — In December, the brutal rape and subsequent death of a 23-year-old female student quickly gained attention in Indian and foreign media. In the days immediately following the woman’s death, protesters staged large demonstrations at Delhi’s India Gate and outside government buildings, including Rashtrapati Bhavan, the official residence of India’s President.

During the protests, activists and journalists used social media to follow the protests and to discuss India’s problem of violence against women. This discourse highlighted how social media offer an emerging space for storytelling — remarkable in a country where social media hasn’t had the same impact it has elsewhere. To explore this case, we interviewed Indian and foreign correspondents who covered the protests in Delhi. They told us how journalists used social media during the protests, giving us insight into how a new medium is contributing to hard news coverage.

India’s digital divide and the challenge of representation

Social media hasn’t played anything near the role in Indian journalism that it does in, say, the United States. Supporters of social media often point to their inclusive and democratizing aspects — but in India, social media usage remains confined to a small percentage of the population. Nearly 80 percent of Indians now have a mobile phone, but only 11 percent have Internet access, and fewer than 5 percent use social media. In rural areas, these percentages are significantly lower. Information gathered from social media tends to come from a rarified segment of the population: the affluent, educated, English-speaking youth of India’s major cities.

When we asked journalists to what extent they use social media for news-gathering, some complained that social media discussions are narrow. As an India-based journalist for the Swiss Neue Zürcher Zeitung told us:

They don’t really represent the majority of the people. For example, if you read social media, you would think everyone was extremely shocked and devastated. But if you talked to people on the streets or in slums, you get the idea that many Indians have extremely backward and conservative idea about women and how they had to behave. Social media can not replace doing research on the ground, in slums and villages. That’s the most important thing for working in India.

The digital divide thus represents a sociocultural divide: In India, those who use social media are more likely to live in cities, hold a passport, and share values with social media users in the West. By relying too heavily on social media, journalists may find that their coverage skews toward a narrow readership. One Australian journalist told us that the rape protests gained prominence on Twitter “because [social media] is city-based and at the center of the life of the middle class, university students, and mobile professionals.” The journalist from Neue Zürcher Zeitung told us “it is easy to share ideas and read articles of colleagues or see what intellectuals think.” What’s more difficult is to get beyond that narrow demographic and understand the views of Indians whose voices are not heard on social media.

According to Rohan Venkataramakrishnan, senior reporter at the Mail Today, social media presents a challenge for nuanced debate: “If I wanted to write about how Indian society needs to change or how patriarchy needs to be dealt with, I cannot say it in 140 characters.”

While journalists may have personal affinity with social media users, several of them told us that to get a more representative understanding of issues, they pay attention to television and newspapers. Here’s the journalist from Neue Zürcher Zeitung: “Television, newspapers, and talking to people on the streets were much more important [in newsgathering]. Only a very small part of society has access to social media, but everyone watches television.”

In addition to questions of representation, journalists have another reason to discount social media reporting. As we discussed in an earlier piece, India’s newspaper industry is thriving. With healthy growth in newspaper circulation and advertising, many journalists are skeptical about what social media can do for Indian journalism right now. When we conducted interviews with journalists at The Hindu, we encountered a widespread belief that social media is mostly for soft news. But that was before the Delhi gang rape and protests. The question now is: Will this event change the role of social media in Indian journalism?

New spaces, new beats for storytelling

In the past, television networks have been the largest players in Indian news coverage. Social media haven’t changed that, but have instead provided new avenues for news-gathering and story distribution. In the months preceding the events, Indian newspapers and television had covered a number of rape cases. But the December Delhi gang rape proved to be different. The brutality of the attack and the scale of the protests brought international attention to India’s problem of violence against women. Some journalists we spoke to highlighted the role of protest in democratizing India’s media.

The Delhi gang rape case prompted many journalists to use Twitter for updates on events and immediate responses from activists. To a greater extent than in previous protests, social media helped journalists keep a finger on the pulse of middle class India and get their immediate feedback on important issues. An Australian reporter said that “Twitter was really helpful to get a sense of the public sentiment and developments.” He followed the #delhigangrape hashtag, the official Twitter account of the Indian government, women’s groups, pressure groups, and Indian media on the subject.

Venkataramakrishnan, the journalist who found 140 characters limiting, nonetheless said that the protests have been incubators for social media sophistication in India. “Following the Anna Hazare case and the Delhi gang rape case, social media began to achieve a critical mass,” he told us.

Many journalists cited the importance of social media for background information. A journalist from The Hindu told us “I look at tweets by our own editor, editors from other newspapers, well known journalists such as Pritish Nandy [a columnist with The Times of India and the Hindi newspaper Dainik Bhaskar], Abhijit Majumder [editor of the Delhi edition of the Hindustan Times], and Saikat Dutta [a Delhi-based editor of the newspaper DNA]. I also look up tweets by television journalists such as Shiv Aroor [deputy editor at Headlines Today]. You get a mix of opinions from their tweets. Knowing these people’s perspectives helps me during coverage — but only indirectly…I rely on what I see when I am on the ground.”

Here we see the emergence of new storytelling beats. Many journalists and activists discussed how they used Twitter to stay informed about the locations of the protests. In this sense, social media has allowed for a new type of beat as well as a new element in storytelling. A foreign correspondent told us: “Last week, one of the accused rapists died in his prison cell, I found out about it on social media. I logged on Twitter, found students, and from there, the media picked up in India and our news organization called sources and confirmed.” Social media also helped journalists learn what coverage audiences wanted to read and see. Zoe Daniel, the Australian Broadcasting Corporation Southeast Asia correspondent, told us that in response to demand from members of India’s social media-savvy diaspora, she’s made plans for a documentary on the Delhi gang rape case.

A common theme with journalists we spoke to is that social media have enabled wider conversations with audiences. Ruchira Singh, social media editor at Network 18, told us that during the protests

Our editors and reporter were tweeting individually — we had very hectic social media activity during the case. We were inviting opinions on the goings on in the case; we were asking questions, we were asking people what they think are the solutions to the problems. We were interacting with people, asking them if they are joining a protest and how they are reaching the protest grounds. We promoted certain petitions by change.org. We asked people if they felt certain provisions should be incorporated in these petitions. I tweeted on the organizational account. I also tweeted on my individual account, especially when the girl died.

These interactions reveal new interest in social media by both Indian and foreign journalists covering the protests. In India, social media is a nascent enterprise still finding its place in journalism. But in time, we may see the 2012 Delhi gang rape protests as a watershed moment for social media in news gathering and distribution. This case demonstrates how journalists respond to social media and how social media allows for new spaces for storytelling in India.

The future of India’s public sphere

Social media has allowed a small but growing part of the Indian public to join in discussions of soft and hard news. To date, online storytelling has catered to certain demographic groups: the middle and upper classes, the intellectuals, activists, and journalists. Middle class discontent found a place on social media, but marginalized and subaltern groups had minimal representation or participation in social media discussions.

The Delhi gang rape case gives us insight into ongoing changes in India’s public sphere. Jürgen Habermas defined the public sphere as “a realm of social life in which something approaching public opinion can be formed.” But what kind of public sphere? Today, social media offer (at best) limited access to marginalized groups. Journalists who want to know about those groups cannot rely on social networking sites; they must visit often-remote towns, villages, and slums where most residents remain disconnected from social media. In the Delhi gang rape case and many other stories affecting India, social media is essential in research and reporting. At the same time, perhaps the greatest lesson is the limitation of social media: They offer starting points for news-gathering and distribution, but they haven’t replaced traditional journalism.

Valérie Bélair-Gagnon is a Ph.D. candidate at City University London. Her research interests include new media, social media, journalism, public speech, sociology of news, sociology of work and organizations, ethnography and interviews, history of the media and ecology of communication.

Smeeta Mishra is an assistant professor at the Centre for Culture, Media & Governance at Jamia Millia Islamia University in New Delhi. Her research interests include new media, research methods, Muslim studies, and gender issues.

Colin Agur is a Ph.D. candidate in communications at Columbia University’s Graduate School of Journalism and a visiting fellow at Yale Law School’s Information Society Project. He is interested in cultural and sociological aspects of mobile phone use in developing countries, and his dissertation research is based on ethnographic fieldwork in India.

Photos by Ramesh Lalwani and Biswarup Ganguly used under a Creative Commons license.

14:43

Getting personal: A Dutch online news platform wants you to subscribe to individual journalists

“It’s my own little shop, that’s what I like about it. You decide what goes in — like having your own newspaper.”

Arnold Karskens has his own channel on Dutch news startup De Nieuwe Pers (The New Press). For €1.79 a month, readers can subscribe to him and read his war reporting and investigations into war criminals. Don’t care about war crimes? Maybe some of the other journalist-driven channels — on subjects from games to France, from the science of sex to environmental sustainability, from Germany to the euro crisis — would be of interest.

De Nieuwe Pers recently launched in the Netherlands as an online platform for freelance journalists. Users pay €4.49 a month for access to all content on its app or website. But what stands out is the possibility to subscribe to individual reporters, for €1.79 a month. Think True/Slant, but with paywalls.

[Screenshot: De Nieuwe Pers author channels]

“News has become more personal,” Alain van der Horst, editor in chief of De Nieuwe Pers, told me. “People are interested in the opinions, the beliefs, the revelations of a certain journalist they know and trust, much more than an anonymous person who writes for a large publication.”

Karskens concurs, stressing that a personal brand is key in this business model. “People read my stuff because I have a clear, crystalized opinion based on over 32 years of war correspondence,” he said. “This really works well for journalists with a distinctive character. It’s not for the average desk slave.”

Van der Horst also thinks paying per journalist is fairer to the readers than subscribing to a publication as a whole. “When you subscribe to a newspaper, you’ll get the full package. Even if you always throw out the sports section, you’ll still get it. With this model you decide: ‘This is what I want to read, so I’ll pay for it — what I don’t read, I don’t pay for.’”

The metaphor isn’t perfect — rather than paying for content on a specific subject, De Nieuwe Pers invites readers to pay by the journalist. Authors have full editorial control over their own channel (“as long as it’s legal,” van der Horst says). Though all of them state a thematic or geographic specialism, those aren’t binding and there are no posting quotas. With this freedom comes unpredictability for the readers — the bang they get for their buck depends on which journalists they subscribe to.

“I do investigative journalism, so sometimes I won’t be able to publish something for a week, sometimes two weeks,” Karskens says. “By subscribing to me personally, people support this type of investigation.”

Until the end of 2013, journalists will receive the full revenue generated by their channels, which includes in-app purchases through Apple’s App Store. Next year, De Nieuwe Pers will start collecting a 25 percent commission. They already take a quarter from the collective subscriptions, with the rest of the money divided among the individual contributors.

In its first few weeks, De Nieuwe Pers has sold about 2,000 subscriptions — about 40 percent of them for channels, the rest for the full collection. (The balance was 20/80 in the very beginning after launch.) The platform wasn’t building its following from scratch, strictly speaking. It’s the descendant of De Pers, a free print newspaper that went out of business in March 2012. Much of De Nieuwe Pers’ editorial staff came from De Pers.

“After we shut down, we got a lot of attention, and readers were telling us they’d be willing to pay for us,” van der Horst said. “It’s encouraging to know that people will pay for digital journalistic work. People often still doubt that, and in many places it’s not yet customary. But it works. People do it as long as they get value for money.”

Though director Jan-Jaap Heij says 2,000 subscribers has De Nieuwe Pers meeting internal targets for 2013, it doesn’t take mathematical genius to figure out it’s not enough to support 17 journalists and a small editorial staff. (At current rates and revenue split, those 2,000 subscribers would generate somewhere north of $100,000 a year.) In the short term, Heij isn’t worried about the money; the company managed to sell some of the technology they developed, and because of its low costs, the bills are covered until late 2014. The authors themselves are free to publish their work elsewhere. “Maybe one or two contributors will get a reasonable income out of this in a year, but for the near future, that’s not our ambition,” said Heij.
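That back-of-the-envelope figure roughly checks out if you take the prices and the 40/60 channel-to-bundle split reported above and assume a 2013 exchange rate of about $1.30 to the euro — the exchange rate is my assumption; the other numbers come from the piece:

    # Rough check of the "north of $100,000 a year" figure. Prices and the
    # ~40/60 split come from the article; the $1.30-per-euro rate is assumed.
    subscribers = 2000
    channel_subs = int(subscribers * 0.40)      # ~800 at EUR 1.79/month
    full_subs = subscribers - channel_subs      # ~1,200 at EUR 4.49/month

    monthly_eur = channel_subs * 1.79 + full_subs * 4.49   # ~EUR 6,820
    annual_eur = monthly_eur * 12                          # ~EUR 81,840
    annual_usd = annual_eur * 1.30                         # ~USD 106,000

    print(round(annual_eur), round(annual_usd))            # 81840 106392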

For now, Heij’s main goal is further product development. De Nieuwe Pers is set to introduce thematic bundles and a bundle of bundles — the platform’s version of a full newspaper. They’re also expanding their pool of journalists, to cover more themes.

Karskens is the only author who chose to write exclusively for De Nieuwe Pers, and enjoys the freedom of maintaining his channel. “You can be much more personal to your readers,” he said. “They’ve become like friends.” But he says there is one drawback: “Never being able to take a holiday. There’s always the pressure of having to give something to my subscribers.”
