
May 21 2013

10:36

Former Facebook ME Dan Fletcher: 'It's a Great Time to Launch a New Publication'

This post was written by Ryan Graff of the Knight News Innovation Lab and originally appeared on the Lab's blog as part of a series of Q&As with highly impressive makers and strategists from media and its fringes, each with unique perspectives on journalism, publishing and communications technology. Catch up and/or follow the series here.


Dan Fletcher, the recently departed managing editor at Facebook, seems to be always ahead of the curve. In 2010, at age 22, Fletcher became the youngest person ever to write a cover story for Time magazine. He also created and launched Time.com's NewsFeed feature and Time's social media feeds. At Bloomberg a few years later he created and staffed the editorial social media teams for Bloomberg News and Bloomberg Businessweek, picking up a Forbes 30 Under 30 distinction in the process. Now, at a time when journalists are headed to the Twitters and LinkedIns of the world to help shape editorial content, he's already completed his time at a tech giant and is looking for his next project. Below is an edited version of our Q&A.

Q&A

Q: Can you give us a quick rundown of what you do, who you are, and all the latest since resigning from Facebook?

Dan Fletcher: I’ve really dug into the intersection of social media and editorial. At Time and Bloomberg, that meant helping news organizations figure out how to use these new platforms and reporting on the companies building them out. At Facebook, it meant trying to bring an editorial angle to a technology company. In each role, I've been lucky to be allowed to experiment, and now I’m eager to continue experimenting on my own.

What excites you most about journalism/media in 2013?

Fletcher: It seems like there’s a greater appetite for experimentation. Places like Circa and NowThisNews are rethinking how journalism’s packaged and distributed in a mobile world. Projects like Matter, Atavist, and The Magazine are seeing if people will pay for a great story, given to them in a way that honors the reading experience. And "traditional" publishers like The New York Times are recognizing the importance of good design and investing in tools and people that let them package stories in better ways. Not all of these will be successful, but it’s progress beyond the impetus to just rack up page views.

What are the big differences you found between the traditional news shops and Facebook?

Fletcher: Facebook has incredible focus on their goal of connecting the world. Everything exists in service of that mission, and Facebook Stories was our small way of showing some of the cool things that happen when people connect. Newsrooms generally can’t focus on examining one idea with that level of intensity -- there are other stories to tell and themes to explore. It was refreshing to spend a year really homed in on a single idea, but part of me really missed the broader purview of traditional news.

What has changed since you started working?

Fletcher: The pace. And things were pretty fast when I got started. But so many publishers are producing more stories and turning them around faster, so as to compete for traffic from search and social media. On the whole, I’m not sure this is a good thing. Or at least it shouldn’t be the only way that stories are produced.

When did you decide to become a media person?

Fletcher: I wish I had a better story for this -- I didn’t get into the pottery class in high school, and a girl I liked was in the newspaper class. So it goes. But I’ve loved it ever since.

C'mon, fess up, what's next?

Fletcher: It’s a great time to launch a new publication.

What is the biggest tech challenge that media companies will face over the next five years?

Fletcher: Monetizing. I wish there were another answer, but that’s still the case. Journalists are producing great work, maybe more great work than at any point in history. And therein lies the problem -- what makes this a great moment to be a reader makes it a tough moment to be a producer. There’s going to be a great deal of creativity in how companies approach these challenges, though -- I think we’ll see a variety of successful models, some of which will include new forms of advertising and some of which will require reader support.

What makes good content?

Fletcher: Authenticity. It doesn’t matter who’s making it -- the Times or a company doing content marketing like Facebook or Coca-Cola. If it feels fake, forced or false, people won’t trust it.

What excites you about technology and media?

Fletcher: The barriers to entry continue to fall. What WordPress did for blogging, someone's about to do for publishing on iOS and Android, while companies like Scrollkit are making it easier to build immersive experiences around stories on the web. This frees journalists, photographers and art directors from technical costs that may have inhibited them in the past, and ultimately will result in more great projects being launched.

What applications do you have open while working?

Fletcher: MOG for music, Tweetdeck (although I’m much more of a follower than an active participant), Adobe Lightroom, and a really nifty and simple text editor called iA Writer. I find fewer options are better when it comes to writing.

What could the world use a little more of?

Fletcher: Originality.

What could the world use a little less of?

Fletcher: Top 10 lists.

Follow Dan Fletcher on Twitter, @danielfletcher. Find weekly updates from the Knight News Innovation Lab's profiles series on Fridays.

Ryan Graff joined the Knight News Innovation Lab in October 2011. He previously held a variety of newsroom positions -- from arts and entertainment editor to business reporter -- at newspapers around Colorado before moving to magazines and the web. In 2008 he won a News21 Fellowship from the Carnegie and Knight foundations to come up with innovative ways to report on and communicate the economic impact of energy development in the West. He holds an MSJ from the Medill School of Journalism and a certificate in media management from Northwestern's Media Management Center. Immediately prior to joining the Lab, Graff led marketing and public relations efforts in the Middle East.


The Knight Lab is a team of technologists, journalists, designers and educators working to advance news media innovation through exploration and experimentation. Straddling the sciences and the humanities, the Lab develops projects, prototypes and innovative bits of code that help make information meaningful, and promote quality journalism, storytelling and content on the internet. The Knight Lab is a joint initiative of Northwestern University's Robert R. McCormick School of Engineering and Applied Science and the Medill School of Journalism. The Lab was launched and is sustained by a grant from the John S. and James L. Knight Foundation, with additional support from the Robert R. McCormick Foundation and the National Science Foundation.

May 15 2013

10:57

4 Lessons for Journalism Students from the Digital Edge

This past semester, I flew a drone. I helped set up a virtual reality environment. And I helped print a cup out of thin air.

Nice work if you can get it.

Working as a research assistant to Dan Pacheco, holder of the Peter A. Horvitz Endowed Chair for Journalism Innovation at the S.I. Newhouse School of Public Communications at Syracuse University, I helped run the Digital Edge Journalism Series in the spring semester. We held a series of four programs that highlighted the cutting edge of journalism technology. Pacheco ran a session about drones in media; we had Dan Schultz from the MIT Media Lab talk about hacking journalism; we hosted Nonny de la Peña and her immersive journalism experience; and we had a 3D printer in our office, on loan from the Syracuse University ITS department, showing what can be made.

For someone who spent 10 years in traditional media as a newspaper reporter, it was an eye-opening semester. Here are some of the lessons I learned after spending a semester on the digital edge. Maybe they can be useful for you as you navigate the new media waters.

1. The future is here

During our 3D printer session, as we watched a small globe and base print almost out of thin air, I turned to Pacheco and said, "This is the Jetsons. We're living the Jetsons."


This stuff is all real. It sounds obvious to say, but in a way, it's an important thing to remember. Drones, virtual reality, 3D printing all sound like stuff straight out of science fiction. But they're here. And they're being used. More saliently, the barrier to entry for these technologies is not as high as you'd think. You can fly a drone using an iPad. The coding used to create real-time fact-checking programs is accessible. 3D printers are becoming cheaper and more commercially available. And while creating a full-room 3D immersive experience still takes a whole lot of time, money and know-how (we spent the better part of two days putting the experience together, during which I added "using a glowing wand to calibrate a $100,000 PhaseSpace Motion Capture system, then guided students through an immersive 3D documentary experience" to my skill set), you can create your own 3D world using Unity 3D software, which has a free version.

The most important thing I learned is to get into the mindset that the future is here. The tools are here, they're accessible, they can be easy and fun to learn. Instead of thinking of the future as something out there that's going to happen to you, our seminar series showed me that the future is happening right now, and it's something that we can create ourselves.

2. Get it first, ask questions later

One of the first questions we'd always get, whether it was from students, professors or professionals, was: "This is neat, but what application does it have for journalism?" It's a natural question to ask of a new technology, and one that sparked a lot of good discussions. What would a news organization use a drone for? What would a journalist do with the coding capabilities Schultz showed us? What kind of stories could be told in an immersive, virtual-reality environment? What journalistic use can a 3D printer have?

These are great questions. But questions become problems when they are used as impediments to change. The notion that a technology is only useful if there's a fully formed and tested journalistic use already in place for it is misguided. The smart strategy moving forward may be to get the new technologies and see what you can use them for. You won't know how you can use a drone in news coverage until you have one. You won't know how a 3D printer can be used in news coverage until you try it out.

There are potential uses. I worked in Binghamton, N.Y., for several years, and the city had several devastating floods. Instead of paying for an expensive helicopter to take overhead photos of the damage, maybe a drone could have been used more inexpensively and effectively (and locally). Maybe a newsroom could use a 3D printer to build models of buildings and landmarks that could be used in online videos. So when news breaks at, say, the local high school, instead of a 2D drawing, a 3D model could be used to walk the audience through the story. One student suggested that 3D printers could be used to make storyboards for entertainment media. Another suggested advertising uses, particularly at trade shows. The possibilities aren't endless, but they sure feel like it.

Like I said above, these things are already here. Media organizations can either wait to figure it out (which hasn't exactly worked out for them so far in the digital age) or they can start now. Journalism organizations have never been hubs for research and development. Maybe this is a good time to start.

3. Real questions, real issues

This new technology is exciting, and empowering. But these technologies also raise some real, serious questions that call for real, serious discussion. The use of drones is something that sounds scary to people, and understandably so. (This is why the phrase "unmanned aerial vehicle" (UAV) is being used more often. It may not be elegant, but it does avoid some of the negative connotation the word "drone" has.) It's not just the paparazzi question. With a drone, where's the line between private and public life? How invasive will the drones be? And there is something undeniably unsettling about seeing an unmanned flying object hovering near you. 3D printers raise concerns, especially now that the first 3D printed guns have been made and fired.

To ignore these questions would be to put our heads in the sand, to ignore the real-world concerns. There aren't easy answers. They're going to require an honest dialogue among users, media organizations, and the academy.

4. Reporting still rules

Technology may get the headlines. But the technology is worthless without what the old-school journalists call shoe-leather reporting. At the heart of all these projects and all these technologies is the same kind of reporting that has been at the heart of journalism for decades.

Drones can provide video we can't get anywhere else, but the pictures are meaningless without context. The heart of "hacking journalism" is truth telling, going past the spin and delivering real-time facts to our audience. An immersive journalism experience is pointless if the story, the details, and the message aren't meticulously reported. Without a deeper purpose to inform the public, a 3D printer is just a cool gadget.

It's the marriage of the two -- of old-school reporting and new-school technology -- that makes the digital edge such a powerful place to be.

Brian Moritz is a Ph.D. student at the S.I. Newhouse School of Public Communications at Syracuse University and co-editor of the Journovation Journal. A former award-winning sports reporter in Binghamton, N.Y. and Olean, N.Y., his research focuses on the evolution of journalists' routines. His writing has appeared on the Huffington Post and in the Boston Globe, Boston Herald and Fort Worth Star-Telegram. He has a master's degree from Syracuse University and a bachelor's degree from St. Bonaventure.

May 14 2013

11:00

How FrontlineSMS Helped an Indonesian Community Clean Up a River

FrontlineSMS has had a strong connection with environmental issues since our founder had the initial spark of an idea while working on an anti-poaching project in South Africa. We're delighted to share how Een Irawan Putra of KPC Bogor and the Indonesia Nature Film Society used FrontlineSMS in Indonesia to invite the community to help clean up the garbage clogging the Ciliwung River.

Community Care Ciliwung Bogor, known locally as KPC Bogor, was founded in March 2009 in West Java, Indonesia to harness the growing community concern for the sustainability of the Ciliwung River in the city of Bogor. We formed to raise awareness about the damaging impact of garbage and waste in the river, as well as to mobilize the community to take action.


The community around KPC Bogor was initially formed by our friend Hapsoro, who used to share his fishing experiences in the Ciliwung River. "If we go fishing in the river now, there is so much junk," Hapsoro once said. "All we get is plastic, instead of fish." It was after an increasing number of similar tales from the community about pollution levels that we decided to conduct some field research. We set out to find the best spots for fishing along the Ciliwung River, particularly in the area stretching from Katulampa to Cilebut.

Some KPC members work in Research and Development of Ornamental Fish Aquaculture at the Ministry of Marine and Fisheries, and in a fisheries laboratory at Bogor Agricultural University. So while we conducted the research voluntarily, they were always present to offer their skills and ensure our research methods were sound. In addition to the study of fish, some KPC members who work in mapping forest areas in Indonesia helped us to map the river area using GPS. We mapped the points where garbage was piled up, sewage levels, and corresponding changes in the river. We also tested the quality of river water by using a simple method called "biotilik," using organisms as an indication of the state of the water quality in the Ciliwung River.

The results of the research were shocking. We found that while the people who live along the Ciliwung River rely on it for daily necessities including cooking, cleaning and washing, the river is increasingly being used as a place to dispose of trash and inorganic waste materials. The research helped us realize just how poor the river's condition was at the time -- with worrying consequences for the function, condition, and use of the river. Not only did we uncover poor river standards, we also identified a lack of public knowledge within the community about the importance of maintaining a healthy river. Waste disposal practices have become bad habits, ingrained over a long period of time in the minds of the people who live along the Ciliwung riverbanks. People are so used to these methods that they do not realize the severity of the environmental damage they cause.

citizen clean-up takes off

So members of KPC Bogor got together to ask, "What can we do to save the Ciliwung River in ways that are simple, inexpensive and uncomplicated?" From there, a simple concept was born. We set out to recruit volunteers to become garbage scavengers in the Ciliwung River. Every Saturday, KPC Bogor members and friends met from 8am to 11am to pick up any inorganic matter littering the Ciliwung River and put it into sacks before sending it to landfills.

In many ways, we actually consider this activity as a way to meet new friends. It might be hard work that can cause us to sweat, but we understand that even though waste removal is a very simple activity, it is important for the sustainability of our river and the community around it. The number of people who come every Saturday varies: Sometimes there are only two, other times up to 100 people. For us, the number doesn't matter. What's important is that KPC Bogor must continue to remind citizens to take care of the Ciliwung River.

About three months ago, we had some sad and shocking news that our friend and leader Hapsoro had passed away. A few of us were worried about what would happen to our 4-year-old community and how it could continue without his leadership. We gathered at Hapsoro's house before his funeral, and we all committed to doing all we could to ensure KPC Bogor's activities would carry on. We saw how vital this work was for the River, the community's health, and our livelihoods. We needed to honor and commemorate the important service Hapsoro had initiated to form a sense of responsibility and awareness in the community. But how could we mobilize the community like he did?


using sms

Hapsoro was a man who always actively sent SMS to all our friends inviting them to participate in regular KPC Bogor activities, especially to remind them to get involved with cleaning the river. With an old mobile phone, he used to send messages one by one to the numbers in his phone book. The day after we decided to keep KPC Bogor alive, I asked permission from Hapsoro's wife, Yuniken Mayangsari, to keep using his phone number to send SMS to all the subscribers. She gave me the phone at once, without hesitation.

I started using Hapsoro's mobile phone to send SMS every Friday to the friends of KPC Bogor. When I was using the phone, I realized how patient Hapsoro must have been in sending the SMS alerts about river cleaning over his three years of organizing the activities. One by one, each of the numbers had to be selected from the address book, and I could only enter 10 numbers at once. It made getting through more than 200 numbers exhausting, and it took me more than two hours! Not to mention when I forgot which numbers I'd already sent the message to. I'm sure there are a few people who got the message twice.

Because of the limited time I could dedicate to sending SMS every Friday, some friends and I decided to try using FrontlineSMS. A friend who lives in Jakarta went looking for a compatible Huawei E-series modem to send and receive messages with the software. When we were finally able to buy one, we installed it on my laptop and KPC Bogor's laptop. Now every Friday, we load up FrontlineSMS to send alerts about KPC Bogor activities due to take place the following Saturday. It's great because I can carry on working while FrontlineSMS is sending the messages. I can easily manage contacts and send alerts to the community in a few simple steps.
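
For readers curious about the plumbing, here is a minimal sketch of what bulk-SMS software like this automates. It is not FrontlineSMS's actual code -- just an illustration, in Python, of the standard GSM AT commands such a tool sends to a USB modem (like the Huawei E-series stick we bought) to push out one message per contact. The serial port name and phone numbers are placeholder assumptions.

    # Illustrative sketch (not FrontlineSMS source code) of automated SMS alerts
    # over a GSM USB modem using standard AT commands.
    # Requires the pyserial package; assumes the modem appears at /dev/ttyUSB0.
    import time
    import serial

    CTRL_Z = b"\x1a"  # terminates the message body of an AT+CMGS command

    def send_sms(modem, number, text):
        modem.write(b"AT+CMGF=1\r")                 # switch the modem to text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)                             # wait for the "> " prompt
        modem.write(text.encode() + CTRL_Z)         # message body, then Ctrl+Z to send
        time.sleep(3)                               # give the network time to accept it

    if __name__ == "__main__":
        alert = "KPC Bogor river clean-up this Saturday, 8-11am. Join us!"
        subscribers = ["+62812xxxxxxx", "+62813xxxxxxx"]  # placeholders for real numbers
        with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as modem:
            for number in subscribers:
                send_sms(modem, number, alert)      # one SMS per contact, hands-free

FrontlineSMS wraps exactly this kind of loop in a friendly interface, plus contact management -- which is why it turned a two-hour Friday chore into a background task.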

KPC Bogor's work with volunteers is now so successful that we started a "Garbage Scavengers Race," which has now become an official annual event in the city of Bogor. Last year, 1,500 people came to the river to help, and we collected 1,300 bags of garbage in just 3 hours. We are now preparing for this year's scavenge, due to take place in June 2013. Recognizing the need to tackle the root causes of the waste issue rather than just the cleanup, we've also started to do more than collect garbage. KPC Bogor now provides environmental education for elementary school children, conducts research on water quality and plants trees around the Ciliwung River. We are also able to regularly assess the river's biota, analyzing the diversity of micro-organisms, plants and animals in the ecosystem. Recently, we even made a film about the waste problems in the Ciliwung River.

Now, we use FrontlineSMS to let the community know about our new activities too. Every week we receive SMS from new people who want their mobile number to be added to the subscribers list so they can receive a regular SMS every week with information about how to join in with our activities.

Thanks to the community, the city government is now giving full support to our activities, funding waste cleanup efforts through the official budget allocation. Once, the Ciliwung was a clean river, highly venerated by the people for its famous fresh water and relied upon by the public for their livelihoods. It was once a source of clean water used for drinking, cooking, bathing and washing. This community wants the condition of the Ciliwung River to return to how it once was, and we're getting there -- one piece of garbage at a time.

You can watch a video with English subtitles about the KPC Bogor community here.

More information about KPC Bogor can be found here or via Twitter @tjiliwoeng and Facebook.com/KPCBogor.


Een Irawan Putra is currently director of the Indonesia Nature Film Society, coordinator for the Ciliwung River Care Community (KPC Bogor), head of the TELAPAK West Java Territorial Body, member of TELAPAK, and member of LAWALATA IPB (Student Nature Club, Bogor Agricultural University). Formerly he was a forest researcher at Greenpeace Southeast Asia's Indonesia office (2005); producer, cameraman, and editor at Gekko Studio (2005-2012); vice director of PT. Poros Nusantara Media (2012); and vice president of the Association of Indonesia Peoples' Media and Television (ASTEKI) (2012).

April 15 2013

11:00

San Francisco, a City That Knows Its Faults

Low vacancy, so many homeless people, beautiful old buildings, shuttle buses to Silicon Valley ... and warning, I'm going to talk about earthquakes. If it gets scary, stick with me: There's good news at the end, ways to better understand the specific risks facing San Francisco, and some easy places to start.

Let's Talk Numbers

After the 1989 Loma Prieta earthquake, 11,500 Bay Area housing units were uninhabitable. If there were an earthquake today, the current estimate (from Spur) is that 25% of SF's population would be displaced for anywhere from a few days to a few years. However, San Francisco's shelter capacity at its peak can serve only roughly 7.5% of the overall population -- and that is only for short-term stays in places like the Moscone Center. So where would the remaining 17.5% of the population go?

  1. Some people may decide to leave the city and start over somewhere else (something called "outmigration," which is not ideal for the economic health of a city).
  2. Some people may take longer-term housing in vacant units around the city. But this is particularly tough in San Francisco, where vacancy is currently at an all-time low of about 4.2%.
  3. This brings us to the most ideal scenario: staying put -- something referred to in the emergency management world as "shelter-in-place."
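
To make those percentages concrete, here's the back-of-the-envelope arithmetic as a quick Python sketch. The population figure is my assumption (San Francisco was home to roughly 800,000 people around this time); the percentages are the Spur estimates quoted above.

    # Rough headcount behind the displacement percentages above.
    population = 800_000               # assumed SF population; swap in the census figure
    displaced = 0.25 * population      # Spur estimate: 25% displaced by a major quake
    sheltered = 0.075 * population     # peak shelter capacity: roughly 7.5%, short-term
    unhoused = displaced - sheltered   # the remaining 17.5% of the population

    print(f"Displaced:            {displaced:,.0f}")   # 200,000
    print(f"Short-term shelter:   {sheltered:,.0f}")   # 60,000
    print(f"Need somewhere to go: {unhoused:,.0f}")    # 140,000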

[Map: projected earthquake ground shaking in San Francisco, with key]

What is Shelter-in-Place?

Shelter-in-place is "a resident's ability to remain in his or her home while it is being repaired after an earthquake -- not just for hours or days after an event, but for the months it may take to get back to normal. For a building to have shelter-in-place capacity, it must be strong enough to withstand a major earthquake without substantial structural damage. [...] residents who are sheltering in place will need to be within walking distance of a neighborhood center that can help meet basic needs not available within their homes."

A recent Spur report, "Safe Enough to Stay," estimates that San Francisco needs 95% of its housing to be shelter-in-place ready. But currently there are 3,000 addresses -- home to 15% of the population -- in a scary category called "soft story buildings."

[Photo: Marina district damage from the 1989 Loma Prieta earthquake]

A soft story building is characterized by a story with a lot of open space. Parking garages, for example, are often soft stories, as are large retail spaces or floors with a lot of windows.

Live in SF? What you can do:

  1. For starters, find out if your house is on the list of soft story buildings, here. The SF Board of Supervisors also recently voted to pass a "Mandatory Seismic Retrofit Program," which will require residents, or their landlords, to fix these buildings. Might as well check your block while you're at it. If you are a renter, contact your landlord. If you're an owner, look into some seismic retrofitting.
  2. Check the map above to see what sort of liquefaction zone you're in. If you're in one of the better zones, plan to stock what you need for at least 72 hours while the bigger emergencies are dealt with.
  3. Sign up here to join other San Franciscans looking for better tools to deal with these issues and we'll keep you up to date. At Recovers, we're trying to help San Francisco prepare -- and prepare smartly.

Have an idea or question? Get in touch. We want to help.


Emily Wright is an experience designer and illustrator. She studied at Parsons School of Design and California College of the Arts. Before joining Recovers, Emily was a 2012 Code for America Fellow focused on crisis response and disaster preparedness. She likes pretzels, and engaging her neighbors through interactive SMS projects.

April 02 2013

10:39

How Public Lab Turned Kickstarter Crowdfunders Into a Community

Public Lab is structured like many open-source communities, with a non-profit hosting and coordinating the efforts of a broader, distributed community of contributors and members. However, we are in the unique position that our community creates innovative open-source hardware projects -- tools to measure and quantify pollution -- and unlike software, it takes some materials and money to actually make these tools. As we've grown over the past two years, from just a few dozen members to thousands today, crowdfunding has played a key role in scaling our effort and reaching new people.


Kickstarter: economies of DIY scale

Consider a project like our DIY Spectrometry Kit, conceived just after the Deepwater Horizon oil spill as an attempt to identify petroleum contamination. In the summer of 2012, just a few dozen people had ever built one of our designs, let alone uploaded and shared their work. As the device's design matured to the point that anyone could easily build a basic version for less than $40, we set out to reach a much larger audience -- while identifying new design ideas, use cases, and contributors -- through a Kickstarter project. Our theory was that many more people would get involved if we offered a simple set of parts in a box, with clear instructions for assembly and use.

By October 2012, more than 1,600 people had backed the project, raising over $110,000 -- and by the end of December, more than half of them had received a spectrometer kit. Many were up and running shortly after the holidays, and we began to see regular submissions of open spectral data at http://spectralworkbench.org, as well as new faces and strong opinions on Public Lab's spectrometry mailing list.

Kickstarter doesn't always work this way: Often, projects turn into startups, and the first generation of backers simply becomes the first batch of customers. But as a community whose mission is to involve people in the process of creating new environmental technologies, we had to make sure people didn't think of us as a company but as a community. Though we branded the devices a bit and made them look "nice," we made sure previous contributors were listed in the documentation, which explicitly welcomed newcomers into our community and encouraged them to get plugged into our mailing list and website.


As a small non-profit, this approach is not only in the spirit of our work, but essential to our community's ability to scale up. To create a "customer support" contact rather than a community mailing list would be to make ourselves the exclusive contact point and "authority" for a project which was developed through open collaboration. For the kind of change we are trying to make, everyone has to be willing to learn, but also to teach -- to support fellow contributors and to work together to improve our shared designs.

Keeping it DIY

One aspect of the crowdfunding model that we have been careful about is the production methods themselves. Procuring parts for 1,000 spectrometers is certainly vastly different from one person assembling a single device, but we all agreed that the device should be easy to assemble without buying a Public Lab kit -- from off-the-shelf parts, at a reasonable cost. Thus the parts we chose were all easily obtainable -- from the aluminum conduit box enclosure to the commercially available USB webcams and the DVD diffraction grating that makes spectrometry possible.


While switching to a purpose-made "holographic grating" would have made for a slightly more consistent and easy-to-assemble kit (not to mention the relative ease of packing it vs. chopping up hundreds of DVDs with a paper cutter...), it would have meant that anyone attempting to build their own would have to specially order such grating material -- something many folks around the world cannot do. Some of these decisions also made for a slightly less optimal device -- but our priority was to ensure that the design was replicable, cheap, and easy. Advanced users can take several steps to dramatically improve the device, so the sky is the limit!
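
The software side of the device is simple enough to sketch, too. Below is a minimal Python illustration in the spirit of what Spectral Workbench does (a simplified sketch, not the production code): average a horizontal band of the webcam image to get one intensity value per pixel column, then convert columns to wavelengths with a linear calibration through two known reference lines. The file name, row band, and calibration points are illustrative assumptions; the mercury lines of a compact fluorescent lamp (about 436 and 546 nm) are a common calibration source.

    # Sketch: convert a webcam photo from a DVD-grating spectrometer into a spectrum.
    # Requires numpy and Pillow; file name, row band, and calibration points are
    # hypothetical and must be measured for a real device.
    import numpy as np
    from PIL import Image

    def image_to_spectrum(path, row_band=(230, 250),
                          cal=((120, 436.0), (480, 546.0))):
        """Return (wavelengths_nm, intensities) extracted from a spectrometer photo."""
        img = np.asarray(Image.open(path).convert("L"), dtype=float)
        band = img[row_band[0]:row_band[1], :]  # horizontal strip lit through the slit
        intensity = band.mean(axis=0)           # average the rows: one value per column
        (x1, w1), (x2, w2) = cal                # two known (pixel column, nm) anchors
        slope = (w2 - w1) / (x2 - x1)           # linear pixel-to-wavelength calibration
        wavelengths = w1 + slope * (np.arange(img.shape[1]) - x1)
        return wavelengths, intensity

    # Usage: wl, inten = image_to_spectrum("spectrum.jpg"); plot or upload the pair.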

The platform effect

One clear advantage of distributing kits, besides the bulk prices we're able to get, is that almost 2,000 people now have a nearly identical device -- so they can learn from one another with greater ease, not to mention develop applications and methodologies which thousands of others can reproduce with their matching devices. We call this the "platform effect" -- where this "good enough" basic design has been standardized to the point that people can build technologies and techniques on top of it. In many ways, we're looking to the success of the Arduino project, which created not only a common software library, but a standardized circuit layout and headers to support a whole ecology of software and hardware additions which are now used by -- and produced by -- countless people and organizations.


As we continue to grow, we are exploring innovative ways to use crowdfunding to get people to collaboratively use the spectrometers they now have in hand to tackle real-world problems. Recently, we have launched the Spectral Challenge, a kind of "X Prize for DIY science", but it's crowdfunded -- meaning that those who support the goals of the Challenge can participate in the competition directly, or by contributing to the prize pool. Additionally, Public Lab will continue to leverage more traditional means of crowdfunding as our community develops new projects to measure plant health and produce thermal images -- and we'll have to continue to ensure that any kits we sell clearly welcome new contributors into the community.

The lessons we've learned from our first two kit-focused Kickstarters will help us with everything from the box design to the way we design data-sharing software. The dream, of course, is that in years to come, as we pass the 10,000- and 100,000-member marks, we continue to be a community which -- through peer-to-peer support -- helps one another identify and measure pollution without breaking the bank.

The creator of GrassrootsMapping.org, Jeff Warren designs mapping tools and visual programming environments, and flies balloons and kites, as a fellow at the Center for Future Civic Media and a student in the MIT Media Lab's Design Ecology group, where he created the vector-mapping framework Cartagen. He co-founded Vestal Design, a graphic/interaction design firm, in 2004, and in 2006-7 directed the Cut&Paste Labs project, a year-long series of workshops on open source tools and web design, with Lima designer Diego Rotalde. He is a co-founder of Portland-based Paydici.com.

September 04 2012

13:13

From Parenting Listservs to Comedian Message Boards, Collaboration Starts With Community

"A proper community, we should remember also, is a commonwealth: a place, a resource, an economy. It answers the needs, practical as well as social and spiritual, of its members..."
- Wendell Berry

In the days immediately after giving birth, I gave thanks for a listserv.

It was Day 4 of my daughter's life, and I was having trouble nursing (sorry if that's TMI [too much information]). In a moment of desperation, I sent an email to the neighborhood parenting listserv: "Need in-home lactation consultant this weekend please." Within minutes, several strangers had emailed me the names and contact information of consultants who lived within a five-block radius of me. Talk about customer service.

A couple of months later, I turned to the listserv for nanny recommendations. One woman replied to me and asked if I might consider daycare; if so, she highly recommended a place about a mile from my apartment. Her email really got my husband and me thinking, and we ended up visiting the daycare and falling in love with it; our daughter is there now, as I type this.

Of course, the way I've described the listserv so far doesn't really illustrate collaboration at work. Fellow listserv members helped me -- and, in other instances, I helped them -- but we didn't exactly "collaborate." We didn't create something together. But wait -- if it takes a village to raise a child (and I believe it does); and if this listserv put me in touch with village members I otherwise wouldn't have known existed; then is it maybe an example of collaboration after all?

In other words, is community a form of collaboration?

I'd posit that the answer is "yes."

Quid Pro Quo

A community can help you do something better than you could have done it alone, whether that something is being a parent or being a journalist. As Josh Stearns of Free Press recently wrote on Collaboration Central, journalists are increasingly coming together to form ad-hoc networks of support. In other words, journalists are helping each other do their jobs, whether by sharing news tips or safety precautions or the best place in town to get a camera repaired. They are forming communities (Stearns uses the term "solidarity"), and collaborating within those communities on matters editorial, legal, and operational.

Back to my parenting listserv. Are we really collaborating with each other, or just helping each other out? Scratch beneath the surface and "help" and "collaborate" may not mean such different things. In an editorial collaboration, one news organization may "help" another by providing complementary resources or expertise. Is this help provided free of charge, out of the goodness of someone's heart? No, probably not.

But the parenting listserv doesn't necessarily run on goodness, either. On some level, I help other members because other members help me. That's human nature. Of course, I'm happy to help another mom if I have information at my fingertips that she needs, or to share an experience. But as a member of the listserv, of the community, I expect the help will flow back to me, as well.

Adios, Silos


Two newsrooms come together to conduct an investigation. They share staff; they share budgets. On the other hand, two freelance journalists come together. They share story leads, sources, lessons learned from the field. Two parents come together. They share daycare recommendations, news about product recalls, and warnings about wayward dentists. (This really happened.) Whether the outcome is a news report, a scoop or an informed parent (or healthier, happier child) -- behind the scenes, the pattern is the same: individuals with a common interest coming together, instead of functioning as silos.

This is remarkable because of just how many silos continue to dominate our world.

In my consulting work, for example, I'm struck by how many organizations still have a culture where departments operate in isolation -- where an employee has no idea that the person in the office next door has information that could help her do her job better. Christa Avampato recently wrote about an innovative way to get executives and lower-level employees talking and collaborating on new product ideas.

It's staggering, really, when you consider how revolutionary it would be for more co-workers to just talk to each other, and for more people to just talk to other people in their field.

It's the People, Stupid

But for now, it's still noteworthy when a community forms, and holds up over time. In public media -- an industry where I've spent a lot of my career -- it took a handful of individuals starting a weekly Twitter chat (#pubmedia chat, R.I.P.) to get many people across the industry to begin to feel like part of a community. Around the time that chat formed, I happened to be the project manager of a multimillion-dollar collaboration funded in large part by the Corporation for Public Broadcasting (CPB). While we learned a lot from that project (I wrote about it here), I honestly think the chat ultimately had a greater ripple effect in terms of increasing collegiality and collaboration industrywide.


Why? Because relationships, not organizations, fuel collaboration. In fact, the best outcome I saw from the CPB-funded collaboration wasn't one of the contractually obligated editorial deliverables -- it was the relationships between individuals. A producer at the PBS NewsHour now knew who to call over at Marketplace, or NPR, and vice versa. These folks now had history together, and therefore trust, and it was easy to just pick up the phone or send an IM. With those channels of communication open, it became easier for collaborations large and small to take root.

To be sure, public media is no paragon of collaboration -- like most industries, it has a ways to go. But that Twitter chat morphed into a Facebook group, which, as I recently mentioned, I consider an invaluable professional resource. Like the networks Stearns profiled, this community sprung up because individuals saw a need -- and it's lasted.

A Venn Diagram of Communities

I'm a performer, and in addition to public media and parenting, comedy is yet another community in the Venn Diagram of my life. When I lived in Washington, D.C., Washington Improv Theater was the nexus of my community. Since moving to New York City, I haven't felt as strong of a connection to a group of performers, but one organization that helps provide a sense of connection is G.L.O.C. -- aka Gorgeous Ladies of Comedy (recently profiled in the New York Times). G.L.O.C.'s mission is to foster community among female comedians. Another vehicle for community among comedians is the Improv Resource Center, message boards that performers all over the country use to discuss the art of improv and related matters.

And my comedy friends from D.C.? We're currently collaborating on a web series, long-distance, using Google Hangouts.

The human need for community is as old as time. The interwebs just give us new ways to connect. And these connections provide an essential framework for the kind of collaboration that helps us do our jobs better, and with a greater feeling of connectedness ... of not being in it alone. It's almost enough to make a person sing "kumbaya."

Now You

Do you agree that community is a form of collaboration? And are there areas of your life where you find community lacking -- for example, in your workplace? How can you plant the seeds of community in place of silos? What resources do you rely on to help you feel connected to the communities in your life?

"What should young people do with their lives today? Many things, obviously. But the most daring thing is to create stable communities in which the terrible disease of loneliness can be cured."
- Kurt Vonnegut

Amanda Hirsch is the editor of Collaboration Central. She is a writer, social media consultant and performer who lives in Brooklyn, N.Y. The former editorial director of PBS.org, she blogs at amandahirsch.com and spends way too much time on Twitter.

Photo above by Flickr user Ed Yourdon.


August 30 2012

13:13

The Rise of Ad-Hoc Journalist Support Networks

Journalistic collaboration isn't just something that happens between newsrooms. Increasingly, journalists working outside of traditional news organizations are coming together to support each other in a range of ways, from offering safety advice when covering protests to sharing news tips, local resource recommendations and more.

Safety in Numbers

"When ecosystems change and inflexible institutions collapse," Clay Shirky wrote in a post on his blog, "their members disperse, abandoning old beliefs, trying new things, making their living in different ways than they used to." In the news industry, an ecosystem is emerging that's fueled by independent and citizen reporters, along with a new generation of small non-profit news sites. These new journalistic entities are putting themselves on the line without the kind of legal, administrative or technological support of major newsrooms.

"Journalists who work for big institutions will continue to have better protections," Rebecca Rosen noted in The Atlantic almost a year ago, "not because of laws that protect them but because of the legal power their companies can buy." That means journalists outside such institutions need networks of support to provide protection for them, and for the work they do.

The lack of support and protection for journalists has made this one of the most deadly and dangerous times to be an independent journalist. The International News Safety Institute lists almost 90 journalists and media staff who have been killed in 2012 alone. The Committee to Protect Journalists found that of the 179 journalists imprisoned worldwide in 2011, 86 were digital journalists and 78 were freelancers. Here in the U.S., nearly 90 journalists have been arrested or detained in the past year, and in state after state, citizens have scuffled with police over their right to record. As these statistics make clear, in journalism's changing ecosystem, networks that provide protections for journalists are essential.

Emerging Networks

In my work tracking press suppression and journalist arrests, I'm beginning to see some of those networks emerging. For example, in the week leading up to the NATO summit, a group of independent journalists organized a Google Group email list to share information and connect on the ground in Chicago. Roughly 50 journalists from all over the world joined the list, and in the days leading up to the protests they used it to plan their coverage, share local tips (like this map of places to buy camera equipment in a pinch), and socialize. As the protests swung into high gear, the list was alive with posts from people comparing notes, sharing where the action was, and helping each other confirm details or track down sources.

Joe Macera, a local Chicago journalist who works with Truthout and the Occupied Chicago Tribune, set up the NATO email list in hopes of connecting journalists around the nation to local Chicago independent media. One member of the list, Aaron Cynic, said via email that he found it useful for journalistic support and collaboration, but also for legal support. He said the list was helpful for "creating solidarity between us, fostering relationships, sharing information and photos, and also, getting information to the NLG [National Lawyers Guild] to help with people that had been arrested." Another member of the list, Ryan Williams, lamented via email the lack of diversity on the list, but acknowledged that "the list was great ... as a networking resource, and as a good early warning system for developing stories."

Journalists used the list for everything from meeting up for dinner to providing information on movements of marches and protests. Kevin Gosztola, another list member, pointed out via email, "You could run questions by others, ask what to do next if you hit a roadblock, inform others of something that happened that you think is an abuse of power, etc." After NATO, the members decided to keep it going as a forum and network for journalists who are covering protests, Occupy and a related set of issues. (Disclosure: I have been on the email list since before the NATO summit, using it to monitor reports of press suppression at protests.)

Rising Solidarity

The NATO email list was unique as it merged online and offline components and was truly ad-hoc in nature. Other networks that have emerged tend to occupy either an online or offline space, but rarely both. The local meet-ups by Hacks/Hackers and Online News Association chapters that have developed and spread quickly across the country (and world) are great examples of how local journalists are connecting and collaborating in person to support their work. Online, Twitter chats like #WJCHAT and the email and blog network Carnival of Journalism represent the digital equivalent of such collaboration where journalists are debating critical issues about the field, sharing lessons about their work, and supporting each other.

Both online and off, these new networks are designed to provide something the journalism ecosystem is largely lacking: solidarity. In a passionate post, Bryan Westfall, an independent journalist in the Bay Area, writes, "The work we do in these circles is up against something violent, self serving, and relentless ... we need each other in a way that must be personal in a way no version of simple 'networking' could ever be."

Many independent and freelance journalists I talk to describe feeling isolated in their work. "We need to continue to foster that solidarity," Cynic told me. "We don't have the same resources or protections as corporate media -- all we have is each other."

Networking with the Audience

During a recent Free Press webinar, Tim Pool, one of the best-known livestream journalists covering Occupy protests over the last year, said, "The Internet is my fixer." He was referring to the way his audience would step up during his coverage to help get him out of a bind, whether it was to get him food or water or a spare battery. Indeed, the webinar itself was designed less as a formal panel and more as an open conversation, drawing on the legal and safety expertise of the panelists but complementing that with personal stories and advice from the audience. The event helped connect independent journalists before the Republican and Democratic national conventions and foster more ad-hoc networks of support. This highlights the potential of new networks that enable audience members to become media allies -- both part of the journalistic process and advocates or defenders for that process.

More is Needed

To remake journalism, we need to build new networks of resiliency for the future of news. When we talk about the journalistic resources we have lost in recent years we tend to focus almost exclusively on the number of jobs lost, not on the capacity of the entire field to fight for the First Amendment, protect each other and our reporting, and support experimentation and eventually sustainability. To the best of my knowledge, there has been no comprehensive effort to map the needs of the new journalism ecosystem in these terms. Perhaps now is the time.

Josh Stearns is a journalist, organizer and community strategist. He is Journalism and Public Media Campaign Director for Free Press, a national, non-partisan, non-profit organization working to reform the media through education, organizing and advocacy. He was a co-author of "Saving the News: Toward a national journalism strategy," "Outsourcing the News: How covert consolidation is destroying newsrooms and circumventing media ownership rules," and "On the Chopping Block: State budget battles and the future of public media." Find him on Twitter at @jcstearns.

Photo above by Flickr user Paul Weiskel.


13:13

Post-Disaster, We Can Do More Than 'Feed It to Fix It'

Did something go wrong? Bring a casserole. While the type of barbeque may vary regionally, if you're standing near storm damage, there's likely a home-cooked meal on the way. Following a disaster, competent ladies fill church and school kitchens, turning out hundreds of sandwiches. Restaurants donate buffet trays of wings and lasagna. Community organizations host spaghetti dinner after spaghetti dinner, feeding survivors and volunteers alike. Quite simply, we live in a casserole culture, and we can harness this tendency for a better local response.

Why, exactly, our knee-jerk reaction as a culture is to bake a pie in the face of unthinkable loss is anyone's guess. I have a theory that our Norman Rockwell tendencies are linked directly to what we are told we can and cannot do after a disaster.

'feed it to fix it'

Unless you happened to keep the FEMA National Incident Management Framework around for bedtime reading, you probably have no clue who is in charge of what on the ground after a disaster. Even if you do know what is supposed to happen, the practice is often far different than the plan. As an unaffiliated volunteer, you're often sent home, told off, or simply not answered when you try to help.


But food -- that makes sense. The Red Cross won't accept home-cooked donations, but local churches will. You're greeted with thanks instead of confusion if you drop off sandwiches and Gatorade at a worksite. We, as a culture, have assumed permission to feed during a disaster, and we get after it. Think: Studs Terkel meets Paula Deen.

I, like you, love a good plate of mashed potatoes. But our "feed it to fix it" tendencies right now fall short of our potential to help out at the community level. Here are a few suggestions for building a better community recovery:

Use your skills

Yes, you can cook. But are you also a lawyer? Bilingual? Great with computers? Those skills are every bit as necessary to the recovery as Dunkin Donuts -- survivors will need tax advice, translation and resource management help.

Use your head

The difference between lasagna and labor is that it is currently a painful process to volunteer skills through large, regional organizations. Your community can independently plan to share skills and resources before a disaster -- just agree upon a system beforehand.

Use your leaders

Your emergency management department and city leadership can use your help. Can you start a Community Emergency Response team? Would you agree to help the EM run social media during a disaster? Get in touch and plan ahead!

Use this recipe: the singular best recipe for chocolate chip recovery cookies I have ever encountered:

Catastrophe Cookies

Ingredients:

  • 1 1/2 cups all-purpose flour
  • 1/2 teaspoon baking soda
  • 1/2 teaspoon salt
  • 1/2 cup (1 stick) cold unsalted butter, cut into 1/2-inch pieces
  • 3/4 cup tightly packed light brown sugar
  • 1/2 cup granulated sugar
  • 1 1/2 teaspoons vanilla extract
  • 1 large egg, at room temperature, lightly beaten
  • 6 to 7 ounces of chocolate chips

Directions:

  1. Preheat oven to 350 F.
  2. Cream the butter and sugars together in a large bowl.
  3. Add the vanilla and egg; keep on mixing.
  4. Mix the dry ingredients together, then add slowly to the large bowl of wet ingredients.
  5. Stir in the chocolate chips.
  6. If you're patient, refrigerate the dough for a couple of hours.
  7. If not, just go ahead and bake those cookies for 11-13 minutes.
  8. Distribute to sweaty workers, affected families, stressed organizers, and your own family.
P.S. Check out our work in action this week at http://IsaacGulf.Recovers.org.

Caitria O'Neill is the CEO of Recovers.org. She received a B.A. degree in government from Harvard University in 2011. She has worked for Harvard Law Review and the U.S. State Department, and brings legal, political and editorial experience to the team. O'Neill has completed the certificate programs for FEMA's National Incident Management System 700 and 800, and Incident Command Systems 100 and 200. She has also worked with Emergency Management Directors, regional hospital and public health organizations and regional Homeland Security chapters to develop partnerships and educate stakeholders about local organization and communication following disasters.

August 21 2012

14:00

Why Did So Many News Outlets Not Link to Pussy Riot Video?

The Russian punk band Pussy Riot must have done something really bad to merit a possible seven years in prison, I figured. Finding all descriptions of their behavior to be filled with euphemism, I wanted to see their offensive behavior myself.

Who do you turn to when you want to see the world as it is, rather than the world as others tell you it is? My parents would have turned on network television. Or read the Progress-Bulletin or Daily Report. I went to YouTube and searched for "PussyRiot" and watched what struck me as the video of the actions I had heard about second- and third-hand. The video, I thought, was edited in such a way that made both the church and the band look like victims, depending on your point of view. To me, that was a good indication of its authenticity.

But I don't really know, and I trust sources like the New York Times, and especially its reporters on the ground in Moscow, to tell me whether what I'm seeing is accurate. So I next went to nytimes.com and its story. The Times had links to videos. But a quick look around the other five top news sites in the U.S. showed that the Times was the only popular publication that linked to the videos of the band's action -- the action that landed its members in prison for three months while awaiting trial. So why was the Times the only source to have linked to the video? And what does that news organization's unusual behavior mean?

a lack of links

The other sites -- Yahoo News, Huffington Post, ABC News, NBC News and USA Today -- failed me. These are sites that are both praised and vilified as "aggregators" or "MSM." But all made the same editorial decision -- and didn't help their audience see the key fact of this case for itself.

But I wonder why the link wasn't made. The people who work there are professionals. And I have no reason to believe they are more or less immoral than I am.

Going back more than a decade, academic studies have found that few news stories actually link to source information. In 2001, one in 23 stories about the Timothy McVeigh execution linked to external sources. And a 2010 study indicates that U.S. journalists are less inclined to link to foreign sources than domestic ones, with fewer than 1 percent of foreign news stories on U.S. news sites containing links.

So, why?

Two prominent academic studies seem to indicate that the presence of inbound and outbound links increases credibility for both professional and amateur sites. Are professional journalists unaware of those studies? Are they aware, but think they're bunk?

One study indicates that journalists don't link because they are concerned about the financial implications -- that users who leave the site will not return to drive up ad impressions. Another seems to indicate that U.S. journalists are particularly skeptical of foreign sources of news because they are less confident of their own ability to judge the credibility of foreign sources.

enhancing credibility

From my experience in online newsrooms, both those findings seem plausible. But they also seem incomplete. My own additional hypothesis is that hyperlinking has been left primarily to automation, and that editors and reporters who've been asked for the last decade to "do more with less" have decided that links to original source material -- which, at least according to a few studies, enhance their credibility -- are not worth their time.

But other studies have shown that hyperlinks in the text of a story distract readers -- even the small percentage of readers who click on the links -- and reduce reading comprehension. That said, I suspect the journalists who didn't include links to the Pussy Riot videos are completely unaware of such studies (which are summarized nicely throughout Nicholas Carr's book "The Shallows").

If there's credit to be given for The New York Times' decision to include the links in the story, it goes to the reporter in Moscow, David Herszenhorn, according to three sources who work at the Times. The role Herszenhorn played is important: This was a task not left to an editor or producer in New York, but one the Moscow correspondent took upon himself. The links add to his credibility.

"I have to say I am completely floored that other news organizations would not link to the videos, since they explain so much about the story," Kyle Crichton, the editor who worked on the story, wrote to me in response to an email query.

My rather slack Friday afternoon efforts to obtain comment from other news organizations that didn't link to the videos yielded no responses. I still hope to hear from them, to understand whether the lack of links was merely an oversight or a conscious omission. Herszenhorn also did not reply to my email late Friday.

The reporter -- and at this point he, rather than his employer, deserves credit for the links -- selected the more popular Russian-language versions on YouTube rather than the English subtitled versions, which had fewer views but would be more useful to the Times' English-language audience.

"There is some profanity on the soundtrack, so I presume that is why David chose not to include [the videos with English subtitles]," Crichton said in his email to me. "That strikes me as fair, since the text isn't as important as the overall spectacle of their 'performance.'"

the political impact of linking

I also wondered what the political impact of including such links might be. I've had newsroom conversations about whether linking to a source constitutes endorsement. The modern version of this is manifested in newsroom social media policies that discourage journalists from re-tweeting information from sources and in Twitter bios that say "RT ≠ endorsement."

I teach my students, and write in Chapter 7 of "Producing Online News," that links in a story are akin to quotes. You're responsible for the facts of the source's statement, but not the opinions. And stories without links today seem as incomplete as stories without quotes from named sources have always been.

In foreign stories, though, links to banned material could have an effect on both the news organization's ability to distribute news and on its reporters' ability to collect it. Crichton wasn't concerned.

"I don't think our including the videos will have any impact on our future ability to report in Russia," Crichton said in his email to me. "If it were Iran, maybe, but Russia isn't like that, yet."

What discussions do you have in your newsroom about including or excluding links? If you aren't having any, consider consulting with -- and funding -- the mass communication researchers who can help you make your journalism more credible, more memorable and more useful.


August 20 2012

13:34

How Wikipedia Manages Sources for Breaking News

Almost a year ago, I was hired by Ushahidi to work as an ethnographic researcher on a project to understand how Wikipedians managed sources during breaking news events.

Ushahidi cares a great deal about this kind of work because of a new project called SwiftRiver that seeks to collect and enable the collaborative curation of streams of data from the real-time web about a particular issue or event. If another Haiti earthquake happened, for example, would there be a way for us to filter out the irrelevant and the misinformation, and build a stream of relevant, meaningful and accurate content about what was happening for those who needed it? And on Wikipedia's side, could the same tools be used to help editors curate a stream of relevant sources as a team rather than as individuals?

pakistan.png

Ranking sources

When we first started thinking about the problem of filtering the web, we naturally thought of a ranking system that would rank sources according to their reliability or veracity. The algorithm would consider a variety of variables involved in determining accuracy, as well as whether sources have been chosen, voted up or down by users in the past, and eventually be able to suggest sources according to the subject at hand. My job would be to determine what those variables are -- i.e., what were editors looking at when deciding whether or not to use a source?
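
To make the idea concrete, here is a minimal sketch of the kind of scorer we first imagined -- every variable name and weight below is hypothetical, and, as the findings described later recommend, we ultimately argued against building exactly this:

    # A naive source-ranking sketch: blend a source's historical accuracy
    # with community votes and past usage into one score. Weights are arbitrary.
    def rank_source(past_accuracy, upvotes, downvotes, times_cited):
        """Return a 0-1 reliability score (hypothetical formula)."""
        total_votes = upvotes + downvotes
        vote_score = upvotes / total_votes if total_votes else 0.5
        citation_bonus = min(times_cited / 100.0, 1.0)  # cap the usage effect
        return 0.5 * past_accuracy + 0.3 * vote_score + 0.2 * citation_bonus

    # Example: a wire service with a strong track record
    print(rank_source(past_accuracy=0.9, upvotes=120, downvotes=10, times_cited=300))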

I started the research by talking to as many people as possible. Originally I was expecting that I would be able to conduct 10 to 20 interviews as the focus of the research, finding out how those editors went about managing sources individually and collaboratively. The initial interviews enabled me to hone my interview guide. One of my key informants urged me to ask questions about sources not cited as well as those cited, leading me to one of the key findings of the report (that the citation is often not the actual source of information and is often provided in order to appease editors who may complain about sources located outside the accepted Western media sphere). But I soon realized that the editors with whom I spoke came from such a wide variety of experience, work areas and subjects that I needed to restrict my focus to a particular article in order to get a comprehensive picture of how editors were working. I chose a 2011 Egyptian revolution article on Wikipedia because I wanted a globally relevant breaking news event that would have editors from different parts of the world working together on an issue with local expertise located in a language other than English.

Using Kathy Charmaz's grounded theory method, I chose to focus on editing activity (in the form of talk pages, edits, statistics and interviews with editors) from January 25, 2011, when the article was first created (within hours of the first protests in Tahrir Square), to February 12, when Mubarak resigned and the article changed its name from "2011 Egyptian protests" to "2011 Egyptian revolution." After reviewing big-picture analyses of the article using Wikipedia statistics on top editors, locations of anonymous editors, etc., I started with an initial coding of the actions taking place in the text, asking the question, "What is happening here?"

I then developed a more limited codebook using the most frequent/significant codes and proceeded to compare different events with the same code (looking up relevant edits of the article in order to get the full story), and to look for tacit assumptions that the actions left out. I did all of this coding in Evernote because it seemed the easiest (and cheapest) way of importing large amounts of textual and multimedia data from the web, but it wasn't ideal: talk pages need to be reformatted when imported, and I ended up coding the data in a single column, since putting each talk-page conversation in its own cell would have been too time-consuming.

evernote.png

I then moved to writing a series of thematic notes on what I was seeing, trying to understand, through writing, what the common actions might mean. I finally moved to the report writing, bringing together what I believed were the most salient themes into a description and analysis of what was happening according to the two key questions that the study was trying to ask: How do Wikipedia editors, working together, often geographically distributed and far from where an event is taking place, piece together what is happening on the ground and then present it in a reliable way? And how could this process be improved?

Key variables

Ethnography Matters has a great post by Tricia Wang that talks about how ethnographers contribute (often invisible) value to organizations by showing what shouldn't be built, rather than necessarily improving a product that already has a host of assumptions built into it.

And so it was with this research project that I realized early on that a ranking system conceptualized this way would be inappropriate -- for the single reason that, along with characteristics for determining whether a source is accurate (such as whether the author has a history of producing accurate news articles), a number of important variables are independent of the source itself. On Wikipedia, these include variables such as the number of secondary sources in the article (Wikipedia policy calls for editors to use a majority of secondary sources), whether the article is based on a breaking news story (in which case the majority of sources might have to be primary, eyewitness sources), or whether the source is notable in the context of the article. (Misinformation can also be relevant if it is widely reported and significant to the course of events, as Judith Miller's New York Times stories were for the Iraq War.)

nyt.png

This means that you could have an algorithm for determining how accurate the source has been in the past, but whether you make use of the source or not depends on factors relevant to the context of the article that have little to do with the reliability of the source itself.

Another key finding recommending against source ranking is that Wikipedia's authority originates from its requirement that each potentially disputed phrase is backed up by reliable sources that can be checked by readers, whereas source ranking necessarily requires that the calculation be invisible in order to prevent gaming. It is already a source of potential weakness that Wikipedia citations are not the original source of information (since editors often choose citations that will be deemed more acceptable to other editors) so further hiding how sources are chosen would disrupt this important value.

On the other hand, having editors provide a rationale for choosing particular sources, and showing the full variety of sources consulted rather than only those that survived page-loading constraints, may be useful -- especially since these discussions do often take place on talk pages but are practically invisible because they are difficult to find.

Wikipedians' editorial methods

Analyzing the talk pages of the 2011 Egyptian revolution article case study enabled me to understand how Wikipedia editors set about the task of discovering, choosing, verifying, summarizing, adding information and editing the article. It became clear through the rather painstaking study of hundreds of talk pages that editors were:

  1. storing discovered articles either using their own editor domains by putting relevant articles into categories or by alerting other editors to breaking news on the talk page,
  2. choosing sources by finding at least two independent sources that corroborated what was being reported but then removing some of the citations as the page became too heavy to load,
  3. verifying sources by finding sources to corroborate what was being reported, by checking what the summarized sources contained, and/or by waiting to see whether other sources corroborated what was being reported,
  4. summarizing by taking screenshots of videos and inserting captions (for multimedia) or by choosing the most important events of each day for a growing timeline (for text),
  5. adding text to the article by choosing how to reflect the source within the article's categories and providing citation information, and
  6. editing: disputing the way that editors reflected information from various sources, and replacing primary sources with secondary sources over time.

It was important to discover the work process that editors were following because any tool that assisted with source management would have to accord as closely as possible with the way that editors like to do things on Wikipedia. Since the process is managed by volunteers and because volunteers decide which tools to use, this becomes really critical to the acceptance of new tools.

sources.png

Recommendations

After developing a typology of sources and isolating different types of Wikipedia source work, I made two sets of recommendations as follows:

  1. The first would be for designers to experiment with exposing variables that are important for determining the relevance and reliability of individual sources as well as the reliability of the article as a whole.
  2. The second would be to provide a trail of documentation by replicating the work process that editors follow (somewhat haphazardly at the moment) so that each source is provided with an independent space for exposition and verification, and so that editors can collect breaking news sources collectively.

variables.png

Regarding a ranking system for sources, I'd argue that a descriptive repository of major media sources from different countries would be incredibly beneficial, but that a system for determining which sources are ranked highest according to usage would yield really limited results. (We know, for example, that the BBC is the most used source on Wikipedia by a high margin, but that doesn't necessarily help editors in choosing a source for a breaking news story.) Exposing the variables used to determine relevancy (rather than adding them up in invisible amounts to come up with a magical number) and showing the progression of sources over time offers some opportunities for innovation. But this requires developers to think out of the box in terms of what sources (beyond static texts) look like, where such sources and expertise are located, and how trust is garnered in the age of Twitter. The full report provides details of the recommendations and the findings and will be available soon.

Just the beginning

This is my first comprehensive ethnographic project, and one of the things I've noticed in doing other design and research projects with different methodologies is that, although the process can seem painstaking and it can prove difficult to turn hundreds of small observations into findings that are actionable and meaningful to designers, getting close to the experience of editors is extremely valuable work that is rare in Wikipedia research. I realize now that, until I actually studied an article in detail, I knew very little about how Wikipedia works in practice. And this is only the beginning!

Heather Ford is a budding ethnographer who studies how online communities get together to learn, play and deliberate. She currently works for Ushahidi and is studying how online communities like Wikipedia work together to verify information collected from the web and how new technology might be designed to help them do this better. Heather recently graduated from the UC Berkeley iSchool where she studied the social life of information in schools, educational privacy and Africans on Wikipedia. She is a former Wikimedia Foundation Advisory Board member and the former Executive Director of iCommons - an international organization started by Creative Commons to connect the open education, access to knowledge, free software, open access publishing and free culture communities around the world. She was a co-founder of Creative Commons South Africa and of the South African nonprofit, The African Commons Project as well as a community-building initiative called the GeekRetreat - bringing together South Africa's top web entrepreneurs to talk about how to make the local Internet better. At night she dreams about writing books and finding time to draw.

This article also appeared at Ushahidi.com and Ethnography Matters. Get the full report at Scribd.com.

August 16 2012

14:00

Did Global Voices Use Diverse Sources on Twitter for Arab Spring Coverage?

Citizen journalism and social media have become major sources for the news, especially after the Arab uprisings of early 2011. From Al Jazeera Stream and NPR's Andy Carvin to the Guardian's "Three Pigs" advertisement, news organizations recognize that journalism is just one part of a broader ecosystem of online conversation. At the most basic level, journalists are following social media for breaking news and citizen perspectives. As a result, designers are rushing to build systems like Ushahidi's SwiftRiver to filter and verify citizen media.

Audience analytics and source verification only paint part of the picture. While upcoming technologies will help newsrooms understand their readers and better use citizen sources, we remain blind to the way the news is used in turn by citizen sources to gain credibility and spread ideas. That's a loss for two reasons. First, it opens newsrooms up to embarrassing forms of media manipulation. Second, and more importantly, we're analytically blind to one of bloggers' and citizen journalists' greatest incentives: attention.

Re-imagining media representation

For my MIT Media Lab master's thesis, I'm trying to re-imagine how we think about media representation in online media ecosystems. Over the next year, my main focus will be gender in the media. But this summer, for a talk at the Global Voices Summit in Nairobi, I developed a visualization of media representation in Global Voices, which has been reporting on citizen media far longer than most news organizations.

(I'm hoping the following analysis of Global Voices convinces you that tracking media representation is exciting and important. If your news organization is interested in developing these kinds of metrics, or if you're a Global Voices editor trying to understand whose voices you amplify, I would love to hear from you. Contact me on Twitter at @natematias or at natematias@gmail.com.)

Media Representation in Global Voices: Egypt and Libya

My starting questions were simple: Whose voices (from Twitter) were most cited in Global Voices' coverage of the Arab uprisings, and how diverse were those voices? Was Global Voices just amplifying the ideas of a few people, or were they including a broad range of perspectives? Global Voices was generous enough to share its entire English archive going back to 2004, and I built a data visualization tool for exploring those questions across time and sections:

globalvoices.jpg

Let's start with Egypt. (Click to load the Egypt visualization.) Global Voices has been covering Egypt since its early days. The first major spike in coverage occurred in February 2007 when blogger Kareem Amer was sentenced to prison for things he said on his blog. The next spike in coverage, in February 2009, occurred in response to the Cairo bombing. The largest spike in Egypt coverage starts at the end of January 2011 in response to protests in Tahrir Square and is sustained over the next few weeks. Notice that while Global Voices did quote Twitter from time to time (citing 68 unique Twitter accounts the week of the Cairo bombing), the diversity of Twitter citation grew dramatically during the Egyptian uprising -- and actually remained consistently higher thereafter.
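
The counting behind those figures is simple to sketch. Assuming each archived post is available as text or HTML -- the regex and sample posts below are illustrative, not the actual pipeline -- tallying which accounts are cited, one count per post, looks roughly like this:

    import re
    from collections import Counter

    # Match Twitter handles cited in a post's body (illustrative pattern).
    HANDLE_RE = re.compile(r'@([A-Za-z0-9_]{1,15})')

    def cited_accounts(post_text):
        """Return the unique Twitter accounts cited in one post."""
        return {handle.lower() for handle in HANDLE_RE.findall(post_text)}

    def citation_counts(posts):
        """Count how many posts cite each account (per post, not per mention)."""
        counts = Counter()
        for body in posts:
            counts.update(cited_accounts(body))
        return counts

    posts = ["RT @alaa from Tahrir ...", "@sultanalqassemi translates ..."]
    print(citation_counts(posts).most_common(5))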

Tracking Twitter citations

Why was Global Voices citing Twitter? By sorting articles by Twitter citation in my visualization, it's possible to look at the posts that cite the greatest number of unique Twitter accounts. Some posts reported breaking news from Tahrir, quoting sources from Twitter. Others reported on viral political hashtag jokes, a popular format for Global Voices posts. Not all posts cite Egyptian sources; this post on the global response to the Egyptian uprising shares tweets from around the world.

twitteraccounts.jpg

By tracking Twitter citation in Global Voices, we're also able to ask: Whose voices was Global Voices amplifying? Citation in blogs and the news can give a source exposure, credibility, and a growing audience.

In the Egypt section, the most cited Twitter source was Alaa Abd El Fattah, an Egyptian blogger, software developer, and activist. One of the last times he was cited in Global Voices was in reference to his month-long imprisonment in November 2011.

Although Alaa is prominent, Global Voices relied on hundreds of other sources. The Egypt section cites 1,646 Twitter accounts, and @alaa himself appears alongside 368 other accounts.

One of those accounts is that of Sultan al-Qassemi, who lives in Sharjah in the UAE and who translated Arabic tweets into English throughout the Arab uprisings. @sultanalqassemi is the fourth most cited account in Global Voices Egypt, but the Egypt section accounts for only 28 of the 65 posts where he is mentioned. This is very different from Alaa, who is cited primarily within the Egypt section.

sultan.jpg

Let's look at other sections where Sultan al-Qassemi is cited in Global Voices. Consider, for example, the Libya section, where he appears in 18 posts. (Click to load the Libya visualization.) Qassemi is cited exactly the same number of times as the account @ChangeInLibya, a more Libya-focused Twitter account. Here, non-Libyan voices have been more prominent: Three out of the five most cited Twitter accounts (Sultan al-Qassemi, NPR's Andy Carvin, and the Dubai-based Iyad El-Baghdadi) aren't Libyan accounts. Nevertheless, all three of those accounts were providing useful information: Qassemi reported on sources in Libya; Andy Carvin was quoting and retweeting other sources, and El-Baghdadi was creating situation maps and posting them online. With Libya's Internet mostly shut down from March to August, it's unsurprising to see more outside commentary than we saw in the Egypt section.

globalvoiceslibya.jpg

Where Do We Go From Here?

This very simple demo shows the power of tracking source diversity, source popularity, and the breadth of topics that a single source is quoted on. I'm excited about taking the project further, to look at:

  • Comparing sources used by different media outlets
  • Auto-following sources quoted by a publication, as a way for journalists to find experts, and for audiences to connect with voices mentioned in the media
  • Tracking and detecting media manipulators
  • Developing metrics for source diversity, and developing tools to help journalists find the right variety of sources
  • Journalist and news bias detection, through source analysis
  • Comparing the effectiveness of closed source databases like the Public Insight Network and Help a Reporter Out to open ecosystems like Twitter, Facebook, and online comments. Do source databases genuinely broaden the conversation, or are they just a faster pipeline for PR machines?
  • Tracking the role of media exposure on the popularity and readership of social media accounts

Still Interested?

I'm sure you can think of another dozen ideas. If you're interested in continuing the conversation, try out my Global Voices Twitter Citation Viewer (tutorial here), add a comment below, and email me at natematias@gmail.com.

Nathan develops technologies for media analytics, community information, and creative learning at the MIT Center for Civic Media, where he is a Research Assistant. Before MIT, Nathan worked in UK startups, developing technologies used by millions of people worldwide. He also helped start the Ministry of Stories, a creative writing center in East London. Nathan was a Davies-Jackson Scholar at the University of Cambridge from 2006-2008.

This post originally appeared on the MIT Center for Civic Media blog.

August 14 2012

14:00

What's Next for Ushahidi and Its Platform?

This is part 2 in a series. In part 1, I talked about how we think of ourselves at Ushahidi and how we think of success in our world. It set up the context for this post, which is about where we're going next as an organization and with our platform.

We realize that it's hard to understand just how much is going on within the Ushahidi team unless you're in it. I'll try to give a summarized overview, and will answer any questions through the comments if you need more info on any of them.

The External Projects Team

Ushahidi's primary source of income is private foundation grant funding (Omidyar Network, Hivos, MacArthur, Google, Cisco, Knight, Rockefeller, Ford), and we don't take any public funding from any country so that we are more easily able to maintain our neutrality. Last year, we embarked on a strategy to diversify our revenue stream, endeavoring to decrease our percentage of revenues based on grant funding and offset that with earned revenue from client projects. This turned out to be very hard to do within our current team structure, as the development team ended up being pulled off of platform-side work and client-side work suffered for it. Many internal deadlines were missed, and we found ourselves unable to respond to the community as quickly as we wanted.

This year we split out an "external projects team" made up of some of the top Ushahidi deployers in the world, and their first priority is to deal with client and consulting work, followed by dev community needs. We're six months into this strategy, and it seems like this team format will continue to work and grow. Last year, 20% of our revenue was earned; this year we'd like to get that to the 30-40% range.

Re-envisioning Crowdmap

When anyone joins the Ushahidi team, we tend to send them off to some conference to speak about Ushahidi in the first few weeks. There's nothing like knowing that you're going to be onstage talking about your new company to galvanize you into really learning about and understanding everything about the organization. Basically, we want you to understand Ushahidi and be on the same mission with us. If you are, you might explain what we do in a different way than I do onstage or in front of a camera, but you'll get the right message out regardless.

crowdmap-screenshot-mobile-397x500.png

You have a lot of autonomy within your area of work -- or so we always claimed internally. That claim was tested earlier this year, when David Kobia, Juliana Rotich and I, as founders, were forced to ask whether we were serious about it or just paying it lip service. Brian Herbert leads the Crowdmap team, which in our world means he's in charge of the overall architecture, strategy and implementation of the product.

The Crowdmap team met up in person earlier this year and hatched a new product plan. They re-envisioned what Crowdmap could be, started mocking up the site, and began building what would be a new Crowdmap, a complete branch off the core platform. I heard this was underway, but didn't get a brief on it until about six weeks in. When I heard what they had planned, and got a complete walk-through by Brian, I was floored. What I was looking at was so different from the original Ushahidi, and thus what we have currently as Crowdmap, that I couldn't align the two in my mind.

My initial reaction was to shut it down. Fortunately, I was in the middle of a random 7-hour drive between L.A. and San Francisco, so that gave me ample time to think by myself before I made any snap judgments. More importantly, it also gave me time to call up David and talk through it with him. Later that week, Juliana, David and I had a chat. It was at that point that we realized that, as founders, we might have blinders on of our own. Could we be stuck in our own 2008 paradigm? Should we trust our team to set the vision for a product? Did the product answer the questions that guide us?

The answer was yes.

The team has done an incredible job of thinking deeply about Crowdmap users, then translating that usage into a complete redesign, which is both beautiful and functional at the same time. It's user-centric, as opposed to map-centric, which is the greatest change. But, after getting around our initial feelings of alienness, we are confident that this is what we need to do. We need to experiment and disrupt ourselves -- after all, if we aren't willing to take risks and try new things, then we fall into the same trap that those who we disrupted did.

A New Ushahidi

For about a year we've been asking ourselves, "If we rebuilt Ushahidi, with all we know now, what would it look like?"

To redesign, re-architect and rebuild any platform is a huge undertaking. Usually this means part of the team is left to maintain and support the older code while the others build the shiny new thing. It means that while you're spending months and months building the new thing, you appear stagnant and less responsive to the market. It means that you might get it wrong, and that what you build is irrelevant by the time it's launched.

Finally, after many months of internal debate, we decided to go down this path. We've started with a battery of interviews with users, volunteer developers, deployers and internal team members. The recent blog post by Heather Leson on the design direction we're heading in this last week shows where we're going. Ushahidi v3 is the complete redesign of Ushahidi's core platform, from the first line of code to the last HTML tag. On the front-end it's mobile web-focused out of the gate, and the backend admin area is about streamlining the publishing and verification process.

At Ushahidi we are still building, theming and using Ushahidi v2.x, and will continue to do so for a long time. This idea of a v3 is just vaporware until we actually decide to build it, but the exercise has already borne fruit because it forces us to ask what the platform might look like if we weren't constrained by the legacy structure we had built. We'd love to get more input from everyone on this as we go forward.

SwiftRiver in Beta

After a couple of fits and starts, SwiftRiver is now being tried out by 500-plus beta testers. It's 75% of the way to completion, but usable, so it's out and we're getting feedback from everyone on what needs to be changed, added and removed to make it the tool we all need for managing large amounts of data. It's an expensive, server-intensive platform to run, so those who use it on our servers will have to pay for that use. As always, the core code will be made available, free and open source, for those who would like to set it up and run it on their own.

In Summary

The amount of internal and external change that Ushahidi is undertaking is truly breathtaking to us. We're cognizant of just how much we're putting on the line. But we know this: In our world of technology, those who don't disrupt themselves will themselves be disrupted. In short, we'd rather go all-in to make this change happen ourselves than be mired in stagnancy and defensive activity.

As always, this doesn't happen in a vacuum for Ushahidi. We've relied on those of you who are the coders and deployers to help us guide the platforms for over four years. Many of you have been a part of one of these product rethinks. If you aren't already, and would like to be, get in touch with me or Heather to get involved and help us re-envision and build the future.

Raised in Kenya and Sudan, Erik Hersman is a technologist and blogger who lives in Nairobi. He is a co-founder of Ushahidi, a free and open-source platform for crowdsourcing information and visualizing data. He is the founder of AfriGadget, a multi-author site that showcases stories of African inventions and ingenuity, and an African technology blogger at WhiteAfrican.com. He currently manages Ushahidi's operations and strategy, and is in charge of the iHub, Nairobi's Innovation Hub for the technology community, bringing together entrepreneurs, hackers, designers and the investment community. Erik is a TED Senior Fellow, a PopTech Fellow and speaker and an organizer for Maker Faire Africa. You can find him on Twitter at @WhiteAfrican

This post originally appeared on Ushahidi's blog.

August 13 2012

14:00

Movement-Based Arts Inspire Public Lab's DIY Environmental Science

Researchers at Public Laboratory pursue environmental justice creatively, through re-imagining our relationship with the environment. Our model is to rigorously ask oddball questions, then initiate research by designing or adapting locally accessible tools and methods to collect the data we need to answer those questions.

We've found, perhaps not surprisingly, that innovation in tools and methods frequently emerges from creative practices. In the larger trend of art plus science collaboration, 2D graphics, illustration, and visualization get most of the glory. But sculpture and dance are also major drivers of environmental imagination -- and therefore scientific inquiry.

taking back the production of research supplies

publiclab.jpg

In early July, approximately 25 people gathered in the cool interior of the 600,000-square-foot Pfizer building to design and build kites and balloons. The event was led by a sculptor, Mathew Lippincott, one of the co-founders of Public Laboratory. From his workshop in Portland, Ore., he's been researching the performance of Tyvek and bamboo, as well as ultra-lightweight plastic coated with iron oxide powder that heats itself in the sun. Because community researchers around the world use commercially produced kites and balloons to lift payloads (such as visible and infrared cameras, air quality sensors, and grab samplers) high into the air, this is part of a mission-critical initiative to put the production of research supplies back into the hands of local communities.

publiclab2.jpg

dancers and scientists collaborate

publiclab3.jpg

What you may not be expecting to hear is that half of the workshop attendees were dancers or choreographers, organized by Lailye Weidman and Jessica Einhorn, two fellows of iLAND, an organization dedicated to collaboration between dancers and scientists. Inspired by embodied investigations into atmospheric pressure and dynamics, these dancers joined the sculptors to drive forward a research agenda into the little-understood urban wind condition. Other attendees included engineers, theater artists, design students, landscape architects, and urban foresters. The group spent the weekend splitting bamboo, heat-seaming painter's plastic to build a solar-heated balloon large enough to lift a person, and learning about aerodynamics by attempting to fly their creations.

This work on the replicability (ease of making) and autonomy (easily procurable materials) of DIY aerial platforms -- directed by the aesthetic and embodied sense of sculptors and dancers -- has increased the ability of non-professional scientists to ask and answer their questions about their environment.

14:00

How National Geographic Used Cowbird Storytelling Tool to Tell a Reservation's Whole Story

Sometimes, it takes more than one storyteller to get a story right -- especially when the subjects of the story are members of a community that often feels misrepresented by media.

Thanks to multimedia storytelling tool Cowbird, photographer Aaron Huey and National Geographic were able to collaborate with the people of the Pine Ridge Reservation to jointly tell their story to the world. The result: the Pine Ridge Community Storytelling Project, a companion to the August 2012 cover story in National Geographic magazine.

The Roots of a New Storytelling Approach

After working with the Oglala Lakota people for seven years, Huey felt their stories couldn't adequately be conveyed in the pages of a magazine.

ng-cover.jpg

"To make a really great narrative [in print] often means only telling the story of a couple of people, and trying to use those stories to tell the larger story of the community and where it's going," Huey said. "That's often confusing for the community itself. People always asked me why I couldn't fit in something about the all-star basketball team, or the scholars going on to college. Everyone wanted something specific and claimed that I was missing the entire story because I didn't have those things. They felt like they were misrepresented. They felt like for decades in the media, they'd been misrepresented."

While on a John S. Knight Journalism Fellowship at Stanford University, Huey reflected on this storytelling dilemma. He tried to build a multimedia platform himself that could be used by National Geographic, but he realized quickly that he didn't have "the money or the expertise" for the job. But when he discovered Cowbird, an online storytelling tool developed by Jonathan Harris, Huey knew it was just right for the stories he wanted to tell.

"It was obvious that it was the perfect collaboration. I didn't need to reinvent the wheel," he said.

Telling the Whole Story, Unfiltered

Cowbird allows people to tell stories with photos, audio, timelines, maps, and other media. Working together, Huey, Harris and the National Geographic team crafted a Cowbird story interface and embedded it into the magazine's website.

Each block on the page tells a different story, from bits of tribal history to an account of one boy's encounter with racism. One photo, titled "Rez joke #2," shows Lakota men in line at a convenience store with the caption, "Pine Ridge traffic jam."

Submissions continue to roll in. Huey screens each story to ensure that it connects to Pine Ridge or the Oglala Lakota in some way; stories are otherwise unedited.

"National Geographic was incredibly brave to run this unedited content and to trust me to do this right," Huey said.

Inspiring New Approaches

The magazine's editor-in-chief, Chris Johns, said Cowbird and the "unfiltered voice" of the Pine Ridge storytellers were a natural fit for National Geographic. "This is a future that we're terribly excited about and fully embracing. This just suits our DNA perfectly."

For a magazine whose goal is representing often new, distant and unfamiliar places and cultures, this partnership has inspired thinking about new storytelling possibilities.

"I believe in the importance of letting people have their voice," Johns said. "We want to hear the voices of others, the voices of those who were photographed, to hear what they feel about the work we are doing."

Huey said this style of storytelling will continue in his own work, and he hopes it's something more journalists will embrace.

"We can't just put stories out there that are filtered through one or two people's vision anymore," he said. He noted that tools like Cowbird that enable multifaceted storytelling are especially useful for telling stories about a community likely to feel misrepresented by media.

"It's the right tool whenever there is a possibility for people to feel misrepresented -- when we as journalists are talking in big brush strokes about whole peoples or ways of thinking," he said.

Huey hopes that the Pine Ridge project will contain more than 500 stories by the end of 2012.

"We found a way to make the story infinitely expanding," he said. "The only limitation is apathy."

Susan Currie Sivek, Ph.D., is an assistant professor in the Department of Mass Communication at Linfield College. Her research focuses on magazines and media communities. She also blogs at sivekmedia.com, and is the magazine correspondent for MediaShift.


August 01 2012

13:46

Can Google Maps + Fusion Tables Beat OpenBlock?

WRAL.com, North Carolina's most widely read online news site, recently published a tool that allows you to search concealed weapons permits down to the street level. It didn't use OpenBlock to do so. Why?

openblock-logo.png

Or, if you're like many of the journalistically and technically savvy people I've spoken with over the last few months, you might ask: Why would they? There's plenty of evidence out there to suggest the OpenBlock application is essentially a great experiment and proof of concept, but a dud as a useful tool for journalists. Many of the public records portions of Everyblock.com -- OpenBlock's commercial iteration -- are months if not years out of date. It can't be found anywhere on the public sites of the two news organizations in which the Knight Foundation invested $223,625. And there are only three sites running the open-source code -- two of them at universities, and only one of them created without funding from the Knight Foundation.

And, you, Thornburg. You don't have a site up and running yet, either.

All excellent points, dear friends. OpenBlock has its problems -- it doesn't work well in multi-city installations, some search functions don't work as you'd expect, and there's no easy way to correct bad geocoding or even identify possible failures, among other obstacles I'll describe in greater detail in a later blog post. But the alternatives also have shortcomings. And deciding whether to use OpenBlock depends on which shortcomings will be more tolerable to your journalists, advertisers and readers.

SHOULD I USE OPENBLOCK?

If you want to publish news from multiple cities or from unincorporated areas, or if you serve a rural community, I'd hold off for now. If you visit our public repositories on GitHub, you can see the good work the developers at Caktus have been doing to remove these limitations, and I'm proud to say we have a private staging site up and running for our initial partner site. But until we make the set-up process easier, you're going to have to hire a Django developer (at anywhere from $48,000 a year to $150 an hour) to customize the site with your logo, your geographic data and your news items.

The other limitation of OpenBlock right now is that it isn't going to be cheap to maintain once you do get it up and running. The next priority for me is to make the application scale better across multiple installations and therefore lower the maintenance costs. Within the small OpenBlock community, there's debate about how large a server it requires. The very good developers at OpenPlans, who did a lot of heavy lifting on the code after it was open-sourced, say it should run nicely on a "micro" instance of Amazon's EC2 cloud hosting service -- about $180 a year.

But we and Tim Shedor, the University of Kansas student who built LarryvilleKU, find OpenBlock a little too memory-intensive for the "micro" instance. We're on an Amazon Web Services "small" instance, and LarryvilleKU is on a similarly sized virtual server at MediaTemple. That costs more like $720 a year. And if you add a staging server, so your code changes break in private instead of in public, you're looking at hosting costs of nearly $1,500 a year.

And that's before your scrapers start breaking. Depending on how conservative you are, you'll want to set aside a budget for fixing each scraper somewhere between one and three times a year. Each fix might be an hour or maybe up to 12 hours of work for a Python programmer (or the good folks at ScraperWiki). If you have three data sources -- arrests, restaurant inspections and home sales, let's say -- then you may get away with a $300 annual scraper maintenance cost, or it may set you back as much as $15,000 a year.

I've got some ideas on how to reduce those scraper costs, too, but more on that later as well.

Of course, if you have someone on staff who does Python programming, who's done some work with public records, who has published a few Django sites, and who has time to spare, then your costs will go down significantly.

But just in case you don't have such a person on staff or aren't ready to make this kind of investment, what are your alternatives?

GOOGLE MAPS AND FUSION TABLES

Using a Google Map on your news website is a little like playing the saxophone. It's probably the easiest instrument to learn how to play poorly, but pretty difficult to make it really sing. Anyone can create a Google Map of homicides or parking garages or whatever, but it's going to be a static map of only one schema, and it won't be searchable or sortable.

Google_maps_screenshot.png

On the other hand, you can also use Google Maps and Fusion Tables to build some really amazing applications, like the ones you might see in The Guardian or on The Texas Tribune or WNYC or The Bay Citizen. You can do all this, but it takes some coding effort and probably a bit more regular hands-on care and feeding to keep the site up-to-date.

I've taken a look at how you might use Google's data tools to replicate something like OpenBlock, although I've not actually done it. If you want to give it a whirl and report back, here's my recipe.

A RECIPE FOR REPLICATING OPENBLOCK

Step 1. Create one Google Docs spreadsheet for each schema, up to a maximum of four spreadsheets. And create one Google Fusion Table for each schema, up to a maximum of four tables.

Step 2. If the data you want is in a CSV file that's been published to the web, you can populate the spreadsheet with a Google Docs function called ImportData. This function -- as well as its sister functions ImportHTML and ImportXML -- will only update 50 records at a time. And I believe this function will pull in new data from the CSV about once an hour. I don't know whether it will append the new rows or overwrite them, or what it would do if only a few of the fields in a record change. If you're really lucky, the data will be in an RSS feed and you can use the ImportFeed function to get past this 50-record limit.

Of course, in the real world almost none of your data will be in these formats. None of mine are. And in that case, you'd have to either re-enter the data into Google Docs by hand or use something like ScraperWiki to scrape a datasource and present it as a CSV or a feed.
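
As a rough illustration of that scraping step -- the URL and table layout here are invented -- a minimal Python scraper might fetch an HTML table of arrests and republish it as a CSV:

    import csv
    import requests
    from bs4 import BeautifulSoup

    URL = "http://example.gov/arrests.html"  # hypothetical agency page

    html = requests.get(URL).text
    rows = BeautifulSoup(html, "html.parser").find_all("tr")

    # Write every table row, headers included, out as CSV.
    with open("arrests.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for tr in rows:
            cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
            if cells:
                writer.writerow(cells)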

Step 3. Use a modification of this script to automatically pull the data -- including updates -- from the Google Docs spreadsheet into the corresponding Fusion table you created for that schema.
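
If you'd rather script that step from outside the Google Docs environment, the Fusion Tables SQL API accepts INSERT statements. Treat the following strictly as a sketch: the table ID is fake, authentication is omitted, and you should confirm the endpoint and auth flow against Google's current documentation:

    import requests

    TABLE_ID = "1abc...xyz"  # hypothetical Fusion Table ID
    row = {"date": "2012-07-01", "address": "100 Main St", "type": "arrest"}

    # Fusion Tables accepts SQL-style INSERTs; a real call also needs an
    # OAuth access token in the Authorization header.
    sql = "INSERT INTO {0} (Date, Address, Type) VALUES ('{1}', '{2}', '{3}')".format(
        TABLE_ID, row["date"], row["address"], row["type"])

    resp = requests.post("https://www.googleapis.com/fusiontables/v1/query",
                         data={"sql": sql},
                         headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"})
    print(resp.status_code, resp.text)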

Step 4. Find the U.S. Census or local county shapefiles for any geographies you want -- such as ZIP codes or cities or school districts -- and convert them to KML.
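
If you'd rather script the conversion, here's a minimal sketch using geopandas -- the shapefile name is just an example, and it assumes your GDAL/Fiona build has the KML driver enabled. (GDAL's ogr2ogr command-line tool is the classic alternative.)

    import geopandas as gpd

    # Read a Census ZCTA (ZIP code) shapefile and write it back out as KML.
    zips = gpd.read_file("tl_2010_us_zcta510.shp")
    zips.to_file("zipcodes.kml", driver="KML")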

Step 5. Upload that geographic information into another Fusion Table.

Step 6. Merge the Fusion table from Step 3 with the Fusion table from Step 5.

Step 7. This is really a thousand little steps, each depending on which of OpenBlock's user-interface features you'd like to replicate. And, really, it should be preceded by step 6a -- learn JavaScript, SQL, CSS and HTML. Once you've done that, you can start building tools that give users the features described below.

And there's even at least one prototype of using server-side scripting and Google's APIs to build a relatively full-functioning GIS-type web application: https://github.com/odi86/GFTPrototype

After all that, you will have some of the features of OpenBlock, but not others.

Some key OpenBlock features you can replicate with Google Maps and Fusion Tables:

  • Filter by date, street, city, ZIP code or any other field you choose. Fusion Tables is actually a much better interface for searching and filtering -- or doing any kind of reporting work -- than OpenBlock.
  • Show up to four different kinds of news items on one map (five if you don't include a geography layer).
  • Conduct proximity searches: "Show me crimes reported within 1 mile of a specific address." (A sketch of such a query follows this list.)
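
That proximity search relies on Fusion Tables' spatial SQL extension. As a sketch -- the table ID is fake, and the coordinates would come from geocoding the address first -- the query is built like this:

    # Hypothetical Fusion Table of crime reports, with a Location column.
    TABLE_ID = "1abc...xyz"
    lat, lng, one_mile_meters = 35.9132, -79.0558, 1609

    # ST_INTERSECTS with a CIRCLE is Fusion Tables' radius filter.
    sql = ("SELECT * FROM {0} WHERE ST_INTERSECTS(Location, "
           "CIRCLE(LATLNG({1}, {2}), {3}))").format(TABLE_ID, lat, lng, one_mile_meters)
    print(sql)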

WHAT YOU CAN'T REPLICATE

The OpenBlock features you can't replicate with Google:

  • Use a data source that is anything other than an RSS feed, HTML table, CSV or TSV. That's right, no XLS files unless you manually import them.
  • Use a data source for which you need to combine two CSV files before import. This is the case with our property transactions and restaurant inspections.
  • Update more than 50 records at a time. Definitely a problem for police reports in all but the smallest towns.
  • Use a data source that doesn't store the entire address in a single field. That's a problem for all the records with which we're working.
  • Map more than 100,000 rows in any one Fusion table. In rural counties, this probably wouldn't be a concern. In Columbus County, N.C., there are only 45,000 parcels of land and 9,000 incidents and arrests a year.
  • Use data sources that are larger than 20MB or 400,000 cells. I don't anticipate this would be a problem for any dataset in any county where we're working.
  • Plot more than 2,500 records a day on a map. I don't anticipate hitting this limit either, especially after the initial upload of data.
  • Parse text for an address -- so you can't map news articles, for example.
  • Filter to the block level. If Main Street runs for miles through several neighborhoods, you're not going to be able to narrow your search to anything relevant.
  • Create a custom RSS feed, or email alert.

THE SEO ADVANTAGE

And there's one final feature of OpenBlock that you can't replicate using Google tools without investing a good deal of manual, rote set-up work -- taking advantage of SEO or social media sharing by having a unique URL for a particular geography or news item type. Ideally, if someone searches for "home sales in 27514" I want them to come to my site. And if someone wants to post to Facebook a link to a particular restaurant that was scolded for having an employee with a finger-licking tendency (true story), I'd want them to be able to link directly to that specific inspection incident without forcing their friends to hunt through a bunch of irrelevant 100 scores.

To replicate OpenBlock's URL structure using Google Maps and Fusion Tables, you'd have to create a unique web page and a unique Google map for each city and ZIP code. The geography pages would display a polygon of the selected geography, whether it's a ZIP code or city or anything else, and all of the news items for that geography (up to four schemas, such as arrests, incidents, property sales, and restaurant inspections). That's 55 map pages.

Then you'd have to create a map and a page for each news item type. That's four pages, four Fusion tables, and four Google Docs spreadsheets.

Whew. I'm going to stick with our work improving the flexibility and scalability of OpenBlock. But it's still worth looking at Google Maps and Fusion Tables for some small and static data use cases. Other tools such as Socrata's Open Data, Caspio and Tableau Public are also worth your time as you begin to think about publishing public data. Each of those has some maintenance costs and its own strengths and weaknesses, but the real trick with all of these tools is dealing with public data that isn't in any usable format. We're looking hard at solving that problem with a combination of scraping and crowdsourcing, and I'll report what we've found in an upcoming post.

Ryan Thornburg researches and teaches online news writing, editing, producing and reporting as an assistant professor in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. He has helped news organizations on four continents develop digital editorial products and use new media to hold powerful people accountable, shine light in dark places and explain a complex world. Previously, Thornburg was managing editor of USNews.com, managing editor for Congressional Quarterly's website and national/international editor for washingtonpost.com. He has a master's degree from George Washington University's Graduate School of Political Management and a bachelor's from the University of North Carolina at Chapel Hill.

July 30 2012

14:00

The Fundamental Problem With Political Cookies, Microtargeting

The MIT Technology Review recently posted an article titled "Campaigns to Track Voters with 'Political Cookies.'" It freaks me out for a reason I'll get to below.

From Technology Review:

The technology involves matching a person's web identity with information gathered about that person offline, including his or her party registration, voting history, charitable donations, address, age, and even hobbies.

Companies selling political targeting services say "microtargeting" of campaign advertising will make it less expensive, more up to the minute, and possibly less intrusive. But critics say there's a downside to political ads that combine offline and online data. "These are not your mom and pop TV ads. These are ads increasingly designed for you--to tell you what you may want to hear," says Jeff Chester, executive director of the Center for Digital Democracy.

funny-pictures-cat-wishes-to-access-your-cookies.jpg

Like most conscientious web users, I'm skeeved by the privacy issues of cookies, even as I tolerate them for their convenience.

But the real, immediate, permanent harm of political cookies, as Chester argues, is to the other kind of privacy: the privacy that lets you avoid public discussion, the (otherwise positive) right to be left alone.

Targeted ads bypass the public. They needn't be subject to civic debate. In fact, they foreclose the very possibility. With political cookies, civic debate about those messages can only happen within the subject's own head.

When MIT Center for Civic Media's own Matt Stempeck and Dan Schultz proposed projects like a recommended daily intake for the news, a credibility API, or automatic bullshit detectors, they were doing a great service but not necessarily a public service. Their work implicitly acknowledges -- and they're right -- that a political message is now predominantly a direct communications experience, from a campaign straight to an individual subject.

toward private politics

It's a private experience. Democracy without the demos. By definition and design, there's no public counterpoint to an ad targeted with cookies.

The earliest examples of American democracy took for granted that debate was public, happening among many individuals and associations of them. And a core logic, without which the rest fails, is that people are persuadable. Campaigns would love to persuade, but it's cheaper to reinforce. And reinforcement happens by aggregating individuals' click and spending data, with targeting taking into account predispositions, self-identification, and biases.

There's no need to persuade. No need, it feels, to be persuaded. No need to live outside our own private politics.

A version of this post originally appeared on the MIT Center for Civic Media blog.

Andrew Whitacre is Communications Director for the MIT Center for Future Civic Media, a 2007 Knight News Challenge winner. A native of the nation's capital, Whitacre has written on communications policy issues, starting with work on satellite radio as a student at Wake Forest University.

July 27 2012

14:00

The Importance of NextDrop's Customer Cycle, and How to Improve Service

In our last post on PBS Idea Lab, NextDrop, which informs residents in India via cell phone about the availability of piped water, was trying to scale up in a very short period of time. How did we fare?

nextdroplog.png

Well, I think we discovered the first step to winning: Just get good data about yourself. Period. Even if it's ugly. Because after admitting there's something wrong, the second-hardest part is wading through the mess and figuring out exactly what that is!

Let me try to lay out everything we discovered about our service.

Customer Side

Goal: Bill everyone possible and make money.

Immediate problem: Billers wasted a lot of time because, even when they found houses (which often proved difficult), a lot of people were getting late messages, weren't getting messages at all, or were getting them intermittently -- so they didn't want to pay for the service (no argument there) -- or they just didn't want the service.

Immediate solution: Make a list of areas that have been getting regular messages for the past two weeks, and then call all those people before we actually go out and bill.

Immediate Systems We Put in place

Creation of the "Green List": We look through all of our valvemen data and, using the almighty Excel, figure out which areas received at least four calls within the last two weeks. Our logic: Since the supply cycle is now once every 3-4 days, customers who are getting regular messages should have had their valvemen call in at least four times in a two-week span. This system is by no means perfect, but it's a start, and it at least gets us to the next level.
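
For anyone who'd rather script that filter than run it in Excel, here's a minimal pandas sketch -- the file name and column names are made up:

    import pandas as pd

    # Hypothetical call log: one row per valveman call-in.
    calls = pd.read_csv("valvemen_calls.csv", parse_dates=["called_at"])

    two_weeks_ago = pd.Timestamp.now() - pd.Timedelta(days=14)
    recent = calls[calls["called_at"] >= two_weeks_ago]

    # Areas with at least four call-ins in the window make the Green List.
    counts = recent.groupby("valve_area").size()
    green_list = sorted(counts[counts >= 4].index)
    print(green_list)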

Conduct phone surveys: After we see all the areas that are on the Green List, we then call all the customers in that area. We spent two weeks piloting the survey to even figure out what categories/questions we should ask, and we've finally got some classifications the sales team feels good about.

Here are the different categories of NextDrop potential customers:

  • Could Not Contact (people who had phones turned off, didn't answer the call, possibly fake numbers)
  • Satisfied Customers
    Pay (want to pay for service)
    Continue
    1-month Free Trial (again)
    Deactivate
  • Unsatisfied Customers
    Not Getting Messages
    Wrong Messages

Bill: We just bill the people who are satisfied and want to pay, or who are satisfied but want another free month trial (and have already had one).
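That rule is easy to express as a filter. A sketch, with made-up field names standing in for whatever the survey spreadsheet actually records:

    def billable(customer):
        # Bill satisfied customers who want to pay, or satisfied customers who
        # want another free month but have already used their first trial.
        if customer["category"] != "satisfied":
            return False
        return (customer["choice"] == "pay" or
                (customer["choice"] == "free_trial" and customer["trials_used"] >= 1))

    # Hypothetical survey results.
    customers = [
        {"category": "satisfied", "choice": "pay", "trials_used": 1},
        {"category": "satisfied", "choice": "free_trial", "trials_used": 1},
        {"category": "unsatisfied", "choice": "deactivate", "trials_used": 0},
    ]
    to_bill = [c for c in customers if billable(c)]  # keeps the first two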

our customer cycle

Here's a great flow chart that our sales manager made of our customer cycle. (And if any engineers out there think this looks familiar, you're right -- it is, in fact, a state diagram. This is why I love hiring engineers!) And let me say, this may look easy, but it took two weeks of analyzing customer behavior to even figure out what states to include and how customers move from one state to another.

customercycle.png
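In case the diagram doesn't come through, here is roughly what a customer-cycle state machine looks like in code. The states and transitions below are our guesses from the survey categories above, not a transcription of the actual diagram:

    # Each state maps to the set of states a customer may move to next.
    TRANSITIONS = {
        "free_trial":        {"satisfied", "unsatisfied", "could_not_contact"},
        "could_not_contact": {"satisfied", "unsatisfied", "deactivated"},
        "unsatisfied":       {"satisfied", "deactivated"},
        "satisfied":         {"billed", "free_trial", "deactivated"},
        "billed":            {"paid", "deactivated"},
        "paid":              set(),
        "deactivated":       set(),
    }

    def advance(state, next_state):
        """Move a customer forward, rejecting moves the diagram forbids."""
        if next_state not in TRANSITIONS[state]:
            raise ValueError("illegal transition: %s -> %s" % (state, next_state))
        return next_state

    state = "free_trial"
    for step in ("satisfied", "billed", "paid"):
        state = advance(state, step)  # ends in the 'paid' state

The useful property of writing it down this way is that every customer interaction either matches an allowed transition or raises an error telling you your model of the customer is wrong.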

When we finally had data, we discovered some really interesting things about our service:

  • Total number of people called: 1,493
  • Total number of people we could contact: 884 (59%)
  • Total number of deactivated customers: 229
    15% of total customers
    26% of contacted customers
  • Total number of continuing customers: 655
    44% of total customers
    74% of contacted customers
  • Total billable customers: 405
    27% of total customers
    46% of contacted customers
  • Total billed customers: 223
    15% of total customers
    25% of contacted customers
    55% of billable customers
  • Total number of people who paid: 95
    6% of total customers
    23% of billable customers
    43% of billed customers

As you can see, the two major problems we identified were: 1) we were unable to contact 41% of the customers we tried to reach, and 2) a majority of the people we were able to contact (54%) were getting incorrect messages.
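For anyone who wants to check our math, the funnel reproduces from the raw counts -- a quick sanity check in Python:

    # Raw counts from the survey above.
    called, contacted = 1493, 884
    billable_count, billed, paid = 405, 223, 95

    def pct(part, whole):
        return round(100.0 * part / whole)

    assert pct(contacted, called) == 59        # contact rate
    assert pct(billed, billable_count) == 55   # billable customers we billed
    assert pct(paid, billed) == 43             # billed customers who paid
    print("could not contact: %d%%" % (100 - pct(contacted, called)))  # the 41%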

troubleshooting problems

And that's where we're at: trying to troubleshoot those two problems. Here are the immediate solutions we're putting in place to increase the number of people we can contact and to place customers in the correct valve area.

Instead of taking "Could Not Contact" customers off the billing list, we're keeping them on it and trying to reach them in person. We're in the process of seeing what percentage of the "Could Not Contact" customers we can actually find and contact when we bill.

We have an intern, Kristine, from UC Berkeley, who will be working with us for the next six months to figure out how to place people in the correct valve area (because that is the critical question now, isn't it?). Kristine's findings are pretty interesting (and definitely deserve their own blog post), but our first prototype tests a guess-and-check methodology (see the code sketch after this list):

  • First, we call customers and find out when they last got water.
  • Then we sort through our data and see which areas got water on that date (plus or minus a few hours). This should eliminate at least 50% of the areas.
  • Then, to narrow it down even further, we only consider the areas that are geographically close to the customer. This should leave us 4-5 areas to check.
  • We subscribe the customer to these areas and see when he/she gets the correct message. (We will find out through the phone survey.)
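Here's what that narrowing might look like in code -- a minimal sketch assuming hypothetical supply-log records and straight-line distance, neither of which is necessarily how we'll actually implement it:

    import math
    from datetime import datetime, timedelta

    def distance(p, q):
        # Straight-line approximation; fine at neighborhood scale.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def candidate_areas(last_water_at, supply_log, areas, customer_loc,
                        slack=timedelta(hours=3), max_candidates=5):
        """Narrow valve areas by supply time first, then by distance."""
        # Step 1: keep areas whose log shows a supply near the reported time.
        by_time = [a for a in areas
                   if any(abs(t - last_water_at) <= slack
                          for t in supply_log.get(a["name"], []))]
        # Step 2: of those, keep the handful closest to the customer.
        by_time.sort(key=lambda a: distance(a["center"], customer_loc))
        return by_time[:max_candidates]

    # Hypothetical data: two areas, one supply log, one customer location.
    areas = [{"name": "Ward 12", "center": (0.0, 0.0)},
             {"name": "Ward 7", "center": (5.0, 5.0)}]
    supply_log = {"Ward 12": [datetime(2012, 7, 24, 6, 30)]}
    print(candidate_areas(datetime(2012, 7, 24, 8, 0), supply_log,
                          areas, customer_loc=(0.5, 0.5)))  # -> Ward 12 only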

That's what we are going to try -- we'll let you know how that goes.

steps toward progress

In any case, I think the tunnel has a light at the end of it, so that's all we can really ask for -- progress!

And, as always, we will keep you updated on our progress, what seems to work, what doesn't, and more importantly, why.

Additionally, and most importantly, we're hiring! We are looking for enthusiastic and passionate individuals who want to be a part of our team. If you love solving problems and finding creative solutions, we want you!

As always, please feel free to write comments, offer insight, ask questions, or just say hi. Our proverbial door is always open!

A version of this post first appeared on the NextDrop blog.

Anu Sridharan graduated from the University of California, Berkeley in 2010 with a master's degree in civil systems engineering; she received her bachelor's degree from UC Berkeley as well. During her time there, Sridharan researched the optimization of pipe networked systems in emerging economies as well as new business models for the dissemination of water purification technologies for arsenic removal. Sridharan also served as the education and health director for a water and sanitation project in the slums of Mumbai, India, where she piloted a successful volunteer recruitment and community training model.


April 26 2012

13:12

How 'Screenularity' Will Destroy Television as We Know It

Yesterday I announced the next project I'm going to work on, which will focus on mobile news consumption. As a result, I've been thinking a lot about screens.

In the future, consumers will not make a distinction between their television, phone or computer screens. The only difference will be the size of each screen, its placement and, therefore, what you most likely do with it. 

iphone sky.jpg

But no one will call the handheld-sized screen a "mobile phone." That you might use it to make phone calls will be happenstance. You will just as easily make a call on the 15-inch screen at your desk or the 40-inch screen in the living room.

Let's call this future moment the "Screenularity." It is the moment in the future when, as a consumer, there's no distinction in functionality between the various screens we interact with. Much like Matt Thompson's "Speakularity," this will be a watershed moment for how we consume information and, therefore, journalism.

THE DEATH KNELL OF TELEVISION

For the entire television industry as we know it, this will be a back-breaking moment. It's not a question of "if" but "when." We see early signs of it in Netflix and Hulu, but the cracks in the dam haven't even started to show. For national broadcast journalism organizations like CNN, Fox and MSNBC, it will create a lot of disruption. Local broadcast journalism will be left utterly decimated.

Local broadcast journalism simply has no added value when compared with the wealth of information on the Internet. Local stations rely on personality-less hosts who talk at you (not with you). Combine this with the high overhead of local reporting on topics many people simply don't care about, and you can start to see how bleak things look for local broadcast affiliates. Breaking news is broken. Local broadcast websites are offensively bad and nowhere near competing on the open web. Their continued existence relies on the fact that the majority of people still get their news from television. But once the Screenularity hits, that will no longer be the case. There won't be a "television," just various screens. People will get their "lean back" information from the same screen they can engage with. Dogs and cats living together ... mass hysteria!

THEY'RE NOT HAVING THIS CONVERSATION

Whether you love or hate the "future of news" crowd, we should admit that it's painfully devoid of broadcast journalism. I am not 100 percent sure why. I've heard Jay Rosen give a decent explanation, and it can be summarized as: "They just don't care, it's not in their interest."

I'm not saying there aren't any folks within broadcast who are forward-thinking. But considering how the size of their organizations, budgets, and audiences dwarfs that of more traditional print outlets, they are painfully absent from conversations about the future of the industry. From what I can observe, the television journalism world has no interest in the future-of-news conversation, and their websites speak louder about this than any defense they could possibly make. This is dangerous, because the majority of people still get their news from local broadcast networks. There is no plan B. There is no fallout shelter.

A DANGEROUS IDEA

For this month's Carnival of Journalism, the question is: "What's a dangerous idea to save journalism?" Mine is the Screenularity. Local broadcast outfits need to operate as if it's already here. I recognize this is dangerous, because it assumes that an industry will disrupt itself. That inherently means there will be danger involved. People will lose their jobs. Organizations will falter and crumble. But others will come out the other end and reinvent the industry on their own terms.

Media companies must become technology companies so they can create the platforms that define the type of media they produce. If they're the ones who create the platforms, they will continue to create media on their own terms.

If local news broadcasters don't embrace the Screenularity and create the platforms themselves, they'd better hope that somebody else does it for them. And "hope" is a horrible strategy. That's what leads to complaints about "Google" or "Craigslist" killing journalism. All they did was create platforms that define the type of media produced. If you aren't creating those platforms then you have no excuse to complain about the terms those organizations create.

April 25 2012

14:00

52 Applicants Move to Next Round of Knight News Challenge

The Knight Foundation has selected 52 applicants that will move on to the next stage of its News Challenge.

klogo.jpg

There's a theme you'll see running through the proposals that have made it thus far -- namely, networking. That's because networks are the focus of this year's first round. (The Knight News Challenge now offers three rounds instead of one competition per year.)

What sort of networks? "The Internet, and the mini-computers in our pockets, enable us to connect with one another, friends and strangers, in new ways," Knight's John Bracken wrote in a release when the round was first announced. "We're looking for ideas that build on the rise of these existing network events and tools -- that deliver news and information and extend our understanding of the phenomenon."

Consultant Ryan Jacoby wrote further about some of the trends he saw among applicants. You can read more about that here.

Here's the list of who's moving on to the next round of the challenge (50 are listed because two were closed entries, so we're not able to share them):

Amauta (Eric French)

Asia Beat (Jeffrey Wasserstrom/Angilee Shah)

Bridging the Big Data Digital Divide (Dan Brickley)

Change the Ratio (Rachel Sklar)

CitJo (Sarah Wali/Mahamad El Tanahy)

Connecting the global Hacks/Hackers network (Burt Herman, Hacks/Hackers)

Connecting the World with Rural India (Brian Conley)

Cont3nt.com (Anton Gelman/Daniel Shaw)

Cowbird (Jonathan Harris/Aaron Huey)

Data Networks are Local (Erik Gundersen, Development Seed)

DifferentFeather (Elana Berkowitz/Amina Sow)

DIY drone fleets (Ben Moskowitz/Jack Labarba)

Docs to WordPress to InDesign (William Davis, Bangor Daily News)

Electoral College of Me (John Keefe/Ron Williams)

EnviroFact (Beth Parke/Chris Marstall)

Funf.org: Open Mobile Sourcing (Nadav Aharony/Alan Gardner; MIT)

Global Censorship Monitoring System (Ruben Bloemgarten, James Burke, Chris Pinchen)

Google News for the Social Web (Sachim Kandar, Andrew Montalenti, Parse.ly)

Hawaii Eco-Net (Jay April, Maui Community Television)

Hypothes.is (Dan Whaley/Randall Leeds)

IAVA New GI Bill Veterans Alumni Network (Paul Rieckhoff, Iraq and Afghanistan Veterans of America)

m.health.news.network (Marcus Messner and Yan Jin)

MediaReputations.com (Anton Gelman/Daniel Shaw)

Mesh Potato 2.0 (Steve Song/David Rowe)

Mobile Publishing for Everyone (David Jacobs/Blake Eskin/Natalie Podrazik)

NOULA (Tayana Etienne)

Peepol.tv (Eduardo Hauser/Jeff Warren)

PreScouter (Dinesh Ganesarajah)

Prozr (Pueng Vongs/Sherbeam Wright)

Rbutr (Shane Greenup/Craig O'Shannessy)

Recovers.org (Caitria O'Neill/Alvin Liang)

Secure, Anonymous Journalism Toolkit (Karen Reilly)

Sensor Networks for News (Matt Waite, University of Nebraska)

Shareable (Seth Schneider and Neal Gorenflo)

Tethr (Aaron Huslage/Roger Weeks)

The PressForward Dashboard (Dan Cohen/Joan Fragaszy Troyano, George Mason University)

ThinkUpApp (Gina Trapani/Anil Dash)

Tracks News Stories (David Burrows, designsuperbuild.com)

Truth Goggles (Dan Schultz)

Truth Teller (Cory Haik/Steven Ginsberg, Washington Post)

Unconsumption Project (Rob Walker/Molly Block)

UNICEF GIS (Joseph Agoada, UNICEF)

Watchup (Adriano Farrano/Jonathan Lundell)

Water Canary (Sonaar Luthra/Zach Eveland)

A Bridge Between WordPress and Git (Robert McMillan / Evan Hansen)

In the Life (Joe Miloscia, American Public Media)

Get to the Source (Joanna S. Kao/MIT)

Farm-to-Table School Lunch (Leonardo Bonanni, Sourcemap)

Partisans.org (Michael Trice)

Protecting Journalists (Diego Mendiburu and Ela Stapley)

What do you think about the finalists? Who are your favorites and who do you think should win?
