Tumblelog by Soup.io

May 30 2013

11:00

Activist Campaign Successfully Targets Facebook's Advertisers

Last week I wrote up the #FBrape campaign's strategy: to hold Facebook accountable for the misogynistic content of its users by pressuring advertisers. Only seven days after the open letter was published, Marne Levine, Facebook's VP of Global Public Policy, published a response agreeing to the campaign's demands to better train the company's moderators, improve reporting processes, and hold offending users more accountable for the content they publish.

facebook-logo.jpg

The campaigners say they generated 5,000 emails to advertisers and convinced Nissan to pull its advertising from the platform. This is great initial traction for a social media advocacy campaign, but it represents a minuscule percentage of Facebook's users and advertisers. For people interested in shaping what kinds of speech social media giants allow, the #FBrape campaign quickly confirmed the relative value of targeting companies' revenue sources rather than petitioning the corporations directly. The #FBrape campaign also had a clear moral high road over the terrible instances of speech it campaigned to censor. But the results are still illuminating, as we struggle to determine how much power companies like Facebook wield over our self-expression, and the organizational processes and technical mechanisms by which that power is exerted.

Continued attention will be required to hold Facebook, Inc. to its promises to train its content moderators (and an entire planet of actual users) to flag and remove violent content. Facebook has also promised to establish more direct lines of communication with women's groups organizing against such content. This is the kind of personal relationship and human contact groups have clamored for (see WITNESS and YouTube's relationship).

'fair, thoughtful, scalable'

Technology companies have tended to avoid establishing such relationships, probably because they require relatively large amounts of time in a venture that's taking on an entire planet's worth of communications. Facebook itself lists its preferences for solutions to governing speech that are "fair, thoughtful, and scalable." Given the sheer scale of content uploaded every minute, Facebook might look into algorithmic solutions to identify content before users are exposed to it. YouTube has conducted research to automatically categorize some of its own torrent of incoming user content to identify the higher quality material. According to their post, Facebook has "built industry leading technical and human systems to encourage people using Facebook to report violations of our terms and developed sophisticated tools to help our teams evaluate the reports we receive."

This is unlikely to be the last we hear about this. By publishing an official response, Facebook gave 130 media outlets and counting an excuse to cover the campaign, which few had done prior to the company's reply. And whether they relish the position or not, social media companies like Facebook have positioned themselves as arbiters of speech online, subject to the laws of the lands they operate within, but also comfortable codifying their own preferences into their policies. Kudos to Facebook for taking a minute to respond to some of the messy side effects of connecting over a billion human beings.

Matt Stempeck is a Research Assistant at the Center for Civic Media at the MIT Media Lab. He has spent his career at the intersection of technology and social change, mostly in Washington, D.C. He has advised numerous non-profits, startups, and socially responsible businesses on online strategy. Matt's interested in location, games, online tools, and other fun things. He's on Twitter @mstem.

This post originally appeared on the MIT Center for Civic Media blog.

May 29 2013

10:38

Join the Zeega Makers Challenge for 'The Making Of...Live at SFMOMA'

ZeegaSFMoMA_24.gif

In 24 hours, Zeegas -- a new form of interactive media -- will be installed on four projection screens at San Francisco's renowned Museum of Modern Art. This showcase is part of "The Making Of..." -- a collaboration between award-winning NPR producers the Kitchen Sisters, KQED, AIR's Localore, the Zeega community and many others.

Join in this collaborative media experiment and make Zeegas for SFMOMA. To participate, log in to Zeega and create something for the exhibition. To make the simplest Zeega possible, just combine an animated GIF and a song. And if you want to do more, go wild.

You can contribute from anywhere in the world. The deadline is midnight EST on Wednesday.

make a zeega

If you've never made a Zeega, worry not: It's super-easy. You can quickly combine audio, images, animated GIFs, text and video from across the web. Zeegas come in all shapes and sizes, from GIFs accompanied by a maker's favorite song to a haunting photo story about a Nevada ghost town to an interactive video roulette.

The Zeega exhibition is one piece of "The Making Of...Live at SFMOMA." As SFMOMA closes for two years of renovation and expansion, over 100 makers from throughout the region will gather to share their skills and crafts and tell their stories.

For the event, there will be two live performances of Zeegas and the "Web Documentary Manifesto," and there will also be a session with Roman Mars ("99% Invisible"), The Kitchen Sisters, AIR's Sue Schardt talking about Localore, and other storytelling gatherings throughout the festivities. For the full program, click here.

Jesse Shapins is a media entrepreneur, cultural theorist and urban artist. He is Co-Founder/CEO of Zeega, a platform revolutionizing interactive storytelling for an immersive future. For the past decade, he has been a leader in innovating new models of web and mobile publishing, with his work featured in Wired, The New York Times, Boingboing and other venues. His artistic practice focuses on mapping the imagination and perception of place between physical, virtual and social space. His work has been cited in books such as The Sentient City, Networked Locality and Ambient Commons, and exhibited at MoMA, Deutsches Architektur Zentrum and the Carpenter Center for Visual Arts, among other venues. He was Co-Founder of metaLAB (at) Harvard, a research unit at the Berkman Center for Internet and Society, and served on the faculty of architecture at the Harvard Graduate School of Design, where he invented courses such as The Mixed-Reality City and Media Archaeology of Place.

May 28 2013

13:15

Circa Hires Reuters' Anthony De Rosa as New Editor in Chief

Mobile news app Circa is hoping to push further into the breaking news space, announcing today that the startup behind the app has hired Anthony De Rosa, former social media editor at Reuters, to become its editor in chief.

de_rosa.png

Circa collects the "atomic units" of stories -- facts, quotes and images -- and puts them into running stories with alerts to updates. The startup was co-founded by Cheezburger CEO Ben Huh and his partner, Matt Galligan, and is under the editorial leadership of David Cohn, who was the founder and director of Spot.Us, a non-profit that pioneered "community funded reporting." Cohn has written about both Spot.Us and Circa here on Idea Lab.
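Circa hasn't published its data model, but the "atomic units" idea can be pictured as a simple structure: a running story is an ordered list of facts, quotes and images, plus a list of followers to alert whenever a new unit lands. A hypothetical Python sketch of that idea (every name here is invented for illustration, not Circa's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """One atomic unit of a story: a fact, quote or image reference."""
    kind: str     # "fact", "quote" or "image"
    content: str

@dataclass
class Story:
    """A running story assembled from atoms, with followers to alert."""
    headline: str
    atoms: list = field(default_factory=list)
    followers: list = field(default_factory=list)

    def update(self, atom: Atom) -> list:
        """Append a new atomic unit; return the followers to be alerted."""
        self.atoms.append(atom)
        return list(self.followers)
```

In a model like this, "following" a story is just subscribing to its stream of atoms, which is what makes update alerts cheap to send.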

"As the head of editorial I'm very excited to have Anthony come on board," Cohn said in an email. "I know he will bring a lot to the table and will help Circa push forward in the 'breaking news' space which, combined with our 'follow feature' really puts us in a unique position to serve a readers' needs."

De Rosa is owner of tumblog SoupSoup and co-founder of hyperlocal blogging tool Neighborhoodr. At Reuters, he trained staff to make use of live blogs and social media to try to produce a constant river of breaking news. In an article in 2011, The New York Times' Paul Boutin dubbed De Rosa the "undisputed King of Tumblr."

"There's a huge opportunity to present news in a way that's made for mobile. Nobody is thinking about this more than Circa and I'm thrilled to help move that mission forward," De Rosa said in a statement announcing his new position.

You can tune in to the recent Mediatwits podcast with Circa founder Huh on PBS MediaShift, and here's another Mediatwits podcast with Cohn, talking about the prospects for Circa now and in the future.

Desiree Everts is the associate editor for Idea Lab and PBS MediaShift. She's dabbled in digital media for the past decade including stints at CNET News and Wired magazine.

11:00

Spending Stories' 'Data Expedition' Tackles Tax Avoidance and Evasion

The Spending Stories team mentioned earlier in the year that we were looking into ways of helping journalists negotiate tricky financial topics. We're pleased to announce our first pilot "data expedition" on the topic of tax evasion and avoidance -- it aims to help reporters negotiate the key decisions made when writing on this highly contentious topic.

data_expedition.jpg

Want to dig deep into tax avoidance and evasion? We have gathered a wide range of data on this sensitive topic, and for one afternoon we'll guide you through some of the key decisions to think about when writing a story on it. With tax evasion and tax avoidance currently such a hot topic in the media, it's crucial that people can understand the difference between the two terms as well as the mechanisms by which they happen.

When: Thursday June 6, 12:00 BST to 17:00 BST (link to your timezone)

We'll be looking for projects such as:

  • Exploring the tax avoidance schemes used by Apple, Google, Amazon, or Starbucks;

  • Looking at data gathered by tax-collection authorities and patterns of avoidance that emerge from that dataset;

  • Creating a "most wanted" list of tax evaders for future research;

  • Your project here!

Sign up here for the Data Expedition!

Please note that limited space is available. For more information about the Data Expedition format, we encourage you to read this article.

How can I participate?

To get involved either:

  • Lead a team (up to six hours) -- Are you able to help to coordinate a team on the day? This involves helping your team understand the options and research that's been conducted and starting a discussion about the choice of story and how to construct a plan for making the story happen. The School of Data team will hold a specific hangout for team leads on Monday, June 3 at 12:00 BST to prepare for Thursday's activities. Please email schoolofdata [at] okfn.org if you are interested in getting involved.

  • Offer an expert introduction (up to one hour) -- We're looking for experts who understand the loopholes or tactics used by companies in different countries to offer quick introductions 5-30 minutes long to get the expedition started.

  • Join us as a participant on the day (3-6 hours) -- You will need to be prepared to brainstorm ideas with others in your group and ultimately explain your choice of story. There will be two roles you can take on the day -- either getting stuck into the data (analyst) or writing (storyteller).

Aims of the expedition

We will aim to give people:

  • A clear understanding of the difference between tax evasion and tax avoidance;
  • An understanding of a few key schemes by which they happen;
  • Perhaps also a few story ideas!

How to get involved

Please make sure you are registered here and that you select "Tax Avoidance/Evasion" in the "I'm Interested in..." section. Please note: You will need to be available for at least three hours during the expedition period and spaces will be limited, so preference will be given to those who can definitely commit to the expedition. Spaces will be confirmed shortly before the expedition.

Stay up to date with the latest data expeditions

Want to be informed anytime there is a new data expedition? Join the School of Data announcement list to get notifications of the expeditions as soon as they are announced.

Lucy Chambers is a community coordinator at the Open Knowledge Foundation. She works on the OKF's OpenSpending project and coordinates the data-driven-journalism activities of the foundation, including running training sessions and helping to streamline the production of a collaboratively written handbook for data journalists.

May 22 2013

19:31

Want an Affordable Infrared Camera? Give to Public Lab's 'Infragram' Project on Kickstarter

This post was co-written by Public Lab organizer Don Blair.

Public Lab is pleased to announce the launch of our fourth Kickstarter today, "Infragram: the Infrared Photography Project." The idea for the Infragram was originally conceived during the BP oil spill in the Gulf of Mexico as a tool for monitoring wetland damage. Since then, the concept has been refined into an affordable and powerful tool for farmers, gardeners, artists, naturalists, teachers and makers for as little as $35 -- whereas near-infrared cameras typically cost $500-$1,200.

Technologies such as the Infragram play a role similar to that of photography during the rise of credible print journalism -- these new technologies democratize and improve reporting about environmental impacts. The Infragram in particular will allow regular people to monitor their environment through verifiable, quantifiable, citizen-generated data. You can now participate in a growing community of practitioners who are experimenting with and developing low-cost near-infrared technology by backing the Infragram Project and joining the Public Lab infrared listserv.

PublicLab Infrared1.png

some Background

Infrared imagery has a long history of use by organizations like NASA to assess the health and productivity of vegetation via sophisticated satellite imaging systems like Landsat. It has also been applied on-the-ground in recent years by large farming operations. By mounting an infrared imaging system on a plane, helicopter, or tractor, or carrying around a handheld device, farmers can collect information about the health of crops, allowing them to make better decisions about how much fertilizer to add, and where. But satellites, planes, and helicopters are very expensive platforms; and even the tractor-based and handheld devices for generating such imagery typically cost thousands of dollars. Further, the analysis software that accompanies many of these devices is "closed source"; the precise algorithms used -- which researchers would often like to tweak, and improve upon -- are often not disclosed.

PublicLab_Infrared2.png

Public Lab's approach

So, members of the Public Lab community set out to see whether it was possible to make a low-cost, accessible, fully open platform for capturing infrared imagery useful for vegetation analysis. Using the insights and experience of a wide array of community members -- from farmers and computer geeks to NASA-affiliated researchers -- a set of working prototypes for infrared image capture started to emerge. By now, the Public Lab mailing lists and website contain hundreds of messages, research notes, and wikis detailing various tools and techniques for infrared photography, ranging from detailed guides to DIY infrared retrofitting of digital SLRs, to extremely simple and low-cost off-the-shelf filters, selected through a collective process of testing and reporting back to the community.

All of the related discussions, how-to guides, image examples, and hardware designs are freely available, published under Creative Commons and CERN Open Hardware licensing. There are already some great examples of beautiful NDVI/near-infrared photography by Public Lab members -- including timelapses of flowers blooming, and balloon-based infrared imagery that quickly reveals which low-till methods are better at facilitating crop growth.
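The NDVI metric behind this photography is itself an open formula: the difference between a pixel's near-infrared and red reflectance, normalized by their sum, so that lush vegetation scores near 1 while soil, water and pavement score near 0 or below. A minimal sketch in Python with NumPy (the function name and sample values are illustrative, not Public Lab's analysis code):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, computed per pixel.

    Healthy vegetation reflects strongly in near-infrared and absorbs
    red light, so NDVI approaches 1 over lush plants and sits near 0
    (or below) over soil, water, or pavement.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Example: one leafy pixel and one bare pixel
print(ndvi([0.8, 0.5], [0.2, 0.5]))  # roughly [0.6, 0.0]
```

Because the arithmetic is this simple, the hard part of a DIY system is the optics (filtering out visible or infrared light), not the math -- which is why open analysis tools can match the closed-source packages described above.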

PublicLab_Infrared7.JPG

What's next

By now, the level of interest and experience around DIY infrared photography in the Public Lab community has reached a tipping point, and Public Lab has decided to use a Kickstarter as a way of disseminating the ideas and techniques around this tool to a wider audience, expanding the community of users/hackers/developers/practitioners. It's also a way of generating support for the development of a sophisticated, online, open-source infrared image analysis service, allowing anyone who has captured infrared images to "develop" them and analyze them according to various useful metrics, as well as easily tag them and share them with the wider community. The hope is that by raising awareness (and by garnering supporting funds), Public Lab can really push the "Infrared Photography Project" forward at a rapid pace.

Accordingly, we've set ourselves a Kickstarter goal of 5,000 "backers" -- we're very excited about the new applications and ideas that this large number of new community members would bring! And, equally exciting: The John S. and James L. Knight Foundation has offered to provide a matching $10,000 of support to the Public Lab non-profit if we reach 1,000 backers.

With this growing, diverse community of infrared photography researchers and practitioners -- from professional scientists, to citizen scientists, to geek gardeners -- we're planning on developing Public Lab's "Infrared Photography Project" in many new and exciting directions, including:

  • The creation of a community of practitioners interested in infrared technology, similar to the community that has been created and continues to grow around open-source spectrometry.
  • The development of an archive for the Infrared Photography Project -- a platform that will allow people to contribute images and collaborate on projects while sharing data online.
  • Encouragement of agricultural imagery tinkering and the development and use of inexpensive, widely available near-infrared technologies.
  • Development of standards and protocols that are appropriate to the needs, uses and practices of a grassroots science community.
  • Providing communities and individuals with the ability to assess their own neighborhoods through projects that are of local importance.
  • The continued development of a set of tools that will overlap and add to the larger toolkit of community-based environmental monitoring tools such as what SpectralWorkbench.org and MapKnitter.org provide.

We hope you'll join us by contributing to the Kickstarter campaign and help grow a community of open-source infrared enthusiasts and practitioners!

A co-founder of Public Laboratory for Open Technology and Science, Shannon is based in New Orleans as the Director of Outreach and Partnerships. With a background in community organizing, prior to working with Public Lab, Shannon held a position with the Anthropology and Geography Department at Louisiana State University as a Community Researcher and Ethnographer on a study about the social impacts of the spill in coastal Louisiana communities. She was also the Oil Spill Response Director at the Louisiana Bucket Brigade, conducting projects such as the first on-the-ground health and economic impact surveying in Louisiana post-spill. Shannon has an MS in Anthropology and Nonprofit Management, a BFA in Photography and Anthropology and has worked with nonprofits for over thirteen years.

Don Blair is a doctoral candidate in the Physics Department at the University of Massachusetts Amherst, a local organizer for The Public Laboratory for Open Technology and Science, a Fellow at the National Center for Digital Government, and a co-founder of Pioneer Valley Open Science. He is committed to establishing strong and effective collaborations among citizen / civic and academic / industrial scientific communities through joint research and educational projects. Contact him at http://dwblair.github.io, or via Twitter: @donwblair

May 21 2013

10:36

Former Facebook ME Dan Fletcher: 'It's a Great Time to Launch a New Publication'

This post was written by Ryan Graff of the Knight News Innovation Lab and originally appeared on the Lab's blog as part of a series of Q&As with highly impressive makers and strategists from media and its fringes, each with unique perspectives on journalism, publishing and communications technology. Catch up and/or follow the series here.

dan_fletcher.jpg

Dan Fletcher, the recently departed managing editor at Facebook, always seems to be ahead of the curve. In 2010, at age 22, Fletcher became the youngest person ever to write a cover story for Time magazine. He also created and launched Time.com's NewsFeed feature and Time's social media feeds. At Bloomberg a few years later he created and staffed the editorial social media teams for Bloomberg News and Bloomberg Businessweek, picking up a Forbes 30 Under 30 distinction in the process. Now, at a time when journalists are headed to the Twitters and LinkedIns of the world to help shape editorial content, he's already completed his time at a tech giant and is looking for his next project. Below is an edited version of our Q&A.

Q&A

Q: Can you give us a quick rundown of what you do, who you are, and all the latest since resigning from Facebook?

Dan Fletcher: I’ve really dug into the intersection of social media and editorial. At Time and Bloomberg, that meant helping news organizations figure out how to use these new platforms and reporting on the companies building them out. At Facebook, it meant trying to bring an editorial angle to a technology company. In each role, I've been lucky to be allowed to experiment, and now I’m eager to continue experimenting on my own.

What excites you most about journalism/media in 2013?

Fletcher: It seems like there’s a greater appetite for experimentation. Places like Circa and NowThisNews are rethinking how journalism’s packaged and distributed in a mobile world. Projects like Matter, Atavist, and The Magazine are seeing if people will pay for a great story, given to them in a way that honors the reading experience. And "traditional" publishers like The New York Times are recognizing the importance of good design and investing in tools and people that let them package stories in better ways. Not all of these will be successful, but it’s progress beyond the impetus to just rack up page views.

What are the big differences you found between the traditional news shops and Facebook?

Fletcher: Facebook has incredible focus on their goal of connecting the world. Everything exists in service of that mission, and Facebook Stories was our small way of showing some of the cool things that happen when people connect. Newsrooms generally can’t focus on examining one idea with that level of intensity -- there are other stories to tell and themes to explore. It was refreshing to spend a year really homed in on a single idea, but part of me really missed the broader purview of traditional news.

What has changed since you started working?

Fletcher: The pace. And things were pretty fast when I got started. But so many publishers are producing more stories and turning them around faster, so as to compete for traffic from search and social media. On the whole, I’m not sure this is a good thing. Or at least it shouldn’t be the only way that stories are produced.

When did you decide to become a media person?

Fletcher: I wish I had a better story for this -- I didn’t get into the pottery class in high school, and a girl I liked was in the newspaper class. So it goes. But I’ve loved it ever since.

C'mon, fess up, what's next?

Fletcher: It’s a great time to launch a new publication.

What is the biggest tech challenge that media companies will face over the next five years?

Fletcher: Monetizing. I wish there were another answer, but that’s still the case. Journalists are producing great work, maybe more great work than at any point in history. And therein lies the problem -- what makes this a great moment to be a reader makes it a tough moment to be a producer. There’s going to be a great deal of creativity in how companies approach these challenges, though -- I think we’ll see a variety of successful models, some of which will include new forms of advertising and some of which will require reader support.

What makes good content?

Fletcher: Authenticity. It doesn’t matter who’s making it -- the Times or a company doing content marketing like Facebook or Coca-Cola. If it feels fake, forced or false, people won’t trust it.

What excites you about technology and media?

Fletcher: The barriers to entry continue to fall. What WordPress did for blogging, someone's about to do for publishing on iOS and Android, while companies like Scrollkit are making it easier to build immersive experiences around stories on the web. This frees journalists, photographers and art directors from technical costs that may have inhibited them in the past, and ultimately will result in more great projects being launched.

What applications do you have open while working?

Fletcher: MOG for music, Tweetdeck (although I’m much more of a follower than an active participant), Adobe Lightroom, and a really nifty and simple text editor called iA Writer. I find fewer options are better when it comes to writing.

What could the world use a little more of?

Fletcher: Originality.

What could the world use a little less of?

Fletcher: Top 10 lists.

Follow Dan Fletcher on Twitter, @danielfletcher. Find weekly updates from the Knight News Innovation Lab's profiles series on Fridays.

Ryan Graff joined the Knight News Innovation Lab in October 2011. He previously held a variety of newsroom positions -- from arts and entertainment editor to business reporter -- at newspapers around Colorado before moving to magazines and the web. In 2008 he won a News21 Fellowship from the Carnegie and Knight foundations to come up with innovative ways to report on and communicate the economic impact of energy development in the West. He holds an MSJ from the Medill School of Journalism and a certificate in media management from Northwestern's Media Management Center. Immediately prior to joining the Lab, Graff led marketing and public relations efforts in the Middle East.

knlogo_stacked_80x80_bg_white.jpg

The Knight Lab is a team of technologists, journalists, designers and educators working to advance news media innovation through exploration and experimentation. Straddling the sciences and the humanities the Lab develops projects, prototypes and innovative bits of code that help make information meaningful, and promote quality journalism, storytelling and content on the internet. The Knight Lab is a joint initiative of Northwestern University's Robert R. McCormick School of Engineering and Applied Science and the Medill School of Journalism. The Lab was launched and is sustained by a grant from the John S. and James L. Knight Foundation, with additional support from the Robert R. McCormick Foundation and the National Science Foundation.

May 14 2013

11:00

How FrontlineSMS Helped an Indonesian Community Clean Up a River

FrontlineSMS has had a strong connection with environmental issues since our founder had the initial spark of an idea while working on an anti-poaching project in South Africa. We're delighted to share how Een Irawan Putra of KPC Bogor and the Indonesia Nature Film Society used FrontlineSMS in Indonesia to invite the community to help clean up the garbage clogging the Ciliwung River.

Community Care Ciliwung Bogor, known locally as KPC Bogor, was founded in March 2009 in West Java, Indonesia to harness the growing community concern for the sustainability of the Ciliwung River in the city of Bogor. We formed to raise awareness about the damaging impact of garbage and waste in the river, as well as to mobilize the community to take action.

river1.jpg

The community around KPC Bogor was initially formed by our friend Hapsoro, who used to share his fishing experiences in the Ciliwung River. "If we go fishing in the river now, there is so much junk," Hapsoro once said. "All we get is plastic instead of fish." It was after an increasing number of similar tales from the community about pollution levels that we decided to conduct some field research. We set out to find the best spots for fishing along the Ciliwung River, particularly in the area stretching from Katulampa to Cilebut.

Some KPC members work in the Research and Development of Ornamental Fish Aquaculture at the Ministry of Marine and Fisheries, and in the fisheries laboratory at Bogor Agricultural University. So while we conducted the research voluntarily, they were always present to offer their skills and ensure our research methods were sound. In addition to the study of fish, some KPC members who work in mapping forest areas in Indonesia helped us to map the river area using GPS. We mapped the points where garbage was stacked, sewage levels and commensurate changes in the river. We also tested the quality of the river water using a simple method called "biotilik," which uses organisms as an indicator of the state of water quality in the Ciliwung River.

The results of the research were shocking. We found that while the people who live along the Ciliwung River rely on it for daily necessities including cooking, cleaning and washing, the river is increasingly being used as a place to dispose of trash and inorganic waste materials. The research helped us realize just how poor the Ciliwung River's condition was at the time -- with worrying consequences for the function, condition, and use of the river. Not only did we uncover poor river standards, we also identified a lack of public knowledge in the community about the importance of maintaining a healthy river. Poor waste disposal practices have become habits, ingrained over a long period of time in the minds of the people who live along the Ciliwung riverbanks. People are so used to these methods that they do not realize the severity of the environmental damage they cause.

citizen clean-up takes off

So members of KPC Bogor got together to ask, "What can we do to save Ciliwung River in ways that are simple, inexpensive and uncomplicated?" From there, a simple concept was born. We set out to recruit volunteers to become garbage scavengers in the Ciliwung River. Every Saturday, KPC Bogor members and friends met from 8am to 11am, to pick up any inorganic matter that litters the Ciliwung River and put it into the sacks before sending it to landfills.

In many ways, we actually consider this activity a way to meet new friends. It might be hard work that can cause us to sweat, but we understand that even though waste removal is a very simple activity, it's important for the sustainability of our river and the community around it. The number of people who come every Saturday varies: Sometimes there are only two, other times up to 100 people. For us, the number doesn't matter. What's important is that KPC Bogor must continue to remind citizens to take care of the Ciliwung River.

About three months ago, we had some sad and shocking news that our friend and leader Hapsoro had passed away. A few of us were worried about what would happen to our 4-year-old community and how it could continue without his leadership. We gathered at Hapsoro's house before his funeral, and we all committed to doing all we could to ensure KPC Bogor's activities would carry on. We saw how vital this work was for the River, the community's health, and our livelihoods. We needed to honor and commemorate the important service Hapsoro had initiated to form a sense of responsibility and awareness in the community. But how could we mobilize the community like he did?

river2.jpg

using sms

Hapsoro was a man who always actively sent SMS to all our friends to participate in regular KPC Bogor activities, especially to remind them to get them involved with cleaning the river. With an old mobile phone, he used to send messages one by one to the numbers in his phone book. The day after we decided to keep KPC Bogor alive, I asked permission from Hapsoro's wife, Yuniken Mayangsari, about whether we could keep using his phone number to send SMS to all the subscribers. She gave me the phone at once without hesitation.

I started using Hapsoro's mobile phone to send SMS every Friday to the friends of KPC Bogor. When I was using the phone, I realized how patient Hapsoro must have been in sending the SMS alerts about river cleaning over his three years of organizing the activities. Each number had to be selected one by one from the address book, and I could only enter 10 numbers at once. Getting through more than 200 numbers was exhausting, and it took me more than two hours! Not to mention when I forgot which numbers I'd already sent the message to. I'm sure a few people got the message twice.

Because of the limited time I could dedicate to sending SMS every Friday, some friends and I decided to try using FrontlineSMS. A friend who lives in Jakarta went looking for a compatible Huawei E-series modem to send and receive messages with the software. When we were finally able to buy one, we installed it on my laptop and KPC Bogor's laptop. Now every Friday, we load up FrontlineSMS to send alerts about KPC Bogor activities due to take place the following Saturday. It's great because I can carry on working while FrontlineSMS is sending the messages. I can easily manage contacts and send alerts to the community in a few simple steps.
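The tedium described above -- tracking who has already received an alert, 10 recipients at a time -- is exactly what a tool like FrontlineSMS automates. A minimal sketch of that dedup-and-batch logic (a hypothetical illustration, not FrontlineSMS's actual code):

```python
# Hypothetical sketch of the dedup-and-batch bookkeeping that a
# bulk-SMS tool automates; not FrontlineSMS's actual implementation.

def make_batches(contacts, already_sent, batch_size=10):
    """Drop duplicate numbers and previously-messaged numbers, then
    split the rest into batches (the old phone could only address
    10 recipients at a time)."""
    pending = []
    seen = set(already_sent)
    for number in contacts:
        if number not in seen:
            seen.add(number)
            pending.append(number)
    return [pending[i:i + batch_size]
            for i in range(0, len(pending), batch_size)]

# 23 subscribers plus one accidental duplicate entry
contacts = ["+62811%04d" % n for n in range(23)] + ["+628110001"]
batches = make_batches(contacts, already_sent=["+628110000"])
```

With 22 numbers left after deduplication, this yields three batches (10, 10, and 2), and nobody receives the alert twice.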

KPC Bogor's work with volunteers is now so successful that we started a "Garbage Scavengers Race," which has become an official annual event in the city of Bogor. Last year, 1,500 people came to the river to help, and we collected 1,300 bags of garbage in just three hours. We are now preparing for this year's scavenge, due to take place in June 2013. Recognizing the need to tackle the root causes of the waste problem rather than just the cleanup, we've also started to do more than collect garbage. KPC Bogor now provides environmental education for elementary school children, conducts research on water quality and plants trees along the Ciliwung River. We also regularly assess the river's biota, analyzing the diversity of micro-organisms, plants and animals in the ecosystem. Recently, we even made a film about the waste problems in the Ciliwung River.

Now, we use FrontlineSMS to let the community know about our new activities too. Every week we receive SMS from new people who want their mobile number to be added to the subscribers list so they can receive a regular SMS every week with information about how to join in with our activities.

Thanks to the community, the city government now gives our activities full support, allocating funds for waste cleanup in its official budget. Ciliwung was once a clean river, venerated for its famously fresh water and relied on by the people of Indonesia for their livelihoods -- a source of clean water for drinking, cooking, bathing and washing. This community wants the Ciliwung River returned to that condition, and we're getting there -- one piece of garbage at a time.

You can watch a video with English subtitles about the KPC Bogor community here.

More information about KPC Bogor can be found here or via Twitter @tjiliwoeng and Facebook.com/KPCBogor.

river3.jpg

Een Irawan Putra is currently director of the Indonesia Nature Film Society, coordinator of the Ciliwung River Care Community (KPC Bogor), head of TELAPAK's West Java Territorial Body, member of TELAPAK, and member of LAWALATA IPB (Student Nature Club, Bogor Agriculture University). Formerly he was a forest researcher at Greenpeace Southeast Asia's Indonesia office (2005); producer, cameraman, and editor at Gekko Studio (2005-2012); vice director of PT. Poros Nusantara Media (2012); and vice president of the Association of Indonesia Peoples' Media and Television (ASTEKI) (2012).

April 16 2013

05:24

The View from MIT on the Boston Marathon Explosions

Here's what we know:

At 2:50 p.m., two explosions occurred along Boylston Street near the finish line of the Boston Marathon. Police later detonated a third device further down the street.

As of 6 p.m., two people are dead, and nearly 90 injured, according to the Boston Globe. At MIT's Civic Media Center, we have been following along through both broadcast and social media, including the Globe's liveblog and Completure's News Scanner.

The Boston Marathon is one of the country's pre-eminent sporting events. It draws athletes and spectators into the beating heart of one of the world's best cities.

Civic is located almost directly across the river from where the explosions occurred. The blasts were audible from the MIT campus. Members of the immediate Civic family have checked in. Some were at the marathon. All are safe.

Not everyone has been lucky enough to contact their loved ones as we have. On the Boston Marathon website you can search for runners and check their status. Google has launched an instance of its People Finder for the emergency. The Red Cross' Safe and Well system appears at the moment to be overwhelmed by demand.

Geeks Without Bounds is maintaining a Google Doc of resources, including spreadsheets where people can both offer and request housing.

marathon1.jpg

I write this as a native. My mother grew up in Everett. My father grew up in Melrose. Like my Civic colleague Matt Stempeck, who attended the marathon today, I was born in Reading. I love Boston. I love its people. I love its tradition. It is my home. My heart hurts. And then I think of Carlos Arredondo.

Carlos-Arredondo.jpg

Arredondo became a peace activist in 2004 after losing one son in Iraq; his other son, in his grief, later took his own life. A Costa Rican immigrant, Arredondo became a citizen in 2006 with the help of the late Ted Kennedy. He happened to be near the finish line today and rushed to assist first responders: a man who has suffered such loss, such grief, still doing all he can to help other members of the nation he can now call his own.

Arredondo gives me hope. He reminds me that, despite all evidence to the contrary, there is good in the world. As did Patton Oswalt, the acerbic comic, who today wrote some words I will try to always remember: "So when you spot violence, or bigotry, or intolerance or fear or just garden-variety misogyny, hatred or ignorance, just look it in the eye and think, 'The good outnumber you, and we always will.'"

As a wise man once said:

tumblr_mf1s1vWV6s1rvynjbo1_1280.jpg

RELATED READING: Social Media Offers Vital Updates, Support After Boston Marathon Bombings

Chris Peterson is on leave from MIT's Office of Undergraduate Admissions, where he has spent three years directing web communications, to be a full-time graduate student in MIT's Comparative Media Studies program. In addition to overseeing all web and new media activities for MITAdmissions, Chris liaised with FIRST Robotics and had a special focus on subaltern, disadvantaged, and first-generation applicants. He continues to be involved with MIT's awesome undergraduates as a freshman advisor. Before MIT, Chris worked as a research assistant at the Berkman Center for Internet and Society at Harvard Law School and as a Senior Campus Rep for Apple. He currently serves on the Board of Directors of the National Coalition Against Censorship, as an Associate at the National Center for Technology and Dispute Resolution, and as the sole proprietor of BurgerMap.org. He holds a B.A. in Critical Legal Studies from the University of Massachusetts at Amherst, where he completed his senior thesis on Facebook privacy under Professors Ethan Katsh and Alan Gaitenby. He is interested generally in how people communicate within digitally mediated spaces and occasionally blogs at cpeterson.org.

A version of this post originally appeared on the MIT Center for Civic Media blog.

April 15 2013

11:00

San Francisco, a City That Knows Its Faults

Low vacancy, so many homeless people, beautiful old buildings, shuttle buses to Silicon Valley ... and warning, I'm going to talk about earthquakes. If it gets scary, stick with me: There's good news at the end, ways to better understand the specific risks facing San Francisco, and some easy places to start.

Let's Talk Numbers

After the 1989 Loma Prieta earthquake, 11,500 Bay Area housing units were uninhabitable. If there were an earthquake today, the current estimate (from Spur) is that 25% of San Francisco's population would be displaced for anywhere from a few days to a few years. But San Francisco's maximum shelter capacity can serve only about 7.5% of the population, and only for short-term stays in places like the Moscone Center. So where would the remaining 17.5% of the population go?

  1. Some people may decide to leave the city and start over somewhere else (something called "outmigration," which is not ideal for the economic health of a city).
  2. Some people may take longer-term housing in vacant units around the city. But this is particularly tough in San Francisco, where vacancy is at an all-time low of about 4.2%.
  3. This brings us to the most ideal scenario: staying put -- something referred to in the emergency management world as "shelter-in-place."
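Spur's percentages are easy to sanity-check with a few lines of arithmetic. A quick sketch -- the population figure is my assumption (roughly San Francisco's 2010 census count), not a number from Spur's report:

```python
# Rough arithmetic behind the shelter gap. The population figure is
# an assumption (roughly SF's 2010 census count), not from Spur.
population = 805_000
displaced = 0.25 * population    # Spur estimate: 25% displaced
sheltered = 0.075 * population   # short-term shelter capacity: ~7.5%
gap = displaced - sheltered      # the remaining 17.5%

print(round(displaced))  # roughly 201,000 people displaced
print(round(gap))        # roughly 141,000 with no official shelter
```

Under that assumption, something like 141,000 residents would need options beyond official shelters, which is why shelter-in-place matters so much.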

ground-shaking-map.jpg

ground-shaking-key.jpg

What is Shelter-in-Place?

Shelter-in-place is "a resident's ability to remain in his or her home while it is being repaired after an earthquake -- not just for hours or days after an event, but for the months it may take to get back to normal. For a building to have shelter-in-place capacity, it must be strong enough to withstand a major earthquake without substantial structural damage. [...] residents who are sheltering in place will need to be within walking distance of a neighborhood center that can help meet basic needs not available within their homes."

Spur's recent report "Safe Enough to Stay" estimates that San Francisco needs 95% of its housing to be shelter-in-place capable. But currently some 3,000 addresses -- home to about 15% of the population -- are "soft story buildings."

LomaPrieta-Marina.jpeg

A soft story building is characterized by a story with a lot of open space. Parking garages, for example, are often soft stories, as are large retail spaces or floors with a lot of windows.

Live in SF? What you can do:

  1. For starters, find out if your house is on the list of soft story buildings, here. The SF Board of Supervisors also recently voted to pass a "Mandatory Seismic Retrofit Program," which will require owners to fix these buildings. Might as well check your block while you're at it. If you are a renter, contact your landlord. If you're an owner, look into seismic retrofitting.
  2. Check the map above to see what sort of liquefaction zone you're in. If you're in one of the better zones, plan to stock what you need for at least 72 hours while the bigger emergencies are dealt with.
  3. Sign up here to join other San Franciscans looking for better tools to deal with these issues and we'll keep you up to date. At Recovers, we're trying to help San Francisco prepare -- and prepare smartly.

Have an idea or question? Get in touch. We want to help.

sfrecovers.jpg

Emily Wright is an experience designer and illustrator. She studied at Parsons School of Design and California College of the Arts. Before joining Recovers, Emily was a 2012 Code for America Fellow focused on crisis response and disaster preparedness. She likes pretzels, and engaging her neighbors through interactive SMS projects.

April 09 2013

11:00

OpenNews Revamps Code Sprints; Sheetsee.js Wins First Grant

Back at the Hacks/Hackers Media Party in Buenos Aires, I announced the creation of Code Sprints -- funding opportunities to build open-source tools for journalism. We used Code Sprints to fund a collaboration between WNYC in New York and KPCC in Southern California to build a parser for election night XML data that ended up being used on well over 100 sites -- it was a great collaboration to kick off the Code Sprint concept.

Originally, Code Sprints were designed to work like the XML parser project: driven in concept and execution by newsrooms. While that proved great for working with WNYC, we heard from a lot of independent developers working on great tools that fit the intent of Code Sprints, but not the wording of the contract. And we heard from a lot of newsrooms that wanted to use code, but not drive development, so we rethought how Code Sprints work. Today we're excited to announce refactored Code Sprints for 2013.

Now, instead of a single way to execute a Code Sprint, there are three ways to help make Code Sprints happen:

  • As an independent developer (or team) with a great idea that you think may be able to work well in the newsroom.
  • As a newsroom with a great idea that wants help making it a reality.
  • As a newsroom looking to beta-test code that comes out of Code Sprints.

Each of these options means we can work with amazing code, news organizations, and developers and collaborate together to create lots of great open-source tools for journalism.

Code Sprint grant winner: Sheetsee.js

I always think real-world examples are better than theoretical ones, so I'm also excited to announce that the first grant of our revamped Code Sprints will go to Jessica Lord to develop her great Sheetsee.js library for the newsroom. Sheetsee has been on the OpenNews radar for a while -- we profiled the project in Source a number of months back -- and we're thrilled to help fund its continued development.

Sheetsee was originally designed for use in the Macon, Ga., government as part of Lord's Code for America fellowship, but the intent of the project -- simple data visualizations using a spreadsheet for the backend -- has always had implications far beyond the OpenGov space. We're excited today to pair Lord with Chicago Public Media (WBEZ) to collaborate on turning Sheetsee into a kick-ass and dead-simple data journalism tool.
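The "spreadsheet for the backend" idea is simple enough to sketch. Sheetsee.js does this in JavaScript against a published Google Spreadsheet; here is the same concept in Python, with a hard-coded CSV string (hypothetical data) standing in for the sheet download:

```python
import csv
import io

# A published spreadsheet can be fetched as CSV over HTTP; this
# hard-coded string stands in for that download (hypothetical data).
published_csv = """neighborhood,potholes_reported
Downtown,42
Northside,17
"""

def sheet_to_records(csv_text):
    """Turn spreadsheet CSV into a list of dicts -- the same shape
    a charting or mapping helper would consume."""
    return list(csv.DictReader(io.StringIO(csv_text)))

records = sheet_to_records(published_csv)
```

The appeal for newsrooms is that reporters keep editing the spreadsheet they already know, and the visualization updates from it with no database in between.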

For WBEZ's Matt Green, Sheetsee fit the bill for a lightweight tool that could help get the reporters "around the often steep learning curve with data publishing tools." Helping to guide Lord's development to meet those needs ensures that Sheetsee becomes a tool that works at WBEZ and at other news organizations as well.

We're excited to fund Sheetsee, to work with a developer as talented as Lord, to collaborate with a news organization like WBEZ, and to relaunch Code Sprints for 2013. Onward!

Dan Sinker heads up the Knight-Mozilla News Technology Partnership for Mozilla. From 2008 to 2011 he taught in the journalism department at Columbia College Chicago where he focused on entrepreneurial journalism and the mobile web. He is the author of the popular @MayorEmanuel twitter account and is the creator of the election tracker the Chicago Mayoral Scorecard, the mobile storytelling project CellStories, and was the founding editor of the influential underground culture magazine Punk Planet until its closure in 2007. He is the editor of We Owe You Nothing: Punk Planet, the collected interviews and was a 2007-08 Knight Fellow at Stanford University.

A version of this post originally appeared on Dan Sinker's Tumblr here.

April 02 2013

10:39

How Public Lab Turned Kickstarter Crowdfunders Into a Community

Public Lab is structured like many open-source communities, with a non-profit hosting and coordinating the efforts of a broader, distributed community of contributors and members. However, we are in the unique position that our community creates innovative open-source hardware projects -- tools to measure and quantify pollution -- and unlike software, it takes some materials and money to actually make these tools. As we've grown over the past two years, from just a few dozen members to thousands today, crowdfunding has played a key role in scaling our effort and reaching new people.

DIY Spectrometry Kit Kickstarter

Kickstarter: economies of DIY scale

Consider a project like our DIY Spectrometry Kit, conceived just after the Deepwater Horizon oil spill as an attempt to identify petroleum contamination. In the summer of 2012, just a few dozen people had ever built one of our designs, let alone uploaded and shared their work. As the device's design matured to the point that anyone could easily build a basic version for less than $40, we set out -- through a Kickstarter project -- to reach a much larger audience while identifying new design ideas, use cases, and contributors. Our theory was that many more people would get involved if we offered a simple set of parts in a box, with clear instructions for assembly and use.

By October 2012, more than 1,600 people had backed the project, raising over $110,000 -- and by the end of December, more than half of them had received a spectrometer kit. Many were up and running shortly after the holidays, and we began to see regular submissions of open spectral data at http://spectralworkbench.org, as well as new faces and strong opinions on Public Lab's spectrometry mailing list.

Kickstarter doesn't always work this way: Often, projects turn into startups, and the first generation of backers simply becomes the first batch of customers. But as a community whose mission is to involve people in the process of creating new environmental technologies, we had to make sure people didn't think of us as a company but as a community. Though we branded the devices a bit and made them look "nice," we made sure previous contributors were listed in the documentation, which explicitly welcomed newcomers into our community and encouraged them to get plugged into our mailing list and website.

newbox.jpg

As a small non-profit, this approach is not only in the spirit of our work, but essential to our community's ability to scale up. To create a "customer support" contact rather than a community mailing list would be to make ourselves the exclusive contact point and "authority" for a project which was developed through open collaboration. For the kind of change we are trying to make, everyone has to be willing to learn, but also to teach -- to support fellow contributors and to work together to improve our shared designs.

Keeping it DIY

One aspect of the crowdfunding model we have been careful about is the production method itself. While procuring parts for 1,000 spectrometers is certainly vastly different from one person assembling a single device, we all agreed that the device should be easy to assemble without buying a Public Lab kit -- from off-the-shelf parts, at a reasonable cost. Thus the parts we chose were all easily obtainable -- from the aluminum conduit box enclosure, to the commercially available USB webcams, to the DVD diffraction grating which makes spectrometry possible.

spectrometry.jpg

While switching to a purpose-made "holographic grating" would have made for a slightly more consistent and easy-to-assemble kit (not to mention the relative ease of packing it vs. chopping up hundreds of DVDs with a paper cutter...), it would have meant that anyone attempting to build their own would have to specially order such grating material -- something many folks around the world cannot do. Some of these decisions also made for a slightly less optimal device -- but our priority was to ensure that the design was replicable, cheap, and easy. Advanced users can take several steps to dramatically improve the device, so the sky is the limit!
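To make the webcam-plus-grating idea concrete, here is an illustrative sketch of what spectrometry software such as Public Lab's Spectral Workbench does with a camera frame -- this is my own simplification, not the project's actual code. It averages a horizontal band of pixel rows into one intensity value per column, then maps columns to wavelengths with a linear calibration whose constants are made up:

```python
# Simplified illustration of webcam spectrometry; not Public Lab's
# actual code. The calibration constants below are invented.

def extract_spectrum(frame, row_start, row_end):
    """frame: 2-D list of grayscale pixel values (rows x columns).
    Average a band of rows into one intensity value per column."""
    band = frame[row_start:row_end]
    n = len(band)
    return [sum(row[col] for row in band) / n
            for col in range(len(band[0]))]

def pixel_to_wavelength(col, nm_at_col0=400.0, nm_per_pixel=0.5):
    # Linear calibration: assumed constants, set per device in
    # practice using a source with known emission lines.
    return nm_at_col0 + nm_per_pixel * col

frame = [[0, 10, 20, 30],
         [0, 20, 40, 60],
         [9, 9, 9, 9]]   # last row falls outside the sampled band
spectrum = extract_spectrum(frame, 0, 2)
```

Because every kit shares the same webcam and grating geometry, a calibration worked out by one user transfers reasonably well to everyone else's device -- one small payoff of standardized hardware.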

The platform effect

One clear advantage of distributing kits, besides the bulk prices we're able to get, is that almost 2,000 people now have a nearly identical device -- so they can learn from one another with greater ease, not to mention develop applications and methodologies which thousands of others can reproduce with their matching devices. We call this the "platform effect" -- where this "good enough" basic design has been standardized to the point that people can build technologies and techniques on top of it. In many ways, we're looking to the success of the Arduino project, which created not only a common software library, but a standardized circuit layout and headers to support a whole ecology of software and hardware additions which are now used by -- and produced by -- countless people and organizations.

Spectral Challenge screenshot

As we continue to grow, we are exploring innovative ways to use crowdfunding to get people to collaboratively use the spectrometers they now have in hand to tackle real-world problems. We recently launched the Spectral Challenge, a kind of "X Prize for DIY science" -- but crowdfunded, meaning that those who support the goals of the Challenge can participate in the competition directly or contribute to the prize pool. Additionally, Public Lab will continue to leverage more traditional means of crowdfunding as our community develops new projects to measure plant health and produce thermal images -- and we'll have to continue to ensure that any kits we sell clearly welcome new contributors into the community.

The lessons we've learned from our first two kit-focused Kickstarters will help us with everything from the box design to the way we design data-sharing software. The dream, of course, is that in years to come, as we pass the 10,000- and 100,000-member marks, we continue to be a community which -- through peer-to-peer support -- helps one another identify and measure pollution without breaking the bank.

The creator of GrassrootsMapping.org, Jeff Warren designs mapping tools, visual programming environments, and flies balloons and kites as a fellow in the Center for Future Civic Media, and as a student at the MIT Media Lab's Design Ecology group, where he created the vector-mapping framework Cartagen. He co-founded Vestal Design, a graphic/interaction design firm in 2004, and directed the Cut&Paste Labs project, a year-long series of workshops on open source tools and web design in 2006-7 with Lima designer Diego Rotalde. He is a co-founder of Portland-based Paydici.com.

March 25 2013

11:00

How One Student Went Mobile-Only for a Day on Campus

Recently, Reese News Lab students have conducted experiments in living without a smartphone and social media.

NYT ipad.jpg

But because the lab is working on a project on producing media for mobile devices, I thought it was time that someone tried a computer blackout. I'd give up my laptop for a day, navigating the UNC campus with just an iPhone and an iPad (with a Bluetooth keyboard). I figured that way, I could find out how mobile-friendly the world really is.

Before I could attempt this task, though, I knew I had to plan carefully. I had to make sure it wouldn't interfere with my schoolwork, and I tried to account for as many problems as I could beforehand.

I knew I would be unable to print because UNC's printing program requires you to install specific software. I also would lose access to a good word-processing program. So I added all the documents I needed to my Google Drive and converted them into PDFs. I also knew I'd lose access to Spotify, so I downloaded MixerBox, which makes playlists of YouTube videos. Set with my arsenal of solutions, I felt confident that this day would be relatively easy, but I quickly discovered that you can't account for everything.

A mobile-only day begins

When I awoke on the day of my experiment, I was pleased to have no trouble going through my routine of checking emails and Twitter. All of the mobile sites I encountered were effective and easy to navigate. But my positivity about the day was soon shattered by the first text I received: a free Redbox code. I don't have a TV in my room, so without my laptop, a disc was useless. This was the first omen that Netflix would be my saving grace later.

With a sense of dread, I embarked on the rest of my day. I immediately noticed how much lighter my backpack was without a laptop, so at least there was one perk. In class, I was already used to doing the reading on my iPad. It was after class that I ran into trouble.

Help! No tabs!

Sticking to my Thursday routine, I headed to the Reese News Lab. However, I realized that doing any kind of research was going to be hard. When I am on a computer, I love using tabs and multiple windows. I can read something in my browser and take notes on it in a Word document. As I write this, I have six Word documents, Spotify, an Excel spreadsheet and four Google Chrome windows (31 tabs) open. And yes, those numbers were higher until I was embarrassed by how much I had open and decided to close a few.

In the lab, I decided to scroll through Twitter and Facebook to find the latest news. As I tapped through articles, though, I realized how much I missed the tab and find features. Links in both of these apps opened a new page within the app. However, these pages were slow and harder to navigate. Multimedia components from places like the Wall Street Journal were especially troublesome as I tried to navigate their normally mobile-friendly site within these other apps. Also, I couldn't just search for keywords on any page. Rather, I had to search for terms line-by-line.

Frustrated, I decided that I just wanted a break and opted to try the USA Today crossword, but I hit another roadblock. I couldn't access it in my Safari app: USA Today requires mobile users to purchase its crossword app. I'm a college student, so thanks, but no thanks.

All eyes on screens

I headed to the Student Union. Although I saw plenty of people I knew there, they were all engrossed in whatever was on their computer screens. The screens walled them off from social interaction.

I headed back to my room around 5 p.m. to charge my phone, which was already in the red. I turned on Netflix, but I quickly got antsy. I needed something else to do simultaneously so that I wasn't just mindlessly watching a TV show. I couldn't do any work on my iPad while I had Netflix running, so I resorted to cleaning. This lasted for an hour or so until I decided I just needed to get out and go to dinner.

But the problem wasn't over. After dinner, I started to try to teach myself HTML/CSS with Codecademy. This seemed like the perfect opportunity to attempt a few more courses, but in a rather ironic turn of events, I found the site was not mobile-friendly. All of the site's features worked on my iPad, but it was not easy to navigate and use. Even with my keyboard, which hides the onscreen keyboard, the site responded by zooming in too much. Sure, most people aren't coding from mobile devices now, but why can't we?

After facing yet another disappointment, I spent the rest of the night using my iPad to watch Netflix and my phone as a second screen, where I could read articles and play games. But I still never found an adequate solution. I couldn't even clean out my inbox from my phone easily, as the mail app tries to archive messages rather than delete them. I opted to go to bed early, knowing that as soon as I woke up Friday, I could have my laptop back.

Lessons learned

So what did I learn?

  1. It's expensive to use only mobile devices. While content is often free for desktop users, mobile users are forced to buy apps to access the same content.
  2. Mobile devices make multitasking harder.
  3. We miss social interactions and are less observant hiding behind computer screens.
  4. We can still perform most of our daily routines on mobile devices. In fact, most of the sites we interact with have a mobile-friendly version.

And when I finally did check my laptop, I found I hadn't really missed anything. Sure, I was unable to get ahead on my work, but I had still been connected to the rest of the world. So could I learn to survive without a laptop? Absolutely. Do I want to try it? Not in the slightest.

Lincoln Pennington is a freshman in the journalism school at UNC Chapel Hill with a second major in political science. He works as a staffer for reesenews.org and tweets from @Lincoln_Ross. He is a politics junkie interested in the future of the media and hopes to work in D.C. upon graduation in 2016.

This story originally appeared on Reese News Lab.

reeselablogo.jpgReese News Lab is an experimental news and research project based at the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. The lab was established in 2010 with a gift from the estate of journalism school alum Reese Felts. Our mission is to push past the boundaries of media today, refine best practices and embrace the risks of experimentation. We do this through: collaborating with researchers, students, the public and industry partners; producing tested, academically grounded insights for media professionals; and providing engaging content. We pursue projects that enable us to create engaging content and to answer research questions about the digital media environment. All of our projects are programmed, designed, reported, packaged and edited by a staff of undergraduate and graduate students.

September 05 2012

13:33

Tor Project Offers a Secure, Anonymous Journalism Toolkit

"On condition of anonymity" is one of the most important phrases in journalism. At Tor, we are working on making that more than a promise.

torlogo.jpg

The good news: The Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.

The bad news: People who were used to getting away with atrocities are aware that the Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.

New digital communication means new threats

Going into journalism is a quick way to make a lot of enemies. Authoritarian regimes, corporations with less-than-stellar environmental records, criminal cartels, and other enemies of the public interest can all agree on one thing: Transparency is bad. Action to counter their activities starts with information. Reporters have long been aware that threats of violence, physical surveillance, and legal obstacles stand between them and the ability to publish. With digital communication, there are new threats and updates to old ones to consider.

Eavesdroppers can reach almost everything. We rely on third parties for our connections to the Internet and voice networks. The things you ask search engines, the websites you visit, the people you email, the people you connect to on social networks, and maps of the places you have been while carrying a mobile phone are all available to anyone who can pay, hack, or threaten their way into these records. The use of this information ranges from merely creepy to harmful.

You may be disturbed to learn about the existence of a database with the foods you prefer, the medications you take, and your likely political affiliation based on the news sites you read. On the other hand, you may be willing to give this information to advertisers, insurance companies, and political campaign staff anyway. For activists and journalists, having control over information can be a matter of life and death. Contact lists, chat logs, text messages, and hacked emails have been presented to activists during interrogations by government officials. Sources have been murdered for giving information to journalists.

If a journalist does manage to publish, there is no guarantee that people in the community being written about can read the story. Censorship of material deemed offensive is widespread. This includes opposition websites, information on family planning, most foreign websites, platforms for sharing videos, and the names of officials in anything other than state-owned media. Luckily, there are people who want to help ensure access to information, and they have the technology to do it.

Improving privacy and security

Tor guards against surveillance and censorship by bouncing your communications through a volunteer network of about 3,000 relays around the world. These relays can be set up using a computer on a home connection, using a cloud provider, or through donations to people running servers.

When you start Tor, it connects to directory authorities to get a map of the relays. Then it randomly selects three relays. The result is a tunnel through the Internet that hides your location from websites and prevents your Internet service provider from learning about the sites you visit. Tor also hides this information from Tor -- no one relay has all of the information about your path through the network. We can't leak information that we never had in the first place.
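The three-hop design can be illustrated with a toy path-selection sketch. This is purely illustrative: real Tor weights relay choice by bandwidth and capability flags and enforces constraints (for example, never using two relays from the same operator family in one circuit), none of which are modeled here.

```python
import random

def pick_path(relays):
    """Pick three distinct relays for a circuit: guard, middle, exit.

    Illustrative simplification of Tor's path selection -- the real
    algorithm weights choices by bandwidth and enforces family and
    subnet constraints rather than sampling uniformly.
    """
    if len(relays) < 3:
        raise ValueError("need at least three relays")
    return random.sample(relays, 3)

# A stand-in for the map of relays fetched from the directory authorities.
relays = [f"relay-{i}" for i in range(3000)]
guard, middle, exit_node = pick_path(relays)
```

Because each relay learns only the previous and next hop, the guard knows who you are but not where you're going, and the exit knows where you're going but not who you are.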

The Tor Browser, a version of Firefox that pops up when you are connected to the Tor network, blocks browser features that can leak information. It also includes HTTPS Everywhere, software to force a secure connection to websites that offer protection for passwords and other information sent between you and their servers.

Other privacy efforts

Tor is just one part of the solution. Other software can encrypt email, files, and the contents of entire drives -- scrambling the contents so that only people with the right password can read them. Portable operating systems like TAILS can be put on a CD or USB drive, used to connect securely to the Internet, and removed without leaving a trace. This is useful while using someone else's computer at home or in an Internet cafe.

The Guardian Project produces open-source software to protect information on mobile phones. Linux has come a long way in terms of usability, so there are entire operating systems full of audiovisual production software that can be downloaded free of charge. This is useful if sanctions prevent people from downloading copies of commercial software, or if cost is an issue.

These projects are effective. Despite well-funded efforts to block circumvention technology, hundreds of thousands of people are getting past firewalls every day. Every video of a protest that ends up on a video-sharing site or the nightly news is a victory over censorship.

There is plenty of room for optimism, but there is one more problem to discuss. Open-source security software is not always easy to use. No technology is immune to user error. The responsibility for this problem is shared by developers and end users.

The Knight Foundation is supporting work to make digital security more accessible. Usability is security: Making it easier to use software correctly keeps people safe. We are working to make TAILS easier to use. Well-written user manuals and video tutorials help high-risk users who need information about the risks and benefits of technology in order to come up with an accurate threat model. We will be producing more educational materials and will ask for feedback to make sure they are clear.

When the situation on the ground changes, we need to communicate with users to get them back online safely. We will expand our help desk, making help available in more languages. By combining the communication skills of journalists with the computer security expertise of software developers, we hope to protect reporters and their sources from interference online.

You can track our progress and find out how to help at https://blog.torproject.org and https://www.torproject.org/getinvolved/volunteer.html.en.

Karen Reilly is Development Director at The Tor Project, responsible for fundraising, advocacy, general marketing, and policy outreach programs for Tor. Tor is software and a volunteer network that enables people to circumvent censorship and guard their privacy online. She studied Government and International Politics at George Mason University.

September 04 2012

13:13

LocalWiki Releases First API, Enabling Innovative Apps

We're excited to announce that the first version of the LocalWiki API has just been released!

What's this mean?

In June, folks in Raleigh, N.C., held their annual CityCamp event. CityCamp is a sort of "civic hackathon" for Raleigh. During one part of the event, people broke up into teams and came up with projects that used technology to help solve local, civic needs.

citycamp.jpg

What did almost every project pitched at CityCamp have in common? "Almost every final CityCamp idea had incorporated a stream of content from TriangleWiki," CityCamp and TriangleWiki organizer Reid Seroz said in an interview with Red Hat's Jason Hibbets.

LocalWiki is an effort to create community-owned, living information repositories that provide much-needed context behind the people, places, and events that shape our communities. The LocalWiki API makes it really easy for people to build applications and systems that push and pull information from a LocalWiki. In fact, the API has already been integrated into a few applications.

The winning project at CityCamp Raleigh, RGreenway, is a mobile app that helps residents find local greenways. Its developers plan to push and pull data from the TriangleWiki's extensive listing of greenways.

Another group in the Raleigh-Durham area, Wanderful, is developing a mobile application that teaches residents about their local history as they wander through town. They're using the LocalWiki API to pull pages and maps from the TriangleWiki.

Ultimately, we hope that LocalWiki can be thought of as an API for the city itself -- a bridge between local data and local knowledge, between the quantitative and the qualitative aspects of community life.

Using the API

You can read the API documentation to learn about the new API. You'll also want to make sure you check out some of the API examples to get a feel for things.
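As a quick sketch, fetching pages usually starts with building a query URL against an instance's page endpoint. The `/api/page/` path and the `name__icontains` filter below are illustrative guesses, not confirmed parts of the LocalWiki schema; check the API documentation for the exact endpoints and parameters.

```python
from urllib.parse import urlencode

def page_query_url(base, name=None, limit=20):
    """Build a URL for querying pages on a LocalWiki instance.

    The endpoint path and parameter names here are assumptions for
    illustration -- consult the instance's API docs for the real schema.
    """
    params = {"format": "json", "limit": limit}
    if name:
        # Hypothetical case-insensitive substring filter on page names.
        params["name__icontains"] = name
    return f"{base}/api/page/?{urlencode(sorted(params.items()))}"

url = page_query_url("https://trianglewiki.org", name="greenway")
```

From there, a client would GET the URL and decode the JSON body into page objects (title, content, map geometry).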

wanderful.jpg

We did a lot of work to integrate advanced geospatial support into the API, extending the underlying API library we were using -- and now everyone using it can effortlessly create an awesome geospatially aware API.

This is just the first version of the API, and there's a lot more we want to do! As we add more structured data to LocalWiki, the API will get more and more useful. And we hope to simplify and streamline the API as we see real-world usage.

Want to help? Share your examples for interacting with the API from a variety of environments -- jump in on the page on dev.localwiki.org or add examples/polish to the administrative documentation.

CityCamp photo courtesy of CityCamp Raleigh.

Philip Neustrom is a software engineer in the San Francisco Bay area. He co-founded DavisWiki.org in 2004 and is currently co-directing the LocalWiki.org effort. For the past several years he has worked on a variety of non-profit efforts to engage everyday citizens. He oversaw the development of the popular VideoTheVote.org, the world's largest coordinated video documentation project, and was the lead developer at Citizen Engagement Laboratory, a non-profit focused on empowering traditionally underrepresented constituencies. He is a graduate of the University of California, Davis, with a bachelor's in Mathematics.

August 30 2012

13:13

Post-Disaster, We Can Do More Than 'Feed It to Fix It'

Did something go wrong? Bring a casserole. While the type of barbecue may vary regionally, if you're standing near storm damage, there's likely a home-cooked meal on the way. Following a disaster, competent ladies fill church and school kitchens, turning out hundreds of sandwiches. Restaurants donate buffet trays of wings and lasagna. Community organizations host spaghetti dinner after spaghetti dinner, feeding survivors and volunteers alike. Quite simply, we live in a casserole culture, and we can harness this tendency for a better local response.

Why, exactly, our knee-jerk reaction as a culture is to bake a pie in the face of unthinkable loss is anyone's guess. I have a theory that our Norman Rockwell tendencies are linked directly to what we are told we can and cannot do after a disaster.

'feed it to fix it'

Unless you happen to keep FEMA's National Incident Management System documents around for bedtime reading, you probably have no clue who is in charge of what on the ground after a disaster. Even if you do know what is supposed to happen, the practice is often far different from the plan. As an unaffiliated volunteer, you're often sent home, told off, or simply not answered when you try to help.

foodbank.jpeg

But food -- that makes sense. The Red Cross won't accept home-cooked donations, but local churches will. You're greeted with thanks instead of confusion if you drop off sandwiches and Gatorade at a worksite. We, as a culture, have assumed permission to feed during a disaster, and we get after it. Think: Studs Terkel meets Paula Deen.

I, like you, love a good plate of mashed potatoes. But our "feed it to fix it" tendencies right now fall short of our potential to help out at the community level. Here are a few suggestions for building a better community recovery:

Use your skills

Yes, you can cook. But are you also a lawyer? Bilingual? Great with computers? Those skills are every bit as necessary to the recovery as Dunkin' Donuts -- survivors will need tax advice, translation, and resource-management help.

Use your head

The difference between lasagna and labor is that volunteering skills through large, regional organizations is currently a painful process. Your community can independently plan to share skills and resources before a disaster -- just agree upon a system beforehand.

Use your leaders

Your emergency management department and city leadership can use your help. Can you start a Community Emergency Response team? Would you agree to help the EM run social media during a disaster? Get in touch and plan ahead!

Use this recipe: the single best recipe for chocolate chip recovery cookies I have ever encountered:

Catastrophe Cookies

  • 1 1/2 cups all-purpose flour
  • 1/2 teaspoon baking soda
  • 1/2 teaspoon salt
  • 1/2 cup (1 stick) cold unsalted butter, cut into 1/2-inch pieces
  • 3/4 cup tightly packed light brown sugar
  • 1/2 cup granulated sugar
  • 1 1/2 teaspoons vanilla extract
  • 1 large egg, at room temperature, lightly beaten
  • 6 to 7 ounces of chocolate chips

  1. Preheat oven to 350 F.
  2. Cream the butter and sugars together in a large bowl.
  3. Add the vanilla and egg; keep on mixing.
  4. Mix the dry ingredients together, then add them slowly to the large bowl of wet ingredients.
  5. Stir in the chocolate chips.
  6. If you're patient, refrigerate the dough for a couple of hours.
  7. If not, just go ahead and bake those cookies for 11-13 minutes.
  8. Distribute to sweaty workers, affected families, stressed organizers, and your own family.

P.S. Check out our work in action this week at http://IsaacGulf.Recovers.org.

Caitria O'Neill is the CEO of Recovers.org. She received a B.A. degree in government from Harvard University in 2011. She has worked for Harvard Law Review and the U.S. State Department, and brings legal, political and editorial experience to the team. O'Neill has completed the certificate programs for FEMA's National Incident Management System 700 and 800, and Incident Command Systems 100 and 200. She has also worked with Emergency Management Directors, regional hospital and public health organizations and regional Homeland Security chapters to develop partnerships and educate stakeholders about local organization and communication following disasters.

August 20 2012

13:34

How Wikipedia Manages Sources for Breaking News

Almost a year ago, I was hired by Ushahidi to work as an ethnographic researcher on a project to understand how Wikipedians managed sources during breaking news events.

Ushahidi cares a great deal about this kind of work because of a new project called SwiftRiver that seeks to collect and enable the collaborative curation of streams of data from the real-time web about a particular issue or event. If another Haiti earthquake happened, for example, would there be a way for us to filter out the irrelevant, the misinformation, and build a stream of relevant, meaningful and accurate content about what was happening for those who needed it? And on Wikipedia's side, could the same tools be used to help editors curate a stream of relevant sources as a team rather than individuals?

pakistan.png

Ranking sources

When we first started thinking about the problem of filtering the web, we naturally thought of a ranking system that would rank sources according to their reliability or veracity. The algorithm would consider a variety of variables involved in determining accuracy, as well as whether sources have been chosen, voted up or down by users in the past, and eventually be able to suggest sources according to the subject at hand. My job would be to determine what those variables are -- i.e., what were editors looking at when deciding whether or not to use a source?
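A toy version of the kind of scoring such a ranking system might have used is sketched below. The weights and inputs are invented for illustration; the research described here ultimately argued against building this kind of system at all.

```python
def naive_source_score(accuracy_history, upvotes, downvotes):
    """Toy source score: a weighted mix of past accuracy and user votes.

    accuracy_history: fraction of the source's past reports judged
    accurate, between 0 and 1. The 0.7/0.3 weights are invented for
    illustration, not drawn from the actual research.
    """
    if not 0.0 <= accuracy_history <= 1.0:
        raise ValueError("accuracy_history must be in [0, 1]")
    vote_total = upvotes + downvotes
    # With no votes yet, treat the community signal as neutral (0.5).
    vote_score = upvotes / vote_total if vote_total else 0.5
    return 0.7 * accuracy_history + 0.3 * vote_score
```

The trouble, as the findings explain, is that no number like this can capture context-dependent variables -- whether a source is primary or secondary, or notable for this particular article.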

I started the research by talking to as many people as possible. Originally I was expecting that I would be able to conduct 10 to 20 interviews as the focus of the research, finding out how those editors went about managing sources individually and collaboratively. The initial interviews enabled me to hone my interview guide. One of my key informants urged me to ask questions about sources not cited as well as those cited, leading me to one of the key findings of the report (that the citation is often not the actual source of information and is often provided in order to appease editors who may complain about sources located outside the accepted Western media sphere). But I soon realized that the editors with whom I spoke came from such a wide variety of experience, work areas and subjects that I needed to restrict my focus to a particular article in order to get a comprehensive picture of how editors were working. I chose a 2011 Egyptian revolution article on Wikipedia because I wanted a globally relevant breaking news event that would have editors from different parts of the world working together on an issue with local expertise located in a language other than English.

Using Kathy Charmaz's grounded theory method, I chose to focus on editing activity (in the form of talk pages, edits, statistics, and interviews with editors) from January 25, 2011, when the article was first created (within hours of the first protests in Tahrir Square), to February 12, when Mubarak resigned and the article changed its name from "2011 Egyptian protests" to "2011 Egyptian revolution." After reviewing big-picture analyses of the article using Wikipedia statistics (top editors, locations of anonymous editors, and so on), I started with an initial coding of the actions taking place in the text, asking the question, "What is happening here?"

I then developed a more limited codebook using the most frequent/significant codes and proceeded to compare different events with the same code (looking up relevant edits of the article in order to get the full story), and to look for tacit assumptions that the actions left out. I did all of this coding in Evernote because it seemed the easiest (and cheapest) way of importing large amounts of textual and multimedia data from the web, but it wasn't ideal: talk pages need to be reformatted when imported, and I ended up coding the data in a single column, since putting each talk-page conversation in its own cell would have been too time-consuming.

evernote.png

I then moved to writing a series of thematic notes on what I was seeing, trying to understand, through writing, what the common actions might mean. I finally moved to the report writing, bringing together what I believed were the most salient themes into a description and analysis of what was happening according to the two key questions that the study was trying to ask: How do Wikipedia editors, working together, often geographically distributed and far from where an event is taking place, piece together what is happening on the ground and then present it in a reliable way? And how could this process be improved?

Key variables

Ethnography Matters has a great post by Tricia Wang that talks about how ethnographers contribute (often invisible) value to organizations by showing what shouldn't be built, rather than necessarily improving a product that already has a host of assumptions built into it.

And so it was with this research project that I realized early on that a ranking system conceptualized this way would be inappropriate -- for the single reason that along with characteristics for determining whether a source is accurate or not (such as whether the author has a history of producing accurate news articles), a number of important variables are independent of the source itself. On Wikipedia, these include variables such as the number of secondary sources in the article (Wikipedia policy calls for editors to use a majority of secondary sources), whether the article is based on a breaking news story (in which case the majority of sources might have to be primary, eyewitness sources), or whether the source is notable in the context of the article. (Misinformation can also be relevant if it is widely reported and significant to the course of events, as Judith Miller's New York Times stories were for the Iraq War.)

nyt.png

This means that you could have an algorithm for determining how accurate the source has been in the past, but whether you make use of the source or not depends on factors relevant to the context of the article that have little to do with the reliability of the source itself.

Another key finding recommending against source ranking is that Wikipedia's authority originates from its requirement that each potentially disputed phrase is backed up by reliable sources that can be checked by readers, whereas source ranking necessarily requires that the calculation be invisible in order to prevent gaming. It is already a source of potential weakness that Wikipedia citations are not the original source of information (since editors often choose citations that will be deemed more acceptable to other editors) so further hiding how sources are chosen would disrupt this important value.

On the other hand, having editors provide a rationale for choosing particular sources, as well as showing the full variety of sources consulted rather than only those kept after loading-time constraints, may be useful -- especially since these discussions do often take place on talk pages but are practically invisible because they are difficult to find.

Wikipedians' editorial methods

Analyzing the talk pages of the 2011 Egyptian revolution article case study enabled me to understand how Wikipedia editors set about the task of discovering, choosing, verifying, summarizing, adding information and editing the article. It became clear through the rather painstaking study of hundreds of talk pages that editors were:

  1. storing discovered articles either using their own editor domains by putting relevant articles into categories or by alerting other editors to breaking news on the talk page,
  2. choosing sources by finding at least two independent sources that corroborated what was being reported but then removing some of the citations as the page became too heavy to load,
  3. verifying sources by finding sources to corroborate what was being reported, by checking what the summarized sources contained, and/or by waiting to see whether other sources corroborated what was being reported,
  4. summarizing by taking screenshots of videos and inserting captions (for multimedia) or by choosing the most important events of each day for a growing timeline (for text),
  5. adding text to the article by choosing how to reflect the source within the article's categories and providing citation information, and
  6. editing by disputing the way that editors reflected information from various sources and by replacing primary sources with secondary sources over time.

It was important to discover the work process that editors were following because any tool that assisted with source management would have to accord as closely as possible with the way that editors like to do things on Wikipedia. Since the process is managed by volunteers and because volunteers decide which tools to use, this becomes really critical to the acceptance of new tools.

sources.png

Recommendations

After developing a typology of sources and isolating different types of Wikipedia source work, I made two sets of recommendations as follows:

  1. The first would be for designers to experiment with exposing variables that are important for determining the relevance and reliability of individual sources as well as the reliability of the article as a whole.
  2. The second would be to provide a trail of documentation by replicating the work process that editors follow (somewhat haphazardly at the moment) so that each source is provided with an independent space for exposition and verification, and so that editors can collect breaking news sources collectively.

variables.png

Regarding a ranking system for sources, I'd argue that a descriptive repository of major media sources from different countries would be incredibly beneficial, but that a system for determining which sources are ranked highest according to usage would yield really limited results. (We know, for example, that the BBC is the most used source on Wikipedia by a high margin, but that doesn't necessarily help editors in choosing a source for a breaking news story.) Exposing the variables used to determine relevancy (rather than adding them up in invisible amounts to come up with a magical number) and showing the progression of sources over time offers some opportunities for innovation. But this requires developers to think out of the box in terms of what sources (beyond static texts) look like, where such sources and expertise are located, and how trust is garnered in the age of Twitter. The full report provides details of the recommendations and the findings and will be available soon.

Just the beginning

This is my first comprehensive ethnographic project, and one thing I've noticed compared with other design and research projects using different methodologies is that getting close to the experience of editors is extremely valuable work that is rare in Wikipedia research, even though the process can seem painstaking and it can prove difficult to turn hundreds of small observations into findings that are actionable and meaningful to designers. I realize now that, until I actually studied an article in detail, I knew very little about how Wikipedia works in practice. And this is only the beginning!

Heather Ford is a budding ethnographer who studies how online communities get together to learn, play and deliberate. She currently works for Ushahidi and is studying how online communities like Wikipedia work together to verify information collected from the web and how new technology might be designed to help them do this better. Heather recently graduated from the UC Berkeley iSchool where she studied the social life of information in schools, educational privacy and Africans on Wikipedia. She is a former Wikimedia Foundation Advisory Board member and the former Executive Director of iCommons - an international organization started by Creative Commons to connect the open education, access to knowledge, free software, open access publishing and free culture communities around the world. She was a co-founder of Creative Commons South Africa and of the South African nonprofit, The African Commons Project as well as a community-building initiative called the GeekRetreat - bringing together South Africa's top web entrepreneurs to talk about how to make the local Internet better. At night she dreams about writing books and finding time to draw.

This article also appeared at Ushahidi.com and Ethnography Matters. Get the full report at Scribd.com.

August 17 2012

14:00

Next Knight News Challenge Calls for Mobile Visionaries

The Knight Foundation, which now offers three rounds of its News Challenge instead of one competition per year, just announced the theme of its next contest: mobile. This round focuses on funding innovators who are using mobile to change the face of the media industry.

iphone sky.jpg

Considerable growth in mobile Internet usage over the past few years has meant the way in which people consume news is undoubtedly shifting -- so it's not much of a surprise that mobile would be the theme of one of this year's rounds. In fact, several mobile players have already been the recipients of past News Challenge awards -- think MobileActive, FrontlineSMS, as well as Watchup, Behavio and Peepol.tv, which were winners of the round on networks.

"We know that we (and our kids) have grown attached to our mobile devices," Knight's John Bracken and Christopher Sopher wrote in a blog post announcing the round, "but we have less clarity about the ways people are using them, or might use them, as citizens, content producers and consumers to tell, share and receive stories."

move over, data

The announcement of the next theme comes as round 2, which focuses on data, moves onto the next stage. The round is now closed for submissions, and Knight's team of advisers has selected 16 finalists. They'll be doing interviews and video chats with the finalists over the next couple of weeks. Winners of the data round will be announced in September.

"We've focused the News Challenge this year on big opportunities in news and information -- networks, data and now mobile," Bracken and Sopher wrote in their post. "In some ways, mobile represents both the greatest need and greatest potential for individual citizens and news organizations."

The mobile round will be open to applicants starting on August 29, and Knight will accept entries until September 10.

August 16 2012

14:00

Did Global Voices Use Diverse Sources on Twitter for Arab Spring Coverage?

Citizen journalism and social media have become major sources for the news, especially after the Arab uprisings of early 2011. From Al Jazeera Stream and NPR's Andy Carvin to the Guardian's "Three Pigs" advertisement, news organizations recognize that journalism is just one part of a broader ecosystem of online conversation. At the most basic level, journalists are following social media for breaking news and citizen perspectives. As a result, designers are rushing to build systems like Ushahidi's SwiftRiver to filter and verify citizen media.

Audience analytics and source verification only paint part of the picture. While upcoming technologies will help newsrooms understand their readers and better use citizen sources, we remain blind to the way the news is used in turn by citizen sources to gain credibility and spread ideas. That's a loss for two reasons. First, it opens newsrooms up to embarrassing forms of media manipulation. Second, and more importantly, we're analytically blind to one of bloggers' and citizen journalists' greatest incentives: attention.

Re-imagining media representation

For my MIT Media Lab master's thesis, I'm trying to re-imagine how we think about media representation in online media ecosystems. Over the next year, my main focus will be gender in the media. But this summer, for a talk at the Global Voices Summit in Nairobi, I developed a visualization of media representation in Global Voices, which has been reporting on citizen media far longer than most news organizations.

(I'm hoping the following analysis of Global Voices convinces you that tracking media representation is exciting and important. If your news organization is interested in developing these kinds of metrics, or if you're a Global Voices editor trying to understand whose voices you amplify, I would love to hear from you. Contact me on Twitter at @natematias or at natematias@gmail.com.)

Media Representation in Global Voices: Egypt and Libya

My starting questions were simple: Whose voices (from Twitter) were most cited in Global Voices' coverage of the Arab uprisings, and how diverse were those voices? Was Global Voices just amplifying the ideas of a few people, or were they including a broad range of perspectives? Global Voices was generous enough to share its entire English archive going back to 2004, and I built a data visualization tool for exploring those questions across time and sections:

globalvoices.jpg

Let's start with Egypt. (Click to load the Egypt visualization.) Global Voices has been covering Egypt since its early days. The first major spike in coverage occurred in February 2007 when blogger Kareem Amer was sentenced to prison for things he said on his blog. The next spike in coverage, in February 2009, occurred in response to the Cairo bombing. The largest spike in Egypt coverage starts at the end of January 2011 in response to protests in Tahrir Square and is sustained over the next few weeks. Notice that while Global Voices did quote Twitter from time to time (citing 68 unique Twitter accounts the week of the Cairo bombing), the diversity of Twitter citation grew dramatically during the Egyptian uprising -- and actually remained consistently higher thereafter.

Tracking twitter citations

Why was Global Voices citing Twitter? By sorting articles by Twitter citation in my visualization, it's possible to look at the posts that cite the greatest number of unique Twitter accounts. Some posts reported breaking news from Tahrir, quoting sources from Twitter. Others reported on viral political hashtag jokes, a popular format for Global Voices posts. Not all posts cited Egyptian sources. This post on the global response to the Egyptian uprising shares tweets from around the world.

twitteraccounts.jpg

By tracking Twitter citation in Global Voices, we're also able to ask: Whose voices was Global Voices amplifying? Citation in blogs and the news can give a source exposure, credibility, and a growing audience.
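A rough sketch of how this kind of citation tracking might start is below: extract @handles from post text and count how often each account is cited. A real pipeline would also resolve twitter.com links and handle renamed accounts; the sample posts are invented for illustration.

```python
import re
from collections import Counter

# Twitter handles: 1-15 characters of letters, digits, or underscores.
HANDLE = re.compile(r"@([A-Za-z0-9_]{1,15})")

def citation_counts(posts):
    """Count how often each Twitter handle is cited across post texts.

    Handles are lowercased so that @Alaa and @alaa count as one account.
    """
    counts = Counter()
    for text in posts:
        counts.update(h.lower() for h in HANDLE.findall(text))
    return counts

posts = [
    "@alaa reports from Tahrir; @sultanalqassemi translates.",
    "Via @alaa: crowds growing.",
]
counts = citation_counts(posts)
# counts["alaa"] == 2 and counts["sultanalqassemi"] == 1
```

From counts like these you can read off both source popularity (the most cited accounts) and source diversity (how many unique accounts a section cites).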

In the Egypt section, the most cited Twitter source was Alaa Abd El Fattah, an Egyptian blogger, software developer, and activist. One of the last times he was cited in Global Voices was in reference to his month-long imprisonment in November 2011.

Although Alaa is prominent, Global Voices relied on hundreds of other sources. The Egypt section cites 1,646 Twitter accounts, and @alaa himself appears alongside 368 other accounts.

One of those accounts is that of Sultan al-Qassemi, who lives in Sharjah in the UAE and who translated Arabic tweets into English throughout the Arab uprisings. @sultanalqassemi is the fourth most cited account in Global Voices' Egypt section, but those 28 posts are only part of the 65 posts across the site where he is mentioned. This is very different from Alaa, who is cited primarily within the Egypt section.

sultan.jpg

Let's look at other sections where Sultan al-Qassemi is cited in Global Voices. Consider, for example, the Libya section, where he appears in 18 posts. (Click to load the Libya visualization.) Qassemi is cited exactly the same number of times as the account @ChangeInLibya, a more Libya-focused Twitter account. Here, non-Libyan voices have been more prominent: Three out of the five most cited Twitter accounts (Sultan al-Qassemi, NPR's Andy Carvin, and the Dubai-based Iyad El-Baghdadi) aren't Libyan accounts. Nevertheless, all three of those accounts were providing useful information: Qassemi reported on sources in Libya; Andy Carvin was quoting and retweeting other sources, and El-Baghdadi was creating situation maps and posting them online. With Libya's Internet mostly shut down from March to August, it's unsurprising to see more outside commentary than we saw in the Egypt section.

globalvoiceslibya.jpg

Where Do We Go From Here?

This very simple demo shows the power of tracking source diversity, source popularity, and the breadth of topics that a single source is quoted on. I'm excited about taking the project further, to look at:

  • Comparing sources used by different media outlets
  • Auto-following sources quoted by a publication, as a way for journalists to find experts, and for audiences to connect with voices mentioned in the media
  • Tracking and detecting media manipulators
  • Developing metrics for source diversity, and developing tools to help journalists find the right variety of sources
  • Journalist and news bias detection, through source analysis
  • Comparing the effectiveness of closed source databases like the Public Insight Network and Help a Reporter Out to open ecosystems like Twitter, Facebook, and online comments. Do source databases genuinely broaden the conversation, or are they just a faster pipeline for PR machines?
  • Tracking the role of media exposure on the popularity and readership of social media accounts
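One way to start on the source-diversity metrics mentioned above is Shannon entropy over an outlet's citation counts; this is only a sketch, and the outlet names and counts are made up for illustration.

```python
import math
from collections import Counter

def source_diversity(citation_counts):
    """Shannon entropy (in bits) of an outlet's source distribution.

    0.0 means every citation goes to one source; higher values mean
    citations are spread more evenly across many sources.
    """
    total = sum(citation_counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in citation_counts.values())

# Hypothetical outlets: one leaning heavily on a single star source,
# one spreading citations evenly across four sources.
concentrated = Counter({"@star": 97, "@other1": 2, "@other2": 1})
balanced = Counter({"@a": 25, "@b": 25, "@c": 25, "@d": 25})

print(round(source_diversity(concentrated), 2))  # 0.22
print(round(source_diversity(balanced), 2))      # 2.0 (= log2 of 4 sources)
```

A metric like this could be tracked over time or compared across outlets; it says nothing about *which* voices are cited, so it would complement, not replace, qualitative review.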

Still Interested?

I'm sure you can think of another dozen ideas. If you're interested in continuing the conversation, try out my Global Voices Twitter Citation Viewer (tutorial here), add a comment below, and email me at natematias@gmail.com.

Nathan develops technologies for media analytics, community information, and creative learning at the MIT Center for Civic Media, where he is a Research Assistant. Before MIT, Nathan worked in UK startups, developing technologies used by millions of people worldwide. He also helped start the Ministry of Stories, a creative writing center in East London. Nathan was a Davies-Jackson Scholar at the University of Cambridge from 2006-2008.

This post originally appeared on the MIT Center for Civic Media blog.

August 14 2012

14:00

What's Next for Ushahidi and Its Platform?

This is part 2 in a series. In part 1, I talked about how we think of ourselves at Ushahidi and how we think of success in our world. It set up the context for this post, which is about where we're going next as an organization and with our platform.

We realize that it's hard to understand just how much is going on within the Ushahidi team unless you're in it. I'll try to give a summarized overview, and will answer any questions through the comments if you need more info on any of them.

The External Projects Team

Ushahidi's primary source of income is private foundation grant funding (Omidyar Network, Hivos, MacArthur, Google, Cisco, Knight, Rockefeller, Ford), and we don't take public funding from any country, so that we can more easily maintain our neutrality. Last year, we embarked on a strategy to diversify our revenue stream, endeavoring to decrease the percentage of revenue based on grant funding and offset it with earned revenue from client projects. This turned out to be very hard to do within our team structure at the time: the development team ended up being pulled off platform-side work, and client-side work suffered for it. Many internal deadlines were missed, and we found ourselves unable to respond to the community as quickly as we wanted.

This year we split out an "external projects team" made up of some of the top Ushahidi deployers in the world, and their first priority is to deal with client and consulting work, followed by dev community needs. We're six months into this strategy, and it seems like this team format will continue to work and grow. Last year, 20% of our revenue was earned; this year we'd like to get that to the 30-40% range.

Re-envisioning Crowdmap

When anyone joins the Ushahidi team, we tend to send them off to some conference to speak about Ushahidi in the first few weeks. There's nothing like knowing that you're going to be onstage talking about your new company to galvanize you into really learning about and understanding everything about the organization. Basically, we want you to understand Ushahidi and be on the same mission with us. If you are, you might explain what we do in a different way than I do onstage or in front of a camera, but you'll get the right message out regardless.

crowdmap-screenshot-mobile-397x500.png

You have a lot of autonomy within your area of work, or so we always claimed internally. That claim was tested earlier this year, when David Kobia, Juliana Rotich, and I, as founders, were forced to ask whether we were serious about it or just paying it lip service. Brian Herbert leads the Crowdmap team, which in our world means he's in charge of the overall architecture, strategy, and implementation of the product.

The Crowdmap team met up in person earlier this year and hatched a new product plan. They re-envisioned what Crowdmap could be, started mocking up the site, and began building what would be a new Crowdmap, a complete branch off the core platform. I heard this was underway, but didn't get a brief on it until about six weeks in. When I heard what they had planned, and got a complete walk-through by Brian, I was floored. What I was looking at was so different from the original Ushahidi, and thus what we have currently as Crowdmap, that I couldn't align the two in my mind.

My initial reaction was to shut it down. Fortunately, I was in the middle of a 7-hour drive between L.A. and San Francisco, so that gave me ample time to think by myself before making any snap judgments. More importantly, it also gave me time to call up David and talk it through with him. Later that week, Juliana, David, and I had a chat. It was at that point that we realized that, as founders, we might have blinders of our own. Could we be stuck in our own 2008 paradigm? Should we trust our team to set the vision for a product? Did the product answer the questions that guide us?

The answer was yes.

The team has done an incredible job of thinking deeply about Crowdmap users, then translating that usage into a complete redesign that is both beautiful and functional. It's user-centric, as opposed to map-centric, which is the greatest change. But after getting past our initial sense of unfamiliarity, we are confident that this is what we need to do. We need to experiment and disrupt ourselves -- after all, if we aren't willing to take risks and try new things, then we fall into the same trap as those we disrupted.

A New Ushahidi

For about a year we've been asking ourselves, "If we rebuilt Ushahidi, with all we know now, what would it look like?"

To redesign, re-architect, and rebuild any platform is a huge undertaking. Usually this means part of the team is left to maintain and support the older code while the others build the shiny new thing. It means that while you're spending months and months building the new thing, you appear stagnant and less responsive to the market. It means that you might get it wrong, and what you build may be irrelevant by the time it's launched.

Finally, after many months of internal debate, we decided to go down this path. We've started with a battery of interviews with users, volunteer developers, deployers, and internal team members. Heather Leson's recent blog post on our design direction, published this last week, shows where we're going. Ushahidi v3 is the complete redesign of Ushahidi's core platform, from the first line of code to the last HTML tag. The front-end is mobile web-focused out of the gate, and the backend admin area is about streamlining the publishing and verification process.

At Ushahidi we are still building, theming, and using Ushahidi v2.x, and will continue to do so for a long time. This idea of a v3 is just vaporware until we actually decide to build it, but the exercise has already borne fruit because it forced us to ask what the platform might look like if we weren't constrained by the legacy structure we had built. We'd love to get more input from everyone as we go forward.

SwiftRiver in Beta

After a couple of fits and starts, SwiftRiver is now being tried out by 500-plus beta testers. It's 75% of the way to completion but usable, so it's out, and we're getting feedback from everyone on what needs to be changed, added, and removed to make it the tool we all need for managing large amounts of data. It's an expensive, server-intensive platform to run, so those who use it on our servers will have to pay for that use in the future. As always, the core code will be made available, free and open source, for those who would like to set it up and run it on their own.

In Summary

The amount of change that Ushahidi is undertaking internally is truly breathtaking to us. We're cognizant of just how much we're putting on the line. However, we know this: in our world of technology, those who don't disrupt themselves will themselves be disrupted. In short, we'd rather go all-in to make this change happen ourselves than be mired in stagnancy and defensive activity.

As always, this doesn't happen in a vacuum for Ushahidi. We've relied on those of you who are the coders and deployers to help us guide the platforms for over four years. Many of you have been a part of one of these product rethinks. If you aren't already, and would like to be, get in touch with me or Heather to help us re-envision and build the future.

Raised in Kenya and Sudan, Erik Hersman is a technologist and blogger who lives in Nairobi. He is a co-founder of Ushahidi, a free and open-source platform for crowdsourcing information and visualizing data. He is the founder of AfriGadget, a multi-author site that showcases stories of African inventions and ingenuity, and an African technology blogger at WhiteAfrican.com. He currently manages Ushahidi's operations and strategy, and is in charge of the iHub, Nairobi's Innovation Hub for the technology community, bringing together entrepreneurs, hackers, designers, and the investment community. Erik is a TED Senior Fellow, a PopTech Fellow and speaker, and an organizer for Maker Faire Africa. You can find him on Twitter at @WhiteAfrican.

This post originally appeared on Ushahidi's blog.

August 13 2012

14:00

Movement-Based Arts Inspire Public Lab's DIY Environmental Science

Researchers at Public Laboratory pursue environmental justice creatively, through re-imagining our relationship with the environment. Our model is to rigorously ask oddball questions, then initiate research by designing or adapting locally accessible tools and methods to collect the data we need to answer those questions.

We've found, perhaps not surprisingly, that innovation in tools and methods frequently emerges from creative practices. In the larger trend of art plus science collaboration, 2D graphics, illustration, and visualization get most of the glory. But sculpture and dance are also major drivers of environmental imagination -- and therefore scientific inquiry.

taking back the production of research supplies

publiclab.jpg

In early July, approximately 25 people gathered in the cool interior of the 600,000-square-foot Pfizer building to design and build kites and balloons. The event was led by a sculptor, Mathew Lippincott, one of the co-founders of Public Laboratory. From his workshop in Portland, Ore., he's been researching the performance of Tyvek and bamboo, as well as ultra-lightweight plastic coated with iron oxide powder that heats itself in the sun. Because community researchers around the world use commercially produced kites and balloons to lift payloads (such as visible and infrared cameras, air quality sensors, and grab samplers) high into the air, this is part of a mission-critical initiative to put the production of research supplies back into the hands of local communities.

publiclab2.jpg

dancers and scientists collaborate

publiclab3.jpg

What you may not be expecting to hear is that half of the workshop attendees were dancers or choreographers, organized by Lailye Weidman and Jessica Einhorn, two fellows of iLAND, an organization dedicated to collaboration between dancers and scientists. Inspired by embodied investigations into atmospheric pressure and dynamics, these dancers joined the sculptors to drive forward a research agenda into the little-understood urban wind condition. Other attendees included engineers, theater artists, design students, landscape architects, and urban foresters. The group spent the weekend splitting bamboo, heat-seaming painter's plastic to build a solar-heated balloon large enough to lift a person, and learning about aerodynamics by attempting to fly their creations.

This work on the replicability (ease of making) and autonomy (easily procurable materials) of DIY aerial platforms -- directed by the aesthetic and embodied sense of sculptors and dancers -- has increased the ability of non-professional scientists to ask and answer their questions about their environment.
