May 29 2013

10:38

Join the Zeega Makers Challenge for 'The Making Of...Live at SFMOMA'


In 24 hours, Zeegas -- a new form of interactive media -- will be installed on four projection screens at San Francisco's renowned Museum of Modern Art. This showcase is part of "The Making Of..." -- a collaboration between award-winning NPR producers the Kitchen Sisters, KQED, AIR's Localore, the Zeega community and many others.

Join in this collaborative media experiment and make Zeegas for SFMOMA. To participate, log in to Zeega and create something for the exhibition. To make the simplest Zeega possible, just combine an animated GIF and a song. And if you want to do more, go wild.

You can contribute from anywhere in the world. The deadline is midnight EST on Wednesday.

make a zeega

If you've never made a Zeega, worry not: It's super-easy. You can quickly combine audio, images, animated GIFs, text and video from across the web. Zeegas come in all shapes and sizes, from GIFs accompanied by a maker's favorite song to a haunting photo story about a Nevada ghost town to an interactive video roulette.

The Zeega exhibition is one piece of "The Making Of...Live at SFMOMA." As SFMOMA closes for two years of renovation and expansion, over 100 makers from throughout the region will gather to share their skills and crafts and tell their stories.

For the event, there will be two live performances of Zeegas and the "Web Documentary Manifesto," and there will also be a session with Roman Mars ("99% Invisible"), The Kitchen Sisters, AIR's Sue Schardt talking about Localore, and other storytelling gatherings throughout the festivities. For the full program, click here.

Jesse Shapins is a media entrepreneur, cultural theorist and urban artist. He is Co-Founder/CEO of Zeega, a platform revolutionizing interactive storytelling for an immersive future. For the past decade, he has been a leader in innovating new models of web and mobile publishing, with his work featured in Wired, The New York Times, Boing Boing and other venues. His artistic practice focuses on mapping the imagination and perception of place between physical, virtual and social space. His work has been cited in books such as The Sentient City, Networked Locality and Ambient Commons, and exhibited at MoMA, Deutsches Architektur Zentrum and the Carpenter Center for the Visual Arts, among other venues. He was Co-Founder of metaLAB (at) Harvard, a research unit at the Berkman Center for Internet and Society, and served on the faculty of architecture at the Harvard Graduate School of Design, where he invented courses such as The Mixed-Reality City and Media Archaeology of Place.

May 23 2013

11:00

The Crowd and the Mob: Opportunities, Cautions for Constant Video Surveillance

Recent events in Boston highlight both the potential and hazards of ever-present cameras. In the hours following the April 15 bombing, law enforcement agencies called upon commercial businesses and the public to submit relevant footage from surveillance cameras and mobile devices. While the tsunami of crowdsourced data threatened to overwhelm servers and analysts, it provided clues that ultimately led to identifying the perpetrators. It also led to false identifications and harassment of innocent bystanders.


Use of surveillance video to solve large-scale crimes first came to attention in the 2005 London subway bombings. In part due to its history of violent attacks by the IRA, London had invested heavily in closed-circuit television (CCTV) technology and had installed nearly 6,000 cameras in the underground system. In the days before smartphones, these publicly installed cameras were the most reliable source of video evidence, and law enforcement was able to identify the bombers using this footage.

With the advent of low-cost cameras and video recorders in smartphones, witnesses to events soon had a powerful tool to contribute to the law enforcement toolbox. Couple this technical capacity with the proliferation of social-networking platforms and the possibilities for rapid identification -- as well as the spread of misinformation -- become clear.

Vancouver police were overwhelmed with evidence from social media after the Stanley Cup riot in June 2011. This instance also highlighted the need for two things: stronger means of verification, since a number of photos were retouched or falsified, and protections against vigilantism or harassment of unofficial suspects.

authenticating digital images

Several projects currently in development address the need for a reliable system to authenticate digital images. In addition to a growing number of commercial companies specializing in audio and video forensic analysis, academic and non-profit labs are developing tools for this purpose. Informacam, a project of WITNESS and The Guardian Project, will strengthen metadata standards, and the Rashomon Project at UC Berkeley will aggregate and synchronize multiple videos of a given event. (Disclosure: The Rashomon Project is a project of the CITRIS Data and Democracy Initiative, which I direct.) These tactics, among others, will bolster the use of video evidence for criminal investigations and prosecutions.


Despite the clear advantages of drawing on crowdsourced footage for solving crimes, civil liberties groups and privacy advocates have warned about the dangers of perpetual surveillance. We saw in the Boston case the liability inherent in the ease and speed of circulating false claims and images. The New York Post published a front-page photo of two young men mistakenly identified as suspects, and the family of another young man, who had been missing for several weeks, was tormented by media seeking stories about the misplaced suspicion fueled by Reddit, an online social media aggregator.

surveillance vs. crime prevention

In addition to facilitating the "wisdom of crowds," technology grows more sophisticated for automated surveillance, including face recognition and gait analysis. In the last decade, many cities have accelerated implementation of surveillance systems, capitalizing on advances in computer technology and funds available from the Department of Homeland Security and other public sources. Yet whether considering fixed cameras or citizen footage, the effectiveness of surveillance for crime prevention is mixed. A 2009 CITRIS study found that San Francisco's installation of cameras in high-risk neighborhoods led to decreases in property crime but apparently had little effect on violent crime. If anything, perpetrators learned to evade the cameras, and crimes were displaced into neighboring areas or private spaces.

In open societies, technological advances should spark new discussions about ethics and protocol for their implementation. Communities, both online and in-person, have an opportunity to debate the benefits and costs of video evidence in the context of social-networking platforms. While their enthusiasm must be tempered by regard for due process, armchair investigators should be encouraged to work in partnership with public agencies charged with ensuring public safety.

Camille Crittenden is Deputy Director of CITRIS, based at UC Berkeley, where she also directs the Data and Democracy Initiative. Prior to this appointment, she served as Executive Director of the Human Rights Center at Berkeley Law, where she was responsible for overall administration of the Center, including fundraising, communications, and outreach, and developed its program in human rights, technology, and new media. She held previous positions as Assistant Dean for Development in the division of International and Area Studies at UC Berkeley and in development and public relations at University of California Press and San Francisco Opera. She holds a Ph.D. from Duke University.

Image of surveillance camera courtesy of Flickr user jonathan mcintosh.

May 22 2013

19:31

Want an Affordable Infrared Camera? Give to Public Lab's 'Infragram' Project on Kickstarter

This post was co-written by Public Lab organizer Don Blair.

Public Lab is pleased to announce the launch of our fourth Kickstarter today, "Infragram: the Infrared Photography Project." The idea for the Infragram was originally conceived during the BP oil spill in the Gulf of Mexico as a tool for monitoring wetland damage. Since then, the concept has been refined to offer an affordable and powerful tool for farmers, gardeners, artists, naturalists, teachers and makers for as little as $35 -- whereas near-infrared cameras typically cost $500-$1,200.

Technologies such as the Infragram play a role similar to that of photography during the rise of credible print journalism -- they democratize and improve reporting about environmental impacts. The Infragram in particular will allow regular people to monitor their environment through verifiable, quantifiable, citizen-generated data. You can now participate in a growing community of practitioners who are experimenting with and developing low-cost near-infrared technology by backing the Infragram Project and joining the Public Lab infrared listserv.


Some background

Infrared imagery has a long history of use by organizations like NASA to assess the health and productivity of vegetation via sophisticated satellite imaging systems like Landsat. It has also been applied on the ground in recent years by large farming operations. By mounting an infrared imaging system on a plane, helicopter, or tractor, or carrying around a handheld device, farmers can collect information about the health of crops, allowing them to make better decisions about how much fertilizer to add, and where. But satellites, planes, and helicopters are very expensive platforms, and even the tractor-based and handheld devices for generating such imagery typically cost thousands of dollars. Further, the analysis software that accompanies many of these devices is "closed source"; the precise algorithms used -- which researchers would often like to tweak and improve upon -- are often not disclosed.


Public Lab's approach

So, members of the Public Lab community set out to see whether it was possible to make a low-cost, accessible, fully open platform for capturing infrared imagery useful for vegetation analysis. Using the insights and experience of a wide array of community members -- from farmers and computer geeks to NASA-affiliated researchers -- a set of working prototypes for infrared image capture started to emerge. By now, the Public Lab mailing lists and website contain hundreds of messages, research notes, and wikis detailing various tools and techniques for infrared photography, ranging from detailed guides to DIY infrared retrofitting of digital SLRs, to extremely simple and low-cost off-the-shelf filters selected through a collective process of testing and reporting back to the community.

All of the related discussions, how-to guides, image examples, and hardware designs are freely available, published under Creative Commons and CERN Open Hardware licensing. There are already some great examples of beautiful NDVI/near-infrared photography by Public Lab members -- including timelapses of flowers blooming, and balloon-based infrared imagery that quickly reveals which low-till methods are better at facilitating crop growth.
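
The "NDVI" mentioned above is the Normalized Difference Vegetation Index, computed per pixel as (NIR - Red) / (NIR + Red); because healthy vegetation reflects far more near-infrared light than red light, higher values suggest healthier plants. As a rough illustration only (this is not Public Lab's actual tooling, and the file names are made up), here is a minimal Python sketch that computes NDVI from two grayscale images assumed to hold the near-infrared and red bands:

# Minimal NDVI sketch (illustrative only; not Public Lab's actual pipeline).
# Assumes two grayscale images holding the near-infrared and red bands.
import numpy as np
from PIL import Image

def compute_ndvi(nir_path, red_path):
    nir = np.asarray(Image.open(nir_path).convert("L"), dtype=np.float64)
    red = np.asarray(Image.open(red_path).convert("L"), dtype=np.float64)
    denom = nir + red
    denom[denom == 0] = 1e-6  # avoid division by zero on fully dark pixels
    return (nir - red) / denom  # values near +1 suggest healthy vegetation

if __name__ == "__main__":
    ndvi = compute_ndvi("nir_band.png", "red_band.png")  # hypothetical file names
    print("NDVI range:", ndvi.min(), "to", ndvi.max())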


What's next

By now, the level of interest and experience around DIY infrared photography in the Public Lab community has reached a tipping point, and Public Lab has decided to use a Kickstarter as a way of disseminating the ideas and techniques around this tool to a wider audience, expanding the community of users/hackers/developers/practitioners. It's also a way of generating support for the development of a sophisticated, online, open-source infrared image analysis service, allowing anyone who has captured infrared images to "develop" them and analyze them according to various useful metrics, as well as easily tag them and share them with the wider community. The hope is that by raising awareness (and by garnering supporting funds), Public Lab can really push the "Infrared Photography Project" forward at a rapid pace.

Accordingly, we've set ourselves a Kickstarter goal of 5,000 "backers" -- we're very excited about the new applications and ideas that this large number of new community members would bring! And, equally exciting: The John S. and James L. Knight Foundation has offered to provide a matching $10,000 of support to the Public Lab non-profit if we reach 1,000 backers.

With this growing, diverse community of infrared photography researchers and practitioners -- from professional scientists, to citizen scientists, to geek gardeners -- we're planning on developing Public Lab's "Infrared Photography Project" in many new and exciting directions, including:

  • The creation of a community of practitioners interested in infrared technology, similar to the community that has been created and continues to grow around open-source spectrometry.
  • The development of an archive for the Infrared Photography Project -- a platform that will allow people to contribute images and collaborate on projects while sharing data online.
  • Encouragement of agricultural imagery tinkering and the development and use of inexpensive, widely available near-infrared technologies.
  • Development of standards and protocols that are appropriate to the needs, uses and practices of a grassroots science community.
  • Providing communities and individuals with the ability to assess their own neighborhoods through projects that are of local importance.
  • The continued development of a set of tools that will overlap and add to the larger toolkit of community-based environmental monitoring tools such as what SpectralWorkbench.org and MapKnitter.org provide.

We hope you'll join us by contributing to the Kickstarter campaign and help grow a community of open-source infrared enthusiasts and practitioners!

A co-founder of Public Laboratory for Open Technology and Science, Shannon is based in New Orleans as the Director of Outreach and Partnerships. With a background in community organizing, prior to working with Public Lab, Shannon held a position with the Anthropology and Geography Department at Louisiana State University as a Community Researcher and Ethnographer on a study about the social impacts of the spill in coastal Louisiana communities. She was also the Oil Spill Response Director at the Louisiana Bucket Brigade, conducting projects such as the first on-the-ground health and economic impact surveying in Louisiana post-spill. Shannon has an MS in Anthropology and Nonprofit Management, a BFA in Photography and Anthropology and has worked with nonprofits for over thirteen years.

Don Blair is a doctoral candidate in the Physics Department at the University of Massachusetts Amherst, a local organizer for The Public Laboratory for Open Technology and Science, a Fellow at the National Center for Digital Government, and a co-founder of Pioneer Valley Open Science. He is committed to establishing strong and effective collaborations among citizen / civic and academic / industrial scientific communities through joint research and educational projects. Contact him at http://dwblair.github.io, or via Twitter: @donwblair

11:00

Pop Up Archive Makes Audio Searchable, Findable, Reusable

After an insane and memorable week at SXSW Interactive in Austin in March, we came away with our work cut out for us: improving Pop Up Archive so that it's a reliable place to make all kinds of audio searchable, findable and reusable. Thanks in no small part to the brilliant development team at PRX, we've come leaps and bounds since then.


what it can do

Pop Up Archive can:

  • Generate automatic transcripts and keywords so that audio is both searchable and easy to organize.
  • Provide access to an archive of sound from around the world.
  • Save time and money for producers, creators, radio stations, media organizations, and archives of all stripes.

We've been opening the site to select groups of pioneering users, and we'd love input from the community. Request an invite here.

The content creators and caretakers we're talking to have valuable digital material on their hands: raw interviews and oral histories, partial mixes of produced works, and entire series of finished pieces. They can't revisit, remix, or repackage that material -- it's stored in esoteric formats in multiple locations. And it gets lost every time a hard drive dies or a folder gets erased to make more space on a laptop.

We're hearing things like:

"Someday I'm gonna spend a month organizing all this, but I plug [hard drives] in until I find what I need."

"Imagine being able to find a sentence somewhere in your archive. That would be an amazing tool."

"Unfortunately...we don't have a good way of cleaning [tags] to know that 'Obama,' 'Mr. Obama,' and 'Barack Obama' should be just one entry."

No one wants to figure out how to save all that audio, not to mention search on anything more than filenames. Some stations and media companies maintain incredible archives, but they've got different methods for managing the madness, which don't always line up with workflows and real-world habits. Content creators rely on their memories or YouTube to find old audio, and that works to a degree. But in the meantime, lots of awesome, time-saving and revenue-generating opportunities are going to waste.
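
The tag-cleaning problem quoted above (folding 'Obama,' 'Mr. Obama,' and 'Barack Obama' into one entry) is, at its simplest, an alias-mapping exercise. Here is a small, purely hypothetical Python sketch of that idea; it is not Pop Up Archive's actual approach:

# Hypothetical tag normalization sketch (not Pop Up Archive's code).
# Folds variant spellings of the same entity into one canonical tag.
CANONICAL = {
    "obama": "Barack Obama",
    "mr. obama": "Barack Obama",
    "barack obama": "Barack Obama",
}

def normalize_tags(tags):
    cleaned = set()
    for tag in tags:
        key = tag.strip().lower()
        cleaned.add(CANONICAL.get(key, tag.strip()))
    return sorted(cleaned)

print(normalize_tags(["Obama", "Mr. Obama", "Barack Obama", "NPR"]))
# -> ['Barack Obama', 'NPR']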

Want a taste from the archive? Let Nikki Silva tell you about "War and Separation," one of the first pieces The Kitchen Sisters produced for NPR in the early 1980s.

Read more in the press release.

Before arriving in California, Anne Wootton lived in France and managed a historic newspaper digitization project at Brown University. Anne came to the UC-Berkeley School of Information with an interest in digital archives and the sociology of technology. She spent summer 2011 working with The Kitchen Sisters and grant agencies to identify preservation and access opportunities for independent radio. She holds a Master's degree in Information Management and Systems.

May 15 2013

10:57

4 Lessons for Journalism Students from the Digital Edge

This past semester, I flew a drone. I helped set up a virtual reality environment. And I helped print a cup out of thin air.

Nice work if you can get it.

Working as a research assistant to Dan Pacheco at the Peter A. Horvitz Endowed Chair for Journalism Innovation at the S.I. Newhouse School of Public Communications at Syracuse University, I helped run the Digital Edge Journalism Series in the spring semester. We held a series of four programs that highlighted the cutting edge of journalism technology. Pacheco ran a session about drones in media; we had Dan Schultz from the MIT Media Lab talk about hacking journalism; we hosted Nonny de la Peña and her immersive journalism experience, and we had a 3D printer in our office, on loan from the Syracuse University ITS department, showing what can be made.

For someone who spent 10 years in traditional media as a newspaper reporter, it was an eye-opening semester. Here are some of the lessons I learned after spending a semester on the digital edge. Maybe they can be useful for you as you navigate the new media waters.

1. The future is here

During our 3D printer session, as we watched a small globe and base print from almost out of thin air, I turned to Pacheco and said, "This is the Jetsons. We're living the Jetsons."


This stuff is all real. It sounds obvious to say, but in a way, it's an important thing to remember. Drones, virtual reality, 3D printing all sound like stuff straight out of science fiction. But they're here. And they're being used. More saliently, the barrier to entry for these technologies is not as high as you'd think. You can fly a drone using an iPad. The coding used to create real-time fact-checking programs is accessible. 3D printers are becoming cheaper and more commercially available. And while creating a full-room 3D immersive experience still takes a whole lot of time, money and know-how (we spent the better part of two days putting the experience together, during which I added "using a glowing wand to calibrate a $100,000 PhaseSpace Motion Capture system, then guided students through an immersive 3D documentary experience" to my skill set), you can create your own 3D world using Unity 3D software, which has a free version.

The most important thing I learned is to get into the mindset that the future is here. The tools are here, they're accessible, they can be easy and fun to learn. Instead of thinking of the future as something out there that's going to happen to you, our seminar series showed me that the future is happening right now, and it's something that we can create ourselves.

2. Get it first, ask questions later

One of the first questions we'd always get, whether it was from students, professors or professionals, was: "This is neat, but what application does it have for journalism?" It's a natural question to ask of a new technology, and one that sparked a lot of good discussions. What would a news organization use a drone for? What would a journalist do with the coding capabilities Schultz showed us? What kind of stories could be told in an immersive, virtual-reality environment? What journalistic use can a 3D printer have?

These are great questions. But questions become problems when they are used as impediments to change. The notion that a technology is only useful if there's a fully formed and tested journalistic use already in place for it is misguided. The smart strategy moving forward may be to get the new technologies and see what you can use them for. You won't know how you can use a drone in news coverage until you have one. You won't know how a 3D printer can be used in news coverage until you try it out.

There are potential uses. I worked in Binghamton, N.Y., for several years, and the city had several devastating floods. Instead of paying for an expensive helicopter to take overhead photos of the damage, maybe a drone could have been used more inexpensively and effectively (and locally). Maybe a newsroom could use a 3D printer to build models of buildings and landmarks that could be used in online videos. So when news breaks at, say, the local high school, instead of a 2D drawing, a 3D model could be used to walk the audience through the story. One student suggested that 3D printers could be used to make storyboards for entertainment media. Another suggested advertising uses, particularly at trade shows. The possibilities aren't endless, but they sure feel like it.

Like I said above, these things are already here. Media organizations can either wait to figure it out (which hasn't exactly worked out for them so far in the digital age) or they can start now. Journalism organizations have never been hubs for research and development. Maybe this is a good time to start.

3. Real questions, real issues

This new technology is exciting, and empowering. But these technologies also raise some real, serious questions that call for real, serious discussion. The use of drones is something that sounds scary to people, and understandably so. (This is why the phrase "unmanned aerial vehicle" (UAV) is being used more often. It may not be elegant, but it does avoid some of the negative connotation the word "drone" has.) It's not just the paparazzi question. With a drone, where's the line between private and public life? How invasive will the drones be? And there is something undeniably unsettling about seeing an unmanned flying object hovering near you. 3D printers raise concerns, especially now that the first 3D printed guns have been made and fired.

To ignore these questions would be to put our heads in the sand, to ignore the real-world concerns. There aren't easy answers. They're going to require an honest dialogue among users, media organizations, and the academy.

4. Reporting still rules

Technology may get the headlines. But the technology is worthless without what the old-school journalists call shoe-leather reporting. At the heart of all these projects and all these technologies is the same kind of reporting that has been at the heart of journalism for decades.

Drones can provide video we can't get anywhere else, but the pictures are meaningless without context. The heart of "hacking journalism" is truth telling, going past the spin and delivering real-time facts to our audience. An immersive journalism experience is pointless if the story, the details, and the message aren't meticulously reported. Without a deeper purpose to inform the public, a 3D printer is just a cool gadget.

It's the marriage of the two -- of old-school reporting and new-school technology -- that makes the digital edge such a powerful place to be.

Brian Moritz is a Ph.D. student at the S.I. Newhouse School of Public Communications at Syracuse University and co-editor of the Journovation Journal. A former award-winning sports reporter in Binghamton, N.Y. and Olean, N.Y., his research focuses on the evolution of journalists' routines. His writing has appeared on the Huffington Post and in the Boston Globe, Boston Herald and Fort Worth Star-Telegram. He has a master's degree from Syracuse University and a bachelor's degree from St. Bonaventure.

April 09 2013

11:00

OpenNews Revamps Code Sprints; Sheetsee.js Wins First Grant

Back at the Hacks/Hackers Media Party in Buenos Aires, I announced the creation of Code Sprints -- funding opportunities to build open-sourced tools for journalism. We used Code Sprints to fund a collaboration between WNYC in New York and KPCC in Southern California to build a parser for election night XML data that ended up being used on well over 100 sites -- it was a great collaboration to kick off the Code Sprint concept.
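
For readers wondering what an election night XML parser involves, here is a minimal Python sketch. The feed format shown is invented purely for illustration; the real WNYC/KPCC parser targets a different, real-world schema:

# Sketch of an election-night XML parser.
# The element and attribute names here are hypothetical, for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = """
<results>
  <race name="Mayor">
    <candidate name="Candidate A" votes="10234"/>
    <candidate name="Candidate B" votes="9876"/>
  </race>
</results>
"""

def parse_results(xml_text):
    races = {}
    root = ET.fromstring(xml_text)
    for race in root.findall("race"):
        races[race.get("name")] = {
            c.get("name"): int(c.get("votes"))
            for c in race.findall("candidate")
        }
    return races

print(parse_results(SAMPLE))
# {'Mayor': {'Candidate A': 10234, 'Candidate B': 9876}}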

Originally, Code Sprints were designed to work like the XML parser project: driven in concept and execution by newsrooms. While that proved great for working with WNYC, we heard from a lot of independent developers working on great tools that fit the intent of Code Sprints, but not the wording of the contract. And we heard from a lot of newsrooms that wanted to use code, but not drive development, so we rethought how Code Sprints work. Today we're excited to announce refactored Code Sprints for 2013.

Now, instead of a single way to execute a Code Sprint, there are three ways to help make Code Sprints happen:

  • As an independent developer (or team) with a great idea that you think may be able to work well in the newsroom.
  • As a newsroom with a great idea that wants help making it a reality.
  • As a newsroom looking to beta-test code that comes out of Code Sprints.

Each of these options means we can work with amazing code, news organizations, and developers and collaborate together to create lots of great open-source tools for journalism.

Code Sprint grant winner: Sheetsee.js

I always think real-world examples are better than theoreticals, so I'm also excited to announce the first grant of our revamped Code Sprints will go to Jessica Lord to develop her great Sheetsee.js library for the newsroom. Sheetsee has been on the OpenNews radar for a while -- we profiled the project in Source a number of months back, and we're thrilled to help fund its continued development.

Sheetsee was originally designed for use in the Macon, Ga., government as part of Lord's Code for America fellowship, but the intent of the project -- simple data visualizations using a spreadsheet for the backend -- has always had implications far beyond the OpenGov space. We're excited today to pair Lord with Chicago Public Media (WBEZ) to collaborate on turning Sheetsee into a kick-ass and dead-simple data journalism tool.

For WBEZ's Matt Green, Sheetsee fit the bill for a lightweight tool that could help get the reporters "around the often steep learning curve with data publishing tools." Helping to guide Lord's development to meet those needs ensures that Sheetsee becomes a tool that works at WBEZ and at other news organizations as well.

We're excited to fund Sheetsee, to work with a developer as talented as Lord, to collaborate with a news organization like WBEZ, and to relaunch Code Sprints for 2013. Onward!

Dan Sinker heads up the Knight-Mozilla News Technology Partnership for Mozilla. From 2008 to 2011 he taught in the journalism department at Columbia College Chicago where he focused on entrepreneurial journalism and the mobile web. He is the author of the popular @MayorEmanuel twitter account and is the creator of the election tracker the Chicago Mayoral Scorecard, the mobile storytelling project CellStories, and was the founding editor of the influential underground culture magazine Punk Planet until its closure in 2007. He is the editor of We Owe You Nothing: Punk Planet, the collected interviews and was a 2007-08 Knight Fellow at Stanford University.

A version of this post originally appeared on Dan Sinker's Tumblr here.

April 04 2013

10:32

Game Changer? Inside BuzzFeed's Native Ad Network

After quietly piloting the concept for months, BuzzFeed officially launched its own native ad network this March. The mechanics of the network are bizarre, yet intriguing: Participating publishers allow BuzzFeed to serve story previews on their sites which, when clicked, bring visitors to sponsored stories on BuzzFeed.com. The network, whose ads resemble real story teases, is brash and a bit risky, but it may just help publishers circumvent the abuses of today's established, banner-reliant ad network ecosystem.


The current ad network model, or indirect sales model, is a mess. It functions based on an oversupply of simple display ads and is rife with inefficiencies, opening the door for middlemen to reap profits while devaluing publisher inventory. BuzzFeed's native ad network, along with others in a similar mold, has the potential to minimize these drawbacks by giving publishers a simple, safe way to make money through indirect sales channels.

How We Got Here

The ad networks we know today came about as a result of the poor economics of the banner ad. A little history: In the early days of Internet publishing, the banner ad seemed to make sense. Just as many publishers began figuring out the Internet by taking content produced for print and slapping it on the web, they took the standard print ad format -- selling advertisers designated space on a page -- and brought it online too. Instead of selling these ads by the inch though (a measurement suitable for edition-based print publishing), digital ads were sold by the impression, or view, a better fit for the unceasing nature of online media.


Over time, the acceptance and standardization of the banner ad brought a number of side effects along with it, the most important being an incentive for publishers to pack their pages with as many banners as possible. For publishers, the decision was easy: The more banner ads they placed on a page, the more money they stood to make. So instead of running a more manageable (and more user-friendly) three or four banner ads, publishers cluttered their pages with 10, 15 or even 20 of them.

Placing ads on a page was only half the equation though; publishers still needed to sell them. As they soon found out, selling premium, above-the-fold ads was a lot easier than getting advertisers to pony up for the glut of below-the-fold, low-quality inventory. A significant percentage of ads thus went unsold, and into the void stepped ad networks. Even at a heavy discount, publishers figured, it was better to get some money from remnant inventory via ad networks as opposed to making nothing. This would prove to be a poor calculation.

The Dark Side of Ad Networks

Rather than question the logic of creating more inventory than it was possible to sell, publishers stuck with the model, growing their audiences along with their inventory and watching the original ad networks evolve into a multibillion-dollar tech industry fed largely on remnant inventory. Soon, publishers found themselves exposed to more drawbacks than they perhaps initially bargained for, and the original premise of making more money with more ads came into question.

As it grew, the indirect ecosystem not only enabled advertisers to buy publisher inventory at cheaper prices, devaluing even premium inventory, it also allowed them to buy premium publisher audiences on non-premium sites, thanks to the third-party cookie. The Atlantic's Alexis Madrigal zoomed in on this problem in a long piece about the tough economics of the online publishing industry.

"Advertisers didn't have to buy The Atlantic," he wrote. "They could buy ads on networks that had dropped a cookie on people visiting The Atlantic. They could snatch our audience right out from underneath us." The indirect system, in other words, commoditized his audience, leaving his impressions as valuable, in some ways, as those on third-rate sites.

Recognizing these and other abuses as endemic to the system, publishers today are starting to fight back. Many are trying to limit their dependency on banner ads either by cutting them out of their business completely or by constricting supply. David Payne, the chief digital officer at Gannett who oversaw a major USA Today redesign which dramatically reduced the site's supply of banners, put it this way when I spoke with him for an article for Digiday: "I think we've all proven over the last 12 years that the strategy we've been following -- to create a lot of inventory and then sell it at 95 percent off to these middlemen every day -- is not a long-term strategy."

Publishers have started looking for alternative forms of revenue to fill the gap and, so far, the hottest alternative is the native ad. Everyone from The Atlantic to Tumblr to the Washington Post to Twitter is giving it a try, and BuzzFeed, perhaps the extreme example, is all in: It sells only native ads, no banners.

BuzzFeed Susceptible to the Same Problems?

Which brings us to BuzzFeed's ad network. At this early point, it seems like the network should indeed be free of many of the abuses listed above. Its simple nature, for example, ensures that most of the value won't be siphoned out by a group of tech middlemen and will be largely shared by BuzzFeed, participating publishers and, minimally, the ad server. Participating in the network, furthermore, should not devalue publishers' existing inventory, since it will not provide advertisers access to the same inventory at cheaper prices.

BuzzFeed also claims its network steers clear of third-party cookies, the audience-snatching culprit that The Atlantic's Madrigal railed against.

"We believe the ultimate targeting is real human-to-human sharing, digital word of mouth, so we don't do third-party cookie targeting," BuzzFeed advertising executive Eric Harris told me via email. "We're not collecting individually identifiable data and will not sell any data."

The approach should help participating publishers breathe a bit easier -- and they may just want to consider demanding the same from any network they engage with, not just BuzzFeed's.

"It's cleaner; it's more straight up," said Fark.com CEO Drew Curtis of BuzzFeed's network. His site, which is one of the partners participating in the launch, embeds BuzzFeed sponsored story previews on its home page, marking them as sponsored. "I just like the fact that there's no screwing around," Curtis explained in a phone interview, "It's exactly what it appears to be, no more no less." Rates from BuzzFeed's ad network, he added, are significantly higher from other indirect channels. "Advertisers," he said, "are willing to pay for less bulls#*t."

Of course, one question participating publishers might ask themselves is why they are helping BuzzFeed profit from sponsored posts instead of selling them on their own sites. The answer might worry BuzzFeed -- at least until it can get its traffic up to the point of advertiser demand -- but if publishers decide to go that route and withdraw from the network, they may be able to pull themselves away from the bad economics that brought them into the network game in the first place.

Alex Kantrowitz covers the digital marketing side of politics for Forbes.com and PBS MediaShift. His writing has previously appeared in Fortune and the New York Times' Local Blog. Follow Alex on Twitter at @Kantrowitz.


April 03 2013

10:33

What's Holding Back Responsive Web Design? Advertising

Responsive web design -- where "one design fits all devices" -- continues to gain momentum. Dozens of responsive sites have popped up, and a recent post on Idea Lab from Journalism Accelerator outlined how and why media sites should go responsive.


But hold your horses. Despite the mounting hype, responsive websites are still far from becoming ubiquitous, and for good reason.

As much as responsive web design improves user experience and makes it easier for publishers to go cross-platform, the industry's struggle to deliver profitable ads, which began with the first big shift from print to web, remains unresolved. And in this second big shift, to a responsive web, that struggle is magnified.

It Has To Look Different

The surface-level problem that a responsive-designed website poses for advertising is that ads are typically delivered in fixed dimensions (not proportional to the size of their container) and typically sold based on exact position. Initial solutions to this issue largely focus on making ads as flexible as the web page, i.e., selling ads in packages that include different sizes to fit all sorts of devices, rather than the traditional fixed-width slots, or making ads that are themselves responsive. Ad firm ResponsiveAds, for example, has come up with various strategies for making ads adjust to different screen sizes.


These diagrams from ResponsiveAds show how display ads themselves can respond to different screen sizes.

But these approaches are not yet ideal. For example, when the Boston Globe went responsive in 2011, the site used just a few fixed-sized ads, placed in highly controlled positions that could then move around the page.


Andrés Max, a software engineer and user experience designer at Mashable, told me via email: "In the end technology (and screen resolutions) will keep evolving, so we must create ads and websites that are more adaptive than responsive."

Here, he means that ads should adapt to the medium and device instead of just responding to set resolution break-points. After all, we might also need to scale up ads for websites accessed on smart TVs.

Miranda Mulligan, the executive director of the Knight Lab at Northwestern University and part of the team that helped the Globe transition to responsive, agrees. She told me via email, "We need a smarter ad serving system that can detect viewport sizes, device capability, and they should be set up to be highly structured, with tons of associated metadata to maximize flexibility for display."


Moreover, many web ads today are rich media ads -- i.e., takeovers, video, pop-overs, etc. -- so incorporating these interactive rich ads goes beyond a flexibility in sizes. A lot of pressure is resting on designers and developers to innovate ad experiences for the future, but evolving tech tools can help clear a path for making interactive ads flexible and fluid. The arrival of HTML5 brought many helpful additions that aid in creating responsive sites in general.

"HTML5 does provide lots of room for innovation not only for responsive but for richer websites and online experiences," Max said. "For example, we will see a lot of use of the canvas concept for creating great online games and interactions."

Display Advertising Is Still Broken

In the iceberg of web advertising problems, what ads will look like on responsive sites is just the tip. According to Mulligan, a major underlying problem is still the lack of communication between publishing and advertising. The ad creation and delivery environment is infinitely complex. Publishers range from small to very large, and much of the web development code and creative visuals are made outside of the core web publishing team.

One of the problems is that there are so many moving parts and parties involved: ad networks that publishers subscribe to; ad servers that publishers own themselves; ad servers that publishers license from other companies; sales teams within large publishers; the Interactive Advertising Bureau (IAB); and more. The resulting silos make it very hard for good communication and flexible results to emerge.


The challenge of mobile advertising on responsive sites, Mulligan later said via phone, "has very little to do with the web design technique and has a lot to do with the fact that we have really complicated ways of getting revenue attached to our websites."

In other words, the display ad system is still broken. And now, the same old problem is more pronounced in responsive mobile sites, where another layer of complication is introduced.

"We have to go and talk to seven different places and say, 'you know how you used to give us creative that would've been fixed-width? What we need from you now is flexible-width,'" Mulligan said.

While responsive web design inherently may not be the source of advertising difficulties, the fact that it amplifies the existing problems is a good reason for web publishers to be cautious about going responsive. In the meantime, a paradigm shift in how web content generates revenue is still desperately needed. Instead of plunging into using responsive ads for responsive sites, perhaps everyone can get in the same room and prototype alternatives to display ads altogether.

The Boston Globe screenshots above were captured by the BuySellAds blog.

Jenny Xie is the PBS MediaShift editorial intern. Jenny is a senior at Massachusetts Institute of Technology studying architecture and management. She is a digital-media junkie fascinated by the intersection of media, design, and technology. Jenny can be found blogging for MIT Admissions, tweeting @canonind, and sharing her latest work and interests here.


March 28 2013

11:00

Fact-Checking Social Media: The Case of the Pope and the Dictator

Did Pope Francis play a major role in Argentina's Dirty War? Reporters claimed they could substantiate this allegation. They published photos of dictator Jorge Videla with a cardinal, allegedly Jorge Bergoglio, the recently elected Pope Francis. But something was wrong with these findings.

Great find, Brad: pope's connivance with dictatorship RT @delong Hugh O'Shaughnessy: Sins of the Argentine church bit.ly/XuR7k0

— Matt Seaton (@mattseaton) March 13, 2013

The buzz started just two hours after the waiting for white smoke was over. Hundreds of people, including reporters, tweeted a link to a 2011 story in The Guardian: "The Sins of the Argentinian Church."

Blogs came up with similar stories. Documentary maker Michael Moore forwarded a link to a photo of Videla with a cardinal -- allegedly, the new pope. For some newspapers, like the Dutch Volkskrant, these tweets were sufficient to break the story. "Pope sparks controversy," the newspaper wrote.


In the end, everybody had to correct their stories. Moore withdrew his tweet, The Guardian corrected the 2-year-old article in which Bergoglio was mentioned, and Volkskrant apologized for using the wrong photos.

CORRECTION: New Pope too young, photo circulating not of him giving communion to Argentine dictator Jorge Rafael Videla

— Michael Moore (@MMFlint) March 14, 2013

With the help of basic Internet research skills, this never would have happened. Let's try to debunk all four clues:

1. The Guardian article

The story came from the "Comment is free" section, the opinion corner of the newspaper.

It wasn't a factual story that was tweeted, but an opinion.

2. Enough people retweeted it

The number of retweets, by itself, does not tell much about the credibility of a story. Take, for example, a look at a fake Amber Alert that was retweeted thousands of times:


But fact checking social media starts with numbers. How many people retweeted something? From which countries? How many clicked on the link? To make an educated guess, you need tools.

Tracking links

Type the link to the story you want to investigate into Backtweets.


You see it is quoted 23 times, but those are only the latest results. Search for March 13, 2013, and March 14, 2013. If you click on "More Tweets," you can access an archive of 5,840 tweets.

Keep in mind that you will miss tweets that use a shortened link service like bit.ly. You have to investigate each possible link separately in Backtweets. That's boring work, but somebody has to do it. Only then will you see reporters who retweeted the link, like this Italian reporter:


She deleted the tweet, but with Backtweets you can still find it.

Shortening services

If you add a plus sign to the end of any bit.ly link, you will get link statistics.


The trick with "to:"

If you search Twitter for to:(name of source), any concerns that were raised will come up. Usually, followers are the first to correct false tweets. Therefore, it makes sense to search for to:mattseaton or to:MMFlint (Michael Moore) to find out whether somebody warned the source of the story. Here you see the same Italian reporter had some doubts after she posted and removed the link to the Guardian article:

did you counter factcheck this or is the source just #Horacio_Verbitsky's book? @mattseaton @delong

— Anna Masera (@annamasera) March 13, 2013

Another warning:

@mattseaton @annamasera Please fact check,Verbitsky was friend of Kirchners , accusations against Pope seem to be retaliation..

— Karen La Bretonne (@Lady_BZH) March 14, 2013

3. Blogs

Blogs broke the news, like Consortium News.

Who is behind that source? I use this little Google trick to find sources who talk about the blog but are not affiliated with it. Here's how you do that:


The writer is Robert Parry, who has a serious problem "with millions of Americans brainwashed by the waves of disinformation." His site wants to fight distortions of Fox News and "the hordes of other right-wing media outlets." The blog is mostly activism rather than journalism.

4. The pictures

Michael Moore corrected his tweet several hours after he had posted the original. Even without his correction, however, validation would have been possible. You can upload the specific photo -- in this case, the alleged photo of the pope and Videla -- to Google Images and try to find the original source:


Google now presents a list of the most popular search words associated with the image. When I tried this on the exact day the pope was presented, the words were different: "corruption," "Argentina" and "church." This indicated that the person who found the image probably typed these words into Google to find the particular image that later sparked so much controversy.

To find the first date the photo was published, or when Google first indexed it, you can go back in time. You can order Google to show you only photos older than, say, 2004:


Now you get to the original source, Getty Images. In the caption it says that Videla visited a church in Buenos Aires in 1990. The new pope isn't mentioned:


Compare this with Pope Francis' biography from the Vatican:


It says he was a spiritual director in Córdoba, 400 miles away from Buenos Aires. Sure, they have buses and trains and planes in Argentina, but still.

Another tip: Always think "video" when you see a picture. Just type some words from the event into Google's search engine. In this case, that leads to a YouTube video of the same event captured in the Getty photo.


Here's that YouTube video.

Now you see both people from the Getty image moving, and this doesn't make sense. Pope Francis was born December 17, 1936. Videla was born August 2, 1925, more than 10 years earlier. In the YouTube video, that age gap doesn't seem to be there.

We have uncovered enough reason to doubt the original claim about Pope Francis, so now it's time to go for the final check. Probably more people discovered what you just found out. So, order Google to search for fake photos:

"false" OR "falsely" OR "fake photo" "jorge videla" "jorge bergoglio"

Don't search only in English; go for Spanish and French as well. You can type the words in English and have Google translate the keywords; the hits are then translated back into English.
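
If you want to script this step, the query is just a string, and the standard google.com/search?q= URL form will carry it. Here is a small illustrative Python sketch that assembles and URL-encodes the fake-photo query used above; everything beyond the query itself is an example only:

# Illustrative sketch: building the fake-photo search query as a URL.
# The google.com/search?q= form is standard; the rest is an example only.
from urllib.parse import urlencode

keywords = '"false" OR "falsely" OR "fake photo" "jorge videla" "jorge bergoglio"'
url = "https://www.google.com/search?" + urlencode({"q": keywords})
print(url)
# The same keywords can be translated into Spanish or French and swapped in.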


The first hit leads to sources who claim that the Michael Moore photo is false:


Other keywords can be "not true," "hoax" or "blunder."

It's also a good idea to send a tweet to Storyful -- they even have a hashtag #dailydebunk.


There you have it. The Guardian amended its story from 2011 on March 14, 2013.

@lady_bzh @annamasera @delong Correction coming shortly. Verbitsky book does deal with Bergoglio, but O'Shaughnessy misreported story

— Matt Seaton (@mattseaton) March 14, 2013

Nevertheless, some newspapers broke the story afterwards, as Volkskrant did on March 15. They apologized the next day:


With some background research, this could have been avoided. Had proper fact-checking taken place, this story would not have been written in the first place.

Dutch-born Henk van Ess currently chairs the VVOJ, the Association of Investigative Journalists for the Netherlands and Belgium. Van Ess teaches Internet research and multimedia/cross-media at universities and news media in Europe. He is the founder of VVOJ Medialab and of the search engines Inside Search, Cablesearch.org and Facing Facebook. His current projects include consultancy for news websites, fact-checking of social media, and Internet research workshops.


September 05 2012

13:33

Tor Project Offers a Secure, Anonymous Journalism Toolkit

"On condition of anonymity" is one of the most important phrases in journalism. At Tor, we are working on making that more than a promise.


The good news: The Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.

The bad news: People who were used to getting away with atrocities are aware that the Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.

New digital communication means new threats

Going into journalism is a quick way to make a lot of enemies. Authoritarian regimes, corporations with less-than-stellar environmental records, criminal cartels, and other enemies of the public interest can all agree on one thing: Transparency is bad. Action to counter their activities starts with information. Reporters have long been aware that threats of violence, physical surveillance, and legal obstacles stand between them and the ability to publish. With digital communication, there are new threats and updates to old ones to consider.

Eavesdroppers can reach almost everything. We rely on third parties for our connections to the Internet and voice networks. The things you ask search engines, the websites you visit, the people you email, the people you connect to on social networks, and maps of the places you have been while carrying a mobile phone are available to anyone who can pay, hack, or threaten their way into these records. The use of this information ranges from merely creepy to harmful.

You may be disturbed to learn about the existence of a database with the foods you prefer, the medications you take, and your likely political affiliation based on the news sites you read. On the other hand, you may be willing to give this information to advertisers, insurance companies, and political campaign staff anyway. For activists and journalists, having control over information can be a matter of life and death. Contact lists, chat logs, text messages, and hacked emails have been presented to activists during interrogations by government officials. Sources have been murdered for giving information to journalists.

If a journalist does manage to publish, there is no guarantee that people in the community being written about can read the story. Censorship of material deemed offensive is widespread. This includes opposition websites, information on family planning, most foreign websites, platforms for sharing videos, and the names of officials in anything other than state-owned media. Luckily, there are people who want to help ensure access to information, and they have the technology to do it.

Improving privacy and security

Tor guards against surveillance and censorship by bouncing your communications through a volunteer network of about 3,000 relays around the world. These relays can be set up using a computer on a home connection, using a cloud provider, or through donations to people running servers.

When you start Tor, it connects to directory authorities to get a map of the relays. Then it randomly selects three relays. The result is a tunnel through the Internet that hides your location from websites and prevents your Internet service provider from learning about the sites you visit. Tor also hides this information from Tor -- no one relay has all of the information about your path through the network. We can't leak information that we never had in the first place.
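
As a practical illustration of what "using Tor" looks like from a program's point of view (an example for this article, not part of Tor itself): a locally running Tor client listens on SOCKS port 9050 by default, and other software can route its traffic through that port. The sketch below assumes Python's requests library installed with SOCKS support (pip install requests[socks]) and a Tor client running on the same machine:

# Illustrative sketch: routing an HTTP request through a local Tor client.
# Assumes Tor is running locally with its default SOCKS listener on port 9050,
# and that requests is installed with SOCKS support (PySocks).
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h resolves DNS through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url):
    return requests.get(url, proxies=TOR_PROXIES, timeout=60).text

if __name__ == "__main__":
    # check.torproject.org reports whether your request arrived via Tor.
    print(fetch_via_tor("https://check.torproject.org/")[:200])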

The Tor Browser, a version of Firefox that pops up when you are connected to the Tor network, blocks browser features that can leak information. It also includes HTTPS Everywhere, software to force a secure connection to websites that offer protection for passwords and other information sent between you and their servers.

Other privacy efforts

Tor is just one part of the solution. Other software can encrypt email, files, and the contents of entire drives -- scrambling the contents so that only people with the right password can read them. Portable operating systems like TAILS can be put on a CD or USB drive, used to connect securely to the Internet, and removed without leaving a trace. This is useful while using someone else's computer at home or in an Internet cafe.

The Guardian Project produces open-source software to protect information on mobile phones. Linux has come a long way in terms of usability, so there are entire operating systems full of audiovisual production software that can be downloaded free of charge. This is useful if sanctions prevent people from downloading copies of commercial software, or if cost is an issue.

These projects are effective. Despite well-funded efforts to block circumvention technology, hundreds of thousands of people are getting past firewalls every day. Every video of a protest that ends up on a video-sharing site or the nightly news is a victory over censorship.

There is plenty of room for optimism, but there is one more problem to discuss. Open-source security software is not always easy to use. No technology is immune to user error. The responsibility for this problem is shared by developers and end users.

The Knight Foundation is supporting work to make digital security more accessible. Usability is security: Making it easier to use software correctly keeps people safe. We are working to make TAILS easier to use. Well-written user manuals and video tutorials help high-risk users who need information about the risks and benefits of technology in order to come up with an accurate threat model. We will be producing more educational materials and will ask for feedback to make sure they are clear.

When the situation on the ground changes, we need to communicate with users to get them back online safely. We will expand our help desk, making help available in more languages. By combining the communication skills of journalists and computer security expertise of software developers, we hope to protect reporters and their sources from interference online.

You can track our progress and find out how to help at https://blog.torproject.org and https://www.torproject.org/getinvolved/volunteer.html.en.

Karen Reilly is Development Director at The Tor Project, responsible for fundraising, advocacy, general marketing, and policy outreach programs for Tor. Tor is a software and a volunteer network that enables people to circumvent censorship and guard their privacy online. She studied Government and International Politics at George Mason University.

September 04 2012

13:13

4 Tech, Social Innovations at the RNC -- And One Clever Tweet

convention digital small.jpg

TAMPA, Fla. -- For those who haven't experienced it, a national political convention in America is something like a post-apocalyptic police state crossed with the Super Bowl and an Academy Awards red carpet.

Here at the site of this year's Republican National Convention, bomb-sniffing dogs, Secret Service agents, and a tropical storm all made it hard for people to connect with each other. But social media probably made people feel more connected than ever. Twitter confirmed that more than 4 million tweets were sent during the GOP event -- a one-day record for political conventions.

But we're somewhat past the era when merely using a social media platform was considered interesting. Whether it's Twitter, Facebook, Tumblr, Foursquare or any number of other platforms or apps, people are using them. Republicans, Democrats, and Independents can agree that they like social media.

Guests in Tampa were immediately greeted by a gigantic sign that boldly stated the official hashtag: #GOP2012. Times have changed since the John McCain/Sarah Palin campaign of 2008.

The convention officials themselves were using social media: conducting interviews with media via Skype, monitoring the hashtag. But this is what we have come to expect. It's not particularly interesting.

(Note: Skype is now owned by Microsoft, my employer.)

Innovation in the shadows

Here's what stood out a bit at the GOP's big event: collaborations between some unlikely bedfellows, each serving, overtly or tacitly, to show both partners in a different light. This took place in what one might call the "shadow convention," the space outside the official proceedings of delegates, votes, and state delegation breakfast meetings, where a melange of media and tech companies hold policy briefings, interact with convention VIPs, and underwrite after-hours parties. The shadow convention, with its corporate stalwarts, got fairly innovative compared to the convention proper.

WP_000179.jpg

Here's a rundown of some innovations I saw:

1. CNN had a "CNN Grill" at the convention, as it typically does at large events like the conventions or SXSW. It serves as a combination working space for staff and full-service restaurant. Because you need a special pass just to get into the CNN Grill for a single day, it's a popular place to hang out. But CNN was also using social technology in the midst of all the hamburgers and beer: Using Skype, it created what it calls Delegate Cam, enabling people following from home to talk to the delegates casting votes inside the security perimeter.

2. Time partnered with social location service and fellow New York-based company Foursquare on an interactive map that helped conventioneers find each other. I asked Time why it thought this was an interesting experiment to deploy in Tampa. Time.com managing editor Catherine Sharick told me, "Time partnered with FourSquare for the political conventions in order to help solve a common problem: Where are people and what is happening?" Writing elsewhere, I gave it a "B" for usefulness (if I know where Time writer Mark Halperin is, what exactly should I do with that information?), but an "A" for creativity.

Time Foursquare map.png

3. Mobile short video service Tout collaborated with the Wall Street Journal to launch WSJ Worldstream, an effort by more than 2,000 global reporters who post vetted real-time videos from a special Tout iPhone app. The new video channel was launched in conjunction with the RNC. Reporters posted video interviews with delegates, protesters, and so on. Some of the videos will also be incorporated within longer online written pieces.

4. Microsoft (my employer), for its part, allowed me to use Pinterest to post real-time photos of the behind-the-scenes efforts of my colleagues. That included powering the IT infrastructure of the convention, conducting cyber-security monitoring, running Skype Studios for media and VIPs to conduct HD video interviews, and live-streaming the event on Xbox Live. Interestingly, Pinterest, as far as I can tell, was not a popular medium during the GOP convention. I'm not sure if that's significant, but I couldn't easily find many pins from the convention.

Toward the end of the convention, social media watchers knew that the Republicans had a success by the numbers -- millions of tweets, countless uses of the hashtag, photos uploaded, YouTube views of individual speeches, and so on. But that's expected now. One thing that was missing? A truly creative use of social media that involved more wit than brute force.

One Clever Tweet

There were a couple of clever uses of social media by a prominent politician during the Republican convention. That politician just happens to be a Democrat by the name of Barack Obama.

The most popular tweet during the Republican National Convention wasn't tweeted by a Republican. In a reference to the now-infamous Clint Eastwood "talking to an empty chair" speech, Obama's account tweeted three simple words: "This seat's taken." It was retweeted more than 50,000 times and favorited more than 20,000 times. More importantly, it's smart, it's art, and it's memorable.

This seat's taken. OFA.BO/c2gbfi, twitter.com/BarackObama/st...

— Barack Obama (@barackobama) August 31, 2012

Obama also hopped on the somewhat-edgy, somewhat-underground "front page of the Internet" Reddit to do something Redditors (as they're dubbed) call "Ask Me Anything." In a half-hour chat, the president took on all comers in a broad Q&A.

Heading into the Democratic National Convention in Charlotte, N.C., I'm curious to see how it compares. I'll be Pinteresting, CNN will be Skyping while they're grilling, and the WSJ will be posting short videos. What'll be the surprise there, if anything?

Mark Drapeau is the director of innovative engagement for Microsoft's public and civic sector business, headquartered in D.C. He tweets @cheeky_geeky.


13:13

LocalWiki Releases First API, Enabling Innovative Apps

We're excited to announce that the first version of the LocalWiki API has just been released!

What's this mean?

In June, folks in Raleigh, N.C., held their annual CityCamp event. CityCamp is a sort of "civic hackathon" for Raleigh. During one part of the event, people broke up into teams and came up with projects that used technology to help solve local, civic needs.

citycamp.jpg

What did almost every project pitched at CityCamp have in common? "Almost every final CityCamp idea had incorporated a stream of content from TriangleWiki," CityCamp and TriangleWiki organizer Reid Seroz said in an interview with Red Hat's Jason Hibbets.

LocalWiki is an effort to create community-owned, living information repositories that will provide much-needed context behind the people, places, and events that shape our communities. The new API makes it really easy for people to build applications and systems that push and pull information from a LocalWiki, and it has already been integrated into a few applications.

The winning project at CityCamp Raleigh, RGreenway, is a mobile app that helps residents find local greenways. Its developers plan to push and pull data from the TriangleWiki's extensive listing of greenways.

Another group in the Raleigh-Durham area, Wanderful, is developing a mobile application that teaches residents about their local history as they wander through town. They're using the LocalWiki API to pull pages and maps from the TriangleWiki.

Ultimately, we hope that LocalWiki can be thought of as an API for the city itself -- a bridge between local data and local knowledge, between the quantitative and the qualitative aspects of community life.

Using the API

You can read the API documentation to learn about the new API. You'll also want to make sure you check out some of the API examples to get a feel for things.
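For a feel of what a call to the API might look like, here is a hedged sketch in Python. The instance URL, the /api/page/ resource path, and the filter parameter are assumptions for illustration only; consult the API documentation and examples linked above for the real endpoints.

    # Hypothetical example of querying a LocalWiki instance's API with 'requests'.
    # The base URL, resource path, and filter names below are illustrative guesses,
    # not documented values; check the official API docs before relying on them.
    import requests

    BASE_URL = "https://example.localwiki.org/api"  # hypothetical LocalWiki instance

    def list_pages(name_contains: str, limit: int = 10) -> list:
        """Return page records whose names contain the given text."""
        resp = requests.get(
            f"{BASE_URL}/page/",
            params={"format": "json", "name__icontains": name_contains, "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("objects", [])

    if __name__ == "__main__":
        for page in list_pages("greenway"):
            print(page.get("name"))

An app like RGreenway or Wanderful would layer maps and local history on top of calls like this one.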

wanderful.jpg

We did a lot of work to integrate advanced geospatial support into the API, extending the underlying API library we were using -- and now anyone building on that library can effortlessly create a geospatially aware API of their own.

This is just the first version of the API, and there's a lot more we want to do! As we add more structured data to LocalWiki, the API will get more and more useful. And we hope to simplify and streamline the API as we see real-world usage.

Want to help? Share your examples for interacting with the API from a variety of environments -- jump in on the page on dev.localwiki.org or add examples/polish to the administrative documentation.

CityCamp photo courtesy of CityCamp Raleigh.

Philip Neustrom is a software engineer in the San Francisco Bay area. He co-founded DavisWiki.org in 2004 and is currently co-directing the LocalWiki.org effort. For the past several years he has worked on a variety of non-profit efforts to engage everyday citizens. He oversaw the development of the popular VideoTheVote.org, the world's largest coordinated video documentation project, and was the lead developer at Citizen Engagement Laboratory, a non-profit focused on empowering traditionally underrepresented constituencies. He is a graduate of the University of California, Davis, with a bachelor's in Mathematics.

August 20 2012

13:34

How Wikipedia Manages Sources for Breaking News

Almost a year ago, I was hired by Ushahidi to work as an ethnographic researcher on a project to understand how Wikipedians managed sources during breaking news events.

Ushahidi cares a great deal about this kind of work because of a new project called SwiftRiver that seeks to collect and enable the collaborative curation of streams of data from the real-time web about a particular issue or event. If another Haiti earthquake happened, for example, would there be a way for us to filter out the irrelevant, the misinformation, and build a stream of relevant, meaningful and accurate content about what was happening for those who needed it? And on Wikipedia's side, could the same tools be used to help editors curate a stream of relevant sources as a team rather than individuals?

pakistan.png

Ranking sources

When we first started thinking about the problem of filtering the web, we naturally thought of a ranking system that would rank sources according to their reliability or veracity. The algorithm would consider a variety of variables involved in determining accuracy, as well as whether sources have been chosen, voted up or down by users in the past, and eventually be able to suggest sources according to the subject at hand. My job would be to determine what those variables are -- i.e., what were editors looking at when deciding whether or not to use a source?
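To make the idea concrete, here is a purely hypothetical sketch of that kind of ranking: a handful of source-level signals collapsed into a single score. The variables and weights are invented for illustration and are not findings from the study.

    # Hypothetical sketch of the source-ranking idea described above.
    # The signals and weights are invented; the report ultimately argues
    # against collapsing sources into a single opaque number like this.
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        past_accuracy: float  # 0.0-1.0: share of past claims that held up
        editor_votes: int     # net up/down votes from editors
        times_cited: int      # how often editors have used this source before

    def naive_score(src: Source, weights=(0.6, 0.25, 0.15)) -> float:
        """Collapse several signals into one number, hiding the article-level
        context (topic, breaking news vs. retrospective) that actually matters."""
        w_acc, w_votes, w_cited = weights
        vote_signal = max(min(src.editor_votes / 100.0, 1.0), -1.0)
        cite_signal = min(src.times_cited / 1000.0, 1.0)
        return w_acc * src.past_accuracy + w_votes * vote_signal + w_cited * cite_signal

    print(naive_score(Source("Example Daily", past_accuracy=0.9, editor_votes=40, times_cited=250)))

As the findings below show, the problem is not computing such a number but the article-specific context it leaves out.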

I started the research by talking to as many people as possible. Originally I was expecting that I would be able to conduct 10 to 20 interviews as the focus of the research, finding out how those editors went about managing sources individually and collaboratively. The initial interviews enabled me to hone my interview guide. One of my key informants urged me to ask questions about sources not cited as well as those cited, leading me to one of the key findings of the report (that the citation is often not the actual source of information and is often provided in order to appease editors who may complain about sources located outside the accepted Western media sphere). But I soon realized that the editors with whom I spoke came from such a wide variety of experience, work areas and subjects that I needed to restrict my focus to a particular article in order to get a comprehensive picture of how editors were working. I chose a 2011 Egyptian revolution article on Wikipedia because I wanted a globally relevant breaking news event that would have editors from different parts of the world working together on an issue with local expertise located in a language other than English.

Using Kathy Charmaz's grounded theory method, I chose to focus on editing activity (in the form of talk pages, edits, statistics and interviews with editors) from January 25, 2011, when the article was first created (within hours of the first protests in Tahrir Square), to February 12, when Mubarak resigned and the article changed its name from "2011 Egyptian protests" to "2011 Egyptian revolution." After reviewing big-picture analyses of the article using Wikipedia statistics on top editors, locations of anonymous editors, and so on, I began with an initial coding of the actions taking place in the text, asking the question, "What is happening here?"

I then developed a more limited codebook using the most frequent and significant codes and proceeded to compare different events with the same code (looking up the relevant edits of the article in order to get the full story) and to look for the tacit assumptions that the actions left out. I did all of this coding in Evernote because it seemed the easiest (and cheapest) way of importing large amounts of textual and multimedia data from the web, but it wasn't ideal: talk pages need to be reformatted when imported, and I ended up coding the data in a single column, since putting each talk-page conversation in its own cell would have been too time-consuming.

evernote.png

I then moved to writing a series of thematic notes on what I was seeing, trying to understand, through writing, what the common actions might mean. I finally moved to the report writing, bringing together what I believed were the most salient themes into a description and analysis of what was happening according to the two key questions that the study was trying to ask: How do Wikipedia editors, working together, often geographically distributed and far from where an event is taking place, piece together what is happening on the ground and then present it in a reliable way? And how could this process be improved?

Key variables

Ethnography Matters has a great post by Tricia Wang that talks about how ethnographers contribute (often invisible) value to organizations by showing what shouldn't be built, rather than necessarily improving a product that already has a host of assumptions built into it.

And so it was with this research project that I realized early on that a ranking system conceptualized this way would be inappropriate -- for the single reason that along with characteristics for determining whether a source is accurate or not (such as whether the author has a history of presenting accurate news articles), a number of important variables are independent of the source itself. On Wikipedia, these include variables such as the number of secondary sources in the article (Wikipedia policy calls for editors to use a majority of secondary sources), whether the article is based on a breaking news story (in which case the majority of sources might have to be primary, eyewitness sources), or whether the source is notable in the context of the article. (Misinformation can also be relevant if it is widely reported and significant to the course of events, as Judith Miller's New York Times stories were for the Iraq War.)

nyt.png

This means that you could have an algorithm for determining how accurate the source has been in the past, but whether you make use of the source or not depends on factors relevant to the context of the article that have little to do with the reliability of the source itself.

Another key finding recommending against source ranking is that Wikipedia's authority rests on its requirement that each potentially disputed phrase be backed up by reliable sources that readers can check, whereas source ranking necessarily requires that the calculation be invisible in order to prevent gaming. It is already a source of potential weakness that Wikipedia citations are often not the original source of information (since editors often choose citations that will be deemed more acceptable to other editors), so further hiding how sources are chosen would undermine this important value.

On the other hand, having editors provide a rationale behind the choice of particular sources, and showing the full variety of sources consulted rather than only those kept after page-loading constraints, may be useful -- especially since these discussions do often take place on talk pages but are practically invisible because they are difficult to find.

Wikipedians' editorial methods

Analyzing the talk pages of the 2011 Egyptian revolution article case study enabled me to understand how Wikipedia editors set about the task of discovering, choosing, verifying, summarizing, adding information and editing the article. It became clear through the rather painstaking study of hundreds of talk pages that editors were:

  1. storing discovered articles, either in their own editor domains (by putting relevant articles into categories) or by alerting other editors to breaking news on the talk page,
  2. choosing sources by finding at least two independent sources that corroborated what was being reported but then removing some of the citations as the page became too heavy to load,
  3. verifying sources by finding sources to corroborate what was being reported, by checking what the summarized sources contained, and/or by waiting to see whether other sources corroborated what was being reported,
  4. summarizing by taking screenshots of videos and inserting captions (for multimedia) or by choosing the most important events of each day for a growing timeline (for text),
  5. adding text to the article by choosing how to reflect the source within the article's categories and providing citation information, and
  6. editing by disputing the way that other editors reflected information from various sources and by replacing primary sources with secondary sources over time.

It was important to discover the work process that editors were following because any tool that assisted with source management would have to accord as closely as possible with the way that editors like to do things on Wikipedia. Since the process is managed by volunteers and because volunteers decide which tools to use, this becomes really critical to the acceptance of new tools.

sources.png

Recommendations

After developing a typology of sources and isolating different types of Wikipedia source work, I made two sets of recommendations as follows:

  1. The first would be for designers to experiment with exposing variables that are important for determining the relevance and reliability of individual sources as well as the reliability of the article as a whole.
  2. The second would be to provide a trail of documentation by replicating the work process that editors follow (somewhat haphazardly at the moment) so that each source is provided with an independent space for exposition and verification, and so that editors can collect breaking news sources collectively.

variables.png

Regarding a ranking system for sources, I'd argue that a descriptive repository of major media sources from different countries would be incredibly beneficial, but that a system for determining which sources are ranked highest according to usage would yield really limited results. (We know, for example, that the BBC is the most used source on Wikipedia by a high margin, but that doesn't necessarily help editors in choosing a source for a breaking news story.) Exposing the variables used to determine relevancy (rather than adding them up in invisible amounts to come up with a magical number) and showing the progression of sources over time offers some opportunities for innovation. But this requires developers to think out of the box in terms of what sources (beyond static texts) look like, where such sources and expertise are located, and how trust is garnered in the age of Twitter. The full report provides details of the recommendations and the findings and will be available soon.

Just the beginning

This is my first comprehensive ethnographic project, and one of the things I've noticed when doing other design and research projects using different methodologies is that, although the process can seem painstaking and it can prove difficult to articulate the hundreds of small observations into findings that are actionable and meaningful to designers, getting close to the experience of editors is extremely valuable work that is rare in Wikipedia research. I realize now that, until I actually studied an article in detail, I knew very little about how Wikipedia works in practice. And this is only the beginning!

Heather Ford is a budding ethnographer who studies how online communities get together to learn, play and deliberate. She currently works for Ushahidi and is studying how online communities like Wikipedia work together to verify information collected from the web and how new technology might be designed to help them do this better. Heather recently graduated from the UC Berkeley iSchool where she studied the social life of information in schools, educational privacy and Africans on Wikipedia. She is a former Wikimedia Foundation Advisory Board member and the former Executive Director of iCommons - an international organization started by Creative Commons to connect the open education, access to knowledge, free software, open access publishing and free culture communities around the world. She was a co-founder of Creative Commons South Africa and of the South African nonprofit, The African Commons Project as well as a community-building initiative called the GeekRetreat - bringing together South Africa's top web entrepreneurs to talk about how to make the local Internet better. At night she dreams about writing books and finding time to draw.

This article also appeared at Ushahidi.com and Ethnography Matters. Get the full report at Scribd.com.

August 17 2012

10:07

Diaspora's next act: Social remixing site Makr.io

Another tool to waste time.

AllThingsD :: “So many people are worried that technology is mediating us, but I think it’s just giving us a new way to hang out with our friends,” says Salzberg, co-creator of Makr.io, a “collaborative Web remixing tool” where users try to one-up each other by posting funny captions on pictures, a la lolcats.

A report by Liz Gannes, allthingsd.com

August 16 2012

14:00

Why Self-Publishers Should Care That Penguin Bought Author Solutions

Should self-publishers care that Pearson, the corporate parent of Penguin Group, has acquired Author Solutions and its subsidiaries? Maybe. Because among them are Author House, Booktango, Inkubook, iUniverse, Trafford, Xlibris, Wordclay, AuthorHive, Palibrio, and Hollywood Pitch.

Thus, the move marks something significant happening in the world of self-publishing. Here's my take on the acquisition and what it means, along with some pundits' reactions to the merger and a report from my conversation with the senior vice president of marketing for Author Solutions, Keith Ogorek.

Why Author Solutions? Why Now?

Keith Ogorek, Sr VP Marketing, Author Solutions

It's no secret that, with traditional publishing houses suffering, smart agents and acquisitions editors actively seek out successful self-published authors. Publishers like Harlequin, Hay House, and Thomas Nelson partnered with Author Solutions (ASI) to create self-publishing services back in 2009, both to expand into a profitable business and to mine the data for successful authors in their genres.

Penguin is no different, of course, and its solution was Book Country, a genre-fiction writing community, which only added self-publishing services in November 2011 -- late to the game.

"Sure they've been watching the trend," Ogorek said. "Penguin has already been acquiring self-published titles. With the [ASI] acquisition they will be able to identify self-published authors earlier in the process, the ones that meet the high standards of Penguin."

Bringing in Community

One big question that arises from the purchase is: Will Pearson's Book Country continue as both a genre fiction writing community and self-publishing service retooled to use Author Solutions technologies and services? Or will Book Country revert to a writing community and retire its self-publishing arm to open a new and improved self-publishing service more obviously branded next to Penguin?

"It's part of the discussion," Ogorek said, "We think there's a bigger opportunity in the online learning center there, and it's possible that Booktango could bring in Book Country as part of that. It's a great site for curating content and community involvement. However," he added, "I'd like to talk to you in about a month. After all, we just got married yesterday, and we haven't figured out where all the furniture is going to go."

(Book Country's self-publishing tools area recently went offline while they "upgrade the site.")

Book Country Self-Publishing Tools Offline

A Booktango and Book Country pairing could be interesting, as community is lacking in most self-publishing platforms.

Scribd comes close, with its document sharing and commenting features, paired with a sales platform. But it doesn't distribute, so popular authors like "My Drop Dead Life" author Hyla Molander have to choose print and e-book platforms that get them into all the stores.

Then there's the WattPad community for the young adult market, where author Brittany Geragotelis shared her writing and attracted 13 million readers before deciding to self-publish using Amazon CreateSpace and KDP for print and e-book sales.

As a side note, WattPad and Smashwords partnered to close the gap between community file-sharing and commenting and getting books out into the stores. The right combination of community and publishing platform could attract authors to Booktango and Book Country.

DIY Services ... or More?

Ogorek uses the home-improvement metaphor to explain that DIY services like their Booktango e-book service, along with Smashwords and Amazon CreateSpace, Kindle Direct Publishing and maybe BookBaby, are "for people with skills, who know how to build a deck and want to do it themselves." Then there are the people who don't have the skills, or maybe just don't have the time, "who hire contractors to build the deck." For these authors, they provide add-on services and "assisted self-publishing" tools like iUniverse and Author House, Trafford and Xlibris, for which authors pay into the five figures.

Self-publishers who dream of winning a traditional publishing contract may anticipate that Penguin will notice them if they're popular on Book Country, or Booktango, or whatever it will be called. (Though so obviously impractical, the acquisition dream dies hard, even now, when so many traditionally published authors are jumping to the free services.)

How is an Author to Choose?

Booktango List of Services

Instead of salivating over a possible acquisition by Penguin, self-publishers should be asking how the Penguin/ASI services help them now. Do Booktango and Book Country compete in the current market? Well, yeah. Let's just say that ASI is pulling an Amazon and underselling, giving authors 100% of earnings when they publish with Booktango, without even a signup fee. "It's a business decision on our part," Ogorek said. "We think that authors will purchase services, and we'll have the opportunity down the road to get their books out there and known."

So how is an author to choose? Author Solutions is often criticized for its hard upselling, and Booktango's pages are not exempt. There are "hot deals" on social media consultations, as well as "new" marketing services like Kirkus Indie Review, and blogger review services among the many listed on their site.

Their packaged services (iUniverse, Author House, etc.) are also famous for add-ons, but let's stick to Booktango, whose e-book packages range from free to $189. In comparison, Smashwords is free, giving authors 85% of earnings. BookBaby is closest in structure to Booktango by not taking a percentage, but it makes its money by signing up authors for $99 and in premium services. Amazon KDP gives the author 70% of earnings, and Amazon CreateSpace (print) 80%.

BookBaby, whose premium publishing e-book packages top out at $249, sells add-on services like cover design and advanced formatting, with cover designs topping out at $279. (They can also do web design with their HostBaby product.) Smashwords doesn't sell anything but the authors' e-books, and almost reluctantly passes on an email list of e-book formatters and cover designers liked by its authors.
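A quick back-of-the-envelope comparison shows what those percentages mean per copy sold. The $4.99 list price is an assumption for illustration, and the calculation ignores upfront fees, printing costs, and retailer cuts, which vary by platform.

    # Rough per-copy earnings at an assumed $4.99 list price, using the royalty
    # percentages quoted above. Upfront fees (e.g., BookBaby's $99 signup) and
    # print costs are ignored here for simplicity.
    LIST_PRICE = 4.99

    royalty_rates = {
        "Booktango": 1.00,              # 100% of earnings, no signup fee
        "BookBaby": 1.00,               # no percentage taken, but an upfront fee
        "Smashwords": 0.85,
        "Amazon CreateSpace (print)": 0.80,
        "Amazon KDP (e-book)": 0.70,
    }

    for platform, rate in sorted(royalty_rates.items(), key=lambda kv: -kv[1]):
        print(f"{platform:28} ${LIST_PRICE * rate:.2f} per copy")

On those terms a single copy earns roughly between $3.49 and $4.99, which is why the upsold services, not the royalty split, are where the real cost differences lie.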

The Critics Say...

Smashwords founder Mark Coker is a longtime critic of Author Solutions, saying that they make more money from selling services to authors than selling authors' books: "Author Solutions is one of the companies that put the 'V' in vanity.  Author Solutions earns two-thirds or more of their income selling services and books to authors, not selling authors' books to readers ... Does Pearson think that Author Solutions represents the future of indie publishing?"

Mark Coker, Founder, Smashwords

It's not news that ASI, along with Amazon, is the company that some publishing pundits love to hate. Jane Friedman, in her Writer Unboxed blog, notes that ASI's acquisitions are "appearing more and more like a huge scramble to squeeze a few more profitable dollars out of a service that is no longer needed, that is incredibly overpriced when compared to the new and growing competition, and has less to recommend it with each passing day, as more success stories come from the e-publishing realm where author royalties are in the 70-85% range. (An author typically earns less than half that percentage for royalties on a POD book.)"

Guy LeCharles Gonzalez of Digital Book World was skeptical of Penguin's claims about the value of the acquisition, posting on his blog that "my own first reaction was pretty cynical." He also finds odd Penguin Group CEO John Makinson's claim, as reported by Publisher's Lunch, that he expects there will be a 'new and growing category of professional authors who are going to gravitate towards the ASI solution rather than the free model.'

I always advise authors to be skeptical of add-on services -- marketing especially. It's generally agreed in the industry that unless you've got very deep pockets, you just cannot hire marketing out to someone else, and that's even if the book is great. I've remarked many times that authors are as much at fault as the seller, or more, for paying more than they need for services, and for paying for services they don't need. Especially vulnerable are new authors and authors recently dumped by their publishing companies, who would like to believe it can be easy to simply throw money at a service to solve their problem, mewing in an almost deliberate naiveté, "I just want to write."

If that sounds too harsh, it's because I have often found the language on some of ASI's pages convincing enough to frighten uninformed authors into paying for a service they could do cheaply and easily themselves. In fact, it was the language on Booktango's U.S. Copyright Registration service, along with the $150 price tag, that led me to write my previous post on how to easily and cheaply register your copyright electronically for $35 in 35 minutes.

I asked Ogorek to comment, and he responded with the deck analogy. "It's up to the individual to decide whether they want a product. They may have the time and skills to build the deck themselves, or they may not want to learn how, and hire the contractor instead. We provide tools and services to serve both cases."

The Future

Should self-publishers put ASI's Booktango in the running when they're considering Smashwords and BookBaby, Amazon CreateSpace and Kindle Direct Publishing? Sure. Just resist the upsell.

Should you consider purchasing ASI's iUniverse, Author House, Xlibris, or another package? Hmmmm. It is very difficult to convince a committed do-it-yourselfer like me to recommend these options. I've never taken a hands-off approach to publishing, and I like to know who is editing, designing, and formatting my book, instead of throwing it into a mill and seeing which cubicle it lands in. I may get a riffed senior editor from Random House, or a recent college graduate. But the bigger question may be: Will Penguin provide a much-needed publisher's touch to organize the confusing array of products and soften ASI's hard-sell approach?

Will the Book Country community prove valuable to authors seeking to perfect and sell their books? Is all the acquisition and activity productive and author-friendly, or is it just rearranging the deck chairs on the Titanic? Penguin has a chance to reorganize, rebrand, and remarket the Author Solutions companies with a level of transparency that regains the trust of authors and critics in the industry. The activity is worth watching closely.

Carla King is an author, a publishing consultant, and founder of the Self-Publishing Boot Camp program providing books, lectures and workshops for prospective self-publishers. She has self-published non-fiction travel and how-to books since 1994 and has worked in multimedia since 1996. Her series of dispatches from motorcycle misadventures around the world are available as print books, e-books and as diaries on her website. Her Self-Publishing Boot Camp Guide for Authors was updated in early 2012 and is available in print and online at the usual resellers.


August 14 2012

14:00

What's Next for Ushahidi and Its Platform?

This is part 2 in a series. In part 1, I talked about how we think of ourselves at Ushahidi and how we think of success in our world. It set up the context for this post, which is about where we're going next as an organization and with our platform.

We realize that it's hard to understand just how much is going on within the Ushahidi team unless you're in it. I'll try to give a summarized overview, and will answer any questions through the comments if you need more info on any of them.

The External Projects Team

Ushahidi's primary source of income is private foundation grant funding (Omidyar Network, Hivos, MacArthur, Google, Cisco, Knight, Rockefeller, Ford), and we don't take any public funding from any country so that we are more easily able to maintain our neutrality. Last year, we embarked on a strategy to diversify our revenue stream, endeavoring to decrease our percentage of revenues based on grant funding and offset that with earned revenue from client projects. This turned out to be very hard to do within our current team structure, as the development team ended up being pulled off of platform-side work and client-side work suffered for it. Many internal deadlines were missed, and we found ourselves unable to respond to the community as quickly as we wanted.

This year we split out an "external projects team" made up of some of the top Ushahidi deployers in the world, and their first priority is to deal with client and consulting work, followed by dev community needs. We're six months into this strategy, and it seems like this team format will continue to work and grow. Last year, 20% of our revenue was earned; this year we'd like to get that to the 30-40% range.

Re-envisioning Crowdmap

When anyone joins the Ushahidi team, we tend to send them off to some conference to speak about Ushahidi in the first few weeks. There's nothing like knowing that you're going to be onstage talking about your new company to galvanize you into really learning about and understanding everything about the organization. Basically, we want you to understand Ushahidi and be on the same mission with us. If you are, you might explain what we do in a different way than I do onstage or in front of a camera, but you'll get the right message out regardless.

crowdmap-screenshot-mobile-397x500.png

You have a lot of autonomy within your area of work, or so we always claimed internally. This was tested earlier this year, when David Kobia, Juliana Rotich and I, as founders, were forced to ask whether we were serious about that claim or were just paying it lip service. Brian Herbert leads the Crowdmap team, which in our world means he's in charge of the overall architecture, strategy and implementation of the product.

The Crowdmap team met up in person earlier this year and hatched a new product plan. They re-envisioned what Crowdmap could be, started mocking up the site, and began building what would be a new Crowdmap, a complete branch off the core platform. I heard this was underway, but didn't get a brief on it until about six weeks in. When I heard what they had planned, and got a complete walk-through by Brian, I was floored. What I was looking at was so different from the original Ushahidi, and thus what we have currently as Crowdmap, that I couldn't align the two in my mind.

My initial reaction was to shut it down. Fortunately, I was in the middle of a random 7-hour drive between L.A. and San Francisco, so that gave me ample time to think by myself before I made any snap judgments. More importantly, it also gave me time to call up David and talk through it with him. Later that week, Juliana, David and I had a chat. It was at that point that we realized that, as founders, we might have blinders on of our own. Could we be stuck in our own 2008 paradigm? Should we trust our team to set the vision for a product? Did the product answer the questions that guide us?

The answer was yes.

The team has done an incredible job of thinking deeply about Crowdmap users, then translating that usage into a complete redesign, which is both beautiful and functional at the same time. It's user-centric, as opposed to map-centric, which is the greatest change. But, after getting around our initial feelings of alienness, we are confident that this is what we need to do. We need to experiment and disrupt ourselves -- after all, if we aren't willing to take risks and try new things, then we fall into the same trap that those who we disrupted did.

A New Ushahidi

For about a year we've been asking ourselves, "If we rebuilt Ushahidi, with all we know now, what would it look like?"

To redesign, re-architect and rebuild any platform is a huge undertaking. Usually this means part of the team is left to maintain and support the older code while the others are building the shiny new thing. It means that while you're spending months and months building the new thing, you appear stagnant and less responsive to the market. It means that you might get it wrong and what you build is irrelevant by the time it's launched.

Finally, after many months of internal debate, we decided to go down this path. We've started with a battery of interviews with users, volunteer developers, deployers and internal team members. Heather Leson's blog post this last week on the design direction we're heading in shows where we're going. Ushahidi v3 is a complete redesign of Ushahidi's core platform, from the first line of code to the last HTML tag. On the front end it's mobile-web-focused out of the gate, and the backend admin area is about streamlining the publishing and verification process.

At Ushahidi we are still building, theming and using Ushahidi v2.x, and will continue to do so for a long time. This idea of a v3 is just vaporware until we actually decide to build it, but the exercise has already borne fruit because it forces us to ask what the platform might look like if we weren't constrained by the legacy structure we had built. We'd love to get more input from everyone as we go forward.

SwiftRiver in Beta

After a couple of fits and starts, SwiftRiver is now being tried out by 500-plus beta testers. It's about 75% of the way to completion but usable, so it's out and we're getting feedback from everyone on what needs to be changed, added and removed to make it the tool we all need to manage large amounts of data. It's an expensive, server-intensive platform to run, so those who use it on our servers in the future will have to pay for that use. As always, the core code will be made available, free and open source, for those who would like to set it up and run it on their own.

In Summary

The amount of change that Ushahidi is undertaking is truly breathtaking to us. We're cognizant of just how much we're putting on the line. However, we know this: In our world of technology, those who don't disrupt themselves will themselves be disrupted. In short, we'd rather go all-in to make this change happen ourselves than be mired in stagnancy and defensive activity.

As always, this doesn't happen in a vacuum for Ushahidi. We've relied on those of you who are coders and deployers to help us guide the platforms for over four years. Many of you have been a part of one of these product rethinks. If you aren't already, and would like to be, get in touch with me or Heather to help us re-envision and build the future.

Raised in Kenya and Sudan, Erik Hersman is a technologist and blogger who lives in Nairobi. He is a co-founder of Ushahidi, a free and open-source platform for crowdsourcing information and visualizing data. He is the founder of AfriGadget, a multi-author site that showcases stories of African inventions and ingenuity, and an African technology blogger at WhiteAfrican.com. He currently manages Ushahidi's operations and strategy, and is in charge of the iHub, Nairobi's Innovation Hub for the technology community, bringing together entrepreneurs, hackers, designers and the investment community. Erik is a TED Senior Fellow, a PopTech Fellow and speaker and an organizer for Maker Faire Africa. You can find him on Twitter at @WhiteAfrican

This post originally appeared on Ushahidi's blog.
