"Tell the chef, the beer is on me."
In 24 hours, Zeegas -- a new form of interactive media -- will be installed on four projection screens at San Francisco's renowned Museum of Modern Art. This showcase is part of "The Making Of..." -- a collaboration between award-winning NPR producers the Kitchen Sisters, KQED, AIR's Localore, the Zeega community and many others.
Join in this collaborative media experiment and make Zeegas for SFMOMA. To participate, log in to Zeega and create something for the exhibition. To make the simplest Zeega possible, just combine an animated GIF and a song. And if you want to do more, go wild.
You can contribute from anywhere in the world. The deadline is midnight EST on Wednesday.
If you've never made a Zeega, worry not: It's super-easy. You can quickly combine audio, images, animated GIFs, text and video from across the web. Zeegas come in all shapes and sizes, from GIFs accompanied by a maker's favorite song to a haunting photo story about a Nevada ghost town to an interactive video roulette.
The Zeega exhibition is one piece of "The Making Of...Live at SFMOMA." As SFMOMA closes for two years of renovation and expansion, over 100 makers from throughout the region will gather to share their skills and crafts and tell their stories.
The event will feature two live performances of Zeegas and the "Web Documentary Manifesto"; a session with Roman Mars ("99% Invisible"), The Kitchen Sisters and AIR's Sue Schardt talking about Localore; and other storytelling gatherings throughout the festivities. For the full program, click here.
Jesse Shapins is a media entrepreneur, cultural theorist and urban artist. He is Co-Founder/CEO of Zeega, a platform revolutionizing interactive storytelling for an immersive future. For the past decade, he has been a leader in innovating new models of web and mobile publishing, and his work has been featured in Wired, The New York Times, Boingboing and other venues. His artistic practice focuses on mapping the imagination and perception of place between physical, virtual and social space. His work has been cited in books such as The Sentient City, Networked Locality and Ambient Commons, and exhibited at MoMA, Deutsches Architektur Zentrum and the Carpenter Center for Visual Arts, among other venues. He was Co-Founder of metaLAB (at) Harvard, a research unit at the Berkman Center for Internet and Society, and served on the faculty of architecture at the Harvard Graduate School of Design, where he invented courses such as The Mixed-Reality City and Media Archaeology of Place.
Recent events in Boston highlight both the potential and hazards of ever-present cameras. In the hours following the April 15 bombing, law enforcement agencies called upon commercial businesses and the public to submit relevant footage from surveillance cameras and mobile devices. While the tsunami of crowdsourced data threatened to overwhelm servers and analysts, it provided clues that ultimately led to identifying the perpetrators. It also led to false identifications and harassment of innocent bystanders.
The use of surveillance video to solve large-scale crimes first came to widespread attention with the 2005 London subway bombings. In part because of its history of violent attacks by the IRA, London had invested heavily in closed-circuit television (CCTV) technology and had installed nearly 6,000 cameras in the underground system. In the days before smartphones, these publicly installed cameras were the most reliable source of video evidence, and law enforcement was able to identify the bombers using this footage.
With the advent of low-cost cameras and video recorders in smartphones, witnesses to events soon had a powerful tool to contribute to the law enforcement toolbox. Couple this technical capacity with the proliferation of social-networking platforms and the possibilities for rapid identification -- as well as the spread of misinformation -- become clear.
Vancouver police were overwhelmed with evidence from social media after the Stanley Cup riot in June 2011. This instance also highlighted the need for two things: stronger means of verification, since a number of photos were retouched or falsified, and protections against vigilantism or harassment of unofficial suspects.
Several projects currently in development address the need for a reliable system to authenticate digital images. In addition to a growing number of commercial companies specializing in audio and video forensic analysis, academic and non-profit labs are developing tools for this purpose. Informacam, a project of WITNESS and The Guardian Project, will strengthen metadata standards, and the Rashomon Project at UC Berkeley will aggregate and synchronize multiple videos of a given event. (Disclosure: The Rashomon Project is a project of the CITRIS Data and Democracy Initiative, which I direct.) These tactics, among others, will bolster the use of video evidence for criminal investigations and prosecutions.
Despite the clear advantages of drawing on crowdsourced footage for solving crimes, civil liberties groups and privacy advocates have warned about the dangers of perpetual surveillance. We saw in the Boston case the liability inherent in the ease and speed of circulating false claims and images. The New York Post published a front-page photo of two young men mistakenly identified as suspects, and the family of another young man, who had been missing for several weeks, was tormented by media seeking stories about the misplaced suspicion fueled by Reddit, an online social media aggregator.
In addition to facilitating the "wisdom of crowds," technology for automated surveillance, including face recognition and gait analysis, grows more sophisticated. In the last decade, many cities have accelerated implementation of surveillance systems, capitalizing on advances in computer technology and funds available from the Department of Homeland Security and other public sources. Yet whether considering fixed cameras or citizen footage, the effectiveness of surveillance for crime prevention is mixed. A 2009 CITRIS study showed that San Francisco's installation of cameras in high-risk neighborhoods led to decreases in property crime but apparently had little effect on violent crime. If anything, perpetrators learned to evade the cameras, and crimes were displaced into neighboring areas or private spaces.
In open societies, technological advances should spark new discussions about ethics and protocol for their implementation. Communities, both online and in-person, have an opportunity to debate the benefits and costs of video evidence in the context of social-networking platforms. While their enthusiasm must be tempered by regard for due process, armchair investigators should be encouraged to work in partnership with public agencies charged with ensuring public safety.
Camille Crittenden is Deputy Director of CITRIS, based at UC Berkeley, where she also directs the Data and Democracy Initiative. Prior to this appointment, she served as Executive Director of the Human Rights Center at Berkeley Law, where she was responsible for overall administration of the Center, including fundraising, communications, and outreach, and developed its program in human rights, technology, and new media. She held previous positions as Assistant Dean for Development in the division of International and Area Studies at UC Berkeley and in development and public relations at University of California Press and San Francisco Opera. She holds a Ph.D. from Duke University.
Image of surveillance camera courtesy of Flickr user jonathan mcintosh.
This post was co-written by Public Lab organizer Don Blair.
Public Lab is pleased to announce the launch of our fourth Kickstarter today, "Infragram: the Infrared Photography Project." The idea for the Infragram was originally conceived during the BP oil spill in the Gulf of Mexico as a tool for monitoring wetland damage. Since then, the concept has been refined into an affordable and powerful tool for farmers, gardeners, artists, naturalists, teachers and makers for as little as $35 -- whereas near-infrared cameras typically cost $500-$1,200.
Technologies such as the Infragram play a role similar to the one photography played during the rise of credible print journalism: They democratize and improve reporting about environmental impacts. The Infragram in particular will allow regular people to monitor their environment through verifiable, quantifiable, citizen-generated data. You can now participate in a growing community of practitioners experimenting with and developing low-cost near-infrared technology by backing the Infragram Project and joining the Public Lab infrared listserv.
Infrared imagery has a long history of use by organizations like NASA to assess the health and productivity of vegetation via sophisticated satellite imaging systems like Landsat. It has also been applied on-the-ground in recent years by large farming operations. By mounting an infrared imaging system on a plane, helicopter, or tractor, or carrying around a handheld device, farmers can collect information about the health of crops, allowing them to make better decisions about how much fertilizer to add, and where. But satellites, planes, and helicopters are very expensive platforms; and even the tractor-based and handheld devices for generating such imagery typically cost thousands of dollars. Further, the analysis software that accompanies many of these devices is "closed source"; the precise algorithms used -- which researchers would often like to tweak, and improve upon -- are often not disclosed.
So, members of the Public Lab community set out to see whether it was possible to make a low-cost, accessible, fully open platform for capturing infrared imagery useful for vegetation analysis. Using the insights and experience of a wide array of community members -- from farmers and computer geeks to NASA-affiliated researchers -- a set of working prototypes for infrared image capture started to emerge. By now, the Public Lab mailing lists and website contain hundreds of messages, research notes, and wikis detailing various tools and techniques for infrared photography, ranging from detailed guides to DIY infrared retrofitting of digital SLRs to extremely simple, low-cost off-the-shelf filters selected through a collective process of testing and reporting back to the community.
All of the related discussions, how-to guides, image examples, and hardware designs are freely available, published under Creative Commons and CERN Open Hardware licensing. There are already some great examples of beautiful NDVI/near-infrared photography by Public Lab members -- including timelapses of flowers blooming, and balloon-based infrared imagery that quickly reveals which low-till methods are better at facilitating crop growth.
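For the curious, the arithmetic behind NDVI imagery is simple enough to sketch in a few lines. The snippet below is a minimal illustration using numpy, assuming you have already extracted a near-infrared band and a visible band from a photo as arrays; it is not Public Lab's own analysis code.

```python
import numpy as np

def ndvi(nir, vis):
    """Normalized Difference Vegetation Index: (NIR - VIS) / (NIR + VIS).

    `nir` and `vis` are arrays of the same shape holding near-infrared
    and visible-light reflectance per pixel. Healthy vegetation reflects
    strongly in NIR and absorbs visible light, so values closer to +1
    suggest more photosynthetic activity.
    """
    nir = nir.astype(float)
    vis = vis.astype(float)
    denom = nir + vis
    denom[denom == 0] = 1e-6  # avoid division by zero on dark pixels
    return (nir - vis) / denom

# Example with a tiny synthetic 2x2 image
nir_band = np.array([[0.8, 0.6], [0.2, 0.5]])
vis_band = np.array([[0.2, 0.3], [0.2, 0.4]])
print(ndvi(nir_band, vis_band))
```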
By now, the level of interest and experience around DIY infrared photography in the Public Lab community has reached a tipping point, and Public Lab has decided to use a Kickstarter as a way of disseminating the ideas and techniques around this tool to a wider audience, expanding the community of users/hackers/developers/practitioners. It's also a way of generating support for the development of a sophisticated, online, open-source infrared image analysis service, allowing anyone who has captured infrared images to "develop" them and analyze them according to various useful metrics, as well as easily tag them and share them with the wider community. The hope is that by raising awareness (and by garnering supporting funds), Public Lab can really push the "Infrared Photography Project" forward at a rapid pace.
Accordingly, we've set ourselves a Kickstarter goal of 5,000 "backers" -- we're very excited about the new applications and ideas that this large number of new community members would bring! And, equally exciting: The John S. and James L. Knight Foundation has offered to provide a matching $10,000 of support to the Public Lab non-profit if we reach 1,000 backers.
With this growing, diverse community of infrared photography researchers and practitioners -- from professional scientists, to citizen scientists, to geek gardeners -- we're planning on developing Public Lab's "Infrared Photography Project" in many new and exciting directions.
We hope you'll join us by contributing to the Kickstarter campaign and help grow a community of open-source infrared enthusiasts and practitioners!
A co-founder of Public Laboratory for Open Technology and Science, Shannon is based in New Orleans as the Director of Outreach and Partnerships. With a background in community organizing, prior to working with Public Lab, Shannon held a position with the Anthropology and Geography Department at Louisiana State University as a Community Researcher and Ethnographer on a study about the social impacts of the spill in coastal Louisiana communities. She was also the Oil Spill Response Director at the Louisiana Bucket Brigade, conducting projects such as the first on-the-ground health and economic impact surveying in Louisiana post-spill. Shannon has an MS in Anthropology and Nonprofit Management, a BFA in Photography and Anthropology and has worked with nonprofits for over thirteen years.
Don Blair is a doctoral candidate in the Physics Department at the University of Massachusetts Amherst, a local organizer for The Public Laboratory for Open Technology and Science, a Fellow at the National Center for Digital Government, and a co-founder of Pioneer Valley Open Science. He is committed to establishing strong and effective collaborations among citizen / civic and academic / industrial scientific communities through joint research and educational projects. Contact him at http://dwblair.github.io, or via Twitter: @donwblair
After an insane and memorable week at SXSW Interactive in Austin in March, we came away with our work cut out for us: improving Pop Up Archive so that it's a reliable place to make all kinds of audio searchable, findable and reusable. Thanks in no small part to the brilliant development team at PRX, we've come leaps and bounds since then.
Pop Up Archive can:
We've been opening the site to select groups of pioneering users, and we'd love input from the community. Request an invite here.
The content creators and caretakers we're talking to have valuable digital material on their hands: raw interviews and oral histories, partial mixes of produced works, and entire series of finished pieces. They can't revisit, remix, or repackage that material -- it's stored in esoteric formats in multiple locations. And it gets lost every time a hard drive dies or a folder gets erased to make more space on a laptop.
We're hearing things like:
"Someday I'm gonna spend a month organizing all this, but I plug [hard drives] in until I find what I need."
"Imagine being able to find a sentence somewhere in your archive. That would be an amazing tool."
"Unfortunately...we don't have a good way of cleaning [tags] to know that 'Obama,' 'Mr. Obama,' and 'Barack Obama' should be just one entry."
No one wants to figure out how to save all that audio, not to mention search on anything more than filenames. Some stations and media companies maintain incredible archives, but they've got different methods for managing the madness, which don't always line up with workflows and real-world habits. Content creators rely on their memories or YouTube to find old audio, and that works to a degree. But in the meantime, lots of awesome, time-saving and revenue-generating opportunities are going to waste.
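The tag problem quoted above is a classic data-cleaning task: collapsing variant spellings of the same entity into one entry. A minimal sketch of what that normalization might look like (the alias table here is purely hypothetical):

```python
# Hypothetical alias table: in practice this mapping would be built by
# hand or with an entity-resolution tool, not hard-coded like this.
CANONICAL = {
    "obama": "Barack Obama",
    "mr. obama": "Barack Obama",
    "barack obama": "Barack Obama",
}

def normalize_tag(tag: str) -> str:
    """Collapse known variants of a tag to one canonical entry."""
    return CANONICAL.get(tag.strip().lower(), tag.strip())

tags = ["Obama", "Mr. Obama", "Barack Obama", "Kitchen Sisters"]
print(sorted({normalize_tag(t) for t in tags}))
# -> ['Barack Obama', 'Kitchen Sisters']
```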
Want a taste from the archive? Let Nikki Silva tell you about "War and Separation," one of the first pieces The Kitchen Sisters produced for NPR in the early 1980s.
Read more in the press release.
Before arriving in California, Anne Wootton lived in France and managed a historic newspaper digitization project at Brown University. Anne came to the UC-Berkeley School of Information with an interest in digital archives and the sociology of technology. She spent summer 2011 working with The Kitchen Sisters and grant agencies to identify preservation and access opportunities for independent radio. She holds a Master's degree in Information Management and Systems.
This past semester, I flew a drone. I helped set up a virtual reality environment. And I helped print a cup out of thin air.
Nice work if you can get it.
Working as a research assistant to Dan Pacheco at the Peter A. Horvitz Endowed Chair for Journalism Innovation at the S.I. Newhouse School of Public Communications at Syracuse University, I helped run the Digital Edge Journalism Series in the spring semester. We held a series of four programs that highlighted the cutting edge of journalism technology. Pacheco ran a session about drones in media; we had Dan Schultz from the MIT Media Lab talk about hacking journalism; we hosted Nonny de la Peña and her immersive journalism experience, and we had a 3D printer in our office, on loan from the Syracuse University ITS department, showing what can be made.
For someone who spent 10 years in traditional media as a newspaper reporter, it was an eye-opening semester. Here are some of the lessons I learned after spending a semester on the digital edge. Maybe they can be useful for you as you navigate the new media waters.
During our 3D printer session, as we watched a small globe and base print almost out of thin air, I turned to Pacheco and said, "This is the Jetsons. We're living the Jetsons."
This stuff is all real. It sounds obvious to say, but in a way, it's an important thing to remember. Drones, virtual reality and 3D printing all sound like stuff straight out of science fiction. But they're here. And they're being used. More saliently, the barrier to entry for these technologies is not as high as you'd think. You can fly a drone using an iPad. The coding used to create real-time fact-checking programs is accessible. 3D printers are becoming cheaper and more commercially available. And while creating a full-room 3D immersive experience still takes a whole lot of time, money and know-how (we spent the better part of two days putting the experience together, during which I added "using a glowing wand to calibrate a $100,000 PhaseSpace Motion Capture system, then guided students through an immersive 3D documentary experience" to my skill set), you can create your own 3D world using Unity 3D software, which has a free version.
The most important thing I learned is to get into the mindset that the future is here. The tools are here, they're accessible, they can be easy and fun to learn. Instead of thinking of the future as something out there that's going to happen to you, our seminar series showed me that the future is happening right now, and it's something that we can create ourselves.
One of the first questions we'd always get, whether it was from students, professors or professionals, was: "This is neat, but what application does it have for journalism?" It's a natural question to ask of a new technology, and one that sparked a lot of good discussions. What would a news organization use a drone for? What would a journalist do with the coding capabilities Schultz showed us? What kind of stories could be told in an immersive, virtual-reality environment? What journalistic use can a 3D printer have?
These are great questions. But questions become problems when they are used as impediments to change. The notion that a technology is only useful if there's a fully formed and tested journalistic use already in place for it is misguided. The smart strategy moving forward may be to get the new technologies and see what you can use them for. You won't know how you can use a drone in news coverage until you have one. You won't know how a 3D printer can be used in news coverage until you try it out.
There are potential uses. I worked in Binghamton, N.Y., for several years, and the city had several devastating floods. Instead of paying for an expensive helicopter to take overhead photos of the damage, maybe a drone could have been used more inexpensively and effectively (and locally). Maybe a newsroom could use a 3D printer to build models of buildings and landmarks for use in online videos. So when news breaks at, say, the local high school, instead of a 2D drawing, a 3D model could walk the audience through the story. One student suggested that 3D printers could be used to make storyboards for entertainment media. Another suggested advertising uses, particularly at trade shows. The possibilities aren't endless, but they sure feel like it.
Like I said above, these things are already here. Media organizations can either wait to figure it out (which hasn't exactly worked out for them so far in the digital age) or they can start now. Journalism organizations have never been hubs for research and development. Maybe this is a good time to start.
This new technology is exciting, and empowering. But these technologies also raise some real, serious questions that call for real, serious discussion. The use of drones is something that sounds scary to people, and understandably so. (This is why the phrase "unmanned aerial vehicle" (UAV) is being used more often. It may not be elegant, but it does avoid some of the negative connotation the word "drone" has.) It's not just the paparazzi question. With a drone, where's the line between private and public life? How invasive will the drones be? And there is something undeniably unsettling about seeing an unmanned flying object hovering near you. 3D printers raise concerns, especially now that the first 3D printed guns have been made and fired.
To ignore these questions would be to put our heads in the sand, to ignore the real-world concerns. There aren't easy answers. They're going to require an honest dialogue among users, media organizations, and the academy.
Technology may get the headlines. But the technology is worthless without what the old-school journalists call shoe-leather reporting. At the heart of all these projects and all these technologies is the same kind of reporting that has been at the heart of journalism for decades.
Drones can provide video we can't get anywhere else, but the pictures are meaningless without context. The heart of "hacking journalism" is truth telling, going past the spin and delivering real-time facts to our audience. An immersive journalism experience is pointless if the story, the details, and the message aren't meticulously reported. Without a deeper purpose to inform the public, a 3D printer is just a cool gadget.
It's the marriage of the two -- of old-school reporting and new-school technology -- that makes the digital edge such a powerful place to be.
Brian Moritz is a Ph.D. student at the S.I. Newhouse School of Public Communications at Syracuse University and co-editor of the Journovation Journal. A former award-winning sports reporter in Binghamton, N.Y. and Olean, N.Y., his research focuses on the evolution of journalists' routines. His writing has appeared on the Huffington Post and in the Boston Globe, Boston Herald and Fort Worth Star-Telegram. He has a master's degree from Syracuse University and a bachelor's degree from St. Bonaventure.
Back at the Hacks/Hackers Media Party in Buenos Aires, I announced the creation of Code Sprints -- funding opportunities to build open-sourced tools for journalism. We used Code Sprints to fund a collaboration between WNYC in New York and KPCC in Southern California to build a parser for election night XML data that ended up being used on well over 100 sites -- it was a great collaboration to kick off the Code Sprint concept.
Originally, Code Sprints were designed to work like the XML parser project: driven in concept and execution by newsrooms. While that proved great for working with WNYC, we heard from a lot of independent developers working on great tools that fit the intent of Code Sprints, but not the wording of the contract. And we heard from a lot of newsrooms that wanted to use code, but not drive development, so we rethought how Code Sprints work. Today we're excited to announce refactored Code Sprints for 2013.
Now, instead of a single way to execute a Code Sprint, there are three ways to help make Code Sprints happen:
Each of these options means we can work with amazing code, news organizations, and developers and collaborate together to create lots of great open-source tools for journalism.
I always think real-world examples are better than theoreticals, so I'm also excited to announce the first grant of our revamped Code Sprints will go to Jessica Lord to develop her great Sheetsee.js library for the newsroom. Sheetsee has been on the OpenNews radar for a while -- we profiled the project in Source a number of months back, and we're thrilled to help fund its continued development.
Sheetsee was originally designed for use in the Macon, Ga., government as part of Lord's Code for America fellowship, but the intent of the project -- simple data visualizations using a spreadsheet for the backend -- has always had implications far beyond the OpenGov space. We're excited today to pair Lord with Chicago Public Media (WBEZ) to collaborate on turning Sheetsee into a kick-ass and dead-simple data journalism tool.
For WBEZ's Matt Green, Sheetsee fit the bill for a lightweight tool that could help get the reporters "around the often steep learning curve with data publishing tools." Helping to guide Lord's development to meet those needs ensures that Sheetsee becomes a tool that works at WBEZ and at other news organizations as well.
We're excited to fund Sheetsee, to work with a developer as talented as Lord, to collaborate with a news organization like WBEZ, and to relaunch Code Sprints for 2013. Onward!
Dan Sinker heads up the Knight-Mozilla News Technology Partnership for Mozilla. From 2008 to 2011 he taught in the journalism department at Columbia College Chicago where he focused on entrepreneurial journalism and the mobile web. He is the author of the popular @MayorEmanuel twitter account and is the creator of the election tracker the Chicago Mayoral Scorecard, the mobile storytelling project CellStories, and was the founding editor of the influential underground culture magazine Punk Planet until its closure in 2007. He is the editor of We Owe You Nothing: Punk Planet, the collected interviews and was a 2007-08 Knight Fellow at Stanford University.
A version of this post originally appeared on Dan Sinker's Tumblr here.
After quietly piloting the concept for months, BuzzFeed officially launched its own native ad network this March. The mechanics of the network are bizarre, yet intriguing: Participating publishers allow BuzzFeed to serve story previews on their sites which, when clicked, bring visitors to sponsored stories on BuzzFeed.com. The network, whose ads resemble real story teases, is brash and a bit risky, but it may just help publishers circumvent the abuses of today's established, banner-reliant ad network ecosystem.
The current ad network model, or indirect sales model, is a mess. It functions based on an oversupply of simple display ads and is rife with inefficiencies, opening the door for middlemen to reap profits while devaluing publisher inventory. BuzzFeed's native ad network, along with others in a similar mold, has the potential to minimize these drawbacks by giving publishers a simple, safe way to make money through indirect sales channels.
The ad networks we know today came about as a result of the poor economics of the banner ad. A little history: In the early days of Internet publishing, the banner ad seemed to make sense. Just as many publishers began figuring out the Internet by taking content produced for print and slapping it on the web, they took the standard print ad format -- selling advertisers designated space on a page -- and brought it online too. Instead of selling these ads by the inch though (a measurement suitable for edition-based print publishing), digital ads were sold by the impression, or view, a better fit for the unceasing nature of online media.
Over time, the acceptance and standardization of the banner ad brought a number of side effects along with it, the most important being an incentive for publishers to pack their pages with as many banners as possible. For publishers, the decision was easy: The more banner ads they placed on a page, the more money they stood to make. So instead of running a more manageable (and more user-friendly) three or four banner ads, publishers cluttered their pages with 10, 15 or even 20 of them.
Placing ads on a page was only half the equation though; publishers still needed to sell them. As they soon found out, selling premium, above-the-fold ads was a lot easier than getting advertisers to pony up for the glut of below-the-fold, low-quality inventory. A significant percentage of ads thus went unsold, and into the void stepped ad networks. Even at a heavy discount, publishers figured, it was better to get some money from remnant inventory via ad networks as opposed to making nothing. This would prove to be a poor calculation.
Rather than question the logic of creating more inventory than it was possible to sell, publishers stuck with the model, growing their audiences along with their inventory and watching the original ad networks evolve into a multibillion-dollar tech industry fed largely on remnant inventory. Soon, publishers found themselves exposed to more drawbacks than they perhaps initially bargained for, and the original premise of making more money with more ads came into question.
As it grew, the indirect ecosystem not only enabled advertisers to buy publisher inventory at cheaper prices, devaluing even premium inventory, it also allowed them to buy premium publisher audiences on non-premium sites, thanks to the third-party cookie. The Atlantic's Alexis Madrigal zoomed in on this problem in a long piece about the tough economics of the online publishing industry.
"Advertisers didn't have to buy The Atlantic," he wrote. "They could buy ads on networks that had dropped a cookie on people visiting The Atlantic. They could snatch our audience right out from underneath us." The indirect system, in other words, commoditized his audience, leaving his impressions as valuable, in some ways, as those on third-rate sites.
Recognizing these and other abuses as endemic to the system, publishers today are starting to fight back. Many are trying to limit their dependency on banner ads either by cutting them out of their business completely or by constricting supply. David Payne, the chief digital officer at Gannett who oversaw a major USA Today redesign which dramatically reduced the site's supply of banners, put it this way when I spoke with him for an article for Digiday: "I think we've all proven over the last 12 years that the strategy we've been following -- to create a lot of inventory and then sell it at 95 percent off to these middlemen every day -- is not a long-term strategy."
Publishers have started looking for alternative forms of revenue to fill the gap and, so far, the hottest alternative is the native ad. Everyone from The Atlantic to Tumblr to the Washington Post to Twitter is giving it a try, and BuzzFeed, perhaps the extreme example, is all in. It sells only native ads, no banners.
Which brings us to BuzzFeed's ad network. At this early point, it seems like the network should indeed be free of many of the abuses listed above. Its simple nature, for example, ensures that most of the value won't be siphoned out by a group of tech middlemen and will be shared largely by BuzzFeed, participating publishers and, minimally, the ad server. Participating in the network, furthermore, should not devalue publishers' existing inventory, since it will not provide advertisers access to the same inventory at cheaper prices.
BuzzFeed also claims its network steers clear of third-party cookies, the audience-snatching culprit that The Atlantic's Madrigal railed against.
"We believe the ultimate targeting is real human-to-human sharing, digital word of mouth, so we don't do third-party cookie targeting," BuzzFeed advertising executive Eric Harris told me via email. "We're not collecting individually identifiable data and will not sell any data."
The approach should help participating publishers breathe a bit easier -- and they may just want to consider demanding the same from any network they engage with, not just BuzzFeed's.
"It's cleaner; it's more straight up," said Fark.com CEO Drew Curtis of BuzzFeed's network. His site, which is one of the partners participating in the launch, embeds BuzzFeed sponsored story previews on its home page, marking them as sponsored. "I just like the fact that there's no screwing around," Curtis explained in a phone interview, "It's exactly what it appears to be, no more no less." Rates from BuzzFeed's ad network, he added, are significantly higher from other indirect channels. "Advertisers," he said, "are willing to pay for less bulls#*t."
Of course, one question participating publishers might ask themselves is why they are helping BuzzFeed profit from sponsored posts instead of selling them on their own sites. The answer might worry BuzzFeed -- at least until it can get its traffic up to the point of advertiser demand -- but if publishers decide to go that route and withdraw from the network, they may be able to pull themselves away from the bad economics that brought them into the network game in the first place.
Alex Kantrowitz covers the digital marketing side of politics for Forbes.com and PBS MediaShift. His writing has previously appeared in Fortune and the New York Times' Local Blog. Follow Alex on Twitter at @Kantrowitz.
Responsive web design -- where "one design fits all devices" -- continues to gain momentum. Dozens of responsive sites have popped up, and a recent post on Idea Lab from Journalism Accelerator outlined how and why media sites should go responsive.
But hold your horses. Despite the mounting hype, responsive websites are still far from becoming ubiquitous, and for good reason.
As much as responsive web design improves user experience and makes it easier for publishers to go cross-platform, the industry's struggle with delivering profitable ads during the first big shift from print to web is still happening. And in this second big shift to a responsive web, that struggle is magnified.
The surface-level problem that a responsively designed website poses for advertising is that ads are typically delivered in fixed dimensions (not proportional to the size of their container) and typically sold based on exact position. Initial solutions to this issue largely focus on making ads as flexible as the web page, i.e., selling ads in packages that include different sizes to fit all sorts of devices rather than the traditional fixed-width slots, or making ads that are themselves responsive. Ad firm ResponsiveAds, for example, has come up with various strategies for making ads adjust to different screen sizes.
But these approaches are not yet ideal. For example, when the Boston Globe went responsive in 2011, the site used just a few fixed-sized ads, placed in highly controlled positions that could then move around the page.
Andrés Max, a software engineer and user experience designer at Mashable, told me via email: "In the end technology (and screen resolutions) will keep evolving, so we must create ads and websites that are more adaptive than responsive."
Here, he means that ads should adapt to the medium and device instead of just responding to set resolution break-points. After all, we might also need to scale up ads for websites accessed on smart TVs.
Miranda Mulligan, the executive director of the Knight Lab at Northwestern University and part of the team that helped the Globe transition to responsive, agrees. She told me via email, "We need a smarter ad serving system that can detect viewport sizes, device capability, and they should be set up to be highly structured, with tons of associated metadata to maximize flexibility for display."
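To make that concrete, here is a toy sketch of the kind of decision a "smarter" ad server might make, choosing a creative size and format from the viewport width and device capability; the breakpoints and sizes are illustrative assumptions, not any real ad server's logic.

```python
# A toy illustration (not any real ad server's API): pick an ad creative
# from the viewport width and device capability the server has detected.
BREAKPOINTS = [
    (1024, "970x250"),  # large screens: billboard
    (728,  "728x90"),   # tablets / small laptops: leaderboard
    (0,    "300x250"),  # phones: medium rectangle
]

def pick_creative(viewport_width: int, supports_rich_media: bool) -> dict:
    for min_width, size in BREAKPOINTS:
        if viewport_width >= min_width:
            return {
                "size": size,
                "format": "html5" if supports_rich_media else "static",
            }

print(pick_creative(375, supports_rich_media=True))
# -> {'size': '300x250', 'format': 'html5'}
```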
Moreover, many web ads today are rich media ads -- i.e., takeovers, video, pop-overs, etc. -- so incorporating these interactive rich ads requires more than flexibility in sizes. A lot of pressure is resting on designers and developers to innovate ad experiences for the future, but evolving tech tools can help clear a path for making interactive ads flexible and fluid. The arrival of HTML5 brought many helpful additions that aid in creating responsive sites in general.
"HTML5 does provide lots of room for innovation not only for responsive but for richer websites and online experiences," Max said. "For example, we will see a lot of use of the canvas concept for creating great online games and interactions."
In the iceberg of web advertising problems, what ads will look like on responsive sites is just the tip. According to Mulligan, a major underlying problem is still the lack of communication between publishing and advertising. The ad creation and delivery environment is infinitely complex. Publishers range from small to very large, and much of the web development code and creative visuals are made outside of the core web publishing team.
One of the problems is that there are so many moving parts and parties involved: ad networks that publishers subscribe to; ad servers that publishers own themselves; ad servers that publishers license from other companies; sales teams within large publishers; the Interactive Advertising Bureau (IAB); and more. The resulting silos make good communication and flexible results very hard to achieve.
The challenge of mobile advertising on responsive sites, Mulligan later said via phone, "has very little to do with the web design technique and has a lot to do with the fact that we have really complicated ways of getting revenue attached to our websites."
In other words, the display ad system is still broken. And now, the same old problem is more pronounced in responsive mobile sites, where another layer of complication is introduced.
"We have to go and talk to seven different places and say, 'you know how you used to give us creative that would've been fixed-width? What we need from you now is flexible-width,'" Mulligan said.
While responsive web design inherently may not be the source of advertising difficulties, the fact that it amplifies the existing problems is a good reason for web publishers to be cautious about going responsive. In the meantime, a paradigm shift in how web content generates revenue is still desperately needed. Instead of plunging into using responsive ads for responsive sites, perhaps everyone can get in the same room and prototype alternatives to display ads altogether.
The Boston Globe screenshots above were captured by the BuySellAds blog.
Jenny Xie is the PBS MediaShift editorial intern. Jenny is a senior at Massachusetts Institute of Technology studying architecture and management. She is a digital-media junkie fascinated by the intersection of media, design, and technology. Jenny can be found blogging for MIT Admissions, tweeting @canonind, and sharing her latest work and interests here.
"On condition of anonymity" is one of the most important phrases in journalism. At Tor, we are working on making that more than a promise.
The good news: The Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.
The bad news: People who were used to getting away with atrocities are aware that the Internet has made it possible for journalists to talk to sources, gather video and photos from citizens, and to publish despite efforts to censor the news.
Going into journalism is a quick way to make a lot of enemies. Authoritarian regimes, corporations with less-than-stellar environmental records, criminal cartels, and other enemies of the public interest can all agree on one thing: Transparency is bad. Action to counter their activities starts with information. Reporters have long been aware that threats of violence, physical surveillance, and legal obstacles stand between them and the ability to publish. With digital communication, there are new threats and updates to old ones to consider.
Eavesdroppers can reach almost everything. We rely on third parties for our connections to the Internet and voice networks. The things you ask search engines, the websites you visit, the people you email, the people you connect to on social networks, and maps of the places you have been while carrying a mobile phone are available to anyone who can pay, hack, or threaten their way into these records. The use of this information ranges from merely creepy to harmful.
You may be disturbed to learn about the existence of a database with the foods you prefer, the medications you take, and your likely political affiliation based on the news sites you read. On the other hand, you may be willing to give this information to advertisers, insurance companies, and political campaign staff anyway. For activists and journalists, having control over information can be a matter of life and death. Contact lists, chat logs, text messages, and hacked emails have been presented to activists during interrogations by government officials. Sources have been murdered for giving information to journalists.
If a journalist does manage to publish, there is no guarantee that people in the community being written about can read the story. Censorship of material deemed offensive is widespread. This includes opposition websites, information on family planning, most foreign websites, platforms for sharing videos, and the names of officials in anything other than state-owned media. Luckily, there are people who want to help ensure access to information, and they have the technology to do it.
Tor guards against surveillance and censorship by bouncing your communications through a volunteer network of about 3,000 relays around the world. These relays can be set up using a computer on a home connection, using a cloud provider, or through donations to people running servers.
When you start Tor, it connects to directory authorities to get a map of the relays. Then it randomly selects three relays. The result is a tunnel through the Internet that hides your location from websites and prevents your Internet service provider from learning about the sites you visit. Tor also hides this information from Tor -- no one relay has all of the information about your path through the network. We can't leak information that we never had in the first place.
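A deliberately simplified sketch of that path-selection idea follows; the real Tor client weights relays by bandwidth, reuses a long-lived guard relay, and applies exit-policy and family constraints, none of which appear here.

```python
import random

# Purely conceptual: real Tor path selection is far more careful than
# a uniform random draw, but the privacy idea is the same.
relays = [f"relay-{i}" for i in range(3000)]  # ~3,000 volunteer relays

def build_circuit(relays, hops=3):
    """Pick three distinct relays: only the first sees your address,
    only the last sees your destination, and none sees both."""
    return random.sample(relays, hops)

guard, middle, exit_relay = build_circuit(relays)
print(guard, "->", middle, "->", exit_relay)
```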
The Tor Browser, a version of Firefox that pops up when you are connected to the Tor network, blocks browser features that can leak information. It also includes HTTPS Everywhere, software to force a secure connection to websites that offer protection for passwords and other information sent between you and their servers.
Tor is just one part of the solution. Other software can encrypt email, files, and the contents of entire drives -- scrambling the contents so that only people with the right password can read them. Portable operating systems like TAILS can be put on a CD or USB drive, used to connect securely to the Internet, and removed without leaving a trace. This is useful while using someone else's computer at home or in an Internet cafe.
The Guardian Project produces open-source software to protect information on mobile phones. Linux has come a long way in terms of usability, so there are entire operating systems full of audiovisual production software that can be downloaded free of charge. This is useful if sanctions prevent people from downloading copies of commercial software, or if cost is an issue.
These projects are effective. Despite well-funded efforts to block circumvention technology, hundreds of thousands of people are getting past firewalls every day. Every video of a protest that ends up on a video-sharing site or the nightly news is a victory over censorship.
There is plenty of room for optimism, but there is one more problem to discuss. Open-source security software is not always easy to use. No technology is immune to user error. The responsibility for this problem is shared by developers and end users.
The Knight Foundation is supporting work to make digital security more accessible. Usability is security: Making it easier to use software correctly keeps people safe. We are working to make TAILS easier to use. Well-written user manuals and video tutorials help high-risk users who need information about the risks and benefits of technology in order to come up with an accurate threat model. We will be producing more educational materials and will ask for feedback to make sure they are clear.
When the situation on the ground changes, we need to communicate with users to get them back online safely. We will expand our help desk, making help available in more languages. By combining the communication skills of journalists and computer security expertise of software developers, we hope to protect reporters and their sources from interference online.
You can track our progress and find out how to help at https://blog.torproject.org and https://www.torproject.org/getinvolved/volunteer.html.en.
Karen Reilly is Development Director at The Tor Project, responsible for fundraising, advocacy, general marketing, and policy outreach programs for Tor. Tor is a software and a volunteer network that enables people to circumvent censorship and guard their privacy online. She studied Government and International Politics at George Mason University.
We're excited to announce that the first version of the LocalWiki API has just been released!
In June, folks in Raleigh, N.C., held their annual CityCamp event. CityCamp is a sort of "civic hackathon" for Raleigh. During one part of the event, people broke up into teams and came up with projects that used technology to help solve local, civic needs.
What did almost every project pitched at CityCamp have in common? "Almost every final CityCamp idea had incorporated a stream of content from TriangleWiki," CityCamp and TriangleWiki organizer Reid Seroz said in an interview with Red Hat's Jason Hibbets.
The LocalWiki API makes it really easy for people to build applications and systems that push and pull information from a LocalWiki. In fact, the API has already been integrated into a few applications. LocalWiki is an effort to create community-owned, living information repositories that will provide much-needed context behind the people, places, and events that shape our communities.
The winning project at CityCamp Raleigh, RGreenway, is a mobile app that helps residents find local greenways. They plan to push/pull data from the TriangleWiki's extensive listing of greenways.
Another group in the Raleigh-Durham area, Wanderful, is developing a mobile application that teaches residents about their local history as they wander through town. They're using the LocalWiki API to pull pages and maps from the TriangleWiki.
Ultimately, we hope that LocalWiki can be thought of as an API for the city itself -- a bridge between local data and local knowledge, between the quantitative and the qualitative aspects of community life.
You can read the API documentation to learn about the new API. You'll also want to make sure you check out some of the API examples to get a feel for things.
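As a rough illustration of what "pulling information from a LocalWiki" can look like, here is a hedged sketch of fetching a page over HTTP; the base URL, endpoint path and field names below are placeholders, so check the documentation for the actual resource names.

```python
import requests

# Illustrative only: consult the LocalWiki API docs for the real base
# URL, resource paths and fields of the wiki you are targeting.
BASE = "https://yourwiki.example.org/api"

def get_page(slug):
    """Fetch a single wiki page as JSON from a hypothetical endpoint."""
    resp = requests.get(f"{BASE}/page/{slug}", params={"format": "json"})
    resp.raise_for_status()
    return resp.json()

page = get_page("Greenways")
print(page.get("name"), "-", len(page.get("content", "")), "characters of content")
```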
We did a lot of work to integrate advanced geospatial support into the API, extending the underlying API library we were using -- and now everyone using it can effortlessly create an awesome geospatially aware API.
This is just the first version of the API, and there's a lot more we want to do! As we add more structured data to LocalWiki, the API will get more and more useful. And we hope to simplify and streamline the API as we see real-world usage.
Want to help? Share your examples for interacting with the API from a variety of environments -- jump in on the page on dev.localwiki.org or add examples/polish to the administrative documentation.
CityCamp photo courtesy of CityCamp Raleigh.
Philip Neustrom is a software engineer in the San Francisco Bay area. He co-founded DavisWiki.org in 2004 and is currently co-directing the LocalWiki.org effort. For the past several years he has worked on a variety of non-profit efforts to engage everyday citizens. He oversaw the development of the popular VideoTheVote.org, the world's largest coordinated video documentation project, and was the lead developer at Citizen Engagement Laboratory, a non-profit focused on empowering traditionally underrepresented constituencies. He is a graduate of the University of California, Davis, with a bachelor's in Mathematics.
The State Decoded project is putting U.S. state laws online, making them easy to search, understand and navigate. Our laws are organized badly, but The State Decoded is reorganizing them automatically, connecting people with the legal information they need with the ease of a Google search.
In implementing many of the features necessary to provide this experience, it would be easy to try to reinvent the wheel. While it's novel to provide this sort of functionality on a legal website, the functionality itself is hardly new. Recommendations of similar laws are really no different than Amazon's ability to recommend similar products. Making legal text easier to understand is really just an application of natural language processing. And a simple, elegant search interface is nothing Google hasn't figured out.
At its core, all of this is about the same thing: analyzing a series of texts and determining how they relate to one another. That's a solved design pattern.
Many of these design patterns already exist in one piece of software: Solr. The Solr document indexing software can be thought of as a search engine, meant to be installed on a single website, although it's really much more advanced than that. It's the unchallenged champion of search engine software, its power and flexibility unrivaled.
Solr is a natural fit for The State Decoded. It provides some features that would otherwise need to be built from scratch, along with a framework that will make some exciting analysis and collaboration possible.
The use of Solr has been tested out on Virginia Decoded, which is one of the state-level implementations of the State Decoded software. That work was donated by Open Source Connections, a Solr consultancy shop with an interest in good governance. Three interns there -- David Dodge, Joseph Featherston, and Kasey McKenna -- spent a chunk of their summer analyzing legal code structures and figuring out how to best index and represent them within Solr.
Another feature that Solr provides is the ability to respond to remote search queries. That is, every state site that uses Solr can allow their laws to be searched by other sites, which would make it possible to search multiple states' laws in one fell swoop, or for one state site to highlight the existence of similar laws in other states.
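As a sketch of how such a remote query might look from another site's code, the snippet below sends a keyword search to a Solr select handler; the host, core name and field names are assumptions for illustration, not The State Decoded's actual schema.

```python
import requests

# Sketch only: the Solr host, core name and field names here are
# assumptions, not the actual State Decoded configuration.
SOLR = "http://localhost:8983/solr/statedecoded/select"

def search_laws(keywords, state=None, rows=10):
    """Query a Solr index of laws, optionally restricted to one state."""
    query = f"text:({keywords})"
    if state:
        query += f' AND state:"{state}"'
    resp = requests.get(SOLR, params={"q": query, "rows": rows, "wt": "json"})
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

for law in search_laws("notary public", state="Virginia"):
    print(law.get("section"), law.get("catch_line"))
```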
More than anything else, Solr provides a framework to enable innovative analysis of legal codes. For example, Apache Mahout -- a machine learning library -- can be plugged into this Solr setup, which could automatically and completely reorganize an entire legal code into topical clusters, find un-tagged laws and apply topical descriptions to them to make them easier to identify, or analyze the laws that a site visitor has looked at and recommend others that might be of interest to them.
Although requiring the use of Solr for The State Decoded does make the software somewhat more complicated to implement, the benefits are too large to be ignored. Version 0.6 of The State Decoded, due out November 1, will be the first release of the software that is Solr-based.
Waldo Jaquith has been a website developer for 18 years and an open-government technology activist for 16 years. He holds a degree in political science from Virginia Tech and was a 2005 fellow at the Sorensen Institute for Political Leadership. He and his wife live on a small farm near Charlottesville, Va., where he works for the Miller Center at the University of Virginia.
The best stories across the web on journalism and digital education
1. What it means to reboot journalism education (Poynter)
2. U.S. schools have fallen behind on science, tech prep (OC Register)
3. U.S. colleges get more virtual (Today)
4. A guide to selling, swapping, buying and renting textbooks (Business Insider)
5. Brain Hive offers on-demand K-12 e-book library lending (Publishers Weekly)
Almost a year ago, I was hired by Ushahidi to work as an ethnographic researcher on a project to understand how Wikipedians managed sources during breaking news events.
Ushahidi cares a great deal about this kind of work because of a new project called SwiftRiver that seeks to collect and enable the collaborative curation of streams of data from the real-time web about a particular issue or event. If another Haiti earthquake happened, for example, would there be a way for us to filter out the irrelevant, the misinformation, and build a stream of relevant, meaningful and accurate content about what was happening for those who needed it? And on Wikipedia's side, could the same tools be used to help editors curate a stream of relevant sources as a team rather than individuals?
When we first started thinking about the problem of filtering the web, we naturally thought of a ranking system that would rank sources according to their reliability or veracity. The algorithm would consider a variety of variables involved in determining accuracy, as well as whether sources have been chosen, voted up or down by users in the past, and eventually be able to suggest sources according to the subject at hand. My job would be to determine what those variables are -- i.e., what were editors looking at when deciding whether or not to use a source?
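To make that initial idea concrete, here is a deliberately naive sketch of the kind of scoring we first envisioned, blending a source's track record with user votes; the variables and weights are invented for illustration, and the rest of this post explains why this framing turned out to be inadequate.

```python
def naive_source_score(past_accuracy, upvotes, downvotes):
    """A deliberately simple blend of editorial history and user votes.

    past_accuracy: fraction of the source's past claims judged accurate (0-1).
    upvotes/downvotes: how often users have endorsed or rejected the source.
    """
    vote_total = upvotes + downvotes
    vote_score = upvotes / vote_total if vote_total else 0.5  # neutral default
    return 0.7 * past_accuracy + 0.3 * vote_score

print(naive_source_score(past_accuracy=0.9, upvotes=40, downvotes=10))  # 0.87
```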
I started the research by talking to as many people as possible. Originally I was expecting that I would be able to conduct 10 to 20 interviews as the focus of the research, finding out how those editors went about managing sources individually and collaboratively. The initial interviews enabled me to hone my interview guide. One of my key informants urged me to ask questions about sources not cited as well as those cited, leading me to one of the key findings of the report (that the citation is often not the actual source of information and is often provided in order to appease editors who may complain about sources located outside the accepted Western media sphere). But I soon realized that the editors with whom I spoke came from such a wide variety of experience, work areas and subjects that I needed to restrict my focus to a particular article in order to get a comprehensive picture of how editors were working. I chose a 2011 Egyptian revolution article on Wikipedia because I wanted a globally relevant breaking news event that would have editors from different parts of the world working together on an issue with local expertise located in a language other than English.
Using Kathy Charmaz's grounded theory method, I chose to focus on editing activity (in the form of talk pages, edits, statistics and interviews with editors) from January 25, 2011, when the article was first created (within hours of the first protests in Tahrir Square), to February 12, when Mubarak resigned and the article changed its name from "2011 Egyptian protests" to "2011 Egyptian revolution." After reviewing the big-picture analyses of the article using Wikipedia statistics on top editors, locations of anonymous editors, etc., I started work with an initial coding of the actions taking place in the text, asking the question, "What is happening here?"
I then developed a more limited codebook using the most frequent and significant codes and proceeded to compare different events with the same code (looking up relevant edits of the article in order to get the full story), and to look for the tacit assumptions the actions left out. I did all of this coding in Evernote because it seemed the easiest (and cheapest) way of importing large amounts of textual and multimedia data from the web, but it wasn't ideal: Talk pages, once imported, need to be reformatted, and I ended up coding the data in a single column, since putting each talk-page conversation in its own cell would have been too time-consuming.
I then moved to writing a series of thematic notes on what I was seeing, trying to understand, through writing, what the common actions might mean. Finally, I moved to writing the report, bringing together what I believed were the most salient themes into a description and analysis of what was happening, organized around the two key questions the study was trying to answer: How do Wikipedia editors, working together, often geographically distributed and far from where an event is taking place, piece together what is happening on the ground and then present it in a reliable way? And how could this process be improved?
Ethnography Matters has a great post by Tricia Wang that talks about how ethnographers contribute (often invisible) value to organizations by showing what shouldn't be built, rather than necessarily improving a product that already has a host of assumptions built into it.
And so it was with this research project that I realized early on that a ranking system conceptualized this way would be inappropriate -- for the simple reason that, along with characteristics for determining whether a source is accurate (such as whether the author has a history of producing accurate news articles), a number of important variables are independent of the source itself. On Wikipedia, these include the number of secondary sources in the article (Wikipedia policy calls for editors to use a majority of secondary sources), whether the article covers a breaking news story (in which case the majority of sources might have to be primary, eyewitness sources), and whether the source is notable in the context of the article. (Misinformation can also be relevant if it is widely reported and significant to the course of events, as Judith Miller's New York Times stories were for the Iraq War.)
This means that you could have an algorithm for determining how accurate the source has been in the past, but whether you make use of the source or not depends on factors relevant to the context of the article that have little to do with the reliability of the source itself.
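As a rough illustration of that point -- again a hypothetical sketch, with the context fields and thresholds invented to loosely mirror the Wikipedia considerations above -- the decision to use a source depends on the article as much as on the source's own track record:

from dataclasses import dataclass

@dataclass
class ArticleContext:
    breaking_news: bool             # early coverage may have to lean on primary sources
    secondary_source_ratio: float   # share of the article's citations that are secondary
    source_is_notable_here: bool    # e.g. widely reported misinformation that shaped events

def should_consider(source_score: float, source_is_primary: bool, ctx: ArticleContext) -> bool:
    """A high reliability score is neither necessary nor sufficient for inclusion."""
    if ctx.source_is_notable_here:
        return True   # even an unreliable source can be essential to the story
    if source_is_primary and not ctx.breaking_news and ctx.secondary_source_ratio < 0.5:
        return False  # the article already leans too heavily on primary sources
    return source_score > 0.6  # only after the contextual checks does the score matter

# Example: a well-scored eyewitness account may still be passed over outside a breaking story.
print(should_consider(0.9, True, ArticleContext(False, 0.3, False)))  # -> False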
Another key finding recommending against source ranking is that Wikipedia's authority rests on its requirement that every potentially disputed phrase be backed up by reliable sources that readers can check, whereas source ranking necessarily requires that the calculation be invisible in order to prevent gaming. It is already a potential weakness that Wikipedia citations are not always the original source of the information (since editors often choose citations that will be deemed more acceptable to other editors), so further hiding how sources are chosen would undermine this important value.
On the other hand, having editors provide a rationale for choosing particular sources, and showing the variety of sources available rather than only those chosen because of loading-time constraints, may be useful -- especially since these discussions do often take place on talk pages but are practically invisible because they are difficult to find.
Analyzing the talk pages of the 2011 Egyptian revolution article case study enabled me to understand how Wikipedia editors set about the task of discovering, choosing, verifying, summarizing, adding information and editing the article. It became clear through the rather painstaking study of hundreds of talk pages that editors were:
It was important to uncover the work process that editors were following, because any tool that assists with source management would have to accord as closely as possible with the way editors like to do things on Wikipedia. Since the process is run by volunteers, and volunteers decide which tools to use, this is critical to the acceptance of any new tool.
After developing a typology of sources and isolating different types of Wikipedia source work, I made two sets of recommendations as follows:
Regarding a ranking system for sources, I'd argue that a descriptive repository of major media sources from different countries would be incredibly beneficial, but that a system ranking sources by usage would yield limited results. (We know, for example, that the BBC is the most used source on Wikipedia by a wide margin, but that doesn't necessarily help editors choose a source for a breaking news story.) Exposing the variables used to determine relevancy (rather than invisibly combining them into a single magic number) and showing the progression of sources over time offer some opportunities for innovation, as sketched below. But this requires developers to think outside the box about what sources (beyond static texts) look like, where such sources and expertise are located, and how trust is earned in the age of Twitter. The full report provides details of the findings and recommendations and will be available soon.
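To illustrate the "exposing the variables" recommendation -- again a sketch under my own assumptions, with factor names invented rather than taken from the report -- a tool could surface the individual factors for editors to weigh themselves instead of collapsing them into one opaque score:

def relevancy_breakdown(historical_accuracy: float, is_secondary: bool,
                        prior_uses_in_article: int, language: str) -> dict:
    """Return named factors for editors to judge, rather than one aggregate number."""
    return {
        "historical_accuracy": historical_accuracy,
        "is_secondary_source": is_secondary,
        "prior_uses_in_this_article": prior_uses_in_article,
        "language": language,
        # deliberately no single "magic number" here
    }

print(relevancy_breakdown(0.9, True, 3, "Arabic"))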
This is my first comprehensive ethnographic project, and one thing I've noticed, compared with other design and research projects using different methodologies, is that although the process can seem painstaking, and it can prove difficult to turn hundreds of small observations into findings that are actionable and meaningful to designers, getting close to the experience of editors is extremely valuable work that is rare in Wikipedia research. I realize now that until I actually studied an article in detail, I knew very little about how Wikipedia works in practice. And this is only the beginning!
Heather Ford is a budding ethnographer who studies how online communities get together to learn, play and deliberate. She currently works for Ushahidi and is studying how online communities like Wikipedia work together to verify information collected from the web and how new technology might be designed to help them do this better. Heather recently graduated from the UC Berkeley iSchool, where she studied the social life of information in schools, educational privacy and Africans on Wikipedia. She is a former Wikimedia Foundation Advisory Board member and the former Executive Director of iCommons, an international organization started by Creative Commons to connect the open education, access to knowledge, free software, open access publishing and free culture communities around the world. She was a co-founder of Creative Commons South Africa, of the South African nonprofit The African Commons Project, and of GeekRetreat, a community-building initiative that brings together South Africa's top web entrepreneurs to talk about how to make the local Internet better. At night she dreams about writing books and finding time to draw.
This article also appeared at Ushahidi.com and Ethnography Matters. Get the full report at Scribd.com.
Another tool to waste time.
AllThingsD :: “So many people are worried that technology is mediating us, but I think it’s just giving us a new way to hang out with our friends,” says Salzberg, co-creator of Makr.io, a “collaborative Web remixing tool” where users try to one-up each other by posting funny captions on pictures, a la lolcats.
A report by Liz Gannes, allthingsd.com
This is part 2 in a series. In part 1, I talked about how we think of ourselves at Ushahidi and how we think of success in our world. It set up the context for this post, which is about where we're going next as an organization and with our platform.
We realize that it's hard to understand just how much is going on within the Ushahidi team unless you're in it. I'll try to give a brief overview, and I'll answer questions in the comments if you need more info on any of these areas.
Ushahidi's primary source of income is private foundation grant funding (Omidyar Network, Hivos, MacArthur, Google, Cisco, Knight, Rockefeller, Ford), and we don't take any public funding from any country so that we are more easily able to maintain our neutrality. Last year, we embarked on a strategy to diversify our revenue stream, endeavoring to decrease our percentage of revenues based on grant funding and offset that with earned revenue from client projects. This turned out to be very hard to do within our current team structure, as the development team ended up being pulled off of platform-side work and client-side work suffered for it. Many internal deadlines were missed, and we found ourselves unable to respond to the community as quickly as we wanted.
This year we split out an "external projects team" made up of some of the top Ushahidi deployers in the world, and their first priority is to deal with client and consulting work, followed by dev community needs. We're six months into this strategy, and it seems like this team format will continue to work and grow. Last year, 20% of our revenue was earned; this year we'd like to get that to the 30-40% range.
When anyone joins the Ushahidi team, we tend to send them off to some conference to speak about Ushahidi in the first few weeks. There's nothing like knowing that you're going to be onstage talking about your new company to galvanize you into really learning about and understanding everything about the organization. Basically, we want you to understand Ushahidi and be on the same mission with us. If you are, you might explain what we do in a different way than I do onstage or in front of a camera, but you'll get the right message out regardless.
You have a lot of autonomy within your area of work -- or so we always claimed internally. This was tested earlier this year, when David Kobia, Juliana Rotich and I, as founders, were forced to ask whether we were serious about that claim or were just paying it lip service. Brian Herbert leads the Crowdmap team, which in our world means he's in charge of the overall architecture, strategy and implementation of the product.
The Crowdmap team met up in person earlier this year and hatched a new product plan. They re-envisioned what Crowdmap could be, started mocking up the site, and began building what would be a new Crowdmap, a complete branch off the core platform. I heard this was underway, but didn't get a brief on it until about six weeks in. When I heard what they had planned, and got a complete walk-through from Brian, I was floored. What I was looking at was so different from the original Ushahidi, and thus from what we currently have as Crowdmap, that I couldn't align the two in my mind.
My initial reaction was to shut it down. Fortunately, I was in the middle of a random 7-hour drive between L.A. and San Francisco, so that gave me ample time to think by myself before I made any snap judgments. More importantly, it also gave me time to call up David and talk through it with him. Later that week, Juliana, David and I had a chat. It was at that point that we realized that, as founders, we might have blinders on of our own. Could we be stuck in our own 2008 paradigm? Should we trust our team to set the vision for a product? Did the product answer the questions that guide us?
The answer was yes.
The team has done an incredible job of thinking deeply about Crowdmap users and then translating that usage into a complete redesign that is both beautiful and functional. It's user-centric, as opposed to map-centric, which is the greatest change. But after getting past our initial sense of unfamiliarity, we are confident that this is what we need to do. We need to experiment and disrupt ourselves -- after all, if we aren't willing to take risks and try new things, then we fall into the same trap as those we disrupted.
For about a year we've been asking ourselves, "If we rebuilt Ushahidi, with all we know now, what would it look like?"
To redesign, re-architect and rebuild any platform is a huge undertaking. Usually it means part of the team is left to maintain and support the older code while the others build the shiny new thing. It means that while you spend months and months building the new thing, you appear stagnant and less responsive to the market. It means that you might get it wrong, and that what you build will be irrelevant by the time it launches.
Finally, after many months of internal debate, we decided to go down this path. We've started with a battery of interviews with users, volunteer developers, deployers and internal team members. Heather Leson's blog post this past week on the design direction we're heading in shows where we're going. Ushahidi v3 is a complete redesign of Ushahidi's core platform, from the first line of code to the last HTML tag. The front end is mobile-web-focused out of the gate, and the back-end admin area is about streamlining the publishing and verification process.
At Ushahidi we are still building, theming and using Ushahidi v2.x, and will continue to do so for a long time. This idea of a v3 is just vaporware until we actually decide to build it, but the exercise has already borne fruit because it forces us to ask what the platform might look like if we weren't constrained by the legacy structure we had built. We'd love to get more input from everyone as we go forward.
After a couple of fits and starts, SwiftRiver is now being tried out by 500-plus beta testers. It's about 75% of the way to completion but usable, so it's out and we're getting feedback from everyone on what needs to be changed, added and removed to make it the tool we all need for managing large amounts of data. It's an expensive, server-intensive platform to run, so those who use it on our servers in the future will have to pay for that use. As always, the core code will be made available, free and open source, for those who would like to set it up and run it on their own.
The amount of change that Ushahidi is undertaking is truly breathtaking to us. We're cognizant of just how much we're putting on the line. However, we know this: in our world of technology, those who don't disrupt themselves will themselves be disrupted. In short, we'd rather go all-in to make this change happen ourselves than be mired in stagnation and defensive activity.
As always, this doesn't happen in a vacuum for Ushahidi. We've relied on those of you who code and deploy the platforms to help us guide them for over four years. Many of you have been a part of one of these product rethinks. If you aren't already, and would like to be, get in touch with me or Heather to get involved and help us re-envision and build the future.
Raised in Kenya and Sudan, Erik Hersman is a technologist and blogger who lives in Nairobi. He is a co-founder of Ushahidi, a free and open-source platform for crowdsourcing information and visualizing data. He is the founder of AfriGadget, a multi-author site that showcases stories of African inventions and ingenuity, and an African technology blogger at WhiteAfrican.com. He currently manages Ushahidi's operations and strategy and is in charge of the iHub, Nairobi's Innovation Hub for the technology community, which brings together entrepreneurs, hackers, designers and the investment community. Erik is a TED Senior Fellow, a PopTech Fellow and speaker, and an organizer of Maker Faire Africa. You can find him on Twitter at @WhiteAfrican.
This post originally appeared on Ushahidi's blog.
The Next Web :: Disneyland is going to get a touch more awesome if new research coming out of Disney’s Pittsburgh research lab gets its day. The project, called Botanicus Interacticus, turns plants into multi-touch sensors.
A report by Matthew Panzarino, thenextweb.com
Visit Disney's Research Labs here: disneyresearch.com
HT: Andreas Floemer, t3n
"Tell the chef, the beer is on me."
"Basically the price of a night on the town!"
"I'd love to help kickstart continued development! And 0 EUR/month really does make fiscal sense too... maybe I'll even get a shirt?" (there will be limited edition shirts for two and other goodies for each supporter as soon as we sold the 200)