
June 20 2013

14:02

The newsonomics of Spies vs. Spies

So who do you root for in this coming battle, as Google petitions the feds? Are you on the side of Big Brother or Little Brother — and remind me, which is which? It’s a 50-year update on Mad Magazine’s iconic Spy vs. Spy.

The Surveillance State is — at least for this month — in front of the public. The Guardian’s rolling revelations of National Security Agency phone and web spying have again raised the bogeyman of Big Data — not the Big Data that all the airport billboards offer software to tame, but the Big Data that the unseen state can use against us. We’ve always had a love/hate relationship with big technology and disaster, consuming it madly as Hollywood churns out mad entertainments. We like our dystopia delivered hot and consumable within two hours. What we don’t like is the ooky feeling we are being watched, or that we have to make some kind of unknowable choice between preventing the next act of terror and preserving basic Constitutional liberties.

Americans’ reactions to the stories are predictable. Undifferentiated outrage: “I knew they were watching us.” Outrageous indifference: “What do you expect given the state of the world?” That’s not surprising. Americans and Europeans have had the same problem thinking about the enveloping spider’s web of non-governmental digital knowledge. (See The Onion headline: “Area Man Outraged His Private Information Being Collected By Someone Other Than Advertisers.”)

While top global media, including The Guardian, The Washington Post, and The New York Times, dig into the widening government spying questions, let’s look at the ferment in the issues of commercial surveillance. There’s a lot of it, and it would take several advanced degrees and decoder rings to understand all of it. No, it’s not the same thing as the issues surrounding PRISM. But it will be conflated with national security, and indeed the overlapping social and political questions are profound. Let’s look at some recent developments and some of the diverse players in this unfolding drama and see where publishers do — and could — fit in.

The commercial surveillance culture is ubiquitous, perhaps even less hemmed in by government policy than the NSA, and growing day by day. While Google asks the FISA court to allow it to release more detail about the nature of federal data demands, its growing knowledge of us seems to have no bounds: from our daily searches, to the pictures (street to sky) taken of our homes, to the whereabouts relayed by Google Maps, and on and on.

It’s not just Google, of course. Facebook, whose users spend an average of seven hours per month online disclosing everything, is challenging Google for king of the data hill. A typical news site might have 30 to 40 cookies — many of them from ad-oriented “third parties” — dropped from it. That explains why those “abandoned” shopping carts, would-be shoe purchases, and fantasy vacation ads now go with us seemingly everywhere we move on the web. It’s another love/hate relationship: We’re enamored of what Google and Facebook and others can do for us, but we’re disquieted by their long reach into our lives. It’s a different flavor of ooky.

We are targeted. We are retargeted. Who we are, what we shop for, and what we read is known by an untold number of companies out there. Though we are subject to so much invisible, involuntary, and uncompensated crowdsourcing, the outrage is minimal. It’s not that it hasn’t been written about. Among others, The Wall Street Journal has done great work on it, including its multi-prize-winning three-year series on “What They Know.”

Jim Spanfeller, now CEO of Spanfeller Media Group and the builder of Forbes.com, related the PRISM NSA disclosures to commercial tracking in a well-noticed column (“At What Price Safety? At What Price Targeted Advertising?”) last week. His point: We’re all essentially ignorant of what’s being collected about us, and how it is being used. As we find out more, we’re not going to be happy.

His warning to those in the digital ad ecosystem: Government will ham-handedly regulate tracking of consumer clicks if the industry doesn’t become more “honest and transparent.”

Spanfeller outlined for me the current browser “Do Not Track” wars, which saw their latest foray yesterday. Mozilla, parent of Firefox, the third most-popular browser by most measures, said it will move forward with tech that automatically blocks third-party cookies in its browser. Presumably, users will be able to turn such cookies back on, but most will go with the defaults in the browsers they use.

The Mozilla move, much contested and long in the works, follows a similar decision by Microsoft with its release of the latest Internet Explorer. Microsoft is using a “pro-privacy” stance as a competitive weapon against Google, advancing both Bing search and IE. Spanfeller notes that Microsoft’s move hasn’t had much effect, at least yet, because “sites aren’t honoring it.”
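What would honoring it actually involve? Here is a minimal sketch, assuming a Python/Flask server (the route and the ad-tag logic are invented for illustration), of a site branching on the Do Not Track request header that IE and other browsers send:

```python
# Hedged sketch: honoring the "Do Not Track" request header server-side.
# Assumes a Flask app; the ad-tag behavior is purely illustrative.
from flask import Flask, request

app = Flask(__name__)

@app.route("/article")
def article():
    # Browsers with Do Not Track enabled send the header "DNT: 1".
    if request.headers.get("DNT") == "1":
        return "article page without behavioral ad tags"
    return "article page with behavioral ad tags"
```

Nothing compels a site to run such a check, which is exactly the gap Spanfeller describes.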

These browser wars are one front, and much decried by forces like the Interactive Advertising Bureau and the Digital Advertising Alliance, with its “Ad Choices” program — which prefer consumer opt-out. Another front is an attempt at industry consensus through the World Wide Web Consortium, or W3C. Observers of that process believe it is winding its way to failure. Finally, also announced yesterday was the just-baked Cookie Clearinghouse, housed at the Stanford Center for Internet and Society. The driving notion, to be fleshed out: creating whitelists and blacklists of cookies allowed and blocked. (Good summaries by both Ad Age’s Kate Kaye and ZDNet’s Ed Bott.)
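The Clearinghouse’s mechanics are still to be fleshed out, but the driving notion reduces to a pair of list lookups before a browser accepts a cookie. A hedged sketch, in which the list contents and the default rule for unlisted third parties are my own assumptions:

```python
# Sketch of the Cookie Clearinghouse notion: centrally maintained allow-
# and block-lists consulted before accepting a cookie. Domains invented.
ALLOWED = {"news-example.com"}       # cookies explicitly permitted
BLOCKED = {"tracker-example.net"}    # cookies explicitly refused

def accept_cookie(cookie_domain: str, is_first_party: bool) -> bool:
    if cookie_domain in BLOCKED:
        return False
    if cookie_domain in ALLOWED or is_first_party:
        return True
    # Assumed default for unlisted third parties: block, matching
    # Mozilla's announced third-party-cookie stance.
    return False
```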

Never too far from the action, serial entrepreneur John Taysom was in Palo Alto this week as well. Taysom, a current senior fellow at Harvard’s Advanced Leadership Initiative, is an early digital hothouse pioneer, having led Reuters’ Greenhouse project way back in the mid-’90s. His list of web startups imagined and sold is impressive, and now he’s trying to put all that experience to use around privacy issues. As a student of history, old and modern, his belief is this: “When they invented the Internet, they didn’t add a privacy layer.”

“We need an Underwriters Laboratory for our time,” he told me Wednesday. UL served a great purpose at a time (1894) of another tech revolution: electricity. Electricity, like computer tech these days, seemed exciting, but the public was wary. It wasn’t afraid of behind-the-scenes chicanery — it literally was concerned about playing with fire. So UL, as a “global independent safety science company” — a kind of neutral, Switzerland-like enterprise — was set up to assure the public that electrical appliances were indeed tested and safe.

Could we do the same with the Internet?

He’s now working on a model, colloquially named “Three’s A Crowd,” to reinsert a “translucent” privacy layer in the tech stack. His model is based on a lot of current thinking on how to both better protect individual privacy and actually improve the targeting of messages by business and others. It draws on k-anonymity and Privacy by Design principles, among others.

In brief, Taysom’s Harvard project is about creating a modern UL. It would be a central trusted place, or really a set of places, that institutions and businesses (and presumably governments) could draw from, but which protect individual identities. He calls it an I.D. DMZ, or demilitarized zone.

He makes the point that the whole purpose of data mining is to get to large enough groups of people with similar characteristics — not to find the perfect solution or offer for each individual. “Go up one level above the person,” to a small, but meaningfully sized, crowd. The idea: increase anonymity, giving people the comfort of knowing they are not being individually targeted.

Further, the levels of anonymity could differ depending on the kind of information associated with anyone. “I don’t really mind that much about people knowing my taste in shirts. If it’s about the location of my kids, I want six sigmas” of anonymity, he says. Taysom, who filed a 2007 U.K. patent, now approved, on the idea, is putting together both his boards of advisors and trustees.
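For the technically inclined, the k-anonymity principle Taysom draws on can be stated compactly: a dataset protects individuals if every combination of identifying attributes is shared by at least k people — the “small, but meaningfully sized, crowd.” A minimal sketch in Python, with invented records and field names:

```python
# Minimal k-anonymity check: every quasi-identifier combination must be
# shared by at least k records. People and field names are invented.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

people = [
    {"zip": "021**", "age_band": "30-39", "shirt_taste": "plaid"},
    {"zip": "021**", "age_band": "30-39", "shirt_taste": "solid"},
    {"zip": "021**", "age_band": "30-39", "shirt_taste": "plaid"},
]
# A targeter sees only the crowd of three, never one person.
print(is_k_anonymous(people, ["zip", "age_band"], k=3))  # True
```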

Then there are emerging marketplace solutions to privacy. What havoc the digital marketplace hath wrought may be solved by…the digital marketplace. D.C.-based Personal.com is one of the leading players in that emerging group. Yes, this may be the coming personal data economy. Offering personal data lockers starting at $29.99 a year, Personal.com is worth a quick tour. What if, it asks, you could store all your info in a digital vault? Among the kinds of “vaults”: passwords, memberships and rewards programs, credit and debit card info, health insurance, and lots more.

It’s a consumer play that’s also a business play. The company is now targeting insurance, finance, and education companies and institutions, which would then offer consumers the opportunity to ingest their customer information and keep it in a vault; auto-fill features then let consumers re-use such information once it is banked. Think Mint.com, but broader.
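Personal.com hasn’t published its architecture, but the vault concept itself is straightforward to sketch: the consumer holds the key, and the service banks only ciphertext. A hedged illustration using Python’s cryptography library (the stored record is invented, and Fernet merely stands in for whatever crypto a real vault would use):

```python
# Sketch of a personal data vault: client-side encryption so the service
# stores only ciphertext it cannot read. The record below is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays with the consumer, not the vault
vault = Fernet(key)

banked = vault.encrypt(b'{"card": "4111...", "insurer": "Example Health"}')
# Later, an auto-fill feature decrypts the banked data for re-use.
original = vault.decrypt(banked)
```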

Importantly, while Personal.com deals potentially with lots of kinds of digital data, its business doesn’t touch on the behavioral clickstream data that is at the heart of the Do Not Track fracas.

Do consumers want such a service? Personal.com won’t release any numbers on customers or business partners. Getting early traction may be tough.

Embedded in the strategy: a pro-consumer tilt. Personal.com offers an “owner data agreement,” basically certifying that it is the consumer, not Personal.com, that owns the data. It is a tantalizing idea: What if we individually could control our own digital data, setting parameters on who could use what and how? What if we as consumers could monetize our own data?

Neither Personal.com nor John Taysom’s project nor the various Do Not Track initiatives envision that kind of individually driven marketplace, and I’ve been told there are a whole bunch of technical reasons why it would be difficult to achieve. Yet, wouldn’t that be the ultimate capitalist, Adam Smith solution to this problem of runaway digital connectedness — a huge exchange that would facilitate the buying and selling of our own data?

For publishers, all this stuff is headache-producing. News publishers from Manhattan to Munich complain about all the third-party cookies feeding low-price exchanges, part of the reason their digital ad businesses are struggling. But there is a wide range of divergent opinion about how content-creating publishers will fare in a Do Not Track world. They may benefit from diminished competition, but would they be able to adequately target for advertisers? Will Google and Facebook do even better in that world?

So, for publishers, these privacy times demand three things:

  • Upscale their own data mining businesses. “There’s a big difference between collecting and using data,” says Jonathan Mendez, CEO of Yieldbot, which works with publishers to provide selling alternatives to Google search. That’s a huge point. Many publishers don’t yet do enough with their first-party data to adequately serve advertiser needs.
  • Take a privacy-by-design approach to emerging business. How you treat consumers in product design and presentation is key here, with some tips from Inc. magazine.
  • Adopt a pro-privacy position. Who better than traditionally civic-minded newspaper companies to help lead in asserting a sense of ownership of individual data? If news companies are to re-assert themselves as central to the next generation of their communities and of businesses, what better position than pro-privacy — and then helping individuals manage that privacy better?

It’s a position that fits with publishers’ own interests, and first-party data gathering (publisher/reader) makes more intuitive sense to citizen readers. For subscribers — those now being romanced into all-access member/subscribers — the relationship may make even more sense. Such an advocacy position could also help re-establish a local publisher as a commercial hub.

News and magazine publishers won’t have to create the technology here — certainly not their strong suit — but they can be early partners as consortia and companies emerge in the marketplace.

Photo by Fire Monkey Fire used under a Creative Commons license.

April 02 2013

10:39

How Public Lab Turned Kickstarter Crowdfunders Into a Community

Public Lab is structured like many open-source communities, with a non-profit hosting and coordinating the efforts of a broader, distributed community of contributors and members. However, we are in the unique position that our community creates innovative open-source hardware projects -- tools to measure and quantify pollution -- and unlike software, it takes some materials and money to actually make these tools. As we've grown over the past two years, from just a few dozen members to thousands today, crowdfunding has played a key role in scaling our effort and reaching new people.

[Image: DIY Spectrometry Kit Kickstarter]

Kickstarter: economies of DIY scale

Consider a project like our DIY Spectrometry Kit, which was conceived of just after the Deepwater Horizon oil spill to attempt to identify petroleum contamination. In the summer of 2012, just a few dozen people had ever built one of our designs, let alone uploaded and shared their work. As the device's design matured to the point that anyone could easily build a basic version for less than $40, we set out to reach a much larger audience while identifying new design ideas, use cases, and contributors, through a Kickstarter project. Our theory was that many more people would get involved if we offered a simple set of parts in a box, with clear instructions for assembly and use.

By October 2012, more than 1,600 people had backed the project, raising over $110,000 -- and by the end of December, more than half of them had received a spectrometer kit. Many were up and running shortly after the holidays, and we began to see regular submissions of open spectral data at http://spectralworkbench.org, as well as new faces and strong opinions on Public Lab's spectrometry mailing list.

Kickstarter doesn't always work this way: Often, projects turn into startups, and the first generation of backers simply becomes the first batch of customers. But as a community whose mission is to involve people in the process of creating new environmental technologies, we had to make sure people didn't think of us as a company but as a community. Though we branded the devices a bit and made them look "nice," we made sure previous contributors were listed in the documentation, which explicitly welcomed newcomers into our community and encouraged them to get plugged into our mailing list and website.


As a small non-profit, this approach is not only in the spirit of our work, but essential to our community's ability to scale up. To create a "customer support" contact rather than a community mailing list would be to make ourselves the exclusive contact point and "authority" for a project which was developed through open collaboration. For the kind of change we are trying to make, everyone has to be willing to learn, but also to teach -- to support fellow contributors and to work together to improve our shared designs.

Keeping it DIY

One aspect of the crowdfunding model that we have been careful about is the production methods themselves. While it's certainly vastly different to procure parts for 1,000 spectrometers, compared to one person assembling a single device, we all agreed that the device should be easy to assemble without buying a Public Lab kit -- from off-the-shelf parts, at a reasonable cost. Thus the parts we chose were all easily obtainable -- from the aluminum conduit box enclosure, to the commercially available USB webcams and the DVD diffraction grating which makes spectrometry possible.


While switching to a purpose-made "holographic grating" would have made for a slightly more consistent and easy-to-assemble kit (not to mention the relative ease of packing it vs. chopping up hundreds of DVDs with a paper cutter...), it would have meant that anyone attempting to build their own would have to specially order such grating material -- something many folks around the world cannot do. Some of these decisions also made for a slightly less optimal device -- but our priority was to ensure that the design was replicable, cheap, and easy. Advanced users can take several steps to dramatically improve the device, so the sky is the limit!
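To see why a webcam and a sliver of DVD suffice, it helps to look at the software side: the grating spreads incoming light across one axis of the camera image, so a spectrum is essentially column-averaged pixel intensity mapped to wavelength. A rough sketch (the linear 400-700 nm calibration is a placeholder; real devices calibrate against known emission lines, such as those of a fluorescent lamp):

```python
# Rough sketch of DIY spectrometry: the DVD grating spreads wavelengths
# across one image axis, so averaging each pixel column gives an
# intensity curve. The 400-700 nm mapping below is a naive placeholder.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("webcam_frame.png").convert("L"), dtype=float)
intensity = frame.mean(axis=0)                  # average down each column
pixels = np.arange(intensity.size)
wavelengths = 400 + pixels * (300 / intensity.size)

for wl, level in zip(wavelengths[::40], intensity[::40]):
    print(f"{wl:6.1f} nm  intensity {level:6.1f}")
```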

The platform effect

One clear advantage of distributing kits, besides the bulk prices we're able to get, is that almost 2,000 people now have a nearly identical device -- so they can learn from one another with greater ease, not to mention develop applications and methodologies which thousands of others can reproduce with their matching devices. We call this the "platform effect" -- where this "good enough" basic design has been standardized to the point that people can build technologies and techniques on top of it. In many ways, we're looking to the success of the Arduino project, which created not only a common software library, but a standardized circuit layout and headers to support a whole ecology of software and hardware additions which are now used by -- and produced by -- countless people and organizations.

[Image: Spectral Challenge screenshot]

As we continue to grow, we are exploring innovative ways to use crowdfunding to get people to collaboratively use the spectrometers they now have in hand to tackle real-world problems. Recently, we have launched the Spectral Challenge, a kind of "X Prize for DIY science", but it's crowdfunded -- meaning that those who support the goals of the Challenge can participate in the competition directly, or by contributing to the prize pool. Additionally, Public Lab will continue to leverage more traditional means of crowdfunding as our community develops new projects to measure plant health and produce thermal images -- and we'll have to continue to ensure that any kits we sell clearly welcome new contributors into the community.

The lessons we've learned from our first two kit-focused Kickstarters will help us with everything from the box design to the way we design data-sharing software. The dream, of course, is that in years to come, as we pass the 10,000- and 100,000-member marks, we continue to be a community which -- through peer-to-peer support -- helps one another identify and measure pollution without breaking the bank.

The creator of GrassrootsMapping.org, Jeff Warren designs mapping tools and visual programming environments, and flies balloons and kites as a fellow in the Center for Future Civic Media and as a student at the MIT Media Lab's Design Ecology group, where he created the vector-mapping framework Cartagen. He co-founded Vestal Design, a graphic/interaction design firm, in 2004, and directed the Cut&Paste Labs project, a year-long series of workshops on open source tools and web design, in 2006-7 with Lima designer Diego Rotalde. He is a co-founder of Portland-based Paydici.com.

August 30 2012

05:36

Reddit as journalism: Crowdsourcing an interview with the President

GigaOM :: Reddit landed a personal appearance by the President of the United States on Wednesday when Barack Obama stopped by for one of the site’s “Ask Me Anything” interviews — an event that further adds to the web community’s reputation as an alternative source of journalism.

Hi, I’m Barack Obama, President of the United States. Ask me anything. I’ll be taking your questions for half an hour starting at about 4:30 ET.

Proof it's me: https://twitter.com/BarackObama/status/240903767350968320

Hey, everyone: I'll be taking your questions online today. Ask yours here: OFA.BO/gBof44 -bo

— Barack Obama (@barackobama) August 29, 2012

A report by Mathew Ingram, gigaom.com

August 17 2012

14:53

Twitter's Clockwork Raven: Crowdsourced analytics

GigaOM :: Twitter says its Clockwork Raven web app will make it easier for even non-technical people to post job requests to the Amazon Mechanical Turk crowdsourcing job site. The app is now available for download from Github.

A report by Barb Darrow, gigaom.com
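Clockwork Raven itself is a Rails web app, but for readers unfamiliar with Mechanical Turk, posting a task (a “HIT”) programmatically looks roughly like the sketch below, written with the modern boto3 library rather than anything Twitter used; every task parameter is invented:

```python
# Illustrative only: submitting a human-evaluation task to Mechanical Turk
# with boto3. The question XML is schematic; a real QuestionForm needs
# QuestionIdentifier, QuestionContent, and AnswerSpecification elements.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>...</Question>
</QuestionForm>"""

mturk.create_hit(
    Title="Rate the relevance of a search result",
    Description="Simple human-evaluation task",
    Reward="0.05",                      # dollars, passed as a string
    MaxAssignments=3,                   # how many workers see the task
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
```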

August 02 2012

17:26

The Washington Post launches crowdsourcing platform

Washington Post :: The Washington Post today announced it has launched a new platform for crowdsourcing. “Crowd Sourced” is The Post’s special feature that allows Post journalists to ask questions about today’s concerns and begin a conversation about these issues. Users will be able to answer those questions and vote for the ideas they value most, so the most popular responses are surfaced on the page.

Continue to read here www.washingtonpost.com

April 26 2012

14:00

LedeHub to Foster Open, Collaborative Journalism

I'm honored to be selected as one of the inaugural AP-Google Journalism and Technology Scholarship fellows for the 2012-13 academic school year, and am excited to begin work on my project, LedeHub.

I believe in journalism's ability to better the world around us. To fully realize the potential of journalism in the digital age, we need to transform news into a dialogue between readers and reporters. LedeHub does just that, fostering collaborative, continuous and open journalism while incorporating elements of crowdsourcing to allow citizens, reporters and news organizations to come together in unprecedented ways.

LedeHub in Action

Here's a potential case study: "Alice" isn't a journalist, but she loves data and can spot the potential for a story amid the rows and columns of a CSV file. She comes across some interesting census data illustrating the rise of poverty in traditionally wealthy Chicagoland suburbs, but isn't quite sure how to use it, so she points her browser to www.ledehub.com. She creates a new story repository called "census-chicago-12," tags it under "Government Data," and commits the numbers.

Two days later, "Bob" -- a student journalist with a knack for data reporting -- is browsing the site and comes across Alice's repository. He forks it and commits a couple paragraphs of analysis. Alice sees Bob's changes and likes where he's headed, so she merges it back into her repository, and the two continue to collaborate. Alice works on data visualization, and Bob continues to do traditional reporting, voicing the story of middle-class families who can no longer afford to send their children to college.

A few days later, a news outlet like the Chicago Tribune sees "census-chicago-12" and flags it as a promising repository -- pulls it, edits, fact-checks and publishes the story, giving Alice and Bob their first bylines.

As you can see, LedeHub re-imagines the current reporting and writing workflow while underscoring the living nature of articles. By representing stories as "repositories" -- with the ability to edit, update, commit and revert changes over time -- the dynamic nature of news is effectively captured.

Fostering Open-Source Journalism

GitHub and Google Code are social coding platforms that have done wonders for the open-source community. I'd like to see similar openness in the journalism industry.

My proposal for LedeHub is to adapt the tenets of Git -- a distributed version control system -- and appropriate its functionality as it applies to the processes of journalism. I will implement a web application layer on top of this core functionality to build a tool for social reporting, writing and coding in the open. This affords multiple use cases for LedeHub, as illustrated in the case study I described above -- users can start new stories, or search for and contribute to stories already started. I'd like to mirror the basic structure of GitHub, but re-appropriate the front end to cater to the news industry and be more reporter-focused, not code-driven. That said, here's a screenshot of the upcoming LedeHub repository on GitHub (to give you a general idea of what the LedeHub dashboard might look like):

[Screenshot: LedeHub repository on GitHub]

Each story repository may contain text, data, images or code. The GitHub actions of committing (adding changes), forking (diverging story repositories to allow for deeper collaboration and account for potential overlap) and cloning will remain analogous in LedeHub. Repositories will be categorized according to news "topics" or "areas" like education or politics. Users -- from citizens to reporters or coders -- will have the ability to "watch" different story repositories they are interested in and receive updates when changes to that story are made. Users can also comment on different "commits" for a story, offering their input or suggestions for improvement. GitHub offers a "company" option, which allows for multiple users to be added to the organization, a feature I would like to mimic in my project for news outlets, in addition to Google Code's "issues" feature.
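To make the Git analogy concrete, here is a hedged sketch of the Alice-and-Bob workflow from the case study above, rendered with the GitPython library; LedeHub's storage layer doesn't exist yet, so the file names and branch layout are my own assumptions:

```python
# Sketch: a "story repository" with git semantics, via GitPython.
from git import Repo

repo = Repo.init("census-chicago-12")          # Alice starts a story repo
with open("census-chicago-12/data.csv", "w") as f:
    f.write("suburb,poverty_rate\nNaperville,8.1\n")
repo.index.add(["data.csv"])
repo.index.commit("Commit census data on suburban poverty")
trunk = repo.active_branch.name                # "master" or "main"

repo.git.checkout("-b", "bob-analysis")        # Bob "forks" the story
with open("census-chicago-12/analysis.md", "w") as f:
    f.write("Poverty is rising fastest in traditionally wealthy suburbs.\n")
repo.index.add(["analysis.md"])
repo.index.commit("Add two paragraphs of analysis")

repo.git.checkout(trunk)
repo.git.merge("bob-analysis")                 # Alice merges Bob's changes
```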

Next Steps

I recognize that the scope of my project is ambitious, and my current plan is to segment implementation into iterations -- to build an initial prototype to test within one publication and expand from there.

Journalism needs to become more open, like the web. Information should be shared. The collaboration between the New York Times and the Guardian over WikiLeaks data was inspiring: two "competing" organizations sharing confidential information for publication. With my project, LedeHub, I hope to foster similar transparency and collaboration.

So, that's the proposal. There's still a lot to figure out. For example, what's the best way to motivate users to collaborate? What types of data can be committed? What copyright issues need to be considered? Should there be compensation involved? Fact-checking? Sound off. I'd love to hear your thoughts.


Katie Zhu is a junior at Northwestern University, studying journalism and computer science, and is particularly interested in human-computer interaction, data visualization and interaction design. She has previously interned at GOOD in Los Angeles, where she helped build GOOD's mobile website. She continues development work part-time throughout the school year, and enjoys designing and building products at the intersection of news and technology. She was selected as a finalist in the Knight-Mozilla News Technology Partnership in 2011.


April 20 2012

14:00

How Ushahidi Deals With Data Hugging Disorder

At Ushahidi, we have interacted with various organizations around the world, and the key thing we remember from reaching out to some NGOs (non-governmental organizations) in Kenya is that when we began in 2008 we faced a lot of resistance, with organizations unwilling to share data that was often locked in PDFs rather than machine-readable formats.

This was especially problematic as we were crowdsourcing information about the events that happened that year in Kenya. Our partners in other countries have had similar challenges in gathering relevant and useful data that is locked away in cabinets, yet was paid for by taxpayers. The progress in the Gov 2.0 and open data space around the world has greatly encouraged our team and community.

When you've had to deal with data hugging disorder of NGOs, open data is a welcome antidote and opportunity. Our role at Ushahidi is to provide software to help collect data, and visualize the near real-time information that's relevant for citizens. The following are some thoughts from our team and what I had hoped to share at OGP in Brazil.

ushahidi.jpg

Government Data is important

  • Comprehensive - A national census, for example, covers the entire country, giving it a far larger sample than most questionnaires.
  • Verified - Government data is "clean" data; it has been verified -- for example, the number of schools in a particular region. Crowdsourcing projects done by government can be quite dependable. (Read this example of how Crowdmap was used by the Ministry of Agriculture in Afghanistan to collect commodity prices.)
  • Official - Government data forms the basis of government decision making and policy. If you want to influence government policy and interventions, your case needs to be based on official data.
  • Expensive - Because it is comprehensive and verified, government data is expensive to collect -- an expense covered by the taxpayer.

Platforms are important

Libraries were built before people could read. Libraries drove the demand for literacy. Therefore, it makes sense that data and data platforms exist before citizens have become literate in data. As David Eaves wrote in the Open Knowledge Foundation blog:

It is worth remembering: We didn't build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data or public policy literate citizenry, we build them so that citizens may become literate in data, visualization, coding and public policy.

Some countries, like Kenya, now have the data, and open-source platforms are now available not just for Kenya but worldwide. What are we missing?

Platforms like Ushahidi are like fertile land, and having open data is like having good seeds. (Good data equals very good seeds.) But fertile land and seeds are not much without people and actions on that very land. We often speak about technology being 10 percent of what needs to go into a deployment project -- the rest is often partnership, hard work and, most of all, community. Ordinary citizens can be farmers of the land; we need to get ordinary citizens involved at the heart of open government for it to be powerful.

Ushahidi's role

Accessible data: The ownership debate has been settled as we agree government data belongs to the citizens. However, ownership is useless without access. If you own a car that you do not have access to, that car is useless to you. In the same way, if our citizens own data they have no access to, it's useless to them. Ownership is exercised through access. Ushahidi makes data accessible -- our technology "meets you where you are." No new devices are needed to interact with the data.

Digestible data: Is Africa overpopulated? If Africa is overpopulated or risks overpopulation, what intervention should we employ? Some have suggested sterilization. However, the data shows us that the more education a woman has, the fewer babies she has. Isn't a better intervention increasing education opportunities for women? This intervention also has numerous additional advantages for a country -- more educated people are usually more economically productive.

Drive demand for relevant data: Governments are frustrated that the data they have released is not being used. Is this because data release is driven mainly by the supply side, not the demand side -- governments release what they want to release, not what is wanted? How do we identify data that will be useful to the grassroots? We can crowdsource demand for data. For example: The National Taxpayer Alliance in Kenya has shown that when communities demand and receive relevant data, they become more engaged and empowered. There are rural communities suing MPs for misusing constituency development funds. They knew the funds were misused because of the availability of relevant data.

Closing the feedback loop: The key to behavioral change lies in feedback loops. These are very powerful, as exemplified by the incredible success of platforms like Facebook, which are dashboards of our social lives and those of our networks. What if we had a dashboard of accountability and transparency for the government? How about a way to find out if the services funded and promised for the public were indeed delivered, and the service level of said services? For example: The concept of Huduma in Kenya showed an early prototype of what such a dashboard would look like. We are working on more ways of using the Ushahidi platform to provide for this specific use case. Partnership announcements will be made in due course.

All this, to what end? Efficiency and change

If we as citizens can point out what is broken, and if the governments can be responsive to the various problems there are, we can perhaps see a delta in corruption and service provision.

Our role at Ushahidi is making sure there's no lack of technology to address citizens' concerns. Citizens can also be empowered to assist each other if the data is provided in an open way.

Open Data leading to Open Government

It takes the following to bridge open data and open government:

  • Community building - Co-working spaces allow policy makers, developers and civic hackers to congregate, have conversations, and build together. Examples are places like the iHub in Kenya, Bongo Hive in Zambia, and Code for America meetups in San Francisco, just to name a few.
  • Information gathering and sharing - Crowdsourcing plus traditional methods give not only static data but a near real-time view of what's going on on the ground.
  • Infrastructure sharing - Build capacity once, reuse many times -- e.g., Crowdmap.
  • Capacity building - If it works in Africa, it can work anywhere. Developing countries have a particularly timely opportunity of building an ecosystem that is responsive to citizens and can help to leapfrog by taking open data, adding real-time views, and most of all, acting upon that data to change the status quo.
  • Commitment from government - We can learn from Chicago (a city with a history of graft and fraud), where current CTO John Tolva and Mayor Rahm Emanuel have been releasing high-value data sets, running hackathons, and putting up performance dashboards. The narrative of Chicago is changing to one of a startup haven! What if we could do that for cities with the goal of making smart cities truly smart from the ground up? At the very least, we could surface a real-time view of conditions on the ground, from traffic, energy, environment and other information that can be useful for urban planners and policy makers. Our city master plans need a dose of real-time information so we can build for our future and not for our past.
  • Always including local context and collaboration in the building, implementation and engagement with citizens.

Would love to hear from you about how Ushahidi can continue to partner with you, your organization or community to provide tools for processing data easily and, most importantly, collaboratively.

Daudi Were, programs director for Ushahidi, contributed to this post.

A longer version of this story can be found on Ushahidi's blog.

06:02

New Crowdsourcing, Curation and Liveblogging Training

Hi all! I’ve been traveling a lot for Digital First lately to spread the gospel of social media to my colleagues. So, if you’ve seen my presentations before, you’d know that I make very wordy Powerpoints so that people who weren’t there to see me prattle on about my favorite things can still follow what we went [...]

April 19 2012

14:31

Kickstarter gives birth to a market: Crowdfunding turns Pebble into a platform

GigaOM :: By now, you’ve probably seen the soaring funding numbers put up by Pebble, the e-paper smartwatch that has broken records as the biggest Kickstarter project to date with $4.7 million raised. It is even more interesting to see how the Kickstarter campaign is helping create an instant ecosystem around Pebble’s app platform, giving it the kind of scale that is very hard to conjure up so quickly in the software world.

Continue to read Ryan Kim, gigaom.com

April 18 2012

17:46

Who watches the watchmen? The Guardian crowdsources its investigation into online tracking

As Guardian journalists were preparing to launch their new investigative project on cookies and other online tools that track you around the web, they realized they had to figure out just what kind of trackers exist on their own website.

Turns out this isn’t an easy task. “There are so many legacy ones from us that we forgot about — we had to do some research,” said Ian Katz, deputy editor of the Guardian.

Like many news sites, the Guardian has a mix of cookies — some for geotargeting where readers are, some for registering readers on the site, some for advertising, and more. The end result was this illuminating guide that lays itself over a story page and shows what cookies the Guardian uses. That kind of transparency fits with the Guardian’s embrace of what it calls open journalism, but it’s also an incentive for readers to uncover what kind of cookies follow them around the web. As part of their investigation, the Guardian wants readers to help guide their reporting by telling them what cookies they encounter in their day-to-day internet use. Thanks to Mozilla’s Collusion add-on, users will be able to track the trackers and then hand over that data to the Guardian.

“Essentially what we’re saying is, ‘You tell us what cookies you receive over the period you use this tool and we will find out which are the most prevalent cookies,’” Katz said. “We will do the work of finding out what they are and what they do.”

This week the Guardian is publishing Battle for the Internet, a series that looks at the future of the Internet and the players involved, from the private sector to governments, militaries, and activists. Cookies have a new significance because of a regulation passed last year that requires sites based in the U.K. to inform users they are being watched.

Joanna Geary, the digital development editor for the Guardian, said the idea is to go deep on cookies — not just what they do, but the companies behind them, what happens to the information they collect, and how they connect various parts of the web. Geary said the Collusion tool was perfect for this project because it not only tracks the trackers, but it provides a helpful — if not scary — illustration of how cookies work across various sites. “The Guardian being what it is, and being conscious of our commitment to open journalism, it felt like this was the right project to get our readers involved in,” she said. As for the Guardian’s own self-examination, “I think it would be weird if we had undergone any sort of crowdsourcing project without doing it,” she said. “We have the responsibility of telling people what we use on our site ourselves.”

Asking the crowd for help is a regular part of the Guardian’s playbook, and because of that they’ve learned a bit about what works and what doesn’t. Katz said a big part of success in crowdsourcing is the ease of contributing to a given project, whether you are asking someone to look at a document for a few minutes or put a pin on a map. This project could prove a bit more tricky since it requires downloading a browser add-on (that only works on Firefox) and later exporting data to the Guardian.

But just as important as the ease-of-use question is the motivation, Katz said. “You have to tap into an issue people are relatively fired up about,” he said. “You can’t sort of create that sense of urgency unless people already feel it.” Katz said people need to not only feel like they are making a difference — they also have to see their work in action. Katz admits that not all of the Guardian’s crowdsourcing efforts have been as successful as they hoped, saying the responsibility for that lies with the paper “when we have not reflected that work back in an interesting way.”

Katz said the graphics team will work on visualizations from the cookie data to display findings from readers. But the ultimate fate of any further reporting rests in what the audience finds. Instead of reporting out what it sees as problem cookies, the paper is asking readers to show what trackers are a growing part of daily life online. “It’s a genuine sort of combined enterprise, that both sides are bringing something to the party,” he said. “In this case, you bring the data and we’ll do the reporting.”
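The aggregation the Guardian describes, pooling readers' exports to surface the most prevalent trackers, is conceptually simple. A sketch, assuming a simplified export format of site-to-tracker lists (Collusion's real export is a richer connection graph):

```python
# Sketch: count which tracking domains appear most often across readers'
# Collusion exports. The JSON format here is a simplifying assumption:
# {"visited-site.com": ["tracker-a.net", "tracker-b.com"], ...}
import json
from collections import Counter

tracker_counts = Counter()
for path in ["reader1.json", "reader2.json"]:   # one export per reader
    with open(path) as f:
        for site, trackers in json.load(f).items():
            tracker_counts.update(set(trackers))

for domain, n in tracker_counts.most_common(10):
    print(f"{domain}: {n} site/reader sightings")
```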

Image from Danny Sullivan used under a Creative Commons license.

12:06

Kickstarter 28 days to go - 'Tube': the open-source 3D animation experiment

The Tube | Kickstarter :: Animation with substance. The crowd funds it, the crowd owns it. Tube is the experimental production of a 3D animated short about the dream and failure and achievement of immortality. It's also a love letter to free software and open culture that marks their convergence with independent filmmaking.

Continue to read Bassam Kurdali, www.kickstarter.com

April 16 2012

05:52

Crowdfunded - Pebble: $2,965,439 pledged of a $100,000 goal for an 'E-Paper Watch'

The facts: Pebble's "E-Paper Watch for iPhone and Android" in numbers (Apr 16, 07:47 CET): Backers: 20,914 - $2,965,439 pledged of $100,000 goal and still 32 days to go.

AmandaPeyton.com :: Back to the Pebble watch. Consumer electronics are among the most well-funded projects on Kickstarter despite the fact that it’s dubious whether they should even be included in the scope of fundable projects. Which is really fascinating because that to me means that consumer electronics as a market has been ripe for disruption all along. That said, it’s ridiculously not obvious that disruption would come from the same place that allows an artist with a sharpie, a hotel room and a webcam a way to make the art she wants.

Continue to read Amanda Peyton, amandapeyton.com

April 15 2012

11:08

In case of emergency: Twitcident crowdsources tweets to help out in crises

The Verge :: Researchers from the Delft University of Technology in the Netherlands have created Twitcident, a framework for filtering and analyzing tweets to crowdsource information about crises. For the past ten months the system has been in testing as a support program for the Dutch police and fire department.


Continue to read www.theverge.com

Visit the service site twitcident.com

April 14 2012

05:58

Behind closed doors: Broadcasters battle online disclosure of political ad buys

ProPublica :: The Federal Communications Commission is scheduled to vote April 27 on whether to require TV stations to post online public information about political ad buys. Some form of the rule seems likely to pass, but the industry and others are lobbying the FCC to alter the nature of the final rule.

Crowdsourced: ProPublica is collecting stations' public paper files with the help of readers:

[ProPublica:] With the help of readers around the country, ProPublica is collecting stations’ public paper files containing data on political ads and posting them online because the information is generally unavailable elsewhere. See “Free the Files.”

Continue to read Justin Elliott, www.propublica.org

April 12 2012

12:33

Crowdsourced video production: Poptent launches premium video production unit

The Next Web :: Poptent, a US-based crowdsourced video production and media company, has launched Poptent Productions, a premium video production business unit for big brands and agencies. Founded in 2007, Poptent lets creatives sign up for free and offer their services.

Continue to read thenextweb.com

March 30 2012

13:41

Kickstarter shares the effects of its blockbuster season

TechCrunch :: February was a big month for Kickstarter. Not only did they have a number of record-breaking projects, but they were shoved into the mainstream consciousness with a flood of traditional news coverage.

But there was always the question of whether these thousands of pledges would have any lasting effect on the site. Could such a rush of attention actually have negative effects, increasing competition and bringing in more projects than the site’s population of donors can handle?

Continue to read Devin Coldewey, techcrunch.com

March 29 2012

14:00

Creating a Taxonomy of News Partnerships

In collaborative journalism right now, we can see media theorist Clay Shirky's urge toward vast experimentation made manifest. The journalism partnerships emerging around the country vary in size and type, and the practices that define those partnerships are still being negotiated and hashed out in newsrooms and communities.

Some partnerships bring together very different news organizations in order to provide expanded coverage, while others coalesce around similar newsrooms to cut down on duplicative efforts. Some focus on local or hyperlocal news, while others focus on regional and national reporting. Some bring the resources of multiple organizations together to focus on one issue in depth, while others partner with the public to capture a range of different angles on one issue.

This diversity in approaches to collaborative journalism is one of its strengths -- and one of its great challenges.

A Collaboration Framework

Journalists, editors and managers at news organizations are trying to navigate the parameters of these new kinds of partnerships as they happen. Developing a framework to categorize journalism collaborations is useful as practitioners look for lessons and models to replicate and build on. The dynamics between different newsrooms, and their various motivations for partnering, shape how a given collaboration is structured. While some collaborations may defy categorization, a few basic partnership models have emerged:

  • Commercial News Collaborations: These partnerships tend to be contractual agreements between commercial news organizations such as television stations and newspapers. They are often defined by the legal deals that structure them: Shared Services Agreements, Local News Sharing Agreements, Newspaper Broadcast Cross-Ownership, Joint Operating Agreements, etc. Many of these agreements consolidate resources, equipment, production and even newsroom staff. These kinds of commercial partnerships and near-mergers pre-date the larger collaborative trend we've witnessed across newsrooms since 2008.

  • Non-Profit and Commercial Collaborations: These partnerships are usually between public or non-commercial entities and a private news organization. This model gained significant attention during the Comcast-NBC merger debates because Comcast promised to expand local news coverage on NBC stations through partnerships with non-profit journalism organizations. Other examples include the New York Times' local news partnerships with non-profits in major media markets and sites like California Watch, whose model is based on these partnerships. In these arrangements, the commercial news outlet often serves as the distributor of content the non-profit produces. However, more complex and expansive non-profit and commercial reporting collaborations are also emerging.

  • Public and Non-Commercial Collaborations: These partnerships connect multiple public media outlets or bring public radio and TV stations together in collaboration with other non-profit newsrooms. The networked nature of the U.S. public media system, in which stations across the country are both producers and distributors, has meant that partnerships within the system are built into the DNA of the organizations. In recent years, innovative public media producers have built on that history and taken collaboration to the next level. We have also seen inventive partnerships between public media broadcasters and non-profit digital news startups.

  • University Collaborations: University partnerships with local news organizations are engaging journalism and mass communications students in hands-on reporting efforts that are producing some great journalism. This model takes many forms, from curricular-based service-learning efforts to campus-based investigative reporting workshops, and involves both commercial and non-commercial news organizations.

  • Community and Audience Collaborations: Journalists are also collaborating with their communities in new and important ways. Crowdsourcing and crowdfunding -- as exemplified by projects at The Guardian, ProPublica and public media's Public Insight Network and Spot.Us -- are finding new ways for audiences to contribute to the funding, research and editorial decisions that shape the news. At their best, these projects are not just transactional, wherein the audience hands over something (money, information) and gets something in return (a story or other journalistic product); they are transformative for both journalists and participants -- as in the case of Departures, a web-based documentary series about Los Angeles developed by public media station KCET in close partnership with community members.

This taxonomy focuses primarily on editorial collaborations around the production of specific news products; however, each collaborative model listed above also encompasses cases in which news organizations can and do collaborate around shared infrastructure. Examples of infrastructure-driven collaboration include: broadcasters sharing equipment, such as news helicopters; two non-profits sharing the costs of developing a mobile app; and universities acting as fiscal agents for journalism organizations. Organizations like J-Lab, the Media Consortium and the Investigative News Network are all helping facilitate both editorial and infrastructural partnerships.

No One-Size-Fits-All Solutions

Too often, in debates over the future of journalism, we get caught up looking for a silver bullet -- the one business model to rule them all. Some debates about collaboration echo this narrow focus, assuming there will be a universal set of practices or guidelines that newsrooms can replicate and scale across the country. The categorization above should highlight the vastly different approaches to journalistic collaboration that exist.



We are still at the early stages of experimentation with large- and small-scale collaboration across the news and journalism ecosystem. Partners differ, motivations differ, needs differ and funding differs. This list isn't meant to suggest that news organizations only draw lessons from partnerships that most closely resemble their own -- indeed, quite the opposite is true: We should be drawing on lessons from across models, but we should do so with an awareness of the unique context of each collaboration. Each of the various models outlined above presents unique challenges and opportunities that deserve to be unpacked and detailed in more depth.

Do you think these five categories are comprehensive, or would you add others? Or would you suggest categorizing collaboration more by the type of journalism than the structure of the newsroom? For example, we might reorganize the list above to highlight similarities and differences between collaborations organized around investigative reporting, niche journalism, covering local beats, etc. Let me know how you would organize the field in the comments below.

Photo of silver bullet by Flickr user Ed Schipel.

Josh Stearns is a journalist, organizer and community strategist. He is Journalism and Public Media Campaign Director for Free Press, a national, non-partisan, non-profit organization working to reform the media through education, organizing and advocacy. He was a co-author of "Saving the News: Toward a national journalism strategy," "Outsourcing the News: How covert consolidation is destroying newsrooms and circumventing media ownership rules," and "On the Chopping Block: State budget battles and the future of public media." Find him on Twitter at @jcstearns.


February 20 2012

09:31

“All that is required is an issue about which others are passionate and feel unheard”

Here’s a must-read for anyone interested in sports journalism that goes beyond the weekend’s player ratings. As one of the biggest names in European football goes into administration, The Guardian carries a piece by the author of Rangerstaxcase.com, a blogger who “pulled down the facade at Rangers”, including a scathing commentary on the Scottish press’s complicity in the club’s downfall:

“The Triangle of Trade to which I have referred is essentially an arrangement where Rangers FC and their owner provide each journalist who is “inside the tent” with a sufficient supply of transfer “exclusives” and player trivia to ensure that the hack does not have to work hard. Any Scottish journalist wishing to have a long career learns quickly not to bite the hands that feed. The rule that “demographics dictate editorial” applied regardless of original footballing sympathies.

“[...] Super-casino developments worth £700m complete with hover-pitches were still being touted to Rangers fans even after the first news of the tax case broke. Along with “Ronaldo To Sign For Rangers” nonsense, it is little wonder that the majority of the club’s fans were in a state of stupefaction in recent years. They were misled by those who ran their club. They were deceived by a media pack that had to know that the stories it peddled were false.”

Over at Rangerstaxcase.com, the site expands on this in its criticism of STV for uncritical reporting:

“There does not appear to be a point where the media learns its lessons. There is no capacity for improvement. No voice that says: we have been misled by people from this organisation so often in the past that we need to get corroboration before we publish anything more. Alastair Johnston, you will recall, artfully created the impression for Rangers’ supporters and shareholders that the payment of the tax bills that are now crushing their club would be the responsibility of the parent company. His words then were carefully chosen to avoid actually lying, but his intended audience seemed in little doubt at the time as to what they thought he meant. Either Mr. Johnston has been misrepresented by STV or he appears to be trying to gain an advantage in the battle to oust Whyte by misleading Rangers’ supporters.”

The piece also includes some interesting reflections on collaborative journalism and crowdsourcing:

“Rangerstaxcase.com has become a platform for some of the sharpest minds and most accomplished professionals to share information, debate, and form opinions based upon a rational interpretation of the facts rather than PR-firm fabrications. In all of the years when the mainstream media had a monopoly on opinion forming and agenda setting, the more sentient football fan had no outlet for his or her opinions. Blogs and other modern media, like Twitter, have democratised information distribution.

“Rangerstaxcase.com has gone far beyond its half-baked “I know a secret” origins to become a forum for citizen journalism. The power of the crowd-sourced investigation initiated by anyone who is able to ignite the interest of others is a force that has the potential to move mountains in our society. All that is required is an issue about which others are passionate and feel unheard.”

Rangerstaxcase.com is not unique. Combine the passion of sports supporters with the lack of critical faculty in much sports journalism and you have potentially fertile ground.

For my own club, Bolton Wanderers, for example, I turn to Manny Road (site currently laid low by a malware attack).

For the Olympics there will be a regular and easy supply of good news stories to wade through, but also an extremely active network of local and international blogs from people scrutinising the foggier side of the Olympic spirit, which is why I set up Help Me Investigate the Olympics and am encouraging my students to connect with those communities.

05:55

Yelp goes public: What it means about the future of crowdsourced media

GigaOM :: But the important thing here is that the filing means Yelp could become one of the first almost entirely crowdsourced media entities to go public. Yelp’s entire business is built on the more than 25 million reviews that it has accrued over the years from its users. That user-submitted content is the reason that Yelp attracts more than 66 million unique visitors a month.

What’s interesting is how Yelp’s valuation compares it to other Internet companies and what it means for the future of media and publishing.



Continue to read Ryan Lawler, gigaom.com
