
June 20 2013

14:02

The newsonomics of Spies vs. Spies

So who do you root for in this coming battle, as Google petitions the feds? Are you on the side of Big Brother or Little Brother — and remind me, which is which? It’s a 50-year update on Mad Magazine’s iconic Spy vs. Spy.

The Surveillance State is — at least for this month — in front of the public. The Guardian’s rolling revelations of National Security Agency phone and web spying have again raised the bogeyman of Big Data — not the Big Data that all the airport billboards offer software to tame, but the Big Data that the unseen state can use against us. We’ve always had a love/hate relationship with big technology and disaster, consuming it madly as Hollywood churns out mad entertainments. We like our dystopia delivered hot and consumable within two hours. What we don’t like is the ooky feeling we are being watched, or that we have to make some kind of unknowable choice between preventing the next act of terror and preserving basic Constitutional liberties.

Americans’ reactions to the stories are predictable. Undifferentiated outrage: “I knew they were watching us.” Outrageous indifference: “What do you expect given the state of the world?” That’s not surprising. Americans and Europeans have had the same problem thinking about the enveloping spider’s web of non-governmental digital knowledge. (See The Onion headline: “Area Man Outraged His Private Information Being Collected By Someone Other Than Advertisers.”)

While top global media, including The Guardian, The Washington Post, and The New York Times, dig into the widening government spying questions, let’s look at the ferment in the issues of commercial surveillance. There’s a lot of it, and it would take several advanced degrees and decoder rings to understand all of it. No, it’s not the same thing as the issues surrounding PRISM. But it will be conflated with national security, and indeed the overlapping social and political questions are profound. Let’s look at some recent developments and some of the diverse players in this unfolding drama and see where publishers do — and could — fit in.

The commercial surveillance culture is ubiquitous, perhaps even less hemmed in by government policy than the NSA, and growing by the day. While Google asks the FISA court to allow it to release more detail about the nature of federal data demands, its growing knowledge of us seems to have no bounds: from our daily searches, to the pictures (street to sky) taken of our homes, to the whereabouts relayed by Google Maps, and on and on.

It’s not just Google, of course. Facebook, whose users spend an average of seven hours per month online disclosing everything, is challenging Google for king of the data hill. A typical news site might drop 30 to 40 cookies — many of them from ad-oriented “third parties” — on its readers. That explains why those “abandoned” shopping carts, would-be shoe purchases, and fantasy vacation ads now go with us seemingly everywhere we move on the web. It’s another love/hate relationship: We’re enamored of what Google and Facebook and others can do for us, but we’re disquieted by their long reach into our lives. It’s a different flavor of ooky.
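
How does a “third party” follow you from site to site? Here is a minimal sketch of the mechanics, with invented domains and endpoints; real ad networks are far more elaborate.

    // A publisher page embeds a script from an ad network's domain:
    //   <script src="https://ads.tracker.example/pixel.js"></script>
    //
    // pixel.js (hypothetical) then phones home with the page context:
    var beacon = new Image();
    beacon.src = 'https://ads.tracker.example/collect' +
        '?page=' + encodeURIComponent(location.href) +
        '&ref=' + encodeURIComponent(document.referrer);
    // The response sets a long-lived, unique-ID cookie on the tracker's
    // domain. Any other site embedding the same network sends that cookie
    // back, so the abandoned shopping cart on one site can chase you to
    // the next.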

We are targeted. We are retargeted. Who we are, what we shop for, and what we read is known by an untold number of companies out there. Though we are subject to so much invisible, involuntary, and uncompensated crowdsourcing, the outrage is minimal. It’s not that it hasn’t been written about. Among others, The Wall Street Journal has done great work on it, including its multi-prize-winning three-year series on “What They Know.”

Jim Spanfeller, now CEO of Spanfeller Media Group and the builder of Forbes.com, related the PRISM NSA disclosures to commercial tracking in a well-noticed column (“At What Price Safety? At What Price Targeted Advertising?”) last week. His point: We’re all essentially ignorant of what’s being collected about us, and how it is being used. As we find out more, we’re not going to be happy.

His warning to those in the digital ad ecosystem: Government will ham-handedly regulate tracking of consumer clicks if the industry doesn’t become more “honest and transparent.”

Spanfeller outlined for me the current browser “Do Not Track” wars, which saw their latest foray yesterday. Mozilla, parent of Firefox, the third most-popular browser by most measures, said it will move forward with tech that automatically blocks third-party cookies in its browser. Presumably, users will be able to turn such cookies back on, but most will go with the defaults in the browsers they use.

The Mozilla move, much contested and long in the works, follows a similar decision by Microsoft with its release of the latest Internet Explorer. Microsoft is using a “pro-privacy” stance as a competitive weapon against Google, advancing both Bing search and IE. Spanfeller notes that Microsoft’s move hasn’t had much effect, at least yet, because “sites aren’t honoring it.”
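
What honoring the signal would look like is straightforward in outline: check the browser’s Do Not Track preference before any tracking code loads. A minimal client-side sketch, noting that the property name varied across browsers of this era and that the ad-script URL is hypothetical:

    // Read the Do Not Track signal, allowing for vendor-prefixed variants.
    function dntEnabled() {
        var dnt = navigator.doNotTrack || window.doNotTrack || navigator.msDoNotTrack;
        return dnt === '1' || dnt === 'yes';
    }

    // Inject the third-party ad script only when DNT is off.
    if (!dntEnabled()) {
        var script = document.createElement('script');
        script.src = 'https://ads.tracker.example/pixel.js';
        document.head.appendChild(script);
    }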

These browser wars are one front, and much decried by forces like the Interactive Advertising Bureau and the Digital Advertising Alliance, with its “Ad Choices” program — all of which prefer consumer opt-out. Another front is an attempt at industry consensus through the World Wide Web Consortium, or W3C. Observers of that process believe it is winding its way to failure. Finally, also announced yesterday was the just-baked Cookie Clearinghouse, housed at the Stanford Center for Internet and Society. The driving notion, to be fleshed out: creating whitelists and blacklists of cookies allowed and blocked. (Good summaries by both Ad Age’s Kate Kaye and ZDNet’s Ed Bott.)

Never too far from the action, serial entrepreneur John Taysom was in Palo Alto this week as well. Taysom, currently a senior fellow at Harvard’s Advanced Leadership Initiative, is an early digital hothouse pioneer, having led Reuters’ Greenhouse project way back in the mid-’90s. His list of web startups imagined and sold is impressive, and now he’s trying to put all that experience to use around privacy issues. A student of history, old and modern, he believes this: “When they invented the Internet, they didn’t add a privacy layer.”

“We need an Underwriters Laboratory for our time,” he told me Wednesday. UL served a great purpose at a time (1894) of another tech revolution: electricity. Electricity, like computer tech these days, seemed exciting, but the public was wary. It wasn’t afraid of behind-the-scenes chicanery — it literally was concerned about playing with fire. So UL, as a “global independent safety science company” — a kind of neutral, Switzerland-like enterprise — was set up to assure the public that electrical appliances were indeed tested and safe.

Could we do the same with the Internet?

He’s now working on a model, colloquially named “Three’s A Crowd,” to reinsert a “translucent” privacy layer in the tech stack. His model is based on a lot of current thinking on how to both better protect individual privacy and actually improve the targeting of messages by business and others. It draws on k-anonymity and Privacy by Design principles, among others.

In brief, Taysom’s Harvard project centers on creating a modern UL. It would be a central trusted place, or really a set of places, that institutions and businesses (and presumably governments) could draw from, but which protects individual identity. He calls it an I.D. DMZ, or demilitarized zone.

He makes the point that the whole purpose of data mining is to get to large enough groups of people with similar characteristics — not to find the perfect solution or offer for each individual. “Go up one level above the person,” to a small, but meaningfully sized, crowd. The idea: increase anonymity, giving people the comfort of knowing they are not being individually targeted.

Further, the levels of anonymity could differ depending on the kind of information associated with any one person. “I don’t really mind that much about people knowing my taste in shirts. If it’s about the location of my kids, I want six sigmas” of anonymity, he says. Taysom, who filed a 2007 U.K. patent (now approved) on the idea, is putting together both his boards of advisors and trustees.
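
The k-anonymity principle Taysom draws on can be stated concretely: release data only about groups that share the same generalized attributes and contain at least k people, so that no released row describes an individual. A toy sketch, with records and a choice of k invented for illustration:

    // Group records by generalized quasi-identifiers (age band + city,
    // "one level above the person"), then suppress groups smaller than k.
    function kAnonymize(records, keyOf, k) {
        var groups = {};
        records.forEach(function (r) {
            var key = keyOf(r);
            (groups[key] = groups[key] || []).push(r);
        });
        return Object.keys(groups)
            .filter(function (key) { return groups[key].length >= k; })
            .map(function (key) { return { group: key, size: groups[key].length }; });
    }

    var users = [
        { ageBand: '30-39', city: 'Boston' },
        { ageBand: '30-39', city: 'Boston' },
        { ageBand: '60-69', city: 'Omaha' }   // a crowd of one: suppressed
    ];
    var crowds = kAnonymize(users, function (u) {
        return u.ageBand + '|' + u.city;
    }, 2);  // taste in shirts might tolerate a small k; kids' whereabouts need "six sigmas"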

Then there are emerging marketplace solutions to privacy. What havoc the digital marketplace hath wrought may be solved by…the digital marketplace. D.C.-based Personal.com is one of the leading players in that emerging group. Yes, this may be the coming personal data economy. Offering personal data lockers starting at $29.99 a year, Personal.com is worth a quick tour. What, it asks, if you could store all your info in a digital vault? Among the kinds of “vaults”: passwords, memberships and rewards programs, credit and debit card info, health insurance, and lots more.

It’s a consumer play that’s also a business play. The company is now targeting insurance, finance, and education companies and institutions, which would then offer consumers the opportunity to ingest their customer information and keep it in a vault; auto-fill features then let consumers re-use such information once it is banked. Think Mint.com, but broader.

Importantly, while Personal.com deals potentially with lots of kinds of digital data, its business doesn’t touch on the behavioral clickstream data that is at the heart of the Do Not Track fracas.

Do consumers want such a service? Personal.com won’t release any numbers on customers or business partners. Getting early traction may be tough.

Embedded in the strategy: a pro-consumer tilt. Personal.com offers an “owner data agreement,” basically certifying that it is the consumer, not Personal.com, that owns the data. It is a tantalizing idea: What if we individually could control our own digital data, setting parameters on who could use what and how? What if we as consumers could monetize our own data?

Neither Personal.com nor John Taysom’s project nor the various Do Not Track initiatives envision that kind of individually driven marketplace, and I’ve been told there are a whole bunch of technical reasons why it would be difficult to achieve. Yet, wouldn’t that be the ultimate capitalist, Adam Smith solution to this problem of runaway digital connectedness — a huge exchange that would facilitate the buying and selling of our own data?

For publishers, all this stuff is headache-producing. News publishers from Manhattan to Munich complain about all the third-party cookies feeding low-price exchanges, part of the reason their digital ad businesses are struggling. But there is a wide range of divergent opinion about how content-creating publishers will fare in a Do Not Track world. They may benefit from diminished competition, but would they be able to adequately target for advertisers? Will Google and Facebook do even better in that world?

So, for publishers, these privacy times demand three things:

  • Upscale their own data mining businesses. “There’s a big difference between collecting and using data,” says Jonathan Mendez, CEO of Yieldbot, which works with publishers to provide selling alternatives to Google search. That’s a huge point. Many publishers don’t yet do enough with their first-party data to adequately serve advertiser needs.
  • Take a privacy-by-design approach to emerging business. How you treat consumers in product design and presentation is key here, with some tips from Inc. magazine.
  • Adopt a pro-privacy position. Who better than traditionally civic-minded newspaper companies to help lead in asserting a sense of ownership of individual data? If news companies are to re-assert themselves as central to the next generation of their communities and of businesses, what better position than pro-privacy — and then helping individuals manage that privacy better?

It’s a position that fits with publishers’ own interests, and first-party data gathering (publisher/reader) makes more intuitive sense to citizen readers. For subscribers — those now being romanced into all-access member/subscribers — the relationship may make even more sense. Such an advocacy position could also help re-establish a local publisher as a commercial hub.

News and magazine publishers won’t have to create the technology here — certainly not their strong suit — but they can be early partners as consortia and companies emerge in the marketplace.

Photo by Fire Monkey Fire used under a Creative Commons license.

April 18 2012

17:46

Who watches the watchmen? The Guardian crowdsources its investigation into online tracking

As Guardian journalists were preparing to launch their new investigative project on cookies and other online tools that track you around the web, they realized they had to figure out just what kind of trackers exist on their own website.

Turns out this isn’t an easy task. “There are so many legacy ones from us that we forgot about — we had to do some research,” said Ian Katz, deputy editor of the Guardian.

Like many news sites, the Guardian has a mix of cookies — some for geotargeting where readers are, some for registering readers on the site, some for advertising, and more. The end result was this illuminating guide that lays itself over a story page and shows what cookies the Guardian uses. That kind of transparency fits with the Guardian’s embrace of what it calls open journalism, but it’s also an incentive for readers to uncover what kind of cookies follow them around the web. As part of its investigation, the Guardian wants readers to help guide its reporting by telling it what cookies they encounter in their day-to-day internet use. Thanks to Mozilla’s Collusion add-on, users will be able to track the trackers and then hand over that data to the Guardian.

“Essentially what we’re saying is, ‘You tell us what cookies you receive over the period you use this tool and we will find out which are the most prevalent cookies,’” Katz said. “We will do the work of finding out what they are and what they do.”
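
The aggregation Katz describes amounts to a frequency count across reader submissions. A sketch of the idea, assuming each reader’s export reduces to a list of site/tracker pairs (Collusion’s actual export format differs in detail):

    // Tally how often each third-party tracker shows up across all readers.
    function mostPrevalent(submissions) {
        var counts = {};
        submissions.forEach(function (pairs) {
            pairs.forEach(function (p) {
                counts[p.tracker] = (counts[p.tracker] || 0) + 1;
            });
        });
        return Object.keys(counts)
            .sort(function (a, b) { return counts[b] - counts[a]; })
            .map(function (t) { return { tracker: t, seen: counts[t] }; });
    }

    // e.g. mostPrevalent([[{ site: 'news.example', tracker: 'ads.example' }]])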

This week the Guardian is publishing Battle for the Internet, a series that looks at the future of the Internet and the players involved, from the private sector to governments, militaries, and activists. Cookies have a new significance because of a regulation passed last year that requires sites based in the U.K. to inform users they are being watched.

Joanna Geary, the digital development editor for the Guardian, said the idea is to go deep on cookies — not just what they do, but the companies behind them, what happens to the information they collect, and how they connect various parts of the web. Geary said the Collusion tool was perfect for this project because it not only tracks the trackers, but it provides a helpful — if not scary — illustration of how cookies work across various sites. “The Guardian being what it is, and being conscious of our commitment to open journalism, it felt like this was the right project to get our readers involved in,” she said. As for the Guardian’s own self-examination, “I think it would be weird if we had undergone any sort of crowdsourcing project without doing it,” she said. “We have the responsibility of telling people what we use on our site ourselves.”

Asking the crowd for help is a regular part of the Guardian’s playbook, and because of that they’ve learned a bit about what works and what doesn’t. Katz said a big part of success in crowdsourcing is the ease of contributing to a given project, whether you are asking someone to look at a document for a few minutes or put a pin on a map. This project could prove a bit more tricky since it requires downloading a browser add-on (that only works on Firefox) and later exporting data to the Guardian.

But just as important as the ease-of-use question is the motivation, Katz said. “You have to tap into an issue people are relatively fired up about,” he said. “You can’t sort of create that sense of urgency unless people already feel it.” Katz said people need to not only feel like they are making a difference — they also have to see their work in action. Katz admits that not all of the Guardian’s crowdsourcing efforts have been as successful as they hoped, saying the responsibility for that lies with the paper “when we have not reflected that work back in an interesting way.”

Katz said the graphics team will work on visualizations from the cookie data to display findings from readers. But the ultimate fate of any further reporting rests in what the audience finds. Instead of reporting out what it sees as problem cookies, the paper is asking readers to show which trackers are a growing part of daily life online. “It’s a genuine sort of combined enterprise, that both sides are bringing something to the party,” he said. “In this case, you bring the data and we’ll do the reporting.”

Image from Danny Sullivan used under a Creative Commons license.

February 03 2012

11:37

LIVE: Session 1A – Online video

Most publishers will have at least dipped a toe into the pool of online video, but what does it take to really make a splash in this area, and reap the traffic rewards? This session will feature innovative case studies of cutting-edge online video that can enhance the way content is presented and shared, as well as top tips from experienced online video journalists, publishers and those leading key developments in web-native video about the opportunities to be exploited through the online medium.

With: Christian Heilmann, Mozilla Popcorn; Josh de la Mare, editor of video, Financial Times; John Domokos, video producer, the Guardian; David Dunkley Gyimah, video journalist, academic and consultant.

11.44


With HTML5 the video becomes just another page element which can be edited and overlaid. “The timestamp is the glue.”

11.42


“Video is a black hole on the web” – Google cannot find the content. To make it more ‘findable’ we must use a great headline and separate our content from the presentation. If the text can be separated out from the video (e.g. using Universal Subtitles), you can edit the text after publishing the video. Google can find the text and it helps readers to skip to the bit of the video they want.

HTML5 video allows for all of that.
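
Concretely, a native HTML5 player can carry its transcript as timed text, so the words live in the page as indexable text, and script can use timestamps to jump around. A minimal sketch, with hypothetical file names:

    // Markup: the transcript travels as a WebVTT subtitle track --
    // plain, indexable text kept separate from the video itself:
    //   <video id="player" src="report.webm" controls>
    //     <track kind="subtitles" src="report.vtt" srclang="en" default>
    //   </video>

    // "The timestamp is the glue": clicking a transcript line can seek
    // the player, because the video is just another page element.
    var player = document.getElementById('player');
    function jumpTo(seconds) {
        player.currentTime = seconds;  // jumpTo(90) skips to 1:30
        player.play();
    }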

11.39


He says when it comes to video online, shorter is better – otherwise people get fidgety and start checking Twitter or FarmVille!

Now it’s Chris Heilmann of Mozilla Popcorn – he says he has a background in radio.

11.34


David Dunkley Gyimah is up next – a video journalist, academic and artist in residence at the Southbank, apparently!
Reportage in 1991/2 was “the YouTube of the BBC back then” – young and disruptive.
It all comes back to cinema. You need to get people to feel something, and to do that you need to experiment with image and movement and how best to capture that.

11.34


“we’re prone to following trends when we should also focus on exemplars” – Gyimah studies legendary cinematic directors. He also recommends Media Storm as an exemplar for online video.

11.32


Question: “isn’t the FT just putting TV news online?”


A: We have a mixture of polished content and more raw, on-the-ground news. That seems to be what the FT audience want, but again, it’s an evolving medium. We definitely aim for much shorter videos online – almost always under 5 minutes.

11.18

“The human face is absolutely crucial” – the individual details that help you to understand the wider story.

Josh de la Mare closes by reminding us that “nothing is sacred” – the medium is still evolving and there’s no stable formula for producing online video.

11.16

The FT has had a studio for about 3 years. FT video produces short comment and interview clips that go deeper into niche angles of the broader story.

FT also use on-site camera crews and provide their journalists with flip cams, encouraging them to shoot footage all over the world.

11.13


Josh de la Mare: FT mostly uses talking heads because that’s most appropriate for our audience.
Video can get to the emotional heart of a story. The FT used video to represent the human side to the impact of 9/11.

11.10


User generated content (UGC) is not a free and easy way to get great video clips!


The Guardian is exploring ways to engage with readers using multimedia. Domokos shows us an example which worked – people speaking out against disability living allowance cuts. These videos worked because the subjects had a real personal reason to produce them. The raw result is also not something a traditional camera crew could ever have got by treating them as “case studies”. 

Every time we use video, we must be using it because it’s the RIGHT way to tell the story, not the easy way

10.52

The Online Video session has kicked off with moderator David Hayward from BBC College of Journalism.

Follow the Twitter hashtag #newsrw

January 19 2012

21:40

MediaStream - Mozilla demos real-time audio/video manipulation in Firefox

ars technica :: Mozilla is drafting a proposal for a new Web standard called MediaStream Processing that introduces JavaScript APIs for manipulating audio and video streams in real time. The specification is still at an early stage of development, but Mozilla has already started working on an implementation for testing purposes. Mozilla's Robert O'Callahan, the author of the MediaStream Processing API proposal draft, released experimental Firefox builds that include MediaStream Processing support.
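
The proposal’s API surface is still in flux, but the flavor of scriptable, real-time stream manipulation it aims at can be sketched with the related Web Audio API; this is an analogy, not the MediaStream Processing API itself:

    // Route a media element's audio through a script-controlled gain node,
    // manipulating the stream in real time from JavaScript.
    var ctx = new AudioContext();
    var video = document.querySelector('video');
    var source = ctx.createMediaElementSource(video);
    var gain = ctx.createGain();

    source.connect(gain);
    gain.connect(ctx.destination);

    gain.gain.value = 0.5;  // halve the volume live, while playback continues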

Continue to read Ryan Paul, arstechnica.com

Tags: Mozilla

January 05 2012

15:20

In the Digital Age, How Much Is Informal Education Worth?



You can learn anything you want on the Internet, so the adage goes. But even if that's true, even if it's now easier than ever to learn about almost any subject online, there are still very few opportunities to gain formal recognition -- "credit," if you will -- for informal learning done online.

In September, the Mozilla Foundation launched its Open Badges Project, an effort to develop a technology framework that would make it easier to build, display and share digital learning badges. These badges are meant to showcase and recognize all kinds of skills and competencies -- subject-matter expertise of the kind college degrees are meant to indicate, for example, as well as "soft skills" that aren't so easily apparent from traditional forms of credentialing. (We examined some of the technology infrastructure of the Open Badges Project in a story earlier this year.)
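
Under the hood, a badge in this framework is just verifiable data: the issuer hosts a JSON "assertion" describing who earned what, which any displayer can fetch and check. A rough illustration of the shape; the field names here are simplified, not the project's exact schema:

    // A hypothetical hosted badge assertion (simplified for illustration).
    var assertion = {
        recipient: 'sha256$2ae8...',          // hashed email of the earner
        issuedOn: '2012-01-05',
        badge: {
            name: 'JavaScript Fundamentals',
            description: 'Completed the core JS course',
            image: 'https://issuer.example/badges/js.png',
            criteria: 'https://issuer.example/criteria/js',
            issuer: { name: 'Example Academy', origin: 'https://issuer.example' }
        }
    };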

When the Mozilla Foundation announced the Open Badges Project, it was in conjunction with the MacArthur Foundation and HASTAC, as "Badges for Lifelong Learning" is the theme of this year's Digital Media and Learning Competition, an annual contest that supports research of how digital technologies are changing the way we learn and work. Onstage at the formal unveiling of the Open Badges Project were representatives from not just Mozilla and the MacArthur Foundation, but from the Departments of Education, Labor and Veterans Affairs, NASA, as well as other businesses.

When the Open Badges Project was first announced, some educators questioned whether "badges" were a form of gamification of education, just another way, they said, to force learners to think more about certification and credentialing than about the learning process itself. But participation in the Open Badge Project from businesses and agencies like the Department of Labor has given it credibility. And whether we like it or not, many learners are extrinsically motivated to pursue certain educational endeavors -- they need skills and often certification in order to demonstrate their mastery to employers.

But what will it mean for employers?


But even with the Department of Labor's involvement in the Open Badges Project and in the DML Competition, will employers recognize badges?

As informal learning opportunities grow, gaining employers' recognition and acceptance may well be one of the most important challenges of the coming years.

Having a formal degree -- whether it's a high school or a college diploma -- still carries the most weight with employers, and in some ways, badges may simply serve to complement these. But even with the emphasis on degrees, having some way to highlight other skills, competencies, and experiences is important in setting one potential hire apart from another. Indeed, many job descriptions do frame the necessity of a college degree this way -- "or equivalent experience" -- so the task ahead for the Mozilla Open Badges project will be, in part, to be seen as a valid "equivalent."

A number of the badges that were submitted to the DML Competition, for example, serve to highlight the accomplishments of teens. As youth unemployment remains high -- 16.8% in the U.S. and upwards of 50% in Spain -- alternate forms of credentialing might be able to help those without any higher education and often without substantial work experience find ways to showcase the skills they do possess.

Similarly, a badge proposal from the Department of Veterans Affairs -- Badges for Vets -- may help veterans translate their military experience into civilian job skills.

On the cusp

While badges might help employers better identify and recruit qualified employees, there are still some questions about whether this would actually function any differently than current hiring practices. But a shift may already be underway, evident in other new forms of credentialing that the Internet is providing. The recent announcement from MIT about its plans to offer a certificate for its new online learning initiative is just one indication that informal learning is on the cusp of more formal recognition.

This is already happening, to a certain extent, in the tech industry where the right programming skills aren't necessarily correlated to college degrees. (It's quite possible, for example, to have your bachelor's in Computer Science and not know a particular programming language.) Stack Overflow, for example, launched a job recruitment site this year, allowing job hunters to highlight not just their resume but to showcase their best answers from the larger Q&A website. And TopCoder, another tech company, offers programming competitions whereby participants have long had the ability to share their scores with potential employers, something that CTO Mike Lyons said is helpful during job searches: "Rather than saying 'look me up,' people have this transportable widget at their website."

Showcasing these sorts of accomplishments on one's own website is becoming increasingly important as job applicants find ways to leverage their online presence -- their blogs, digital portfolios, LinkedIn recommendations and the like -- knowing that employers are prone to Google them. As such, it seems clear that the resume of the future will likely contain lots of digital links, whether they're Open Badges or otherwise. What's less clear is how much of this digital profile will matter to employers, or if they'll still look for that formal piece of paper, a college degree.

Open education advocate and university professor David Wiley is optimistic. "Say I'm Google," he wrote on his blog, "and I need to hire an engineer. My job ad requirement says 'BS in Computer Science or equivalent.' I get two applicants. The first has a BS in Computer Science from XYZ State College. The second has certificates of successful completion for open courses in data structures and algorithms, artificial intelligence, and machine learning from Stanford and MITx. Do you think I'll seriously consider candidate two? You bet I will."

But Wikipedia co-founder Larry Sanger is less certain that the Open Badges Project -- at least in its current manifestation, whereby anyone can create a badge and offer a credential -- will actually mean anything to employers:

If a "badge" is the sort of thing that by common practice almost anybody can define, and then claim, then I'm not likely to take it seriously, and most others won't either. In other words, the badge is a credential and a credential has to have, well, credibility. If supposed credentials are granted as easily as diploma mill "degrees," the whole endeavor will -- obviously, I think -- not get off the ground. Some geeks might go about claiming to have all sorts of "badges," but when it comes to hiring, I will ignore such self-claimed badges.

Of course, we have a long way to go before badges are ubiquitous the same way that college degrees are. As it currently stands, the Open Badges Project is too young to elicit much attention from human resources departments. (The HR officials I talked to hadn't heard of the project.) But as alternative credentialing efforts -- whether from Stack Overflow or from MIT -- take off, it's likely to be an issue that more employers (and employees and higher education institutions) are going to have to face.

Audrey Watters is an education technology writer, rabble-rouser, and folklorist. She writes for MindShift, O'Reilly Radar, Hack Education, and ReadWriteWeb.

This post originally appeared on KQED's MindShift, which explores the future of learning, covering cultural and tech trends and innovations in education. Follow MindShift on Twitter @mindshiftKQED and on Facebook.




December 14 2011

21:56

Rethinking Planet Mozilla: The challenge of too much signal

The Daily Planet covered the news in Metropolis

Planet Mozilla is home to more than 500 news feeds.

There will be even more feeds, as Mozilla’s growing army of Web makers are added to the planet over the coming weeks. (If you’re a Web maker, find out how to get your feed added.)

And I hope there will be even more news feeds down the road as the Mozilla community continues to expand.

Over the last couple of weeks, I would guess that there have been roughly fifteen new posts on Planet Mozilla on any given day. When there’s an event like the Mozilla Festival happening, the number of posts can easily double or triple.

These are not short, fluffy posts either. These are 100% signal: Meeting minutes from project teams, results of internal research, new software features, product announcements, and event summaries rich with links to talks, presentations, and more.

But herein lies the challenge: Planet Mozilla is a classic fire hose. There’s lots of information and very little categorization. It’s constantly flowing. If you try to drink from it, you might just find yourself underwater. As new feeds get added, the challenge of too much signal starts to undermine the benefit of having a planet.

@paulrouget It’s a serious problem & opportunity, IMHO. Too much signal is the problem you want to have. But opportunity to re-think is huge

— Phillip Smith (@phillipadsmith) December 8, 2011


There are many ways that people commonly approach the challenge of too much signal, for example by adding community filtering (think: simple up-voting mechanisms like what you’ll find on Hacker News) or by clustering related content together to make it easier to follow certain streams of information. There’s a lot that can be done in this direction, no doubt, that would improve the experience of Planet Mozilla.
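
The simplest form of such community filtering is a decaying vote score of the Hacker News sort: rank each post by up-votes discounted by age. A sketch, using commonly cited HN-style constants purely as an example:

    // Score falls as a post ages, so the front page stays fresh.
    function score(post, now) {
        var ageHours = (now - post.postedAt) / 3.6e6;  // ms -> hours
        return (post.votes - 1) / Math.pow(ageHours + 2, 1.5);
    }

    function rank(posts, now) {
        return posts.slice().sort(function (a, b) {
            return score(b, now) - score(a, now);
        });
    }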

However, those are not the ideas that interest me.

When I think about Mozilla, I think of a city that is growing. At its core is a small city council (Mozilla’s board of directors and executives). There is an active city staff (Mozilla’s employees) and many people that work directly on city projects (project-based staff, consultants, etc.). Expanding out from there is a large community of people who are active citizens: the shop owners, academics, activists, and so on (mostly volunteer contributors to Mozilla projects like Firefox). Beyond that there are the 400,000,000 people who live in the city every day just going about their business (people who use Mozilla software or interact with Mozilla projects).

Like any city, I believe that Mozillaville needs a smart, scrappy news organization to help its citizens understand what’s going on around them.

In Superman’s city, Metropolis, that was the Daily Planet.

In Spiderman’s city, New York, that was the Daily Bugle.

In Mozillaville, I think that the job should go to Planet Mozilla.

Some of the most interesting technology news stories are happening right here in our city, Mozillaville, so why are we waiting for other news organizations to cover them?

We have the scoops. We have the experts. We have the technology. So what are we waiting for?

(This is part of a series of posts. The first one is here.)

October 07 2011

18:30

What newsrooms can learn from open-source and maker culture

“Newsosaur” blogger and media consultant Alan Mutter some time ago suggested that journalism needs to become a lot more like Silicon Valley. Newspapers are too risk-averse, he said, and so they “need some fresh DNA that will make them think and act more like techies and less like, well, newspaper people.”

When Seth was at the Hacks/Hackers hack day at ONA11 last month, as part of his larger project studying Hacks/Hackers, he mentioned this idea to Phillip Smith, a digital publishing consultant who has been instrumental in the Knight-Mozilla News Technology Partnership (the same collective we wrote about in August).

Maker culture is a way of thinking — a DIY aesthetic of tinkering, playing around, and rebuilding, all without fear of failure.

While Smith generally agreed with Mutter’s premise — of course Silicon Valley could bring a little dynamism to newspapers and journalism — he offered a caveat: The technology sector that Smith knew a decade ago was more about hacking-in-the-open and building cool stuff for others to enjoy, with a secondary emphasis on making money. Now the inverse is true: Silicon Valley is much less about the ideals of the open web, and much more about (as another observer has put it) short-sighted technology for the sake of “big exits and big profits.”

So it’s a bit of a mistake, we think, to go down the route of saying that journalism needs to become like Silicon Valley, in part because Silicon Valley is not simply a world of innovation, but also a highly competitive, secretive, and unstable metaphor. (Think: Groupon IPO, or even The Social Network.)

Instead, open source might be what people are hoping for when they think about remaking journalism — both in terms of innovating the business/system of news, and in terms of making it more transparent and participatory.

In a widely circulated recent post, Jonathan Stray suggested that the news industry could draw on “maker culture” to create a new kind of journalism — one that plumbs the institutional and technical complexities of major issues (like global finance) in a way that invites bright, curious “makers” to hack the system for the good of society. “This is a theory of civic participation based on empowering the people who like to get their hands dirty tinkering with the future,” Stray wrote. Josh Stearns built on this line of reasoning, arguing that “maker culture is the willingness to become experts of a system, and then use that expertise to change the system.”

Their approach to “maker culture,” we believe, can have a direct and practical implementation in journalism through a focus on integrating open source into the newsroom. As both Stray and Stearns point out: Maker culture is a way of thinking — a DIY aesthetic of tinkering, playing around, and rebuilding, all without fear of failure. Just the kind of thing journalism needs.

Maker culture is bound up in the technology and ethos of hacker culture, as James Losey of the New America Foundation has helpfully shown us. Losey (and colleague Sascha Meinrath) thinks this kind of “internet craftsmanship” is instrumental to sustaining the very architecture of the web itself: having the freedom and control and playful curiosity to shape networking technologies to meet one’s needs. Gutenberg, as Losey and Meinrath argue, was the first hacker, fundamentally rethinking technology. And it’s from this hacker mindset that we can take cues for rebooting the tools, practices, and frameworks of journalism.

Silicon Valley is not just a world of innovation, but also a highly competitive, secretive, and unstable metaphor.

So: Add maker/hacker culture, mix in a bit of theorist Richard Sennett, who believes in the capacity of individuals to reshape and innovate new media, and sprinkle some open-source juice into journalism, and you get the following:

1. New tools, stage one
We see this already. At ONA11, Knight-Mozilla’s Dan Sinker led a panel on open source in the newsroom that featured representatives from several major players: ProPublica (Al Shaw), The New York Times (Jacqui Cox), and the Chicago Tribune’s News Apps Team (Brian Boyer). Meanwhile, folks at The Seattle Times are using fairly simple tools like Tableau Public to visualize census data. In short: Already there are people inside newsrooms building cool and creative things.

2. New tools, stage two
This stage means going beyond the existing crop of databases, visualizations, and crowdsourcing applications (amazing as they are!) to look a bit more holistically at the system of news and the incorporation of open source in truly opening up the newsroom. In other words, can news organizations build open-source platforms for refining whole content management systems, or for building entirely new distribution systems?

Gutenberg was the first hacker, fundamentally rethinking technology.

Reflecting on Knight-Mozilla’s “hacking and making” week in Berlin — a gathering (dubbed, despite the month, “Hacktoberfest”) that featured journalists, designers, developers, and several news organization partners — Sinker made an interesting observation about open-source tools for newsrooms. Some of the news partners worried that “open-source code would reveal too much,” but then it dawned on them that coordination among them would actually be facilitated by “working in the open.” They realized that “it meant far more than just code — it meant a new way of working, of embracing collaboration, and of blazing a real way forward.”

Beyond the benefits to collaboration, it’s important to remember that “open source” doesn’t necessarily mean “non-commercial.” If newsrooms develop open-source tools that make newswork (or knowledge management generally) easier, they can find revenue opportunities in selling services around that open code.

3. New thinking: A maker mindset + open source
What does it mean to incorporate the tinkering and playing and reshaping of maker culture back into journalism? The news industry is one of the last great industrial holdovers, akin to the car industry. Newsrooms are top-heavy, and built on a factory-based model of production that demands a specific output at the end of the day (or hour). They’re also hierarchical, and, depending on whom you ask, the skills of one journalist are, for the most part, interchangeable with those of most other journalists — in part because journalists share a common set of values, norms, and ideals about what it means to do their work.

Thus, merging elements of maker culture and journalism culture might not be easy. Challenging the status quo is hard. The expectations of producing content, of “feeding the beast,” might get in the way of thinking about and tinkering with the software of news, maker-style. It can’t just be the newsroom technologists hacking the news system; it has to be journalists, all of them, reflecting on what it means to do news differently. We have to make time for journalists to rethink and reshape journalism like a hacker would retool a piece of code.

4. New frameworks: The story as code
While observing the Knight-Mozilla digital learning lab, some of the coolest things we saw were the final projects designed to reimagine journalism. (See all of the projects here and here.) What made these pitches so interesting? Many of them tried to bring a fundamental rethink to problems journalism is struggling to resolve — for instance, how to make information accessible, verifiable, and shareable.

So if we think about the story as code, what happens? It might seem radical, but try to imagine it: Journalists writing code as the building blocks for the story. And while they write this code, it can be commented on, shared, fact-checked, or augmented with additional information such as photos, tweets, and the like. This doesn’t have to mean that a journalist loses control over the story. But it opens up the story, and puts it on a platform where all kinds of communities can actively participate as co-makers.

We have to make time for journalists to rethink and reshape journalism like a hacker would retool a piece of code.

In this way, it’s a bit like the “networked journalism” envisioned by Jeff Jarvis and Charlie Beckett — although this code-tweaking and collaboration can come after the point of initial publication. So, your investigative pieces are safe — they aren’t open-sourced — until they become the source code for even more digging from the public.

In all of this thinking of the ways open source is changing journalism, there are some clear caveats:

1. For open-source projects to succeed, they require lots of people, a truly robust network of regular contributors. Given the amount of time that people might be willing to spend with any one article, or with news in general, who knows whether the real public affairs journalism that might benefit the most from open source would, in fact, get the kind of attention it needs to change the framework.

2. Open source requires some form of leadership. Either you have someone at the top making all the decisions, or you have some distributed hierarchy. As one newspaper editor told a fellow academic, Sue Robinson, in her study of participatory journalism, “Someone’s gotta be in control here.”

Image by tiana_pb used under a Creative Commons license.

August 03 2011

14:00

Transparency, iteration, standards: Knight-Mozilla’s learning lab offers journalism lessons of open source

This spring, the Knight Foundation and Mozilla took the premise of hacks and hackers collaboration and pushed it a step further, creating a contest to encourage journalists, developers, programmers, and anyone else so inclined to put together ideas to innovate news.

Informally called “MoJo,” the Knight-Mozilla News Technology Partnership has been run as a challenge, the ultimate prize being a one-year paid fellowship in one of five news organizations: Al Jazeera English, the BBC, the Guardian, Boston.com, and Zeit Online.

We’ve been following the challenge from contest entries to its second phase, an online learning lab, where some 60 participants were selected on the basis of their proposal to take part in four weeks of intense lectures. At the end, they were required to pitch a software prototype designed to make news, well, better.

Through the learning lab, we heard from a super cast of web experts, like Chris Heilmann, one of the guys behind the HTML5 effort; Aza Raskin, the person responsible for Firefox’s tabbed browsing; and John Resig, who basically invented the jQuery JavaScript library; among other tech luminaries. (See the full lineup.)

There was a theme running through the talks: openness. Not only were the lectures meant to get participants thinking about how to make their projects well-designed and up to web standards, but they also generally stressed the importance of open-source code. (“News should be universally accessible across phones, tablets, and computers,” MoJo’s site explains. “It should be multilingual. It should be rich with audio, video, and elegant data visualization. It should enlighten, inform, and entertain people, and it should make them part of the story. All of that work will be open source, and available for others to use and build upon.”)

We also heard from journalists: Discussing the opportunities and challenges for technology and journalism were, among other luminaries, Evan Hansen, editor-in-chief of Wired.com; Amanda Cox, graphics editor of The New York Times; Shazna Nessa, director of interactive at the AP; Mohamed Nanabhay, head of new media at Al Jazeera English; and Jeff Jarvis.

In other words, over the four weeks of the learning lab’s lectures, we heard from a great group of some of the smartest journalists and programmers who are thinking about — and designing — the future of news. So, after all that, what can we begin to see about the common threads emerging between the open source movement and journalism? What can open source teach journalism? And journalism open source?

Finding 1:
* Open source is about transparency.
* Journalism has traditionally not been about transparency, instead keeping projects under wraps — the art of making the sausage and then keeping it stored inside newsrooms.

Because open-source software development often occurs among widely distributed and mostly volunteer participants who tinker with the code ad-hoc, transparency is a must. Discussion threads, version histories, bug-tracking tools, and task lists lay bare the process underlying the development — what’s been done, who’s done it, and what yet needs tweaking. There’s a basic assumption of openness and collaboration achieving a greater good.

Ergo: In a participatory news world, can we journalists be challenged by the ethics of open source to make the sausage-making more visible, even collaborative?

No one is advocating making investigative reporting an open book, but sharing how journalists work might be a start. As Hansen pointed out, journalists are already swimming in information overload from the data they gather in reporting; why not make some of that more accessible to others? And giving people greater space for commenting and offering correction when they think journalists have gone wrong — therein lies another opportunity for transparency.

Finding 2:
* Open source is iterative.
* Journalism is iterative, but news organizations generally aren’t (yet).

Software development moves quickly. Particularly in the open source realm, developers aren’t afraid to make mistakes and make those mistakes public as they work through the bugs in a perpetual beta mode rather than wait until ideas are perfected. The group dynamic means that participants feel free to share ideas and try new things, with a “freedom to fail” attitude that emphasizes freedom much more than failure. Failure, in fact, is embraced as a step forward, a bug identified, rather than backward. This cyclical process of iterative software development — continuous improvement based on rapid testing — stands in contrast to the waterfall method of slower, more centralized planning and deployment.

On the one hand, journalism has iterative elements, like breaking news. As work, journalism is designed for agility. But journalism within legacy news organizations is often much harder to change, and tends to be more “waterfall” in orientation: The bureaucracy and business models and organizational structures can take a long time to adapt. Trying new things, being willing to fail (a lot) along the way, and being more iterative in general are something we can learn from open-source software.

Finding 3:
* Open source is about standards.
* So is journalism.

We were surprised to find that, despite its emphasis on openness and collaboration, the wide world of open source is also a codified world with strict standards for implementation and use. Programming languages have documentation for how they are used, and there is generally consensus among developers about what looks good on the web and what makes for good code.

Journalism is also about standards, though of a different kind: shared values about newsgathering, news judgment, and ethics. But even while journalism tends to get done within hierarchical organizations and open-source development doesn’t, journalism and open source share essentially the same ideals about making things that serve the public interest. In one case, it’s programming; in the other case, it’s telling stories. But there’s increasing overlap between those two goals, and a common purpose that tends to rise above mere profit motive in favor of a broader sense of public good.

However, when it comes to standards, a big difference between the open-source movement and journalism is that journalists, across the board, aren’t generally cooperating to achieve common goals. While programmers might work together to make a programming language easier to use, news organizations tend to go at their own development in isolation from each other. For example, The Times went about building its pay meter fairly secretly: While in development, even those in the newsroom didn’t know the details about the meter’s structure. Adopting a more open-source attitude could teach journalists, within news organizations and across them, to think more collaboratively when it comes to solving common industry problems.

Finding 4:
* Open-source development is collaborative, free, and flexible.
* Producing news costs money, and open source may not get to the heart of journalism’s business problems.

Open-source software development is premised on the idea of coders working together, for free, without seeking to make a profit at the expense of someone else’s intellectual property. Bit by bit, this labor is rewarded by the creation of sophisticated programming languages, better-and-better software, and the like.

But there’s a problem: Journalism can’t run on an open source model alone. Open source doesn’t give journalism any guidance for how to harness a business model that pays for the news. Maybe open-source projects are the kind of work that will keep people engaged in the news, thus bulking up traditional forms of subsidy, such as ad revenue. (Or, as in the case of the “open R&D” approach of open APIs, news organizations might use openness techniques to find new revenue opportunities. Maybe.)

Then again, the business model question isn’t, specifically, the point. The goal of MoJo’s learning lab, and the innovation challenge it’s part of, is simply to make the news better technologically — by making it more user-friendly, more participatory, etc. It’s not about helping news organizations succeed financially. In all, the MoJo project has been more about what open source can teach journalism, not vice versa. And that’s not surprising, given that the MoJo ethos has been about using open technologies to help reboot the news — rather than the reverse.

But as the 60 learning lab participants hone their final projects this week, in hopes of being one of the 15 who will win a next-stage invite to a hackathon in Berlin, they have been encouraged to collaborate with each other to fill out their skill set — by, say, a hack partnering with a hacker, and so forth. From those collaborations may come ideas not only for reinventing online journalism, but also for contributing to the iteration of open-source work as a whole.

So keep an eye out: Those final projects are due on Friday.

August 02 2011

15:08

#MozNewsLab lectures by @Shazna from @AP_Interactive, @Mohamed from @AJEnglish & @iA from, well, @iA now online

Week three of the #MozNewsLab is all wrapped up.

I’m almost experiencing a pang of sadness that we only have a few days to go until the lab is concluded. It really has flown by too quickly.

Of course, that sadness is offset by two things:

  1. Twenty participants will be invited to the next phase of the program: a five-day event in Berlin focused on building software prototypes.

  2. Having the opportunity to get out and enjoy what’s left of this amazing summer! :) My guess is that all of the people involved in #MozNewsLab — the participants, and the faculty — are looking forward to a few days off.

First things first…

Last week we turned the corner from a focus on technology to a focus on journalism, news, and reporting. All of the guest speakers were asked to share their experience of where and how technology is impacting their newsrooms, or what changes are underway at news organizations today in the context of technology.

The week was kicked off by Shazna Nessa, Director of Interactive at the Associated Press in New York. Shazna shared how the AP is changing — how they are trying to break down silos and formalize technology in the newsroom, as well as introducing new skills and pushing toward new forms of interactive news presentation.

You can watch Shazna’s lecture here.

Following Shazna was Mohamed Nanabhay, Head of Online at Al Jazeera English. Mohamed delivered a mile-a-minute lecture on the speed at which Al Jazeera English has moved into our consciousness, and what that has meant for their news-delivery infrastructure. Mohamed also dived into questions about sources, fact checking, verification, and the role of user-generated content in Al Jazeera English’s reporting work.

You can watch Mohamed’s lecture here.

Closing out the week’s lecture series was Oliver Reichenstein, CEO of Information Architects. Oliver delivered a 10,000 foot view of the changes underway in news organizations from the perspective of one of the world’s leading design agencies — an agency that has been responsible for some high-profile re-designs, successful software products, and innovative thinking on the future of news.

Oliver’s talk highlighted the tension between the design considerations of news sites and the business considerations that often run counter to them. You can watch Oliver’s lecture here.

We’re in the final sprint. The assignments from last week are starting to flow into the #MozNewsLab Planet, and many of them are heading in the direction of the final project that is due on Friday.

Yesterday, we heard from Evan Hansen; tomorrow, we hear from Jeff Jarvis.

It’s been a whirlwind month. I hope you’ve enjoyed following along.

July 25 2011

16:31

#MozNewsLab week two lectures by @codepo8 @jresig & @jjg now online

The #MozNewsLab is hurtling toward the grand finale on August 5th. We’re past the half-way mark, and it feels like time is compressing each day into a New York minute.

We wrapped up week two of the lab last Friday. Here’s a quick recap:

The first lecture last week was a shot-in-the-arm of open-Web goodness: The Mozilla Foundation’s Executive Director Mark Surman talked about the broader Mozilla + Journalism initiative, touched on why Mozilla cares about news, and introduced our guest speaker, Christian Heilmann.

From there, Heilmann — a developer ‘evangelist’ at Mozilla — took participants on a whirlwind tour of the State of the Browser in 2011. HTML5, CSS3, new APIs, WebGL — you name it, he covered it. You can find the lecture online here: recording, notes, and slides.

Next up was none other than John Resig. Resig is implicated in more successful open-source software projects than you can shake a stick at. He’s been leading the jQuery project for more than five years now, and has learned a lot about the ‘Open Source Process’: the ins-and-outs of building great software and a great community that supports it. John shared those learnings with the lab — it was an incredibly insightful voyage through the history of jQuery and John’s tips on creating a successful open-source software community.

You can find the lecture online here: recording, notes, and slides.

Jesse James Garrett — the ‘Father of AJAX’ — joined us on Friday to deliver the final lecture of the week. His talk focused on the conceptual model for thinking about successful interactive experiences, what he calls the ‘Elements of User Experience’. I must admit, I was quite excited to hear Jesse speak, as I’ve been a big fan ever since reading his book many, many years ago. Jesse expanded quite a bit on the early models of user experience that he pioneered and offered many insightful new ideas about how to approach the experience of a software project or product.

You can find the lecture online here: recording, notes, and slides.

We’ve just kicked off week three. Hope you’re following along. There’s still time to send a ‘message in a bottle’ to the lab.

Last but not least, Mozilla’s Media, Freedom and the Web festival is really starting to come together. If you’re interested in the nexus of the open Web and media production, you may want to mark your calendar.

July 20 2011

16:36

#MozNewsLab week one lectures by @azaaza @burtherman & @amandacox now online

The participants in the #MozNewsLab are kicking up such an amazing storm of ideas that I’m finding it hard to concentrate long enough to put my own thoughts to keyboard this week.

So, in lieu of some suitably witty update, here’s a quick recap of the first week’s lectures:

The week kicked off with a lecture by the renowned interface designer Aza Raskin. Aza recently held the position of Creative Lead for Firefox; he’s now working on a start-up called Massive Health.

Aza’s lecture focused on designing in the open and rapid prototyping. You can find the slides here, or watch the recorded lecture (with synced slides) here. The #MozNewsLab participants also took great notes here.

On Wednesday, the lab heard from journalism entrepreneur Burt Herman. Burt shared his life experiences — from his time as a journalist with the Associated Press to his current adventures as co-founder of the award-winning journalism tech start-up Storify.com.

These two lectures dovetailed perfectly together: both focused on the strategy of rapidly iterating software product ideas, being willing to kill early ideas if necessary, and incorporating user input into the development & design process.

You can find Burt’s slides here, and his recorded lecture here. (Notes here.)

We closed out the week on Friday with a mind-expanding, 1,000-mile-per-hour lecture by Amanda Cox. Amanda is a graphics editor at The New York Times, where she creates charts and maps for the print and web versions of the paper.

Amanda’s lecture was the perfect finale for the week — it provided a whirlwind tour of how the New York Times graphics desk thinks about the data that it presents online. Slides here, lecture here, and notes here.

Week two is already off to a great start. John Resig is scheduled to present later today. It’s an exciting week in the #MozNewsLab.

July 07 2011

16:56

Learning lab schedule: week-by-week. Plus: new lecture by @iA CEO Oliver Reichenstein

Oliver Reichenstein, CEO of iA Inc

First the great news, then the good news. (FYI: There is no bad news in MoJo-ville.)

I’m excited to let you know that we’ve confirmed that Oliver Reichenstein, CEO of iA Inc, will deliver a lecture for the lab in July.

For those who are not familiar with iA (Information Architects, Inc.), let me just say this: very few organizations have had as much impact when it comes to modern-day information design. Not only is iA “one of the best-known design agencies in the world,” but it is also an organization that is not afraid to take some risks by developing its own products — from the ubiquitous iA³ Template for WordPress, to ultra-minimalist writing software iA Writer for the Mac and iPad.

I should also note that iA worked with our news partner Zeit Online to produce the innovative HTML5, tablet-friendly version of zeit.de — if you have a tablet (or know how to change your User Agent settings), you should take a moment to check it out.

Welcome aboard, Oliver.

Learning lab schedule

Now the good news. After months of hard work — planning, organizing, and cajoling — I’m happy to be able to unveil the (almost final) schedule for the learning lab (all times are listed in Pacific Time):

Week 1 - Design thinking and product development

July 11 - 10:00 a.m. to 12:00 p.m.

Speaker: Aza Raskin is a renowned interface designer who recently held the position of Creative Lead for Firefox.

July 13 - 9:00 to 10:30 a.m.

Speaker: Burt Herman is an entrepreneurial journalist. He is the CEO of Storify and a co-founder of Hacks/Hackers.

July 15 - 8:00 to 9:30 a.m.

Special guest: To be announced. Topic: Data visualization.

Week 2 - New capabilities in the browser and new ways of building community

July 18 - 8:00 to 9:30 a.m.

Speaker: Chris Heilmann is a geek and hacker at heart. In a previous life, he was responsible for delivering Yahoo Maps Europe and Yahoo Answers. He’s currently a Mozilla Developer Evangelist, focusing on all things open web, HTML5, and working open.

July 20 - 10:00 to 11:30 a.m.

Speaker: John Resig is a programmer and entrepreneur. He’s the creator and lead developer of the jQuery JavaScript library, and has had his hands in more interesting open-source projects than you can shake a stick at. Until recently, John was the JavaScript Evangelist at Mozilla. He’s currently the Dean of Open Source and head of JavaScript development at Khan Academy.

July 22 - 8:00 to 9:30 a.m.

Special guest: Jesse James Garrett, co-founder and president of Adaptive Path, is one of the world’s most widely recognized technology product designers. Topic: Focusing on the users.

Week 3 - Technology meets news production: new challenges in the newsroom

July 25 - 8:00 to 9:30 a.m.

Speaker: To be announced.

July 27 - 8:00 to 9:30 a.m.

Speaker: Mohamed Nanabhay is Head of Online at Al Jazeera English, based in Doha, Qatar.

July 29 - 8:00 to 9:30 a.m.

Special guest: Oliver Reichenstein, CEO Information Architects Inc.

Week 4 - The future of journalism

August 1 - 8:00 to 9:30 a.m.

Speaker: Evan Hansen is the Editor-in-Chief of Wired.com.

August 3 - 8:00 to 9:30 a.m.

Speaker: Jeff Jarvis is the author of What Would Google Do? He blogs about media and news at Buzzmachine.com. He is associate professor and director of the interactive journalism program and the new business models for news project at the City University of New York’s Graduate School of Journalism.

August 5 - Time TBD

Speaker: You! Participants will present their final projects.

There you have it, in all its shining glory. Let me know if you have any questions about the speakers, the format, or the topics to be covered.

June 27 2011

14:14

Learning lab update: invitations to go out tomorrow. @jjg & @mohamed confirmed to lecture.

Yes, you heard that right: I was able to corner both user-experience pioneer Jesse James Garrett and Mohamed Nanabhay, Head of Online for Al Jazeera English, at the Civic Media conference last week, and both have agreed to deliver a lecture for the first Knight-Mozilla learning lab.

Who’s working hard for you? :)

Okay, now that I’ve got your attention, here’s a quick update on our progress toward concluding the challenge phase of the program, and moving into the learning lab phase:

  • Roughly 300 submissions were received during the challenge. It was a bit more than we were expecting. The quality of many of the submissions was quite high. Generally speaking, we wanted to ensure that each submission was reviewed thoroughly, and that each entry was seen by two pairs of eyes.

  • We expanded the review team and extended the review period by several days to ensure that each reviewer had enough time to read and comment on each entry. (Not a small amount of work, I assure you.)

  • That work is now complete. Ben and Jacob are going to send out invitations to the learning lab this week. I’m hoping the invitations go out tomorrow morning — fingers crossed! — but there are some remaining technical hurdles to get past before they can go out.

  • If you don’t receive an invitation tomorrow — fear not — you’ll be automatically added to a waiting list. We’ll be inviting people from the waiting list as we hear back from the first group of invitees. We’re aiming to have the process completed by Friday, July 1st.

I’ll post further updates on the invitation process and progress here throughout the week.

Now, on to the learning lab itself. You may be asking: What should I expect if I’m a learning lab participant? Well, here’s a preview.

  • Just a reminder: the Knight-Mozilla learning lab will run from July 11th to August 5th, 2011. Those who receive an invitation will be expected to commit at least 10 hours a week to the lab.

  • The lab will focus on four key themes, one each week, which will be roughly: How to work open: the secret sauce of Mozilla’s software and community; How to take an idea from concept to product; Challenges that newsrooms and news users face today; and Journalism is evolving: What journalism might look like tomorrow.

  • To explore those themes, each week will include two mandatory lectures — Monday and Wednesday at roughly 8AM Pacific Time, 11AM Eastern Time, 4PM British Summer Time, and 5PM Central European Summer Time — and one optional lecture at the same time on Friday. Each lecture will be approximately 30 minutes, with 30 minutes for Q&A.

  • The optional lectures will be just as amazing as the mandatory lectures, but will focus a bit more on practical skills and understanding vs. the big picture of the mandatory lectures. For example, if you arrive at the lab as a programmer with lots of product development experience, but little or no understanding of what a journalist actually does, we’ll have a lecture for you. And vice-versa: if you arrive as a tech-savvy journalist but with little experience building software, we’ll have a lecture for you too.

  • Each week, participants will be asked to complete an assignment that builds on the information from the lectures. Participants will submit assignments by publishing them on their own blogs. So, if you don’t have one yet, get moving. ;)

  • Last but not least will be your lab project. The lab project is the ‘big idea’ that you’re working on — perhaps your challenge submission; perhaps something new — throughout the four-week lab. You’ll present your lab project for review at the conclusion of the lab.

The lab will be delivered entirely online. We’ll be using the Peer-to-Peer University platform for course material, assignments, and discussions. The lectures will be delivered synchronously (and attendance will be taken!) using the rather awesome Big Blue Button platform.

Your ship’s crew for the lab will be Alex and yours truly, and four excellent course shepherds that I’ll be introducing over the coming days.

I’ll be posting updates as we confirm the remaining lecture spots, and as we make progress with getting the invitations out. Stay tuned and let me know if you have any questions.

June 14 2011

15:19

Bringing out the big guns: @emilybell @richgor @reporterslab to advise on @KnightMozilla learning lab curriculum.

Rich Gordon, Sarah Cohen, and Emily Bell

For the last couple of months, I’ve been quietly whittling away at the master plan for the first “learning lab” of the Knight-Mozilla News Technology Partnership.

Without a doubt, this is the most personally exciting aspect of my involvement with the project: starting in July, I will lead a group of sixty smart individuals through an intense four-week online learning experience.

During the lab we will unleash a fire hose of thinking about the collision of technology and journalism, about working open, and the process of taking software from idea to product.

We have big ambitions for these learning labs, obviously. So, to ensure that the curriculum meets those ambitions and tangible learning objectives, I reached out for help from some of the smartest people I know who are already teaching at the nexus of technology, journalism, and news.

Incredibly, they said yes!

So, I’m very excited to announce that Emily Bell, Sarah Cohen, and Rich Gordon have agreed to make some personal time available to help out as curriculum advisers:

  • Emily Bell: Emily Bell is director of the Tow Center for Digital Journalism at Columbia’s Graduate School of Journalism. She previously worked for the Observer and then the Guardian for 18 years, setting up MediaGuardian.co.uk in 2000 and becoming editor-in-chief of Guardian Unlimited in 2001. In September 2006, she was promoted to director of digital content for Guardian News & Media.

  • Sarah Cohen: Sarah Cohen directs the newly launched ReportersLab.org and is the Knight Professor of the Practice at Duke University’s DeWitt Wallace Center for Media and Democracy. She was a reporter and editor at The Washington Post for more than 10 years, working for investigative teams and projects across departments. She has won most major national journalism awards, including the Pulitzer Prize for Investigative Reporting.

  • Rich Gordon: Rich Gordon is a professor and director of digital innovation at Northwestern University’s Medill School. At Medill, he launched the school’s graduate program in new media journalism. He has spent most of his career exploring the areas where journalism and technology intersect. At The Miami Herald, he was among the first generation of journalists to lead online publishing efforts at newspapers.

Next week we start the hard work of determining what homework to assign. Yes, that’s right, there will be homework … and required reading … and mandatory lectures.

Just because it’s online doesn’t mean it’s not going to kick your ass and blow your mind.

P.S. We’ve been lining up some incredible lectures, and I’ll be announcing some new additions in the coming days. Stay tuned! :)

May 30 2011

16:16

Six @KnightMozilla lightning pitches from Chicago-area #HacksHackers

After peeking inside the Chicago Tribune’s news apps team last week, I descended deep, deep into the Tribune’s basement. Once home to printing presses, the lower levels of the Tribune tower were about to host a conversation about the future of news, courtesy of Hacks/Hackers Chicago and the Knight-Mozilla News Technology Partnership.

Fueled only by pizza, sugary sodas, and a mercilessly short presentation, these brave hacks and hackers put pen to paper and brainstormed how to improve news experiences on the open Web.

A mere thirty minutes later, they were asked to present those ideas back to the group. Here they are:


I’m excited about what the MoJo team has been able to do via these ‘design jam’ events with the Hacks/Hackers community and our news partners. It’s more than just getting the word out about the innovation challenges: we’re helping to build community and conversations around the field of news innovation that will have impact for years to come.

A big thanks is due to Trib staffers Joe Germuska and Chris Groskopf, and Medill’s Rich Gordon for organizing this event. And to the following folks who made the event possible by showing up and really participating:

Thanks again, folks. If I missed your name, please let me know.

May 25 2011

15:42

A peek inside the @TribApps Team at the Chicago Tribune.

Open-Web innovation appears to be the name of the game in the Chicago Tribune’s News Applications department. I had a chance to sit down with Joe Germuska, Christopher Groskopf, and Brian Boyer from the @TribApps team yesterday in Chicago, and I had a few questions on my mind:

  • What is the scope of their work? What do they work on day-to-day, week-to-week, and month-over-month?
  • How does the news apps team interface with the editorial and other departments?
  • What is the experience of being an island inside a ‘traditional’ or ‘legacy’ news organization?

The scope of this team’s work is nothing short of awe-inspiring. They’re responsible for a wide range of projects: from classic ‘news apps’ like the 2010 Illinois School Report Cards to the unlikely job of deploying a massive number of WordPress sites to power the TribLocal.com network.

Nonetheless, they still have the time and opportunity to work on forward-thinking initiatives like the Chicago Breaking News Live Web app, and to release tools like the Newsapps Boundary Service for other newsrooms to build on.

Through all of these varying demands, open-web thinking seems to permeate everything they do. For example:

  • Chris shares his experiences building news apps with big data for other organizations to learn from;
  • Joe is collaborating with other newsrooms and news apps developers to build tools that will make it easier for reporters to explore and make sense of census data (Joe, do you have a link for me?)
  • The whole team is focused on releasing re-usable code and building a body of knowledge about how to handle the unique needs of a newsroom.

As for the advantages of working in a nimble team like this, Brian put it succinctly when he said “we can roll a new rig every day to improve how we do our development.” Translation: even in the real-world environment of a newsroom, with deadlines and deliverables looming, and despite the challenges of their IT department, this team is able to rapidly experiment and test new ideas.

Interestingly — and even though the team was started by individuals with a journalism-first background — new team members have come to the job with more technology and computer science experience than traditional journalism chops.

I was curious about this from the perspective of the Knight-Mozilla fellows who will be heading into newsrooms this fall, and who might have similar backgrounds.

If the @TribApps team is any indication, I think our fellows will have a fighting chance at survival.

May 24 2011

14:15

Defining journalism on the open Web: Six ideas.

Here’s a mental exercise: Let’s brainstorm a list of the changes that define what journalism will look like tomorrow, or — better yet — let’s answer the question ‘what is journalism on the open Web?’

Below are six ideas to start the exercise with. None are original or entirely new. Many are stolen from people much smarter than I am about such things. Each idea speaks to a shift that is underway already, or about to begin, in most professional news organizations.

Maybe you’re experiencing one of these shifts? Perhaps you have your own to share? I hope that you’ll add to the list, or the conversation in some way: maybe we can build a comprehensive definition of ‘journalism on the open Web’ and share it with the world.

Journalism today            -->  Journalism tomorrow
-------------------------------------------------------------
Publishing is the end       -->  Publishing is the beginning
Reporter talks to sources   -->  Sources go direct
Markets are conversations   -->  Journalism is a conversation
Curate the Web              -->  Re-mix the Web
The perfect CMS             -->  The Web *is* the CMS
Thinking about the Web      -->  Web thinking

Publishing is the beginning: On the open Web, the act of publishing something is the beginning of the conversation. It’s the first step toward creating a community, engaging with ideas in the open, and providing a platform for others to build on top of. It isn’t a simple act of Rinse. Wash. Repeat. on a never-ending 24-hour cycle that starts and stops when the words go to print.

Sources go direct: This is a phrase coined by the ‘irascible gadfly’ Dave Winer to document the disintermediation of journalists and news organizations in the conversation between those with information and those who want the information. This disintermediation is made possible by the open Web and the open-source software that powers it, and it’s a trend that is only going to continue.

Journalism is a conversation: More than ten years ago, David Weinberger wrote that “markets are conversations” in the Cluetrain Manifesto, a statement which predicted that walls would be torn down between the people inside of organizations and those outside. Perhaps it has taken longer for the message to penetrate the thick walls of the Daily Bugle, but the day has come to accept that journalism is also a conversation, and those walls will come down too.

Re-mix the Web: Today it is possible for those with limited technical skills to curate the Web. Curation is just starting to be seen as something that journalism professionals need to learn how to do. Tomorrow, however, it will be possible to re-mix the Web: to create entirely new experiences from the component parts. That is what the Hackasaurus project is teaching kids today. Today’s kids are tomorrow’s news users — be ready.

The Web is the CMS: NPR’s Middle East uprising superstar journalist Andy Carvin doesn’t need a better content-management system to make better journalism. On the contrary, the Web is his content-management system. Re-tooling the back-office IT in news organizations is the wrong problem to focus on — those systems were not designed for rapid change — a complete reboot is necessary, and the new ‘back office’ will use the open Web as the kernel, operating system, and publishing tool.

Web thinking: Emily Bell calls it “being of the Web, not just on the Web,” but — taking a phrase from the ten-year-old Web of Change community — I like to call it Web Thinking. This isn’t just about being ‘digital first.’ It’s about looking outside the newsroom, relinquishing some control, playing some new roles — like convener and connector — and moving at Internet speed. There’s so much more, but that’s a whole post of its own.

Those are the six changes that are on my mind today. How about you: What changes and shifts are you experiencing?

May 18 2011

13:43

You must be the conversation you want to see in the world

“A great community isn’t something that you just set up and periodically patch. Running a great community is a full-time job, not a weekend hack project.” — Alex Payne

The last week was a valuable learning moment. The launch of the Beyond Comment Threads challenge stirred up a lot of conversation around the Web: on sites like Slashdot and Hacker News, and also on the MoJo community list.

Around the same time, I was busy kicking the hornets’ nest again with a post over on the PBS MediaShift Idealab (related Hacker News thread).

It was an incredible opportunity to see the potential of online discussion, comments, and debate applied to the very challenges that have been presented:

  • Re-think the relationship between news users and producers;
  • Demonstrate new forms of user interaction with news;
  • Push beyond the ways we currently think about comments and online debate.

Meanwhile, I’ve been speaking with a number of publishers about the tension between their aspirations for discussion in the context of news, and the realities that one must face when the comment switch is flipped to the on position.

I’ve tried to distill some key themes below, but I’m hoping that you can also weigh in with your own experiences.

  1. The “Eyes on the Street” theory still holds online: Most publishers now agree that it’s critical for them, their staff, and the authors of the content to play a role in the community that they are convening at the end of their articles. Without visibility and natural surveillance, comment threads can quickly become a no-man’s land.

  2. There is no free content: CP Scott may have said that “Comment is free,” but convening the specific type of online discussion & debate that many publishers aspire to have on their sites comes with a cost. The cost of having moderators, community policing tools, and — in many cases — the liability insurance quickly starts to add up. For many sites with active comment threads, just reviewing the comments that are reported as ‘offensive’ can take up significant time, let alone reading through to look for comments that are insightful, informative, or contain new information.

  3. Publishers & authors are still ‘on top’: No matter how you slice it, the pristine words of the bourgeoisie & intellectuals still sit high above the comments of the unwashed masses, the rabble, the proletariat (how these filthy ‘wage slaves’ have time to comment all day continues to defy all explanation). In all seriousness, this visual presentation can work to re-create the classic divides in society, with both groups feeling inaccurately reflected or simply not respected.

  4. Comments become the culture of a site: If a publisher is lucky enough to become the flash point for lively conversations — especially conversations that happen between commenters, and not just ‘up’ toward the original article — it often becomes evident that a specific culture starts to emerge. It is that emergent culture that becomes the environment that other passers-by (and, um, potential advertisers) use to assess and evaluate the community. Is it a ghetto full of broken windows? Or is it a bohemian coffee house brimming with spirited debate? It is this culture that is both the risk and reward for publishers.

To keep up with expectations and aspirations, publishers appear to have two choices:

  1. Create better systems: This is the focus of the current Knight-Mozilla innovation challenge, and is often a controversial option. There rarely is a one-size-fits-all solution, and interventions that work incredibly well in one context can easily fail in others. What looks visually uncomplicated to one may appear like an inaccessible mess to another. Most worryingly, I fear that publishers looking for a silver bullet will turn to “real names” as the only answer, and that the open web will lose the identity battle, while commenters lose the choice to be anonymous.

  2. Create better commenters: It is this idea that intrigues me the most today. What does it mean to create better commenters? Is it simply the badges and reward systems that sites like Huffington Post are experimenting with? Is it an extension of the kinds of ideas that the Sacramento Press is working on, where contributors earn virtual accreditation by attending workshops? Or is it something else entirely, where those who comment have to pay or earn their spot on the virtual podium? Or perhaps a system where one can endorse another, similar to sites like LinkedIn?

What are your experiences?

May 13 2011

17:50

Comments are dead. Long live comments!

Cross-posted from PBS MediaShift Idea Lab

Let’s face it — technically speaking, comments are broken. With few exceptions, they don’t deliver on their potential to be a force for good.

Web-based discussion threads have been part of the Internet experience since the late 1990s. However, the form of user commentary has stayed fairly static, and — more importantly — few solutions have been presented that address the complaints of publishers, commenters, or those of us who actually read comments.


Publishers, for the most part, want software that will stamp out trolls and outsource the policing to the community itself (or, failing that, to Winnipeg). Commenters, on the other hand, want a functional mini-soapbox from which to have their say — preferably something that is easy to log into and has as few limitations as possible (including moderation). The rest of us are left to deal with the overly complicated switches, flashing lights, and rotary knobs that we’re expected to know how to use to dial in to the conversation so it’s just right for our individual liking, not too hot and not too cold.

Thankfully, there is an opportunity today to really innovate. New capabilities in the browser and emerging standards provide an opportunity to completely rethink the relationship between news users and producers — between those who comment and those who are commented upon — and to demonstrate new forms of user interaction that are atomic, aggregated, augmented, or just plain awesome.

That’s why our next Knight-Mozilla Challenge is for you to come up with a more dynamic space for online discussions. You can submit your idea here, and you could win a trip to Berlin to compete with other innovators — or even win a year-long fellowship in a newsroom.

Publishers’ dirty little secret

The truth is, many news publishers don’t actually think comments are a good thing. Or if publishers won’t go so far as to admit that, they’ll usually agree that the so-called return on investment when enabling comments, discussion and debate on their site is not entirely clear.

Therein lies the biggest tension in the “beyond comment threads” challenge: At the end of the day, those who comment on stories, and those who have their articles commented upon, often have very different views on the topic.

Ask publishers about the purpose of comments and they’ll often speak to the very aspirations of independent journalism and a free press: democratic debate, informed citizens, and free speech. Ask them about the reality of comment threads on their site, and a very different picture is likely to emerge.

On the other end of the spectrum are the people who comment. No doubt, for some, it’s their very comment — or comments, in the case of those who actively comment — that creates the value on a given page, not the editorial. For others, the value is in the conversation that coalesces or unfolds in the context of a given story — but, to ease the minds of publishers, always at a safe distance from the “real content,” usually at the end of a story, or well below the fold.

In between are the rest of us, the people who benefit from the tension between publishers and commenters. We rely on the individuals who choose to comment to add context and clarifications, do extra fact-checking (a skill that’s often a casualty of newsroom cutbacks), and, ultimately, to hold the publisher accountable — publicly — using the publishers’ own soapbox to do so. At the same time, we rely on publishers and reporters to start the conversation and keep it civil.

No wonder publishers are still asking questions about the value of comments: It takes a lot of work to build a successful online community, and the outcome is not guaranteed to work in their favor.

The Slashdot Era

Sometime in late 1997 or 1998, a bunch of hackers who agreed that commenting was broken (or — at that time — just simply missing) on most news sites decided to take matters into their own hands. Enter the era of Slashdot, an early example of the kind of sites that would begin to separate church from state by disconnecting the discussion from the content being discussed. These sites — with lots of comment and little content in the editorial sense — threw some powerful ideas into the mix: community, identity and karma (or incentives).


Fast-forward to today, more than 10 years later, and not much has changed.

Newer sites, like Hacker News and Reddit, continue in the Slashdot tradition, but don’t break much new ground, nor attempt to innovate on how online discussion is done. At the same time, publishers — realizing the conversation was increasingly happening elsewhere — have improved or re-tooled their commenting systems in the hope of keeping the discussion on their sites. But instead of innovating, they’ve simply imitated, and little real progress has been made.

In an era where Huffington Post is the “state-of-the-art” for online discussion, I ask myself: What went wrong?

Enter the innovators’ dilemma

Meanwhile, as the events above unfolded, the rest of the web went on innovating. As publishers and comment-driven communities lamented their situation and pondered how to improve it, the conversation left those sites entirely. The people formerly known as the audience were suddenly empowered to have their say almost anywhere, via micro-blogs, status updates, and social networks.

It was the classic innovators’ dilemma at work. While focusing on how to make commenting systems better, many people didn’t see the real innovation happening: Everyone on the Internet was given their own, personal commenting system. Services like Twitter and Identi.ca solved the most pressing issue for commenters: autonomy. Services such as Facebook and LinkedIn addressed another problem: identity.

Unfortunately, not all innovation is good. Local improvements do not always equal systemwide benefits. That is the situation we are left with today: Comments, discussion and identity are scattered all over the web. Even worse, the majority of what we as individuals have to say online is locked in competing, often commercial, prisons — or “corporate blogging silos” — and is completely disconnected from our online identity.

The Sixth Estate

The opportunity in the beyond comment threads challenge is to radically re-imagine how we, the users, relate to the people producing news, and to each other. It’s time to get out of a 10-year-old box and completely rethink the current social and technical aspects of online discussion and debate. It’s time to stop thinking about faster horses, and start thinking about cars (or jetpacks!).

To get specific, let’s start with a list of great experiences that are made possible with comments:

  • Providing value to the publisher: Think about the times that comments have revealed new facts, uncovered sources, or pointed out easily correctable errors. This exemplifies the opportunity for a community to provide value back to a publisher, and helps answer the return-on-investment question. Recently, during the uprising in the Middle East and the earthquake in Japan — when several news organizations introduced real-time streams that mixed editorial content with user-submitted comments — we witnessed a glimmer of something new. What does it look like to push those ideas to their extremes?

  • Publishers and users working together: Sites like Stack Overflow (and the other sites in that network) introduced a new standard for directed conversations. More than just question-and-answer forums, these sites attempted to leverage the sense of community on sites like Slashdot and Hacker News, but also direct that energy toward a socially useful outcome, such as collective wisdom. If the Press is a “key social institution that helps us understand what’s going on in the world around us,” then we are all responsible for making it better — reporters, publishers and news readers. So what does that collaboration and the goal of collectively assembled wisdom (other than Wikipedia, of course) look like?

  • Holding publishers, or authors, accountable: If the publishers’ aim is to stamp out trolls, the commenters’ equivalent goal is to squelch bad reporting. Many readers expect news stories to be factually accurate, fair and balanced, and free of hidden agendas or unstated personal opinions. Comments were the first opportunity to quickly point out shortcomings in a story (versus a letter to the editor that may or may not be printed some days or weeks later). Think of that span — an immediate retort versus an edited response published well after the fact — and project it into the future, and then ask yourself, “How far could an idea like MediaBugz go?”

The last example on my list has to do with providing value to the community and learning together. How do we address the myriad concerns on both sides of the fence and come out the other end with something that isn’t broken? How can the historical tension between the need for anonymity and the perceived advantages of a real identity be overcome using our knowledge and the tools of the open web? In what way can the visual language of online discussion be taken beyond “thumbs up” or “thumbs down?” And what does it look like to enable commenting on the HTML5 web, which is increasingly driven by video, audio, animations and interactivity?

In those rare inspirational moments — when two sides of a conversation come together and actually listen — there is the nucleus of the idea that inspired the world to embrace comments in the first place. How do we weave that idea into the web of tomorrow? How do we turn up the volume on everything we love about comments, discussion and debate online, without losing what we love in the process?

That, if you accept it, is your mission.

Cross-posted from PBS MediaShift Idea Lab. Feel free to comment here, or on the original.

May 02 2011

15:55

Win a Newsroom Fellowship by Rethinking Video Storytelling

Recently, we've seen a huge change in video online. The advent of web native video makes it possible to mash up moving images with social media, tie clips to data from across the web or, more simply, create transcript-based interfaces for navigating long pieces of video. Yet, despite these capabilities, we've seen almost nothing in the way of new kinds of storytelling. Telling stories with video online today looks pretty much the same as it did when I used to shoot local TV news 20 years ago.

This is something we hope to change with the first Knight Mozilla news innovation challenge topic. We're inviting hacks and hackers from around the world to answer the question: How can new web video tools transform news storytelling? People with the best ideas will get to bring them to life with a full-year paid fellowship in a world-leading newsroom.

The Next 'Montage Moment'

What do I mean by transform storytelling? Just that: taking today's online video tools beyond the mechanical and obvious, bringing people, ideas and events to life in ways we haven't seen before. To get your imagination going, think back to how visual storytelling emerged in the world of cinema.

The Lumiere brothers made some of the world's first films: "Workers Leaving the Lumiere Factory," "Arrival of a Train at the Station," etc. The Lumieres' fixed frame wasn't much to write home about in terms of story. But seeing moving photographs was hugely impressive to most people at the time. It was a technical wonder.


It took 25 years for Sergei Eisenstein to grab hold of this technical wonder and then say: Wow, I bet we could tell a more powerful story if we varied the shots a bit and then edited them together. With "The Battleship Potemkin," he invented the visual language we still use to tell stories today: montage.


The fundamental technology didn't change in those 25 years. The Lumieres knew how to splice film and move the camera around. Eisenstein's breakthrough was to use basic film technology to tell a story in a new and creative way -- which is very much like where we are with web native video today: huge technological potential just waiting to be seized for creative storytelling. What we need now is a "montage moment" for the web era.

Open Video: A Huge Palette of Awesomeness

The potential of web native video truly is awesome: We can now link any frame within any video to any other part of the web. This was hard to do in the world of Flash video. The introduction of the HTML5 <video> tag over the last two years has made it easy.
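
To make that concrete, here is a minimal sketch of the kind of thing the <video> tag enables: a timeupdate listener that surfaces related web content when playback reaches a given timecode. The file name, timecodes, and URLs below are placeholders, not taken from any of the demos mentioned in this post.

<video id="clip" src="report.webm" controls></video>
<div id="related"></div>
<script>
// Surface a related link once playback passes each cue's timecode.
var video = document.getElementById("clip");
var cues = [
  { time: 12, html: "<a href='http://example.com/montage'>Background on montage</a>" },
  { time: 45, html: "<a href='http://example.com/sources'>Primary sources for this segment</a>" }
];
video.addEventListener("timeupdate", function () {
  cues.forEach(function (cue) {
    if (video.currentTime >= cue.time && !cue.shown) {
      cue.shown = true; // inject each link only once
      document.getElementById("related").innerHTML += "<p>" + cue.html + "</p>";
    }
  });
});
</script>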

Early experiments and demos hint at the potential of this new open video palette. With the recent State of the Union address, PBS used Mozilla's popcorn.js tools to synchronize its live blogging with the timecode of the president's speech:
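
For flavor, here is roughly what that pattern looks like with popcorn.js. This is a hand-rolled sketch built on the library's basic footnote plugin, not PBS's actual code; the file paths, timecodes, and text are invented.

<video id="speech" src="speech.webm" controls></video>
<div id="liveblog"></div>
<script src="popcorn-complete.js"></script> <!-- popcorn.js with its bundled plugins; the path is a placeholder -->
<script>
// Pin a live-blog entry to a span of the video's timecode.
var pop = Popcorn("#speech");
pop.footnote({
  start: 30,          // seconds into the speech
  end: 90,
  text: "Live blog: fact-checking the jobs figure cited in this passage.",
  target: "liveblog"  // id of the element that receives the entry
});
pop.play();
</script>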


The same tools have been used to show how transcripts can be used to search and then navigate immediately to anywhere within a long clip. This demo from Danish public radio shows how this can work with web native audio. The same thing could easily be done with video.
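
A bare-bones version of the transcript idea takes only a few lines: clicking a sentence seeks the media element to the matching timecode. Again, this is a sketch with invented times and text, not the Danish demo itself.

<video id="interview" src="interview.webm" controls></video>
<p id="transcript">
  <span data-time="0">We began reporting this story in March.</span>
  <span data-time="74">The first documents arrived in an unmarked envelope.</span>
</p>
<script>
// Jump the video to the timecode attached to whichever sentence was clicked.
var video = document.getElementById("interview");
document.getElementById("transcript").addEventListener("click", function (e) {
  var t = e.target.getAttribute("data-time");
  if (t !== null) {
    video.currentTime = parseFloat(t);
    video.play();
  }
});
</script>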


Of course, the big potential is in connecting video to the massive amount of media and data that already exist all across the web. Imagine if you could weave the sum of all human knowledge seamlessly into your news story or documentary. That's now possible. This book report demo shows the basic concept, with a student connecting her narration to Wikipedia articles and news reports.


Google and Arcade Fire took this idea a step further, pulling moving images from Street View and Google Earth into a rock video. If you enter your ZIP code, your neighborhood becomes a character in the narrative in real time.


The Japanese band Sour's "Mirror" video went even further, pulling you into the video. Enter your Facebook ID and turn on your camera, and then you become a character in the band's video -- again, in real time.


These demos make an important point: The line between what's in the frame and what's on the web is dissolving. Or, put nerdily, timecode and hypertext are fusing together. They are becoming one.

Are You the Next Eisenstein?

Despite all the niftiness, there is a problem: These demos do not yet tap the open video palette to tell stories in meaningfully new ways. Open video tools like Mozilla's Popcorn and Butter provide a starting point. But they need people with a creative flair for both web technology and storytelling to bring them to life. Which is exactly why Knight and Mozilla threw out "how can new web video tools transform news storytelling?" as our first MoJo challenge question.

We're hoping that you -- or someone you know -- is up to this challenge. If you think you are, you should enter the MoJo innovation challenge. All you need to do is draw up a napkin sketch showing how you might tell a story in a new way with open video, write a brief paragraph about it, and then submit it online. If your idea is solid, you've got a good chance at a fellowship where you could actually bring it to life at Al Jazeera, the BBC, the Guardian, Die Zeit, or the Boston Globe. Who knows? Maybe you could be the Eisenstein of open video.

Find out more about Knight Mozilla News Innovation Partnership on the MoJo website. Or enter the MoJo news innovation challenge today.
