
November 18 2010

18:06

Google News Meta Tags Fail to Give Credit Where Credit Is Due

Far be it from me to question the brilliance of Google, but in the case of its new news meta tagging scheme, I'm struggling to work out why it is brilliant or how it will be successful.

First, we should applaud the sentiment. Most of us would agree that it is a Good Thing to be able to distinguish between syndicated and non-syndicated content, and to link back to original sources. So it is important to recognize that both of these are -- in theory -- important steps forward, for news organizations and the public alike.

But there are a number of problems with the meta tag scheme that Google proposes.
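For reference, the two meta tags Google is proposing sit in a page's head and look roughly like this (the URLs here are illustrative):

```html
<!-- Identifies the preferred version of a syndicated article -->
<meta name="syndication-source" content="http://example.com/wire_version.html">

<!-- Points to the article that first broke the story -->
<meta name="original-source" content="http://example.com/scoop.html">
```

Note that both tags describe the whole page, not any one article on it.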

Problems With Google's Approach

Meta tags are clunky and likely to be gamed. They are clunky because they cover the whole page, not just the article. As such, if the page contains more than one article or, more likely, contains lots of other content besides the article (e.g. links, promos, ads), the meta tag cannot distinguish between them. More importantly, meta tags are what many people have traditionally used to game the web. Stuff your page with meta tags about your content, the theory goes, and you will get bumped up the search engine results. Rather than addressing this problem, the new Google system is likely to make it worse, since adding the "original source" meta tag will now be assumed to carry material value.

Though there is a clear value in being able to identify sources, distinguishing between an "original source" as opposed to a source is fraught with complications. This is something that those of us working on hNews, a microformat for news, have found when talking with news organizations. For example, if a journalist attends a press conference then writes up that press conference, is that the original source? Or is it the press release from the conference with a transcript of what was said? Or is it the report written by another journalist in the room published the following day? Google appears to suggest they could all be "original sources"; if this extends too far then it is hard to see what use it is.

Even when there is an obvious original source, like a scientific paper, news organizations rarely link back to it (even though it's easy to use a hyperlink). The BBC -- which is generally more willing to source than most -- has historically tended to link to the front page of a scientific publication or website rather than to the scientific paper itself (something the Corporation has sought to address in its more recent editorial guidelines). It is not even clear, in the Google meta-tagging scheme, whether a scientific paper is an original source, or the news article based on it is an original source.

And what about original additions to existing news stories? As Tom Krazit wrote on CNET:

The notion of 'original source' doesn't take into account incremental advances in news reporting, such as when one publication advances a story originally broken by another publication with new important details. In other words, if one publication broke the news of Prince William's engagement while another (hypothetically) later revealed exactly how he proposed, who is the "original source" for stories related to "Prince William engagement," a hot search term on Google today?

Differences with hNews

Something else Google's scheme does not acknowledge is that there are already methodologies out there that do much of what it is proposing, and that are in widespread use (ironic, given the title of Google's blog post: "Credit where credit is due"). For example, our News Challenge-funded project, hNews, already addresses the question of syndicated versus non-syndicated content, and in a much simpler and more effective way. Google's meta tags do not clash with hNews (both conventions can be used together), but neither do they build on its elements or work in concert with them.

One of the key elements of hNews is "source-org," the source organization from which the article came. Not only does this go part-way toward the second, "original source" tag Google suggests, it also cleverly avoids the difficult question of how to credit a news article that may be based on wire copy but has been adapted since -- a frequent occurrence in journalism. The Google syndication method does not capture this important difference. hNews is also already the standard used by the largest American syndicator of content, the Associated Press, and by more than 500 professional U.S. news organizations.
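To make that concrete, here is a minimal sketch of an hNews-marked-up article (the article details are invented; the class names -- "hnews," "source-org," and the hAtom fields they build on -- are the microformat's own):

```html
<div class="hnews hentry">
  <h1 class="entry-title">Prince William Announces Engagement</h1>
  <p class="byline author vcard"><span class="fn">Jane Smith</span></p>
  <!-- source-org: the organization the article came from -->
  <div class="source-org vcard"><span class="org fn">The Associated Press</span></div>
  <abbr class="published" title="2010-11-16T11:00:00Z">November 16, 2010</abbr>
  <div class="entry-content">
    <p>Article text...</p>
  </div>
</div>
```

Because "source-org" travels with the article markup itself rather than sitting in the page's head, it can credit the wire service even when the rest of the page changes.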

It's also not clear if Google has thought about how this will fit into the workflow of journalists. Every journalist we spoke to when developing hNews said they did not want to have to do things that would add time and effort to what they already do to gather, write up, edit and publish a story. It was partly for this reason that hNews was made easy to integrate into publishing systems; it's also why hNews marks information up automatically.

Finally, the new Google tags only give certain aspects of credit. They give credit to the news agency and the original source but not to the author, or to when the piece was first published, or how it was changed and updated. As such, they are a poor cousin to methodologies like hNews and linked data/RDFa.

Ways to Improve

In theory Google's initiative could be, as this post started by saying, a good thing. But there are a number of things Google should do if it is serious about encouraging better sourcing and wants to create a system that works and is sustainable. It should:

  • Work out how to link its scheme to existing methodologies -- not just hNews but linked data and other meta tagging methods.
  • Start a dialogue with news organizations about sourcing information in a more consistent and helpful way.
  • Clarify what it means by original source and how it will deal with different types of sources.
  • Explain how it will prevent its meta-tagging system from being misused such that the term "original source" becomes useless.
  • Use its enormous power to encourage news organizations to include sources, authors, etc. by ranking properly marked-up news items over plain-text ones.

It is not clear whether the Google scheme -- as currently designed -- is more focused on helping Google with some of its own problems sorting news or with nurturing a broader ecology of good practice.

One cheer for intention, none yet for collaboration or execution.

July 08 2010

20:14

News Organizations Must Innovate or Die

People in news don't generally think of innovation as their job. It's that old CP Snow thing of the two cultures, where innovation sits on the science not the arts side. I had my own experience of this at the American Society of Newspaper Editors conference in Washington a couple of months ago.

After one of the sessions I spotted an editor whose newspaper had adopted hNews (the Knight-funded news metadata standard we developed with the AP). "How's it going?" I asked him. "Is it helping your online search? Are you using it to mark up your archive?"

Before I had even finished the editor was jotting something down on his notepad. "Here," he said, "Call this guy. He's our technical director -- he'll be able to help you out."

Technology and innovation still remain, for most editors, something the techies do.

So it's not that surprising that over much of the last decade, innovation in news has been happening outside the news industry. In news aggregation, the work of filtering and providing context has been done by Google News, YouTube, Digg, Reddit, NowPublic, Demotix and Wikipedia...I could go on. In community engagement, Facebook, MySpace, and Twitter led the way. In news-related services (the ones that tend to earn money) it has been Craigslist, Google AdWords and now mobile services like Foursquare.

Rather than trying to innovate themselves, many news organizations have chosen instead to gripe from the sidelines. Rupert Murdoch called Google a "thief" and a "parasite." The U.K.'s Daily Mail has published stories about how using Facebook could raise your risk of cancer, referred to someone as a "Facebook killer" (as in murderer), and runs scare stories about Facebook and child safety. And let's not even start to take apart various news commentators' dismissive attitude towards Twitter.

When they have seen the value of innovation, news organizations have tended to try to buy it in rather than do it themselves, with decidedly mixed results. Murdoch's purchase of MySpace initially looked very smart, but now, as John Naughton wrote over the weekend, it "is beginning to look like a liability." The AOL/Time Warner mashup never worked. Associated Newspapers in the U.K. has done slightly better by making smaller investments in classified sites.

Most news organizations do not see innovation as a critical element of what they do. This is not that unexpected, since they spend their day jobs gathering and publishing news. Unfortunately for them, if innovation doesn't become more central to their DNA, they are liable to become extinct.

Speed and Unpredictability of Innovation

At last week's Guardian Activate Summit, Eric Schmidt, Google's CEO, was asked what kept him awake at nights. "Almost all deaths in the IT industry are self-inflicted," Schmidt said. "Large-scale companies make mistakes because they don't continue to innovate."

Schmidt does not need to look far to see how quickly startups can rise and fall. Bebo was started in 2005, was bought by AOL in 2008 for $850 million, and then was sold again this month to Criterion Capital for a fee reported to be under $10 million.

The problem for Schmidt -- and one that is even more acute for news organizations -- is the increasing speed and unpredictability of innovation. "I'm surprised at how random the future has become," Clay Shirky said at the same Activate summit, meaning that the breadth of participation in the digital economy is now so wide that innovation can come from almost anyone, anywhere.

As an example he cited Ushahidi, a service built by two young guys in Kenya to map violence following the election in early 2008 that has now become a platform that "allows anyone to gather distributed data via SMS, email or web and visualize it on a map or timeline." It has been used in South Africa, the Democratic Republic of Congo, India, Pakistan, Gaza, Haiti and in the U.S.

He might also have cited Mendeley, a company which aims to organize the world's academic research papers online. Though only 16 months old, the service already has over 29 million documents in its library, and is used by over 10,000 institutions and over 400,000 people. It won a prize at Activate for the startup "most likely to change the world for the better."

The tools to innovate are much more widely available than they once were, meaning a good idea can now be conceived in Nairobi, Bangalore, or Vilnius, developed and launched there, and then spread across the world. "The future is harder to predict," Shirky said, "but easier to see."

That's why Google gives one day a week to its employees to work on an innovation of their choice (Google News famously emerged from one employee's hobby project). It is why foundations like Knight have recognized the value of competition to innovation. And it's why Facebook will only enjoy a spell at the peak.

Some Exceptions

There are exceptions in the news industry. The New York Times now has an R&D department, has taken the leap towards linked data, and published its whole archive in reusable RDF. The Guardian innovated with Comment is Free, its Open platform, and the Guardian Data Store. The BBC developed the iPlayer.

The Daily Telegraph had a go, setting up "Euston Partners" under then editor Will Lewis. (Although setting up an innovation center three miles away from the main office did not suggest it was seen as central to the future of the business.) The project was brought back in-house shortly after Lewis left the Telegraph in May 2010 and has been renamed the "Digital Futures Division."

But mostly, people in news don't really do innovation; they're too focused on generating content. Yet as the Knight Foundation has recognized, doing news in the same old way not only doesn't pay -- it doesn't even solve the democratic problems many of those in news are so rightly concerned about. For some people, FixMyStreet.com or its U.S. equivalent SeeClickFix is now more likely to give them a direct relationship with their council than the local newspaper.

News and media organizations have to realize that they are in the communications business, and being in that business means helping people to communicate. Giving them news to talk about is a big part of this, but it's not the only part. The sooner they realize this and start to innovate, the better chance they have of surviving the next couple of decades.

July 06 2010

14:00

The ASCAP example: How news organizations could liberate content, skip negotiations, and still get paid

Jason Fry suggested in a post here last week that current paywall thinking might be just a temporary stop along the way to adoption of “paytags — bits of code that accompany individual articles or features, and that allow them to be paid for.” But how? As Fry recognizes, “between wallet friction and the penny gap, the mechanics of paytags make paywalls and single-site meters look like comparatively simple problems to solve.”

I suggested a possible framework for a solution during a couple of sessions at the conference “From Blueprint to Building: Making the Market for Digital Information,” which took place at the University of Missouri’s Reynolds Journalism Institute June 23-25. Basically, my “what-if” consisted of two questions:

  1. What if news content owners and creators adopted a variation on the long-established ASCAP-BMI performance rights organization system as a model by which they could collect payment for some of their content when it is distributed outside the boundaries of their own publications and websites?
  2. And, taking it a step further, what if they used a variant of Google’s simple, clever, and incredibly successful text advertising auction system to establish sales-optimizing pricing for such content?

News publishers have been tying themselves in knots for the last few years deciding whether or not to charge readers for content, and if so, how much and in what fashion — micropayments, subscriptions, metered, freemium and other ideas have all been proposed and are being tested or developed for testing.

As well, publishers have complained about the perceived misuse of their content by aggregators of all stripes and sizes, from Google News down to neighborhood bloggers. They’ve expressed frustration (“We’re mad as hell and we are not going to take it anymore,” Associated Press chair Dean Singleton said last year), and vowed to go after the bandits.

But at the same time, many publishers recognize that it’s to their advantage to have their content distributed beyond the bounds of their own sites, especially if they can get paid for it. When radio was developed in the 1920s, musicians and music publishers recognized they would benefit from wider distribution of their music through the new medium, but they needed a way to collect royalties without each artist having to negotiate individually with each broadcaster.

A model from music

That problem was solved by using a non-profit clearinghouse, ASCAP (American Society of Composers, Authors and Publishers), which had been formed in 1914 to protect rights and collect royalties on live performances. Today the performance-rights market in the U.S. is shared between ASCAP, BMI (Broadcast Music Incorporated, founded by broadcasters rather than artists) and the much smaller SESAC (formerly the Society of European Stage Authors & Composers). Using digital fingerprinting techniques, these organizations collect royalties on behalf of artists whose works are performed in public venues such as restaurants and shopping centers as well as on radio and television stations and streaming services such as Pandora.

Publishers have put a lot of effort into trying to confine news content to tightly-controlled channels such as their own destination websites, designated syndication channels, apps, and APIs in order to control monetization via advertising and direct user payments. But when content moves outside those bounds, as it can very easily, publishers have no way to regulate it or collect fees — so they cry foul and look for ways to stop the piracy or extract payments from the miscreants.

Among the content-protection schemes, AP is rolling out News Registry, which it touts as a way of at least tracking the distribution of content across the web, whether authorized or not, and Attributor offers “anti-piracy” services by “enforcement experts” to track down unauthorized use of content. But for now, content misuse identified by these systems will require individual action to remove it or force payment. In the long run, that’s not a viable way to collect royalties.

Suppose, instead, that news publishers allowed their content to be distributed anywhere online (just as music can be played by any radio station) as long as it were licensed by a clearinghouse, similar to ASCAP and BMI, that would track usage, set prices, and channel payments back to the content creator/owner?

To do this, perhaps the paytags Fry suggested are needed, or perhaps publishers can learn from the music industry and use the equivalent of the digital fingerprints that allow ASCAP’s MediaGuide to track radio play. (The basic technology for this is around: AP’s News Registry uses hNews microtags as well as embedded pixels (“clear GIFs”); Attributor’s methodology is closer to the digital fingerprinting technique.)

How it could work

The system for broadcast and performance music payments is a three-way exchange consisting of (a) artists and composers, (b) broadcasters and performance venues, and (c) performance rights organizations (ASCAP and BMI).

In the news ecosystem the equivalents would be (a) content creators and owners, (b) end users including both individual consumers and “remixers” (aggregators, other publishers, bloggers, etc.); and (c) one or more content clearinghouses providing services analogous to those of ASCAP and BMI.

The difference between a news payments clearinghouse and the music industry model would be in scale, speed and complexity. In the news ecosystem, just as in the music world, there are potentially many thousands of content creators — but there are millions of potential end users, compared to a manageable number of radio stations and public performance venues paying music licensing fees. And there are far more news stories than musical units; they’re distributed faster and are much shorter-lived than songs. In the radio and public performance sphere, music content still travels hierarchically; that was true in the news business 20 years ago, but today news travels in a networked fashion.

To handle the exchange of rights and content in this vastly more complex environment, a real-time variable pricing model could be developed, benefiting both the buyers and sellers of content. Sellers benefit because with variable pricing, or price discrimination, sales and revenue are maximized, since content goods are sold across the price spectrum to various buyers at the price each is willing to pay — think of the way airline seats are sold. Buyers benefit because they can establish the maximum price they are willing to pay. They may not be able to buy at that price, but they are not subject to the take-it-or-leave-it of fixed pricing.

When it comes to news content, a variable pricing strategy was suggested last year by Albert Sun, then a University of Pennsylvania student and now a graphics designer with The Wall Street Journal. (Sun also wrote a senior thesis on the idea, called “A Mixed Bundling Pricing Model for News Websites.”) The graphs on his post do a good job of showing how a price-discrimination strategy can maximize revenue; it was also the subject of one of my posts here at the Lab.

A well-known real-time variable pricing arrangement is the Google AdWords auction system, which establishes a price for every search ad sold by Google. Most of these ads are shown to users at no cost to the advertisers; they pay only when the user clicks on the ad. The price is determined individually for each click, via an algorithm that takes into account the maximum price the advertiser is willing to pay; the prices other advertisers on the same search page are willing to pay; and the relative “Quality Score” (a combination of clickthrough rate, relevancy and landing page quality) assigned to each advertiser by another Google algorithm. It works extraordinarily well, not only for advertisers but for Google, which reaps more than $20 billion in annual revenue from it.

Smart economist needed

What’s needed in the news ecosystem is something similar, though quite a bit more complex. Like the Google auction, the buyer’s side would be simple: buyers (whether individuals or remixers such as aggregators) establish a maximum price they are willing to pay for a particular content product — this could be an individual story, video, or audio report, or it could be a content package, like a subscription to a topical channel. This maximum price is determined by an array of factors that will be different for every buyer, but may include timeliness, authoritativeness, relevance to the buyer’s interests, etc., and may also be affected by social recommendations or the buyer’s news consumption habits. But for the purposes of the algorithm, all of these factors are distilled in the buyer’s mind into a maximum price point.

The seller is the content creator or owner who has agreed to share content through the system, including having remixers publish and resell it. Sellers retain ownership rights, and share revenue with the remixer when a transaction takes place. The price that may be acceptable to a content owner/seller will vary (a) by the owner’s reputation or authority (this is analogous to Google’s assignment of a reputation score to advertisers), and (b) by time — since generally, the value of news content will drop quickly within hours or days of its original publication.

The pricing algorithm, then, needs to take into account both the buyer’s maximum price point and the seller’s minimum acceptable price based on time and reputation; and at least two more things: (a) the uniqueness of the content — is it one of several content items on the same topic (multiple reports on an event from different sources), or is it a unique report not available elsewhere (a scoop, or an enterprise story) — and (b) the demand for the particular piece of content — is it popular, is it trending up, or has it run its course?
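As a purely illustrative sketch of how those inputs might combine -- every function name, weighting, and parameter here is invented, since the actual algorithm remains to be designed:

```python
def seller_floor(base_price, reputation, hours_old, half_life=24.0):
    """Seller's minimum acceptable price: scaled by the owner's
    reputation and decaying as the story ages (assumed half-life)."""
    return base_price * reputation * 0.5 ** (hours_old / half_life)

def clearing_price(buyer_max, floor, uniqueness, demand):
    """Return a transaction price, or None if no sale occurs.
    uniqueness and demand are multipliers (1.0 = neutral)."""
    ask = floor * uniqueness * demand
    if ask > buyer_max:
        return None  # the buyer's ceiling is below the adjusted ask
    # Split the surplus between buyer and seller (an arbitrary choice)
    return (ask + buyer_max) / 2

# A fresh, unique scoop versus a two-day-old commodity story
scoop = seller_floor(base_price=0.50, reputation=1.2, hours_old=1)
stale = seller_floor(base_price=0.50, reputation=1.2, hours_old=48)
print(clearing_price(buyer_max=1.50, floor=scoop, uniqueness=1.5, demand=1.3))
print(clearing_price(buyer_max=1.00, floor=stale, uniqueness=0.8, demand=0.6))
```

Even this toy version produces the behavior described above: the same piece of content clears at different prices for different buyers, and "no sale" is a legitimate outcome.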

The outcome of this auction algorithm would be that different prices would be paid by different buyers of the same content — in other words, sales would occur at many points along the demand curve as illustrated in Sun’s post, maximizing revenue. But it’s also likely that the system would establish a price of zero in many cases, which is an outcome that participating publishers would have to accept. And of course, many remixers would choose to offer content free and step into the auction themselves as buyers of publication rights rather than as resellers.

In my mind, the actual pricing algorithm is still a black box, to be invented by a clever economist. For the moment, it’s enough to say that it would be an efficient, real-time, variable pricing mechanism, maintained by a clearinghouse analogous to ASCAP and BMI, allowing content to reach end users through a network, rather than only through the content creator’s own website and licensees. Like ASCAP and BMI, it bypasses the endless complexities of having every content creator negotiate rights and pricing with every remixer. The end result would be a system in which content flows freely to end users, the value of content is maximized, and revenue flows efficiently to content owners, with a share to remixers.

Clearly, such a system would need a lot of transparency, with all the parties (readers, publishers, remixers) able to see what’s going on. For example, if multiple news sources have stories on the same event, they might be offered to a reader at a range of prices, including options priced above the reader’s maximum acceptable price.

Protecting existing streams

Just as ASCAP and BMI play no role when musicians sell content in uncomplicated market settings the musicians can control — for example, concert tickets, CD sales, posters, or other direct sales — this system would not affect pricing within the confines of the content owner’s own site or its direct licensees. But by enabling networked distribution and sales well beyond those confines, it has the potential to vastly increase the content owner’s revenue. And, the system need not start out with complex, full-blown real-time variable pricing machinery — it could begin with simpler pricing options (as Google did) and move gradually toward something more sophisticated.

Now, all of this depends, of course, on whether the various tentative and isolated experiments in content pricing bear fruit. I’m personally still a skeptic on whether they’ll work well outside of the most dominant and authoritative news sources. I think The New York Times will be successful, just as The Wall Street Journal and Financial Times have been. But I doubt whether paywalls at small regional newspapers motivated by a desire to “protect print” will even marginally slow down the inevitable transition of readers from print to digital consumption of news.

A better long-term strategy than “protect print” would be to move to a digital ecosystem in which any publisher’s content, traveling through a network of aggregators and remixers, can reach any reader, viewer or listener anywhere, with prices set efficiently and on the fly, and with the ensuing revenue shared back to the content owner. The system I’ve outlined would do that. By opening up new potential markets for content, it would encourage publishers to develop higher-value content, and more of it. The news audience would increase, along with ad revenue, because content would travel to where the readers, listeners or viewers are. Aggregators and other remixers would be incentivized to join the clearinghouse network. Today, few aggregators would agree to compensate content owners for the use of snippets. But many of them would welcome an opportunity to legitimately use complete stories, graphics and videos in exchange for royalties shared with the content creators and owners.

Granted, this system would not plug every leak. If you email the full text of a story to a friend, technically that might violate a copyright — just like sharing a music file does — but the clearinghouse would not have the means to collect a fee (although the paytag, if attached, might at least track that usage). There will be plenty of sketchy sites out there bypassing the system, just as there are sketchy bars that have entertainment but avoid buying an ASCAP license.

But a system based on a broadly-agreed pricing convention is more likely to gain acceptance than one based on piracy detection and rights enforcement. Like ASCAP’s, the system would require a neutral, probably nonprofit, clearinghouse.

How could such an entity be established, and how would it gain traction among publishers, remixers and consumers? Well, here’s how ASCAP got started: It was founded in 1914 by Victor Herbert, the composer, who was well-connected in the world of musicians, composers, music publishers and performance venues, and who had previously pushed for the adoption of the 1909 Copyright Act. Herbert enlisted influential friends like Irving Berlin and John Philip Sousa.

Today, just as a few outspoken voices like Rupert Murdoch are moving the industry toward paywalls, perhaps a few equally influential voices can champion this next step, a pricing method and payments clearinghouse to enable publishers to reap the value of content liberated to travel where the audience is.

Acknowledgments/disclosures: The organizer of the conference where I had the brainstorm leading to this idea, Bill Densmore, has spent many years thinking about the challenges and opportunities related to networked distribution, payment systems, and user management for authoritative news content. A company he founded, Clickshare, holds patents on related technology, and for the last two years he has worked at the University of Missouri on the Information Valet Project, a plan to create a shared-user network that would “allow online users to easily share, sell and buy content through multiple websites with one ID, password, account and bill.” Densmore is also one of my partners in a company called CircLabs, which grew out of the Information Valet Project. The ideas presented in this post incorporate some of Densmore’s ideas, but also differ in important ways including the nature of the pricing mechanism and whether there’s a need for a single ID.

Photo by Ian Hayhurst used under a Creative Commons license.

February 02 2010

16:27

What Are the Universal Principles that Guide Journalism?

Defining principles of journalism is difficult. Rewarding, but difficult.

Back in 2005 it took the Los Angeles Times a year of internal discussions to settle on its ethical guidelines for journalists. The Committee of Concerned Journalists took four years, did oodles of research and held 20 public forums in order to come up with a Statement of Shared Purpose with nine principles (subsequently fleshed out in the excellent "The Elements of Journalism" by Kovach and Rosenstiel).

Time spent thinking can then translate into a lot of principles. The BBC's editorial guidelines -- which include guidance about more than just journalism -- run to 228 pages. The New York Times' policy on ethics in journalism has more than 10,000 words. Principles needn't be so wordy. The National Union of Journalists (U.K.) code of conduct, first drafted in 1936, has 12 principles adding up to barely more than 200 words.

But, once defined, these principles serve multiple functions. They act as a spur to good journalism, as well as a constraint on bad. They provide protection for freedom of speech and of the press -- particularly from threats or intimidation by the government or commercial organizations. And they protect the public by preventing undue intrusion and providing a means of response or redress.

Principles in the Online World

In an online world, principles can serve another function. They can help to differentiate journalism from other content published on the web, whether that be government information, advertising, promotion, or institutional or personal information.

One of the key elements of hNews -- the draft microformat the Media Standards Trust developed with the AP to make news more transparent -- is rel-principles. This is a line of code that embeds a link within each article to the news principles to which it adheres. It doesn't specify what those principles should be, just that the article should link to some.
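In markup terms, rel-principles is nothing more exotic than a link whose rel value declares its role (the URL here is an invented example):

```html
<!-- Within an hNews article: a link to the principles the article adheres to -->
<a rel="principles" href="http://example.com/editorial-guidelines.html">Our editorial principles</a>
```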

Now that lots of news sites are implementing hNews (over 200 sites implemented the microformat in January), we're getting some pushback on this. News sites, and bloggers, generally recognize that transparent principles are a good idea but, having not previously made them explicit online, many of them aren't entirely sure what they should be.

When we started working with OpenDemocracy, for example, they realized they had not made their principles explicit. As a result of integrating hNews, they now have. Similarly, the information architect and blogger Martin Belam, who blogs at currybet.net and integrated hNews in January 2010, wrote: "it turned out that what I thought would be a technical implementation task actually generated a lot of questions addressing the fundamentals of what the site is about... It meant that for the first time I had to articulate my blogging principles."

So, in an effort to help those who haven't yet defined their principles, we're in the process of gathering together as many as we can find, and pulling out the key themes.

This is where you can help.

Asking for Feedback

We've identified 10 themes that we think characterize many journalism statements of principle. This is a result of reviewing dozens of different (English language) principles statements available on the web. The statements were accessed via the very useful journalism ethics page on Wikipedia; via links provided by the Project for Excellence in Journalism; and from the Media Accountability Systems listed on the website of the Donald W. Reynolds Journalism Institute.

These themes are by no means comprehensive -- nor are they intended to be. They are a starting point for those, be they news organizations or bloggers, who are drawing up their own principles and need a place to start.

We'd really like some feedback on whether these are right, whether 10 is too many, whether any big themes are missing, and which ones have most relevance to the web.

Ten Themes

Our 10 themes are:

  1. Public interest Example: "... to serve the general welfare by informing the people and enabling them to make judgments on the issues of the time" (American Society of Newspaper Editors)
  2. Truth and accuracy Example: "[The journalist] strives to ensure that information disseminated is honestly conveyed, accurate and fair" (National Union of Journalists, UK)
  3. Verification Example: "Seeking out multiple witnesses, disclosing as much as possible about sources, or asking various sides for comment... [The] discipline of verification is what separates journalism from other modes of communication, such as propaganda, fiction or entertainment" (Principles of Journalism, from Project for Excellence in Journalism)
  4. Fairness Example: "... our goal is to cover the news impartially and to treat readers, news sources, advertisers and all parts of our society fairly and openly, and to be seen as doing so" (New York Times Company Policy on Ethics in Journalism)
  5. Distinguishing fact and comment Example: "... whilst free to be partisan, [the press] must distinguish clearly between comment, conjecture and fact" (Editors Code of Practice, PCC, U.K.)
  6. Accountability Example: "The journalist shall do the utmost to rectify any published information which is found to be harmfully inaccurate" (International Federation of Journalists, Principles on the Conduct of Journalists)
  7. Independence Example: "Journalists should be free of obligation to any interest other than the public's right to know... [and] Avoid conflicts of interest, real or perceived" (Society of Professional Journalists)
  8. Transparency (regarding sources) Example: "Aim to attribute all information to its source. Where a source seeks anonymity, do not agree without first considering the source's motives and any alternative, attributable source. Where confidences are accepted, respect them in all circumstances" (Australian Journalists Code)
  9. Restraint (around harassment and intrusion) Example: "The public has a right to know about its institutions and the people who are elected or hired to serve its interests. People also have a right to privacy and those accused of crimes have a right to a fair trial. There are inevitable conflicts between the right to privacy, the public good and the public's right to be informed. Each situation should be judged in the light of common sense, humanity and the public's rights to know" (Canadian Association of Journalists)
  10. Originality (i.e. not plagiarizing) Example: "An AP staffer who reports and writes a story must use original content, language and phrasing. We do not plagiarize, meaning that we do not take the work of others and pass it off as our own" (Associated Press Statement of news values and principles)

There are, of course, many principles excluded here. We could, for example, have gone into much more depth in the area of "limitation of harm," which is only briefly referred to in number nine. Principles to inform newsgathering could form a whole section in themselves.

There is also the growing area of commercial influence. In the U.S., the FTC has become pretty animated about bloggers taking money to promote goods while appearing to be impartial. Across the web, the line between editorial and commercial content can get pretty blurred. Right now this is covered only by number seven, independence. Should there be a separate principle around independence from commercial influence?

Any and all responses are much appreciated, so please leave them in the comments. Also feel free to get in touch directly if you'd like to continue the discussion (I'm at martin DOT moore AT mediastandardstrust DOT org).
