
July 11 2011

13:19

Readers are our regulators

Here’s a post I put up on the Guardian’s Comment is Free (comment there).

Please resist the temptation to impose government regulation on journalism in the aftermath of phone-hacking. Oh, I know, it would be sweet justice for Murdoch pere et fils to be the cause of expanding government authority. But danger lies there. Regulation requires teeth and teeth carry power.

Let me begin by posing four questions:

What activities are to be regulated? Activities that are already criminal, like News Corp.’s, should be prosecuted as crimes. Then does speech itself become the target? In the United States, we grapple with this question in the one exception to our First Amendment, which is about to be tested in the Supreme Court. That loophole to the Bill of Rights gives the Federal Communications Commission authority to regulate and fine mere words on TV and radio. I have argued in the pages of the Guardian that “bullshit” is political speech but we are forbidden to speak it on our air — even about this regulation itself — under threat of a regulator’s chill and penalty. What we need today is more speech, not less.

What should a regulator do in the case of violations? Fine the offender into submission? Close the publication? Does that not give your government the same weapon used by dictators elsewhere against journalists? Doesn’t this return the UK to a regime of licensing the press? Remember that he who grants licenses may also not grant them or revoke them.

Who is the proper regulator? Clearly, it is not the industry. The Press Complaints Commission has proven to be nothing more than a diaphanous gown for the devil. But government? Is government the proper body to supervise the press, to set and oversee its standards? How could it be? The watched become the watchers’ watchers. Certainly government has shown itself to be incompetent and mightily conflicted in this case, as alleged overseers of the crimes at hand end up in high places and the police themselves are reported to be beneficiaries of corruption.

Finally, who is to be regulated? In other words, who is the press? That’s the key question raised here. Alan Rusbridger posed it in his forceful soliloquy on this amazing week: Is Huffington Post the press? Guido Fawkes? By extension, is any blogging citizen? Any YouTube commentator or Twitter witness-cum-reporter? Yes, we wrangle with this same question in the United States, but in the context of who should receive the rights and protections of the press — namely, shield laws — rather than who should be under the thumb of a government agency.

The goal must not be to further solidify the hegemony of the media-government complex but instead to bust it open. We have the tools at hand to do that: journalists, the public they serve, and their new tool of publicness, the internet.

As Rusbridger also said in that video, this was a week marked by the worst of journalism and the best of journalism. Reporting is wot did the bastards in. Nick Davies is the Woodward and Bernstein of the age though it’s a pity that his Nixon built his nearly absolute power — and nearly inevitable corruption — in our profession. The first and most important protection we will have against the likes of him is a business model for the Guardian to sustain Davies and support future generations like him. The second most important thing the Guardian can do is set an example for other journalists.

I was talking with Craig Newmark, founder of craigslist, just yesterday about his cause and favorite obsession: fact-checking. There are scattered organizations that endeavor to check politicians’ and journalists’ mistakes and lies. But no organization can do it all. How do we scale fact-checking? My thought is that we should see every news organization place a box next to all its reports inviting fact-checking: readers flagging dubious assertions and journalists and readers picking up the challenge to investigate. The Washington Post and the Torrington (Connecticut) Register Citizen already have such boxes.

That small addition raises the standards and expectations for journalists’ work and, more importantly, opens the process of journalism to the public, inviting them to act as both watchers and collaborators.
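To make the idea concrete, here is a minimal sketch, purely illustrative and not a description of either paper's actual system, of the data such a flag box might collect and hand to the newsroom:

```python
# Illustrative sketch only: the minimal data a "flag this claim" box might
# collect. Not the Washington Post's or Register Citizen's implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClaimFlag:
    """A reader's challenge to a specific assertion in a story."""
    article_url: str
    quoted_claim: str   # the sentence or figure being questioned
    reader_note: str    # why the reader doubts it, plus any source links
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"  # open -> under review -> confirmed or corrected

flags: list[ClaimFlag] = []

def flag_claim(article_url: str, quoted_claim: str, reader_note: str) -> ClaimFlag:
    """Record a reader's challenge so journalists and other readers can pick it up."""
    flag = ClaimFlag(article_url, quoted_claim, reader_note)
    flags.append(flag)
    return flag
```

Displayed beside the story, a queue like this makes the challenge, and what the newsroom did with it, visible to everyone.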

I also think we must increase our diligence to all but eliminate the scourge of the anonymous source. Note that I left an opening for whistleblowers and victims and the too-rare true investigators like Davies. But if we had as an expectation that the News of the World should have told us where and how it learned what it learned about its 4,000 victims, it would have been less able to perpetrate its crimes of hacking and bribery.

The Guardian is making openness its hallmark and this is what it must mean: Rather than closing down journalism to some legislative definition of who may practice the craft, we must open its functions to all. Rather than enabling government and media to become even more entwined, we must explode their bonds and open up the business of both for all to see. Regulators, bureaucrats, politicians, and titans of a dying industry are not the ones to do that.

In researching my next book, Public Parts, I dared to read Jürgen Habermas and his theory of the public sphere. Habermas says the public sphere first emerged as a counterweight to the power of government in the rational, critical debate of the coffeehouses and salons of the 18th century. But almost as soon as this public sphere formed, Habermas laments, it was corrupted and overtaken by mass media. Now, at last, is our opportunity to reverse that flow and to recapture our public sphere.

That’s where this tale’s sweet irony lies: It’s Murdoch & Co. who set the charges to blow apart the very institutional power and cozy relationships they built.

June 13 2011

22:21

FCC Report on Media Offers Strong Diagnosis, Weak Prescriptions

A consensus has begun to emerge around the Federal Communications Commission report, "The Information Needs of Communities," released Thursday: The diagnosis is sound, but the remedies are lacking.

The 465-page report (see full report, embedded below) is the result of 600-plus interviews, hearings and reams of research conducted over 18 months. It represents the most ambitious attempt yet to come to terms with the consequences of the current media transformation. It's a synthetic and comprehensive look at the entire ecosystem -- commercial, non-commercial and user-generated; across print, broadcast, online and mobile -- making it a tremendous resource for advocates, journalists, entrepreneurs and media educators.

Steven Waldman, journalist, editor and digital news entrepreneur, was lead author for this project and worked with a distinguished team of experts from across the country to compile both capsule histories of each sector and an atlas of current facts and figures. See the gallery of graphs from the report below, assembled by Josh Stearns of the media reform organization Free Press, for a sense of the range and depth of the research. (Overwhelmed? A two-page summary of findings and recommendations is also available here.)

Trouble for Local Reporting

The primary conclusion echoes that of many recent reports: Amid vibrant experimentation by a broad range of news producers, local reporting is in the biggest trouble. There are fewer ad dollars for newspapers, fewer reporters on the beat for both print and broadcast, fewer enterprise investigations, and more "hamsterized" reporters, all resulting in a gap in the ability to hold governments and corporations to account.

The report also represents an unprecedented effort by the FCC to take stock of the results of previous policy decisions supporting non-commercial and community media. Rather than focusing solely on public broadcasting as the answer to commercial news woes, as many recent analyses have, this report acknowledges the growth and dynamism of a broader non-profit news sector:

More accurate than "public broadcasting," the term "non-profit media" better captures the full range of not-for-profit news and media organizations. Some non-profit media groups are affiliated with public broadcasting, some not; some receive government funds, most do not. But what these groups have in common is this: they plow excess revenue back into the organization, and they have public-interest missions that involve aspirations toward independent journalism.

The report's authors see the growth and vigor in this sector as promising, and even have some kind words to say about public access stations, often dismissed or left entirely out of the local news equation. However, they also confirm that news production by non-commercial outlets is still not sufficient to fill the yawning gap in local reporting that has opened up over the past decade.


What's more, stable business models for such outlets have not yet emerged, and the federal funding that undergirds the largest swath of non-commercial outlets, public broadcasters, is under political threat. Corporation for Public Broadcasting funds that were supporting digital innovation were slashed this year, as were funds earmarked for buildout of new station infrastructure.

To add insult to injury, Waldman & Co. note, public interest obligations for commercial stations have been defanged, offering no way to ensure diverse or high-quality local public affairs coverage. Those requirements that remain are rarely enforced.

No Bold Solutions

Yet, bafflingly, despite identifying these clear market gaps, the report stops short of offering bold solutions, perhaps in reaction to the currently charged political and funding climate. Instead, as several commentaries -- such as this piece in GigaOm -- note, the resounding message to the media industry is "don't look to us, we can't help you." GigaOm's Matthew Ingram writes:

One of the biggest trends that the FCC flags as important in the report is the loss of what it calls "accountability" journalism, in which news outlets on a local and/or national level cover the government and thereby act as a check on power. As more than one person has noted, this conclusion isn't exactly a news flash that required government funding and two years of research to unearth, but is arguably still worth highlighting, since it's a gap that has yet to be filled. And what does the FCC think can be done to fill it? Not much.

Commissioner Michael J. Copps objected emphatically to this laissez-faire approach at the report's release; he was the first to observe that "the policy recommendations ... don't track the diagnosis."

For some conservatives and the entrepreneurially minded, that's just fine. "I think I'm relieved that, on first scan, the FCC report on journalism recommends little," tweeted CUNY's director of interactive journalism Jeff Jarvis. As Waldman explained at the release event, a primary goal of the report's recommendations was to protect the First Amendment, a priority that sits well with libertarian commentator Adam Thierer. He blogged his initial reaction at the Technology Liberation Front site:

For those of us who care about the First Amendment, media freedom, and free-market experimentation with new media business models, it feels like we've dodged a major bullet. The report does not recommend sweeping regulatory actions that might have seen Washington inserting itself into the affairs of the press or bailing out dying business models.

Spurring Conversation

So, what kind of remedies should the report have offered? Of course, I have my own ideas about how taxpayer dollars can best support civic engagement and innovation -- many of which I've reported on in the pages of MediaShift. I also have my own stake in this report, which cites research that I've conducted with colleagues at the Center for Social Media and the New America Foundation -- see the annotations in the embedded version of the report below for some highlights.

But, as several observers noted, the report will do its job if it spurs broader conversation about how best to support the evolution of news. That process has already begun.

Read more:

Using Storify, I've compiled reactions currently being shared via Twitter.

[View the story "Reactions to the FCC's Information Needs of Communities Report:" on Storify]

And, you can read the full document here:



Jessica Clark is a Senior Fellow at American University's Center for Social Media, a Knight Media Policy Fellow at the New America Foundation, and is currently consulting with the Association of Independents in Radio on a forthcoming initiative.


February 11 2011

22:05

WSJ Series Inspires 'Do Not Track' Bill from Rep. Jackie Speier




We didn't plan it this way, but the timing was perfect. Rep. Jackie Speier (D-Calif.) introduced a bill today in Congress that would give the FTC the power to create a "Do Not Track" database so people could opt out of online tracking. And her bill comes right during our special series about online privacy, which included a roundtable discussion (and debate) about the "Do Not Track" database and its feasibility. And Speier told me one of the inspirations for the bill was her outrage from reading the Wall Street Journal's What They Know series.

On one side are privacy groups such as Consumer Watchdog and the Electronic Frontier Foundation, which worked with Speier on the bill. On the other side are behavioral ad firms and publishers who would prefer that massive numbers of people don't opt out from tracking, which helps them serve targeted ads. In the 5Across roundtable discussion, Yahoo's chief trust officer Anne Toth put it this way: "I think it's critical that people realize that collecting data about consumers online gives enormous benefits. Right now, advertising makes the Internet free. And people want a free Internet. And information leads to innovation and ideas. What I'm worried about most is that with 'Do Not Track' and government regulation, we throw out the baby with the bathwater and stifle innovation."

I talked with Rep. Speier today by phone and she wasn't buying that argument. She believes the technology exists to create a one-button "Do Not Track" solution so people can opt out of tracking. Her bill is far from alone in the online privacy debate; a flurry of bills is expected in Congress this year. She does not have a GOP co-sponsor on the bill, nor is she a member of the House Energy and Commerce Committee. Still, she remains confident that the overwhelming public support for "Do Not Track" will give her bill momentum, and she is "cautiously optimistic" she can get a GOP member to sign on.

The following is the full audio of my interview with Speier this morning (speierfinal.mp3), and below is a transcript from that call.

Q&A

Why did you decide the time was right to introduce this bill now?

Rep. Jackie Speier: I think there was a growing clamor for privacy protection by the public. For the longest time, we have operated with the ignorance of bliss, I guess, that nothing was going on. There have been a number of recent exposes that have made it clear that there's a lot of tracking going on. And I must tell you that until I read it in the Wall Street Journal, and their 13-part series, I didn't know that Dictionary.com was just a means by which tracking takes place. And they're using something like the dictionary to identify you and then to track you. I was pretty outraged when I read that.

What about self-regulation? A lot of companies in Silicon Valley would prefer to do it themselves. What do you think about those efforts?

Speier: I have a long history on the financial privacy side of this issue. We've had lots of efforts by the industry to offer up pseudo financial privacy protections in California when I was working on that legislation. I'm happy to see the industry step up, but I'm not interested in fig leaf solutions. I want it to be simple and straightforward for consumers to click on one button and not be tracked. I want the FTC to develop the mechanism, and a simple format so the consumer does not have to read 20 pages of legalese.

How would you define tracking? Because it's not as simple as the Do Not Call registry. There's tracking online that people see as being bad, using their information in bad ways, and there's tracking that's just analytics for a website and not really harmful.


Speier: I think tracking is much more insidious than "Do Not Call." [Those telemarketing calls] were interrupting your dinner hour. Tracking is an activity that often times you don't even know it's going on. They're creating a secret dossier about who you are, they're making assumptions about you and then they're selling that information to third parties that then will market to you products or not, and then the information is then transferred from one source to another.

It starts to impact fundamental things like whether you can access health insurance, life insurance, what premium you're going to pay, based on assumptions they make. The example I used in the press conference today was I'm the chair of the refreshment committee of my church's bazaar so I go out and pay for 15 cases of wine and charge it to my credit card online. That information is then sold thousands of different ways to thousands of different data companies, and then it's sold again.

Speier: So let's say a life insurance company that I'd like to get life insurance from has that information and believes I'm an alcoholic. Either they don't sell me life insurance or they charge me a higher premium. Or let's say I'm a prospective employee at a new company and they access this information and decide I'm an alcoholic and they don't want me as an employee. It becomes insidious.

I understand the worst-case scenarios, but what about the tracking that's done to give you recommendations on a site or you get ads that are served up that align with your interests? Some of those things aren't insidious or bad.

Speier: That's why you should have a choice. If you're going online to buy a new barbecue, you should be able to click to opt-in to see other barbecues. That's fine. That's your choice. But if you click on the target site, you know you want that barbecue and you don't want to be bothered and don't want to be tracked -- you can buy that barbecue and move on.

You talk about having one button to opt-out, but is that solution going to work or will people end up opting out of things they don't want to opt out of? Should there be more layers to this idea?

Speier: You'll still have advertisers seek you to opt in. The presumption is that somehow everyone is going to opt out. That's not necessarily the case. It's a choice.

What do you think about the solutions that the browsers have offered, from Microsoft's Internet Explorer, Mozilla Firefox and Google Chrome? Do you think what they're doing is a good start?

Speier: I think it's a good start, but I think we need something uniform. I've been told Mozilla's approach [with Firefox] is one that's not enforcing [Do Not Track] so what does that mean? It's more of a fig leaf at that point.

So it's more of a suggestion. "Don't track me... please."

Speier: [laughs] What is that? What it looks like to me is that they're trying to give the appearance that they're doing something, when they're not. I've been down this road before with the financial institutions in California with the financial privacy law. A placebo isn't going to work here.

I've heard from someone at Yahoo that the "Do Not Track" list could stifle innovation and the way they do behavioral advertising. And it could hurt not just Yahoo but startups as well.

Speier: I'm not persuaded by those arguments. That argument was used with the financial privacy law in California, that it would somehow stifle innovation of financial products. It didn't stifle innovation. Credit default swaps were out there for many to engage in. I'm just not buying it.

How will your bill differ from others that are being introduced? Are you coordinating with them in some way?

Speier: I'm hoping that we will coordinate. The bill from Bobby Rush (D-Ill.) is similar, though his would be site-specific. So every time you went to a site, you'd have to click, instead of a one-stop shop for purposes of opting out. My bill is more simplified and universal.

How will the bill dovetail with what's coming out from the FTC? They are in a comment period now, and they'll come out with a final report soon. Are you working with them?

Speier: First, I want to applaud the action they have taken, but we need to give them authority so they can move forward in a meaningful way in this area. They don't presently have the authority to do what we want them to do.

Part of your bill is giving them that authority?

Speier: Yes.

Did they ask for that?

Speier: No. They realize they need it in order to be effective in this area.

How long do you think it would take to implement what you're asking for in this bill?

Speier: I think the technology is already there. I think it should be as instantaneous as the Egyptian freedom. [laughs]

Within 18 days?

Speier: Yes, within 18 days. [laughing]

*****

What do you think about the "Do Not Track Me Online" bill? Would you sign up for such a database? Do you think the FTC should have the power to set up such a database? Share your thoughts in the comments below.

Mark Glaser is executive editor of MediaShift and Idea Lab. He also writes the bi-weekly OPA Intelligence Report email newsletter for the Online Publishers Association. He lives in San Francisco with his son Julian. You can follow him on Twitter @mediatwit.


February 07 2011

19:12

Special Series: Online Privacy

“All the world's a stage,” and even more so with the rise of the Internet, online advertising and social networking. While there is no American "right to privacy" in the Constitution, there are limits to what we want companies, publishers and advertisers to do with our personal information. Do we want advertisers to serve ads based on our web surfing habits? Should we be able to opt out from that kind of tracking? How would that work? The U.S. government -- including the FTC, Commerce Department and Congress -- is considering more regulation, while the industry tries self-regulation...again. While MediaShift gave a nice guide to online privacy a couple years back, the time is right to give an in-depth look at online privacy in the age of the always-on social web.

All the Online Privacy Posts

> Will U.S. Government Crack the Whip on Online Privacy? by Jonathan Peters

Coming Soon

> Facebook privacy issue timeline by Corbin Hiar

> A lively 5Across roundtable discussion with Yahoo's Anne Toth, EFF's Lee Tien, California Office of Privacy Protection's Joanne McNabb, CNET's Declan McCullagh and Stanford's Ryan Calo. Hosted by Mark Glaser.

> Privacy issues around advertising and marketing by Mya Frazier

> How can publishers protect data of users? by Dorian Benkoil

*****

What do you think about our series? Did we miss anything? Share your thoughts on how you protect your privacy online and whether you think there should be more laws to protect your privacy.

Mark Glaser is executive editor of MediaShift and Idea Lab. He also writes the bi-weekly OPA Intelligence Report email newsletter for the Online Publishers Association. He lives in San Francisco with his son Julian. You can follow him on Twitter @mediatwit.


18:49

Will U.S. Government Crack the Whip on Online Privacy?

This week MediaShift will be running an in-depth special report on Online Privacy, including a timeline of Facebook privacy issues, a look at how political campaigns retain data, and a 5Across video discussion. Stay tuned all week for more stories on privacy issues.


Online privacy is the new openness.

After years of telling all on the Internet, of tweeting about armpit rashes and tantric sex, we may have gone too far, shared too much. We may have lost control of the information that we reveal about ourselves and of the way others use that information. Which is a bad thing.

That's the thinking, at least, behind two government reports released at the end of 2010. The first one, produced by the Federal Trade Commission (FTC), outlines a plan to regulate the "commercial use of consumer data." The second one, produced by the Commerce Department, recommends that the federal government "articulate certain core privacy principles" for the Internet. Together they show that online privacy is very much on the public agenda.

FTC ENDORSES "DO NOT TRACK"

The FTC report, titled Protecting Consumer Privacy in an Era of Rapid Change, begins by noting that "consumer information is more important than ever" and that "some companies appear to treat it in an irresponsible or even reckless manner." It says data about consumer online activity and browsing habits are "collected, analyzed, combined, used, and shared, often instantaneously and invisibly."

For example, if I browse online for a product, which I often do, then advertisers could collect and share information about me, including my search history, the websites I visit and the kind of content I view. Likewise, if I participate in a social networking site, which I do, then third-party applications could access the stuff I post on my profile. Today my only lines of defense would be to adjust the privacy controls on my browser, to download a plug-in, or to click the opt-out icon that sometimes appears near an ad.

That's not good enough, according to the FTC report, which is intended to be a roadmap for lawmakers and companies as they develop policies and practices to protect consumer privacy. To that end, the FTC made three proposals.

First, companies should build "privacy protections into their everyday business practices." More specifically, they should provide "reasonable security for consumer data," they should collect "only the data needed for a specific business purpose," they should retain "data only as long as necessary to fulfill that purpose," they should safely "dispose of data no longer being used," and they should create "reasonable procedures to promote data accuracy." In addition, they should implement "procedurally sound privacy practices throughout their organizations."

Although it's unclear what would constitute a "specific business purpose," those suggestions to a great degree reflect existing law. Section 5 of the FTC Act, which prohibits unfair or deceptive practices, can be used to nail companies that fail to secure consumer information. Similarly, the Gramm-Leach-Bliley Act requires financial institutions to take certain steps to secure their information, and the Fair Credit Reporting Act requires consumer reporting agencies to ensure that the entities receiving their information have a permissible reason to receive it. The latter also imposes "safe disposal" obligations on those entities.

Second, companies should "provide choices to consumers about their data practices in a simpler, more streamlined way." This would allow consumers in some transactions to choose the kind and amount of information they reveal about themselves. I say "in some transactions" because companies would have to distinguish between "commonly accepted data practices" and those "of greater concern."

The former includes ordinary transactions in which consumer consent is implied, e.g., I buy a book through Amazon, and I give the company my shipping address. No big deal, says the FTC. The latter, however, includes activities and transactions in which consent is not implied, e.g., an online publisher allows a third party to collect data about my use of the publisher's website. Big deal, says the FTC.

Where consent is not implied, consumers "should be able to make informed and meaningful choices," and those choices should be "clearly and concisely described." In the context of online advertising, that means I would be able to choose whether to allow websites to collect and share information about me. The most practical way to give me that choice, according to the FTC, is to place a persistent setting on my browser to signal whether I consent to be tracked and to receive targeted ads. This "do not track" mechanism could give consumers the type of control online that they have offline with the "do not call" list for telemarketers.
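For a sense of how lightweight that persistent browser setting can be, here is a minimal sketch, assuming the "DNT: 1" request header that some browsers had begun sending; it shows how a site's server might check the signal and is only an illustration, not the FTC's proposal. The endpoint name is hypothetical.

```python
# Minimal sketch, assuming the "DNT: 1" request header that some browsers
# send when a user switches on a do-not-track preference. Illustrative only;
# the FTC report describes the policy goal, not this implementation.
from flask import Flask, request

app = Flask(__name__)

def visitor_opted_out() -> bool:
    """True if the visitor's browser signals a do-not-track preference."""
    return request.headers.get("DNT") == "1"

@app.route("/article")
def article():
    if visitor_opted_out():
        # Skip third-party tracking scripts and serve untargeted ads.
        return "article page with generic ads"
    # Otherwise interest-based ads may apply, subject to the site's own policy.
    return "article page with targeted ads"

if __name__ == "__main__":
    app.run()
```

The hard part, of course, is not reading the header but getting ad networks and publishers to respect it, which is where enforcement comes in.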

Third, companies should "make their data practices more transparent to consumers." They should ensure that their privacy policies are "clear, concise and easy-to-read," and in some circumstances they should allow consumers to check out the data kept about them. Those circumstances remain unclear, but the report says if a company maintains consumer data that are used for decision-making purposes, then it could be required to allow consumers to review that data, essentially to give them the chance to correct any errors.

It's a good thing for the FTC to encourage companies to revisit their privacy policies. Most of them are long and dense and monuments to legalese, and some companies seem to notify me every week about changes to their terms and conditions. Nowhere is their ineffectiveness more apparent than in the world of mobile devices, which often spread privacy policies across dozens of screens, 50 words at a time. On the Internet, meanwhile, it would take consumers hundreds of hours [PDF file] to read the privacy policies they typically encounter in one year. That's hardly helpful to the consumer.

All in all, the FTC report has received mixed reviews. Some say its recommendations won't stop the information free-for-all, while others say it's promising and a step in the right direction. In any case, the commission will need the help of Congress to implement the plan, and that help isn't a sure thing.

COMMERCE DEPT. CALLS FOR PRIVACY CODES

The Commerce Department report, very sexily titled Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework [PDF file], begins by noting that consumer privacy must address "a continuum of risks," such as minor nuisances and unfair surprises, as well as the disclosure of sensitive information in violation of individual rights. The report's purpose is to stimulate discussion among policymakers, and it includes recommendations in four areas.

First, the government should "revitalize" the FTC's Fair Information Practice Principles, a code that addresses how organizations collect and use personal information and the reasonableness of those practices. The amended code should "emphasize substantive privacy protection rather than simply creating procedural hurdles." The specifics are similar to those in the first section of the FTC report: the code should call on companies to be more transparent, it should articulate clear purposes for data collection, it should limit the use of data to those purposes, and it should encourage company audits to enhance accountability.

Second, the government should "enlist the expertise and knowledge of the private sector" to develop voluntary codes for specific industries that promote the safeguarding of personal information. To make that happen, the Commerce Department should create a Privacy Policy Office to bring the necessary stakeholders together, and the FTC would enforce the codes once they've been voluntarily adopted.

Well, this makes me think of the old saw that socialism is good in theory but doesn't work. Whether or not that's true, too often the same can be said (truthfully) of voluntary codes. To make this scheme work, at the very least, the FTC should be given rulemaking authority to develop binding codes in the event the private sector doesn't act. Alternatively, as the report suggests, the FTC could ramp up its enforcement of existing privacy laws, to encourage companies to buy in to the voluntary codes, on the theory that the buy-in would entitle them to a legal safe harbor. In other words, complying with a voluntary code would create a presumption of compliance with any privacy legislation based on the amended Fair Information Practice Principles.

Third, the government should be mindful of its global status as a leader in privacy policy. On the one hand, it should develop a regulatory framework for Internet privacy that "enhances trust and encourages innovation," and on the other hand, it should work with the European Union and other trading partners to bridge the differences, in form and substance, between their laws and U.S. law. As the report notes, although privacy laws vary from country to country, many of them are based on similar values.

Fourth, Congress should pass a law to standardize the notification that companies are required to give consumers when data-security breaches occur. Lawmakers also should address "how to reconcile inconsistent state laws," because the differences among them have created undue costs for businesses and have made it more difficult for consumers to understand how their information is protected throughout the country.

In the privacy world my sympathies are chiefly with the consumer, but the patchwork of state security breach notification (SBN) laws is a very real challenge for businesses. Not long ago, I worked with a company that had offices in a number of states, and as a result, it had to comply with a number of different state SBN laws. They were variations on the same theme, of course, but the differences had to be accommodated. The devil was in the details, and from that work it became obvious to me that the compliance costs were high and the benefits low: Some people get better notification than others. That's neither fair for the consumers nor ideal for the company.

The reaction to the Commerce Department report, like the one to the FTC report, has been mixed. Privacy advocates have been critical of it, especially the sections that support self-regulation, but other groups and government officials have commended the Department for taking on a tough issue. For its part, the Department said it plans to incorporate the feedback into its final report, to be released later this year.

NEW COMMITTEE TO CARRY THE PRIVACY FLAG

It's also worth mentioning that in late October, the National Science and Technology Council launched a Subcommittee on Privacy and Internet Policy. Chaired by Cameron Kerry, general counsel of the Commerce Department, and Christopher Schroeder, assistant U.S. attorney general, the subcommittee's job is to monitor global privacy-policy challenges and to address how to meet those challenges.

The charter [PDF file] says the subcommittee will do three things: 1) it will produce a white paper on information privacy in the digital age, building on the work of the FTC and the Commerce Department; 2) it will develop a set of general principles that define a regulatory framework for Internet privacy, one that would apply in the U.S. and globally; and 3) it will coordinate White House statements on privacy and Internet policy, striking a balance between the expectations of consumers and the needs of industry and law enforcement.

LOOKING AHEAD

Online privacy is on the government's brain, no doubt, but it's hard to say what effect, if any, the reports will have. They strike a chord with privacy advocates concerned about the way companies use the information that consumers reveal about themselves. They show sensitivity to the needs of both consumers and businesses. And they don't contain, possibly with the exception of the "do not track" mechanism, any kind of poison pill that would make the reports in their entirety look undesirable to major stakeholders.

Still, many companies already do what the reports recommend, and many of the recommendations to a great degree reflect existing law. So it's fair to wonder how much would change even if lawmakers used the reports to draft legislation. Lots of macro-micro questions remain unanswered, too.

Would all types of businesses be subject to the new framework? What about one that collects only non-sensitive consumer data? How long would businesses be required to retain consumer data? Is there a principled way to come up with a time period? Should companies be allowed to charge a fee to consumers for them to access information that the company maintains about them? If so, how much?

That's just a small sample of the questions that the FTC and Commerce Department need to answer before moving ahead, and they've requested help from interested parties. Readers should feel free to weigh in by contacting the agencies directly; otherwise, drop a comment in the box below.

Jonathan Peters is a lawyer and the Frank Martin Fellow at the Missouri School of Journalism, where he's working on his Ph.D. and specializing in the First Amendment. An award-winning freelancer, he has written on legal issues for a variety of newspapers and magazines. He can be reached at jonathan.w.peters@gmail.com.


October 08 2010

08:25

Online journalism student RSS reader starter pack: 50 RSS feeds

Teaching has begun in the new academic year and once again I’m handing out a list of recommended RSS feeds. Last year this came in the form of an OPML file, but this year I’m using Google Reader bundles (instructions on how to create one of your own are here). There are 50 feeds in all – 5 feeds in each of 10 categories. Like any list, this is reliant on my own circles of knowledge and arbitrary in various respects. But it’s a start. I’d welcome other suggestions.
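For anyone who still prefers the OPML route mentioned above, here is a minimal sketch of generating such a file with Python's standard library; the feed URLs are placeholders, not the real addresses of the blogs in the bundles.

```python
# A minimal sketch of sharing a feed list as an OPML file. The category names
# mirror the bundles below, but the feed URLs are placeholders only.
import xml.etree.ElementTree as ET

feeds = {
    "Data": ["http://example.com/flowingdata.xml", "http://example.com/datablog.xml"],
    "UK": ["http://example.com/onlinejournalismblog.xml"],
}

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Online journalism student starter pack"
body = ET.SubElement(opml, "body")

for category, urls in feeds.items():
    outline = ET.SubElement(body, "outline", text=category, title=category)
    for url in urls:
        ET.SubElement(outline, "outline", type="rss", text=url, xmlUrl=url)

ET.ElementTree(opml).write("starter-pack.opml", encoding="utf-8", xml_declaration=True)
```

Most feed readers will import the resulting file directly, which makes it an easy way to hand a whole starter pack to a class in one go.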

Here is the list with links to the bundles. Each list is in alphabetical order – there is no ranking:

5 of the best: Community

A link to the bundle allowing you to add it to your Google Reader is here.

  1. Blaise Grimes-Viort
  2. Community Building & Community Management
  3. FeverBee
  4. ManagingCommunities.com
  5. Online Community Strategist

5 of the best: Data

This was a particularly difficult list to draw up – I went for a mix of visualisation (FlowingData), statistics (The Numbers Guy), local and national data (CountCulture and Datablog) and practical help on mashups (OUseful). I cheated a little by moving computer assisted reporting blog Slewfootsnoop into the 5 UK feeds and 10,000 Words into Multimedia. Bundle link here.

  1. CountCulture
  2. FlowingData
  3. Guardian Datablog
  4. OUseful.info
  5. WSJ.com: The Numbers Guy

5 of the best: Enterprise

There’s a mix of UK and US blogs covering the economic side of publishing here (if you know of ones with a more international perspective I’d welcome suggestions), and a blog on advertising to round things up. Frequency of updates was another factor in drawing up the list. Bundle link here.

  1. Ad Sales Blog
  2. Media Money
  3. Newsonomics
  4. Newspaper Death Watch
  5. The Information Valet

5 of the best: Industry feeds

Something of a catch-all category. There are a number of BBC blogs I could have included but The Editors is probably the most important. The other 4 feeds cover the 2 most important external drivers of traffic to news sites: search engines and Facebook. Bundle link here.

  1. All Facebook
  2. BBC News – The Editors
  3. Facebook Blog
  4. Search Engine Journal
  5. Search Engine Land

5 of the best: Feeds on law, ethics and regulation

Trying to cover the full range here: Jack of Kent is a leading source of legal discussion and analysis, and Martin Moore covers regulation, ethics and law regularly. Techdirt is quite transparent about where it sits on legal issues, but its passion is also a strength in how well it covers those grey areas of law and the web. Tech and Law is another regular source, while Judith Townend’s new blog on Media Law & Ethics is establishing itself at the heart of UK bloggers’ attempts to understand where they stand legally. Bundle link here.

  1. Jack of Kent
  2. Martin Moore
  3. Media Law & Ethics
  4. Tech and Law
  5. Techdirt

5 of the best: Media feeds

There’s an obvious UK slant to this selection, with Editors Weblog and E-Media Tidbits providing a more global angle. Here’s the bundle link.

  1. Editors Weblog
  2. E-Media Tidbits
  3. Journalism.co.uk
  4. MediaGuardian
  5. paidContent

5 of the best: Feeds about multimedia journalism

Another catch-all category. Andy Dickinson tops my UK feeds, but he’s also a leading expert on online video and related areas. 10,000 Words is strong on data, among other things. And Adam Westbrook is good on enterprise as well as practising video journalism and audio slideshows. Bundle link here.

  1. 10,000 Words
  2. Adam Westbrook
  3. Advancing the Story
  4. Andy Dickinson
  5. News Videographer

5 of the best: Technology feeds

A mix of the mainstream, the new, and the specialist. As the Guardian’s technology coverage is incorporated into its Media feed, I was able to include ReadWriteWeb instead, which often provides a more thoughtful take on technology news. Bundle link here.

  1. Mashable
  2. ReadWriteWeb
  3. TechCrunch
  4. Telegraph Connected
  5. The Register

5 of the best: UK feeds

Alison Gow’s Headlines & Deadlines is the best blog by a regional journalist I can think of (you may differ – let me know). Adam Tinworth’s One Man and his Blog represents the magazines sector, and Martin Belam’s Currybetdotnet casts an eye across a range of areas, including the more technical side of things. Murray Dick (Slewfootsnoop) is an expert on computer assisted reporting and has a broadcasting background. The Online Journalism Blog is there because I expect them to read my blog, of course. Bundle link here.

  1. Currybetdotnet
  2. Headlines and Deadlines
  3. One Man & His Blog
  4. Online Journalism Blog
  5. Slewfootsnoop

5 of the best: US feeds

Jay, Jeff and Mindy are obvious choices for me, after which it is relatively arbitrary, based on the blogs that update the most – particularly open to suggestions here. Bundle link here.

  1. BuzzMachine
  2. Jay Rosen: Public Notebook
  3. OJR
  4. Teaching Online Journalism
  5. Yelvington.com

August 23 2010

17:35

Smartphone, HDTV Boom Begets Gargantuan E-Waste Problem

The digital media revolution promises to improve the quality of our lives through an expanded capacity to communicate, collaborate, learn and make informed decisions. Yet our seemingly insatiable demand for digital media is driving a proliferation of consumer electronic devices and IT infrastructure, which are significantly contributing to a tsunami of toxic electronic waste.

This week U.S. Environmental Protection Agency administrator Lisa Jackson announced that improving the design, production, handling, reuse, recycling, export and disposal of electronics -- by promoting citizen engagement and increasing government accountability on enforcement -- is one of the EPA's top six international priorities. In light of this, publishers, device manufacturers, bandwidth providers and other players in the digital media supply chain should rethink their marketing narratives and redouble their efforts to identify, quantify, disclose and manage the toxic e-waste impacts associated with digital media -- before regulation or catastrophe require them to do so.

The issues and dilemmas related to digital media and e-waste can be complex and confusing, but if they are ignored or merely paid lip service, they will be sure to wash up on the shores of our lives... and in our politics, in short order. If you want a quick take on some of the key issues associated with e-waste, take a few minutes to watch this short animated Public Service Announcement co-produced for Good Magazine by Ian Lynam and Morgan Currie:

To learn more, read on. In the weeks ahead we look forward to your questions, comments and suggestions about how issues associated with the environmental impacts of the digital media revolution's e-waste detritus can best be addressed. Here are some thought starters to get the conversation rolling.

FAQ

How much toxic e-waste is being created and what are some of its environmental and social impacts?

According to market analyst firm ABI Research, approximately 53 million tons of electronic waste were generated worldwide in 2009, and only about 13% of it was recycled. The Electronics Take Back Coalition (ETBC) estimates that 14 to 20 million PCs are thrown out every year in the U.S. alone. There has been a recent surge in e-waste created by aggressive marketing encouraging consumers to "upgrade" basic voice-only mobile devices to 3G and 4G smartphones and mobile game consoles. There has also been an enormous surge in discarded CRT monitors and TV sets, set into motion by the switch to large flat-screen displays and DVRs.

The EPA estimates that over 99 million TV sets, each containing four to eight pounds of lead, cadmium, beryllium and other heavy metals, were stockpiled or stored in the U.S. in 2007, and 26.9 million TVs were disposed of in 2007 -- either by trashing or recycling them. While it's not a large part of the waste stream, e-waste shows a higher growth rate than any other category of municipal waste.
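A rough back-of-the-envelope calculation, using only the figures cited above, gives a sense of the scale of heavy metals involved:

```python
# Back-of-the-envelope arithmetic using only the EPA figures cited above:
# over 99 million stockpiled or stored TV sets, each containing roughly
# four to eight pounds of lead, cadmium, beryllium and other heavy metals.
tv_sets = 99_000_000
pounds_low = tv_sets * 4       # 396,000,000 lbs
pounds_high = tv_sets * 8      # 792,000,000 lbs
print(f"{pounds_low / 2000:,.0f} to {pounds_high / 2000:,.0f} short tons of heavy metals")
# -> roughly 198,000 to 396,000 short tons sitting in closets, garages and storage
```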

Overall, between 2005 and 2006, total volumes of municipal waste increased by only 1.2 percent, compared to 8.6 percent for e-waste. Particularly troubling are the mountains of hazardous waste from electronic products growing exponentially in developing countries. The United Nations report Recycling - from E-Waste to Resources predicts that e-waste from old computers will jump by 500 percent from 2007 levels in India by 2020 and by 200 percent to 400 percent in South Africa and China. E-waste from old mobile phones is expected to be seven times higher in China and 18 times higher in India. China already produces about 2.3 million tons of e-waste domestically, second only to the United States, which produces about 3 million tons each year.

According to the Electronics Take Back Coalition, e-waste contains over 1,000 toxic materials harmful to humans and our environment, including chlorinated solvents, brominated flame retardants, plasticizers, PVC, heavy metals, persistent organic pollutants, plastics and gases used to make electronic products and their components such as semiconductor chips, batteries, capacitors, circuit boards, and disk drives. E-waste can also contain tin, tantalum, tungsten and gold, of which Section 1502 of the Dodd-Frank Wall Street Reform Act requires reporting if they originated in Congo or a neighboring country.

Not all e-waste is exported to China, India or Africa. The Electronics Take Back Coalition reports that some recyclers and many federal agencies in the U.S. send their e-waste to recycling plants that UNICOR, a wholly owned government corporation within the federal Department of Justice, operates inside federal prisons. One criticism of UNICOR is that by paying prison workers as little as 23 cents per hour, it undercuts private commercial recyclers. Another criticism is that reliance on high-tech chain gangs may frustrate development of the free-market infrastructure necessary to safely manage the tsunami of e-waste that the digital revolution is intensifying.

How much e-waste does the consumption and production of digital media generate?

Digital media doesn't grow on trees. Its creation, distribution and use require massive quantities of energy, minerals, metals, petrochemicals and labor. Rather than relying on proprietary estimates of product lifecycles or limited forensic evidence, we need reliable standards-based lifecycle inventories of the energy and material flows that make our broadband connectivity and digital media experiences possible. Proponents of digital media often tout the benefits of the digital media shift in terms of the number of trees that will be saved, but shifting to digital media has an environmental footprint and toxic impact that bear greater scrutiny.

The digital media industry has a long way to go before it can declare itself sustainable, or justify its environmental footprint on the basis of cherry-picked data, anecdotal evidence and unfulfilled promises. Companies like Apple and HP that tout their commitments to sustainability fail to make even a "greenish" grade in the most recent Greenpeace Greener Electronics scorecard.

Greenpeace Guide to Greener Electronics

Until media companies, device manufacturers and service providers are inspired to make standards-based environmental product declarations through market pressure or regulation, it will be impossible for consumers to make informed decisions or compare the climate change or e-waste impacts associated with specific products or services. A look at the overall growth trends in a few key categories is enough to justify more serious attention to the issues at hand and to the toxic tragedies that loom over the horizon.

A shift in preference from traditional media to digital media is one key trend. According to the PricewaterhouseCoopers report Global Media and Entertainment 2010-2014, digital media's share of consumer spending is growing at double-digit rates and is expected to reach 33 percent of entertainment and media spending by 2014.

Growth in the number of broadband mobile connections and wireless devices is also a determining factor. Telecom equipment maker Ericsson estimates that the world will reach 50 billion mobile connections within this decade, with 80 percent of all people accessing the Internet using their mobile devices. Ericsson estimates there are over 500 million 3G subscriptions worldwide, with more than 2 million mobile subscriptions being added per day.

At current rates of growth, some pundits believe we may soon face a zettaflood of data, and the number of broadband wireless connections, smartphones, e-readers, tablets, game consoles and other wireless devices with IP addresses will outnumber humans on our planet by an order of magnitude. The World Wireless Research Forum predicts 7 trillion devices for 7 billion people by 2017 - a thousand devices for every man, woman and child on the planet.

In short, we are rapidly becoming a world of digital media hyper-consumers who need to develop a better understanding of the connections between our rabid digital media appetites and their lifecycle environmental impacts before those appetites become our undoing.

Unfortunately, at present there is no reliable way to determine and compare the greenhouse gas emission or e-waste impacts associated with digital media consumption. While the impact of an individual decision or transaction may be negligible, the aggregate impact of billions of connections and trillions of transactions cannot be left unexamined and unmanaged.

What laws and sources of international, federal, state and local government support for e-waste management are in place and on the horizon?

The U.S. lags behind the EU, which has recently created two new policies on ways to deal with e-waste: the Restriction on the Use of Hazardous Substances (RoHS) Directive and the Waste Electrical and Electronic Equipment (WEEE) Directive. At present the U.S. is also the only member of the Organization for Economic Co-operation and Development that has not ratified the Basel Convention, which is intended to regulate the movement of hazardous waste across international borders.

In addition the U.S. does not have a comprehensive national approach for the reuse and recycling of used electronics, despite efforts to introduce federal legislation such as Senate Bill 1397 - Electronic Device Recycling Research and Development Act. However, electronics manufacturer take-back laws have gained traction at the state level.

An important report on e-waste recently issued by the Government Accountability Office (GAO), titled Electronic Waste: Considerations for Promoting Environmentally Sound Reuse and Recycling, states that 23 states have passed legislation mandating statewide e-waste recycling, including several states that introduced legislation in 2010 (shown in yellow in the map below).

States Passing E-Waste Legislation

All of these laws except California's use the producer responsibility approach, under which manufacturers must pay for recycling. A guide to current and pending e-waste legislation is available on the Electronics Take Back Coalition website.

The Udall Center for Studies in Public Policy at the University of Arizona recently published an award-winning paper titled E-wasted Time: The Hazardous Lag in Comprehensive Regulation of the Electronics Recycling Industry in the United States, which addresses the status of electronics recycling regulation in the U.S., as well as how the regulatory climate influences industry practice.

How can consumers and manufacturers of digital electronic devices, providers of broadband connectivity and data center services address digital media/e-waste dilemmas through voluntary initiatives and coalitions?

The EPA provides a guide to locations where electronics can be donated for reuse or recycling through its Plug-In To eCycling partnership and the Responsible Recycling (R2) and Recycling Industry Operating Standard (RIOS) certification initiatives. The Electronics Take Back Coalition and the Basel Action Network (BAN) have developed a competing voluntary program called e-Stewards that identifies recyclers they deem to be environmentally and socially responsible.

Both the Electronics Take Back Coalition and Greenpeace have developed scorecards that rate companies on their policies and the actions they are taking to address e-waste issues. Such sites are far from perfect, but they can help you sort through the confusing combination of apathy, indifference, marketing spin and unfulfilled green promises that predominate in today's consumer electronics marketplace. Before you buy or dispose of a cell phone, e-reader, tablet, PC, display, DVR, set-top box, game console, charger, plug strip, batteries, printer, or other electronic device, ask the manufacturer if there is a standards-based Environmental Product Declaration or Lifecycle Analysis for the product, and check whether the brand and the product are rated by Greenpeace and EPEAT.

Over the next five years our challenge is to stem the tide of e-waste being exported from the U.S. to the developing world, and to develop a legal framework that will support mining and managing the mountains of toxic e-waste in the U.S. and the rest of the developed world. According to Interpol, the illegal trafficking of electronic waste is a serious crime and a growing international problem, posing an unacceptable environmental and health risk, particularly in developing countries in Africa and Asia. According to EPA administrator Lisa Jackson: "It's time for us to stop making our trash someone else's problem, start taking responsibility and setting a good example."

Going forward, our greater challenge will be to change the prevailing business models and digital media marketing narratives that ignore the toxic tide, and to rethink the design of next-generation digital media devices, media products, data networks and data centers so that they are greener by design, eliminate conflict minerals, use less energy, last longer and can be disassembled, upgraded and recycled responsibly.

*****

Please use the comments area below to share your questions and suggestions. More importantly, use your social networks to engage the marketing and product development executives of digital media companies, device manufacturers, carriers and other key stakeholders -- including elected officials and EPA regulators. Engage them in an informed dialogue on how we can communicate sustainably and decouple the production and consumption of digital media from the scourge of e-waste in a timely and effective manner.

MediaShift environmental correspondent Don Carli is senior research fellow with the non-profit Institute for Sustainable Communication (ISC) where he is director of The Sustainable Advertising Partnership and other corporate responsibility and sustainability programs addressing the economic, environmental and social impacts of advertising, marketing, publishing and enterprise communication supply chains. Don is an Alfred P. Sloan Foundation Industry Studies Program affiliate scholar and is also sustainability editor of Aktuell Grafisk Information Magazine based in Sweden. You can also follow him on Twitter.


July 01 2010

15:26

PCC director speaks out over Lord Puttnam’s criticisms of regulatory body

The director of the Press Complaints Commission (PCC), Stephen Abell, has come out fighting in an article for Index on Censorship after Labour peer Lord Puttnam said earlier this week that the regulatory body should be shut down.

In a speech on parliament and young people on Tuesday, Puttnam said the PCC should be scrapped if newspapers failed to improve their behaviour within a year. In comments made to MediaGuardian, he said the PCC should work to prevent “the slow reduction of politics to a form of gruesome spectator sport” and “ensure the general representation of young people is more representative of reality”.

Abell says Puttnam’s remarks were not based on “well-informed and considered comment” about the PCC’s role and work, but sees them as a starting point for debate:

Lord Puttnam is keen to assert that the PCC “cannot” instruct newspapers to be nicer to politicians and young people (two items on his wish list) without pausing to ask the question: should it? There must be the argument that if any body – even a self-regulatory body like the PCC – were to dictate the tone of political coverage, or suggest that there should be more positive stories on youth issues, the result would be a very significant restriction on freedom of expression.

(…)

However, and this is very important, he is right that the PCC must be active agents in maintaining newspaper standards. The coverage of politics, or of issues affecting the young, are two important areas. The PCC must ensure that we hold editors to account for what they report and how they report it. We must ensure that inaccuracies are corrected, intrusions and distortions prevented.

Related reading on Journalism.co.uk: Stephen Abell’s first interview as the new director of the PCC.



May 09 2010

07:26

Kay Burley. Discuss.

Some say that journalism students should simply be taught how to ‘do’ journalism rather than spending time analysing or reflecting on it. On Saturday Sky’s Kay Burley showed why it’s not that simple – when she berated someone demonstrating in favour of electoral reform:

This, and the copious other clips of her walking a fine line, are a goldmine for lecturers and journalism students – particularly when it comes to discussing broadcast journalism technique, ethics, and regulation.

It helps students to look at their own journalistic practice and ask: in trying to please my bosses or meet an idea of what makes ‘good television’, am I crossing a line? How do the likes of Jeremy Paxman manage to dig behind a story without losing impartiality, or becoming the story themselves (do they manage it?) What, indeed, is the purpose of journalism, and how does that carry through into my practice?

Journalism is easy. You don’t need to study it for 3 years to do it. You don’t need a piece of paper to practise it.

But professional journalism is also the exercise of power – “Power without responsibility,” as the quote has it (which continues: “the prerogative of the harlot throughout the ages”). We expect to scrutinise politicians and hold them to certain ethical standards yet cry foul when the same scrutiny is applied to us. Studying journalism – while doing it – should be about accepting that responsibility and thinking about what it entails. And then doing it better.

So: Kay Burley. Discuss.

May 07 2010

23:18

4 Minute Roundup: FCC's 'Goldilocks' Approach to Regulating Net

Here's the latest 4MR audio report from MediaShift. In this week's edition I focus on the proposal by FCC chairman Julius Genachowski to find a "third way" of regulating broadband providers. His "Goldilocks" approach tries to enforce fairness and Net neutrality rules without being too heavy-handed, stopping short of setting prices for ISPs or forcing them to open up their lines. Reaction was tepid from both sides of the political aisle. I try to explain Genachowski's approach, and talk with the Investigative Reporting Workshop's John Dunbar, who thinks there's little for consumer advocates to cheer in this proposal.

Check it out:

4mrbareaudio5710.mp3

>>> Subscribe to 4MR <<<

>>> Subscribe to 4MR via iTunes <<<

Listen to my entire interview with John Dunbar:

dunbar full.mp3

Background music is "What the World Needs" by The Ukelele Hipster Kings via the PodSafe Music Network.

Here are some links to related sites and stories mentioned in the podcast:

The FCC's 'Third Way,' Will it Work? at CBSNews

FCC's Genachowski Tries To Carve A 'Third Way' For Regulating ISPs at PaidContent

FCC's Third Way - What You Need to Know at PC Magazine

FCC Chair Cites 'Third Way' For Neutrality at MediaPost

FCC's third way - Regulate Internet access, not Internet content at VentureBeat

How the FCC Plans to Regulate Internet Lines at WSJ Digits

FCC statement - 'Third way' legal framework at CNET

FCC Web Rules Create Pushback at WSJ

Also, be sure to vote in our poll about what you think of the FCC's proposal:




What do you think about the FCC's "third way" of regulating broadband?

Mark Glaser is executive editor of MediaShift and Idea Lab. He also writes the bi-weekly OPA Intelligence Report email newsletter for the Online Publishers Association. He lives in San Francisco with his son Julian. You can follow him on Twitter @mediatwit.


March 29 2010

18:12

Competition in Internet, Mobile Services Boosts Democracy

Information and Communication Technologies (ICT) such as the Internet and mobile phones are often recognized for their role in helping connect people and communities, and spread knowledge and information. People may be unaware, however, that they're also a powerful force for international development -- provided that they are not suffocated by regulation and censorship.

The ICT Development and Initiative Dossier from June 2002 [PDF file] stated that, "since the beginning of the 1980s almost all national telecom and information technology markets worldwide have been transformed by technological innovation, product diversification (especially the introduction of mobile/cellular telephony and Internet) and market restructuring (particularly privatization, liberalization and the introduction of independent regulators)."

This holds true in some countries more than others. In some instances, the levels of liberalization and regulation in the ICT sector seem to directly correlate with the health of the country's democracy.

Civil and Economic Benefits

Market liberalization and the adjustment of regulation levels for ICT industries result in a growing shift from state-owned monopolies to a more open market that allows for competition from various dynamic, privately driven entities. Some governments and national operators are threatened by the prospect of increased competition and decreased state control, but for civil society and the economy as a whole, there's an array of benefits.

Economic analyst Vlade Milićević argues that adjusting the legislative and regulatory frameworks for mobile telephony increases competition, which in turn leads to improved customer choice, enhanced quality, more efficient services, reduced prices, faster product innovation and growing economic development for both the market and the relevant country. These positive impacts are notable in various case studies on Central and Eastern European countries, where the sector has recently been liberalized.

Similar cost-benefit patterns have occurred in various ICT sectors. Between 1998 and 2002, retail prices in the EU's fixed telecommunications industry decreased by 8.2 percent as a result of liberalizing the regulatory framework. Likewise, the liberalization of Internet telephony, which includes the legalization of voice over IP (VoIP) services in various countries, resulted in a dramatic decrease in phone charges. For example, calls from the U.S. to India cost 50 cents per minute a few years ago -- now they are less than 5 cents per minute from fixed lines.

Beyond decreasing costs, ICT liberalization has other benefits. The use of VoIP enabled the advent of outsourced call centers because it makes it possible to route a local number offshore. In the U.S. today, 80 percent of companies have call centers located offshore. This cuts costs for the American companies and generates employment and income for the offshore country. These employment and revenue benefits are significant for countries such as India, Malaysia, Singapore, Kenya and South Africa.

Other examples of the benefits of this form of liberalization include community initiatives like Village Telco, "an easy-to-use, scalable, standards-based, wireless, local, do-it-yourself, telephone company toolkit." It uses open source software, VoIP and other technology to offer free local calls, cheap long distance, Internet access and other information services to previously disadvantaged communities in South Africa and other developing countries.

Lack of Liberalization

However, in some countries such as Zimbabwe, VoIP remains in a legal grey zone. According to a report commissioned by the Commonwealth Telecommunications Organization, "African regulators have been reluctant to legalize VoIP, based on a largely misguided attempt to protect the revenue base of the incumbent fixed-line, and in some cases, mobile telcos." Unprogressive regulators can retard growth in the sector, stunt the country's revenue, create lost opportunities, constrict the adoption of new technologies, and leave communities isolated in information vacuums.

The World Bank recently stated that there is a positive and direct correlation between growth in gross domestic product and ICT development. Despite this, two factors seem to be preventing some governments from liberalizing ICT markets: the threat of a decrease in revenues for state-controlled monopolies, and the loss of control over the content that is available to the public. ICTs -- and particularly the use of the Internet and mobile phones -- are making it difficult for undemocratic governments to control information, and in this age of communication, information is power.

"Freedom of information is...the touchstone of all the freedoms," according to the 1948 UN Freedom of Information Conference. Similarly, the principles from the World Summit on Information Society of 2003 declared that: "We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organization. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers."

This sentiment was reiterated in a recent poll by the BBC, which found that 80 percent of the 27,000 people surveyed around the world believe that access to the Internet is a fundamental human right. However, only about 25 percent of the world's population has access to the Internet, and various countries moderately to severely censor the information available.

Along with many other economic and technological benefits, a global shift to a more liberalized ICT market would honor fundamental human rights, and help create a more equitable and informed world.


February 01 2010

11:11

What does John Terry’s case mean for superinjunctions?

The superinjunction obtained by England captain John Terry was overturned on Friday – and the case raises some interesting issues (cross-posted from John Terry: another nail in the superinjunction coffin):

  • Even when the superinjunction was in force, you could find out about the story on Twitter and Google – both even promoted the fact of Terry’s affair – via the Twitter trends list and the real-time Google search box.
  • No one got the difference between an injunction and a superinjunction - the former banned reporting of Terry’s alleged affair, the latter banned revealing there was an injunction. They weren’t necessarily both overturned, but there was a widespread assumption you could say what you liked about Terry once the superinjunction was overturned. This wasn’t necessarily the case …
  • The Mail and Telegraph seemed to flout the superinjunction – as did the Press Gazette, which decided it wasn’t bound as it hadn’t seen a copy. This seemed legally risky behaviour – which makes me wonder if the papers were looking for a weak case to try to discredit superinjunctions.
  • This superinjunction should never have been granted. What was the original judge thinking?

Google and Twitter ignored the superinjunction

Tweets from while the superinjunction was in force


The superinjunction was overturned at about 1pm or 2pm on Friday. Needless to say, the papers had a field day over the weekend.

But if you wanted to find out the story on Friday, it was relatively simple to do so. I typed John Terry’s name into Google on Friday at about 11.15am – long before the injunction was lifted – and saw the screenshot, above.

Google’s real-time search box revealed tweets about John Terry and Wayne Bridge (and there were some giving full details of the affair – including the stuff that didn’t come out until Sunday). Later on Friday, Google pulled the real-time search box – whether this was algorithmic or for legal reasons, I don’t know. But if, spurred on by the clues Google was offering, you typed both Terry and Bridge into Google or Twitter search, it was simple to find the full story.

And by Friday lunchtime, both John Terry and Wayne Bridge were trending topics on Twitter, raising the profile of the issue. If you clicked on either to see what was being tweeted, you’d have found out about the affair instantly.

Shortly after, a judge ruled there were no grounds for the injunction, super or otherwise.

Guardian links to Twitter search for John Terry

As an aside, I noticed that the Guardian, in its coverage of the superinjunction, even included a link in one of its pieces to a Twitter search on John Terry.

They’ve removed it now (well, I can’t find it anyway), and probably for the best. You should either have the balls to run the full story or not; I don’t think publishing a link to a Twitter search is a reasonable halfway house.

Confusion still reigned

Once news broke that the superinjunction had been lifted, no one knew (or perhaps cared) where they legally stood on Friday afternoon (as I’ve pointed out before about blogs and reporting restrictions).

It was reported that the superinjunction was lifted – but not whether there was a separate injunction relating to the facts of the case (ie could you report that JT had obtained an injunction, but not say why?).

Despite this, everyone went ahead and shouted about it all over the internet. If there was a separate injunction, it was finished.

You can see the confusion in the comments on this Guardian story from Friday afternoon:

Seastorm: I’ve no interest in gossiping about EBJT, but I am a little confused….is the paper concerned now allowed to go ahead and publish the allegations?

Busfield (replying to seastorm): The judgement means that we can now report that there was an injunction. The judge then says that the newspaper concerned will have to make its own assessment of the risks involved in publishing whatever the allegations may be, which will involve considerations of the laws relating to privacy and defamation.

Gooner UK (replying to seastorm): Nope, the removal of the superinjunction means that newspapers are allowed to publish the fact that an injunction is in place, and name the parties involved, but they are still not allowed to publish the subject matter itself.

The injunction still stands, it’s just that we now know an injunction is in place. A superinjunction is so damaging because it means we (the public) are deliberately kept in the dark as to the very existence of an injunction.

And bear in mind that an injunction is in theory an act of last resort anyway. A superinjunction adds another level to that, which can be very dangerous in terms of press freedom.

Busfield (replying to Gooner UK): my understanding, and I am not a lawyer but I have spent much of the day talking to one, is that both the super and the injunction have gone. It is up to the paper concerned to decide whether it can publish its story without breaking the laws of defamation and relating to privacy.

The background: two papers ignore the injunction

It’s also interesting that two newspapers decided to ignore the superinjunction, or at least sail very close to the wind with regard to it – ie they ran stories that appeared to be in breach of it.

Mail reports injunction’s existence

As the Press Gazette reported on Friday morning (ie before the superinjunction was lifted):

A new “super-injunction” has been used by a Premier League footballer to stop national newspapers reporting his alleged marital infidelity.

The Daily Mail identifies the man only as a married England international.

The Daily Mail today reports, in apparent defiance of the order: “So draconian is Mr Justice Tugendhat’s order that even its existence is supposed to be a secret.”

(It’s interesting that the Press Gazette felt able to run the story about the existence of the superinjunction, stating: “Press Gazette has not been served with the injunction.” I would have thought that this was also sailing close to the wind. It knew there was a superinjunction, and I’m surprised its lawyers didn’t make an attempt to find out the full details.)

The Mail’s piece had a couple of nods and winks to Terry’s role:

A married England international footballer was granted a sweeping injunction to prevent publication of his affair with the girlfriend of a team-mate … It could be anyone from the captain of the top team in the land …

What, like the captain of England and Chelsea, you mean?

As does the Telegraph

On top of this, the Telegraph had run a piece, too, according to the Guardian:

Yesterday [Thursday] The Daily Telegraph technically breached the “super” part of the superinjunction by reporting that the courts were hiding the identity of a footballer and allegations about his private life. (This piece appeared in print but is no longer online).

Maybe, since the Trafigura injunction, newspapers have been looking for a way to kill off superinjunctions. If they wanted a weak superinjunction to pick on as a way to discredit them, this seemed a prime example.

Whatever their reasons, nothing seems likely to happen to the Mail and the Telegraph for breaching or nearly breaching this one – unlike in the Trafigura case, it seems unlikely John Terry is going to successfully sue anyone over this issue.

Conclusion

The Mail sums it up well:

In a scathing ruling, the judge made it clear he suspected Terry was more afraid of losing the commercial deals than anything else.

He said the footballer appeared to have brought his High Court action in a desperate move to protect his earnings – rather than the woman with whom he had been conducting his affair.

(And given this, it’s hard to see how the superinjunction was ever granted.)

There are legitimate reasons for injunctions and even superinjunctions.

But judges need to think very carefully before granting them. And the British courts and the right to privacy should not be used to protect the commercial interests of the “father of the year”.

January 22 2010

15:24

The clause that concerns us all

Business secretary Peter Mandelson’s proposed Digital Economy Bill has ruffled a few feathers in the new media world, but has also gained support from unions and industry bodies. The fate of the bill could have a significant impact on the future of internet use in Britain, and on the growth of new media.

It is difficult to work out just who would benefit if the bill was successfully passed, apart from the government, which stands to gain millions in taxes. It is being touted as an effort to keep pace with technological change, yet in the same breath, threatens to severely limit access and creativity.

The loudest protests concern the worryingly vague clause 17, which would offer unprecedented power to the government to amend the Copyright, Design and Patent Act. Consumer groups have warned the bill could jeopardise privacy laws and make way for unwarranted monitoring and data collection. Critics argue the flexibility of the clause could lead to unfounded claims of copyright breaches and over-reaching power.

The clause states: “The Secretary of State may by order amend Part 1 or this Part for the purpose of preventing or reducing any infringement of copyright by means of the internet if satisfied that (a) the infringement is having a serious adverse effect on businesses or consumers, and (b) making the amendment is a proportionate way to address that effect.”

Interestingly, media heavyweights Google, Facebook, Yahoo and eBay joined forces to protest against the clause:

“This clause is so wide that it could put at risk legitimate consumer use of current technology as well as future developments,” the joint letter to Mandelson read.

The government has since moved to allay concerns, hinting it may water the clause down. Ministers maintain the government is committed to the principle of clause 17, but have drafted amendments.

The National Union of Journalists has signed a letter in support of the clause, stating jobs and the future of ‘creative Britain’ are at stake without it.

The NUJ’s support of the clause suggests, at best, a misguided attempt to protect members’ interests and, at worst, a regressive and short-sighted move that could hinder the growth of the industry. This is a clause that concerns us all.

December 24 2009

12:33

Argus apologises to BBC producer – a note on media transparency

UK local newspaper title the Brighton Argus has published an apology on its website to Martyn Smith, the Bafta-nominated TV director, producer and writer, after wrongly identifying him in its story Brighton TV producer escapes jail for “repulsive” child porn collection.

The Argus has offered an unreserved apology and, to its credit, published it online at 7:15pm on 22 December – just over 24 hours after the story was published. The original story also appears to have been taken down from the site, though a cached version remains in Google News.

Interestingly the story is (at time of writing) the third most popular story on the paper’s website – good news for the wrongly identified subject?

This, and an excellent post from Andy Dickinson, made me consider how online tools on newspaper websites (such as traffic counts and commenting systems) can be used for transparency in such cases.

Dickinson’s post refers to a recent apology by the Northumberland Gazette – a Johnston Press title that has a pay wall in place on its website. The apology in this case was published behind the pay wall.

Whether this was purposeful or an oversight, it suggests that pay walls will throw up problems for newspapers, transparency and the Press Complaints Commission (PCC) with regard to its recommendations for publishing apologies and corrections, says Dickinson.

If I am going to pay someone for this stuff then one of the things I should want to know is just how accurate their content is and how transparent they are.

I for one would like to see all corrections and clarifications made free and visible on all parts of media orgs websites before the paywall. That way I can make an informed choice.

What other simple tools or processes should online newspapers be using to encourage transparency?




December 08 2009

14:44

What’s your problem with the internet? A crib sheet for news exec speeches

When media executives (and the occasional columnist on a deadline) talk about ‘the problem with the web’ they often revert to a series of recurring themes. In doing so they draw on a range of discourses that betray assumptions, institutional positions and ideological leanings. I thought I’d put together a list of some common memes of hatred directed towards the internet at various points by publishers and journalists, along with some critical context.

If you can think of any other common complaints, or responses to the ones below, post them in the comments and I’ll add them in.

Undemocratic and unrepresentative (The ‘Twitterati’)

The presumption here is that the media as a whole is more representative and democratic than users of the web. You know, geeks. The ‘Twitterati’ (a fantastic ideologically-loaded neologism that conjures up images of unelected elites). A variant of this is the position that sees any online-based protest as ‘organised’ and therefore illegitimate.

Of course the media is hardly representative or democratic on any level. In every general election in the UK during the twentieth century, for example, editorial opinion was to the right of electoral opinion (apart from 1997). In 1983, 1987 and 1992 press support exceeded the Conservative Party’s share of the vote by at least half. Similar stats can be found in US election coverage. The reasons are obvious: media owners are not representative or democratic; by definition they are part of a particular social class, that of wealthy proprietors or shareholders (although there are other factors, such as advertiser influence and organisational efficiencies).

Journalists themselves are not representative either in terms of social class, gender, or ethnicity – and have become less representative in recent decades.

But neither is the web a level playing field. Sadly, it has inherited most of the same barriers to entry that permeate the media: lack of literacy, lack of access and lack of time prevent a significant proportion of the population from having any voice at all online.

So any treatment of internet-based opinion should be done with caution. But just as not everyone has a voice online, even fewer people have a voice in print and broadcast. To accuse the web of being unrepresentative can be a smokescreen for the lack of representation in the mainstream media. When a journalist uses the unrepresentative nature of the web as a stick, ask how their news selection process presents a solution to that: is there a PR agency for the poor? Do they seek out a response from the elderly on every story?

And there is a key difference: while journalism becomes less representative, web access becomes more so, with governments in a number of countries moving towards providing universal broadband and access to computers through schools and libraries.

‘The death of common culture’

The internet, this argument runs, is preventing us from having a common culture we can all relate to. Because we are no longer restricted to a few terrestrial channels and a few newspapers – which all share similar editorial values – we are fragmented into a million niches and unable to relate to each other.

This is essentially an argument about culture and the public sphere. The literature here is copious, but one of the key planks is ‘Who defines the public sphere? Who decides what is shared culture?’ Commercial considerations and the needs of elite groups play a key role in both. And of course, what happens if you don’t buy into that shared culture? Alternative media has long attempted to reflect and create culture outside of that mainstream consensus.

You might also argue that new forms of common culture are being created – amateur YouTube videos that get millions of hits; BoingBoing posts; Lolcats; Twitter discussions around jokey hashtag memes – or that old forms of common culture are being given new life: how many people are watching The Apprentice or X Factor because of simultaneous chatter on Twitter?

The ‘echo chamber’/death of serendipity (homophily)

When we read the newspapers or watched TV news, this argument runs, we encountered information we wouldn’t otherwise know about. But when we go online, we are restricted to what we seek out – and we seek out views to reinforce our own (homophily or cyberbalkanisation).

Countering this, it is worth pointing out that in print people tended to buy a newspaper that also supported their own views, whereas online people switch from publication to publication with differing political orientations. It’s also worth pointing out that over 80% of people have come across a news article online while searching for something else entirely. Many websites have ‘related/popular articles/posts/videos’ features that introduce some serendipity. And finally, there is the role of social media in introducing stories we otherwise wouldn’t encounter (a good example here is the Iran elections – how many people would have skimmed over that in a publication or broadcast, but clicked through because someone was tweeting #cnnfail)

That’s not to say homophily doesn’t exist – there is evidence to suggest that people do seek out reinforcements for their own views online – but that doesn’t mean the same trend didn’t exist in print and broadcast, and it doesn’t make that true of everyone. I’d argue that the serendipity of print/broadcast depends on an editor’s news agenda and the serendipity of online depends on algorithms and social networks.

‘Google are parasites’

This argues that Google’s profits are based on other people’s content. I’ve tackled the Google argument previously: in short, Google is more like a map than a publication, and its profits are based on selling advertising very effectively against searches, rather than against content (which is the publisher’s model). It’s also worth pointing out that news content only forms around 0.01% of indexed content, and that news-related searches don’t tend to attract much advertising anyway. (If they did, Google would try to monetise Google News.)

It’s often worth looking at the discourses underlying much of the Google-parasite meme. Often these revolve around it being ‘not fair’ that Google makes so much money; around ‘the value of our content’ as if that is set by publishers rather than what the market is willing to pay; and around ‘taking our content’ despite the fact that publishers invite Google to do just that through a) deciding not to use the Robots Exclusion Protocol (ACAP appears to be an attempt to dictate terms, although it’s not technically capable of doing so yet) and b) employing SEO practices.
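To make the Robots Exclusion Protocol point concrete, here is a minimal sketch of the crawler side of it, using only Python’s standard library; the newspaper domain, paths and rules are invented for illustration, and real crawlers layer far more logic on top of this:

    # A compliant crawler reads the publisher's robots.txt and honours it.
    # In practice the rules would be fetched from the site's /robots.txt
    # (e.g. via rules.set_url(...) and rules.read()); here they are inlined.
    from urllib import robotparser

    # Rules a publisher might serve if it wanted crawlers out of its articles.
    robots_lines = [
        "User-agent: *",
        "Disallow: /articles/",
    ]

    rules = robotparser.RobotFileParser()
    rules.parse(robots_lines)

    # can_fetch() returns False when the rules exclude this user agent and path,
    # and a well-behaved crawler then skips the page entirely.
    print(rules.can_fetch("Googlebot", "https://example-newspaper.co.uk/articles/scoop.html"))  # False
    print(rules.can_fetch("Googlebot", "https://example-newspaper.co.uk/contact.html"))         # True

The point of the sketch is simply that exclusion is publisher-controlled: a site that publishes no Disallow rules is, in effect, inviting the crawler in.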

Another useful experiment with these complaints is to look at what result publishers are really aiming for. Painting Google as a parasite can, variously, be used as an argument to relax ownership rules; to change copyright law to exclude fair comment; or to gain public subsidy (for instance, via a tax on Google or other online operators). In a nutshell, this argument is used to try to re-acquire the monopoly over distribution that publishers had in the physical world, and the consequent ability to set the price of advertising.

‘Bloggers are parasites’

A different argument to the one above, this one seeks to play down the role of bloggers by saying they are reliant on content from mainstream media.

Of course, you could equally point out that mainstream media is reliant on content from PR agencies, government departments, and, most of all, each other. The reliance of local broadcasters on local newspaper content is notorious; the lifting of quotes from other publications equally common. There’s nothing necessarily wrong with that – journalists often lift quotes for the same reasons as bloggers – to contextualise and analyse. The difference is that bloggers tend to link to the source.

Another point to make here is some blogs’ role as ‘Estate 4.5’, monitoring the media in the same way that the media is supposed to monitor the powerful. “We can fact-check your ass!”

‘You don’t know who you’re dealing with’

On the internet no one knows you're a dog

Identity is a complex thing. While it’s easy to be anonymous online, the assertions that people make online are generally judged by their identities, just as in the real world.

However, an identity is more than just a name – online, more than anything, it is about reputation. And while names can be faked, reputations are built over time. Forum communities, for example, are notorious for having a particularly high threshold when it comes to buying into contributions from anyone who has not been an active part of that community for some time. (It’s also worth noting that there’s a rich history of anonymous/pseudonymous writing in newspapers).

Users of the web rely on a range of cues and signals to verify identity and reputation, just as they do in the physical world. There’s a literacy to this, of course, which not everyone has at the same levels. But you might argue that it is in some ways easier to establish the background of a writer online than it was for their print or broadcast counterparts. On the radio, nobody knows you’re a dog.

Rumour and hearsay ‘magically become gospel’

They say “A lie is halfway round the world before the truth has got its boots on.” And it’s fair to say that there is more rumour and hearsay online for the simple reason that there is more content and communication online (and so there’s also more factual and accurate information online too). But of course myths aren’t restricted to one medium – think of the various ‘Winterval’ stories propagated by a range of newspapers that have gained such common currency. Or how about these classics:

Express cover: Migrants take all new jobs

The interactive nature of the web does make it easier for others to debunk hearsay through comments, responses on forums, linkbacks, hashtagged tweets and so on. But interactivity is a quality of use, not of the thing itself, so it depends on the critical and interactive nature of those browsing and publishing the content. Publishers who don’t read their comments, take note.

‘Unregulated’ lack of accountability

Accountability is a curious one. Often those making this assertion are used to particular, formal, forms of accountability: the Press Complaints Commission; Ofcom; the market; your boss. Online the forms of accountability are less formal, but can be quite savage. A ream of critical comments makes you accountable very quickly. Look at what happened to Robert Scoble when he posted something inaccurate; or to Jan Moir when she wrote something people felt was in bad taste. That accountability didn’t exist in the formal structures of mainstream media.

Related to this is the idea that the internet is ‘unregulated’. Of course it is regulated – you have (ironically, relatively unaccountable) organisations like the Internet Watch Foundation, and the law applies just as much online as in the physical world. Indeed, there is a particular problem with one country’s laws being used to pursue people abroad – see, for example, how Russian businessmen have sued American publishers in London over articles which were accessed only a few times online. On the other hand, people can escape the attentions of lawyers by mirroring content in other jurisdictions, by simply being too small a target to be worth a lawyer’s time, or by being so many that it is impractical to pursue them all. These characteristics of the web can be used in the defence of freedoms (see Trafigura) as much as for attacks (hate literature).

Triviality

Trivial is defined as “of very little importance or value”. This is of course a subjective value judgement depending on what you feel is important or valuable. The objection to the perceived triviality of online content – particularly those of social networks and blogs – is another way to deprecate an upstart rival based on a normative ideal of the importance of journalism. And while there is plenty of ‘important’ information in the media, there is also plenty of ‘trivial’ material too, from the 3am girls to gift ideas and travel supplements.

The web has a similar mix. To focus on the trivial is to intentionally overlook the incredibly important. And it is also to ignore the importance of so much apparently ‘trivial’ information – what my friends are doing right now may be trivial to a journalist, but it’s useful ‘news’ or content to me. And in a conversational medium, the exchange of that information is important social glue.

To take journalists’ own news values: people within your social circle are ‘powerful’ within that circle, and therefore newsworthy, to those people, regardless of their power in the wider world.

‘Cult of the amateur’ undermining professionals

This argument has, for me, strange echoes of the arguments against universal suffrage at various points in history. Replace ‘bloggers’ with ‘women’ or ‘the masses’ and ‘professionals’ with ‘men’ or ‘the aristocracy’ in these arguments and you have some idea of the ideology underlying them. It’s the notion that only a select portion of the population are entitled to a voice in the exercise of power.

The discourse of ‘amateur’ is particularly curious. The implication is that amateur means poor quality, whereas it simply means not paid. The Olympics is built on amateurism, but you’d hardly question the quality of Olympic achievement throughout time. In the 19th century much scientific discovery was done by amateur scientists.

Professional, on the other hand, is equated with ‘good’. But professionalism has its own weaknesses: the pressures of deadlines, pressures of standardisation and efficiency, commercialism and market pressures, organisational culture.

That’s not to say that professionalism is bad, either, but that both amateurism and professionalism have different characteristics which can be positive or negative in different situations.

There’s an economic variant to this argument which suggests that people volunteering their efforts for nothing undermines the economic value of those who do the same as part of a paid job. This is superficially true, but some of the reasons for paying people to do work are because you can expect it to be finished within a particular timeframe to a particular quality – you cannot guarantee those with amateur labour (also, amateurs choose what they want to work on), so the threat is not so large as it is painted. The second point is that jobs may have to adapt to this supply of volunteer information. So instead of or as well as creating content the role is to verify it, contextualise it, link it, analyse it, filter it, or manage it. After all, we don’t complain about the ‘cult of the volunteer’ undermining charity work, do we?

Thanks to Nick Booth, Jon Bounds, Will Perrin, Alison Gow, Michele Mclellan, King Kaufman, Julie Posetti, Mark Pack, James Ball, Shane Richmond, Clare White, Sarah Hartley, Mary Hamilton, Matt Machell and Mark Coughlan for contributing ideas via Twitter under the #webhate tag.

November 26 2009

13:29

Dear Mandy … An Open Letter to Peter Mandelson from Dan Bull

Dan Bull is clearly a star in the making…

An open letter to Peter Mandelson regarding the newly announced Digital Economy Bill.

And an interesting use of video, whatever your view.

If you disapprove of the Bill, sign the petition at http://petitions.number10.gov.uk/dont

Write your own message to Lord Mandelson at http://threestrikes.openrightsgroup.org/

Dan Bull’s home page: http://www.myspace.com/danbull

Follow Dan on Twitter @itsDanBull – share the message with the #dearmandy tag.

November 19 2009

21:15

What I was told when I asked about blogs joining the PCC

Following recent coverage of the Independent interview in which the PCC’s Baroness Buscombe possibly mooted the idea of the PCC regulating blogs, I thought I would share some correspondence I had with the PCC recently over the same issue. In a nutshell: blogs can already choose to operate under the PCC anyway.

I asked Simon Yip of the PCC whether a hyperlocal blog could opt in to the PCC Code and self-regulation. These are his replies:

“They can decide to adhere to the PCC Code if they choose. To fall formally within the system overseen by the PCC, they would have to subscribe to the body responsible for funding the Commission.

“I am afraid I am unable to answer the question of cost, as it depends on the circulation of the newspaper [sic]. As you can imagine, it would vary from publication to publication.

“For any publication to subscribe to the Code of Practice, the publication would contact Pressbof.”

So there you go. If you can afford to pay for a shiny PCC badge, then you’re welcome.

And of course, that’s the main hurdle to the idea of PCC regulation of blogs: few blogs could afford to pay, and even fewer would want to. Meanwhile, there is no financial incentive for the PCC to recruit blogs (nor is there any incentive for bloggers – yet – in joining an organisation whose 2 main purposes appear to be to stave off statutory regulation and to mediate disputes to avoid legal costs).

Whether there is financial incentive in trying to attract public funding to do so, or to use blogs as a common foe to do the same is, of course, a separate matter.

What is much more worrying than this blogging regulation sideshow is the apparent ignorance demonstrated by Baroness Buscombe in talking about Google and the news industry’s business plans, described earlier on this blog by Matt Wardman.

The most curious quote for me from her SoE speech is this one, following on from a paragraph which attempts to conjure up the now almost pantomime-like Monster Of Google.

“I urge you to recall the recent words of Eric Schmidt, Google’s CEO: “We use as our primary goal the benefit to end users. That’s who we serve.” So there you have it: the end user matters, not those who create content in the first place.”

Is she saying that serving users above content creators is a Bad Thing? Weren’t newspapers supposed to serve their readerships as well? Or did that change while I wasn’t looking?

10:43

Baroness Buscombe, the Press Complaints Commission and the Internet: Hard Questions

Baroness Buscombe, the Chairman of the Press Complaints Commission, gave a speech this week to the Society of Editors, followed by some comments to Ian Burrell of the Independent about a desire to “regulate the blogosphere”.

The Baroness has taken several steps backwards from her previous statements to Mr Burrell, and has attempted to emphasise that any proposals would be “voluntary”.

I am sceptical as to whether this is a true change of mind, or simply a more nuanced journey aiming for the same destination by a more circuitous, and perhaps better hidden, route. Ian Burrell has pointed out that he had a direct interview with her for 40 minutes, so making that mistake would not be easy. However, that point has been addressed elsewhere, with a vigorous collective letter from hundreds of bloggers.

For me, in addition to the “will we … won’t we … will we … won’t we … regulate the bloggers” game of Hokey-Cokey, this affair has highlighted a number of problems with the Press Complaints Commission, and perhaps with Baroness Buscombe herself.

Firstly, the Chairman of the Press Complaints Commission is a position which surely depends on political and commercial neutrality. Perhaps it can only be compared to that of Speaker of the House of Commons. How is it possible for a Peer who takes the whip for a political party to be neutral?

Secondly, despite the Chairman of the PCC clearly needing to be a neutral figure, Baroness Buscombe used her speech to the Society of Editors to make party political points.

Thirdly, having read Baroness Buscombe’s speech to the Society of Editors, I think that her, and the PCC’s, level of knowledge and understanding about the Internet is open to question.

And finally, Baroness Buscombe applauds the aggressive media investigations of the House of Commons and MPs’ expenses, yet suggests that the media need to lay off the House of Lords – where she is a member; this at a time when the financial skeletons have begun to emerge, creaking, from their Lordships’ cupboards into the light of day. That is a double standard.

Let me illustrate this with a few extracts.

Political Neutrality

Baroness Buscombe opens with a recounting of her experience as a Shadow Minister fighting the current Labour administration, including:

Of course the fact that unfortunately we do have such a dysfunctional democracy – particularly given the House of Commons appears almost entirely to have forgotten what they are there for – means it is vital that the press is free to investigate and probe and tell it like it is.

You can rightly feel proud that, from unraveling the government’s misleading spinning of intelligence in the Iraq War to exposing uncensored details of MPs’ expenses, the British press has filled the democratic deficit in recent years.

Does this partisan accusation, whether true or not, have any place in a speech by the person who is ultimately responsible for determining the accuracy or otherwise of such claims made by newspapers?

And why has she not, at the very least, resigned the Conservative whip?

Understanding the Internet

Baroness Buscombe, on news aggregators and search engines:

Together the press, all commercial broadcasters, film, book publishing and music industries must now work together to find a new business model with the Search Engines. The latter, the aggregators, think it is ok to enjoy the use of all your valuable intellectual property and ad revenues for little or no return.

This statement is simply untrue. Major aggregators do *not* use *all* of the intellectual property of newspapers and media. Google, which is attacked by the Baroness in the following paragraph, runs the Google News service.

Google News takes 1) a headline, and 2) up to around 155 characters of text.

It must be very depressing for journalists who spend a whole week creating a 5,000-word article to realise that only the first two lines and the subeditor’s headline are of any value!

Further, Google offers a complete opt-out service, either from having articles included in the site’s cache, or from having a site indexed altogether. I use it myself on the Wardman Wire to prevent caching, since I have taken the trouble to invest in a high-quality server and want the visitors to come to my site rather than read the Google cache.

If services such as Google News are covering content from newspapers and the media, it is simply because those newspapers have made a decision to allow Google to do so.
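As an illustration of the opt-out mechanics mentioned above, here is a minimal sketch, again in plain Python, of how a crawler might honour a robots meta tag. The HTML is invented for the example, and this reflects the published meta-tag convention rather than Google’s actual implementation:

    # A publisher opts out of caching or indexing with a robots meta tag,
    # e.g. <meta name="robots" content="noarchive"> keeps a page out of the
    # cache while still allowing it to be indexed.
    from html.parser import HTMLParser

    class RobotsMetaParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.directives = set()

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
                for token in (attrs.get("content") or "").lower().split(","):
                    self.directives.add(token.strip())

    # Invented page that allows indexing but refuses a cached copy.
    page = '<html><head><meta name="robots" content="index, noarchive"></head></html>'
    parser = RobotsMetaParser()
    parser.feed(page)

    print("skip cached copy:", "noarchive" in parser.directives)   # True
    print("skip index entirely:", "noindex" in parser.directives)  # False

A noarchive directive keeps the page out of the cache while leaving it in the index, which is exactly the selective opt-out described above; a noindex directive, or a robots.txt Disallow rule, takes the site out of the index altogether.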

The issue of aggregators and search engines, and their impact on the revenues of newspapers, has been one of the very highest priorities of the industry for months, and it is worrying that the head of the PCC hasn’t got to grips with the basic concepts involved after 6 months with the organisation (Wikipedia quotes her start date as April 2009).

Leave their Lordships Alone

Baroness Buscombe on the Commons, and the importance of vigorous scrutiny:

I know that this is not a popular message with many of my fellow Parliamentarians, some of whom are bruised by recent coverage, but we must consider the MPs’ expenses furore as a whole, and not focus on individual injustices.

What is the main lesson to be learned?

Surely, it is that the absence of scrutiny in the first place allowed a culture of abuse to flourish. If trust in politics is at a low ebb, it is because there has been too little freedom to shine a light on politicians’ activities, not too much.

However, about 4 paragraphs later the tone of Baroness Buscombe’s speech changes:

Which leads me to the House of Lords. I may be partisan, but is it really in anyone’s interests for the media to be party to the undermining of our Second Chamber – one of the few platforms in this country where people can stand up and say what they believe without fear or favour?

This is astonishing at a time when the light of day is at last shining on abuses of the Expenses system in the Upper Chamber. This is not a good recommendation for a Press Regulator who is trying to declare her support for strong investigation by journalists.

And that letter …

The letter should still be signed by as wide a range of bloggers as possible, because – even if we take Baroness Buscombe’s new position as being the real one – the PCC and the Baroness clearly need someone to explain to them how the Internet works.

Wrapping Up

You can find the letter and the argument behind it, and sign up, here at Liberal Conspiracy.

Before signing, I’d encourage readers to read the whole speech and judge my comments in their full context.

At present this riposte has been driven largely by bloggers in the political niche; I’d particularly encourage bloggers in the media and journalism areas to offer their support.

But the bloggers who I really want to see sign up are those writing for either the Society of Editors or the Press Complaints Commission.

Unfortunately, neither of them has a blogger. Perhaps getting one would be a good first step towards finding out more about the internet before Baroness Buscombe makes another speech.

They presumably already have an insight into how quickly the online community can react when necessary.

Further Coverage

  1. Mark Pack has a slightly less pointed critique of Baroness Buscombe’s speech.
  2. Roy Greenslade has three articles about the “blog regulation” incident.
  3. The Heresiarch has a different angle entitled “Bloggers Repel Boarders“. Ooh-arr, me hearties.
  4. Liberal Conspiracy has the “Unity letter“.
