
August 24 2012

14:35

This Week in Review: Twitter’s ongoing war with developers, and plagiarism and online credibility

[Since the review was off last week, this week's review covers the last two weeks.]

More Twitter restrictions for developers: Twitter continued to tighten the reins on developers building apps and services based on its platform with another change to its API rules last week. Most of it is pretty incomprehensible to non-developers, but Twitter did make itself plain at one point, saying it wants to limit development by engagement-based apps that market to consumers, rather than businesses. (Though a Twitter exec did clarify that at least two of those types of services, Storify and Favstar, were in the clear.)

The Next Web’s Matthew Panzarino clarified some of the technical jargon, and Marketing Land’s Danny Sullivan explained whom this announcement means Twitter likes and doesn’t like, and why. ReadWriteWeb’s Dan Frommer gave the big-picture reason for Twitter’s increasing coldness toward developers — it needs to generate tons more advertising soon if it wants to stay independent, and the way to do that is to keep people on Twitter, rather than on Twitter-like apps and services. (Tech entrepreneur Nova Spivack said that rationale doesn’t fly, and came up with a few more open alternatives to allow Twitter to make significant money.)

That doesn’t mean developers were receptive to the news, though. Panzarino said these changes effectively kill the growth of third-party products built on Twitter’s platform, and Instapaper founder Marco Arment argued that Twitter has made itself even harder to work with than the famously draconian Apple. Eliza Kern and Mathew Ingram of GigaOM talked to developers about their ambivalence about Twitter’s policies and put Twitter’s desire for control in perspective, respectively.

Several observers saw these changes as a marker of Twitter’s shift from user-oriented service to cog in the big-media machine. Tech designer Stowe Boyd argued Twitter “is headed right into the central DNA of medialand,” and tech blogger Ben Brooks said Twitter is now preoccupied with securing big-media partnerships: “Twitter has sold out. They not only don’t care about the original users, but they don’t even seem to care much for the current users — there’s a very real sense that Twitter needs to make money, and they need to make that money yesterday.” Developer Rafe Colburn pointed out how many of Twitter’s functions were developed by its users, and developer Nick Bruun said many of the apps that Twitter is going after don’t mimic its user experience, but significantly improve it. Killing those apps and streamlining the experience, said GigaOM’s Mathew Ingram, doesn’t help users, but hurts them.

Part of the problem, a few people said, was Twitter’s poor communication. Harry McCracken of Time urged Twitter to communicate more clearly and address its users alongside its developers. Tech entrepreneur Anil Dash offered a rewritten (and quite sympathetic) version of Twitter’s guidelines.

There’s another group of developers affected by this change — news developers. The Lab’s Andrew Phelps surveyed what the changes will entail for various Twitter-related news products (including a couple of the Lab’s own), and journalism professor Alfred Hermida warned that they don’t bode well for the continued development of open, networked forms of journalism.

Plagiarism, credibility, and the web: Our summer of plagiarism continues unabated: Wired decided to keep Jonah Lehrer on as a contributor after his plagiarism scandal, though the magazine said it’s still reviewing his work and he has no current assignments. Erik Wemple of The Washington Post lamented the lack of consequences for Lehrer’s journalistic sins, and both he and Poynter’s Craig Silverman wondered how the fact-checking process for his articles would go. Meanwhile, Lehrer was accused by another source of fabricating quotes and also came under scrutiny for mischaracterizing scientific findings.

The other plagiarizer du jour, Time and CNN’s Fareed Zakaria, has come out much better than Lehrer so far. Zakaria resigned as a Yale trustee, but Time, CNN, and The Washington Post (to which he contributes columns) all reinstated him after reviewing his work for them, with Time declaring it was satisfied that his recent lapse was an unintentional error. However, a former Newsweek editor said he ghost-wrote a piece for Zakaria while he was an editor there, though he told the New York Observer and Poynter that he didn’t see it as a big deal.

Some defended Zakaria on a variety of grounds. Poynter’s Andrew Beaujon evaluated a few of the arguments and found only one might have merit — that the plagiarism might have resulted from a research error by one of his assistants. The Atlantic’s Robinson Meyer, meanwhile, argued that plagiarism has a long and storied history in American journalism, but hasn’t always been thought of as wrong.

Others saw the responses by news organizations toward both Zakaria and Lehrer as insufficient. Poynter’s Craig Silverman argued that those responses highlighted a lack of consistency and transparency (he and Kelly McBride also wrote a guide for news orgs on how to handle plagiarism), while journalism professor Mark Leccese said Zakaria’s employers should have recognized the seriousness of plagiarism and gone further, and Steven Brill at the Columbia Journalism Review called for more details about the nature of Zakaria’s error.

A New York Times account of Zakaria’s error focused on his hectic lifestyle, filled with the demands of being a 21st-century, multiplatform, personally branded pundit. At The Atlantic, book editor and former journalist Peter Osnos focused on that pressure for a pundit to publish on all platforms for all people as the root of Zakaria’s problem.

The Times’ David Carr pinpointed another factor — the availability of shortcuts to credibility on the web that allowed Lehrer to become a superstar before he learned the craft. (Carr found Lehrer’s problems far more concerning than Zakaria’s.) At Salon, Michael Barthel also highlighted the difference between traditional media and web culture, arguing that the problem for people like Zakaria is their desire to inhabit both worlds at once: “The way journalists demonstrate credibility on the Web isn’t better than how they do in legacy media. It’s just almost entirely different. For those journalists and institutions caught in the middle, that’s a real problem.” GigaOM’s Mathew Ingram argued that linking is a big part of the web’s natural defenses against plagiarism.

Untruths and political fact-checking: The ongoing discussion about fact-checking and determining truth and falsehood in political discourse got some fresh fuel this week with a Newsweek cover story by Harvard professor Niall Ferguson arguing for President Obama’s ouster. The piece didn’t stand up well to numerous withering fact-checks (compiled fairly thoroughly by Newsweek partner The Daily Beast and synthesized a bit more by Ryan Chittum of the Columbia Journalism Review).

Ferguson responded with a rebuttal in which he argued that his critics “claim to be engaged in ‘fact checking,’ whereas in nearly all cases they are merely offering alternative (often silly or skewed) interpretations of the facts.” Newsweek’s editor, Tina Brown, likewise referred to the story as opinion (though not one she necessarily agreed with) and said there isn’t “a clear delineation of right and wrong here.”

Aside from framing the criticism as a simple difference of opinion rather than an issue of factual (in)correctness, Newsweek also acknowledged to Politico that it doesn’t have fact-checkers — that its editors “rely on our writers to submit factually accurate material.” Poynter’s Craig Silverman provided some of the history behind that decision, which prompted some rage from Charles Apple of the American Copy Editors Society. Apple asserted that any news organization that doesn’t respect its readers or public-service mission enough to ensure its work is factually accurate needs to leave the business. The Atlantic’s Ta-Nehisi Coates said the true value of fact-checkers comes in the culture of honesty they create.

Mathew Ingram of GigaOM wondered if that fact-checking process might be better done in public, where readers can see the arguments and inform themselves. In an earlier piece on campaign rhetoric, Garance Franke-Ruta of The Atlantic argued that in an era of willful, sustained political falsehood, fact-checking may be outliving its usefulness, saying, “One-off fact-checking is no match for the repeated lie.” The Lab’s Andrew Phelps, meanwhile, went deep inside the web’s leading fact-checking operation, PolitiFact.

The Times’ new CEO and incremental change: The New York Times Co. named a new CEO last week, and it was an intriguing choice — former BBC director general Mark Thompson. The Times’ article on Thompson focused on his digital expansion at the BBC (which was accompanied by a penchant for cost-cutting), as well as his transition from publicly funded to ad-supported news. According to the International Business Times, those issues were all sources of skepticism within the Times newsroom. Bloomberg noted that Thompson will still be subject to Arthur Sulzberger’s vision for the Times, and at the Guardian, Michael Wolff said Thompson should complement that vision well, as a more realistic and business-savvy counter to Sulzberger.

The Daily Beast’s Peter Jukes pointed out that many of the BBC’s most celebrated innovations during Thompson’s tenure were not his doing. Robert Andrews of paidContent also noted this, but said Thompson’s skill lay in being able to channel that bottom-up innovation to fit the BBC’s goals. Media analyst Ken Doctor argued that the BBC and the Times may be more alike than people think, and Thompson’s experience at the former may transfer over well to the latter: “Thompson brings the experience at moving, too slowly for some, too dramatically for others, a huge entity.” But Mathew Ingram of GigaOM said that kind of approach won’t be enough: “The bottom line is that a business-as-usual or custodial approach is not going to cut it at the NYT, not when revenues are declining as rapidly as they have been.”

Joe Pompeo of Capital New York laid out a thorough description of the Sulzberger-led strategy Thompson will be walking into: Focusing on investment in the Times, as opposed to the company’s other properties, but pushing into mobile, video, social, and global reach, rather than print. And Bloomberg’s Edmund Lee posited the idea that the Times could be in an increasingly good position to go private.

The Assange case and free speech vs. women’s rights: WikiLeaks’ Julian Assange cleared another hurdle last week — for now — in his fight to avoid extradition to Sweden on sexual assault accusations when Ecuador announced it would grant him asylum. Assange has been staying in the Ecuadorean Embassy in London for two months, but British officials threatened to arrest Assange in the embassy. Ecuador’s decision gives him immunity from arrest on Ecuadorean soil (which includes the embassy).

Assange gave a typically defiant speech for the occasion, but the British government was undeterred, saying it plans to resolve the situation diplomatically and send Assange to Sweden. Ecuador’s president said an embassy raid would be diplomatic suicide for the U.K., and Techdirt’s Mike Masnick was appalled that Britain would even suggest it. Filmmakers Michael Moore and Oliver Stone argued in The New York Times that Assange deserves support as a free-speech advocate, while Gawker’s Adrian Chen said the sexual assault case has nothing to do with free speech. Laurie Penny of The Independent looked at the way free speech and women’s rights are being pitted against each other in this case. Meanwhile, Glenn Greenwald of The Guardian excoriated the press for their animosity toward Assange.

Reading roundup: We’ve already covered a bunch of stuff over the past week and a half, and there’s lots more to get to, so here’s a quick rundown:

— Twitter and Blogger co-founder Evan Williams announced the launch of Medium, a publishing platform that falls somewhere between microblogging and blogging. The Lab’s Joshua Benton has the definitive post on what Medium might be, Dave Winer outlined his hopes for it, and The Awl’s Choire Sicha wrote about the anti-advertising bent at sites like it.

— A few social-news notes: Two features from the Huffington Post and the Lab on BuzzFeed’s ramped-up political news plans; TechCrunch’s comparison of BuzzFeed, Reddit, and Digg; and a feature from the Daily Dot on Reddit and the future of social journalism.

— The alt-weekly The Village Voice laid off staffers late last week, prompting Jim Romenesko to report that the paper is on the verge of collapse and BuzzFeed’s Rosie Gray to chronicle its demise. Poynter’s Andrew Beaujon said the paper still has plenty left, and The New York Times’ David Carr said the problem is that the information ecosystem has outgrown alt-weeklies.

— Finally, three great food-for-thought pieces: Jonathan Stray here at the Lab on determining proper metrics for journalism, media consultant Mark Potts on a newspaper exec’s 20-year-old view of the web, and Poynter’s Matt Thompson on the role of the quest narrative in journalism.

Photo of Jonah Lehrer by PopTech and drawing of Julian Assange by Robert Cadena used under a Creative Commons license.

April 27 2012

14:00

This Week in Review: Rupert takes the stand, and the Post’s pressure on young aggregators

Fresh accusations and denials for News Corp.: After several months of investigation, News Corp.’s Rupert Murdoch and his son, James, testified this week before the British government’s Leveson inquiry into their company’s phone hacking and bribery scandal. Rupert made headlines by apologizing for his lack of action to stop the scandal and by admitting there was a cover-up — though he said he was the victim of his underlings’ cover-up, not a perpetrator himself (a charge one of those underlings strenuously objected to).

Murdoch also said he “panicked” by closing his News of the World newspaper last year, but said he should have done so years earlier. He spent the first day of his testimony defending himself against charges of lobbying public officials for favors, saying former Prime Minister Gordon Brown “declared war” on News Corp., which Brown denied. James Murdoch also testified to a lack of knowledge of the scandal and cozy relationships with officials.

Attention in that area quickly shifted this week to British Culture Minister Jeremy Hunt, after emails were released showing that he worked to help News Corp. pick up support last year for its bid to take over the broadcaster BSkyB — the same bid he was charged with overseeing. Hunt called the accusation “laughable” and refused calls to resign, though one of his aides did resign, saying his contact with News Corp. “went too far.”

The commentary on Murdoch’s appearance was, perhaps surprisingly, mixed. The Washington Post’s Erik Wemple mocked the fine line Murdoch apparently walked in his currying favor from public officials, and the Guardian’s Nick Davies said Murdoch looks vulnerable: “The man who has made millions out of paying people to ask difficult questions, finally faced questioners he could not cope with.” He antagonized quite a few powerful people in his testimony, Davies said, and the Leveson inquiry ultimately holds the cards here.

But Murdoch biographer Michael Wolff said Rupert doesn’t use his newspapers to gain officials’ favor in the way he’s accused of doing, and Reuters’ Jack Shafer argued that there’s nothing really wrong with lobbying regulators to approve your proposals anyway. “Don’t damn Murdoch for learning the rules of the regulatory game and then playing them as aggressively as he can,” he wrote.

Plagiarism and aggregation at the Post: A Washington Post blogger named Elizabeth Flock resigned last week after being caught plagiarizing, but the story went under the radar until the Post’s ombudsman, Patrick Pexton, wrote a column charging the Post with failing to properly guide its youngest journalists. Pexton said he talked with other young Post aggregators who “felt as if they were out there alone in digital land, under high pressure to get Web hits, with no training, little guidance or mentoring and sparse editing.”

Poynter’s Craig Silverman wrote a strong follow-up to the column, talking to several people from the Post and emphasizing the gravity of Flock’s transgression, but also throwing cold water on the “journalism’s standards are gone, thanks to aggregation” narrative. Reuters’ Jack Shafer thought Pexton went too easy on Flock’s plagiarism, but others thought it was the Post he wasn’t hard enough on. The Awl’s Trevor Butterworth said Flock’s mistake within the Post’s aggregation empire shed light on the “inherent cheapness of the product and the ethical dubiety of the entire process. You see, the Post—or any legacy news organization turned aggregator—wants to have its cake and other people’s cake too, and to do so without damaging its brand as a purveyor of original cake.”

BoingBoing’s Rob Beschizza made the same point, criticizing the Post for trying to dress up its aggregation as original reporting. The Raw Story’s Megan Carpentier used the example as a warning that even the most haphazard, thoughtless aggregated pieces have a certain online permanence under our bylines.

Technology, connection, and loneliness: A week after an Atlantic cover story asked whether Facebook was making us lonely (its answer: yes), MIT professor and author Sherry Turkle echoed the same point last weekend in a New York Times opinion piece. Through social and mobile media, Turkle argued, we’re trading conversation for mere connection, sacrificing self-reflection and the true experience of relating with others in the process.

Numerous people disputed her points, on a variety of different fronts. Cyborgology’s David Banks charged Turkle with “digital dualism,” asserting that “There is no ‘second self’ on my Facebook profile — it’s the same one that is embodied in flesh and blood.” At The Atlantic, Alexandra Samuel said Turkle is guilty of a different kind of dualism — an us/them dichotomy between (generally younger) social media users and the rest of us. Turkle, she wrote, “assumes conversations are only meaningful when they look like the conversations we grew up having.”

Like Banks, Mathew Ingram of GigaOM pointed out the close connection between online and offline relationships, and sociology prof Zeynep Tufekci argued at The Atlantic that if we are indeed seeing a loss in substantive interpersonal connection, it has more to do with our flight to the suburbs than social media. Claude Fischer of Boston Review disputed the idea that loneliness is on the rise in the first place, and in a series of thoughtful tweets, Wired’s Tim Carmody said the road to real relationship is in our own work, not in our embrace or denial of technologies.

New media lessons from academics and news orgs: The University of Texas hosted its annual International Symposium on Online Journalism last weekend, one of the few among the scores of journalism conferences that bring together both working journalists and academics. As usual, University of British Columbia j-prof Alfred Hermida live-blogged the heck out of the conference, and you can see a summary of each of his 14 posts here.

Several people distilled the conference’s many presentations into a few themes: The Lab’s staff identified a few, including the need to balance beauty and usefulness in data journalism and the increasing centrality of mobile in news orgs’ strategies. At the Nonprofit Journalism Hub, conference organizer Amy Schmitz Weiss organized the themes into takeaways for news orgs, and Wisconsin j-prof Sue Robinson published some useful notes, organized by subject area.

A couple of specific items from the conference: The Lab’s Adrienne LaFrance wrote on a University of Texas study that found that the people most likely to pay for news are young men who are highly interested in news, though it also found that our stated desires in news consumption don’t necessarily match up with our actual habits. And Dan Gillmor touted the news-sharing potential of one of the conference’s presenters, LinkedIn, saying it’s the first site to connect news sharing with our professional contacts, rather than our personal ones.

[Editor's note: Mark's too modest to mention the paper he coauthored and presented at ISOJ.]

Reading roundup: Several interesting debates lurked just a bit under the radar this week. Here’s a quick lay of the land:

— Reuters’ Felix Salmon wondered why the New York Times doesn’t sell early access to its big business scoops to hedge funds looking for a market advantage, as Reuters and Bloomberg do. GigaOM’s Mathew Ingram argued that the public value of those is too great to do that, and Salmon responded to his and others’ objections. The conversation also included a lively Twitter exchange, which Ingram and the Lab’s Joshua Benton Storified.

— The Chicago Tribune announced its decision to outsource its TribLocal network of community news sites to the Chicago company Journatic, laying off about 20 employees in the process. The Chicago Reader and Jim Romenesko gave some more information about Journatic (yes, the term “content farm” comes up, though its CEO rejected the term). Street Fight’s Tom Grubisich called it a good deal for the Tribune.

— In a feature at Wired, Steven Levy looked at automatically written stories, something The Atlantic’s Rebecca Greenfield said she didn’t find scary for journalism’s future prospects, since those stories aren’t really journalism. Nebraska j-prof Matt Waite also said journalists shouldn’t be afraid of something that frees them up to do their jobs better, and GigaOM’s Mathew Ingram tied together the Journatic deal and the robot journalism stories to come up with something a bit less optimistic.

— This week on the ebook front: A good primer on the U.S. Department of Justice’s price-fixing lawsuit against Apple and publishers, a practice The Wall Street Journal’s Gordon Crovitz said is completely normal and OK. Elsewhere, some publishers are dropping digital rights management, and a publishing exec talked to paidContent about why they broke DRM.

— Gawker revealed its new commenting system this week — the Lab’s Andrew Phelps gave the background, Gawker’s Nick Denton argued in favor of anonymity, Dave Winer wanted to see the ability for anyone to write an article on it, and GigaOM talked with Denton about the state of tech.

— Google shut down its paid-content system for publishers, One Pass, saying it’s moved on to its Consumer Surveys.

— Finally, a few long reads for the weekend: David Lowery on artist rights and the new business model for creative work, Ethan Zuckerman on the ethics of tweet bombing, danah boyd on social media and fear, and Steve Buttry and Dan Conover on restoring newsroom morale.

Rupert Murdoch artwork by Surian Soosay and texting photo by Ed Brownson used under a Creative Commons license.

January 22 2012

16:58

Johann Hari leaves the Independent after plagiarism storm

Guardian :: Columnist Johann Hari, the journalist at the Independent who was suspended for plagiarism, has announced that he will not return to the newspaper. Hari had been undergoing retraining in the United States and was expected to return to the Independent next month but said on his personal website that he did not want to see colleagues taking the blame for his mistakes.

Continue to read Conal Urquhart, www.guardian.co.uk

December 13 2010

15:41

Plagiarists should at least be *competent* plagiarists – Media Ooops 002

Plagiarism is an interesting game.

You can either rewrite the piece, find a bit more information, leave other bits out, and – if you’re the Daily Mail – reduce the reading age by a year or three.

Or you can acknowledge that the story came from somewhere else, and give a hat-tip for a nugget, or a small fee for an article.

Or you can try and ride both horses and end up sitting on your backside in the middle.

So, we have Exhibit A, from Dizzy Thinks:

The Department for Culture, Media and Sport has a dedicated civil servant working on the Queen’s Diamond Jubilee celebrations in 2012? Not particularly shocking really, but there is an oddity.

According to an FoI release, one of the roles of this civil servant is the development of equalities impact assessment for the Queen’s celebratory bash. Why does a celebration for one person need an equalities impact assessment?

Mind you, as an eagle-eyed reader put it to me. Perhaps it’s because she’s (a) a woman, (b) a pensioner, (c) dependent on state benefits, and (d) married to an immigrant?

and Exhibit B, from the Daily Mail:

The Department for Culture, Media and Sport has a dedicated civil servant working on the Queen’s Diamond Jubilee celebrations in 2012. One of the roles of the civil servant is the development of an ‘equalities impact assessment’. Why does a celebration for one person need an equalities impact assessment? Is it because she’s a woman, a pensioner, relies on the state for handouts — and is married to a foreigner?

It’s clearly derivative, and it’s a complete item in a Diary column, for heaven’s sake. A tip would cost about twenty pounds or a gift voucher, and an acknowledgement would cost nothing.

There appears to be an opening in the market for “how to plagiarise stories without making it patently obvious” at the Mail.

March 23 2010

18:27

Why Newsrooms Don't Use Plagiarism Detection Services

Six years ago, in the wake of the Jayson Blair scandal at the New York Times, Peter Bhatia, then the president of the American Society of Newspaper Editors, gave a provocative speech at the organization's 2004 conference.

"One way to define the past ASNE year is to say it began with Jayson Blair and ended with Jack Kelley," he said.

Bhatia's message was that it was time for the industry and profession to take new measures to prevent serious breaches such as plagiarism and fabrication:

Isn't it time to say enough? Isn't it time to attack this problem in our newsrooms head-on and put an end to the madness? ... Are we really able to say we are doing everything possible to make sure our newspaper and staffs are operating at the highest ethical level?

Today, six years after his speech, another plagiarism scandal erupted at the New York Times (though it's certainly not on the scale of Blair's transgressions). A separate incident also recently unfolded at the Daily Beast. Once again, the profession is engaged in discussion and debate about how to handle this issue. One suggestion I made in my weekly column for Columbia Journalism Review was for newsrooms to start using plagiarism detection software to perform random checks on articles. New York Times public editor Clark Hoyt followed up with a blog post on this idea, and there has been other discussion.

Many people are left wondering how effective these services are, why they aren't being used in newsrooms, and which ones might be the best candidates for use in journalism. Surprisingly, it turns out that newsrooms are more interested in finding out who's stealing their content online than making sure the content they publish is original.

Why Newsrooms Don't Use Them

  1. Cost: The idea of spending potentially thousands of dollars on one of these services is a tough sell in today's newsrooms. "We've had a lot of conversation with media outlets, particularly after a major issue comes up, but the conversation is ultimately what is the cost and whatever cost I give them, they think it's nuts," Robert Creutz, the general manager of the iThenticate plagiarism detection service, told me. He estimated his service, which is the most expensive one out there, would charge between $5,000 and $10,000 per year to a large newspaper that was doing random checks on a select number of articles every day. Many other detection services would charge far less, but it seems that any kind of cost is prohibitive these days.
  2. Workflow: When New York Times public editor Clark Hoyt asked the paper's "editor for information and technology" about these services, he was told the paper has concerns about the reliability of the services. Hoyt also wrote that "they would present major workflow issues because they tend to turn up many false-positive results, like quotes from newsmakers rendered verbatim in many places." News organizations are, of course, hesitant to introduce anything new into their processes that will take up more time and therefore slow down the news. They currently see these services as a time suck on editors, and think the delay isn't worth the results. (A rough sketch of one way to filter out quoted passages before a check appears after this list.)
  3. Catch-22: In basic terms, these services compare a work against web content and/or against works contained in a database of previously published material. (Many services only check against web content.) Major news databases such as Nexis and Factiva are not part of the proprietary plagiarism detection databases, which means the sample group is not ideal for news organizations. As a result, news organizations complain that the services will miss potential incidents of plagiarism. But here's the flip side: If they signed up with these services and placed their content in the database, it would instantly improve the quality of plagiarism detection. Their unwillingness to work with the services is a big reason why the databases aren't of better quality.
  4. Complicated contracts: The Hartford Courant used the iThenticate service a few years ago to check op-ed pieces. "It's worth the cost," Carolyn Lumsden, the paper's commentary editor, told American Journalism Review back in 2004. "It doesn't catch absolutely everything, but it catches enough that you're alerted if there's a problem." When I followed up with her a few weeks back, she told me the paper had ended its relationship with the company. "We had a really good deal with them ... But then iThenticate wanted us to sign a complicated multipage legal contract. Maybe that's what they do with universities (their primary client). But we weren't interested in such entanglements."
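
To make that workflow complaint concrete: because verbatim quotes from newsmakers legitimately appear in many places, one rough way for a newsroom to cut down on those false positives is to strip long quoted passages from copy before submitting it for a check. The snippet below is only an illustrative sketch of that idea in Python, not how any particular vendor handles quotes.

```python
# A minimal sketch (illustration only): drop long quoted passages before a
# plagiarism check, since newsmaker quotes are expected to appear verbatim
# elsewhere and would otherwise show up as false positives.
import re

# Quoted runs of 20 or more characters, in straight or curly quotes.
QUOTE_PATTERN = re.compile(r'[“"][^”"]{20,}[”"]')

def strip_quotes(text):
    """Remove long quoted passages from an article before it is checked."""
    return QUOTE_PATTERN.sub(" ", text)

# The stripped text, rather than the raw article, is what gets submitted;
# anything that still matches is more likely to be genuinely suspect.
```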

The Strange Double Standard

So, as a result of the above concerns, almost no traditional news organizations use a plagiarism detection service to check their work either before or after publication. (On the other hand, Demand Media, a company that has attracted a lot of criticism for its lack of quality content and low pay, is a customer of iThenticate.) Here's the strange twist: Many of these same organizations are happy to invest the money, time and other resources required to use services that check if someone else is plagiarizing their work.

Jonathan Bailey, the creator of Plagiarism Today and president of CopyByte, a consultancy that helps advise organizations about detecting and combating plagiarism, said he's aware of many media companies that pay to check and see if anyone's stealing their content.

"It's fascinating because one of the companies I work with is Attributor ... and I'm finding lots of newspapers and major publications are on board with [that service], but they are not using it to see if the content they're receiving is original," he said. "It's a weird world for me in that regard. A lot of news organizations are interested in protecting their work from being stolen, but not in protecting themselves from using stolen work."

(MediaShift will publish a follow-up article tomorrow that looks at Attributor.)

How They Work

Bailey compares these services to search engines. Just as Google will take a query and check it against its existing database of web content, plagiarism detection services check a submitted work against an existing database of content.

"They work fundamentally with the same principles as search engines," he said. "They all take data from various sources and fingerprint it and compress it and store it in a database. When they find potential matches, they do a comparison."

Each service has its own algorithm to locate and compare content, and they also differ in terms of the size of their databases. Many of the free or less expensive services only search the current web. That means they don't compare material against archived web pages or proprietary content databases.
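
As a rough illustration of that fingerprint-and-compare idea (and not a reconstruction of any vendor's actual algorithm), the sketch below hashes overlapping word "shingles" from each document and scores how much of a submitted article also appears in an indexed corpus; anything above a threshold would go to a human editor, which is where false positives still have to be weeded out by hand.

```python
# A minimal sketch of shingle fingerprinting and comparison (illustration only,
# not how iThenticate, Copyscape, or any other service actually works).
import hashlib
import re

def fingerprints(text, n=8):
    """Return the set of hashed n-word shingles for a piece of text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return set()
    if len(words) < n:
        return {hashlib.md5(" ".join(words).encode("utf-8")).hexdigest()}
    return {hashlib.md5(" ".join(words[i:i + n]).encode("utf-8")).hexdigest()
            for i in range(len(words) - n + 1)}

def overlap(submitted, indexed):
    """Fraction of the submitted text's shingles that also appear in the indexed text."""
    a, b = fingerprints(submitted), fingerprints(indexed)
    return len(a & b) / len(a) if a else 0.0

def potential_matches(submitted, corpus, threshold=0.15):
    """Return (doc_id, score) pairs whose overlap crosses the review threshold."""
    scores = ((doc_id, overlap(submitted, text)) for doc_id, text in corpus.items())
    return sorted((pair for pair in scores if pair[1] >= threshold),
                  key=lambda pair: pair[1], reverse=True)
```

The size and quality of the indexed corpus is exactly the catch-22 described above: a web-only index misses anything that never appeared online, and the big news archives are not part of the vendors' databases.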

Bailey said that another big difference between services is the amount of work they require a person to undertake in order to examine any potential matches. (This is the concern voiced by the editor at the New York Times.) Some services return results that list a series of potential matches, but don't explain which specific sentences or paragraphs seem suspect. This causes a person to spend time eliminating false positives.

Bailey also said some of the web-only services are unable to distinguish between content that is properly quoted or attributed, and material that is stolen. This, too, can waste a person's time. However, he said that iThenticate, for example, does a decent job of eliminating the more obvious false positives, and that it has an API that enables it to be integrated into any web-based content management system.

Where They're Most Effective

Bailey has used and tested a wide variety of the plagiarism detection services available, and said they vary widely in terms of quality. Along with his experience, Dr. Debora Weber-Wulff, a professor of Media and Computing at Hochschule für Technik und Wirtschaft Berlin, has conducted two tests of these services. Her 2007 study is available here, and Bailey also wrote about her 2008 research on his website.

Asked to summarize the effectiveness of these services, Dr. Weber-Wulff offered a blunt reply by email: "Not at all."

"They don't detect plagiarism at all," she wrote. "They detect copies, or near copies. And they are very bad at doing so. They miss a lot, since they don't test the entire text, just samples usually. And they don't understand footnoting conventions, so they will flag false positives."

Her tests involved taking 31 academic papers that included plagiarized elements and running them through different services. Her data is important and worth looking at, though journalists should note that academic papers and articles are not going to elicit the same results. The Hartford Courant seemed happy with its service, as was the Fort Worth Star-Telegram when it used one a few years ago, according to Clark Hoyt's blog post. On the other hand, the New York Times continues to have concerns.

For his part, Bailey mentioned a few services that might work for journalism.

"IThenticate do a very good job, they provide a good service but it is pricey," he said, "and it is very difficult to motivate newspapers to sign up when they're putting second mortgages on their buildings."

He also mentioned Copyscape.

"Copyscape is another one that is very popular and it's very cheap at only 5 cents a search," he said, noting it took the top spot in Dr. Weber-Wulff's latest study. "It's very good at matching -- it uses Google -- and it does a thorough job, though the problem is that it only searches the current web, so you have a limited database."

He recommends using Copyscape if the plan is to perform spot checks for a news organization. Bailey also mentioned SafeAssign as a third offering to look at.
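
To give a sense of what post-publication spot checking could look like in practice, here is a minimal sketch of that workflow: sample a handful of the day's published articles at random and run each one through whichever service the newsroom licenses. The check_article function is a hypothetical stand-in, not a real Copyscape or iThenticate client.

```python
# A hedged sketch of random post-publication spot checks (illustration only).
import random

def check_article(text):
    """Hypothetical detection-service call; returns a list of suspected sources."""
    raise NotImplementedError("wire this up to whichever service the newsroom licenses")

def daily_spot_check(published_articles, sample_size=5):
    """Randomly sample a few of the day's pieces and flag any with suspected matches."""
    sample = random.sample(published_articles, min(sample_size, len(published_articles)))
    flagged = []
    for article in sample:
        matches = check_article(article["text"])
        if matches:
            flagged.append((article["slug"], matches))
    return flagged  # anything flagged goes to an editor for manual review
```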


In his view, it's unacceptable that news organizations are ignoring these services.

"The risks of not running a [plagiarism] check are incredibly high," he said, citing the damage that can be done to a brand, and the potential for lawsuits. "At the very least, they should be doing post-publication checking rather than letting the public-at-large or competitors do it for them -- because that's when things get particularly ugly."

Craig Silverman is an award-winning journalist and author, and the managing editor of MediaShift and Idea Lab. He is founder and editor of Regret the Error, the author of Regret the Error: How Media Mistakes Pollute the Press and Imperil Free Speech, and a weekly columnist for Columbia Journalism Review. Follow him on Twitter at @CraigSilverman.

March 12 2010

15:00

This Week in Review: Plagiarism and the link, location and context at SXSW, and advice for newspapers

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

The Times, plagiarism and the link: A few weeks ago, the resignations of two journalists from The Daily Beast and The New York Times accused of plagiarism had us talking about how the culture of the web affects that age-old journalistic sin. That discussion was revived this week by the Times’ public editor, Clark Hoyt, whose postmortem on the Zachery Kouwe scandal appeared Sunday. Hoyt concluded that the Times “owes readers a full accounting” of how Kouwe’s plagiarism occurred, and he also called out DealBook, the Times’ business blog for which Kouwe wrote, questioning its hyper-competitive nature and saying it needs more oversight. (In an accompanying blog post, Hoyt also said the Times needs to look closer at implementing plagiarism prevention software.)

Reuters’ Felix Salmon challenged Hoyt’s assertion, saying that the Times’ problem was not that its ethics were too steeped in the ethos of the blogosphere, but that they aren’t bloggy enough. Channeling CUNY prof Jeff Jarvis’ catchphrase “Do what you do best and link to the rest,” Salmon chastised Kouwe and other Times bloggers for rewriting stories that other online news organizations beat them to, rather than simply linking to them. “The problem, here, is that the bloggers at places like the NYT and the WSJ are print reporters, and aren’t really bloggers at heart,” Salmon wrote.

Michael Roston made a similar argument at True/Slant the first time this came up, and ex-newspaperman Mathew Ingram came to Salmon’s defense this time with an eloquent case for the link. It’s not just a practice for geeky insiders, he argues; it’s “a fundamental aspect of writing for the web.” (Also at True/Slant, Paul Smalera made a similar Jarvis-esque argument.) In a lengthy Twitter exchange with Salmon, Times editor Patrick LaForge countered that the Times does link more than most newspapers, and Kouwe was an exception.

Jason Fry, a former blogger for the Wall Street Journal, agreed with Ingram and Smalera, but theorized that the Times’ linking problem is not so much a refusal to play by the web’s rules as “an unthinking perpetuation of print values that are past their sell-by date.” Those values, he said, are scoops, which, as he argued further in a more sports-centric column, readers on the web just don’t care about as much as they used to.

Location prepares for liftoff: The massive music/tech gathering South By Southwest (or, in webspeak, SXSW) starts today in Austin, Texas, so I’m sure you’ll see a lot of ideas making their way from Austin to next week’s review. If early predictions are any indication, one of the ideas we’ll be talking about is geolocation — services like Foursquare and Gowalla that use your mobile device to give and broadcast location-specific information to and about you. In anticipation of this geolocation hype, CNET has given us a pre-SXSW primer on location-based services.

Facebook jump-started the location buzz by apparently leaking word to The New York Times that it’s going to unveil a new location-based feature next month. Silicon Alley Insider does a quick pro-and-con rundown of the major location platforms, and ReadWriteWeb wonders whether Facebook’s typically privacy-guarding users will go for this.

The major implication of this development for news organizations, I think, is the fact that Facebook’s jump onto the location train is going to send it hurtling forward far, far faster than it’s been going. Within as little as a year, location could go from the domain of early-adopting smartphone addicts to being a mainstream staple of social media, similar to the boom that Facebook itself saw once it was opened beyond college campuses. That means news organizations have to be there, too, developing location-based methods of delivering news and information. We’ve known for a while that this was coming; now we know it’s close.

The future of context: South By Southwest also includes bunches of fascinating tech/media/journalism panels, and one of them that’s given us a sneak preview is Monday’s panel called “The Future of Context.” Two of the panelists, former web reporter and editor Matt Thompson and NYU professor Jay Rosen, have published versions of their opening statements online, and both pieces are great food for thought. Thompson’s is a must-read: He describes the difference between day-to-day headline- and development-oriented information about news stories that he calls “episodic” and the “systemic knowledge” that forms our fundamental framework for understanding an issue. Thompson notes how broken the traditional news system’s way of intertwining those two forms of knowledge is, and he asks us how we can do it better online.

Rosen’s post is in less of a finished format, but it has a number of interesting thoughts, including a quick rundown of reasons that newsrooms don’t do explanatory journalism better. Cluetrain Manifesto co-author Doc Searls ties together both Rosen’s and Thompson’s thoughts and talks a bit more about the centrality of stories in pulling all that information together.

Tech execs’ advice for newspapers: Traditional news organizations got a couple of pieces of advice this week from two relatively big-time folks in the tech world. First, Netscape co-founder Marc Andreessen gave an interview to TechCrunch’s Erick Schonfeld in which he told newspaper execs to “burn the boats” and commit wholeheartedly to the web, rather than finding ways to prop up modified print models. He used the iPad as a litmus test for this philosophy, noting that “All the new [web] companies are not spending a nanosecond on the iPad or thinking of ways to charge for content. The older companies, that is all they are thinking about.”

Not everyone agreed: Newspaper Death Watch’s Paul Gillin said publishers’ current strategy, which includes keeping the print model around, is an intelligent one: They’re milking the print-based profits they have while trying to manage their business down to a level where they can transfer it over to a web-based model. News business expert Alan Mutter offered a more pointed counterargument: “It doesn’t take a certifiable Silicon Valley genius to see that no business can walk away from some 90% of its revenue base without imploding.”

Second, Google chief economist Hal Varian spoke at a Federal Trade Commission hearing about the economics of newspapers, advising newspapers that rather than charging for online content, they should be experimenting like crazy. (Varian’s summary and audio are at Google’s Public Policy Blog, and the full text, slides and Martin Langeveld’s summary are here at the Lab. Sync ’em up and you can pretty much recreate the presentation yourself.) After briefly outlining the status of newspaper circulation and its print and online advertising, Varian also suggested that newspapers make better use of the demographic information they have about their online readers. Over at GigaOM, Mathew Ingram seconded Varian’s comments on engagement, imploring newspapers to actually use the interactive tools that they already have at their sites.

Reading roundup: We’ll start with our now-weekly summary of iPad stuff: Apple announced last week that you can preorder iPads as of today, and they’ll be released April 3. That could be only the beginning — an exec with the semiconductor IP company ARM told ComputerWorld we could see 50 similar tablet devices out this year. Multimedia journalist Mark Luckie urged media outlets to develop iPad apps, and Mac and iPhone developer Matt Gemmell delved into the finer points of iPad app design. (It’s not “like an iPhone, only bigger,” he says.)

I have two long, thought-provoking pieces on journalism, both courtesy of the Columbia Journalism Review. First, Megan Garber (now with the Lab) has a sharp essay on the public’s growing fixation on authorship that’s led to so much mistrust in journalism — and how journalists helped bring that fixation on. It’s a long, deep-thinking piece, but it’s well worth reading all the way through Garber’s cogent argument. Her concluding suggestions for news orgs regarding authority and identity are particularly interesting, with nuggets like “Transparency may be the new objectivity; but we need to shift our definition of ‘transparency’: from ‘the revelation of potential biases,’ and toward ‘the revelation of the journalistic process.’”

Second, CJR has the text of Illinois professor Robert McChesney’s speech this week to the FTC, in which he makes the case for a government subsidy of news organizations. McChesney and The Nation’s John Nichols have made this case in several places with a new book, “The Death and Life of American Journalism,” on the shelves, but it’s helpful to have a comprehensive version of it in one spot online.

Finally, the Online Journalism Review’s Robert Niles has a simple tip for newspaper publishers looking to stave off their organizations’ decline: Learn to understand technology from the consumer’s perspective. That means, well, consuming technology. Niles provides a to-do list you can hand to your bosses to help get them started.
