
January 07 2011

15:20

Ruling or no, always ask permission before re-using images on the social web

If you’re to believe Agence France-Presse – and many journalists I’ve personally met – “regular people” don’t have the same copyright protections on the web as journalists do. This isn’t true and never has been – and I’m glad a court said so.

AFP tried to argue in court that by uploading his photos to Twitter/Twitpic, a professional photographer had given the agency permission to use and repurpose them. Last week, a court in New York’s Southern District declared what many of us already knew – putting photos on Twitpic doesn’t make them up for grabs.

When I tweeted about this, a couple of journalists told me the ruling didn’t protect Twitter users’ photos, just those of journalists. This is a pretty common assumption I hear around the web and in the newsrooms I’ve worked in, so I don’t feel too out of line singling out this tweet from Virginia journalist Jordan Fifer:

  Jordan Fifer (@JordanFifer), 30 Dec 2010: “@mjenkins News orgs have better case for ‘fair use’ of Twitter pics if it comes from a layperson with no financial gain from the pic”

He said the ruling only protected professional photographers, and that the Fair Use Doctrine protects news outlets that want to use Twitpics without permission. Neither claim is true, though the latter isn’t as cut-and-dried.

For one, the ruling said:

[b]y their express language, Twitter’s terms grant a license to use content only to Twitter and its partners. Similarly, Twitpic’s terms grant a license to use photographs only to Twitpic.com or affiliated sites. . . . the provision that Twitter ‘encourage[s] and permit[s] broad re-use of Content’ does not clearly confer a right on others to re-use copyrighted postings

While those terms may (and likely do) differ for other Twitter-related photo services, Twitpic’s terms state only Twitpic and its affiliates have a right to users’ photos:

…you retain all of your ownership rights in your Content. However, by submitting Content to Twitpic, you hereby grant Twitpic a worldwide, non-exclusive, royalty-free, sublicenseable and transferable license to use, reproduce, distribute, prepare derivative works of, display, and perform the Content in connection with the Service and Twitpic’s (and its successors’ and affiliates’) business…

AFP is not a business affiliated with Twitpic; it is a user, with only an end-user license. On this service, at least, all users own the copyright on their images. Other services – YFrog, for instance – do allow all users to use and repurpose work uploaded to their servers.

The terms do not, however, differentiate between the copyright of a professional photographer and that of a non-professional.

Secondly, Fair Use gives news outlets a lot of leeway on using user content, but it can only go so far. To review, this bit of copyright law contains four factors that will help determine if unauthorized use of copyrighted material is fair:

  1. The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
  2. The nature of the copyrighted work
  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole
  4. The effect of the use upon the potential market for, or value of, the copyrighted work
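
Purely as an illustration – not legal analysis, since courts weigh these factors qualitatively rather than by counting – here’s a minimal sketch of the test’s structure in Python. Every name and threshold below is my own assumption:

    # Illustrative sketch of the four statutory fair-use factors as a
    # checklist. Courts do NOT decide cases by counting; this just makes
    # the structure of the test explicit. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class UseInQuestion:
        commercial: bool        # Factor 1: commercial vs. nonprofit/educational
        work_is_creative: bool  # Factor 2: creative works get more protection
        portion_used: float     # Factor 3: fraction of the work used (0.0-1.0)
        harms_market: bool      # Factor 4: substitutes for a sale or license

    def factors_against_fair_use(use: UseInQuestion) -> int:
        """Count how many of the four factors cut against a fair-use defense."""
        against = 0
        if use.commercial:
            against += 1
        if use.work_is_creative:
            against += 1
        if use.portion_used >= 0.99:  # effectively the whole work
            against += 1
        if use.harms_market:
            against += 1
        return against

    # A for-profit news site republishing someone's entire Twitpic photo:
    photo_use = UseInQuestion(commercial=True, work_is_creative=True,
                              portion_used=1.0, harms_market=True)
    print(factors_against_fair_use(photo_use), "of 4 factors cut against fair use")

On those assumptions, a for-profit outlet republishing an entire photo trips all four factors, which is exactly the reading Chip Stewart offers below.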

When you use a photo belonging to someone else on a website, on TV or in print, you are using the entire image (not a portion), and you are usually using it for profit (if you are a for-profit news outlet). Chip Stewart, a journalism professor at TCU, dismisses the Fair Use argument for social media images, saying:

Under the four-part balancing test applied by courts in looking at fair use, I don’t see how any one favors the republisher:  The use is for-profit, the entire photo is used, it most likely is a significant element of the news story, and it harms the market for the original copyright owner by giving away for free what the owner could legally sell.

So what can we conclude from all this?

1. Assume the users of social media services own the copyright on the work they produce and upload there. In most cases, only those social media services and their partners have the right to use that content without permission.

2. …but users and outlets should check the terms of service on the photo services to see the specific copyright and use terms for each service. Professionals, news outlets and others with copyright concerns should take care to use a service that does not claim ownership of the images uploaded there.

3. Using these copyrighted photos without permission doesn’t fall under Fair Use.

4. No matter what the services’ terms may be, it’s always best to ask for permission before taking photos from the web and using them at your news organization.

5. If you do ask for permission and get it, make sure the user is the one who actually took the photo. As happened in the case described above (and as is frequently the case on Facebook), the person displaying the image is not always the one who owns the copyright.


August 16 2010

12:00

Truth-o-Meter, 2G: Andrew Lih wants to wikify fact-checking

Epic fact: We are living at the dawn of the Information Age. Less-epic fact: Our historical moment is engendering doubt. The more bits of information we have out there, and the more sources we have providing them, the more wary we need to be of their accuracy. So we’ve created a host of media platforms dedicated to fact-checking: We have PolitiFact over here, FactCheck over there, Meet the Facts over there, @TBDFactsMachine over there, Voice of San Diego’s Fact Check blog over there, NewsTrust’s crowdsourced Truthsquad over there (and, even farther afield, source verifiers like Sunlight’s new Poligraft platform)…each with a different scope of interest, and each with different methods and metrics of verification. (Compare, for example, PolitiFact’s Truth-o-Meter to FactCheck.org’s narrative assessments of veracity.) The efforts are admirable; they’re also, however, atomized.

“The problem, if you look at what’s being done right now, is often a lack of completeness,” says Andrew Lih, a visiting professor of new media at USC’s Annenberg School of Communication & Journalism. The disparate outlets have to be selective about the scope of their fact-checking; they simply don’t have the manpower to be comprehensive about verifying all the claims — political, economic, medical, sociological — pinging like pinballs around the Internet.

But what if the current fact-checking operations could be greater than the sum of their parts? What if there were a centralized spot where consumers of news could obtain — and offer — verification?

Enter WikiFactCheck, the new project that aims to do exactly what its name suggests: bring the sensibility — and the scope — of the wiki to the systemic challenges of fact-checking. The platform’s been in the works for about two years now, says Lih (who, in addition to creating the wiki, is a veteran Wikipedian and the author of The Wikipedia Revolution). He dreamed it up while working on WikiNews; though that project never reached the scope of its sister site — largely because its premise of discrete news narratives isn’t ideal for the wiki platform — Lih thought a news-focused wiki could succeed if it focused on the core unit of news: facts themselves. When Jay Rosen drew attention to the need for systematic fact-checking of news content — most notably, through his campaign to fact-check the infamously info-miscuous Sunday shows — it became even clearer, Lih told me: This could be a job for a wiki.

WikiFactCheck wants not only to crowdsource, but also to centralize, the fact-checking enterprise, aggregating other efforts and creating a framework so extensive that it can also attempt to be comprehensive. There’s a niche, Lih believes, for a fact-checking site that’s determinedly non-niche. Wikipedia, he points out, is ultimately “a great aggregator”; and much of WikiFactCheck’s value could similarly be, he says, to catalog the results of other fact-checking outfits “and just be a meta-site.” Think Rotten Tomatoes — simple, summative, unapologetically derivative — for truth-claims.

If the grandeur implicit in that proposition sounds familiar, it’s because the idea for WikiFactCheck is pretty much identical to the one that guided the development of Wikipedia: to become a centralized repository of information shaped by, and limited only by the commitment of, the crowd. A place where the veracity of information is arbitrated discursively — among people who are motivated by the desire for veracity itself.

Which is idealistic, yes — unicornslollipopsrainbows idealistic, even — but, then again, so is Wikipedia. “In 2000, before Wikipedia started, the idea that you would have an online encyclopedia that was updated within seconds of something happening was preposterous,” Lih points out. Today, though, not only do we take Wikipedia for granted; we become indignant in those rare cases when entries fail to offer us up-to-the-minute updates on our topics of interest. Thus, the premise of WikiFactCheck: What’s to say that Wikipedia contributors’ famous commitment — of time, of enthusiasm, of Shirkian surplus — can’t be applied to verifying information as well as aggregating it?

What such a platform would look like, once populated, remains to be seen; the beauty of a wiki being its flexibility, users will make of the site what they will, with the crowd determining which claims/episodes/topics deserve to be checked in the first place. Ideally, “an experienced community of folks who are used to cataloging and tracking these kinds of things” — seasoned Wikipedians — will guide that process, Lih says. As he imagines it, though, the ideal structure of the site would filter truth-claims by episode, or “module” — one episode of “Meet the Press,” say, or one political campaign ad. “I think that’s pretty much what you’d want: one page per media item,” Lih says. “Whether that item is one show or one ad, we’ll have to figure out.”
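
As a thought experiment, Lih’s “one page per media item” structure might be modeled along these lines. This is a hypothetical sketch – every class and field name is my own invention, not the project’s:

    # Hypothetical data model for the "one page per media item" idea: each
    # page collects the truth-claims made in a single show or ad, and each
    # claim accumulates citations to reliable sources. All class and field
    # names are my own invention, not the project's.
    from dataclasses import dataclass, field

    @dataclass
    class Citation:
        source: str  # e.g. "Census Bureau", "Congressional Record"
        url: str

    @dataclass
    class Claim:
        speaker: str
        text: str
        verdict: str = "unchecked"  # e.g. "supported", "contradicted"
        citations: list = field(default_factory=list)

    @dataclass
    class MediaItemPage:
        """One wiki page per media item: one show, or one campaign ad."""
        title: str
        claims: list = field(default_factory=list)

    page = MediaItemPage(title="Meet the Press, one episode")
    claim = Claim(speaker="Guest A", text="Unemployment fell last quarter.")
    claim.citations.append(Citation(source="Bureau of Labor Statistics",
                                    url="https://www.bls.gov/"))
    claim.verdict = "supported"
    page.claims.append(claim)
    print(f"{page.title}: {len(page.claims)} claim(s) tracked")

The design choice worth noticing is that the claim, not the article, is the atomic unit — which is precisely what Lih argues distinguishes this from WikiNews.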

Another thing to figure out will be how a wiki that will likely rely on publishing comprehensive documents — transcripts, articles, etc. — to verify their contents will dance around copyright issues. But “if there ever were a slam-dunk case for meeting all the attributes of the Fair Use Doctrine,” Lih says, “this is it.” Fact-checking is criticism and comment; it has an educational component (particularly if it operates under the auspices of USC Annenberg); and it doesn’t detract from content’s commercial value. In fact: “I can’t imagine another project that could be so strong in meeting the standards for fair use,” Lih says.

And what about the most common concern when it comes to informational wikis — that people with less-than-noble agendas will try to game the system and codify baseless versions of the truth? “In the Wikipedia universe, what has shaken out is that a lot of those folks who are not interested in the truth wind up going somewhere else,” Lih points out. (See: Conservapedia.) “They find that the community that is concerned with neutrality and with getting verifiable information into Wikipedia is going to dominate.” Majority rules — in a good way.

At the same time, though, “I welcome die-hard Fox viewers,” Lih says. “I welcome people who think Accuracy in Media is the last word. Because if you can cite from a reliable source — from a congressional record, from the Census Bureau, from the Geological Survey, from CIA Factbook, from something — then by all means, I don’t really care what your political stripes are. Because the facts should win out in the end.”

Photo of Andrew Lih by Kat Walsh, used under a GNU Free Documentation License.

July 30 2010

14:15

This Week in Review: WikiLeaks’ new journalism order, a paywall’s purpose, and a future for Flipboard

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

WikiLeaks, data journalism and radical transparency: I’ll be covering two weeks in this review because of the Lab’s time off last week, but there really was only one story this week: WikiLeaks’ release of The War Logs, a set of 90,000 documents on the war in Afghanistan. There are about 32 angles to this story and I’ll try to hit most of them, but if you’re pressed for time, the essential reads on the situation are Steve Myers, C.W. Anderson, Clint Hendler, and Janine Wedel and Linda Keenan.

WikiLeaks released the documents on its site on Sunday, cooperating with three news organizations — The New York Times, The Guardian, and Der Spiegel — to allow them to produce special reports on the documents as they were released. The Nation’s Greg Mitchell ably rounded up commentary on the documents’ political implications (one tidbit from the documents for newsies: evidence of the U.S. military paying Afghan journalists to write favorable stories), as the White House slammed the leaks and the Times for running them, and the Times defended its decision in the press and to its readers.

The comparison that immediately came to many people’s minds was the publication of the Pentagon Papers on the Vietnam War in 1971, and two Washington Post articles examined the connection. (The Wall Street Journal took a look at both cases’ First Amendment angles, too.) But several people, most notably ProPublica’s Richard Tofel and Slate’s Fred Kaplan, quickly countered that the War Logs don’t come close to the Pentagon Papers’ historical impact. They led a collective yawn from numerous political observers after the documents’ publication, with ho-hums coming from Foreign Policy, Mother Jones, the Washington Post, and even the op-ed page of the Times itself. Slate media critic Jack Shafer suggested ways WikiLeaks could have planned its leak better to avoid such ennui.

But plenty of other folks found a lot that was interesting about the entire situation. (That, of course, is why I’m writing about it.) The Columbia Journalism Review’s Joel Meares argued that the military pundits dismissing the War Logs as old news are forgetting that this information is still putting an often-forgotten war back squarely in the public’s consciousness. But the most fascinating angle of this story to many of us future-of-news nerds was that this leak represents the entry of an entirely new kind of editorial process into mainstream news. That’s what The Atlantic’s Alexis Madrigal sensed early on, and several others sussed out as the week moved along. The Times’ David Carr called WikiLeaks’ quasi-publisher role both a new kind of hybrid journalism and an affirmation of the need for traditional reporting to provide context. Poynter’s Steve Myers made some astute observations about this new kind of journalism, including the rise of the source advocate and WikiLeaks’ trading information for credibility. NYU j-prof Jay Rosen noted that WikiLeaks is the first “stateless news organization,” able to shed light on the secrets of the powerful because of freedom provided not by law, but by the web.

Both John McQuaid and Slate’s Anne Applebaum emphasized the need for data to be, as McQuaid put it, “marshaled in service to a story, an argument,” with McQuaid citing that as reason for excitement about journalism and Applebaum calling it a case for traditional reporting. Here at the Lab, CUNY j-prof C.W. Anderson put a lot of this discussion into perspective with two perceptive posts on WikiLeaks as the coming-out party for data journalism. He described its value well: “In these recent stories, it’s not the presence of something new, but the ability to tease a pattern out of a lot of little things we already know that’s the big deal.”

As for WikiLeaks itself, the Columbia Journalism Review’s Clint Hendler provided a fascinating account of how its scoop ended up in three of the world’s major newspapers, including differences in WikiLeaks’ and the papers’ characterization of WikiLeaks’ involvement, which might help explain its public post-publication falling-out with the Times. The Times profiled WikiLeaks and its enigmatic founder, Julian Assange, and several others trained their criticism on WikiLeaks itself — specifically, on the group’s insistence on radical transparency from others but extreme secrecy from itself. The Washington Post’s Howard Kurtz said WikiLeaks is “a global power unto itself,” not subject to any checks and balances, and former military reporter Jamie McIntyre called WikiLeaks “anti-privacy terrorists.”

Several others were skeptical of Assange’s motives and secrecy, and Slate’s Farhad Manjoo wondered how we could square public trust with such a commitment to anonymity. In a smart Huffington Post analysis of that issue, Janine Wedel and Linda Keenan presented this new type of news organization as a natural consequence of the new cultural architecture (the “adhocracy,” as they call it) of the web: “These technologies lend themselves to new forms of power and influence that are neither bureaucratic nor centralized in traditional ways, nor are they generally responsive to traditional means of accountability.”

Keeping readers out with a paywall: The Times and Sunday Times of London put up their online paywall earlier this month, the first of Rupert Murdoch’s newspapers to set off on his paid-content mission (though some other properties, like The Wall Street Journal, have long charged for online access). Last week, we got some preliminary figures indicating how life behind the wall is going so far: Former Times media reporter Dan Sabbagh said that 150,000 of the Times’ online readers (12 percent of its pre-wall visitors) had registered for free trials during the paywall’s first two weeks, with 15,000 signing on as paying subscribers and 12,500 subscribing to the iPad app. PaidContent also noted that the Times’ overall web traffic is down about 67 percent, adding that the Times will probably tout these types of numbers as a success.

The Guardian did its own math and found that the Times’ online readership is actually down about 90 percent — exactly in line with what the paper’s leaders and industry analysts were expecting. Everyone noted that this is exactly what Murdoch and the Times wanted out of their paywall — to cut down on drive-by readers and wring more revenue out of the core of loyal ones. GigaOM’s Mathew Ingram explained that rationale well, then ripped it apart, calling it “fundamentally a resignation from the open web” because it keeps readers from sharing (or marketing) it with others. SEOmoz’s Tom Critchlow looked at the Times’ paywall interface and gave it a tepid review.
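
The competing percentages are easier to follow with the arithmetic spelled out. Here’s a back-of-the-envelope sketch; the pre-wall visitor total is inferred from the figures quoted above, not reported by the Times:

    # Back-of-the-envelope math on the paywall figures quoted above. The
    # pre-wall visitor count is inferred: 150,000 registrants = 12 percent.
    registrants = 150_000
    pre_wall_visitors = registrants / 0.12  # ~1.25 million
    paying = 15_000 + 12_500                # subscribers + iPad app

    print(f"Inferred pre-wall visitors: {pre_wall_visitors:,.0f}")
    print(f"Paying share of pre-wall audience: {paying / pre_wall_visitors:.1%}")

    # The two "down" figures measure different things: PaidContent's ~67%
    # is overall web traffic; The Guardian's ~90% is readership of content
    # behind the wall. Both can be true at once.
    print(f"Readers left under the 90% estimate: {pre_wall_visitors * 0.10:,.0f}")

On those inferred numbers, about 2 percent of the old audience is paying, which is why calling the wall a success depends entirely on the revenue math rather than the traffic chart.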

Meanwhile, another British newspaper that charges for online access, the Financial Times, is boasting strong growth in online revenue. The FT’s CEO, John Ridding, credited the paper’s metered paid-content system and offered a moral argument for paid access online, drawing on Time founder Henry Luce’s idea that an exclusively advertising-reliant model weakens the bond between a publication and its readers.

Flipboard and the future of mobile media: In just four months, we’ve already seen many attention-grabbing iPad apps, but few have gotten techies’ hearts racing quite like Flipboard, which was launched last week amid an ocean of hype. As Mashable explained, Flipboard combines social media and news sources of the user’s choosing to create what’s essentially a socially edited magazine for the iPad. The app got rave reviews from tech titans like Robert Scoble and ReadWriteWeb, which helped build up enough demand that it spent most of its first few post-release days crashed from being over capacity.

Jen McFadden marveled at Flipboard’s potential for mobile advertising, given its ability to merge the rich advertising experience of the iPad with the targeted advertising possibilities through social media, though Martin Belam wondered whether the app might end up being “yet another layer of disintermediation that took away some of my abilities to understand how and when my content was being used, or to monetise my work.” Tech pioneer Dave Winer saw Flipboard as one half of a brilliant innovation for mobile media and challenged Flipboard to encourage developers to create the other half.

At the tech blog Gizmodo, Joel Johnson broke in to ask a pertinent question: Is Flipboard legal? The app scrapes content directly from other sites, rather than through RSS, like the Pulse Reader. Flipboard’s defense is that it only offers previews (if you want to read the whole thing, you have to click on “Read on Web”), but Johnson delved into some of the less black-and-white scenarios and legal issues, too. (Flipboard, for example, takes full images, and though it is free for now, its executives plan to sell their own ads around the content under revenue-sharing agreements.) Stowe Boyd took those questions a step further and looked at possible challenges down the road from social media providers like Facebook.
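
The RSS-versus-scraping distinction at the heart of Johnson’s question is concrete enough to sketch. This illustrative comparison assumes the common feedparser, requests, and beautifulsoup4 packages, with placeholder URLs; it is not Flipboard’s actual code:

    # The distinction Johnson draws, in code terms: a reader like Pulse
    # consumes what the publisher chose to syndicate; a scraper takes
    # whatever is on the page. URLs are placeholders.
    import feedparser                  # pip install feedparser
    import requests                    # pip install requests
    from bs4 import BeautifulSoup      # pip install beautifulsoup4

    # The RSS route: the publisher decides what goes into the feed
    # (often just a headline and a summary).
    feed = feedparser.parse("https://example.com/feed.xml")
    for entry in feed.entries[:3]:
        print(entry.title, entry.link)

    # The scraping route: fetch the full page and extract content the
    # publisher never offered for syndication, full images included.
    html = requests.get("https://example.com/article").text
    soup = BeautifulSoup(html, "html.parser")
    images = [img.get("src") for img in soup.find_all("img")]
    print(f"Scraped {len(images)} image(s) the feed may never have offered")
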

A new perspective on content farms: Few people had heard of the term “content farms” about a year ago, but by now there are few issues that get blood boiling in future-of-journalism circles quite like that one. PBS MediaShift’s eight-part series on content farms, published starting last week, is an ideal resource to catch you up on what those companies are, why people are so worked up about them, and what they might mean for journalism. (MediaShift defines “content farm” as a company that produces online content on a massive scale; I, like Jay Rosen, would define it more narrowly, based on algorithm- and revenue-driven editing.)

The series includes an overview of some of the major players on the online content scene, pictures of what writing for and training at a content farm is like, and two posts on the world of large-scale hyperlocal news. It also features an interesting defense of content farms by Dorian Benkoil, who argues that large-scale online content creators are merely disrupting an inefficient, expensive industry (traditional media) that was ripe for a kick in the pants.

Demand Media’s Jeremy Reed responded to the series with a note to the company’s writers that “You are not a nameless, faceless, soul-less group of people on a ‘farm.’ We are not a robotic organization that’s only concerned about numbers and data. We are a media company. We work together to tell stories,” and Yahoo Media’s Jimmy Pitaro defended the algorithm-as-editor model in an interview with Forbes. Outspoken content-farm critic Jason Fry softened his views, too, urging news organizations to learn from their algorithm-driven approach and let their audiences play a greater role in determining their coverage.

Reading roundup: A few developments and ideas to take a look at before the weekend:

— We’ve written about the FTC’s upcoming report on journalism and public policy earlier this summer, and Google added its own comments to the public record last week, urging the FTC to move away from “protectionist barriers.” Google-watcher Jeff Jarvis gave the statement a hearty amen, and The Boston Globe’s Jeff Jacoby chimed in against a government subsidy for journalism.

— Former equity analyst Henry Blodget celebrated The Business Insider’s third birthday with a very pessimistic forecast of The New York Times’ future, and, by extension, the traditional media’s as well. Meanwhile, Judy Sims targeted a failure to focus on ROI as a cause of newspapers’ demise.

— The Columbia Journalism Review devoted a feature to the rise of private news, in which news organizations are devoted to a niche topic for an intentionally limited audience.

— Finally, a post to either get you thinking or, judging from the comments, foaming at the mouth: Penn professor Eric Clemons argues on TechCrunch that advertising cannot be our savior online: “Online advertising cannot deliver all that is asked of it.  It is going to be smaller, not larger, than it is today.  It cannot support all the applications and all the content we want on the internet. And don’t worry. There are other things that can be done that will work well.”


June 08 2010

13:30

Why link out? Four journalistic purposes of the noble hyperlink

[To link or not to link? It's about as ancient as questions get in online journalism; Nick Carr's links-as-distraction argument is only the latest incarnation. Yesterday, Jason Fry tried to contextualize the linking debate around credibility, readability, and connectivity. Here, Jonathan Stray tries out his own, more pragmatically focused four-part division. Tomorrow, we'll have the result of Jonathan's analysis of how major news organizations link out and talk about linking out. —Josh]

You don’t need links for great journalism — the profession got along fine for hundreds of years without them. And yet most news outlets have at least a website, which means that links are now (in theory, at least) available to the majority of working journalists. What can links give to online journalism? I see four main answers.

Links are good for storytelling.

Links give journalists a way to tell complex stories concisely.

In print, readers can’t click elsewhere for background. They can’t look up an unfamiliar term or check another source. That means print stories must be self-contained, which leads to conventions such as context paragraphs and mini-definitions (“Goldman Sachs, the embattled American investment bank.”) The entire world of the story has to be packed into one linear narrative.

This verbosity doesn’t translate well to digital, and arguments rage over the viability of “long form” journalism online. Most web writing guides suggest that online writing needs to be shorter, sharper, and snappier than print, while others argue that good long form work still kills in any medium.

Links can sidestep this debate by seamlessly offering context and depth. The journalist can break a complex story into a non-linear narrative, with links to important sub-stories and background. Readers who are already familiar with certain material, or simply not interested, can skip lightly over the story. Readers who want more can dive deeper at any point. That ability can open up new modes of storytelling unavailable in a linear, start-to-finish medium.

Links keep the audience informed.

Professional journalists are paid to know what is going on in their beat. Writing stories isn’t the only way they can pass this knowledge to their audience.

Although discussions of journalism usually center around original reporting, working journalists have always depended heavily on the reporting of others. Some newsrooms feel that verifying stories is part of the value they add, and require reporters to “call and confirm” before they re-report a fact. But lots of newsrooms simply rewrite copy without adding anything.

Rewriting is required for print, where copyright prevents direct use of someone else’s words. Online, no such waste is necessary: A link is a magnificently efficient way for a journalist to pass a good story to the audience. Picking and choosing the best content from other places has become fashionably known as “curation,” but it’s a core part of what journalists have always done.

Some publishers are reluctant to “send readers away” to other work. But readers will always prefer a comprehensive source, and as the quantity of available information explodes, the relative value of filtering it increases.

Links are a currency of collaboration.

When journalists use links to “pay” people for their useful contributions to a story, they encourage and coordinate the production of journalism.

Anyone who’s seen their traffic spike from a mention on a high-profile site knows that links can have immediate monetary impact. But links also have subtler long-term value, both tangible (search rankings) and intangible (reputation and status). One way or another, a link is generally valuable to the receiver.

A complex, ongoing, non-linear story doesn’t have to be told by a single organization. In line with the theory of comparative advantage, it probably shouldn’t be. Of course journalists can (and should) collaborate formally. But links are an irresistible glue that can coordinate journalistic production across newsrooms and bloggers alike.

This is an economy that is interwoven with the cash economy in complex ways. It may not make business sense to pay another news organization for publishing a crucial sub-story or a useful tip, but a link gives credit where credit is due — and traffic. Along this line, I wonder if the BBC’s policy of not always linking to users who supply content is misguided.

Links enable transparency.

In theory, every statement in news writing needs to be attributed. “According to documents” or “as reported by” may have been as far as print could go, but that’s not good enough when the sources are online.

I can’t see any reason why readers shouldn’t demand, and journalists shouldn’t supply, links to all online resources used in writing a story. Government documents and corporate financial disclosures are increasingly online, but too rarely linked. There are some issues with links to pages behind paywalls and within academic journals, but nothing that seems insurmountable.

Opinion and analysis pieces can also benefit from transparency. It’s unfair — and suspect — to critique someone’s position without linking to it.

Of course, reporters must also rely on sources that don’t have a URL, such as people and paper documents. But even here I would like to see more links, for transparency and context: If the journalist conducted a phone interview, can we listen to the recording? If they went to city hall and saw the records, can they scan them for us? There is already infrastructure for journalists who want to do this. A link is the simplest, most comprehensive, and most transparent method of attribution.
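
To picture what “links to all online resources used in writing a story” might look like in practice, here’s one hypothetical sketch of a structured source list rendered as links. The field names and conventions are mine, not any newsroom’s actual system:

    # A minimal sketch of per-story source attribution: keep a structured
    # list of everything consulted, then render it as links. The field
    # names and conventions are hypothetical.
    sources = [
        {"kind": "document", "label": "City budget FY2010 (PDF)",
         "url": "https://example.gov/budget-2010.pdf"},
        {"kind": "interview", "label": "Phone interview (posted recording)",
         "url": "https://example.com/audio/interview.mp3"},
        {"kind": "article", "label": "Earlier coverage of the story",
         "url": "https://example.com/2009/earlier-coverage"},
    ]

    def render_source_list(sources):
        """Render a story's sources as an HTML list of links."""
        items = "\n".join(
            f'  <li><a href="{s["url"]}">{s["label"]}</a> ({s["kind"]})</li>'
            for s in sources)
        return f"<ul>\n{items}\n</ul>"

    print(render_source_list(sources))
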

Photo by Wendell used under a Creative Commons license.


November 11 2009

21:05

Does Gawker's Publication of McSteamy Sex Tape Constitute Fair Use?

It probably seemed like a fun idea at the time.

Last year, Eric Dane, known as "McSteamy" from the show "Grey's Anatomy," his wife Rebecca Gayheart, and former beauty queen Kari Ann Peniche decided to make a home movie. Yes, that type of home movie. The threesome recorded themselves nakedly fumbling around in bed, slurring words, and splashing in a hot tub.


Given Dane's popularity on the show, it was almost a foregone conclusion that the tape would somehow make its way onto the Internet, and Gawker was happy to make it happen. It published the video in August, and has since racked up over 3.25 million page views.

Before posting the video, Gawker whittled it down from 12 minutes to just under four and added some special effects to cover McSteamy's, well, steamy. (Its sister site, Fleshbot, used an uncensored version.) The tape, as edited by Gawker, does not actually show the threesome having sex -- it's not a porno. In fact, if the video didn't show Gayheart and Peniche without their shirts and had the swear words bleeped out, it might be suitable for daytime TV.


Hollywood sex tapes making their way to the Internet are nothing new. It has happened to Paris Hilton, Tonya Harding, and, of course, Pamela Anderson and Tommy Lee.

While lawsuits almost always follow leaked sex tapes, few cases ever go to trial. (Paris Hilton's suit, for example, ended in a settlement that reportedly made the heiress $400,000.) Dane and Gayheart's suit, which was filed three weeks ago in a California federal court, is surprisingly not about invasion of privacy or defamation of character, as is common when a sex tape goes public. Instead, the couple claim that Gawker's publication of the video violates their copyright. This makes it a unique situation.

I recently described for a friend what the video did and didn't show, and explained that as long as Gawker didn't help steal the tape, it does not matter how they got it. After my 15-minute soliloquy, she asked, "So, who will win?"

"I give Gawker a three-point spread," I said.

Here's how the case of McSteamy v. Gawker breaks down, along with a look at the larger legal issues at play.

Does a Sex Tape Fall Under Fair Use?

In 1976, Congress enacted the Copyright Act, which states that a copyright holder has the exclusive right to distribute or reproduce copyrighted material. However, the law includes one big exception, which is called "fair use." Section 107 of the Copyright Act states that a person or business can publish portions of copyrighted material so long as it is for the purposes of criticism, comment, or news reporting.

Gaby Darbyshire, a barrister and the vice president for Gawker Media, told me that the company published the video because it was "newsworthy." But simply labeling something as news doesn't automatically constitute "fair use." In order to determine whether Gawker deserves the law's exception, a court will look at four factors listed in Section 107.

First, a court will look at whether Gawker used the video for commercial purposes. Obviously, Gawker is a for-profit business, but that alone doesn't prevent it from publishing the video.

Instead, a court will consider the purpose and character of Gawker's use of the video. The question here is whether the website posted Dane and Gayheart's video for news or commercial purposes. If Gawker edited the tape to suit a newsworthy purpose, the website would have given the video a meaning different than that of the original, thus making "fair use" appropriate.

Here's the argument that Gawker will likely make: Dane, Gayheart, and Peniche made the tape because they wanted to record sexual acts. According to Darbyshire, however, Gawker posted the tape because they found some news value in the recording. Darbyshire said that seeing "Dane, his wife, and a former beauty queen who went on a reality show to be treated for sex addiction, and reportedly is a Hollywood madam," together is newsworthy. Thus, Gawker will claim that its use of the video added a news element to a home movie.

David Ludwig, an intellectual property attorney for the law firm Dunlap, Grubb & Weaver, agrees with Darbyshire. "Newsworthiness does not limit itself to hard news, it can involve celebrities as well," he said.

As a result, you can probably score a point for Gawker on this issue.

Second, a court will examine whether Dane's tape was published or unpublished at the time of Gawker's use. In terms of "fair use," the law states that "the fact that a work is unpublished shall not itself bar a finding of fair use." However, "scooping" a copyright holder on their work does make the "fair use" exception less likely. In a 1985 decision, the Supreme Court stated that a copyright holder has the "right to control the first public appearance" of copyrighted material. Gawker's post was the first time the public had ever seen the video, meaning that Gawker does not have much of an argument here. Call it McSteamy 1, Gawker 1.

Third, a court will look at the "amount and substantiality" of Gawker's posting in relation to the video as a whole. Gawker posted just under four minutes of the 12-minute tape. As far as the law is concerned, the posting's length may critically compromise Gawker's claim to "fair use."

In 1987, the Second Circuit Court of Appeals held, for a variety of reasons, that appropriating one-third of 17 letters written by author J.D. Salinger did not constitute "fair use" because it was more than "necessary to disseminate the facts." Ludwig suggested that Gawker could have legally posted a screen-shot or a snippet of the video to prove that their story was true. Instead, they excerpted a third of the video. Dane 2, Gawker 1.

Fourth, a court will ask whether Gawker's publication of the video supplanted the need for an individual to purchase a legitimate copy of the couple's tape. This depends on what material Gawker left on the cutting room floor.

If the whole video consists only of the threesome hanging around a house naked, then perhaps, after viewing the Gawker excerpt, no one would be interested in purchasing the full version. Thus, "fair use" would be off the table. "No one is going to buy a work if it's freely available on the Internet," Ludwig said.

However, if Gawker edited out some really juicy material -- sex scenes, for example -- then people could still be interested in a bona fide copy of the recording. Though Darbyshire declined to offer any specifics, you can probably assume the McSteamy threesome gets more interesting than what is currently available on Gawker. Dane 2, Gawker 2.

Fair Use Versus Infringement

To recap, Dane and Gayheart appear to have a valid claim against Gawker for copyright infringement. However, Gawker has a formidable defense by way of the "fair use" exception. It's important to note that the four factors outlined above are not examined in isolation of one another. Instead, courts try to balance them against each other.

In the end, if this case goes to trial, the outcome will likely depend on what Gawker chose to cut from the video. It's a strange reality that, in the case of sex tapes, what a news organization doesn't publish is sometimes more important that what it does.

Rob Arcamona is a second-year law student at The George Washington University Law School. Prior to attending law school, Rob worked at the Student Press Law Center and also helped establish ComRadio, the Pennsylvania State University's student-run Internet-based radio station. He writes the Protecting the Source blog.

