
March 29 2011

20:00

What’s Project Thunderdome, you ask? Inside Jim Brady’s new job at Journal Register Company

So Jim Brady, formerly of AOL, washingtonpost.com, and TBD, is now of the Journal Register Company. That’s big news — and not only because it had been an open question where Brady would land after he left TBD in November. As John Paton, JRC’s CEO, put it in a release announcing the news: “The debate of bloggers vs. journalists or citizen journalists vs. professionals is now over. The new business models of news demand we understand and incorporate both.”

Brady “will immediately be responsible for Project Thunderdome,” the release notes. That initiative, besides obviously being one of the most delightfully named projects in the news innovation world, is “Journal Register Company’s plan for engaging audience and creating content across all platforms and geographies.”

But what is Thunderdome, exactly? I talked to Brady and Jon Cooper, JRC’s vice president for content, to learn more.

Essentially, they told me, Thunderdome is an attempt to turn Journal Register’s current collective of media properties — community newspapers joined at the top by a corporate brand — into an interwoven network. While the project is ultimately about content (both improving its quality and expanding it), it’s also about production practices: It’s trying, in particular, to create uniform standards across the organization when it comes to content management. Eventually, Thunderdome will mean a redesign for JRC’s digital platform as well as its print platform, giving JRC’s papers — in print and especially (per JRC’s “digital first, print last” ethos) online — a standard look and feel.

Essentially, Journal Register is “building a news system,” Brady puts it, “as opposed to trying to retrofit one that came out of a different time.”

On the one hand, Thunderdome is about systematization and centralization — and the production-side efficiencies that come from them. “Right now, we operate anywhere from 6 to 8 [CMSes], depending on who you talk to,” Cooper notes — and the reason that number varies is that “we have places that don’t actually have a CMS.”

Those CMS-less properties — gird yourselves, techies — use a Windows folder structure to manage their content. Imagine erasing a coworker’s content, Cooper notes, “because you happen to name your story ‘Fire,’ and I had named my story ‘Fire,’ and I copy over yours.”

So that’s one side of it. But Thunderdome is also about the categorization of content. Much of the efficiency JRC hopes to gain from the new system will come from the bifurcation of local content and what it calls “common” content — in other words, from distinguishing between information that requires feet-on-the-street reporting and information that can be provided by wire services or other more centralized sources. It’s the classic distinction between wire content and original content, taken to the next level. Take, for example, content like stocks, weather, comics — things that a journalist might not define as journalism in the strictest sense, but which readers want as part of their news experience. Journal Register, across its 18 daily papers, produces over 50,000 of those pages a year, Cooper told me, creating different products for different locations. Sometimes that’s necessary, of course (the weather in Connecticut being different from the weather in Michigan); often, though, it’s simply redundant — a waste of time and resources.

Thunderdome aims to establish a 40/60 — or even 50/50 — ratio of local content to common content. “JRC is a fairly large organization,” Cooper notes, “so we have a decent amount of power that we put behind a project like this.” As Brady puts it, the system will allow the papers “to spend their actual staff time covering local news and embedding themselves in the local community — which they have to do to make themselves successful.”

A big part of Brady’s job at JRC will be to figure out the specifics of that kind of production-side streamlining — determining content partners, for example (JRC yesterday announced a financial-news-content deal with The Street), and staffing the Thunderdome effort with vertical content specialists. Another part of it will be figuring out the audience engagement side of the equation — putting to use the knowledge Brady gained at TBD. “That’s key to us,” Cooper says — “key to Thunderdome, key to our brand expansion, key to our current brands.” Brady’s work won’t just be about “providing leadership to our journalists,” he notes; it’ll also be about “working with our communities — our physical, geographic communities, but also our digital communities.”

Which all sounds eminently reasonable and, well, not Thunderous. So, then: What’s with that name? JRC’s CEO, John Paton, named the project, Cooper told me. When he’d visited The Washington Post, someone had talked about the paper’s digital center as “the Thunderdome” — and the name, both epic and tongue-in-cheek at once, came into play as a working title as Journal Register laid out its (also epically named) Ben Franklin Project. The project came out of the basic realization, Cooper says, that “we can’t wait for a unified CMS; we can’t wait for a unified technology to be in place. We have to make it happen sooner.”

It’s that kind of thinking that attracted Brady to Journal Register in the first place. When you’re looking for a new job, he notes, you’re looking at both “the size of the opportunity and the size of the challenge.” For Thunderdome, the size of both is “large.” “Folks have done production hubs; folks have done content bureaus or content sharing,” Cooper says. “But what we’re really looking to do is to empower local journalism. And part of that is to remove the roadblocks to small operations.”

Image by rachelbinx used under a Creative Commons license.

March 08 2011

15:00

Matt Waite: To build a digital future for news, developers must be able to hack at the core of old systems

Editor’s Note: Matt Waite was until recently news technologist at the St. Petersburg Times, where — among many other projects — he was the primary developer behind PolitiFact, which won a Pulitzer Prize. He’s also been a leader of the movement to combine news and code in new and interesting ways.

Matt is now teaching journalism at the University of Nebraska and working with news orgs under the shingle Hot Type Consulting. Here, he talks about his disappointment with the pace and breadth of the evolution of coding and news apps in contemporary journalism.

Pay attention to the noise, and you start to hear signal. There’s an awakening going on — quiet and slow, but it’s there. There are voices talking about data and apps and journalism becoming more than just writers writing and editors editing. There are labs starting and partnerships forming. There was a whole conference late last month — NICAR in Raleigh — that more than ever was a creative collision of words and nerds.

It’s tempting to say that a real critical mass is afoot, marrying journalists and technologists and finally getting us to this “Future of Journalism” thing we keep hearing about. I’ve recently had a job change that’s given me some time to reflect on this movement of journalism+programming.

In a word, I’m disappointed.

Not in what’s been done. There’s some amazing work going on inside newsrooms and out, work that every news publisher and manager should be looking at with jealous, thieving eyes. Things like the Los Angeles Times crime app. It’s amazing. The Chicago Tribune elections app. ProPublica’s Docs app. The list goes on and on.

I’m disappointed in what hasn’t been done. Where we, from inside news organizations, haven’t gone. Where we haven’t been allowed to go.

To understand my disappointment, you have to understand, at a very low level, how news gets published and the minds of the people who are actually responsible for the newspaper arriving on your doorstep.

Evolution, but only on the edges

To most journalists, once copy gets through the editors, through the copy desk, and onto a page, there comes a point where magic happens and poof — the paper appears on the doorstep. But if you’ve seen it, you know it’s not magic: It’s a byzantine series of steps, through exceedingly expensive software and equipment, run in a sequence every night in a manner that can be timed with a stopwatch. Any glitch, hiccup, delay, or bump in the process is a four-alarm emergency, because at the other end of this dance is an army of trucks waiting for bundles of paper. In short, it’s got to work exactly the same way every night or piles of cash get burned by people standing around waiting.

Experimentation with the process isn’t just uncomfortable — it’s dangerous and expensive and threatens the very production of the product. In other words, it doesn’t happen unless it’s absolutely necessary and can demonstrably cut costs.

Knowing that, it’s entirely understandable why many of the people who manage newspapers — who have gone their whole professional lives with this rhythmic production model consciously and subconsciously in their minds — would view the world through that prism. Most newspapers rely on gigantic, expensive, monolithic content management systems that function very much like the production systems that print the paper every day. Inputs go in, magic happens, a website comes out. It works the same way every day or there’s hell to pay.

And around that rhythmic mode of operation, we’ve created comfortable workflows that feed it. And because it’s comfortable, there’s an amazing amount of inertia around all of it. Change is scary. The consequences down the line could be bad. We should go slow.

Now, I’m not going to tell you that experimentation is forbidden in the web space, because it’s not. But that experimentation takes place almost entirely outside the main content management system. Story here, news app there. A blog? A separate software stack. Photo galleries? Made elsewhere, embedded into a CMS page (maybe). Graphics? Same. Got something more, like a whole high school sports stats and scores system? Separate site completely, but stories stay in the CMS. You don’t get them.

In short, experiment all you want, so long as you never touch the core product.

And that is the source of my disappointment. All this talk about a digital future, about moving journalism onto the web, about innovation and saving journalism is just talk until developers are allowed to hack at the very core of the whole product. To argue otherwise is to argue that the story form, largely unchanged from print, is perfect and to change it is unnecessary. Hogwash.

The evolution of the story form

Now, I’m not saying “Trash the story form! Down with it all!” The story form has been honed over millennia. We’ve been telling stories since we invented language. A story is a very efficient means to get information from one human to another. But to believe that a story has to be a headline, byline, body copy, a publication date, maybe some tags, and maybe a photo — because that’s what some vendor’s one-size-fits-all content management system tells us is all we get — is ludicrous. It’s a dangerous blind spot just waiting to be exploited by competitors.

I believe that all stories are not the same, and that each type of story we do as journalists has opportunities to augment the work with data, structure, and context. There are opportunities to alter how a story fits into place and time. To change the atomic structure of what we do as journalists.

Imagine a crime story that had each location in the story stored as data, providing readers with maps that show not just where the crime happened, but crime rates in those areas over time and recent similar crimes — automatically generated for every crime story that gets written. A crime story that automatically grabs the arrest report or jail record for the accused and pulls it up, automatically following that arrestee and updating the mugshot with their jail status, court status, or adjudication without the reporter having to do anything. Then step back to a page that shows all crime stories and all crime data in your neighborhood or your city: the complete integration of oceans of crime data with the work of journalists, two things that go on every day without any real connection to each other. Rely on the journalists to tell the story, and rely on the data to connect it all together in ways that users will find compelling, interesting, and educational.

Now take that same concept and apply it to politics. Or sports. Or restaurant reviews. Any section of the paper. Obits, wedding announcements, you name it.

Can your CMS do that? Of course it can’t. The amount of customization, the amount of experimentation, the amount of journalism that would have to go on to make that work is impossible for a vendor selling a product to do. But it’s precisely the kind of experimentation we need to be doing.
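
To make that concrete, here’s a minimal sketch, in plain Python, of what a crime story might look like as structured data rather than just text. Every name and field is hypothetical: an illustration of the idea, not any real CMS’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Optional

@dataclass
class Location:
    address: str
    latitude: float
    longitude: float

@dataclass
class ArrestRecord:
    arrestee: str
    booking_id: str                    # key for re-polling public jail records
    mugshot_url: Optional[str] = None
    court_status: str = "pending"

@dataclass
class CrimeStory:
    # The familiar atoms a vendor CMS gives you...
    headline: str
    byline: str
    body: str
    published: datetime
    # ...plus the structure it doesn't: locations and arrests as data,
    # so maps, crime-rate context, and similar-crime lists can be
    # generated automatically for every story.
    crime_type: str = ""
    locations: list[Location] = field(default_factory=list)
    arrests: list[ArrestRecord] = field(default_factory=list)

def refresh_arrests(story: CrimeStory,
                    fetch_record: Callable[[str], ArrestRecord]) -> None:
    """Re-poll public records so the story keeps updating after publication.
    `fetch_record` stands in for a hypothetical county-jail lookup."""
    for arrest in story.arrests:
        latest = fetch_record(arrest.booking_id)
        arrest.court_status = latest.court_status
        arrest.mugshot_url = latest.mugshot_url
```

Once locations and booking IDs live as data instead of prose, the automatic maps and status updates described above stop being exotic features and become simple queries.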

Building from the ground up

The prevailing notion in newsrooms, whether stated explicitly or just subconsciously believed, is this print-production mindset. Stories, for the most part, function as they do in print — a snapshot in time, standing alone, unalterable after it’s stamped onto a medium and pushed into the world.

What I’ve never seen is the complete counter-argument to that mindset. The alpha to its omega. Here’s what I think that looks like:

Instead of a single monolithic system, where a baseball game story is the same as a triple murder story, general interest news websites should be a confederation of custom content management systems that handle stories of a specific type. Each system has its own features, pulling data, links, tweets and anything else that can shed light on the topic. Humans + computers. Automated aggregates where they make sense, human judgment where it’s needed. The home page is merely a master aggregation of this confederation.

Each area of the site can evolve on its own, given changes in available data, technology, or staff. It’s the complete destruction and rebuilding of every piece of the workflow. Everyone’s job would change when it came to producing the news.
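
Here’s a minimal sketch of that confederation idea in Python; the feed contract and the system names are hypothetical, not any shipping product’s:

```python
from typing import Protocol

class StorySystem(Protocol):
    """The one contract every custom CMS in the confederation shares."""
    def recent_items(self, limit: int) -> list[dict]: ...

def build_home_page(systems: list[StorySystem], limit: int = 30) -> list[dict]:
    """The home page is merely a master aggregation of the confederation."""
    items: list[dict] = []
    for system in systems:
        # Each system decides for itself what an "item" contains;
        # it only promises a "published" timestamp for ordering.
        items.extend(system.recent_items(limit))
    items.sort(key=lambda item: item["published"], reverse=True)
    return items[:limit]

# Usage, with hypothetical systems:
#   build_home_page([crime_cms, elections_cms, prep_sports_cms])
```

Each system behind that contract can be rebuilt or replaced without touching the others, which is the whole point.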

Crazy, you say? Probably. My developer friends and readers with IT backgrounds are spitting their coffee out right now. But is it any more crazy than continuing to use a print-production approach on the web? I don’t think it is. It is the equal and opposite reaction: little innovation at the core vs. a complete custom rebuilding of it. Frankly, I believe neither is sustainable, but only one continues at mass scale. And I believe it’s the wrong one.

While I was at the St. Petersburg Times, we took this approach of rebuilding the core from scratch with PolitiFact. We built it from the ground up, augmenting the story form with database relationships to people, topics, and rulings (among others). We added transparency by making the listing of sources a required part of an item. We took the atomic parts of a fact-check story and we built a new molecule with them. And with that molecule, we built a national audience for a regional newspaper and won a Pulitzer Prize.

Not bad for a bunch of print journalists experimenting with the story form on the web.
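
We built PolitiFact on Django, and the new molecule is easiest to see as a data model. Here is a rough, hypothetical sketch of that kind of schema, with invented field names rather than our actual code:

```python
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=200)

class Topic(models.Model):
    name = models.CharField(max_length=100)

class Ruling(models.Model):
    label = models.CharField(max_length=50)    # "True" through "Pants on Fire"

class Statement(models.Model):
    # Not headline/byline/body copy: a claim, related to the people
    # and topics around it.
    speaker = models.ForeignKey(Person, on_delete=models.CASCADE)
    claim = models.TextField()
    ruling = models.ForeignKey(Ruling, on_delete=models.PROTECT)
    topics = models.ManyToManyField(Topic)

class Source(models.Model):
    # Transparency by design: the editorial workflow requires at least
    # one Source per Statement before it can publish.
    statement = models.ForeignKey(Statement, on_delete=models.CASCADE,
                                  related_name="sources")
    url = models.URLField()
    description = models.CharField(max_length=300)
```

The relationships, not the prose, are what let a regional paper’s fact-checks accumulate into a national product.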

I would be lying if I said I wasn’t disappointed that PolitiFact’s success didn’t unleash a torrent of programmers and journalists and journalist/programmers hacking away on new story forms. It hasn’t, and I am.

But I’m not about to blame programmers in the newsroom. Many that I talk to are excited to experiment in any way they can with journalism and the web. The enemy is what we cling to. And it’s time to let go.

November 22 2010

15:00

With its new food blog, WordPress gets into the content-curation game

This month, the company associated with one of the world’s most popular blogging platforms took its first, quiet step into the realm of for-profit content aggregation. FoodPress, a human-curated recipe blog, is a collaboration between blogging giant WordPress.com and Federated Media, a company that provides advertising to blogs and also brokers more sophisticated sponsorship deals. Lindt chocolate is already advertising on the site.

“We have a huge pool of really motivated and awesome food bloggers,” explained Joy Victory, WordPress’ editorial czar. (Yes, that is, delightfully, her official title.) Food was a natural starting place for a content vertical.

If the FoodPress model takes off, it could be the beginning of a series of WordPress content verticals covering different topics. WordPress.com currently hosts more than 15.1 million blogs, and when the FoodPress launch was announced, excited WordPress commenters were already asking for additional themed pages on subjects like art, restaurants, and beer.

(To clarify the sometimes confusing nomenclature: WordPress the blogging software — sometimes called WordPress.org — is free, open source, and installed on your own web server; we use it under the hood here at the Lab. WordPress.com is a for-profit venture offering a hosted version of WordPress software, owned by Automattic, which was founded by WordPress developer Matt Mullenweg. FoodPress is a WordPress.com project.)

For now, though, FoodPress’ creators are keeping their focus on their first blog and seeing what kind of traffic and advertising interest it attracts — the start-small-then-scale approach. And one question that remains to be answered in this first experimental effort is how WordPress bloggers will respond to the monetization of their content, and whether featured bloggers will want compensation beyond the additional traffic they’re likely to receive.

So far, the response from users has been overwhelmingly enthusiastic, Victory said. While the familiar issue of blogger compensation has been raised in response to the new venture, “our users don’t seem concerned so far,” she said. Instead, they’re largely excited about the possibility of even more themed sites. Advertising is already a part of WordPress.com, Victory pointed out, popping up on individual WordPress blogs unless a user is signed into WordPress itself.

WordPress’ venture into the editorial realm is significant on its own merits, but it also provides a fascinating case study in how media jobs have proliferated even as the news industry suffers. Victory used to work for metro newspapers, as did Federated Media’s Neil Chase. Now the two are working on a project that brings atomized pieces of user-created content together as a singular web publication. (FoodPress’ tagline: “Serving up the hottest dishes on WordPress.com.”)

Victory is optimistic about this “new way of looking at journalism” — even though, she said, “I consider myself someone who has left traditional journalism behind.” While some of the FoodPress content is aggregated automatically, Victory also believes in the value of human curation in creating a good user experience — a sentiment shared among many in the burgeoning ranks of web curators. (Up to now, WordPress’ content curation has focused mainly on Freshly Pressed, a collection of featured blog posts on the site’s homepage, which Victory hand-selects daily.) And to bring more editorial oversight to FoodPress, Federated Media turned to one of its affiliated bloggers, Jane Maynard, to oversee the project — a paid, part-time position.

The blog won’t be just an experiment in curation, though; it will also be a case study in collaboration. “It’s the first step in what we think will be a critical partnership,” Chase noted — one that emerged organically from the collaboration-minded, conversational world of San Francisco-based startups. And just as Federated Media and Automattic have shared the duties of creating the site, he said, they will also share the revenue FoodPress generates.

As for the expectations for that revenue? Victory isn’t releasing traffic stats for FoodPress at this point — both she and Chase were hesitant to talk too much about a project still in beta testing — but noted that the site’s social media presence is growing, with, as of this posting, more than 1,400 Facebook “Likes” and 1,200 Twitter followers. The rest will, like a recipe itself, develop over time. “This is a little bit of an experiment for us,” Victory said. “And we’re hoping it’s wildly successful.”

June 10 2010

14:00

Linking by the numbers: How news organizations are using links (or not)

In my last post, I reported on the stated linking policies of a number of large news organizations. But nothing speaks like numbers, so I also trawled through the stories on the front pages of a dozen online news outlets, counting links, looking at where they went, and how they were used.

I checked 262 stories in all, and to a certain degree, I found what you’d expect: Online-only publications were typically more likely to make good use of links in their stories. But I also found that use of links often varies wildly within the same publication, and that many organizations link mostly to their own topic pages, which are often of less obvious value.

My survey included several major international news organizations, some online-only outlets, and some more blog-like sites. Given the ongoing discussion about the value of external links, and the evident popularity of topic pages, I sorted links into “internal”, “external”, and “topic page” categories. I included only inline links, excluding “related articles” sections and sidebars.

Twelve hand-picked news outlets hardly make up an unbiased sample of the entire world of online news, nor can data from one day be counted as comprehensive. But call it a data point — or a beginning. For the truly curious, the spreadsheet contains article-level numbers and notes.
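
The counting logic itself is simple enough to sketch in Python. This is a reconstruction for illustration, not the actual script I used; the topic-path prefixes in particular vary by outlet:

```python
from urllib.parse import urlparse

def classify_link(href: str, site_domain: str,
                  topic_prefixes: tuple = ("/topic/", "/topics/")) -> str:
    """Bucket one inline story link as internal, external, or topic page.
    `topic_prefixes` is a per-outlet assumption; each site brands its
    topic pages under different URL paths."""
    parsed = urlparse(href)
    domain = parsed.netloc.lower()
    if domain and not domain.endswith(site_domain):
        return "external"
    if parsed.path.startswith(topic_prefixes):
        return "topic page"
    return "internal"

# e.g. classify_link("http://topics.nytimes.com/topics/hbo", "nytimes.com")
# -> "topic page"
```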

Of the dozen online news outlets surveyed, the median number of links per article was 2.6. Here’s the average number of links per article for each outlet:

Source                       Internal   External   Topic Page   Total
BBC News                     0          0          0            0
CNN                          0.3        0.2        0.7          1.2
Politico                     0.7        0.2        0.6          1.5
Reuters.com                  0.1        0.2        1.4          1.7
Huffington Post              1.1        1.0        0            2.1
The Guardian                 0.5        0.2        1.8          2.4
Seattle Post-Intelligencer   0.9        1.9        0            2.8
Washington Post              1.0        0.3        2.0          3.3
Christian Science Monitor    2.5        1.1        0            3.6
TechCrunch                   1.8        3.6        1.2          6.6
The New York Times           1.0        1.2        4.6          6.8
Nieman Journalism Lab        1.4        13.1       0            14.5

The median number of internal links per article was 0.95, the median number of external links was 0.65, and the median number of topic page links was also 0.65. I had expected that online-only publications would have more links, but that’s not really what we see here. TechCrunch and our own Lab articles rank quite high, but so does The New York Times. Conversely, the BBC, Reuters, CNN, and The Huffington Post have no print edition to convert from, so I would have expected them to be more web-native — but they rank at or near the bottom.

What’s going on here? In short, we’re seeing lots of automatically generated links to topic pages. Many organizations are using topic pages as their primary linking strategy. The majority of links from The New York Times, The Washington Post, Reuters.com, CNN, and Politico — and for some of these outlets the vast majority — were to branded topic pages.

Topic pages can be a really good idea, providing much needed context and background material for readers. But as Steve Yelvington has noted, topic pages aren’t worth much if they’re not taken seriously. He singles out “misplaced trust in automation” as a pitfall. Like many topic pages, this CNN page is nothing more than a pile of links to related stories.

It doesn’t seem very useful to spend such a high percentage of a story’s links directing readers to such pages, and I wonder about the value of heavy linking to broad topic pages in general. How much is the New York Times reader really served by having a link to the HBO topic page from every story about the cable industry, or the Washington Post reader served by links on mentions of the “GOP”?

I suspect that links to topic pages are flourishing because such links can be generated by automated tools and because topic pages can be an SEO strategy, not because topic page links add great journalistic value. My suspicion is that most of the topic page links we are seeing here are automatically or semi-automatically inserted. Nothing wrong with automation — but with present technology it’s not as relevant as hand-coded links.
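
The mechanics are easy to imagine. Here’s a purely hypothetical sketch of what such an auto-linker amounts to: a lookup table and a substitution.

```python
import re

# Hypothetical table mapping terms to branded topic-page URLs.
TOPIC_PAGES = {
    "HBO": "/topics/hbo",
    "iPad": "/topics/ipad",
}

def auto_link(body: str, first_reference_only: bool = True) -> str:
    """Wrap known terms in links to topic pages: blunt automation that
    happily links "HBO" in every cable-industry story, relevant or not."""
    for term, url in TOPIC_PAGES.items():
        pattern = re.compile(r"\b%s\b" % re.escape(term))
        replacement = '<a href="%s">%s</a>' % (url, term)
        count = 1 if first_reference_only else 0   # 0 = replace every mention
        body = pattern.sub(replacement, body, count=count)
    return body
```

Flip first_reference_only off and every mention gets a link; either way, the tool has no notion of whether a given link adds journalistic value.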

So what do we see when we exclude topic page links?

Excluding links to topic pages — counting only definitely hand-written links — the median number of links per article drops to 1.7. The implication here is that something like 30 percent of the links that one finds in online news articles across the web go to topic pages, which certainly matches my reading experience. Sorting the outlets by internal-plus-external links also shows an interesting shift in the linking leaderboard.

Source                       Internal   External   Total
BBC News                     0          0          0
Reuters.com                  0.1        0.2        0.3
CNN                          0.3        0.2        0.5
The Guardian                 0.5        0.2        0.7
Politico                     0.7        0.2        0.9
Washington Post              1.0        0.3        1.3
Huffington Post              1.1        1.0        2.1
The New York Times           1.0        1.2        2.2
Seattle Post-Intelligencer   0.9        1.9        2.8
Christian Science Monitor    2.5        1.1        3.6
TechCrunch                   1.8        3.6        5.4
Nieman Journalism Lab        1.4        13.1       14.5

The Times and the Post have moved down, and online-only outlets Seattle Post-Intelligencer and Christian Science Monitor have moved up. TechCrunch still ranks high with a lot of linking any way you slice it, and the Lab is still the linkiest because we’re weird like that. (To prevent cheating, I didn’t tell anyone at the Lab, or elsewhere, that I was doing this survey.) But the BBC, CNN, and Reuters are still at the bottom.

Linking is unevenly executed, even within the same publication. The number of links per article depended on who was writing it, the topic, the section of the publication, and probably also the phase of the moon. Even obviously linkable material, such as an obscure politician’s name or a reference to comments on Sarah Palin’s Facebook page, was inconsistently linked. Meanwhile, one anomalous Reuters story linked to the iPad topic page on every single reference to “iPad” — 16 times in one story. (I’m going to have to side with the Wikipedia linking style guide here, which says link on first reference only.)

Whether or not an article contains good links seems to depend largely on the whim of the reporter at most publications. This suggests a paucity of top-down guidance on linking, which is in line with the rather boilerplate answers I got to my questions about linking policy.

Articles posted to the “blog” section of a publication generally made heavier use of links, especially external links. The average number of external links per page at The New York Times drops from 1.2 to 0.8 if the single blog post in the sample is excluded — it had ten external links! Whatever news outlets mean by the word “blog,” they are evidently producing their “blogs” differently, because the blogs have more links.

The wire services don’t link. Stories on Reuters.com — as distinguished from stories delivered on Reuters professional products — had an average of 1.7 links per article. But only 0.3 of those links were not to topic pages, and only blog posts had any external links at all. Stories read on Reuters professional products sometimes contain links to source financial documents or other Reuters stories, though it’s not clear to me whether these systems use or support ordinary URLs. The Associated Press has no hub news website of its own, so I couldn’t include it in my survey, but stories pushed to customers through its standard feed do not include inline links, though they sometimes include links in an “On the Net” section at the end of the story.

As I wrote previously, Reuters and AP told me that the reason they don’t include inline hyperlinks is that many of their customers publish on paper only and use content management systems that don’t support HTML.

What does this all mean? The link has yet to make it into the mainstream of journalistic routine. Not all stories need links, of course, but my survey showed lots of examples where links would have provided valuable backstory, context, or transparency. Several large organizations are diligent about linking to their own topic pages, probably with the assistance of automation, but are wildly inconsistent about linking to anything else. The cultural divide between “journalists” and “bloggers” is evident in the way writers use links (or don’t), even within the same newsroom. The major wire services don’t yet offer integrated hypertext products for their online customers. And when automatically generated links are excluded, online-only publications tend to take links more seriously.

June 09 2010

16:00

Making connections: How major news organizations talk about links

Links can add a lot of value to stories, but the journalism profession as a whole has been surprisingly slow to take them seriously. That’s my conclusion from several months of talking to organizations and reporters about their linking practices, and from counting the number and type of links from hundreds of stories.

Wikipedia has a 5,000-word linking style guide. That might be excessive, but at least it’s thorough. I wondered what professional newsrooms thought of linking, so I contacted a number of them and asked how they were directing their reporters to use links. I got answers — but sometimes vague ones.

In this post I’ll report those answers, and in the next post I’ll discuss the results of my look into how links are actually being used in the published work of a dozen news outlets.

The BBC made its linking intentions public in a March 19 post by website editor Steve Herrmann.

Related links matter: They are part of the value you add to your story — take them seriously and do them well; always provide the link to the source of your story when you can; if you mention or quote other publications, newspapers, websites — link to them; you can, where appropriate, deep-link; that is, link to the specific, relevant page of a website.

I asked Herrmann for details and reported his responses previously. Then I sent this paragraph to other news organizations and asked about their linking policies. A spokesperson for The New York Times wrote:

Yes, the guidance we offer to our journalists is very similar to that of the BBC, in that we encourage them to provide links, where appropriate, to sources and other relevant information.

Washington Post managing editor Raju Narisetti made similar remarks, but emphasized that the Post encourages “deep linking.”

While we don’t have a formal policy yet on linking, we are actively encouraging our reporters, especially our bloggers, to link to relevant and reliable online sources outside washingtonpost.com and in doing so, to be contextual, as in to link to specific content [rather] than to a generic site so that our readers get where they need to get quickly.

Why would anyone not link to the exact page of interest? In the news publishing world, the issue of deep linking has a history of controversy, starting with the Shetland Times vs. Shetland News case in 1996.

The Wall Street Journal and Dow Jones Newswires wouldn’t discuss their linking policy, as a spokesperson wrote to me:

As you can see from the site, we do link to many outside news organizations and sources. But unfortunately, we don’t publicly discuss our policies, so we won’t have anyone to elaborate on this.

From observation, I did confirm that Dow Jones Newswires doesn’t reliably link to source documents even when they’re publicly available online. I found a simple story about a corporate disclosure, tracked down the disclosure document on the stock exchange’s website, then called the Dow Jones reporter and confirmed that this was the source of the story. But it’s unfair to single out Dow Jones, because wire services generally don’t link.

The Associated Press does not include inline links in stories, though they sometimes append links in an “On the Net” section at the bottom of stories. A spokesperson explained why there is no inline linking:

In short, a technical constraint. We experimented with inline linking a year or so ago but had difficulties given the huge variety of downstream systems, at AP and subscriber locations, that handle our copy. The AP serves 1,500 member U.S. papers, as well as thousands of commercial Web sites and ones operated by the papers, radio and TV stations, and so on.

Reuters links in various ways from stories viewed within its professional desktop products, including links to source documents and previous Reuters stories, though these links are not always standard URLs. Their newswire product does not include links. A spokesperson asked not to be quoted directly, but explained that, like the Associated Press, many of their customers could not handle inline links — and no copy editor wants to be forced to manually remove embedded HTML. She also said that Reuters sees itself as providing an authoritative news source that can be used without further verification. I get her point, but I don’t see it as a reason to not point to public sources.

The wire services are in a tricky position. Not only are many of their customers unable to handle HTML, but it’s often not possible for the wires to link to their previous stories — either because they aren’t posted online or they’re posted on many subscriber websites. This illuminates an unsolved problem with syndication and linking generally: if every user of syndicated material posts copy independently on their own site, there is no canonical URL that can be used by the content creator to refer to a particular story. (The AP’s been thinking about this.)

These sorts of technical issues are definitely a barrier, and staff from several newsrooms told me that their print-era content management systems don’t handle links well. There’s also no standard format for filing a story with hyperlinks — copy might be drafted in Microsoft Word, but links are unlikely to survive being repeatedly emailed, cut and pasted, and squeezed through any number of different systems.

But technical obstacles don’t much matter if reporters don’t value links enough to write them into their stories. In conversations with staff members from various newsrooms, I’ve frequently heard that cultural issues are a barrier. When paper is seen as the primary product, adding good links feels like extra work for the reporter, rather than an essential part of the storytelling form. Some publishers are also suspicious that links to other sites will “send readers away” — a view that would seem to contradict the suspicion of inbound links from aggregators.

Reading between the lines, it seems that most newsrooms have yet to make a strong commitment to linking. This would explain the mushiness of some of the answers I received, where news organizations “encourage” their reporters or offer “guidance” on linking. If, as I believe, links are an essential part of online journalism, then the profession has a way to go to exploit the digital medium. In my next post, I’ll break down some numbers on how different news organizations are using links today.
