
June 28 2013

15:00

This Week in Review: The backlash against Greenwald and Snowden, and RSS’s new wave


Greenwald, journalism, and advocacy: It’s been three weeks since the last review, and a particularly eventful three weeks at that. So this review will cover more than just the last week, but it’ll be weighted toward the most recent stuff. I’ll start with the U.S. National Security Agency spying revelations, covering first the reporter who broke them (Glenn Greenwald), then his source (Edward Snowden), and finally a few brief tech-oriented pieces of the news itself.

Nearly a month since the first stories on U.S. government data-gathering, Greenwald, who runs an opinionated and meticulously reported blog for the Guardian, continues to break news of further electronic surveillance, including widespread online metadata collection by the Obama administration that continues today, despite the official line that it ended in 2011. Greenwald’s been the object of scrutiny himself, with a thorough BuzzFeed profile on his past as an attorney and questions from reporters about old lawsuits, back taxes, and student loan debt.

The rhetoric directed toward Greenwald by other journalists was particularly fierce: The New York Times’ Andrew Ross Sorkin said on CNBC he’d “almost arrest” Greenwald (he later apologized), and most notably, NBC’s David Gregory asked Greenwald “to the extent that you have aided and abetted Snowden,” why he shouldn’t be charged with a crime. The Washington Post’s Erik Wemple refuted Gregory’s line of questioning point-by-point and also examined the legal case for prosecuting Greenwald (there really isn’t one).

There were several other breakdowns of Gregory’s questions as an attempt to defend his own standing as a professional journalist by excluding Greenwald from the profession; of these, NYU professor Jay Rosen’s was the definitive take. The Los Angeles Times’ Benjamin Mueller seconded his point, arguing that by going after Greenwald’s journalistic credentials, “from behind the veil of impartiality, Gregory and his colleagues went to bat for those in power, hiding a dangerous case for tightening the journalistic circle.”

The Freedom of the Press Foundation’s Trevor Timm argued that Gregory is endangering himself by defining journalism based on absence of opinion, and The New York Times’ David Carr called for journalists to show some solidarity on behalf of transparency. PaidContent’s Mathew Ingram used the case to argue that the “bloggers vs. journalists” tension remains important, and Greenwald himself said it indicated the incestuous relationship between Washington journalists and those in power.

A few, like Salon’s David Sirota, turned the questions on Gregory, wondering why he shouldn’t be charged with a crime, since he too has disclosed classified information. Or why he should be considered a journalist, given his track record of subservience to politicians, as New York magazine’s Frank Rich argued.

Earlier, Rosen had attempted to mediate some of the criticism of Greenwald by arguing that there are two valid ways of approaching journalism — with or without politics — that are both necessary for a strong press. Former newspaper editor John L. Robinson added a call for passion in journalism, while CUNY’s Jeff Jarvis and Rolling Stone’s Matt Taibbi both went further and argued that all journalism is advocacy.


Snowden and leaking in public: The other major figure in the aftermath of this story has been Edward Snowden, the employee of a national security contractor who leaked the NSA information to Greenwald and revealed his identity shortly after the story broke. The U.S. government charged Snowden with espionage (about which Greenwald was understandably livid), as he waited in Hong Kong, not expecting to see home again.

The first 48 hours of this week were a bit of a blur: Snowden applied for asylum in Ecuador (the country that’s been harboring WikiLeaks’ Julian Assange), then reportedly left Hong Kong for Moscow. But Snowden wasn’t on a scheduled flight from Moscow to Cuba, creating confusion about where exactly he was — and whether he was ever in Moscow in the first place. He did all this with the apparent aid of WikiLeaks, whose leaders claimed that they know where Snowden is and that they could publish the rest of his NSA documents. It was a bit of a return to the spotlight for WikiLeaks, which has nonetheless remained on the FBI’s radar for the last several years, with the bureau even paying a WikiLeaks volunteer as an informant.

We got accounts from the three journalists Snowden contacted — Greenwald, The Washington Post’s Barton Gellman, and filmmaker Laura Poitras — about their interactions with him, as well as a probe by New York Times public editor Margaret Sullivan into why he didn’t go to The Times. In a pair of posts, paidContent’s Mathew Ingram argued that the leak’s path showed that having a reputation as an alternative voice can be preferable to being in the mainstream when it comes to some newsgathering, and that news will flow to wherever it finds the least resistance. The Times’ David Carr similarly concluded that news stories aren’t as likely to follow established avenues of power as they used to.

As The Washington Post’s Erik Wemple described, news organizations debated whether to call Snowden a “leaker,” “source,” or “whistleblower.” Several people, including The Atlantic’s Garance Franke-Ruta and Forbes’ Tom Watson, tried to explain why Snowden was garnering less popular support than might be expected, while The New Yorker’s John Cassidy detailed the backlash against Snowden in official circles, which, as Michael Calderone of The Huffington Post pointed out, was made largely with the aid of anonymity granted by journalists.

Numerous people, such as Kirsten Powers of The Daily Beast, also decried that backlash, with Ben Smith of BuzzFeed making a particularly salient point: Journalists have long disregarded their sources’ personal motives and backgrounds in favor of the substance of the information they provide, and now that sources have become more public, the rest of us are going to have to get used to that, too. The New York Times’ David Carr also noted that “The age of the leaker as Web-enabled public figure has arrived.”

Finally the tech angle: The Prism program that Snowden leaked relied on data from tech giants such as Google, Apple, Facebook, and Yahoo, and those companies responded first by denying their direct involvement in the program, then by competing to show off their commitment to transparency, as Time’s Sam Gustin reported. First, Google asked the U.S. government for permission to reveal all their incoming government requests for information, followed quickly by Facebook and Microsoft. Then, starting with Facebook, those companies released the total number of government requests for data they’ve received, though Google and Twitter pushed to be able to release more specific numbers. Though there were early reports of special government access to those companies’ servers, Google reported that it uses secure FTP to transfer its data to the government.

Instagram’s bet on longer (but still short) video: Facebook’s Instagram moved into video last week, announcing 15-second videos, as TechCrunch reported in its good summary of the new feature. That number drew immediate comparisons to the six-second looping videos of Twitter’s Vine. As The New York Times noted, length is the primary difference between the two video services (though TechCrunch has a pretty comprehensive comparison), and Instagram is betting that longer videos will be better.

The reason isn’t aesthetics: As Quartz’s Christopher Mims pointed out, the ad-friendly 15-second length fits perfectly with Facebook’s ongoing move into video advertising. As soon as Instagram’s video service was released, critics started asking a question that would’ve seemed absurd just a few years ago: Is 15 seconds too long? Josh Wolford of WebProNews concluded that it is indeed too much, at least for the poorly produced amateur content that will dominate the service. At CNET, Danny Sullivan tried to make peace with the TL;DR culture behind Vine and Instagram Video.

Several tech writers dismissed it on sight: John Gruber of Daring Fireball gave it a terse kiss-off, while Mathew Ingram of GigaOM explained why he won’t use it — it can’t be easily scanned and has a low signal-to-noise ratio — though he said it could be useful for advertisers and kids. PandoDaily’s Nathaniel Mott argued that Instagram’s video (like Instagram itself) is more about vanity-oriented presentation than useful communication. And both John Herrman of BuzzFeed and Farhad Manjoo of Slate lamented the idea that Instagram and Facebook seem out of ideas, with Manjoo calling it symptomatic of the tech world in general. “Instead of invention, many in tech have fallen into the comfortable groove of reinvention,” Manjoo wrote.

Chris Gayomali of The Week, however, saw room for both Vine and Instagram to succeed. Meanwhile, Nick Statt of ReadWrite examined the way Instagram’s filters have changed the way photography is seen, even among professional photographers and photojournalists.

The post-Google Reader RSS rush: As Google Reader approaches its shutdown Monday, several other companies are taking the opportunity to jump into the suddenly reinvigorated RSS market. AOL launched its own Reader this week, and old favorite NetNewsWire relaunched as well.

Based on some API code, there was speculation that Facebook could be announcing its own RSS reader soon. That hasn’t happened, though The Wall Street Journal reported that Facebook is working on a Flipboard-like mobile aggregation service. GigaOM’s Eliza Kern explained why she wouldn’t want a Facebook RSS feed, while Fast Company’s Chris Dannen said a Facebook RSS reader could actually help solve the “filter bubble” problem of like-minded information.

Sarah Perez of TechCrunch examined the alternatives to Google Reader, concluding disappointedly that there simply isn’t a replacement out there for it. Her colleague, Darrell Etherington, chided tech companies for their reactionary stance toward RSS development. Carol Kopp of Minyanville argued, however, that much of the rush toward RSS development is being driven just as much by a desire to crack the mobile-news nut, something she believed could be accomplished. RSS pioneer Dave Winer was also optimistic about its future, urging developers to think about “What would news do?” in order to reshape it for a new generation.

Reading roundup: A few of the other stories you might have missed over the past couple of weeks:

— Rolling Stone’s Michael Hastings, who had built up a reputation as a maverick through his stellar, incisive reporting on foreign affairs, was killed in a car accident last week at age 33. Several journalists — including BuzzFeed’s Ben Smith, The Guardian’s Spencer Ackerman, Slate’s David Weigel, and freelancer Corey Pein — wrote warm, inspiring remembrances of a fearless journalist and friend. Time’s James Poniewozik detected among reporters in general “maybe a little shame that more of us don’t always remember who our work is meant to serve” in their responses to Hastings’ death.

— Pew’s Project for Excellence in Journalism issued a study based on a survey of nonprofit news organizations that provided some valuable insights into the state of nonprofit journalism. The Lab’s Justin Ellis, Poynter’s Rick Edmonds, and J-Lab’s Jan Schaffer explained the findings. Media analyst Alan Mutter urged nonprofit news orgs to put more focus on financial sustainability, while Michele McLellan of the Knight Digital Media Center called on their funders to do the same thing.

— Oxford’s Reuters Institute also issued a survey-based study whose findings focused on consumers’ willingness to pay for news. The Lab’s Sarah Darville and BBC News’ Leo Kelion summarized the findings, while paidContent’s Mathew Ingram gave an anti-paywall reading. The Press Gazette also highlighted a side point in the study — the popularity of live blogs.

— Texas state politics briefly grabbed a much broader spotlight this week with state Sen. Wendy Davis’ successful 13-hour filibuster of a controversial abortion bill. Many people noticed that coverage of the filibuster (and surrounding protest) was propelled by digital photo and video, rather than cable news. VentureBeat’s Meghan Kelly, Time’s James Poniewozik, and The Verge’s Carl Franzen offered explanations.

— Finally, a couple of reads from the folks at Digital First, one sobering and another inspiring: CEO John Paton made the case for the inadequacy of past-oriented models in sustaining newspapers, and digital editor Steve Buttry collected some fantastic advice for students on shaping the future of journalism.

Photos of Glenn Greenwald by Gage Skidmore and Edward Snowden stencil by Steve Rhodes used under a Creative Commons license. Instagram video by @bakerbk.

December 19 2011

07:37

Magazine editing: managing information overload

In the second of three extracts from the 3rd edition of Magazine Editing, published by Routledge, I talk about dealing with the large amount of information that magazine editors receive. 

Managing information overload

A magazine editor now has little problem finding information on a range of topics. It is likely that you will have subscribed to email newsletters, RSS feeds, Facebook groups and pages, YouTube channels and various other sources of news and information both in your field and on journalistic or management topics.

There tend to be two fears driving journalists’ information consumption: the fear that you will miss out on something because you’re not following the right sources; and the fear that you’ll miss out on something because you’re following too many sources. This leads to two broad approaches: people who follow everything of any interest (‘follow, then filter’); and people who are very strict about the number of sources of information they follow (‘filter, then follow’).

A good analogy to use here is of streams versus ponds. A pond is manageable, but predictable. A stream is different every time you step in it, but you can miss things.

As an editor you are in the business of variety: you need to be exposed to a range of different pieces of information, and cannot afford to be caught out. A good strategy for managing your information feeds, then, is to follow a wide variety of sources, but to add filters so that you don’t miss the best stuff.

If you are using an RSS reader one way to do this is to have specific folders for your ‘must-read’ feeds. Andrew Dubber, a music industries academic and author of the New Music Strategies blog, recommends choosing 10 subjects in your area, and choosing five ‘must-read’ feeds for each, for example.

For email newsletters and other email updates you can adopt a similar strategy: must-reads go into your Inbox; others are filtered into subfolders to be read if you have time.

To create a folder in Google Reader, add a new feed (or select an existing one) and under the heading click on Feed Settings… – then scroll to the bottom and click on New Folder… – this will also add the feed to that folder.

If you are following hundreds or thousands of people on Twitter, use Twitter lists to split them into manageable channels: ‘People I know’; ‘journalism’; ‘industry’; and so on. To add someone to a list on Twitter, visit their profile page and click on the list button, which will be around the same area as the ‘Follow’ button.

You can also use websites such as Paper.li to send you a daily email ‘newspaper’ of the most popular links shared by a particular list of friends, so you don’t miss out on the most interesting stories.

Social bookmarking: creating an archive and publishing at the same time

Social bookmarking tools like Delicious, Digg and Diigo can also be useful in managing web-based resources that you don’t have time to read or think might come in useful later. Bookmarking them essentially ‘files’ each webpage so you can access them quickly when you need them (you do this by giving each page a series of relevant tags, e.g. ‘dieting’, ‘research’, ‘UK’, ‘Jane Jones’).

They also include a raft of other useful features, such as RSS feeds (allowing you to automatically publish selected items to a website, blog, or Twitter or Facebook account), and the ability to see who else has bookmarked the same pages (and what else they have bookmarked, which is likely to be relevant to your interests).

Check the site’s Help or FAQ pages to find out how to use them effectively. Typically this will involve adding a button to your browser’s Links bar (under the web address box) by dragging a link (called ‘Bookmark on Delicious’ or similar) from the relevant page of the site (look for ‘bookmarklets’).

Then, whenever you come across a page you want to bookmark, click on that button. A new window will appear with the name and address of the webpage, and space for you to add comments (a typical tactic is to paste a key quote from the page here), and tags.

Useful things to add as tags include anything that will help you find this later, such as any organisations, locations or people that are mentioned, the author or publisher, and what sort of information is included, such as ‘report’, ‘statistics’, ‘research’, ‘casestudy’ and so on.

If installing a button on your browser is too complicated or impractical many of these services also allow you to bookmark a page by sending the URL to a specific email address. Alternatively, you can just copy the URL and log on to the bookmarking site to bookmark it.

Some bookmarking services double up as blogging sites: Tumblr and StumbleUpon are just two. The process is the same as described above, but these services are more intuitively connected with other services such as Twitter and Facebook, so that bookmarked pages are automatically published on those services too. With one click your research not only forms a useful archive but also becomes an act of publishing and distribution.

Every so often you might want to have a clear out: try diverting mailings and feeds to a folder for a week without looking at them. After seven days, ask which ones, if any, you have missed. You might benefit from unsubscribing and cutting down some information clutter. In general, it may be useful to have background information, but it all occupies your time. Treat such things as you would anything sent to you on paper. If you need it, and it is likely to be difficult to find again, file it or bookmark it. If not, bin it. After a while, you’ll find it gets easier.

Do you have any other techniques for dealing with information overload?

 

July 20 2011

14:42

How to collaborate (or crowdsource) by combining Delicious and Google Docs

RSS girl by HeatherWeaver on Flickr

During some training in open data I was doing recently, I ended up explaining (it’s a long story) how to pull a feed from Delicious into a Google Docs spreadsheet. I promised I would put it down online, so: here it is.

In a Google Docs spreadsheet the formula =importfeed will pull information from an RSS feed and put it into that spreadsheet. Titles, links, datestamps and other parts of the feed will each be separated into their own columns.

When combined with Delicious, this can be a useful way to collect together pages that have been bookmarked by a group of people, or any other feed that you want to analyse.

Here’s how you do it:

1. Decide on your tag, network or user

The spreadsheet will pull data from an RSS feed. Delicious provides so many of these that you are spoilt for choice. Here are the main three:

A tag

Used by various people.

Advantages: quick startup – all you need to do is tell people the tag (make sure this is unique, such as ‘unguessable2012’).

Disadvantages: others can hijack the tag – although this can be cleaned from the resulting data.

A network

Consisting of the group of people who are bookmarking:

Advantages: group cannot be infiltrated.

Disadvantages: setup time – may need to create a new account to build the network around.

A user

Created for this purpose:

Advantages: if users are not confident in using Delicious, this can be a useful workaround.

Disadvantages: longer setup time – you’ll need to create a new account, and work out an easy way for it to automatically capture bookmarks from the group. One way is to pull an RSS feed of any mentions on Twitter and use Twitterfeed to auto-tweet them with a hashtag, and then Packrati.us to auto-bookmark all tweeted links (a similar process is detailed here).

The RSS feed for each can be found at the bottom of the relevant page; the pages themselves are consistently formatted like so:

Delicious.com/tag/unguessable2012

Delicious.com/network/unguessable2012

Delicious.com/unguessable2012

2. Create your spreadsheet

In Google Docs, create a new spreadsheet and in the first cell type the following formula:

=importfeed("

…adding your RSS feed after the quotation mark, and then this at the end:

")

So it looks something like this:

=importfeed("http://feeds.delicious.com/v2/rss/tag/unguessable2012?count=15")

Now press enter and after a moment the spreadsheet should populate with data from that feed.

You’ll note, however, that at most you will have only 15 rows of data here. That’s because the RSS feed you’ve copied includes that limitation.

If you look at the RSS feed you’ll see an easy clue on how to change this…

So, try editing it so that the count=15 part of that URL reads count=20 instead. You can put a higher number – but Google Docs will limit results to 20 at a time.
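So the edited formula, using the same example tag, would look like this:

=importfeed("http://feeds.delicious.com/v2/rss/tag/unguessable2012?count=20")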

3. Collecting contributions

Technically, you’re now all set up. The bigger challenge is, of course, in getting people to contribute. It helps if they can see the results – so think about publishing your spreadsheet.

You’ll also need to make sure that you check it regularly and copy results into a backup spreadsheet so you don’t miss anything beyond that top 20.
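If you (or someone nearby) can run a small script, another option is to archive the feed outside the spreadsheet on a schedule. This is just a minimal sketch, assuming Python 3 with the third-party feedparser library installed; the feed address reuses the example tag from above, and the CSV filename is arbitrary:

import csv
import feedparser

# The same example tag as above; the feed itself will return more than 20 items
FEED_URL = "http://feeds.delicious.com/v2/rss/tag/unguessable2012?count=100"

feed = feedparser.parse(FEED_URL)

# Append to a CSV so each run adds to the archive (deduplicate later if needed)
with open("bookmarks_archive.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for entry in feed.entries:
        # Mirror the columns =importfeed produces: title, link, date
        writer.writerow([entry.get("title", ""), entry.get("link", ""), entry.get("published", "")])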

But if you find it doesn’t work it may be worth thinking of other ways of doing this – for example, with a Google Form, or using =importfeed with the RSS feed for a Twitter hashtag search containing links (Twitter’s advanced search allows you to limit results accordingly – and all search results come with an RSS feed link like this one).
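For example, a formula along these lines would have done the job; the address here is purely illustrative, assuming Twitter’s Atom search feed of the era, with the # encoded as %23:

=importfeed("http://search.twitter.com/search.atom?q=%23unguessable2012")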

Of course there are far more powerful ways of doing this which are worth exploring once you’ve understood the basic possibilities.

Doing more with =importfeed

The =importfeed formula has some other elements that we haven’t used.

Another way to do this, for example, is to paste your RSS feed URL into cell A1 and type the following anywhere else:

=importfeed(A1, "Items Title", FALSE, 20)

This has 4 parts in the parentheses:

  1. A1 – this points at the URL you just pasted in cell A1, and means that you only have to change what’s in A1 to change the feed being grabbed, rather than having to edit the formula itself
  2. “Items Title” – this is the part of the feed that is being grabbed. If you look in the feed you will see a part that says <item> and within that, an element called <title> – that’s it. You could change this to “Items URL” to get the <link> element of each <item> instead, for example. Or you could just put “Items” and get all 5 parts of each item (title, author, URL, date created, and summary). You can also use “feed” to get information about the feed itself, or “feed URL” or “feed title” or “feed description” to get that single piece of information.
  3. FALSE – this just says whether you want a header row or not. Setting to TRUE will add an extra row saying ‘Title’, for example.
  4. 20 – the number of results you want.

You can see an example spreadsheet with 3 sheets demonstrating different uses of this formula here.


April 11 2011

13:00

Data for journalists: understanding XML and RSS

If you are working with data chances are that sooner or later you will come across XML – or if you don’t, then, well, you should do. Really.

There are some very useful resources in XML format – and in RSS, which is based on XML – from ongoing feeds and static reference files to XML that is provided in response to a question that you ask. All of that is for future posts – this post attempts to explain how XML is relevant to journalism, and how it is made up.

What is XML?

XML is a language which is used for describing information, which makes it particularly relevant to journalists – especially when it comes to interrogating large sets of data.

If you wanted to know how many doctors were privately educated, or what the most common score was in the Premiership last season, or which documents were authored by a particular civil servant, then XML may be useful to you.

(That said, this post doesn’t show you how to do any of that – it is mainly aimed at explaining how XML works so that you can begin to think about those possibilities.)

XML stands for “eXtensible Markup Language”. It’s the ‘markup’ bit which is key: XML ‘marks up’ information as being something in particular: relating to a particular date, for example; or a particular person; or referring to a particular location.

For example, a snippet of XML like this -

<city>Paris</city>
<country>France</country>

- tells you that the ‘Paris’ in this instance is a city, rather than a celebrity. And that it’s in France, not Texas.

That makes it easier for you to filter out information that isn’t relevant, or combine particular bits of information with data from elsewhere.

For example, if an XML file contains information on authors, you can filter out all but those by the person you’re interested in; if it contains publication dates, you can use that to plot associated content on a timeline.

Most usefully, if you have a set of data yourself such as a spreadsheet, you can pull related data from a relevant XML file. If your spreadsheet contains football teams and the XML provides locations, images, and history for each, then you can pull that in to create a fuller picture. If it contains addresses, there are services that will give you XML files with the constituency for those postcodes.

What is RSS?

RSS is a whole family of formats which are essentially based on XML – so they are structured in the same way, containing ‘markup’ that might tell you the author, publication date, location or other details about the information it relates to.

There is a lot of variation between different versions of RSS, but the main thing for the purposes of this post is that the various versions of RSS, and XML, share a structure which journalists can use if they know how to.

Which version isn’t particularly important: as long as you understand the principles, you can adapt what you do to suit the document or feed you’re working with.

Looking at XML and RSS

XML documents (for simplicity’s sake I’ll mostly just refer to ‘XML’ for the rest of this post, although I’m talking about both XML and RSS) contain two things that are of interest to us: content, and information about the content (‘markup’).

Information about the content is contained within tags in angle brackets (also known as chevrons): ‘<’ and ‘>’

For example: <name> or <pubDate> (publication date).

The tag is followed by the content itself, and a closing tag that has a forward slash, e.g. </name> or </pubDate>, so one line might look like this:

<name>Paul Bradshaw</name>

At this point it’s useful to have some XML or RSS in front of you. For a random example go to the RSS feed for the Scottish Government News.

To see the code right-click on that page and select View Source or similar – Firefox is worth using if another browser does not work; the Firebug extension also helps. (Note: if the feed is generated by Feedburner this won’t work: look for the ‘View Feed XML’ button in the middle right area or add ?format=xml to the feed URL).

What you should see will include the following:

<item>
<title>Manufactured Exports Q4 2010</title>
<link>http://www.scotland.gov.uk/News/Releases/2011/04/06100351</link>
<description>A National Statistics publication for Scotland.</description>
<guid isPermaLink="true">http://www.scotland.gov.uk/News/Releases/2011/04/06100351</guid>
<pubDate>Wed, 06 Apr 2011 00:00:00 GMT</pubDate>
</item>

In the RSS feed itself this doesn’t start until line 14 (the first 13 lines are used to provide information about the feed as a whole, such as the version of RSS, title, copyright etc).

But from line 14 onwards this pattern repeats itself for a number of different ‘items’.

As you can see, each item has a title, a link, a description, a permalink, and a publication date. These are known as child elements (the item is their parent; the outermost tag that wraps the whole document is the ‘root element’).
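If you want to see those child elements programmatically rather than in the browser, here’s a minimal sketch, assuming Python 3 and a copy of the feed saved locally under the hypothetical filename feed.xml:

import xml.etree.ElementTree as ET

# Parse a local copy of the feed (save the page source as feed.xml first)
tree = ET.parse("feed.xml")

# Each <item> repeats the same pattern of child elements
for item in tree.getroot().iter("item"):
    title = item.findtext("title")
    link = item.findtext("link")
    pub_date = item.findtext("pubDate")
    print(pub_date, title, link)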

More journalistic examples can be found at Mercedes GP’s XML file of the latest F1 Championship Standings (see the PS at the end of Tony Hirst’s post for an explanation of how this is structured), and MySociety’s Parliament Parser, which provides XML files on all parts of government, from MPs and peers to debates and constituencies, going back over a decade. Look at the Ministers XML file in Firefox and scroll down until you get to the first item tagged <ministerofficegroup>. Within each of those are details on ministerial positions. As the Parliament Parser page explains:

“Each one has a date range, the MP or Lord became a minister at some time on the start day, and stopped being one at some time on the end day. The matchid field is one sample MP or Lord office which that person also held. Alternatively, use the people.xml file to find out which person held the ministerial post.”

You’ll notice from that quote that some parts of the XML require cross-referencing to provide extra details. That’s where XML becomes very useful.

Using it in practice: working with XML in Yahoo! Pipes

Yahoo! Pipes provides a good introduction to working with data in XML or RSS. You’ll need to sign up at Pipes.Yahoo.com and click on ‘Create a Pipe’.

You’ll now be editing a new project. In the left-hand column are various ‘modules’ you can use. Click on ‘Sources’ to expand it, and click and drag ‘Fetch Feed’ onto the graph paper-style canvas.

The ‘Fetch Feed’ module

Copy the address of your RSS feed and paste it into the ‘Fetch Feed’ box. I’m using this feed of Health information from the UK government.

If you now click on the module so that it turns orange, you should be able (after a few moments) to see that feed in the Debugger window at the bottom of the screen.

Click on the handle in the middle to pull it up and see more, and click on the arrows on the left to drill down to the ‘nested’ data within each item.

Drilling down into the data within an RSS feed

As you drill down you can see elements of data you can filter. In this case, we’ll use ‘region’.

To filter the feed based on this we need the Filter module. On the left-hand side click on ‘Operators’ to expand that, and then drag the ‘Filter’ module into the canvas.

Now drag a pipe from the circle at the bottom of the ‘Fetch Feed’ module to the top of the ‘Filter’ module.

Drag a pipe from Fetch Feed to Filter

Wait a moment for the ‘Filter’ module to work out what data the RSS feed contains. Then use the drop-down menus so that it reads “Permit items that match all of the following”.

The next box determines which piece of data you will filter on. If you click on the drop-down here you should see all the pieces of data that are associated with each item.

Select the data you are filtering on

We’re going to select ‘region’, and say that we only want to permit items where ‘region’ contains ‘North West’. If any of these don’t make sense, look at the original RSS feed again to see what they contain.

Now drag a final pipe from the bottom of the ‘Filter’ module to the top of ‘Pipe output’ at the bottom of the canvas. If you click on either you should be able to see in the Debugger that now only those items relating specifically to the North West are displayed.

If you wanted to you could now save this and click ‘Run Pipe’ to see the results. Once you do you should notice options to ‘Get as RSS’ – this would allow you to subscribe to this feed yourself or publish it on a website or Twitter account. There’s also ‘Get as JSON’ which is a whole other story – I’ll cover JSON in a future post.

You can see this pipe in action – and clone it yourself – here.
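For comparison, the same fetch-and-filter logic takes only a few lines of scripting outside Pipes. Here’s a sketch, assuming Python 3, a stand-in feed address (the real one is whatever you pasted into ‘Fetch Feed’), and that region appears as a plain, un-namespaced child element of each item, as it did in the debugger:

import urllib.request
import xml.etree.ElementTree as ET

# Stand-in address: substitute the feed you pasted into 'Fetch Feed'
FEED_URL = "http://www.example.gov.uk/health/news.rss"

xml_data = urllib.request.urlopen(FEED_URL).read()
root = ET.fromstring(xml_data)

# Permit only items whose 'region' contains 'North West', as in the Filter module
for item in root.iter("item"):
    region = item.findtext("region") or ""
    if "North West" in region:
        print(item.findtext("title"))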

Oh, and a sidenote: if you wanted to grab an XML file in Yahoo! Pipes rather than an RSS feed, you would use ‘Fetch Data’ instead of ‘Fetch Feed’.

Just the start

There’s much more you can do here, and there are plenty of possible next steps.

Those are for future posts. For now I just want to demonstrate how XML works to add information-about-information which you can then use to search, filter, and combine data.

And it’s not just an esoteric language that is used by a geeky few as part of their newsgathering: journalists at Sky News, The Guardian and The Financial Times – to name just a few – all use this as a routine part of publishing, because it provides a way to dynamically update elements within a larger story without having to update the whole thing from scratch – for example by updating casualty numbers or new dates on a timeline.

And while I’m at it, if you have any examples of XML being used in journalism for either newsgathering or publishing, let me know.


January 12 2011

20:23

The Independent’s Facebook revolution

Like Robert Fisk

The Independent newspaper has introduced a fascinating new feature on the site that allows users to follow articles by individual writers and news about specific football teams via Facebook.

It’s one of those ideas so simple you wonder why no one else appears to have done it before: instead of just ‘liking’ individual articles, or having to trudge off to Facebook to see if there’s a relevant page you can become a fan of, the Indie have applied the technology behind the ‘Like’ button to make the process of following specific news feeds more intuitive.

To that end, you can pick your favourite football team from this page or click on the ‘Like’ button at the head of any commentator’s homepage. The Independent’s Jack Riley says that the feature will be rolled out to columnists next, followed by public figures, places, political parties, and countries.

The move is likely to pour extra fuel on the overblown ‘RSS is dying’ discussion that has been taking place recently. The Guardian’s hugely impressive hackable RSS feeds (with full content) are somewhat put in the shade by this move – but then the Guardian have generated enormous goodwill in the development community for that, and continue to innovate. Both strategies have benefits.

At the moment the Independent’s new Facebook feature is plugged at the end of each article by the relevant commentator or about a particular club. That’s not the best place to put it, given how few people read articles through to the end, nor is it designed to catch the eye, and it will be interesting to see whether the placement and design change as the feature is rolled out.

It will also be interesting to see how quickly other news organisations copy the innovation.

More coverage at Read Write Web and Future of Media.

December 19 2010

18:00

Games, systems and context in journalism at News Rewired

I went to News Rewired on Thursday, along with dozens of other journalists and folk concerned in various ways with news production. Some threads that ran through the day for me were discussions of how we publish our data (and allow others to do the same), how we link our stories together with each other and the rest of the web, and how we can help our readers to explore context around our stories.

One session focused heavily on SEO for specialist organisations, but included a few sharp lessons for all news organisations. Frank Gosch spoke about the importance of ensuring your site’s RSS feeds are up to date and allow other people to easily subscribe to and even republish your content. Instead of clinging tight to content, it’s good for your search rankings to let other people spread it around.

James Lowery echoed this theme, suggesting that publishers, like governments, should look at providing and publishing their data in re-usable, open formats like XML. It’s easy for data journalists to get hung up on how local councils, for instance, are publishing their data in PDFs, but to miss how our own news organisations are putting out our stories, visualisations and even datasets in formats that limit or even prevent re-use and mashup.

Following on from that, in the session on linked data and the semantic web, Martin Belam spoke about the Guardian’s API, which can be queried to return stories on particular subjects and which is starting to use unique identifiers – MusicBrainz IDs and ISBNs, for instance – to allow lists of stories to be pulled out not simply by text string but using a meaningful identification system. He added that publishers have to licence content in a meaningful way, so that it can be reused widely without running into legal issues.

Silver Oliver said that semantically tagged data, linked data, creates opportunities for pulling in contextual information for our stories from all sorts of other sources. And conversely, if we semantically tag our stories and make it possible for other people to re-use them, we’ll start to see our content popping up in unexpected ways and places.

And in the long term, he suggested, we’ll start to see people following stories completely independently of platform, medium or brand. Tracking a linked data tag (if that’s the right word) and following what’s new, what’s interesting, and what will work on whatever device I happen to have in my hand right now and whatever connection I’m currently on – images, video, audio, text, interactives; wifi, 3G, EDGE, offline. Regardless of who made it.

And this is part of the ongoing move towards creating a web that understands not only objects but also relationships, a world of meaningful nouns and verbs rather than text strings and many-to-many tables. It’s impossible to predict what will come from these developments, but – as an example – it’s not hard to imagine being able to take a photo of a front page on a newsstand and use it to search online for the story it refers to. And the results of that search might have nothing to do with the newspaper brand.

That’s the down side to all this. News consumption – already massively decentralised thanks to the social web – is likely to drift even further away from the cosy silos of news brands (with the honourable exception of paywalled gardens, perhaps). What can individual journalists and news organisations offer that the cloud can’t?

One exciting answer lies in the last session of the day, which looked at journalism and games. I wrote some time ago about ways news organisations were harnessing games, and could do in the future – and the opportunities are now starting to take shape. With constant calls for news organisations to add context to stories, it’s easy to miss the possibility that – as Philip Trippenbach said at News Rewired – you can’t explain a system with a story:

Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work.

Many of the issues we cover – climate change, government cuts, the deficit – at macro level are systems that could be interestingly and interactively explored with games. (Like this climate change game here, for instance.) Other stories can be articulated and broadened through games in a way that allows for real empathy between the reader/player and the subject because they are experiential rather than intellectual. (Like Escape from Woomera.)

Games allow players to explore systems, scenarios and entire universes in detail, prodding their limits and discovering their flaws and hidden logic. They can be intriguing, tricky, challenging, educational, complex like the best stories can be, but they’re also fun to experience, unlike so much news content that has a tendency to feel like work.

(By the by, this is true not just of computer and console games but also of live, tabletop, board and social games of all sorts – there are rich veins of community journalism that could be developed in these areas too, as the Rochester Democrat and Chronicle is hoping to prove for a second time.)

So the big things to take away from News Rewired, for me?

  • The systems within which we do journalism are changing, and the semantic web will most likely bring another seismic change in news consumption and production.
  • It’s going to be increasingly important for us to produce content that both takes advantage of these new technologies and allows others to use these technologies to take advantage of it.
  • And by tapping into the interactive possibilities of the internet through games, we can help our readers explore complex systems that don’t lend themselves to simple stories.

Oh, and some very decent whisky.

Cross-posted at Metamedia.

October 08 2010

08:25

Online journalism student RSS reader starter pack: 50 RSS feeds

Teaching has begun in the new academic year and once again I’m handing out a list of recommended RSS feeds. Last year this came in the form of an OPML file, but this year I’m using Google Reader bundles (instructions on how to create one of your own are here). There are 50 feeds in all – 5 feeds in each of 10 categories. Like any list, this is reliant on my own circles of knowledge and arbitrary in various respects. But it’s a start. I’d welcome other suggestions.

Here is the list with links to the bundles. Each list is in alphabetical order – there is no ranking:

5 of the best: Community

A link to the bundle allowing you to add it to your Google Reader is here.

  1. Blaise Grimes-Viort
  2. Community Building & Community Management
  3. FeverBee
  4. ManagingCommunities.com
  5. Online Community Strategist

5 of the best: Data

This was a particularly difficult list to draw up – I went for a mix of visualisation (FlowingData), statistics (The Numbers Guy), local and national data (CountCulture and Datablog) and practical help on mashups (OUseful). I cheated a little by moving computer assisted reporting blog Slewfootsnoop into the 5 UK feeds and 10,000 Words into Multimedia. Bundle link here.

  1. CountCulture
  2. FlowingData
  3. Guardian Datablog
  4. OUseful.info
  5. WSJ.com: The Numbers Guy

5 of the best: Enterprise

There’s a mix of UK and US blogs covering the economic side of publishing here (if you know of ones with a more international perspective I’d welcome suggestions), and a blog on advertising to round things up. Frequency of updates was another factor in drawing up the list. Bundle link here.

  1. Ad Sales Blog
  2. Media Money
  3. Newsonomics
  4. Newspaper Death Watch
  5. The Information Valet

5 of the best: Industry feeds

Something of a catch-all category. There are a number of BBC blogs I could have included but The Editors is probably the most important. The other 4 feeds cover the 2 most important external drivers of traffic to news sites: search engines and Facebook. Bundle link here.

  1. All Facebook
  2. BBC News – The Editors
  3. Facebook Blog
  4. Search Engine Journal
  5. Search Engine Land

5 of the best: Feeds on law, ethics and regulation

Trying to cover the full range here: Jack of Kent is a leading source of legal discussion and analysis, and Martin Moore covers regulation, ethics and law regularly. Techdirt is quite transparent about where it sits on legal issues, but its passion is also a strength in how well it covers those grey areas of law and the web. Tech and Law is another regular source, while Judith Townend’s new blog on Media Law & Ethics is establishing itself at the heart of UK bloggers’ attempts to understand where they stand legally. Bundle link here.

  1. Jack of Kent
  2. Martin Moore
  3. Media Law & Ethics
  4. Tech and Law
  5. Techdirt

5 of the best: Media feeds

There’s an obvious UK slant to this selection, with Editors Weblog and E-Media Tidbits providing a more global angle. Here’s the bundle link.

  1. Editors Weblog
  2. E-Media Tidbits
  3. Journalism.co.uk
  4. MediaGuardian
  5. paidContent

5 of the best: Feeds about multimedia journalism

Another catch-all category. Andy Dickinson tops my UK feeds, but he’s also a leading expert on online video and related areas. 10,000 Words is strong on data, among other things. And Adam Westbrook is good on enterprise as well as practising video journalism and audio slideshows. Bundle link here.

  1. 10,000 Words
  2. Adam Westbrook
  3. Advancing the Story
  4. Andy Dickinson
  5. News Videographer

5 of the best: Technology feeds

A mix of the mainstream, the new, and the specialist. As the Guardian’s technology coverage is incorporated into its Media feed, I was able to include ReadWriteWeb instead, which often provides a more thoughtful take on technology news. Bundle link here.

  1. Mashable
  2. ReadWriteWeb
  3. TechCrunch
  4. Telegraph Connected
  5. The Register

5 of the best: UK feeds

Alison Gow’s Headlines & Deadlines is the best blog by a regional journalist I can think of (you may differ – let me know). Adam Tinworth’s One Man and his Blog represents the magazines sector, and Martin Belam’s Currybetdotnet casts an eye across a range of areas, including the more technical side of things. Murray Dick (Slewfootsnoop) is an expert on computer assisted reporting and has a broadcasting background. The Online Journalism Blog is there because I expect them to read my blog, of course. Bundle link here.

  1. Currybetdotnet
  2. Headlines and Deadlines
  3. One Man & His Blog
  4. Online Journalism Blog
  5. Slewfootsnoop

5 of the best: US feeds

Jay, Jeff and Mindy are obvious choices for me, after which it is relatively arbitrary, based on the blogs that update the most – particularly open to suggestions here. Bundle link here.

  1. BuzzMachine
  2. Jay Rosen: Public Notebook
  3. OJR
  4. Teaching Online Journalism
  5. Yelvington.com

September 20 2010

11:18

My Henry Stewart talk about ‘Blogging, Twitter and Journalism’

I’ve recorded a 48-minute presentation covering ‘Blogging, Twitter and Journalism’ for the Henry Stewart series of talks. It’s designed for journalism students and covers:

  • How blogging differs from other journalism platforms;
  • Key developments in journalism blogging history;
  • What makes a successful blog;
  • What Twitter is and how it is useful for journalists and publishers; and
  • Why RSS is central to blogging and Twitter, and how it works.

September 17 2010

14:00

This Week in Review: J-schools as R&D labs, a big news consumption shift, and what becomes of RSS

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

Entrepreneurship and old-school skills in j-school: We found out in February that New York University and the New York Times would be collaborating on a news site focused on Manhattan’s East Village, and this week the site went live. Journalism.co.uk has some of the details of the project: Most of its content will be produced by NYU students in a hyperlocal journalism class, though their goal is to have half of it eventually produced by community members. NYU professor Jay Rosen, an adviser on the project, got into a few more of the site’s particulars, describing its Virtual Assignment Desk, which allows local residents to pitch stories via a new WordPress editing plugin.

Rosen’s caution that “it is going to take a while for The Local East Village to find any kind of stride” notwithstanding, the site got a few early reviews. The Village Voice’s Foster Kamer started by calling the site the Times’ “hyperlocal slave labor experiment” and concluded by officially “declaring war” on it. GigaOM’s Mathew Ingram, on the other hand, was encouraged by NYU’s effort to give students serious entrepreneurial skills, as opposed to just churning out “typists and videographers.”

NYU’s project was part of the discussion about the role of journalism schools this week, though. PBS’ MediaShift wrapped up an 11-post series on j-school, which included an interview with Rosen about the j-school as an R&D lab and a post comparing and contrasting the tacks being taken by NYU, Jeff Jarvis’ program at the City University of New York and Columbia University. (Unlike the other two, Columbia is taking a decidedly research-oriented route.) Meanwhile, Tony Rogers, a Philadelphia-area j-prof, wrote two articles (one of them a couple of weeks ago) at About.com quoting several professors wondering whether journalism schools have moved too far toward technological skills at the expense of meat-and-potatoes journalism skills.

They weren’t the only ones: Both Teresa Schmedding of the American Copy Editors Society and Iowa State j-school director Michael Bugeja also criticized what they called a move away from the core of journalism in the country’s j-schools. “I expect to teach new hires InDesign, Quark or Twitter, MySpace, FB and how to use whatever the app of the week is, but I don’t expect to teach you what who, what, where, when, why and how means,” Schmedding wrote. TBD’s Steve Buttry countered those arguments with a post asserting that journalists need to know more about disruptive technology and what it’s doing to their future industry. “Far too many journalists and journalism school graduates know next to nothing about the business of journalism and that status quo is indefensible,” said Buttry.

A turning point in news consumption: Like most every Pew survey, the biennial study released this week by the Pew Center for the People & the Press is a veritable cornucopia of information on how people are consuming news. Tom Rosenstiel of Pew’s Project for Excellence in Journalism has some fascinating musings on the study’s headline finding: People aren’t necessarily ditching old platforms for news, but are augmenting them with new uses of emerging technology. Rosenstiel sees this as a turning point in news consumption, brought about by more tech-savvy news orgs, faster Internet connections, and increasing new media literacy. Several others — Mathew Ingram of GigaOM, Joe Pompeo of Business Insider, Chas Edwards of Digg — agreed that this development is a welcome one.

The Washington Post’s Howard Kurtz and paidContent’s Staci Kramer have quick summaries of the study’s key statistics, and DailyFinance’s Jeff Bercovici pointed out one particularly portentous milestone: For the first time, the web has eclipsed newspapers as a news source. (But, as Collective Talent noted, we still love our TV news.) Lost Remote’s Cory Bergman took a closer look at news consumption via social media, and j-prof W. Joseph Campbell examined the other side of the coin — the people who are going without news.

The Pew Internet & American Life Project also released an interesting study this week looking at “apps culture,” which essentially didn’t exist two years ago. Beyond the Book interviewed the project’s director, Lee Rainie, about the study, and the Lab gave us five applications for news orgs from the study: Turns out news apps are popular, people will pay for apps, and they consume apps in small doses.

Did social media kill RSS and press releases?: Ask.com announced last Friday that it would shut down Bloglines, the RSS reader it bought in 2005, citing a slowdown in RSS usage as Twitter and Facebook increase their domination of real-time information flow. “The writing is on the wall,” wrote Ask’s president, Doug Leeds. PaidContent’s Joseph Tartakoff used the news as a peg for the assertion that the RSS reader is dead, noting that traffic is down for Bloglines and Google Reader, and that Google Reader, the web’s most popular RSS reader, is being positioned as more of a social sharing site.

Tech writer Jeff Nolan agreed, arguing that RSS has value as a back-end application but not as a primary news-consumption tool: “RSS has diminishing importance because of what it doesn’t enable for the people who create content… any monetization of content, brand control, traffic funneling, and audience acquisition,” he wrote. Business Insider’s Henry Blodget joined in declaring RSS readers toast, blaming Twitter and Facebook for their demise. Numerous people jumped in to defend RSS, led by Dave Winer, who helped invent the tool about a decade ago. Winer argued that RSS “forms the pipes through which news flows” and suggested reinventing the technology as a real-time feed with a centralized, non-commercial subscription service.

Tech writer Robert Scoble responded that while the RSS technology might be central to the web, RSS reading behavior is dying. The future is in Twitter and Facebook, he said. GigaOM’s Mathew Ingram and media consultant Terry Heaton also defended RSS, with Ingram articulating its place alongside Twitter’s real-time flow and Heaton arguing that media companies just need to realize its value as its utility spreads across the web.

RSS wasn’t the only media element declared dead this week; Advertising Age’s Simon Dumenco also announced the expiration of the press release, replaced by the “real-time spin of Facebook and Twitter.” PR blogger Jeremy Pepper and j-prof Kathy Gill pushed back with cases for the press release’s continued use.

Twitter’s media-company move: Lots of interesting social media stuff this week; I’ll start with Twitter. The company began rolling out its new main-page design, which gives it a lot of the functions that its independently developed clients have. Twitter execs said the move indicated Twitter’s status as a more consumptive platform, where the bulk of the value comes from reading, rather than writing — something All Things Digital’s Peter Kafka tagged as a fundamental shift for the company: “Twitter is a media company: It gives you cool stuff to look at, you pay attention to what it shows you, and it rents out some of your attention to advertisers.”

GigaOM’s Mathew Ingram and venture capitalist David Pakman agreed, with Pakman noting that while Google, Facebook and Twitter all operate platforms, users deal overwhelmingly with the company itself — something that’s very valuable for advertisers. The Lab’s Megan Garber also wrote a smart post on the effect of Twitter’s makeover on journalism and information. The new Twitter, Garber writes, moves tweets closer to news articles and nudges Twitter itself from a news platform toward a broadcast platform. Ex-Twitter employee Alex Payne and Ingram (who must have had a busy week) took the opportunity to argue that Twitter as a platform needs to decentralize.

On to Facebook: The New Yorker released a lengthy profile of Facebook founder Mark Zuckerberg, and while not everyone was crazy about it (The Atlantic’s Alexis Madrigal thought it was boring and unrevealing), it gave the opportunity for one of the people quoted in it — Expert Labs director Anil Dash — to deliver his own thoughtful take on the whole Facebook/privacy debate. Dash isn’t that interested in privacy; what he is worried about is “this company advocating for a pretty radical social change to be inflicted on half a billion people without those people’s engagement, and often, effectively, without their consent.”

Elsewhere around social media and news: Mashable’s Vadim Lavrusik wrote a fantastic overview of what news organizations are beginning to do with social media, and we got closer looks at PBS NewsHour, DCist and TBD in particular.

Reading roundup: Plenty of stuff worth reading this week. Let’s get to it.

— Last week’s discussion on online traffic and metrics spilled over into this week, as the Lab’s Nikki Usher and C.W. Anderson discussed the effects of journalists’ use of web metrics and the American Journalism Review’s Paul Farhi looked at the same issue (from a more skeptical perspective). The Columbia Journalism Review’s Dean Starkman had the read of the week on the topic (or any topic, really), talking about what the constant churn of news in search of new eyeballs is doing to journalism. All of these pieces are really worth your time.

— The San Jose Mercury News reported that Apple is developing a plan for newspaper subscriptions through its App Store that would allow the company to take a 30 percent cut of all the newspaper subscriptions it sells and 40 percent of their advertising revenue. The Columbia Journalism Review’s Ryan Chittum was skeptical of the report, but Ken Doctor had nine good questions on the issue while we find out whether there’s anything to it.

— Another British Rupert Murdoch paper, News of the World, is going behind a paywall in October. PaidContent was skeptical, but Paul Bradshaw said it’ll do better than Murdoch’s other newly paywalled British paper, The Times.

— The Atlantic published a very cool excerpt from a book on video games as journalism by three Georgia Tech academics. I’m guessing you’ll be hearing a lot more about this in the next couple of years.

— Rafat Ali, who founded paidContent, gave a kind of depressing interview to Poynter on his exit from the news-about-the-news industry. “I think there’s just too much talk about it, and to some extent it is just an echo chamber, people talking to each other. There’s more talk about the talk than actual action.” Well, shoot, I’d better find a different hobby. (Seriously, though, he’s right — demos, not memos.)

— Finally, a wonderful web literacy tool from Scott Rosenberg: A step-by-step guide to gauge the credibility of anything on the web. Read it, save it, use it.

August 26 2010

18:05

10 Must-Read Sites for Hyper-Local Publishers

Here at NowSpots we're developing a new advertising platform that will let local publishers sell and publish real-time ads on their sites. In my last post here on MediaShift Idea Lab, I explained why real-time ads are a better business model for hyper-local bloggers and local publishers than AdSense or existing display ad solutions.

Since winning a 2010 Knight News Challenge award to kickstart development of our new platform, we've been busy meeting with publishers to learn more about their needs and problems. We've also been busy reading up on what's happening in the hyper-local publishing space. This week I'm going to share with you 10 sites I read on a regular basis for news, commentary, and context about business models for hyper-local bloggers and local publishers. At the end of the post are links to subscribe to them through RSS or to follow them on Twitter.

Top Ten

1. MediaGazer

MediaGazer is a semi-automated aggregator for media news. It's a dead-simple, one-page site that lists the day's top media headlines from around the web alongside links to related coverage. What's great about MediaGazer is that their algorithm makes sure they catch just about everything interesting each day, while their editorial touch keeps the front page compelling. Not every story on MediaGazer pertains to the local news game, but anything good that does will be there.

2. Nieman Journalism Lab

The Nieman Journalism Lab is a blog covering journalism's efforts to figure out its future. More than any other blog on the web, they are squarely focused on introducing new examples of "the new news" and figuring out what they might lead to. My only complaint is that I wish they'd post more. Just about everything they run is in my wheelhouse as a news startup guy.

3. Lost Remote

Lost Remote is focused on "hyper-local news, neighborhood blogs, and local journalism startups." Originally started by MSNBC.com's Cory Bergman, it is now edited by Steve Safran. Anything interesting that happens in the local news space that could impact hyper-local bloggers shows up here. Lost Remote is the TechCrunch of hyper-local bloggers. A must read.

4. Local Onliner

Peter Krasilovsky's Local Onliner collects the analysis pieces on the future of local online publishing that he writes for the Kelsey Group blog. As a vice president at BIA/Kelsey, where he works on local online commerce, Krasilovsky's perspective on hyper-local news, geo-targeted advertising and the like is worth a look for anyone who wants to understand the business behind local publishing.

5. Mashable's local section

Uber-blog Mashable devotes a post or two each month to the local space, and its coverage is picking up with the rise of group-buying sites such as Groupon and location-based social networks such as Foursquare and Gowalla. I filter down to just posts tagged "local" to sidestep the never-ending onslaught of headlines about Twitter.

6. Local SEO Guide

Local SEO Guide is a sharp blog from Andrew Shotland, an SEO consultant who specializes in local. Every hyper-local blogger needs to be aware of how findable their content is through search. Shotland's blog offers detailed rundowns of topics such as why sites like Yelp do so well in search, and it can help you better connect with readers through local search.

7. Hyperlocal Blogger

Matt McGee's Hyperlocal Blogger pulls together the latest news coverage of the hyper-local blogging space and publishes regular commentary on issues affecting neighborhood bloggers. For instance, McGee recently responded to the news that the city of Philadelphia is requiring city bloggers to buy a Business Privilege License for $300.

8. Chicago Art Magazine Transparency Pages

A bit of a hidden gem, this series of blog posts by Chicago Art Magazine's Kathryn Born covers a seven-month period in late 2009 during which she launched a collection of websites focused on the Chicago art scene. In these posts, which carry a bit of a confessional tone, she discusses how hard it is to sell ads to local galleries and her philosophy on creating quick content for the web. They're a great recounting of the trials and tribulations of starting a hyper-local web publication, and every hyper-local blogger should read them.

9. MediaShift Idea Lab

The blog you're reading right now has been a favorite of mine ever since I started Windy Citizen in 2008. I love the site for its great think-pieces about the future of news and updates from Knight News Challenge winners. We're excited to have a spot of our own now, and we still drop by regularly to see what's new. For hyper-local bloggers interested in new ideas about the space, this should be a regular stop.

10. eMedia Vitals

eMedia Vitals has an old-school name and takes an old-school approach to covering tactics and strategies for growing your digital business. Editor (and co-founder of TechnicallyPhilly.com) Sean Blanda turned me onto the site at SXSW last year, and I've since found their analysis to be relevant to people working in the local news space.

OPML File and Twitter List

These are the sites I'm reading on a regular basis to keep up with what's happening in the hyper-local space. I'm sure you have a few favorites of your own that I omitted. If so, feel free to share them with me in the comments below or via Twitter (I'm @bradflora).

I've created an OPML file that you can import to add the feeds for all these sites to Google Reader. You can find it here.
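
If you've never peeked inside one, an OPML subscription list is just a small XML document with one outline element per feed. Here's a minimal sketch of the shape; the titles and feed URLs below are placeholders rather than the actual list:

    <?xml version="1.0" encoding="UTF-8"?>
    <opml version="1.0">
      <head>
        <title>Hyper-local publishing reading list</title>
      </head>
      <body>
        <outline text="MediaGazer" type="rss" xmlUrl="http://example.com/mediagazer.xml" />
        <outline text="Nieman Journalism Lab" type="rss" xmlUrl="http://example.com/niemanlab.xml" />
        <!-- ...one outline per feed, ten in all... -->
      </body>
    </opml>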

And if you prefer reading your news through Twitter, I've created a list over on the NowSpots Twitter account that you can follow to add these folks to your Twitter feed. You can find it here.

Happy reading!


July 30 2010

17:57

Learning From Failure in Community-Building at Missouri

Education content on MediaShift is sponsored by Carnegie-Knight News21, an alliance of 12 journalism schools in which top students tell complex stories in inventive ways. See tips for spurring innovation and digital learning at Learn.News21.com.

I recently had an opportunity that is rarely handed to a journalism school professor: The chance to be a member of the inaugural class of the Reynolds Journalism Institute Fellows in the 2008-09 school year.

I already have a unique job. As an associate professor at the Missouri School of Journalism, I am also a new media director at the university-owned NBC affiliate, KOMU-TV. I teach new media and I manage its production in a professional newsroom that is staffed with students. (We have a professional promotions, production and sales department just like any other television news station.)

I had a big idea back in 2007. I wanted to find a way to bring multiple newsrooms together to make it easier for news consumers to learn about their candidates leading up to election day. I wanted to partner with the other newsrooms owned by the University of Missouri: KBIA-FM (the local NPR station) and the Columbia Missourian (the daily morning paper in town). I wanted to plan for the big election in November 2008 and had already tried a similar project during the mid-term 2006 November election season.

Smart Decision '08

In 2006, we put a lot of content into one place, but it was all hand-coded. I won't go into the nit-picky details. What I will tell you is that it was time-consuming and almost impossible to keep up to date as three newsrooms populated the site. I wanted automation and simple collaboration so the site could make it easier for news consumers to find information without worrying about where it came from. Information first, newsroom second. In the end, news consumers would end up using all of the newsrooms' information instead of just one or none.

I launched the Smart Decision '08 site and went into my RJI fellowship with a plan to complete my goal. I had already started building a new website that would collect RSS feeds of each newsroom's politically branded content. I had a small group of web managers tag each story that arrived on our site and categorize it under the race and candidate names mentioned in the news piece. It was a relatively simple process.
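
For the technically curious, here is a minimal Python sketch of that kind of feed collection and tagging, using the feedparser library. The feed URLs and candidate names are hypothetical placeholders, and on the real site the tagging was done by people rather than code; this only shows how the same filing step could be automated.

    import feedparser

    # Hypothetical politics feeds from the three partner newsrooms
    FEEDS = [
        "http://example.com/komu/politics.rss",
        "http://example.com/kbia/politics.rss",
        "http://example.com/missourian/politics.rss",
    ]

    # Hypothetical candidate tags to file stories under
    CANDIDATES = ["Jane Candidate", "John Hopeful"]

    def collect_and_tag():
        """Pull every feed and file each story under the candidates it mentions."""
        tagged = []
        for url in FEEDS:
            for entry in feedparser.parse(url).entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                tags = [name for name in CANDIDATES if name.lower() in text]
                if tags:
                    tagged.append({"title": entry.get("title"),
                                   "link": entry.get("link"),
                                   "tags": tags})
        return tagged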

Unfortunately, our site was not simple. It was not clean, and it was hand-built by students with my oversight. It did not have a welcoming user experience. It did not encourage participation. I had a vision, but I lacked the technical ability to create a user-friendly site. I figured the content would rule and people would come to it. Not a great assumption.

Back in 2008, I still had old-school thoughts in my head. I thought media could lead the masses by informing voters who were hungry for details about candidates. I thought a project's content was more important than user experience. I thought I knew what I was talking about.

We did find a way to gather up some participation on the night of the big November 2008 election. We invited the general public to a viewing party where they could watch multiple national broadcasts, eat free food and participate in a live town forum during a four-hour live webcast we produced in the Reynolds Journalism Institute building.

We brought four newsrooms together in a separate environment where we produced web-only content while each newsroom produced its own content for air or print. We had a Twitter watch desk, a blog watch desk and insights from all kinds of people in the area. A very quick video captured some of the experience of that night.

Assumptions About the Audience

But in the end, my project was a failure.

Still, without that failure, I would not have learned so much.

You see, I came into this project with the idea that I was progressive. I was thinking about the future of journalism. I was going to change it all. But it all started out with a very old view of journalism: I made assumptions about my audience.

  • I assumed people wanted the information I was collecting.
  • I assumed the online audience wanted to take the time to dig into the information I was collecting for them.
  • I assumed the audience wanted to participate in a new space I created for them.
  • I assumed the newsrooms that were partners in the project would promote the site without any prompting.

My assumptions killed my project. I had invested so much time into the project that I had to finish it. I arrived at the fellowship with a work in progress and I wasn't going to stop -- even though I could see we were not getting the public participation. I created the content and hoped participation would follow.

The truth is that things work the other way around.

But I would not have learned that without my fellowship.

I worked with an amazing team of people. Jane Stevens and Matt Thompson led me to a new perspective on community building and content collection. My biggest "a-ha moment" came when we discussed how community builders need a personal relationship with a site's first 1,000 members. I realized that my Smart Decision project was doomed to fail from the start because I did not start with my community first. I expected the community to come to me. I needed to go to them.

I also learned a major project needs two managers: one to keep up with the content and one to make sure it gets promoted. That promotion needs to happen in each individual newsroom and with the public.

Being More Agile

During my fellowship, I also learned to be more agile. These days, when I start a project, I'm ready to move on to the next idea a lot faster. I launch multiple ideas at the same time and see what floats. I also cherish the relationships I form with members of the community. Instead of creating many different sites, I'm bringing the information to where people already are, focusing on delivering it to Twitter and Facebook. I have news employees working on blogs, but most people reach those posts through Facebook; they do not go directly to the sites or through our main news web page.

I'm constantly learning as a news manager. But I will always cherish the time I had as a fellow because I was allowed to fail. The Smart Decision project was not something I could have managed while I was also in charge of a newsroom. It was an experiment that taught me how not to launch a new website.

I learned Drupal sites can be awesome if you know what you are doing. (I did not know what I was doing until it was too late.) I also learned that my job in my newsroom does not make it easy to launch major multiple-newsroom projects. I am not sure if I will do it again in 2012. I would like to, but I'll need to consult my community first.

Jennifer Reeves worked in television news for the majority of her career. In the last six years, she has moved from traditional journalist to non-traditional thinker about journalism and education. Jen is currently the New Media Director at KOMU-TV and komu.com. At the same time, she is an associate professor at the Missouri School of Journalism and was a part of the inaugural class of Reynolds Journalism Institute fellows (2008-09).



June 07 2010

15:00

When web users cross the Gladwell 10,000-hour standard

Derek Powazek has a piece that takes the Malcolm Gladwell Outliers thesis — that it takes 10,000 hours of practice to master anything — and applies it to the explosion of content brought about by the Internet:

Ladies and gentlemen, we have the internet — the biggest no-experience-required open mic night ever created. It connects us all, whether we’ve put in 10,000 hours or ten.

It’s only because of extremely fortuitous timing that the world was spared my 16-year-old Beatles impersonation. I put in those hours before everything was digital and duplicated for free, forever. Make no mistake, if MySpace had been around when I was 16, my furtive recordings would still be haunting me.

Maybe it’s only because of fortuitous timing that we even expect anyone to be good at anything now. We were spared hearing The Beatles when they were new. There’s no record of Shakespeare’s embarrassing early attempts. No MP3s of Bach’s school choir. Maybe if we were more used to seeing people suck before they get good at something, we wouldn’t expect perfection from day one.

Derek’s right. (Even though I’m a bit suspicious of the random roundness of Gladwell’s 10,000-hour number. Lots of bands play a lot of gigs without becoming the Beatles; lots of programmers spend lots of time on computers without becoming Bill Gates.) The ease with which the Internet exposes less-than-professional work forces us to reset our expectations about what makes something worth public display. That’s a problem for some old-school journalists, who think the entire universe should be filtered through a copy desk before seeing the light of day.

But what if there’s a different implication for online news? Here’s Derek again:

Suppose Gladwell is right and it really does take 10,000 hours to master something. Let’s set the bar lower. Let’s say that it takes half that time to be merely good at it. And just to be generous, let’s say half again just to not suck at something. That would mean it takes 2,500 hours of practice to just not be awful.

Now ask yourself, what have you done for 2,500 hours? That’s 104 days. 14 weeks of constant practice. Just under four months of nonstop repetition.

Very few of us have spent that much time doing anything besides sleeping or watching TV.

Well, I can think of one area where lots of people are crossing 10,000 hours of time invested: using the Internet.

And unlike watching TV — where the rewards for your couch labor amount to mastery of your TiVo and better control of your remote — after 10,000 hours online, you’re a vastly smarter Internet user than you were at the start. You’ve stopped using Internet Explorer. You’ve abandoned the embarrassing email address. Your Google-fu is finely honed. Maybe you’ve messed around with RSS. Maybe you’ve got a smartphone and know how to swim between apps. In other words, the return on time investment isn’t just important for creators of technology; it’s also important to its users, who move past early awkwardness to feeling more like natives.

One recent study estimated Internet users spend 17 hours a week online; another found that for teens the number is 31 hours. At that rate (10,000 hours at 31 hours a week works out to roughly 323 weeks), teens would get to 10,000 hours in a little over six years.

What will this mean for news? I won’t pretend to know. But I think anyone creating content online will have to think about how their products should shift as their audience gains increased mastery of the medium. Just as sites are slowly moving away from dial-up-safe designs to adjust to a broadband reality, they will have to reckon with a savvier pool of users.

Part of that would include now-basic moves like search-engine optimization and social media, since Internet veterans are less likely to simply default to a news organization’s homepage as a point of entry. Will full-text RSS become more important as more users adopt RSS or RSS-like feeds? What new navigation regimes will evolve to meet the needs of users aware of all their other options online? How will advertising evolve in a world where more people are using ad-blocking or Flash-blocking software, things previously the domain of nerds like me?

Who knows? But it’s worth remembering how much your audience is a moving target — one that is learning and practicing and getting better at this Internet thing all the time.

June 01 2010

09:10

TechCrunch: Pulse launch – are RSS news apps must-haves for the iPad?

TechCrunch reports on the launch of Pulse – the RSS-based news aggregator application created for the iPad by two US graduate students, Akshay Kothari and Ankit Gupta.

On sale for $3.99 [£2.76], the app aims to please both hardcore RSS reader users and people who are willing to pay top dollar for single-publication apps. Pulse’s home screen renders stories from multiple sources on a dynamic mosaic interface. Swipe up and down to see headlines from various sources, and right and left to browse stories from a particular source.

Full story at this link…

The app gets a favourable review from TechCrunch and lends another point to Patrick Smith’s post from last week arguing that RSS feeds beat any branded iPhone or iPad news app:

Of course, the everyday Man On The Clapham Omnibus doesn’t care or want to know about RSS, much less mobile apps that create a mobile version of their OPML file. But Journalism.co.uk readers are media professionals – and I’d wager that most of you are capable of using free or cheap software to create a mobile news experience that no branded premium app can match.


May 27 2010

17:15

SochiReporter Helps Transform Sochi in Preparation for Olympics

I recently spoke with a friend of mine here in Sochi, Russia. She is a specialist in modernizing the technological infrastructure of sanatoriums, which were the places where lucky Soviet working class heroes would be sent to rest and relax. (Think of them as health spas.)

It's a challenge to transform the Soviet-era sanatoriums. For example, her job entails computerizing the files and data and modernizing the registration of new clients. But she said it's exciting work. For her, the most enjoyable part of the job is organizing courses for the staff (doctors, waiters, janitors) who at first seem dazed and confused by the changes and new technology. Gradually, their puzzlement gives way to excitement. "How come we were doing this job manually for so many years?" they eventually ask.

I can definitely relate to her experience, as can many people who are trying to modernize different aspects of Sochi culture and society for the upcoming 2014 Winter Games. It's not just about the modernization of the sanatoriums; it's about every aspect of the locals' lifestyle and the character of the infrastructure. Of course, this is what makes this process of transformation so exciting.

Our project, SochiReporter, a hyper-local citizen news website, is working to create an archive of these changes -- an archive that is built by and for locals. It's never boring, but there is still much work and learning to be done.

Over the last several weeks we have been working at mastering our own technology. We added new features to the site, expanded the social networking component, added links to SochiReporter groups on other social networks, and will add more changes over the next two weeks. Also of note: the website is loading much faster, partly because of some back-end work and partly because of Yota, the new 4G WiMax Internet service that launched in Sochi at the end of March.

Becoming a Journalist-Entrepreneur

I have become part of the new breed of journalists-turned-entrepreneurs, and I'm finding a certain amount of pleasure in this lifestyle, crazy though it is.

First of all, I am living between two cities: Sochi and Moscow. Being in Sochi means working with contributors and the people who actually submit content to the website, and promoting the project at the local level. Moscow is a bigger source of financing, a business hub where I can meet with advertisers who might be interested in supporting SochiReporter.

Our team has recently been working on developing a sustainable business model, as the Knight Foundation grant money that enabled us to launch the project and start the experiment will soon run out.

Being an entrepreneur means being simultaneously responsive to two mobile phones, an iPad, a laptop and even a fax machine. It also means being very open to new collaborations and projects. You need to be open to taking risks, and adept at applying the knowledge you acquired in traditional media reporting to new media.

Giving Newspapers a Chance

We recently decided to start giving the local Sochi papers, which don't have an online presence, an opportunity to place their content on our site. This section is called News and it's where we mostly have content from RSS feeds. It's separate from the Reports section, which is filled with reports from citizens and includes original content.

The editor of the first Sochi paper to go on our site is extremely happy about the arrangement. He had been seeking a presence on the web. For our part, we'll see how things go and will probably partner with additional local media. However, our main goal is to provide our content to local media. We hope to expand those possibilities by enabling people to submit reports and photos via mobile phone. Right now, people aren't able to upload content using their phone, though they can read the site.

Marketing

Just a final word about marketing, as it is now one of our primary goals. With the site now built and working, we are focused on telling people about it and getting them to use it. One way of doing that is by being part of big events in the area. We were recently chosen as a media sponsor for one of the biggest annual movie festivals in Russia, Kinotavr. It will take place in Sochi from June 6 to 13.

We are the only Sochi-based media outlet among the sponsors; the rest are Moscow-based. We will receive some very cool promotion during the event, and the SochiReporter logo will appear in the Kinotavr daily newsletter, in its brochures and on its website.
