
June 27 2011

14:38

My online journalism book is now out

The Online Journalism Handbook, written with Liisa Rohumaa, has now been published. You can get it here.

I’ve been blogging throughout the process of writing the book – particularly the chapters on data journalism, blogging and UGC – and you can still find those blog posts under the tag ‘Online Journalism Book’.

Other chapters cover interactivity, audio slideshows and podcasting, video, law, some of the history that helps in understanding online journalism, and writing for the web (including SEO and SMO).

Meanwhile, I’ve created a blog, Facebook page and Twitter account (@OJhandbook) to provide updates, corrections and additions to the book.

If you spot anything in the book that needs updating or correcting, let me know. Likewise, let me know what you think of the book and anything you’d like to see added in future.


September 22 2010

10:40

Why did you get into data journalism?

In researching my book chapter I asked a group of journalists who worked with data what led them to do so. Here are their answers:

Jonathon Richards, The Times:

The flood of information online presents an amazing opportunity for journalists, but also a challenge: how on earth does one keep up with it, and make sense of it? You could go about it in the traditional way, fossicking in individual sites, but much of the journalistic value in this outpouring, it seems, comes in aggregation: in processing large amounts of data, distilling them, and exploring them for patterns. To do that – unless you’re superhuman, or have a small army of volunteers – you need the help of a computer.

I ‘got into’ data journalism because I find this mix exciting. It appeals to the traditional journalistic instinct, but also calls for a new skill which, once harnessed, dramatically expands the realm of ‘stories I could possibly investigate…’

Mary Hamilton, Eastern Daily Press:

I started coding out of necessity, not out of desire. In my day-to-day work for local newspapers I came across stories that couldn’t be told any other way. Excel spreadsheets full of data that I knew was relevant to readers if I could break it down or aggregate it up. Lists of locations that meant nothing on the page without a map. Timelines of events and stacks of documents. The logical response for me was to try to develop the skills to parse data to get to the stories it can tell, and to present it in interactive, interesting and – crucially – relevant ways. I see data journalism as an important skill in my storytelling toolkit – not the only option, but an increasingly important way to open up information to readers and users.

Charles Arthur, The Guardian:

When I was really young, I read a book about computers which made the point – rather effectively – that if you found yourself doing the same process again and again, you should hand it over to a computer. That became a rule for me: never do some task more than once if you can possibly get a computer to do it.

Obviously, to implement that you have to do a bit of programming. It turns out all programming languages are much the same – they vary in their grammar, but they’re all about making the computer do stuff. And it’s often the same stuff (at least in my ambit) – fetch a web page, mash up two sets of data, filter out some rubbish and find the information you want.

I got into data journalism because I also did statistics – and that taught me that people are notoriously bad at understanding data. Visualisation and simplification and exposition are key to helping people understand.

So data journalism is a compound of all those things: determination to make the computer do the slog, confidence that I can program it to, and the desire to tell the story that the data is holding and hiding.

I don’t think there was any particular point where I suddenly said “ooh, this is data journalism” – it’s more that the process of thinking “oh, big dataset, stuff it into an ad-hoc MySQL database, left join against that other database I’ve got, see what comes out” goes from being a huge experiment to your natural reaction.
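
To make that concrete, here is a minimal sketch of the kind of ad-hoc join Charles describes – illustrative only, not his actual code, using sqlite3 rather than MySQL so it runs self-contained, with invented table names and figures:

```python
# Illustrative sketch only, not Arthur's actual workflow: stuff one
# dataset into a throwaway database and LEFT JOIN it against another.
# sqlite3 keeps the example self-contained; the SQL idea is the same
# in MySQL. Table names and figures are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE spending (council TEXT, amount REAL);
    CREATE TABLE population (council TEXT, residents INTEGER);
    INSERT INTO spending VALUES ('Anytown', 120000), ('Otherville', 95000);
    INSERT INTO population VALUES ('Anytown', 40000);
""")

# The LEFT JOIN keeps every spending row, even where the second
# dataset has no match -- often exactly where the story hides.
for row in conn.execute("""
    SELECT s.council, s.amount, p.residents,
           s.amount / p.residents AS per_head
    FROM spending s
    LEFT JOIN population p ON s.council = p.council
"""):
    print(row)
```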

It’s not just data though – I use programming to slough off the repetitive tasks of the day, such as collecting links, or resizing pictures, or getting the picture URL and photographer and licence from a Flickr page and stuffing it into a blogpost.

Data journalism is actually only half the story. The other half is that journalists should be actively unwilling to do repetitive, machine-like tasks (say, removing line breaks from a piece of copy, or changing a link format).

Time spent doing those sorts of tasks is time lost to journalism and given up to being a machine. Let the damn machines do it. Humans have better things to do.
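
As a flavour of what handing such a task to the machine looks like, here is a short sketch – invented for illustration, not taken from Charles – of one machine-like chore: changing a link format across a piece of copy and stripping stray line breaks.

```python
# Invented example of a machine-like task: rewrite every
# Markdown-style link in a piece of copy as an HTML anchor,
# then collapse stray line breaks into spaces.
import re

copy = """Read the report [here](http://example.com/report).
It was published
last week."""

# [text](url) -> <a href="url">text</a>
html = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', copy)
# Collapse line breaks into spaces.
html = re.sub(r"\n", " ", html)
print(html)
```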

Stijn Debrouwere, Belgian information designer:

I used to love reading the daily newspaper, but lately I can’t seem to be bothered anymore. I’m part of that generation of people news execs fear so much: those that simply don’t care about what newspapers and news magazines have to offer. I enjoy being an information designer because it gives me a chance to help reinvent the way we engage and inform communities through news and analysis, both offline and online. Technology doesn’t solve everything, but it sure can help. My professional goal is simply this: make myself love news and newspapers again, and thereby hopefully get others to love them too.

September 13 2010

07:30

Podcasting: the experiences of Bagel Tech News

Bagel Tech News podcast

As part of the research into a forthcoming book on online journalism, I interviewed Ewen Rankin of independent podcast Bagel Tech News. Here are his responses in full:

The background

My background is as a commercial photographer. I started life in graphic design and quickly moved to shooting photographs for the agency at which I worked. It was kind of a lucky transition as I wasn’t much cop as a graphic artist. I took fairly low-level stuff to start with (picture business cards were all the rage in the 80s) and then moved to more commercial work, shooting the advertising shots for Pretty Polly and Golden Lady tights in about 1988.

I started broadcasting in July 2008 and after two weeks Amber MacArthur made us Podcast of the Week on the Net@Night show with Leo Laporte. Listenership rose and we began to grow.

The Daily News show was published… daily until November 2008, and then I started publishing the BOG Show with Marc Silk, which was opened by Andy Ihnatko on 30th November 2008. I removed Marc from the show at Christmas 2009 and installed a ‘Skype Wall’ in January 2010 to run a more panel-based show. More shows have been added in the intervening period and the network now has 7 active shows:

  • Bagel Tech News – 70k downloads per calendar month
  • Bagel Tech BIG – 3k downloads per week
  • Bagel Profits – no show since May due to Athos work commitments; generally around 250-500 downloads per episode
  • Bagel Tech Foto – new podcast on photography – 5 episodes produced, 250 downloads per episode
  • Bagel Tech Media – formerly the Sonic Beyond podcast – 500 downloads per episode
  • Bagel Tech Rage – formerly Tech Rage News – 250 downloads per episode
  • Bagel Tech Mac – will begin airing in September
  • Bagel Tech Law – will begin in 2011

Apart from the Daily Show, all podcasts are produced weekly.

Bagel Tech Media Group will also add non tech related shows in 2011.

Preparing the show

The Daily Show is prepared each morning at 5.30am with a trawl through around 300 stories gathered using the Firefox plugin ‘Brief’, then saved and synced as bookmarks using ‘Xmarks’. After that the chosen stories are ordered and the podcast is recorded. This is generally about 10 minutes of audio including fluffs and re-reads, edited down to between 5 and 6.5 minutes.

Then the pictures are added to the M4A version and the website is updated.

Stories are selected on the basis of whether I believe the listenership would want to know them, but I also include stories which I think they should know or could know. And every podcast has an ‘And Finally’ to sign off with a snigger.

The Weekly shows are more relaxed and there is minimal prep for these.

Tricks of the Trade… hmm. I guess I have just got more efficient at reviewing stories and creating the podcast and website. I have learnt more tools which can save me time and I am already set up to work from locations across the country. I am truly a mobile office and studio and it is rare for me to miss an episode of the Daily Show. The process is time consuming in prep more than delivery. Some mornings are hard to get motivated, others come easier.

Advice

Broadcast with enthusiasm and passion for the subject. Make sure that podcasting is your hobby first and try to make money second. If you show your financial hand too early then you will alienate listeners.

Concentrate on community. Let people feel part of the ‘X Show’ community rather than isolated listeners. Open a chatroom and live feed while you record; that interaction ensures the community develops.

Lastly, broadcast to more than 1000 people every day. It doesn’t matter if there aren’t 1000 people on the other side of the microphone – always broadcast like there are, or it will show through.

May 04 2010

08:36

Data journalism pt5: Mashing data (comments wanted)

This is a draft from a book chapter on data journalism (part 1 looks at finding data; part 2 at interrogating data; part 3 at visualisation; and part 4 at visualisation tools). I’d really appreciate any additions or comments you can make – particularly around tips and tools.

Mashing data

Wikipedia defines a mashup particularly succinctly, as “a web page or application that uses or combines data or functionality from two or more external sources to create a new service.” Those sources may be online spreadsheets or tables; maps; RSS feeds (which could be anything from Twitter tweets, blog posts or news articles to images, video, audio or search results); or anything else which is structured enough to ‘match’ against another source.

This ‘match’ is typically what makes a mashup. It might be matching a city mentioned in a news article against the same city in a map; or it may be matching the name of an author with that same name in the tags of a photo; or matching the search results for ‘earthquake’ from a number of different sources. The results can be useful to you as a journalist, to the user, or both.

Why make a mashup?

Mashups can be particularly useful in providing live coverage of a particular event or ongoing issue – mashing images from a protest march, for example, against a map. Creating a mashup online is not too dissimilar from how, in broadcast journalism, you might set up cameras at key points around a physical location in anticipation of an event, from which you will later ‘pull’ live feeds: in a mashup you are doing the same thing, only in a virtual space rather than a physical one. So, instead of setting up a feed at the corner of an important junction, you might decide to pull a feed from Flickr of any images that are tagged with the words ‘protest’ and ‘anti-fascist’.
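
By way of illustration, here is a minimal Python sketch of that virtual ‘camera’: it pulls Flickr’s public photo feed for tags like those above using the feedparser library (the tag spellings are illustrative, and no API key is needed for the public feed):

```python
# Minimal sketch of the virtual 'camera': a live pull from Flickr's
# public photo feed for given tags, via the feedparser library
# (pip install feedparser). Tag spellings are illustrative.
import feedparser

FEED = ("https://api.flickr.com/services/feeds/photos_public.gne"
        "?tags=protest,antifascist&tagmode=all")

for entry in feedparser.parse(FEED).entries:
    print(entry.title, "-", entry.link)
```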

Some web developers have built entire sites that are mashups. Twazzup (twazzup.com), for example, will show you a mix of Twitter tweets, images from Flickr, news updates and websites – all based on the search term you enter. And Friendfeed (friendfeed.com) pulls in data that you and your social circle post to a range of social networking sites, and displays it all in one place.

Mashups also provide a different way for users to interact with content – either by choosing how to navigate (for instance by using a map), or by inviting them to input something (for instance, a search term, or selecting a point on a slider). The Super Tuesday YouTube/Google Maps mashup, for instance, provided an at-a-glance overview of what election-related videos were being uploaded where across the US.

Finally, mashups offer an opportunity for juxtaposing different datasets to provide fresh, sometimes ongoing, insights. The MySociety/Channel 4 project Mapumental, for example, combines house price data with travel information and data on the ‘scenicness’ of different locations to provide an interactive map of a location which the user can interrogate based on their individual preferences.

Mashup tools

Like so many aspects of online journalism, the ease with which you can create a mashup has increased significantly in recent years. An increase in the number and power of online tools, combined with the increasing ‘mashability’ of websites and data, means that journalists can now create a basic mashup through the simple procedures of drag-and-drop or copy-and-paste.

A simple RSS mashup, which combines the feeds from a number of different sources into one, for example, can now be created using an RSS aggregator such as xFruits (xfruits.com) or Jumbra (jumbra.com).
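
If you want to see what such an aggregator is doing under the bonnet, here is a minimal hand-rolled equivalent in Python using the feedparser library – the feed URLs are placeholders for whichever sources you care about:

```python
# Hand-rolled sketch of what an RSS aggregator does: merge several
# feeds into one list, newest first. The URLs are placeholders.
import feedparser

FEEDS = [
    "http://feeds.bbci.co.uk/news/rss.xml",
    "https://www.theguardian.com/world/rss",
]

entries = []
for url in FEEDS:
    entries.extend(feedparser.parse(url).entries)

# Sort by publication date where the feed supplies one.
entries.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)

for e in entries[:20]:
    print(e.get("published", "undated"), "-", e.title)
```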

Likewise, you can mix two maps together using the website MapTube (maptube.org) which also contains a number of maps for you to play with.

And if you want to mix two sources of data into one visualisation the site DataMasher (datamasher.org) will let you do that – although you’ll have to make do with the US data that the site provides. Google Public Data Explorer (google.com/publicdata) is a similar tool which allows you to play with global data.

But perhaps the most useful tool for news mashups is Yahoo! Pipes (pipes.yahoo.com).

Yahoo! Pipes allows you to choose a source of data – it might be an RSS feed, an online spreadsheet or something that the user will input – and do a variety of things with it. Here are just some of the basic things you might do:

  • Add it to other sources
  • Combine it with other sources – for instance, matching images to text
  • Filter it
  • Count it
  • Annotate it
  • Translate it
  • Create a gallery from the results
  • Place results on a map
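
To give a flavour of what ‘filter’ and ‘count’ mean in practice, here is a rough Python equivalent of those two modules applied to a single feed – again, the feed URL and keyword are placeholders:

```python
# Rough, hand-rolled equivalent of two Pipes modules -- 'filter' and
# 'count' -- applied to one feed. Feed URL and keyword are placeholders.
import feedparser

entries = feedparser.parse("http://feeds.bbci.co.uk/news/rss.xml").entries

keyword = "earthquake"
matches = [e for e in entries
           if keyword in (e.get("title", "") + " " + e.get("summary", "")).lower()]

print(len(matches), "items mention", keyword)
for m in matches:
    print("-", m.title)
```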

You could write a whole book on how to use Yahoo! Pipes – indeed, people have – so we will not cover the practicalities of using all of those features here. There are also dozens of websites and help files devoted to the site (which you should explore). Below, however, is a short tutorial to introduce you to the website and how it works – this is a good way to understand how basic mashups work, and how easily they can be created.

Mashups and APIs

Although there are a number of easy-to-use mashup creators listed above, really impressive mashups tend to be written by people with knowledge of programming languages, and use APIs (Application Programming Interfaces), which allow websites to interact with other websites. The launch of the Google Maps API in 2005, for example, has been described as a ‘huge tipping point’ in mashup history (Duvander, 2008) as it allowed web developers to ‘mash’ countless other sources of data with maps. Since then it has become commonplace for new websites, particularly in the social media arena, to launch their own APIs in order to allow web developers to do interesting things with their feeds and data – not just mashups, but applications and services too.

If you want to develop a particularly ambitious mashup it is likely that you will need to teach yourself some programming skills, and familiarise yourself with some APIs (the APIs of Twitter, Google Maps and Flickr are good places to start).
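
As a taste of what working directly with an API involves, here is a hedged sketch using Flickr’s photo-search method to fetch geotagged photos you could then plot on a map. YOUR_API_KEY is a placeholder, and you should check Flickr’s API documentation for the current parameter names:

```python
# Hedged sketch of a first API call: Flickr's photo-search method,
# asking for geotagged photos. YOUR_API_KEY is a placeholder; check
# the Flickr API documentation for current parameters.
import json
import urllib.request

URL = ("https://api.flickr.com/services/rest/"
       "?method=flickr.photos.search&api_key=YOUR_API_KEY"
       "&tags=protest&has_geo=1&extras=geo"
       "&format=json&nojsoncallback=1")

data = json.load(urllib.request.urlopen(URL))
for photo in data["photos"]["photo"]:
    # latitude/longitude are what you would hand to a mapping API
    print(photo["title"], photo["latitude"], photo["longitude"])
```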

Box-out: Anatomy of a feed

The image below from ReadWriteWeb shows the code behind a simple Twitter update. It includes information about the author, their location, whether the update was a reply to someone else, what time and where it was created, and lots more besides. Each of these values can be used by a mashup in various ways – for example, you might match the author of this tweet with the author of a blog or image; you might match its time against other things being published at that moment; or you might use their location to plot this update on a map.

While the code can be intimidating, you do not need to understand programming in order to be able to do things with it. Of course, it will help if you do…
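
To show how little programming is needed, here is a short Python sketch that pulls those values out of a tweet’s code. The sample data is invented to mirror the fields described in the ReadWriteWeb image, not a real update:

```python
# Invented sample mirroring the fields in the ReadWriteWeb image;
# not a real tweet. Pulling values out needs no more than this.
import json

raw = """{
  "text": "Tremor felt across the city",
  "created_at": "Tue May 04 08:36:00 +0000 2010",
  "in_reply_to_screen_name": null,
  "user": {"screen_name": "examplereporter", "location": "London"},
  "geo": {"coordinates": [51.5074, -0.1278]}
}"""

tweet = json.loads(raw)

# Each extracted value is a potential 'match' for a mashup:
print("author:  ", tweet["user"]["screen_name"])    # match against a blog author
print("when:    ", tweet["created_at"])             # match against other feeds
print("where:   ", tweet["geo"]["coordinates"])     # plot on a map
print("reply to:", tweet["in_reply_to_screen_name"])
```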

Anatomy of a Twitter feed

January 19 2010

09:47

Technology is not a strategy: it’s a tool

Here’s another draft section from the book chapter on UGC I’m currently writing which I’d welcome your input on. I’m particularly interested in any other objectives you can think of that news organisations have for using UGC – or the strategies adopted to achieve those.

A common mistake made when first venturing into user generated content is to focus on the technology, rather than the reasons for using it. “We need to have our own social network!” someone shouts. But why? And, indeed, how do you do so successfully?

A useful framework to draw on when thinking about how you approach UGC is the POST process for social media strategy outlined by Forrester Research (Bernoff, 2007). This involves identifying:

  • People: who are your audience (or intended audience), and what social media (e.g. Facebook, blogs, Twitter, forums, etc.) do they use? Equally important, why do they use social media?
  • Objectives: what do you want to achieve through using UGC?
  • Strategy: how are you going to achieve that? How will relationships with users change?
  • Technology: only when you’ve explored the first three steps can you decide which technologies to use

Some common objectives for UGC and strategies associated with those are listed below:

Users spend longer on our site:
  • Give users something to do around content, e.g. comments, vote, etc.
  • Find out what users want to do with UGC and allow them to do that on-site
  • Acknowledge and respond to UGC
  • Showcase UGC on other platforms, e.g. print, broadcast
  • Create a positive atmosphere around UGC – prevent aggressive users scaring others away
Attract more users to our site:
  • Help users to promote their own and other UGC
  • Allow users to cross-publish UGC from our site to others and vice versa
  • Allow users to create their own UGC from our own raw or finished materials
Get to the stories before our competitors:
  • Monitor UGC on other sites
  • Monitor mentions of keywords such as ‘earthquake’, etc.
  • Become part of and contribute to online UGC communities
  • Provide live feeds pulling content from UGC sites
Increase the amount of content on our site:
  • Make it easy for users to contribute material to the site
  • Make it useful
  • Make it fun
  • Provide rewards for contributing – social or financial
Improve the editorial quality of our work:
  • Provide UGC space for users to highlight errors, contribute updates
  • Ensure that we attract the right contributors in terms of skills, expertise, contacts, etc.
  • Involve users from the earliest stages of production

Can you add any more? What strategies have you used around UGC?

January 15 2010

14:27

What is User Generated Content?

The following is a brief section from a book I’m writing on online journalism. I’m publishing it here to invite your thoughts on anything you think I might be missing…

There is a long history of audience involvement in news production, from letters to the editor and readers’ photos, to radio and television phone-ins, and texts from viewers being shown at the bottom of the screen.

For many producers and editors, user generated content is seen – and often treated – as a continuation of this tradition. However, there are two key features of user generated content online that make it a qualitatively different proposition.

Firstly, unlike print and broadcast, on the web users do not need to send something to the mainstream media for it to be distributed to an audience: a member of the public can upload a video to YouTube with the potential to reach millions. They can share photos with people all over the world. They can provide unedited commentary on any topic they choose, and publish it, regularly, on a forum or blog.

Quite often they are simply sharing with an online community of other people with similar interests. But sometimes they will find themselves with larger audiences than a traditional publisher because of the high quality of the material, its expertise, or its impact.

Indeed, one of the challenges for media organisations is to find a way to tap into blog platforms, forums, and video and photo sharing websites, rather than trying to persuade people to send material to their news websites as well. For some this has meant setting up groups on the likes of Flickr, LinkedIn and Facebook to communicate with users on their own territory.

The second key difference with user generated content online is that there are no limitations on the space that it can occupy. Indeed, whole sites can be, and are, given over to your audience. The Telegraph, Sun and Express all host social networks where readers can publish photos and blog posts, and talk on forums. The Guardian’s Comment is free website provides a platform where dozens of non-journalist experts blog about the issues of the day. And an increasing number of regional newspapers provide similar spaces for people to blog their analysis of local issues under their news brand, while numerous specialist magazines host forums with hundreds of members exchanging opinions and experiences every day. On the multimedia side, Sky and the BBC provide online galleries where users can upload hundreds of photos and videos.

The term User Generated Content itself is perhaps too general to be particularly useful to journalists. It can refer to anything from a comment posted by a one-time anonymous website visitor, to a 37-minute documentary that one of your readers spent ten years researching. The most accurate definition might simply be that user generated content is “material your organisation has not commissioned and paid for” – in which case, most of the time when we’re talking about UGC, we need to talk in more specific terms.

