
June 20 2013

21:45

Making of a Prince: Prince of Petworth

Dan Silverman

This blog post is about Making of a Prince: Prince of Petworth, a NetSquared DC event organized by Roshani Kothari. The event took place on Tuesday, June 18, 2013, from 7-9 pm at the Affinity Lab in DC.

read more

Tags: blogging DC

May 18 2013

08:13

Why I stopped working with print publishers (for a while)

Scraping for Journalists book

This was first published on the BBC College of Journalism website:

I have just spent 10 months publishing an ebook. Not ‘writing’, or ‘producing’, but 10 months publishing. Just as the internet helped flatten the news industry – making reporters into publishers and distributors – it has done the same to the book industry. The question I wanted to ask was: how does that change the book?

Having written books for traditional publishers before, my plunge into self-publishing was prompted when I decided I wanted to write a book for journalists about scraping: the technique of grabbing and combining information from online documents.

There was a time when self-publishing was for those who couldn’t get themselves printed. Increasingly, however, it’s for those who cannot wait to. This was just such a case, with classic symptoms: a timely subject that is prone to change; a small market (or so I thought); and a dispersed and knowledgeable audience.

To carry it through I turned to the self-publishing website Leanpub, having seen what my Birmingham City University colleague Andrew Dubber had been doing with the service. Most ebook services offer the timeliness of ebook publishing, but Leanpub had something else: agility.

‘Agile development’ is a popular concept in technology development: it is the idea that, rather than launching a ‘finished’ product upon the world, you should instead launch something part-finished and develop it in response to user feedback.

In other words, it is better to see how people actually use something and respond to that, than to assume you know what they will use it for. My ebook was designed to be used – but would people use it how I imagined?

So, in July 2012 I put up a page announcing the imminent publication of the book. Users could suggest how much they might be prepared to pay. Immediately, I had some indication of suitable pricing. Free market research.

When the first two chapters were published, I started with a cheap price: readers were, after all, taking a gamble on the content that followed. You might also argue that these ‘early adopters’ of the book would be key to its continued success. Why discount a book that has grown old, when you can discount one that isn’t even finished yet?

I published a new chapter every week for the first few months. People who had bought the book would receive an email alerting them to the new content to download. An accompanying Facebook page, and my own Twitter account, helped provide other platforms for announcements, but also reader feedback.

One reader told me about idiosyncrasies in how tools worked in different countries: I added additional notes in the books. Others told me how they used links: I changed the way that I formatted them. Readers suggested alternative solutions to problems outlined in one chapter – and I added those at the end of that chapter.

The book evolved out of that call-and-response, including usage data: which formats were most popular; how pricing affected buying behaviour; what languages might be best for future translations. It has combined the best elements of blogging (readers as editors; iterative writing; analytics) with the best of books (comprehensiveness; structure).

When I set out to write it, I thought there might be barely 100 people in the world who would want to buy it. As I began that final chapter, it had sold five times that – the rate of a mildly successful textbook. This has genuinely shocked me. No publisher would have guessed that market existed. Even if they wanted to bet on it, they couldn’t have distributed the books effectively enough.

So this is the book industry in the internet age: not only publishing without delays for typesetting, printing, or distribution – but before a book is even finished. And is it finished? Not quite: I have the Kindle Store edition and the print on demand version to do now…



August 16 2012

08:27

Hyperlocal Voices: Matt Brown, Londonist

The fifth in our new series of Hyperlocal Voices explores the work done by the team behind the Londonist. Despite having a large geographic footprint – Londonist covers the whole of Greater London – the site is full of ultra-local content, as well as featuring stories and themes which span the whole of the capital.

The site is run by two members of staff and a raft of volunteers; Editor Matt Brown gave Damian Radcliffe an insight into its breadth and depth.

1. Who were the people behind the blog?

Everyone in London! We’re a very open site, involving our readers in the creation of many articles, especially the imagery. But more prosaically, we have an editorial team of 5 or 6 people, plus another 20 or so regular contributors. I act as the main content editor for the site.

We’re more than a website, though, with a weekly podcast (Londonist Out Loud, ably presented and produced by N Quentin Woolf), a separate Facebook presence, a daily e-newsletter, 80,000 Twitter followers, the largest Foursquare following in London (I think), a Flickr pool with 200,000 images, several e-books, occasional exhibitions and live events every few weeks. The website is just one facet of what we do.

2. What made you decide to set up the blog?

I actually inherited it off someone else, but it was originally set up as a London equivalent of certain sites in the US like Gothamist and Chicagoist, which were riding the early blogging wave, providing news and event tips for citizens. There was nothing quite like it in London, so my predecessor wanted to jump into the gap and have some fun.

3. When did you set up the blog and how did you go about it?

It dates back to 2004, when it was originally called the Big Smoker. Before too long, it joined the Gothamist network, changing its name to Londonist.

We now operate independently of that network, but retain the name. It was originally set up on the Movable Type publishing platform, but we moved to WordPress a couple of years ago.

4. What other blogs, bloggers or websites influenced you?

Obviously, the Gothamist sites originally. But we’re now more influenced by the wonderful ecosystem of London blogs out there, all offering their own take on life in the capital.

The best include Diamond Geezer (an incisive and often acerbic look at London), Ian Visits (a mix of unusual site visits and geeky observation) and Spitalfields Life (a daily interview with a local character). These are just three of the dozens of excellent London sites in my RSS reader.

5. How did – and do – you see yourself in relation to a traditional news operation?

Complementary rather than competitors. We cover three or four news stories a day, sometimes journalistically, but our forte in this area is more in commentary, features and reader involvement around the news.

And news is just a small part of what we do — most of the site is event recommendation, unusual historical insights, street art, food and drink, theatre reviews and the like. As an example of our diversity, a few months back we ran a 3,000-word essay on the construction of Hammersmith flyover by an engineering PhD candidate, and the very next item was about a beauty pageant for chubby people in Vauxhall.

6. What have been the key moments in the blog’s development editorially?

I think most of these would be technologically driven. For example, when Google mapping became possible, our free wifi hotspots and V2 rocket maps greatly increased site traffic.

Once Twitter reached critical mass we were able to reach out to tens of thousands of people, both for sourcing information for articles and pushing our finished content.

The other big thing was turning the site into a business a couple of years ago, so we were able to bring a little bit of money in to reinvest in the site. The extra editorial time the money pays for means our output is now bigger and better.

7. What sort of traffic do you get and how has that changed over time?

We’re now seeing about 1.4 million page views a month. It’s pretty much doubling year on year.

8. What is / has been your biggest challenge to date?

Transforming from an amateur site into a business.

We started taking different types of advertising, including advertorial content, and had to make sure we didn’t alienate our readers. It was a tricky tightrope, but I’d hope we’ve done a fairly good job of selecting paid-for content only if it’s of interest to a meaningful portion of our readers, and then making sure we’re open and clear about what is sponsored content and what is editorially driven.

9. What story, feature or series are you most proud of? 

I’m rather enjoying our A-Z pub crawl at the moment, and not just because of the booze.

Basically, we pick an area of town each month beginning with the next letter of the alphabet (so, Angel, Brixton, City, Dalston, etc.). We then ask our readers to nominate their favourite pubs and bars in the area, via Twitter, Facebook or comments.

We then build a Google map of all the suggestions and arrange a pub crawl around the top 4.

Everyone’s a winner because (a) we get a Google-friendly article called, for example, ‘What’s the best pub in Farringdon?’, with a map of all the suggestions; (b) we get the chance to use our strong social media channels to involve a large number of people – hundreds of votes every time; (c) we get the chance to meet some of our readers, who are invited along on the pub crawl, and who get a Londonist booze badge as a memento; and (d) we get a really fun night out round some very good pubs.

The next part (G for Greenwich) will be announced in early September.

10. What are your plans for the future?

We’re playing around with ebooks at the moment, as a way to sustain the business directly through content. We’ve published a book of London pub crawls (spotting a theme here?), and a history of the London Olympics by noted London author David Long. Our next ebook will be a collection of quiz questions about the capital, drawn from the numerous pub quizzes we’ve run over the years.

Basically, we’re looking to be the best organisation for finding out about London in any and every medium we can get our hands on.


05:05

Evan Williams, Biz Stone reveal 'Medium' - Jalees Rehman: But do we need it?

CNN :: There's yet another way to post writing and photos and share them with other people online. Medium is a new blogging tool for people who feel constrained by Twitter and overwhelmed by Blogger or Tumblr. Its founders are Evan Williams and Biz Stone, who co-founded Twitter and Blogger, so they know about blogging, both long and short form.

A report by Heather Kelly, edition.cnn.com

Discussed here:

Fragments of Truth :: When we look at our Twitter feeds, Facebook updates or the millions of blog posts that are generated, it is difficult to claim that the quality of information being exchanged is improving. However, improving the quality of exchanged information seems to be a rather lofty goal and it is not clear that a new platform will necessarily achieve that. "Medium" will apparently rely on two key principles: Rating and Organizing.

[Jalees Rehman:] Do we need another information sharing platform?

An opinion piece by Jalees Rehman, fragments-of-truth.blogspot.de

Jalees Rehman on Twitter

August 15 2012

15:42

13 ways of looking at Medium, the new blogging/sharing/discovery platform from @ev and Obvious

[With apologies to Wallace Stevens, the finest poet to ever serve as vice president of the Hartford Livestock Insurance Company.]

I.

Medium is a new online publishing platform from Obvious Corp. It launched yesterday. Obvious is the most recent iteration of the company that created Blogger, Odeo, and Twitter. Blogger was the outfit that, until it was bought up by Google, did the most to enable the early-2000s blogging boom. Odeo was a podcasting service that never really took off — 20 percent ahead of its time, 80 percent outflanked by Apple. Twitter — well, you’ve heard of Twitter.

Ev Williams, the key figure at every stage, tweeted about Medium yesterday in a way that slotted it right into the evolutionary personal-publishing chain he and his colleagues have enabled: Let’s try this again!

II.

Medium has been described as “a cross between Tumblr and Pinterest.” There’s some truth to that, in terms of presentation. Like Tumblr, it relies on artfully constructed templates for its structural power; like Pinterest, it’s designed to be image-heavy. But those surface issues, while interesting, are less consequential than the underlying structure of Medium, which upends much of how we think about personal publishing online.

III.

When the Internet first blossomed, its initial promise to media was the devolution of power from the institution to the individual. Before the web, reaching an audience meant owning a printing press or a broadcast tower. It was resource-intensive, and those resources tended to congeal around companies — organizations that had newsrooms, yes, but also human resource departments, advertising sales staffs, and people to man the phones when your paper was thrown into the bushes (we’re very sorry about that, Mrs. Johnson, we’ll be happy to credit your account).

The web, by reducing the price of potential worldwide reach to basic knowledge of [1996: Unix and <table> tags; 1999: how to input FTP credentials; 2005: how to come up with a unique login and password; 2010: how to stay under 140 characters], eliminated, at least in theory, the need for organizations. (Vide Shirky.)

IV.

In theory. In reality, organization still had some enormous advantages. Organizations are sustainable; they outlive the vagaries of human attention. Some individuals flourished in the newly democratic blogosphere. But over time, people got bored, got new jobs, found new interests, or otherwise reached the limits of what people-driven, individual-driven publishing could accomplish for them. The political blogosphere — the cacophony of individual voices on both left and right circa, say, 2004 — evolved toward institutions, toward Politico and TPM and The Blaze and HuffPo and the like.

Personal publishing is like voting. In theory, it’s the very definition of empowerment. In reality, it’s an excellent way for your personal shout to be cancelled out by someone else’s shout.

V.

That was when a few smart people realized that there was a balance to be found between the organization and the individual. The individual sought self-expression and an audience; the organization sought sustainability and cash money. Louie, I think this is the beginning of a beautiful friendship.

So Facebook built a way for people to express themselves (by providing free content) to an audience (through their self-defined network of friends), while selling ads around it all. It’s a pretty good business.

So Twitter (Ev, Jack, and crew) built a way for people to express themselves, in a format that was genius in its limitations and in its old-media model of subscribe-and-follow — again, transformed from institutions to individuals. It’s not as good a business as Facebook, probably, but it’s still a pretty good business.

So Tumblr, Path, Foursquare, and a gazillion others have tried to pull off the same trick: Serve users by helping them find an outlet for personal expression, then build a business around those users’ collective outputs. It’s publishing-as-platform, and it’s the business model du jour in this unbundled, rebundled world.

VI.

What’s most radical about Medium is that it denies authorship.

Okay, maybe not denies authorship — people’s names are right next to their work, after all. But it degrades authorship, renders it secondary, knocks it off its pedestal.

The shift to blogging created a wave of new individual media stars, but in a sense it just shifted traditional media brands to a new, personal level. Instead of reading The Miami Herald or Newsweek, you read Jason Kottke or John Gruber. So long, U.S. News; hello, Anil Dash. They were brands in the sense that your attraction to their work was tied to authorship — you wanted to see what Lance Arthur or Dean Allen or Josh Marshall or Ezra Klein was going to write next. The value was tied to the work’s origin, its creator.

And while social networks allowed that value to be spread, algorithmically, much wider, the proposition was much the same. You were interested in your Facebook news feed because it was produced by your friends. You were interested in your Twitter stream because you’d clicked “Follow” next to every single person appearing in it.

VII.

Degrading authorship is something the web already does spectacularly well. Work gets chopped and sliced and repurposed. That last animated GIF you saw — do you know who made it? Probably not. That infonugget you saw on Gawker or The Atlantic — did it start there? Probably not. Sites like Buzzfeed are built largely on reshuffling the Internet, rearranging work into streams and slideshows.

It’s been a while since auteur theory made sense as an explanation of the web. And you know what? We’re better for it. In a world of functionally infinite content, relying on authorship doesn’t scale. We need people to mash things up, to point things out, to sample, to remix.

VIII.

Where Medium zags is in structuring its content around what it calls “collections.” Here’s Ev:

Posting on Medium (not yet open to everyone) is elegant and easy, and you can do so without the burden of becoming a blogger or worrying about developing an audience. All posts are organized into “collections,” which are defined by a theme and a template.

The burden of becoming a blogger or worrying about developing an audience. That’s a real issue, right? I’ve talked to lots of journalists who want to have some outlet for their work that doesn’t flow through an assigning editor. But when I suggest starting a blog, The Resistance begins. I don’t know how to start a blog. If I did, it’d be ugly. Or: I’d have to post all the time to keep readers coming back. I don’t want to do that. Starting a blog means, for most, committing to something — to building a media brand, to the caring and feeding of an audience, to doing lots of stuff you don’t want to do. That’s why ease of use — the promise of Facebook, the promise of Twitter, the promise of Tumblr — has been such a wonderful selling point to people who want to create media without hassle. Every single-serving Tumblr, every Twitter account updated sporadically, every Facebook account closed to only a few friends speaks the same message: You can do this, it’s simple, don’t stress, you’ll be fine.

IX.

So Medium is built around collections, not authors. When you click on an author’s byline on a Medium post, it goes to their Twitter feed (Ev synergy!), not to their author archive — which is what you’d expect on just about any other content management system on the Internet. (The fact we call them content management systems alone tells you the structural weight that comes from even the lightest personal publishing systems.) The author is there as a reference point to an identity layer — Twitter — not as an organizing principle.

As Dave Winer noted, Medium does content categorization upside down: “Instead of adding a category to a post, you add a post to a category.” He means collection in Medium-speak, but you get the idea: Topic triumphs over author. Medium doesn’t want you to read something because of who wrote it; Medium wants you to read something because of what it’s about. And because of the implicit promise that Medium = quality.

(This just happens to be promising from a business-model perspective. Who needs silly content contributors asserting authorial privilege when the money starts to flow? Demoting the author privileges the platform, which is nice if you own the platform.)

X.

At one level, Medium is just another publishing platform (join the crowd): You type in a title, some text, maybe a photo if you want, hit “Publish” and out comes a “post,” whatever that means these days, on a unique URL that you can share with your friends. (And let me just say, as a Blogger O.G. from the Class of ’99, that Medium’s posting interface brought back super-pleasant memories of Blogger’s old two-pane interface. Felt like the Clinton years again.)

XI.

Ev writes that a prime objective of Medium is increased quality: “Lots of services have successfully lowered the bar for sharing information, but there’s been less progress toward raising the quality of what’s produced.” That’s probably true: There are orders of magnitude more content published every day than was the case in 1999, when Blogger launched as a Pyra side project. The mass of quality content is much higher too, of course, but it’s surrounded by an even-faster-growing mass of not-so-great (or at least not-so-great-to-you) content.

Medium takes a significant step in that direction by violating perhaps the oldest blogging norm: that content appears in reverse-chronological order, newest stuff up top, flowing forever downward into the archives. Reverse chron has been key to blogging since Peter Merholz made up the word. (Older than that, actually — back to the original “What’s New” page at NCSA in 1993.) For the pleasure centers in the brain that respond to “New!,” reverse chron was a godsend — even if traditional news organizations were never quite comfortable with it, preferring to curate their own homepages through old-fashioned ideas like, you know, editorial judgment.

Medium believes in editorial judgment — but everyone’s an editor. Like the great social aggregators (Digg is dead, long live Digg), Medium relies on user voting to determine what floats to the top of a collection and what gets dugg down the bottom. (A reverse chron view is available, but not the default.) It’ll be interesting to see how that works once Medium is really a working site: Will a high-rated story stick to the top of a collection for weeks, months, or years, forever pushing new stuff down? Will there be any way for someone visiting a collection to see what’s new since she was last there? The tension between what’s good and what’s new is a long-standing one for online media, and privileging either comes with drawbacks — new material never reaching an audience, or good stuff being buried beneath something inconsequential posted 20 minutes later.

Considering Obvious Corp.’s heritage in Blogger and Twitter — both of which privilege reverse chron, Twitter existentially so — it’s interesting to see Ev & Co. thinking that a push for quality might entail a retreat from the valorization of newness.

XII.

There’s been a lot of movement in the past few months toward alternative, “quality” platforms for content on the web. Branch is based on the idea that web comments are shit and that you have to create a separate universe where smart people can have smart conversations. App.net, the just-funded paid Twitter alternative, is attractive to at least some folks because it promises a reboot of the social web without the “cockroaches” — you know, stupid people. Svbtle, an invite-only blogging platform, is aimed only at those who “strive to produce great content. We focus on the writing, the news, and the ideas. Everything else is a distraction.”

This new class of publishing platforms, like Medium, is beautiful — they share a stripped-down aesthetic that evokes the best of the early web (post-<blink> tag, pre-MySpace) modernized with nice typography, lovely textures, and generous white space. (Medium, in particular, seems to luxuriate in giant FF Tisa, evocative of Jeffrey Zeldman’s huge-type redesign back in May.)

This new class has also been criticized with a variation on the white flight argument — the idea that the privileged flee common spaces and platforms once they stop being solely the realm of an elite and become too popular. (Vide danah boyd. Also vide your favorite indie band, the first time you heard them on the radio.)

For (just) a moment, strip away the political implications of that critique: What each of these sites argues, implicitly, is that the web norms that we’ve evolved over the past decade err toward crassness and ugliness. That advertising — which all these sites lack, and which is proving to be less-than-sufficiently-remunerative for lots of “quality” online media — is an uninvited guest in our reading experiences. That the free-for-all of a comments thread creates broken-windows-style chaos. That the madness of the web might be tamed through better tools and better platforms. That the web’s pressure to Always Keep Posting New Stuff leads to a lot of dumb stuff being posted. It’s a critique of pageview chasing, a critique of linkbait, a critique of content farms, a critique of SEO’d headlines — a yearning for something more authentic, whatever the hell that means.

I think we’d all like to know what that means. And how to get there.

XIII.

Is Medium the route there? I’m skeptical.

I’m unclear who, beyond an initial crowd of try-anything-once types, will want to publish via Medium, as lovely as it is. Or at least I’m unclear on how many of them there are. The space Medium, er, mediates is between two poles. On one side you’ve got people who want to hang out a shingle online and own their work in every possible sense. On the other, you’ve got people who are happy in the friendly confines of Facebook and Twitter, places where they can reach their friends effortlessly and not worry about writing elegant prose. Is there an audience between those two poles that’s big enough to build something lasting? Is this Blogger or Twitter, or is it Odeo?

But even if Medium isn’t a hit, however that gets defined these days, I think Ev & Co. are onto something here. There are seeds of a backlash against the beautiful chaos the web hath wrought, the desire for a flight to quality. There will be new ways beyond ease of use to harness the creative powers of the audience. And there will be new ways to structure content discovery that go beyond branding authorship and recommendation engines. Those trends are real, and whatever happens to Medium, they’ll impact everyone who publishes online.

Blackbird photo by Duncan Brown used under a Creative Commons license.

August 11 2012

06:48

Content games: How to be a tech blogger

Interesting essay, no endorsement.

Kernel Mag :: Every fledgling tech blogger should receive these fourteen formulaic post suggestions for slow news days, or months with low site traffic. Each of these posts, if written well, is guaranteed to garner many unique pageviews, likes, retweets, shares and comments. Eagle-eyed observers of the industry will quickly recognise the perennial relevance of these templates and the ease with which they can be applied to fit any news item.

An essay by Ezra Butler, www.kernelmag.com

HT: Evgeny Morozov

Tags: blogging

August 10 2012

14:33

Startup Skyscrpr makes direct ad sales easier for bloggers

TechCrunch :: Selling ads online isn’t easy and unless you have a site with a large audience, chances are the major advertising networks aren’t interested in working with you. Direct ad sales are often an attractive option for smaller blogs and online publications, but managing them can be a major hassle. Skyscrpr, a new startup launching out of Vancouver’s GrowLab accelerator, wants to make direct ad sales easy for publishers.

A report by Frederic Lardinois, techcrunch.com

August 09 2012

12:19

Two reasons why every journalist should know about scraping (cross-posted)

This was originally published on Journalism.co.uk – cross-posted here for convenience.

Journalists rely on two sources of competitive advantage: being able to work faster than others, and being able to get more information than others. For both of these reasons, I love scraping: it is both a great time-saver, and a great source of stories no one else has.

Scraping is, simply, getting a computer to capture information from online sources. They might be a collection of webpages, or even just one. They might be spreadsheets or documents which would otherwise take hours to sift through. In some cases, it might even be information on your own newspaper website (I know of at least one journalist who has resorted to this as the quickest way of getting information that the newspaper has compiled).

In May, for example, I scraped over 6,000 nomination stories from the official Olympic torch relay website. It allowed me to quickly find both local feelgood stories and rather less positive national angles. Continuing to scrape also led me to a number of stories which were being hidden, while having the dataset to hand meant I could instantly pull together the picture of a single day on which one unsuccessful nominee would have run, and I could test the promises made by organisers.

ProPublica scraped payments to doctors by pharma companies; the Ottawa Citizen ran stories based on its scrape of health inspection reports. In Tampa Bay they run an automatically updated page on mugshots. And it’s not just about the stories: last month local reporter David Elks was using Google spreadsheets to compile a table from a Word document of turbine applications for a story which, he says, “helped save the journalist probably four or five hours of manual cutting and pasting.”

The problem is that most people imagine that you need to learn a programming language to start scraping – but that’s not true. It can help – especially if the problem is complicated. But for simple scrapers, something as easy as Google Docs will work just fine.

I tried an experiment with this recently at the News:Rewired conference. With just 20 minutes to introduce a room full of journalists to the complexities of scraping, and get them producing instant results, I used some simple Google Docs functions. Incredibly, it worked: by the end The Independent’s Jack Riley was already scraping headlines (the same process is outlined in the sample chapter from Scraping for Journalists).

And Google Docs isn’t the only tool. Outwit Hub is a must-have Firefox plugin which can scrape through thousands of pages of tables, and even Google Refine can grab webpages too. Database scraping tool Needlebase was recently bought by Google, too, while Datatracker is set to launch in an attempt to grab its former users. Here are some more.

What’s great about these simple techniques, however, is that they can also introduce you to concepts which come into play with faster and more powerful scraping tools like Scraperwiki. Once you’ve become comfortable with Google spreadsheet functions (if you’ve ever used =SUM in a spreadsheet, you’ve used a function) then you can start to understand how functions work in a programming language like Python. Once you’ve identified the structure of some data on a page so that Outwit Hub could scrape it, you can start to understand how to do the same in Scraperwiki. Once you’ve adapted someone else’s Google Docs spreadsheet formula, then you can adapt someone else’s scraper.
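To make that progression concrete, here is a minimal sketch of the same idea in Python rather than a spreadsheet formula: fetch a page and pull out its headlines. This is not code from the book; the URL and the CSS selector are placeholders to adapt to whatever page you are actually scraping.

```python
# A minimal sketch, not code from the book: fetch a page and pull out its
# headline text, the same idea as scraping headlines with a spreadsheet
# import formula. The URL and CSS selector below are placeholders.
import requests
from bs4 import BeautifulSoup

def scrape_headlines(url, selector="h3 a"):
    """Return the text of every element matching `selector` at `url`."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)]

if __name__ == "__main__":
    for headline in scrape_headlines("https://example.com/news"):
        print(headline)
```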

I’m saying all this because I wrote a book about it. But, honestly, I wrote a book about this so that I could say it: if you’ve ever struggled with scraping or programming, and given up on it because you didn’t get results quickly enough, try again. Scraping is faster than FOI, can provide more detailed and structured results than a PR request – and allows you to grab data that organisations would rather you didn’t have. If information is a journalist’s lifeblood, then scraping is becoming an increasingly key tool to get the answers that a journalist needs, not just the story that someone else wants to tell.


August 08 2012

17:23

Judge in Google, Oracle case seeks names of paid reporters, bloggers

Reuters :: Google Inc and Oracle Corp's copyright and patent battle took a strange twist on Tuesday, after a judge ordered the companies to disclose the names of journalists, bloggers and other commentators on their payrolls.

A report by Alexei Oreskovic, www.reuters.com


13:00

How do you navigate a liveblog? The Guardian’s Second Screen solution

I’ve been using The Guardian’s clever Second Screen webpage-slash-app during much of the Olympics. It is, frankly, a little too clever for its own good, requiring a certain learning curve to understand its full functionality.

But one particular element has really caught my eye: the Twitter activity histogram.

In the diagram below – presented to users before they use Second Screen – this histogram is highlighted in the upper left corner.

Guardian's Second Screen Olympics interactive

What the histogram provides is an instant visual cue to help in hunting down key events.

If you missed Jessica Ennis’s gold, it’s a pretty safe bet you’ll find it where the big Twitter spike is. Indeed, if you missed something interesting – whether you know it happened or not – you should be able to find it by hitting the peaks in that Twitter histogram.

That’s useful whether you’re looking at the Olympics or any other ongoing event which would normally see news websites reaching for a liveblog. What’s more, it requires no human intervention or editorial decision making.

Of course, in this form it relies on people using Twitter – but you can adapt the principle to other sources of activity data: traffic volume to your site, for instance (compared to typical traffic for that time of day, if you want to avoid it being skewed by lunchtime rushes).

Indeed, that’s what the Guardian Zeitgeist does across the site as a whole.
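As a rough sketch of how such a histogram might be built, assuming you already have a list of event timestamps (tweets, pageviews or anything else), you can bin them per minute and pick out the biggest spikes. None of this is the Guardian’s actual code; it just illustrates the principle.

```python
# A rough sketch, not the Guardian's code: bin event timestamps (tweets,
# pageviews, etc.) into one-minute buckets and report the busiest minutes,
# i.e. the spikes worth jumping to in a liveblog.
from collections import Counter
from datetime import datetime

def activity_peaks(timestamps, top=5):
    """Count events per minute and return the `top` busiest minutes."""
    per_minute = Counter(ts.replace(second=0, microsecond=0) for ts in timestamps)
    return per_minute.most_common(top)

# Made-up example data: a burst of activity at 21:02 and a quieter 20:15.
sample = [datetime(2012, 8, 4, 21, 2, s) for s in range(0, 50, 2)]
sample += [datetime(2012, 8, 4, 20, 15, s) for s in range(0, 30, 10)]

for minute, count in activity_peaks(sample):
    print(minute.strftime("%H:%M"), count)
```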

Horizontal navigation, adopted by Second Screen as a whole, is a further innovation which bears closer scrutiny. The histogram lends itself to it, so how do you adapt from a vertically-navigated scrolling liveblog? Would you run the histogram up the side, kept static while the page scrolls? Or would you run the liveblog horizontally?

Either way, it’s a creative solution to a common liveblogging problem that’s worth noting.


August 02 2012

14:18

A case study in online journalism: investigating the Olympic torch relay

Torch relay places infographic by Caroline Beavon

For the last two months I’ve been involved in an investigation which has used almost every technique in the online journalism toolbox. From its beginnings in data journalism, through collaboration, community management and SEO to ‘passive-aggressive’ newsgathering, verification and ebook publishing, it’s been a fascinating case study in such a range of ways I’m going to struggle to get them all down.

But I’m going to try.

Data journalism: scraping the Olympic torch relay

The investigation began with the scraping of the official torchbearer website. It’s important to emphasise that this piece of data journalism didn’t take place in isolation – in fact, it was while working with Help Me Investigate the Olympics’s Jennifer Jones (coordinator for #media2012, the first citizen media network for the Olympic Games) and others that I stumbled across the torchbearer data. So networks and community are important here (more later).

Indeed, it turned out that the site couldn’t be scraped through a ‘normal’ scraper, and it was the community of the Scraperwiki site – specifically Zarino Zappia – who helped solve the problem and get a scraper working. Without both of those sets of relationships – with the citizen media network and with the developer community on Scraperwiki – this might never have got off the ground.

But it was also important to see the potential newsworthiness in that particular part of the site. Human stories were at the heart of the torch relay – not numbers. Local pride and curiosity was here – a key ingredient of any local newspaper. There were the promises made by its organisers – had they been kept?

The hunch proved correct – this dataset would just keep on giving stories.

The scraper grabbed details on around 6,000 torchbearers – out of a planned 8,000. I was curious why more weren’t listed – yes, there were supposed to be around 800 invitations to high profile torchbearers including celebrities, who might reasonably be expected to be omitted at least until they carried the torch – but that still left over 1,000.

I’ve written a bit more about the scraping and data analysis process for The Guardian and the Telegraph data blog. In a nutshell, here are some of the processes used:

  • Overview (pivot table): where do most come from? What’s the age distribution?
  • Focus on details in the overview: what’s the most surprising hometown in the top 5 or 10? Who’s oldest and youngest? What about the biggest source outside the UK?
  • Start asking questions of the data based on what we know it should look like – and hunches
  • Don’t get distracted – pick a focus and build around it.
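As a hedged illustration of the first of those steps, here is what the overview might look like in Python with pandas, assuming the scrape has been saved as a CSV. The file name and column names (home_town, age) are illustrative, not the actual dataset.

```python
# A hedged sketch of the "overview" step, assuming the scraped torchbearer
# data is saved as a CSV with illustrative columns name, home_town and age.
import pandas as pd

df = pd.read_csv("torchbearers.csv")

# Where do most torchbearers come from?
print(df["home_town"].value_counts().head(10))

# What does the age distribution look like?
print(df["age"].describe())
```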

The last point in that list is notable. As I looked for mentions of Olympic sponsors in nomination stories, I started to build up subsets of the data: a dozen people who mentioned BP, two who mentioned ArcelorMittal (the CEO and his son), and so on. Each was interesting in its own way – but where should you invest your efforts?

One story had already caught my eye: it was written in the first person and talked about having been “engaged in the business of sport”. It was hardly inspirational. As it mentioned adidas, I focused on the adidas subset, and found that the same story was used by a further six people – a third of all of those who mentioned the company.

Clearly, all seven people hadn’t written the same story individually, so something was odd here. And that made this more than a ‘rotten apple’ story, but something potentially systemic.
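Continuing the illustrative pandas sketch above (the story column is again an assumption about the scrape, not the real field name), building the sponsor subsets and spotting the repeated nomination stories could look something like this:

```python
# Continuing the illustrative sketch: filter nomination stories that mention
# a sponsor, then look for stories shared word-for-word by several people.
# Column names are assumptions, not the real scrape.
import pandas as pd

df = pd.read_csv("torchbearers.csv")

adidas = df[df["story"].str.contains("adidas", case=False, na=False)]
print(len(adidas), "torchbearers mention adidas")

# Identical stories used by more than one torchbearer are a signal worth chasing.
repeated = adidas["story"].value_counts()
print(repeated[repeated > 1])
```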

Signals

While the data was interesting in itself, it was important to treat it as a set of signals to potentially more interesting exploration. Seven torchbearers having the same story was one of those signals. Mentions of corporate sponsors were another.

But there were many others too.

That initial scouring of the data had identified a number of people carrying the torch who held executive positions at sponsors and their commercial partners. The Guardian, The Independent and The Daily Mail were among the first to report on the story.

I wondered if the details of any of those corporate torchbearers might have been taken off the site afterwards. And indeed they had: seven disappeared entirely (many still had a profile if you typed in the URL directly – but could not be found through search or browsing), and a further two had had their stories removed.

Now, every time I scraped details from the site I looked for those who had disappeared since the last scrape, and those that had been added late.
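A sketch of that diffing step, again with assumed file and column names (a profile URL works as a stable identifier here, though the real scrape may differ):

```python
# A sketch of the diffing step: compare two scrapes to see who has
# disappeared and who has been added late. File names and the profile_url
# identifier column are assumptions about the data layout.
import pandas as pd

old = pd.read_csv("scrape_2012-06-20.csv")
new = pd.read_csv("scrape_2012-06-27.csv")

old_ids = set(old["profile_url"])
new_ids = set(new["profile_url"])

removed = old[old["profile_url"].isin(old_ids - new_ids)]
added = new[new["profile_url"].isin(new_ids - old_ids)]

print("Disappeared since last scrape:", len(removed))
print("Added since last scrape:", len(added))
```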

One, for example – who shared a name with a very senior figure at one of the sponsors – appeared just once before disappearing four days later. I wouldn’t have spotted them if they – or someone else – hadn’t been so keen on removing their name.

Another time, I noticed that a new torchbearer had been added to the list with the same story as the 7 adidas torchbearers. He turned out to be the Group Chief Executive of the country’s largest catalogue retailer, providing “continuing evidence that adidas ignored LOCOG guidance not to nominate executives.”

Meanwhile, the number of torchbearers running without any nomination story went from just 2.7% in the first scrape of 6,056 torchbearers, to 7.2% of 6,891 torchbearers in the last week, and 8.1% of all torchbearers – including those who had appeared and then disappeared – who had appeared between the two dates.

Many were celebrities or sportspeople where perhaps someone had taken the decision that they ‘needed no introduction’. But many also turned out to be corporate torchbearers.

By early July the number of these ‘mystery torchbearers’ had reached 500 and, having only identified a fifth, we published them through The Guardian datablog.

There were other signals, too, where knowing the way the torch relay operated helped.

For example, logistics meant that overseas torchbearers often carried the torch in the same location. This led to a cluster of Chinese torchbearers in Stansted, Hungarians in Dorset, Germans in Brighton, Americans in Oxford and Russians in North Wales.

As many corporate torchbearers were also based overseas, this helped narrow the search, with Germany’s corporate torchbearers in particular leading to an article in Der Tagesspiegel.

I also had the idea – thanks to Adrian Short – of totalling up how many torchbearers appeared each day, to identify days when details on unusually high numbers of torchbearers were missing, but it became apparent that variation due to other factors, such as weekends and the Jubilee, made this worthless.

However, the percentage of torchbearers missing stories on each day did help (visualised below by Caroline Beavon), as it also identified days when large numbers of overseas torchbearers were carrying the torch. I cross-referenced this with the ‘mystery torchbearer’ spreadsheet to see how many had already been checked, and which days still needed attention.

Daily totals - bar chart
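For anyone curious how that per-day breakdown might be calculated, a short sketch – again assuming hypothetical ‘date’ and ‘story’ columns in the scraped data:

    import pandas as pd

    df = pd.read_csv("torchbearers.csv")

    # True where a torchbearer has no nomination story at all
    df["no_story"] = df["story"].isna() | (df["story"].str.strip() == "")

    # Percentage of torchbearers missing a story, by relay day
    by_day = df.groupby("date")["no_story"].mean().mul(100).round(1)
    print(by_day.sort_values(ascending=False).head(10))

Sorting the days by that percentage points you straight at the dates most worth checking against the ‘mystery torchbearer’ spreadsheet.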

But the data was just the beginning. In the second part of this case study, I’ll talk about the verification process.


July 27 2012

14:36

The thrills (and agony) of the 'Social Olympics'

PBS Mediashift :: The IOC developed social media, blogging and Internet guidelines to deal with athletes’ use of social media. The guidelines encourage athletes to use first-person accounts, similar to diary entries, that abide by the Olympic spirit. They forbid, however, video or images of the competitions, and comments about competitors.

[Terri Thornton:] And already, the need for such guidelines has been underscored.

A report by Terri Thornton, www.pbs.org

Download the guideline here: "IOC Social Media, Blogging and Internet Guidelines for participants and other accredited persons at the London 2012 Olympic Games"

HT: Anthony De Rosa, here:

IOC encourage athletes to use first-person accounts; forbid video, images, comments on competitions, competitors to.pbs.org/Oh5CBT

— Anthony De Rosa (@AntDeRosa) July 27, 2012

July 26 2012

07:38

Review: Transcribe app

Antoinette Siu takes a look at a new free app which promises to make transcribing audio easier.

Transcribing audio is one of the most time-consuming tasks in a journalist’s job. Switching between the audio player and the text editor, rewinding every 20 seconds, typing frantically to catch every syllable – repeating these steps back and forth, and back and forth… in an age of so much automation, something isn’t quite right.

A new Chrome app called Transcribe lets you do all of that on a single screen. With keyboard shortcuts and an audio file uploader, you can easily move back and forth between your sound and your text.

The basic version is (and likely always will be) free; the pro version goes for $19/month on the Solo plan or $29/month on the Premium plan. Both include a 30-day free trial, with no credit card necessary.

Even just from testing the basic app, a free reporting tool like this seems to have huge potential. Another upside: the app continues to work even when you have no internet access.

With the upgrade to pro, the developers offer options to save your transcripts, export documents, and other sweet features such as a full-screen mode – so you can really focus on getting the transcription done, free from distractions.

Overall, the app is intuitively simple to use, audio uploads are fast, the design is clean and uncluttered and, best of all, it is free from advertising at this point.

The tool is definitely off to a promising start, considering it’s still a work in progress.

