Tumblelog by Soup.io

January 17 2012

19:30

Surprise! The news shows up in the least expected places

You’re flicking through tweets, reading email, Googling recipes, or watching dogs sticking their heads out of car windows in slow motion when a headline catches your eye. Before you know it, you’re reading the news, even though you didn’t mean to. Maybe you didn’t even want to.

A lot of readers get their news just like this — incidentally — according to a growing body of research. That is, they don’t turn to the web seeking news. The news finds them. And that has implications for how that news gets produced and distributed.

Borchuluun Yadamsuren, a post-doctoral fellow at Missouri’s Reynolds Journalism Institute, studies how people get information and how it makes them feel. In 2009, she surveyed 148 adults, mostly highly educated, in the college town of Columbia, Mo., then followed up with 20 one-on-one interviews. Some respondents said they don’t trust the news, some were ambivalent, but most said they get informed one way or another — often by accident.

Yadamsuren quotes a respondent who exemplifies the incidental news reader:

She finds news when she is searching for something else on the Internet or she finds links to news stories from her e-mail. She said ‘it feels more legitimate and less like wasting time’ if she has gone to news from her e-mail. She thinks her husband is wasting his time as he constantly looks at all of the regional newspapers as part of his daily routine.

Yadamsuren draws on a lot of previous research about information behavior, most notably the work of Denis McQuail, et al., on the “uses and gratifications” theory from the 1960s and ’70s. (She cites other work heavily, if you want to dive in.)

She identified four types of consumers: avid news readers, news avoiders, news encounterers, and crowd surfers.

Avid news readers are perhaps the most familiar to a typical Nieman Lab reader: They visit news sites multiple times a day. They include traditional news sources in their media diet. They have higher trust in media. “They continue monitoring for news to feel empowered, to be informed, to get over boredom, and to have a break from their work,” Yadamsuren wrote.

News avoiders deliberately choose not to visit news sites. They are more sensitive to negative stories. They tend to distrust the mainstream press. But that doesn’t mean they don’t get news. Yadamsuren quotes a respondent in her study:

Sometimes those who are sensitive to negative news may use incidental exposure to online news as their main way to get informed about news events. R14 said she does not search for news because she thinks the mainstream media cover too many depressing stories. She mostly runs across news at the Yahoo! portal.

News encounterers are more ambivalent about news. They neither search for nor avoid it. They don’t have established habits. They know that important news will find them one way or another.

Crowd surfers rely on friends and online communities for news. They tend not to trust the mainstream “filter.” They start at Reddit or Digg. “They like reading ‘newsworthy’ stories voted on by visitors to the site rather than relying on the stories selected by news media,” Yadamsuren said. “They trust the ‘wisdom of crowd’ rather than the viewpoints of journalists.”

Yadamsuren has also found that the route to discovery — not just the content of the news itself — provokes an emotional reaction in readers. For instance, unexpected good news feels better, and more remarkable, than expected good news.

“It seemed that the incidental element of discovering the news story strengthened the respondents’ reaction to the news item,” she writes in recently published research. “The respondents described the encounters as finding a treasure, unexpectedly learning something new, or unexpectedly encountering something that evoked their curiosity. All these reactions may also occur during general news reading, but the chance element is likely to make these reactions stronger.”

Again, Yadamsuren quoting respondents in her report:

R1 explains: ‘I’m not looking for something in particular, but sometimes it’s there right in front of my eyes, and it makes me happy and like you know it’s perfect for me, let’s explore further’. R2 compared incidental exposure to online news to the feeling of discovery of treasure: ‘I guess it’s kind of the thrill of the chase, like you really discovered a treasure or something’…

One respondent (R19) says she feels lucky to find stories to read that she might have missed. ‘I usually feel pretty lucky when I find something that I would have otherwise missed…makes me want to search more’. Another respondent (R12) agrees: ‘many times it will be something that either I haven’t heard or it jogs my memory, oh yeah, I want to read more about that’.

Discoveries are not always joyous: Stumbling upon bad news can amplify the feelings associated with bad news. And some readers reacted negatively to incidental exposure regardless of the content, because it wound up being a distraction or a waste of time. That induced feelings of guilt.

Regardless of how they feel, surprised readers are more engaged readers. Yadamsuren did not study how people feel about the brands they encountered; we don’t know if the joy of surprise translates to more positive associations with The Washington Post, for example.

Yadamsuren suggests news organizations syndicate their content and advertise in unlikely places. Maybe that axiom, “go where your users are,” should be revised: “Go where your users least expect to find you.”

June 13 2011

15:00

Eli Pariser: How do we recreate a front-page ethos for a digital world?

At the top of my summer reading list is The Filter Bubble, Eli Pariser’s new book that argues that the filters we rely on to make sense of the online world can do us as much harm as good.

While the book relies on familiar notions about the perils of the echo chamber, it uses those ideas as a starting point, rather than an ending, focusing on the algorithmic implications of all the echoing. One of the most intriguing aspects of Pariser’s argument is his exploration of the automation of preference — through the increasing influence of the Like button, through Google’s desire to make its results “relevant,” through various news orgs’ recommendation engines, and, ultimately, through media companies’ economic mandate to please, rather than provoke, their audiences.

That last one isn’t new, of course; news organizations have always navigated a tension between the need to know and the want to know when it comes to the information they serve to their readers. What is new, though, is the fact that audiences’ wants now have data to back them up; they can be analyzed and tailored and otherwise manipulated with a precision that is only increasing. Audiences’ needs, on the other hand, are generally as nebulous as they’ve ever been. But they are, of course, no less urgent.

So if we’re to truly gain from what the web offers us, Pariser argues, what we need is something like the kind of thinking that guided journalism through most of the 20th century: a notion that media companies serve more than, in every sense, pure interest. A conviction that news editors (and, more broadly, the fabled gatekeepers who exert power, even on the “democratized” web, over people’s access to information) have a responsibility to give people as full and nuanced a picture of the world as they can.

As much as we need filters, Pariser says, a web experience that is based on filters alone won’t give us that wide-angle view. And now, he argues, while online media remains in its infancy, is the time to do something to change that.

To learn more about Pariser’s thinking — and especially about how that thinking applies to news producers — I spoke with him when he came to Cambridge for a recent reading at the Harvard Book Store. Below is a transcript of our talk. (And apologies for the shaky camera work in the video above, which was shot in a bookstore office; apparently, I had a case of the Austeros that day.)

To begin with, I asked Pariser about a key aspect of this argument: the notion that the filter bubble phenomenon affects not only the information we consume, but also our ability to put that information to use within a functional democracy. Here’s what he told me:

EP: What people care about politically, and what they’re motivated to do something about, is a function of what they know about and what they see in their media. We’ve known this for a while — that, for example, if you chop up television broadcast news, and show different sets of news to different groups of people, and then you poll them about what their preferences are, you get very different results. People see something about the deficit on the news, and they say, ‘Oh, the deficit is the big problem.’ If they see something about the environment, they say the environment is a big problem.

This creates this kind of a feedback loop in which your media influences your preferences and your choices; your choices influence your media; and you really can go down a long and narrow path, rather than actually seeing the whole set of issues in front of us.
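The narrowing dynamic Pariser describes can be sketched as a toy simulation (the topics, weights, and reinforcement rate below are illustrative assumptions, not figures from his book): a reader’s preference weights determine what gets shown, and whatever gets shown bumps those same weights, so early chance exposures compound into a narrow media diet.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

TOPICS = ["deficit", "environment", "sports", "science"]

def simulate(rounds=500, reinforcement=0.1):
    """Toy filter-bubble loop: show one topic per round, chosen in
    proportion to current preference weights, then reinforce the
    weight of whatever was shown."""
    weights = {t: 1.0 for t in TOPICS}  # start with no preference
    for _ in range(rounds):
        total = sum(weights.values())
        shown = random.choices(TOPICS,
                               [weights[t] / total for t in TOPICS])[0]
        weights[shown] += reinforcement  # exposure reinforces preference
    return weights

# After many rounds the weights drift far from uniform: whichever
# topics happened to be shown early get shown more, and so on.
final = simulate()
print(sorted(final.items(), key=lambda kv: -kv[1]))
```

The point of the sketch is only the loop structure — media influences preference, preference influences media — not any particular numbers.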

MG: Interesting. So what should news organizations be doing, and how should they be thinking about this problem when they’re thinking about how they build their websites, and build their news experience?

EP: Well, I think, right now, it’s a little polarized. You actually have the old-school editors who say, ‘Only humans can do this.’ The New York Times, at least until recently, didn’t let even blog authors see how people were using or sharing their links; you had no sense of how you were doing online. That’s sort of one extreme. On the other extreme is this ‘if people click it, then it must be good’ mentality. And I think we need people who are smart about journalism to be thinking about how we import a lot of the implicit things that a front page does, or that a well-edited newspaper does — how do we import that into these algorithms that are going to affect how a lot of people experience the world? Whether — we might prefer that they not, but that’s sort of the way that this is going. So how do we do that? That seems like the big, exciting project right now.

February 02 2011

21:00

“Serendipity and surprise”: How will engagement work for The Daily?

All of us here at the Lab watched the unveiling of The Daily (even those of us who are on a beach sipping umbrella drinks).

But there was something that News Corp. CEO Rupert Murdoch said that seems significant now that the genie is out of the bottle. He said this about today’s readers:

“They expect content tailored to their specific interest to be available any time, anywhere. Too often this means that news is restricted only to interests that have been predefined. What we are losing today are the opportunities for true news discovery. The magic of newspapers and great blogs lies in their serendipity and surprise, and the deft touch of a good editor.”

There’s a lot to unpack in that statement, but what is interesting to me is how it jibes with what we are learning about how engagement will work on The Daily — specifically how they plan to use comments and social media, and to access the greater Internet.

The Daily deserves credit for making strides to meet expectations of social functionality we see on news sites: You can share stories with your friends via email, Twitter and Facebook, and you can leave comments within the app. (Something we’re particularly interested in here at The Lab is audio comments. Seems to open up all kinds of questions — for example, what do trolls sound like? And can the comments be turned into more content, a comments podcast, perhaps? But I digress.)

Similar to The Washington Post iPad app, The Daily will be able to deploy Twitter feeds in stories or other features. Further, editor Jesse Angelo said today, they plan on linking out and pulling in HTML5 content as needed.

As Jon Miller, the News Corp. digital chief presenting The Daily, said, “The Daily is not an island. It definitely will be a part of the entire web discourse and the social world.”

The Daily seems to fit that description, but I can’t help but wonder: Can you really link to stories from The Daily? In the questions following the demo, Miller and Angelo gave the impression that access to The Daily from the greater web would be, well, tricky, to say the least. Stories shared from the app would be free (meaning if I send you a link from The Daily, you can see it). But direct from the homepage, apparently: not. (This seems similar to the balance the NYT has struck between walled garden and open web: side-door entry, through blogs and social media, leads to the same thing as front-door. But it’s the front door where you’ll be asked to pay for admission.) Angelo gave the impression that select content from the app would be mirrored online, but not the whole publication — or even the whole piece of content.

For The Daily to succeed, of course, it’ll need subscribers. But does that mean its Twitter feed, Facebook page, and blog will be used to engage readers — or simply as a promotion device?

So the question then becomes: How will the “serendipity and surprise” that Murdoch talked about actually work?

In a way, it would seem that The Daily wants to incorporate the web from inside the app, but not from outside it, taking it a step further than the “walled garden” approach we’ve seen in some apps. The app (if you’ll allow a Minnesotan transgression) reminds me of the Chaska Community Center, an indoor, one-stop destination that includes (deep breath) a soccer/multipurpose field, hockey rink, two gymnasiums, workout facilities, a movie theater, and swimming pool complete with a water slide several stories high. In other words, a lot of shiny, cool stuff that you can use all under one roof.

As an iPad-only newspaper, The Daily is clearly betting on people spending a lot of time on the device, and in some ways that seems to harken back to the glory days of subscribers reading every section of the paper. It wants to move away from the “drive-by” audience, instead rewarding subscribers for their loyalty.

Reader engagement, at least as we’ve come to think of it, requires an open and two-way exchange, one that can benefit publishers by potentially creating a stronger connection with readers while putting their content in front of more eyeballs. As best as I can tell, story sharing and linking will have to come primarily from subscribers out to others, which would create limited opportunities for those “I didn’t know I needed that before now” moments of serendipity. Murdoch noted, during the launch, the benefits of “true opportunities for news discovery.” Whether The Daily will be able to create those, though, remains to be seen.

August 25 2010

13:30

Googling serendipity: How does journalism fare in a world where algorithms trump messy chance?

Twelve years ago, when I was reporting on the pending Microsoft antitrust case, I learned that what was really at stake wasn’t immediately apparent in the legal briefs. It wasn’t the browser market (remember Netscape?) or whether Windows should be able to run somebody else’s word-processing program. Rather, it was how control was exercised over the places where we learned, created, and engaged in critical thought.

One of the best thinkers on the topic was Ben Shneiderman, founding director of the Human-Computer Interaction Lab at the University of Maryland. He told me at the time that the critical question for Microsoft was not whether the company encouraged innovation — it did — but rather how financial pressures dictated which innovations it adopted and which it let wither. The Microsoft software suite, he noted, wasn’t very accessible to people with learning disabilities or those with low incomes.

Fast forward to 2010, and now we hear from Eric Schmidt, CEO of Google, another powerful technology company that controls the tools of creativity and expression. Schmidt recently talked to The Wall Street Journal about the potential for applying artificial intelligence to search, suggesting that the search engine of the future would figure out what we meant rather than find what we actually typed.

Schmidt seems to be pushing the idea that the future — or, more accurately, each of our individual futures, interests, and passions — all can be plotted by algorithm from now until our dying day. The role of serendipity in our lives, he said, “can be calculated now. We can actually produce it electronically.”

Really?

According to Webster’s, serendipity is “the faculty or phenomenon of finding valuable or agreeable things not sought for.” So if the essence of serendipity is chance or fortune or chaos, then by definition, anything that a search engine brings to you, even on spec, isn’t serendipitous.

I don’t know whether Schmidt’s comments should be chalked up to blind ambition or to quant-nerd naivete. But it’s troubling that Schmidt seems to discount the role that human nature plays in our everyday lives and, ultimately, in guiding our relationships with technology.

It might be that Schmidt’s vision for the search engine of the future would serve us well in finding a new restaurant, movie or book. But if Google really wants to take the guesswork out of our lives, we should be asking the same question that Shneiderman put to Microsoft. How might financial pressures shape Google’s “serendipity algorithm”? What content — journalism and otherwise — will it push our way that will shape our worldview? And, to Shneiderman’s point, what limits does it impose?

I think it’s safe to say that some good ideas don’t lend themselves to being monetized online — witness the rise of nonprofit startups in bringing us investigative, public affairs, and explanatory journalism. How might they fare in Schmidt’s world order?

I caught up with Shneiderman on Monday, and he agreed that this is one of the key questions that should be debated as we depend more and more on a “recommender system” in which companies like Google or Amazon use massive databases to anticipate our needs and wants. Public interest groups and other nonprofits that can’t afford the right keywords could be most vulnerable in these systems, Shneiderman said. “How far down the list do the concerns of civic groups get pushed?” he asked.

It’s fair to ask companies what considerations and factors might be weighted in their search formulas, Shneiderman said, but it isn’t clear what level of transparency should be expected. “What is a reasonable request to make without exposing their algorithm and their business practices?” he said.

I can’t say either. But I do think there are some lessons that Google can take from the history that Microsoft has helped write.

One lesson is that what’s good for the bottom line doesn’t always jibe with what’s best for consumers. A dozen years ago, the Netscape browser was regarded by many as more functional, but Microsoft saw it as a threat. So it bundled its own Internet Explorer browser in its operating system and effectively pushed Netscape out of existence.

Another lesson is that it isn’t always possible to divine what people will want in the future based on a profile of what they (or people like them) have wanted in the past. Indeed, some of the most successful technology companies — Google included — have succeeded precisely because their vision for the future was radical, new and compelling. Microsoft once played that role to a monolithic IBM. But today, as Microsoft’s market valuation has been eclipsed by that of Apple, it has become debatable whether Microsoft remains a consumer-driven company.

None of this should be interpreted as an anti-capitalistic rant. We’re all better off for Google’s search box, and it’ll be interesting to see where Schmidt’s vision takes the company.

Rather, it is a suggestion that even the most elaborate algorithms and high-touch e-marketing can’t address every human need.

One of the best vacations I ever took was when I pulled out of my driveway in Raleigh in late August 1991 with no particular destination. Two days later, I found myself in North Dakota, discovering places I never would have appreciated based on my past interests or those of my friends and peers. The experience was so compelling to me precisely because it was serendipitous.

That trip has served as an important reminder to me ever since. When we don’t know what we want, sometimes what we really need is to figure it out for ourselves.

June 30 2010

22:00

Google News revamps with “news for you” angle

A few moments ago, the Google News homepage rolled out a redesign — a revamp meant to make the algorithm-driven news site “more customizable and shareable.”

“There’s an old saying that all news is local,” writes Google software engineer Kevin Stolt in a blog post announcing the design changes. “But all news is personal too — we connect with it in different ways depending on our interests, where we live, what we do and a lot of other factors. Today we’re revamping the Google News homepage with several changes designed to make the news that you see more relevant to you. We’re also trying to better highlight interesting stories you didn’t know existed and to make it easier for you to share stories through social networks.”

In other words, the new site is trying to balance two major, and often conflicting, goals of news consumption: personalization and serendipity.

The more specific purpose of today’s changes, Google says, is threefold: first, to have consumers tell Google what stories most interest them; second, to help those consumers keep track of ongoing stories; and third, to help them share stories with others.

Among the changes being implemented, per Stolt’s explanation of them:

Customizable interest areas: “The new heart of the homepage is something we call ‘News for you’: a stream of headlines automatically tailored to your interests. You can help us get it right by using the ‘Edit personalization’ box to specify how much you’re interested in Business, Health, Entertainment, Sports or any subject you want to add (whether it’s the Supreme Court, the World Cup or synthetic biology). You can choose to view the stories by Section view or List view, and reveal more headlines by hovering over the headline with your mouse. We’ll remember your preferences each time you log in.”

Customizable news sourcing: “To give you more control over the news that you see, we’re now allowing you to choose which news sources you’d like to see more or less often. You can do so in News Settings. These sources will rank higher or lower for you (but not for anyone else) in Google News search results and story clusters.”

An emphasis on local news: “And then there’s local news; we’re now highlighting weather and headlines about your city or neighborhood in their own section, which you can edit with whichever location you want to follow.”

An increased emphasis on the Spotlight section: “We’re also more prominently displaying the Spotlight section, which features stories of more lasting interest than breaking news and has been one of our most popular sections since we introduced it last fall.”

Communal (read: non-customized) story highlights: “There are the subjects that interest you and then there’s the major news of the day. To make it easy for you to find the big stories like Hurricane Alex, we’re adding links to topics that many outlets are covering. You’ll find these topics in the Top Stories section on the left side of the homepage as well as in linked keywords above headlines. Clicking on a topic link takes you to a list of related coverage that you can add to your news stream.”

(This is also a nod, I’d add, toward serendipity — a goal Google News has expressed interest in before, most notably through its Spotlight and its Editors’ Picks features.)
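The per-user source preference described above — sources ranking “higher or lower for you (but not for anyone else)” — can be sketched as a simple re-scoring step. This is a hypothetical model for illustration, not Google’s actual ranking code: each user keeps a multiplier per source, and story scores are adjusted by it before sorting.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    source: str
    base_score: float  # relevance score from the shared ranking algorithm

def rank_for_user(stories, source_prefs, default=1.0):
    """Re-rank stories using per-user source multipliers (>1 boosts,
    <1 demotes). Only this user's ordering changes; base scores are
    left untouched for everyone else."""
    return sorted(
        stories,
        key=lambda s: s.base_score * source_prefs.get(s.source, default),
        reverse=True,
    )

stories = [
    Story("Oil spill update", "WireServiceA", 0.9),
    Story("Local election results", "TownPaper", 0.7),
]
# This user prefers TownPaper, so its story outranks the higher base score:
# 0.7 * 1.5 = 1.05 beats 0.9 * 0.8 = 0.72.
ranked = rank_for_user(stories, {"TownPaper": 1.5, "WireServiceA": 0.8})
```

The design choice worth noting is that personalization here is a thin layer over a shared ranking, which is what lets Google keep “Top Stories” communal while the “News for you” stream diverges per reader.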

The changes are pretty fascinating, all in all (especially in the context of Google’s rumored move into Facebook territory); we’ll likely have more to say on them later on. In the meantime, here’s more on the changes, from the horse’s mouth:

June 10 2010

22:23

Google News experiments with human control, promotes a new serendipity with Editors’ Picks

Late this afternoon, Google News rolled out a new experiment: Editors’ Picks. Starting today, a small percentage of Google News users will find a new box of content with that label, curated not by Google’s news algorithm, but by real live human news editors at partner news organizations. Here’s an example, curated by the editors of Slate:

Per Google’s official statement on the new feature:

At Google, we run anywhere from 50 to 200 experiments at any given time on our websites all over the world. Right now, we are running a very small experiment in Google News called Editors’ Picks. For this limited test, we’re allowing a small set of publishers to promote their original news articles through the Editors’ Picks section.

That by itself is a remarkable shift for a website that, at its launch in 2002, proudly included on every page: “This page was generated entirely by computer algorithms without human editors. No humans were harmed or even used in the creation of this page.”

But Google’s statement very much understates the feature’s (potential) significance. You know how Cass Sunstein wanted to build an “architecture of serendipity” that would give readers important but surprising information? And how, increasingly, many news thinkers have come to believe that systematizing serendipity is not so much a contradiction as a democratic necessity? Well, this is a step — small, but certain — in that direction. Think of Editors’ Picks as a Spotlight-like feature that, instead of highlighting “in-depth pieces of lasting value,” shines a light on what editors themselves have deemed valuable. 

In that sense, Editors’ Picks — currently being run in partnership with less than a dozen news outlets, including The Washington Post, Newsday, Reuters, and Slate — could recreate the didn’t-know-you’d-love-it-til-you-loved-it experience of the bundled news product within the broader presentation of Google News’ algorithmically curated news items. Serendipity concerns exist even at Google (see Fast Flip, for example); this is one way of replicating the offline experience of serendipity-via-bundling within the sometimes scattered experience of online news consumption.

Editors’ Picks also does what its name suggests: it allows editors to choose which stories they introduce to the Google News audience. (Google confirmed to me the links on display aren’t being paid for by the news publishers — that is, it’s not a sponsored section.) Publishers can choose to promote stories that have done well, traffic-wise, amplifying that success — or they can choose to promote stories that have gotten less traction. Or they can simply choose to promote stories that are funny or important or touching or all of the above — stories that are simply worth reading. The point is, they can choose.

Which is, of course, of a piece with Google’s renewed focus on the news side of its search functionalities — and its effort to reach out to the news organizations. And it’s of a piece with other sites that have moved from automated news to automation-plus-human-editing.

Consumers, for their part, get some choice in the matter, as well: The Editors’ Picks experiment combines algorithmically curated content with content selected by news organizations themselves — algorithmic authority and editorial — within the same news presentation.

In other words: serendipity, systematized.

April 12 2010

12:28

The future is mobile, and other thoughts from Google CEO Eric Schmidt’s speech at ASNE

Yes, he got the inevitable “shouldn’t you pay content providers?” question from an audience member. And, yes, he gave the inevitable “most news organizations actually want the traffic we provide” answer. But for the most part, though it trod familiar territory, Google CEO Eric Schmidt’s speech last night — delivered to a packed half-ballroom at the American Society of News Editors conference in DC — was an impressive feat of rhetorical tight-rope-walking. (Text: You, news editors, are guardians of democracy. Subtext: You, news editors, should probably rethink your patrol systems.)

So was the speech well-received? My read: the crowd reception to the uber-exec and his thoughts was cordial, but — despite the many, many compliments Schmidt paid to journalism and journalists during the course of the talk — not overly friendly. (Usually, at a speech like this, there’d be a vibrant back-channel conversation, via Twitter, that would allow a more nuanced assessment. Last night’s speech didn’t have that back-talk; relatively few people were tweeting it, though many were taking notes on reporters’ pads.)

Below, I’ve excerpted the sections of the talk that I found most interesting; they’re listed in chronological order to give you an idea of the arc of the speech.

On newspapers and discovery:
I love newspapers. I love reading them — that when you’re finished, you’re done, and you know what’s going on. I love the notion of discovery that newspapers represent…. Newspapers are fundamental, not just in America, but around the world.

On information and democracy:
We have goals in common. Google believes in the power of information. We believe that it’s better to have more information than less. We also understand that information can annoy governments and annoy people…but that ultimately the world is a better place with more information available to more and more people. And the flow of accurate information, of the diverse views and debate that we’re so used to, is really, really fundamental to a functioning democracy.

On criticism (and sympathy):
You all get criticized all the time. On the left, you get criticized for being too liberal. On the right, you get criticized for being too conservative. In our case, we just get kicked out of China. Same thought.

On journalism as an art form:
We’re not in the news business, and I’m not here to tell you how to run a newspaper. We are computer scientists. And trust me, if we were in charge of the news, it would be incredibly accurate, incredibly organized, and incredibly boring. There is an art to what you do. And if you’re ever confused as to the value of newspaper editors, look at the blog world. That’s all you need to see. So we understand how fundamental tradition and the things you care about are.

On the best of times, the worst of times:
You have more readers than ever; you have more sources than ever, for sure; you have more ways to report. And new forms of making money will develop. And they’re underway now…. So we have a business model problem. We don’t have a news problem. That’s ultimately my view.

On our new emphasis on now-ness:
What do our children know now that our parents did not know when they were the age of our children? They know about now. They know about precisely now, in a way that our parents’ generation did not. That this now-ness drives everything…and what happens is, you experience the reality of the moment in a way that’s much, much more intense.

On the implications of now-ness:
It’s creating a problem which I’m going to call “the ersatz experience problem.” On the one hand, you have a sense of connectedness to everything — literally, every event globally…but you also have a false sense of actual experience, since you’re not really there. So the trade-off is that you know everything, but you’re not physically in any one place. And that shift is actually a pretty profound one in the way society’s going to consume media and news and so forth. And all of us are part of it. And Google is obviously moving it forward.

On Google’s “mobile-first” focus:
It’s important to understand that three things are coming together: the powerful mobile devices that …are paired with the tremendous performance that we can now get on computers…it is the sum of that, and the capabilities and the technologies that will exploit the sum of that, that will define the next ten or twenty years for all of us. So when I say “Internet first,” I mean “mobile first.”

Now, some of the most clever engineers are working on mobile applications ahead of personal computer applications. People are literally moving to that because that’s where the action is, that’s where the growth is, there’s a completely unwashed landscape, you have no idea where folks are going to go.

On news’ mobile/personal/multi-platform future:
Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future…: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

When I go to a news site, I want that site to know me, to know about me: what I care about, and so forth. I don’t want to be treated as a stranger, which is what happens today. So, remember me. Show me what I like. But I also want you to challenge me. I want you to say, “Here’s something new. Here’s something you didn’t know.”

On the sheer volume of information out there today:
The Internet is about scale. I was studying this, because I was trying to figure out how big this thing is. Between the dawn of humanity and 2003, roughly 5 exabytes of information were created. (An exabyte is roughly a billion gigabytes.) We generate that amount every two days now…. So there is a data explosion. And the data explosion is overwhelming all of us. Of course, this is good business for Google and others who try to sort all this out.

On the future of display ads:
If you think about it in this context — you have this explosion of mobile devices, you have this connection, and so forth — what does this mean for the business world? Well, it’s obvious that advertising, which is the business Google is in, is going to do very well in this space. Because advertising works well when it’s very targeted. Well, these devices are very targeted. So we can give a personalized ad.

Furthermore, Google — and others — are busy building vertical display ads that look an awful lot like the ads that are in traditional newspapers…. In the next few years, you should be able to do very, very successful display advertising against this kind of content. You may not be able to do it against murders, because it’s very difficult to get the right targeted ad in that case — what, are you going to advertise a knife? It’s obviously terrible. I’m not trying to make a joke about it; it’s a real business problem.

On the future of subscriptions:
We and others are working on ubiquitous ways in which subscriptions can be bundled, packaged, and delivered. We’re seeing this today with both the Kindle and the iPad. Both of which have this subscription model which you can test. You can actually find out, “What will people pay for this?” And eventually that model should have higher profitability. Because it has a low cost of goods, right, because you don’t have the newspaper and the printing and distribution costs. So there’s every reason to believe that eventually we’ll solve this and ultimately bring some significant money into this thing.

On the need for experimentation:
As Ralph Waldo Emerson put it, “Don’t be too timid and squeamish about your actions; life is an experiment.” On the Internet, there is never a single solution…. The fact of the matter is there are no simple solutions to these complex problems. And in order to really find them, we’re going to have to run lots of experiments.

April 02 2010

14:00

This Week in Review: The iPad’s skeptics, Murdoch’s first paywall move and a ‘Chatroulette for news’

[Every Friday, Mark Coddington sums up the week’s top stories about the future of news and the debates that grew up around them. —Josh]

The iPad’s fanboys and skeptics: For tech geeks and future-of-journalism types everywhere, the biggest event of the week will undoubtedly come tomorrow, when Apple’s iPad goes on sale. The early reviews (Poynter’s Damon Kiesow has a compilation) have been mostly positive, but many of the folks opining on the iPad’s potential impact on journalism have been quite a bit less enthusiastic. A quick rundown:

— Scott Rosenberg, who’s studied the history of blogging and programming, says the news media’s excitement over the iPad reminds him of the CD-ROM craze of the early 1990s, particularly in its misguided expectation for a new, ill-defined technology to lead us into the future. The lesson we learned then and need to be reminded of now, Rosenberg says, is that “people like to interact with one another more than they like to engage with static information.”

— Business Insider’s Henry Blodget argues that the iPad won’t save media companies because they’re relying on the flawed premise that people want to consume content in a “tightly bound content package produced by a single publisher,” just like they did in print.

— Tech exec Barry Graubart says that while the iPad will be a boon to entertainment companies, it won’t provide the revenue boost news orgs expect it to, largely for two reasons: Its ads can’t draw the number of eyeballs that the standard web can, and many potential news app subscribers will be able to find suitable alternatives for free.

— GigaOm’s Mathew Ingram is not impressed with the iPad apps that news outlets have revealed so far, describing them as boring and unimaginative.

— Poynter’s Damon Kiesow gives us a quick summary of why some publishers thought the iPad might be a savior in the first place. (He doesn’t come down firmly on either side.)

Two other thoughtful pieces worth highlighting: Ken Doctor, a keen observer of the world of online news, asks nine questions about the iPad, and offers a lot of insight in the process. And Poynter’s Steve Myers challenges journalists to go beyond creating “good-enough” journalism for the iPad and produce creative, immersive content that takes full advantage of the device’s strengths.

Murdoch’s paid-content move begins: Rupert Murdoch has been talking for several months about his plans to put up paywalls around all of his news sites, and this week the first of those plans was unveiled. The Times and Sunday Times of London announced that they will begin charging for their sites in June — £1 per day or £2 per week. This would be stricter than the metered model that The New York Times has proposed and the Financial Times employs: There are no free articles or metered allowances, just 100% paid content.

The Times and Sunday Times both accompanied the announcement with their own editorials giving a rationale for their decision. The Sunday Times is far more straightforward: “At The Sunday Times we put an enormous amount of money and effort into producing the best journalism we possibly can. If we keep giving it away we will no longer be able to do that.” Some corners of journalism praised the Times’ decision and echoed its reasoning: BBC vet John Humphrys, Texas newspaperman John P. Garrett (though he didn’t mention the Times by name in a post decrying unthinking “have it your way” journalism), and British PR columnist Ian Monk.

The move also drew criticism, most prominently from web journalism guru Jeff Jarvis, who called the paywall “pathetic.” (If you want your paywall-bashing in video form, Sky News has one of Jarvis, too.) Over at True/Slant, Canadian writer Colin Horgan had some intriguing thoughts about why this move could be important: The fact that the Internet is so all-encompassing as a medium has led us to blur together the vastly different types of media on it, Horgan argues. “What Murdoch is trying to do (perhaps unintentionally) is destroy that mental disconnect, and ask us to pay for media within a medium.”

Two other paid-content tidbits worth noting: Christian Science Monitor editor John Yemma told paidContent that news organizations’ future online will come not from “digital razzle dazzle,” but from relevant, meaningful content. And Damon Kiesow plotted paid content on a supply-and-demand curve, concluding that, not surprisingly, we have an oversupply of information.

Chatroulette, serendipity and the news: The random video chat site Chatroulette has drawn gobs of attention from media outlets, so it was probably only a matter of time before some of them applied the concept to online news. Daniel Vydra, a software developer at The Guardian, was among the first this week when he created Random Guardian and New York Times Roulette, two simple programs that take readers to random articles from those newspapers’ websites. Consultant Chris Thorpe explained the thinking behind their development — a Clay Shirky-inspired desire to recapture online the serendipity that a newspaper’s bundle provides.

GigaOm’s Mathew Ingram wrote about the project approvingly, saying he expects creative, open API projects like this to be more successful in the long run than Rupert Murdoch’s paywalls. Also, Publish2’s Ryan Sholin noted that while the moniker “Chatroulette for news” has everyone excited, the concept itself has been around for quite a while.

Meanwhile, the idea sparked deeper thoughts from two CUNY j-profs about the concept of serendipity and the news. Here at the Lab, C.W. Anderson argued that true serendipity involves coming across perspectives you don’t agree with, and asked how one might create a true “news serendipity maker” that could take into account your news consumption patterns, then throw you some curveballs. And in a short but smart post, Jeff Jarvis said that serendipity is not mere randomness, but unexpected relevance — “the unknown but now fed curiosity.”

How much slack can nonprofits take up?: Alan Mutter, an expert in the dollars-and-cents world of the news business both traditionally and online, raised a pretty big stink this week with a post decrying the idea that nonprofits can carry the bulk of the load of journalism. The numbers at the core of Mutter’s argument are simple: Newspapers are spending an estimated $4.4 billion annually on newsgathering, and it would take an $88 billion endowment to provide that much money each year. That would be more than a quarter of the $307.7 billion contributed to charity in 2008 — a ridiculously tall order.

Mutter drew a lot of fire in his comment section for attacking a straw man with that argument, as he didn’t cite any specific people who are claiming that nonprofits will, in fact, take over the majority of journalism’s funding. As many of those folks wrote, the nonprofit advocates have always claimed that they’ll be a part of the network that makes up journalism’s future, not the network itself. (One of them, Northeastern prof Ben Compaine, had made that exact argument just a few days earlier, and Steve Outing made a similar one in response to Mutter’s post.)

John Thornton, a co-founder of the nonprofit Texas Tribune, wrote the must-read point-by-point response, taking issue with the basis of Mutter’s math and his assumption that market-driven solutions are “inherently superior” to non-market ones. Besides, he argued, serious journalism hasn’t exactly been doing business like gangbusters lately, either: “Expecting investors to continue to fund for-profit, Capital J journalism just ‘cuz:  doesn’t that sound a lot like charity?” Reuters financial blogger Felix Salmon weighed in with similar numbers-based objections, as did David Cay Johnston.

Reading roundup: One mini-debate, and four nifty resources:

Former tech/biz journalist Chris Lynch fired a shot at j-schools in a post arguing that the shrunken (but elite) audiences resulting from widespread news paywalls would cause “most journalism schools to shrink or disappear.” Journalism schools, he said, are teaching an outdated objectivity-based philosophy that doesn’t hold water in the Internet era, when credibility is defined much differently. Gawker’s Ravi Somaiya chimed in with an anti-j-school rant, and North Carolina j-school dean Jean Folkerts and About.com’s Tony Rogers (a community college j-prof) leaped to j-schools’ defense.

Now the four resources:

— Mathew Ingram of GigaOm has a quick but pretty comprehensive explanation of the conundrum newspapers are in and some of the possible ways out. Couldn’t have summed it up better myself.

— PBS MediaShift’s Jessica Clark outlines some very cool efforts to map out local news ecosystems. This will be something to keep an eye out for, especially in areas with blossoming hyperlocal news scenes, like Seattle.

— Consider this an addendum to last month’s South by Southwest festival: Ball State professor Brad King has posted more than a dozen short video interviews he conducted there, asking people from all corners of media what the most interesting thing they’re seeing is.

— British j-prof Paul Bradshaw briefly gives three principles for reporters in a networked era. Looks like a pretty good journalists’ mission statement to me.

March 31 2010

01:22

Serendipity is unexpected relevance

Serendipity is not randomness. It is unexpected relevance.

I constantly hear the fear that serendipity is among the many things we’re supposedly set to lose as news moves out of newsrooms and off print to online. Serendipity, says The New York Times, is lost in the digital age. Serendipity, it is said, is something we get from that story we happen upon as we flip pages, the story we never would have searched for but find only or best in print. Serendipity, it is also said, is the province and value of editors, who pick the fluky and fortuitous for us. Without serendipity, as I hear it, we’ll be less-well informed (all work, no play, makes Jack a dull boy; all relevance, not serendipity, makes Jill a predictable girl).

A few days ago, a Guardian guy, inspired by Clay Shirky, hacked together a serendipity generator: just a random story served up on a click. It wasn’t a serious solution, just fun. But it focused the serendipity question for me.

What is serendipity? It’s not a story from left field. It’s not, I think, “the opposite of what you normally consumed.” There’s a reason we find value in the supposedly serendipitous. When I started Entertainment Weekly, I said that our features had to satisfy a curiosity you didn’t know you had — but you end up having it. When we read a paper and find a good story that we couldn’t have predicted we’d have liked, we think that is serendipity. But there’s some reason we like it, that we find it relevant to us. Maybe that relevance is the unknown but now fed curiosity, maybe it’s enjoyment of good writing or a certain kind of tale, maybe the gift of some interesting fact we want to share and gain social equity for, maybe it’s a challenge to our ideas, maybe an answer to a question that has bugged us. In the end, it has value to us; it’s relevant.

Can that relevance be analyzed and served? Can we still get serendipity online? Of course, we can and do — mostly on Twitter and Facebook. Serendipity comes from friends who find that story and — like an editor — pass it on. If we share their judgment, we may like what they share and call that serendipity. But there’s plenty that passes me by on Twitter that I don’t like; it’s serendipitous by the usual definitions but it doesn’t work for me because it has no value; it’s not relevant.

Can an algorithm serve us serendipity? Maybe, if it has enough signals of what we and people we trust like, what interests us, what we need, our context. It can calculate and predict and try to serve our relevance and serendipity. I think serendipity comes not from one-size-fits-all editing but from better targeting across a larger pool of possibilities. If Google can intuit intent, I think it can also serve surprise and serendipity.

MORE: See also Chris Anderson and Mathew Ingram on serendipity.

March 29 2010

17:47

What would it take to build a true “serendipity-maker”?

What if we created a “ChatRoulette for news” that generated content we tended to disagree with — but was also targeted toward our regular levels and sources of news consumption? How hard would it be?

For the last 24 hours or so, the Twitter-sphere has been buzzing over Daniel Vydra’s “serendipity maker,” an off-the-cuff Python hack that draws on the APIs of the Guardian, New York Times, and Australian Broadcasting Corp. in order to create a series of “news roulettes.” In sum, hit a button and you’ll get taken to a totally random New York Times, Guardian, or ABC News story. As the Guardian noted on its technology blog, “the idea came out of a joking remark by Chris Thorpe yesterday in a Guardian presentation by Clay Shirky that what we really need is a ‘Chatroulette for news’”:

After all, we do have loads of interesting content: but the trouble with the way that one tends to trawl the net, and especially newspapers, simply puts paid to the sort of serendipitous discovery of news that the paper form enables by its juxtaposition of possibly unrelated — but potentially important — subjects.
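Vydra’s actual hack isn’t reproduced here, but the core of a “news roulette” is tiny. A minimal sketch, with a hypothetical fetcher standing in for a real content-API call (the Guardian’s content API, for instance, could fill that role):

```python
import random

def pick_random_article(fetch_articles, rng=random):
    """Fetch a batch of articles and return one at random.

    fetch_articles: a zero-argument callable returning a list of
    article URLs (e.g. a thin wrapper around a newspaper's API).
    """
    articles = fetch_articles()
    if not articles:
        return None
    return rng.choice(articles)

# Hypothetical fetcher for illustration only; a real one would hit
# a content API and pull article URLs out of the JSON response.
def fake_guardian_fetch():
    return [
        "https://www.theguardian.com/story-a",
        "https://www.theguardian.com/story-b",
        "https://www.theguardian.com/story-c",
    ]

print(pick_random_article(fake_guardian_fetch))
```

Wiring that to a button that redirects the browser to the returned URL is all the “roulette” amounts to — which is part of why it was an afternoon hack rather than a product.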

This relates to the much-debated theoretical issue of “news serendipity,” summarized here by Mathew Ingram. In essence, the argument goes that while there is more news on the web, our perspectives on the news are narrower because we only browse the sites we already agree with, or know we already like, or care about. In newspapers, however, we “stumbled upon” (yes, pun intended) things we didn’t care about, or didn’t agree with, in the physical act of turning the page.

As Ryan Sholin has been pointing out all morning on Twitter, the idea of a “serendipity maker” for the web isn’t entirely new. And I don’t know if the current news roulettes really solve the problem journalism theorists are concerned about. So I’d like to know: What would it take to create a news serendipity maker that automatically knew and “factored in” your news consumption patterns, but then showed you web content that was the opposite of what you normally consumed?

For example, I’m naturally hostile to the Tea Party as a political organization. What if someone created a roulette that automatically generated news content sympathetic to the Tea Party? And what if they found a way to key it to my news consumption patterns even more strongly — i.e., if somehow the roulette knew I was a regular New York Times reader, it would pick Tea Party-friendly articles written either by the Times or by outlets like the Times (rather than, say, random angry blog posts)?

I think this is interesting, because it would basically hack the entire logic of the web. The beauty of the web is that it can direct you towards ever more finely grained content which is exactly what you want to read. It would somehow know what you wanted even before you did. In other words, it might be the opposite of what Mark S. Luckie called “a Pandora for news.” And it would solve a very real social problem — or at least a highly theorized social problem — what Cass Sunstein calls the drift towards a “Daily Me” or “Daily We,” where we only read news content we already agree with, and our political culture suffers as a result.
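One minimal way to sketch this “anti-Pandora,” under the strong assumption that both a reader’s history and candidate stories can be reduced to topic weights: score each story by how *unfamiliar* its topics are to the reader, then surface the top scorer. The function names and topic tags below are illustrative, not any real system’s API:

```python
def unfamiliarity_score(profile, story_topics):
    """Score a story by how little its topics figure in a reader's diet.

    profile: dict mapping topic -> fraction of the reader's past
    consumption (weights roughly summing to 1.0).
    story_topics: list of topics tagged on a candidate story.
    """
    if not story_topics:
        return 0.0
    # A topic the reader never touches contributes 1.0;
    # a topic they read constantly contributes close to 0.0.
    return sum(1.0 - profile.get(t, 0.0) for t in story_topics) / len(story_topics)

def serendipity_pick(profile, stories):
    """Return the candidate story least like the reader's usual diet."""
    return max(stories, key=lambda s: unfamiliarity_score(profile, s["topics"]))

profile = {"politics-left": 0.6, "tech": 0.4}  # reader mostly reads left politics
stories = [
    {"id": "familiar", "topics": ["politics-left"]},
    {"id": "curveball", "topics": ["tea-party"]},
]
print(serendipity_pick(profile, stories)["id"])  # "curveball"
```

The hard part isn’t this scoring step — it’s the quality filter Anderson describes, i.e. restricting the candidate pool to outlets comparable to what the reader already trusts, so the curveball is a Times-caliber Tea Party piece and not a random angry blog post.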

So. This is a shout out for news hackers, developers, and others to weigh in: How hard would it be to create a machine like this? How would you do it? Would you do it? I would really like to write a longer post on this, based on your replies. So feel free to chime in in the comments section, or email me directly with your thoughts. I’d like to include them in my next post.

March 08 2010

16:12

Zooming the news: Is Seadragon a new news interface?

Frédéric Filloux has an interesting piece in this week’s Monday Note (which, if you’re not already reading, you should be). It’s on Microsoft’s work on Seadragon, which is a piece of tech that allows “infinite zooming”:

This is what Seadragon is about: it lets you dive into an image down to the smallest detail, all done seamlessly over the internet. The Seadragon deep-zooming system achieves such fluidity by sending requests to a database of “tiles”, each one holding a fraction of the total image. The required tiles load as we zoom and pan. And because each request is of a modest size — it only needs to cover a fraction of our screen — the process works fine with a basic internet connection.
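The tile bookkeeping behind that fluidity is simple: given a viewport in pixel coordinates at the current zoom level, only the tiles the viewport overlaps need to be requested. A sketch, assuming square 256-pixel tiles (the Deep Zoom default):

```python
def tiles_for_viewport(x0, y0, x1, y1, tile=256):
    """Return the (column, row) tile coordinates covering a viewport.

    Coordinates are pixels at the current zoom level; the viewport
    spans [x0, x1) by [y0, y1). Only these tiles need requesting.
    """
    cols = range(x0 // tile, (x1 - 1) // tile + 1)
    rows = range(y0 // tile, (y1 - 1) // tile + 1)
    return [(c, r) for r in rows for c in cols]

print(tiles_for_viewport(0, 0, 512, 256))  # [(0, 0), (1, 0)]
```

Each zoom level doubles the image’s pixel dimensions, so zooming in just reruns this lookup at the next level — the request count stays proportional to the screen, not the image.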

Filloux argues that something like Seadragon might be a new interface for news:

In a prototype, they used a set of 6,400 pages of the final editions of the Seattle Post-Intelligencer, the local daily that folded a few months ago. Let’s picture this: one year of a daily newspaper entirely shown on one screen. 365 days x 50 pages of newspaper on average — that is about 17,800 pages to navigate. At first, this collection is represented using a series of thumbnails that are too small to be identified. One click breaks up the stack by month, another click organizes it in a much more manageable set of weeks. Now, I pick up an issue and dive in…Unlike the hyperlink system I use when going from one page to another, in the Seadragon-based interface I’m not leaving my “newspaper”. I’m staying inside the same zoomable set of elements. As I land on a page of interest, again, I can zoom in to a particular story (which, in passing, reconstructs itself in order to avoid the “old-style” jump to the article’s continuation on another page).

I absolutely agree that we’re nowhere near a stable endpoint for how we present news online — there’s a huge need for innovation. (One of the things I admire most about Gawker Media, for example, is that they are willing to rethink basic elements like comments, post styles, and ad placement. And the chance to try new presentation forms is one of the most exciting things about the iPad.)

But I’d push back against the idea of a Seadragon-like interface being the future. Two reasons:

People don’t like immersive environments online as much as some would like to think. Compare the amount of hype Second Life got to the actual amount of use it gets today. (How are all those Second Life “news bureaus” doing today?) I remember back when VRML was the future, when we would all, by 2002, be spending our time walking through news corridors and news caves. Aside from World of Warcraft and other games, users have consistently been less interested in immersive experiences than technologists have. When we’re seeking information, as opposed to play, we’ve defaulted to something closer to flat navigation. I don’t think that’s the endpoint of news, but I think it’s an indicator that “diving deep” into a geographic news landscape might not be the metaphor that wins out.

The main problem with contemporary news navigation is discovery, not depth. Most news consumers are looking for interesting content, stories they’ll enjoy, photos they’ll like to look at, videos they’ll think are worth watching. One reason time-on-site is so low for news sites is that, when a story grabs someone’s interest, news sites do a bad job of showing them other stories that will grab it again. News organizations produce a ton of content, but it’s difficult to present it all well to readers. That, to me, is the big challenge, not the need for the sort of depth that an infinite-zoom metaphor might provide.

But that’s just my quick take. What do you guys think: Is something like Seadragon going to be a big influence on how we navigate news in the near future?

December 08 2009

14:44

What’s your problem with the internet? A crib sheet for news exec speeches

When media executives (and the occasional columnist on a deadline) talk about ‘the problem with the web’ they often revert to a series of recurring themes. In doing so they draw on a range of discourses that betray assumptions, institutional positions and ideological leanings. I thought I’d put together a list of some common memes of hatred directed towards the internet at various points by publishers and journalists, along with some critical context.

If you can think of any other common complaints, or responses to the ones below, post them in the comments and I’ll add them in.

Undemocratic and unrepresentative (The ‘Twitterati’)

The presumption here is that the media as a whole is more representative and democratic than users of the web. You know, geeks. The ‘Twitterati’ (a fantastic ideologically-loaded neologism that conjures up images of unelected elites). A variant of this is the position that sees any online-based protest as ‘organised’ and therefore illegitimate.

Of course the media is hardly representative or democratic on any level. In every general election in the UK during the twentieth century, for example, editorial opinion was to the right of electoral opinion (apart from 1997). In 1983, 1987 and 1992, press support for the Conservative Party exceeded the party’s share of the vote by at least half. Similar stats can be found in US election coverage. The reasons are obvious: media owners are not representative or democratic; by definition they are part of a particular social class: wealthy proprietors or shareholders (although there are other factors such as advertiser influence and organisational efficiencies).

Journalists themselves are not representative either in terms of social class, gender, or ethnicity – and have become less representative in recent decades.

But neither is the web a level playing field. Sadly, it has inherited most of the same barriers to entry that permeate the media: lack of literacy, lack of access and lack of time prevent a significant proportion of the population from having any voice at all online.

So any treatment of internet-based opinion should be done with caution. But just as not everyone has a voice online, even fewer people have a voice in print and broadcast. To accuse the web of being unrepresentative can be a smokescreen for the lack of representation in the mainstream media. When a journalist uses the unrepresentative nature of the web as a stick, ask how their news selection process presents a solution to that: is there a PR agency for the poor? Do they seek out a response from the elderly on every story?

And there is a key difference: while journalism becomes less representative, web access becomes more so, with governments in a number of countries moving towards providing universal broadband and access to computers through schools and libraries.

‘The death of common culture’

The internet, this argument runs, is preventing us from having a common culture we can all relate to. Because we are no longer restricted to a few terrestrial channels and a few newspapers – which all share similar editorial values – we are fragmented into a million niches and unable to relate to each other.

This is essentially an argument about culture and the public sphere. The literature here is copious, but one of the key planks is ‘Who defines the public sphere? Who decides what is shared culture?’ Commercial considerations and the needs of elite groups play a key role in both. And of course, what happens if you don’t buy into that shared culture? Alternative media has long attempted to reflect and create culture outside of that mainstream consensus.

You might also argue that new forms of common culture are being created – amateur YouTube videos that get millions of hits; BoingBoing posts; Lolcats; Twitter discussions around jokey hashtag memes – or that old forms of common culture are being given new life: how many people are watching The Apprentice or X Factor because of simultaneous chatter on Twitter?

The ‘echo chamber’/death of serendipity (homophily)

When we read the newspapers or watched TV news, this argument runs, we encountered information we wouldn’t otherwise know about. But when we go online, we are restricted to what we seek out – and we seek out views to reinforce our own (homophily or cyberbalkanisation).

Countering this, it is worth pointing out that in print people tended to buy a newspaper that also supported their own views, whereas online people switch from publication to publication with differing political orientations. It’s also worth pointing out that over 80% of people have come across a news article online while searching for something else entirely. Many websites have ‘related/popular articles/posts/videos’ features that introduce some serendipity. And finally, there is the role of social media in introducing stories we otherwise wouldn’t encounter (a good example here is the Iran elections – how many people would have skimmed over that in a publication or broadcast, but clicked through because someone was tweeting #cnnfail).

That’s not to say homophily doesn’t exist – there is evidence to suggest that people do seek out reinforcements for their own views online – but that doesn’t mean the same trend didn’t exist in print and broadcast, and it doesn’t make that true of everyone. I’d argue that the serendipity of print/broadcast depends on an editor’s news agenda and the serendipity of online depends on algorithms and social networks.

‘Google are parasites’

This argues that Google’s profits are based on other people’s content. I’ve tackled the Google argument previously: in short, Google is more like a map than a publication, and its profits are based on selling advertising very effectively against searches, rather than against content (which is the publisher’s model). It’s also worth pointing out that news content only forms around 0.01% of indexed content, and that news-related searches don’t tend to attract much advertising anyway. (If they did, Google would be trying to monetise Google News.)

It’s often worth looking at the discourses underlying much of the Google-parasite meme. Often these revolve around it being ‘not fair’ that Google makes so much money; around ‘the value of our content’ as if that is set by publishers rather than what the market is willing to pay; and around ‘taking our content’ despite the fact that publishers invite Google to do just that through a) deciding not to use the Robots Exclusion Protocol (ACAP appears to be an attempt to dictate terms, although it’s not technically capable of doing so yet) and b) employing SEO practices.
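For reference, the opt-out that publishers decline to exercise is tiny: excluding Google’s main crawler from an entire site takes two lines in the site’s robots.txt.

```text
User-agent: Googlebot
Disallow: /
```

That blocks Googlebot site-wide; finer-grained control (per path, or per crawler) is equally available, which is what makes the “taking our content” complaint hard to sustain.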

Another useful experiment with these complaints is to look at what result publishers are really aiming for. Painting Google as a parasite can, variously, be used as an argument to relax ownership rules; to change copyright law to exclude fair comment; or to gain public subsidy (for instance, via a tax on Google or other online operators). In a nutshell, this argument is used to try to re-acquire the monopoly over distribution that publishers had in the physical world, and the consequent ability to set the price of advertising.

‘Bloggers are parasites’

A different argument to the one above, this one seeks to play down the role of bloggers by saying they are reliant on content from mainstream media.

Of course, you could equally point out that mainstream media is reliant on content from PR agencies, government departments, and, most of all, each other. The reliance of local broadcasters on local newspaper content is notorious; the lifting of quotes from other publications equally common. There’s nothing necessarily wrong with that – journalists often lift quotes for the same reasons as bloggers – to contextualise and analyse. The difference is that bloggers tend to link to the source.

Another point to make here is some blogs’ role as ‘Estate 4.5’, monitoring the media in the same way that the media is supposed to monitor the powerful. “We can fact-check your ass!”

‘You don’t know who you’re dealing with’

[Cartoon: “On the Internet, nobody knows you’re a dog”]

Identity is a complex thing. While it’s easy to be anonymous online, the assertions that people make online are generally judged by their identities, just as in the real world.

However, an identity is more than just a name – online, more than anything, it is about reputation. And while names can be faked, reputations are built over time. Forum communities, for example, are notorious for having a particularly high threshold when it comes to buying into contributions from anyone who has not been an active part of that community for some time. (It’s also worth noting that there’s a rich history of anonymous/pseudonymous writing in newspapers).

Users of the web rely on a range of cues and signals to verify identity and reputation, just as they do in the physical world. There’s a literacy to this, of course, which not everyone has to the same level. But you might argue that it is in some ways easier to establish the background of a writer online than it was for their print or broadcast counterparts. On the radio, nobody knows you’re a dog.

Rumour and hearsay ‘magically become gospel’

They say “a lie is halfway round the world before the truth has got its boots on”. And it’s fair to say that there is more rumour and hearsay online, for the simple reason that there is more content and communication of every kind online (which means there is also more factual and accurate information). But myths aren’t restricted to one medium – think of the various ‘Winterval’ stories propagated by a range of newspapers that have gained such common currency. Or how about these classics:

[Image: Daily Express front page – “Migrants take all new jobs”]

The interactive nature of the web does make it easier for others to debunk hearsay through comments, responses on forums, linkbacks, hashtagged tweets and so on. But interactivity is a quality of use, not of the thing itself, so it depends on the critical and interactive nature of those browsing and publishing the content. Publishers who don’t read their comments, take note.

‘Unregulated’ lack of accountability

Accountability is a curious one. Often those making this assertion are used to particular, formal forms of accountability: the Press Complaints Commission; Ofcom; the market; your boss. Online, the forms of accountability are less formal, but they can be quite savage. A ream of critical comments makes you accountable very quickly. Look at what happened to Robert Scoble when he posted something inaccurate, or to Jan Moir when she wrote something people felt was in bad taste. That accountability didn’t exist in the formal structures of mainstream media.

Related to this is the idea that the internet is ‘unregulated’. Of course it is regulated – you have (ironically, relatively unaccountable) organisations like the Internet Watch Foundation, and the law applies just as much online as in the physical world. Indeed, there is a particular problem with one country’s laws being used to pursue people abroad – see, for example, how Russian businessmen have sued American publishers in London over articles that were accessed only a few times online. On the other hand, people can escape the attentions of lawyers by mirroring content in other jurisdictions, by simply being too small a target to be worth a lawyer’s time, or by being so many that it is impractical to pursue them all. These characteristics of the web can be used in the defence of freedoms (see Trafigura) as much as for attacks (hate literature).

Triviality

Trivial is defined as “of very little importance or value”. This is of course a subjective value judgement, depending on what you feel is important or valuable. The objection to the perceived triviality of online content – particularly that of social networks and blogs – is another way to deprecate an upstart rival based on a normative ideal of the importance of journalism. And while there is plenty of ‘important’ information in the media, there is plenty of ‘trivial’ material too, from the 3am girls to gift ideas and travel supplements.

The web has a similar mix. To focus on the trivial is to intentionally overlook the incredibly important. And it is also to ignore the importance of so much apparently ‘trivial’ information – what my friends are doing right now may be trivial to a journalist, but it’s useful ‘news’ or content to me. And in a conversational medium, the exchange of that information is important social glue.

To take journalists’ own news values: people within your social circle are ‘powerful’ within that circle, and therefore newsworthy, to those people, regardless of their power in the wider world.

‘Cult of the amateur’ undermining professionals

This argument has, for me, strange echoes of the arguments against universal suffrage at various points in history. Replace ‘bloggers’ with ‘women’ or ‘the masses’ and ‘professionals’ with ‘men’ or ‘the aristocracy’ in these arguments and you have some idea of the ideology underlying them. It’s the notion that only a select portion of the population are entitled to a voice in the exercise of power.

The discourse of ‘amateur’ is particularly curious. The implication is that amateur means poor quality, whereas it simply means unpaid. The modern Olympics was built on amateurism, yet you would hardly question the quality of Olympic achievement; and in the 19th century much scientific discovery was the work of amateur scientists.

Professional, on the other hand, is equated with ‘good’. But professionalism has its own weaknesses: the pressures of deadlines, the pressures of standardisation and efficiency, commercialism and market forces, and organisational culture.

That’s not to say that professionalism is bad, either, but that both amateurism and professionalism have different characteristics which can be positive or negative in different situations.

There’s an economic variant of this argument which suggests that people volunteering their efforts for nothing undermine the economic value of those who do the same work as part of a paid job. This is superficially true, but there are two responses. The first is that part of what you pay for is the guarantee that work will be finished within a particular timeframe and to a particular quality – guarantees amateur labour cannot offer (amateurs also choose what they want to work on) – so the threat is not as large as it is painted. The second is that jobs may have to adapt to this supply of volunteer information: instead of, or as well as, creating content, the role becomes to verify it, contextualise it, link it, analyse it, filter it, or manage it. After all, we don’t complain about the ‘cult of the volunteer’ undermining charity work, do we?

Thanks to Nick Booth, Jon Bounds, Will Perrin, Alison Gow, Michele Mclellan, King Kaufman, Julie Posetti, Mark Pack, James Ball, Shane Richmond, Clare White, Sarah Hartley, Mary Hamilton, Matt Machell and Mark Coughlan for contributing ideas via Twitter under the #webhate tag.
