
June 20 2013

15:00

“If you’re not feeling it, don’t write it”: Upworthy’s social success depends on gut-checking “regular people”

Back in November, the Lab’s own Adrienne LaFrance took an in-depth look at Upworthy, a social packaging and not-quite-news site that has become remarkably successful at making “meaningful content” go viral. She delved into their obsession with testing headlines, their focus on things that matter, their aggressive pushes across social media, and their commitment to finding stories with emotional resonance.

Things have continued to go well for Upworthy — they’re up to 10 million monthly uniques from 7.5 million. At the Personal Democracy Forum in New York, editorial director Sara Critchfield shared what she sees as Upworthy’s secret sauce for shareability, namely, seeking out content that generates a significant emotional response from both the reader and the writer.

A slide from Critchfield’s Personal Democracy Forum presentation.

Critchfield emphasized that using emotional input in editorial planning isn’t about making ad hoc decisions; it’s about making space for that data in the workflow, or “making it a bullet point.”

When I spoke with Critchfield after her talk, she underscored the way in which packaging content is Upworthy’s bread and butter (most likely WonderBread and Land o’ Lakes [Sorry, Don Draper]).

“If you watch people shop in a grocery store, 95% of the time they are scanning the shelves for the packaging, making the choices on that before they turn the bottle around and look at the nutrition information. People choose their media that way too. So you can have a piece of media with the exact same nutritional value in it with different packaging and the consumer is going to choose the one that appeals to them most,” she said.

But before you can package content, you have to create it — or at least, select it from out of the vastness of the Internet. The people who do that are Critchfield’s handpicked team of curators.

“Of the things we curate at Upworthy, I think our editorial staff is what we pride ourselves the most on curating. We really focus on regular people. We reject the idea that the media elite, or people who have been trained in a certain way, somehow have a monopoly on editorial judgment about what matters or should matter. So we focus almost exclusively on hiring non-professionally trained writers,” she says. “To be honest, it’s sometimes difficult for folks who have a professional background to come into Upworthy and have success.”

In other words, Critchfield builds the element of genuine emotional response into her team by hiring people who were never trained to worry about what’s news, and what isn’t.

“I tell my writers, ‘If you’re not feeling it, don’t write it,’” says Critchfield. “We don’t really force people; we don’t let an editorial calendar dictate what we do. There will be big current events, and if someone on staff feels really passionate about it, then we cover it. And if there aren’t, then we don’t.”

The vast majority of Upworthy’s traffic comes from social media sites, where Critchfield says conversation is more valuable to the reader anyway. Some of their biggest hits have been about the economy, bullying, and, most recently (as shown in her talk), funding cancer research after a young musician died of pancreatic cancer.

Critchfield says she encourages her curators to have a huge vision for their posts: if they don’t expect a post to get millions of views, then it’s not worth publishing. Adam Mordecai is a great example of that kind of intuition, she says. He’s the guy who posted “This Kid Just Died. What He Left Behind is Wondtacular,” the video about cancer that ended up raising tens of thousands of dollars. (The original YouTube video got 433,000 Facebook shares; Upworthy’s got 2.5 million.)

Trained journalists are often rubbed the wrong way by the idea of writing headlines like that, or being asked to spend so much time on them. (Critchfield says instead of spending 58 minutes writing a story and 2 minutes on a headline, most journalists would be better served by spending 30 or 40 minutes on their piece and 20 to 30 on their headline. “People look at me and say that’s crazy, I don’t have time, I would never do that,” she says, “and they walk away all sad. That’s happened to me over and over again.”)

“I have a broadcast journalist who just came in and said, ‘Sara, I just can’t get over it. Every time I write ‘wanna’ in a headline, I feel like I’m going to hell,’” she says. “You have to match appropriately to the context. You’re competing — people on Facebook are at a party. They’re around friends, they’re trying to define themselves, they’re trying to look at baby pictures. You have to join the party, but be the cooler kid at that party. You’re not going to do it by speaking formally to people who are there to have fun.”

Fighting that training can be hard, which is why Critchfield has so carefully assembled a team of “normal people.” “In the curation of the staff, I look for heart. What moves this person? There are people on staff — I have an improv comedian, I have a professional poker player, I have someone who works for the Harlem Children’s Zone, I have a person who used to be a software developer,” says Critchfield. “What they’re trained in isn’t as important as the compilation of a group of people with various hearts and passions.”

Or at least mostly normal people. Femi Oke was a radio producer when she decided to apply for a job at Upworthy. Oke says she was looking for a side gig that would give her experience with social media when she saw an ad for the job. “In typical Upworthy fashion,” she says, “it wasn’t a normal ad. It was a crazy ad — it was really intriguing.”

Oke describes going through an intensive training process at a retreat in Colorado where the curators learned to “speak Upworthy.” At first, she was surprised that the majority of the staff weren’t journalists, but soon the strategy of broadening the audience through diverse hires started to make more sense. But as the site’s popularity grew, Oke says it became increasingly important for curators to embrace traditional media tasks, like fact-checking. “As people started to see them as news, they started doing things news organizations would do,” says Oke. “They have such a fantastic reputation, they don’t want to ruin it.”

Since starting at Upworthy, Oke’s been hired to host The Stream, Al-Jazeera’s social media-centric daily online TV show, a concept born out of the Arab Spring. “At the end of each show, we have a teaser for what we’re doing on the next show. It would be a really heavy, intense, stodgy but accurate breakdown of what the next day’s subject is. I walked in and said, if we can’t make it a one-liner where I’m going to watch the show tomorrow, we shouldn’t be writing that,” Oke remembers. “My producers said, ‘Oh my god, she’s crazy.’”

So for a show on the 50th anniversary of the African Union, she might say “Happy 50th birthday, African Union! Are you looking good — or do you need a makeover?”

“That’s me anyway, but Upworthy made me even more certain that that was the style of broadcast that works for all media. It’s about being inclusive, accepting, and inviting people in.”

The one thing Critchfield says brings all the curators together is their competitive spirit and obsession with metrics. All Upworthy curators have direct access to the analytics for their work, and she says they are obsessed with testing different tricks. (How many more people will click this story if there’s a curse word in the headline?) But Critchfield says no post gets published without gut-checking its author to see how committed they are to the larger cause it’s meant to represent.
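
To see what such a headline test involves, here is a back-of-envelope sketch in Python: two variants, their click counts, and a two-proportion z-test. The numbers, and the choice of test, are purely illustrative; Upworthy hasn’t described its testing tooling in this detail.

from math import sqrt

# Hypothetical results from showing two headline variants to equal samples.
clicks_a, views_a = 120, 10000   # plain headline
clicks_b, views_b = 165, 10000   # variant with the curse word

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
z = (p_b - p_a) / sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
print(f"CTR {p_a:.2%} vs {p_b:.2%}, z = {z:.2f}")  # |z| > 1.96: significant at 5%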

“We’ve really clarified internally that we can’t separate data analytics from human editorial judgment. Working to combine those two together is sometimes difficult,” she says. “What makes a thing viral can have just as much to do with how the person writing the piece up or working with the piece feels about it as it does with big data or listening tools.”

Photo by Esty Stein / Personal Democracy Media used under a Creative Commons license.

June 18 2013

17:20

Adobe Finds Tablets Racing Ahead For Retailers

Time was, the term “mobile” could be used to describe a swathe of devices. But now the market is so rich with portable gadgets, it’s time to get more granular, according to Adobe digital marketing SVP and GM Brad Rencher.

“A lot of people are still lumping smartphones together with tablets, together with other types of mobile device,” Rencher told Beet.TV during the Cannes Lions advertising confab. “We’ve seen very different behaviour in terms of how and when people use those devices.”

“Tablets are becoming a powerhouse in terms of engagement with apps and shopping. People are spending more time with tablets between the hours of 7 p.m. and 10 p.m. Smartphones tend to be out and about during the middle of the day, looking for directions or for a restaurant.

“Tablets are becoming a retailer’s dream. We buy more often when we shop on tablets than we do on desktops or smartphones. And when we buy, we buy 25 percent more product on tablets than we do on any other platform.”

Rencher bases the differentiation on “hundreds and hundreds of millions of interactions” from Adobe’s Marketing Cloud advertiser analytics suite.

May 29 2013

16:51

What’s New in Digital Scholarship: Teen sharing on Facebook, how Al Jazeera uses metrics, and the tie between better cellphone coverage and violence

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on the Press, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Roughly once a month, JR managing editor John Wihbey will sum up for us what’s new and fresh.

This month’s edition of What’s New In Digital Scholarship is an abbreviated installment — we’re just posting our curated list of interesting new papers and their abstracts. We’ll provide a fuller analysis at the half-year mark, in our June edition. Until then, happy geeking out!

“Mapping the global Twitter heartbeat: The geography of Twitter.” Study from the University of Illinois Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, published in First Monday. By Kalev Leetaru, Shaowen Wang, Guofeng Cao, Anand Padmanabhan, and Eric Shook.

Summary: “In just under seven years, Twitter has grown to count nearly three percent of the entire global population among its active users who have sent more than 170 billion 140-character messages. Today the service plays such a significant role in American culture that the Library of Congress has assembled a permanent archive of the site back to its first tweet, updated daily. With its open API, Twitter has become one of the most popular data sources for social research, yet the majority of the literature has focused on it as a text or network graph source, with only limited efforts to date focusing exclusively on the geography of Twitter, assessing the various sources of geographic information on the service and their accuracy. More than three percent of all tweets are found to have native location information available, while a naive geocoder based on a simple major cities gazetteer and relying on the user-provided Location and Profile fields is able to geolocate more than a third of all tweets with high accuracy when measured against the GPS-based baseline. Geographic proximity is found to play a minimal role both in who users communicate with and what they communicate about, providing evidence that social media is shifting the communicative landscape.”

“Predicting Dissemination of News Content in Social Media: A Focus on Reception, Friending, and Partisanship.” Study from Ohio State, published in Journalism & Mass Communication Quarterly. By Brian E. Weeks and R. Lance Holbert.

Summary: “Social media are an emerging news source, but questions remain regarding how citizens engage news content in this environment. This study focuses on social media news reception and friending a journalist/news organization as predictors of social media news dissemination. Secondary analysis of 2010 Pew data (N = 1,264) reveals reception and friending to be positive predictors of dissemination, and a reception-by-friending interaction is also evident. Partisanship moderates these relationships such that reception is a stronger predictor of dissemination among partisans, while the friending-dissemination link is evident for nonpartisans only. These results provide novel insights into citizens’ social media news experiences.”

“Al Jazeera English Online: Understanding Web metrics and news production when a quantified audience is not a commodified audience.” Study from George Washington University, published in Digital Journalism. By Nikki Usher.

Summary: “Al Jazeera English is the Arab world’s largest purveyor of English language news to an international audience. This article provides an in-depth examination of how its website employs Web metrics for tracking and understanding audience behavior. The Al Jazeera Network remains sheltered from the general economic concerns around the news industry, providing a unique setting in which to understand how these tools influence newsroom production and knowledge creation. Through interviews and observations, findings reveal that the news organization’s institutional culture plays a tremendous role in shaping how journalists use and understand metrics. The findings are interpreted through an analysis of news norms studies of the social construction of technology.”

“Teens, Social Media and Privacy.” Report from the Pew Internet & American Life Project and Harvard’s Berkman Center for Internet & Society. By Mary Madden, Amanda Lenhart, Sandra Cortesi, Urs Gasser, Maeve Duggan, and Aaron Smith.

Summary: “Teens are sharing more information about themselves on social media sites than they have in the past, but they are also taking a variety of technical and non-technical steps to manage the privacy of that information. Despite taking these privacy-protective actions, teen social media users do not express a high level of concern about third-parties (such as businesses or advertisers) accessing their data; just 9% say they are ‘very’ concerned. Key findings include: Teens are sharing more information about themselves on their social media profiles than they did when we last surveyed in 2006: 91% post a photo of themselves, up from 79% in 2006; 71% post their school name, up from 49%; 71% post the city or town where they live, up from 61%; 53% post their email address, up from 29%; 20% post their cell phone number, up from 2%. 60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings: 56% of teen Facebook users say it’s ‘not difficult at all’ to manage the privacy controls on their Facebook profile; 33% of Facebook-using teens say it’s ‘not too difficult’; 8% of teen Facebook users say that managing their privacy controls is ‘somewhat difficult,’ while less than 1% describe the process as ‘very difficult.’”

“Historicizing New Media: A Content Analysis of Twitter.” Study from Cornell, Stony Brook University, and AT&T Labs Research, published in the Journal of Communication. By Lee Humphreys, Phillipa Gill, Balachander Krishnamurthy, and Elizabeth Newbury.

Summary: “This paper seeks to historicize Twitter within a longer historical framework of diaries to better understand Twitter and broader communication practices and patterns. Based on a review of historical literature regarding 18th and 19th century diaries, we created a content analysis coding scheme to analyze a random sample of publicly available Twitter messages according to themes in the diaries. Findings suggest commentary and accounting styles are the most popular narrative styles on Twitter. Despite important differences between the historical diaries and Twitter, this analysis reveals long-standing social needs to account, reflect, communicate, and share with others using media of the times.”

“Page flipping vs. clicking: The impact of naturally mapped interaction technique on user learning and attitudes.” Study from Penn State and Ohio State, published in Computers in Human Behavior. By Jeeyun Oh, Harold R. Robinson, and Ji Young Lee.

Summary: “Newer interaction techniques enable users to explore interfaces in a more natural and intuitive way. However, we do not yet have a scientific understanding of their contribution to user experience and theoretical mechanisms underlying the impact. This study examines how a naturally mapped interface, page-flipping interface, can influence user learning and attitudes. An online experiment with two conditions (page flipping vs. clicking) tests the impact of this naturally mapped interaction technique on user learning and attitudes. The result shows that the page-flipping feature creates more positive evaluations of the website in terms of usability and engagement, as well as greater behavioral intention towards the website by evoking greater perception of natural mapping and greater feeling of presence. In terms of learning outcomes, however, participants who flip through the online magazine show less recall and recognition memory, unless they perceive page flipping as more natural and intuitive to interact with. Participants perceive the same content as more credible when they flip through the content, but only if they appreciate the coolness of the medium. Theoretical and practical implications will be discussed.”

“Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits.” Study from the University of Alabama, Tuscaloosa, and the University of Texas at Austin, published in the Journal of Communication. By Yonghwan Kim, Shih-Hsien Hsu, and Homero Gil de Zuniga.

Summary: “Using original national survey data, we examine how social media use affects individuals’ discussion network heterogeneity and their level of civic engagement. We also investigate the moderating role of personality traits (i.e., extraversion and openness to experiences) in this association. Results support the notion that use of social media contributes to heterogeneity of discussion networks and activities in civic life. More importantly, personality traits such as extraversion and openness to experiences were found to moderate the influence of social media on discussion network heterogeneity and civic participation, indicating that the contributing role of social media in increasing network heterogeneity and civic engagement is greater for introverted and less open individuals.”

“Virtual research assistants: Replacing human interviewers by automated avatars in virtual worlds.” Study from Sammy Ofer School of Communications, Interdisciplinary Center Herzliya (Israel), published in Computers in Human Behavior. By Béatrice S. Hasler, Peleg Tuchman, and Doron Friedman.

Summary: “We conducted an experiment to evaluate the use of embodied survey bots (i.e., software-controlled avatars) as a novel method for automated data collection in 3D virtual worlds. A bot and a human-controlled avatar carried out a survey interview within the virtual world, Second Life, asking participants about their religion. In addition to interviewer agency (bot vs. human), we tested participants’ virtual age, that is, the time passed since the person behind the avatar joined Second Life, as a predictor for response rate and quality. The human interviewer achieved a higher response rate than the bot. Participants with younger avatars were more willing to disclose information about their real life than those with older avatars. Surprisingly, the human interviewer received more negative responses than the bot. Affective reactions of older avatars were also more negative than those of younger avatars. The findings provide support for the utility of bots as virtual research assistants but raise ethical questions that need to be considered carefully.”

“Technology and Collective Action: The Effect of Cell Phone Coverage on Political Violence in Africa.” Study from Duke and German Institute of Global and Area Studies (GIGA), published in the American Political Science Review. By Jan H. Pierskalla and Florian M. Hollenbach.

Summary: “The spread of cell phone technology across Africa has transforming effects on the economic and political sphere of the continent. In this paper, we investigate the impact of cell phone technology on violent collective action. We contend that the availability of cell phones as a communication technology allows political groups to overcome collective action problems more easily and improve in-group cooperation, and coordination. Utilizing novel, spatially disaggregated data on cell phone coverage and the location of organized violent events in Africa, we are able to show that the availability of cell phone coverage significantly and substantially increases the probability of violent conflict. Our findings hold across numerous different model specifications and robustness checks, including cross-sectional models, instrumental variable techniques, and panel data methods.”

Photo by Anna Creech used under a Creative Commons license.

August 21 2012

10:20

Connected TV Viewers Interact, Respond to Ads, YuMe/Magid Study

Connected TV users are receptive to advertising, with 90% of viewers saying they notice ads on the platform, according to research conducted by video ad technology provider YuMe in partnership with research firm Frank N. Magid. Travis Hockersmith, Senior Director of Client Strategy at YuMe, shared the findings of the study with Beet.TV.

About 30% of Internet homes have a connected TV, which includes homes with gaming consoles, over-the-top devices and smart TVs, YuMe learned in a survey of 736 connected TV users conducted online in May and June, Hockersmith says in this interview. About 66% of connected TV users say they’re likely to interact with relevant ads on connected TVs, and nearly 20% say they’ve purchased a product as a result of an ad they’ve seen on connected TV. About 93% of users live in multi-member households. 

Those are promising figures for marketers who are starting to explore the medium. YuMe and Magid also learned that connected TV users say they would rather watch 15- to 30-second ads with free short-form video and streaming shows than pay through monthly subscription or pay-per-view pricing models. For more insight on pricing models and ad receptiveness on connected TVs, check out this video interview.


August 17 2012

16:07

Metrics, metrics everywhere: How do we measure the impact of journalism?

If democracy would be poorer without journalism, then journalism must have some effect. Can we measure those effects in some way? While most news organizations already watch the numbers that translate into money (such as audience size and pageviews), the profession is just beginning to consider metrics for the real value of its work.

That’s why the recent announcement of a Knight-Mozilla Fellowship at The New York Times on “finding the right metric for news” is an exciting moment. A major newsroom is publicly asking the question: How do we measure the impact of our work? Not the economic value, but the democratic value. The Times’ Aaron Pilhofer writes:

The metrics newsrooms have traditionally used tended to be fairly imprecise: Did a law change? Did the bad guy go to jail? Were dangers revealed? Were lives saved? Or least significant of all, did it win an award?

But the math changes in the digital environment. We are awash in metrics, and we have the ability to engage with readers at scale in ways that would have been impossible (or impossibly expensive) in an analog world.

The problem now is figuring out which data to pay attention to and which to ignore.

Evaluating the impact of journalism is a maddeningly difficult task. To begin with, there’s no single definition of what journalism is. It’s also very hard to track what happens to a story once it is released into the wild, and even harder to know for sure if any particular change was really caused by that story. It may not even be possible to find a quantifiable something to count, because each story might be its own special case. But it’s almost certainly possible to do better than nothing.

The idea of tracking the effects of journalism is old, beginning in discussions of the newly professionalized press in the early 20th century and flowering in the “agenda-setting” research of the 1970s. What is new is the possibility of cheap, widespread, data-driven analysis down to the level of the individual user and story, and the idea of using this data for managing a newsroom. The challenge, as Pilhofer put it so well, is figuring out which data, and how a newsroom could use that data in a meaningful way.

What are we trying to measure and why?

Metrics are powerful tools for insight and decision-making. But they are not ends in themselves because they will never exactly represent what is important. That’s why the first step in choosing metrics is to articulate what you want to measure, regardless of whether or not there’s an easy way to measure it. Choosing metrics poorly, or misunderstanding their limitations, can make things worse. Metrics are just proxies for our real goals — sometimes quite poor proxies.

An analytics product such as Chartbeat produces reams of data: pageviews, unique users, and more. News organizations reliant on advertising or user subscriptions must pay attention to these numbers because they’re tied to revenue — but it’s less clear how they might be relevant editorially.

Consider pageviews. That single number is a combination of many causes and effects: promotional success, headline clickability, viral spread, audience demand for the information, and finally, the number of people who might be slightly better informed after viewing a story. Each of these components might be used to make better editorial choices — such as increasing promotion of an important story, choosing what to report on next, or evaluating whether a story really changed anything. But it can be hard to disentangle the factors. The number of times a story is viewed is a complex, mixed signal.
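
To make that disentangling concrete, here is a toy sketch. The data shape — pageview events carrying a referrer field — is invented for illustration and is not any particular analytics product’s API.

from collections import Counter

# Hypothetical pageview events for one story.
events = [
    {"story": "zoning-probe", "referrer": "facebook.com"},
    {"story": "zoning-probe", "referrer": "facebook.com"},
    {"story": "zoning-probe", "referrer": "homepage"},
    {"story": "zoning-probe", "referrer": "google.com"},
]

sources = Counter(e["referrer"] for e in events)
total = sum(sources.values())
for referrer, n in sources.most_common():
    # Homepage placement (promotion), social referrers (viral spread), and
    # search (demand) each suggest a different editorial response.
    print(f"{referrer}: {n / total:.0%} of views")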

It’s also possible to try to get at impact through “engagement” metrics, perhaps derived from social media data such as the number of times a story is shared. Josh Stearns has a good summary of recent reports on measuring engagement. But though it’s certainly related, engagement isn’t the same as impact. Again, the question comes down to: Why would we want to see this number increase? What would it say about the ultimate effects of your journalism on the world?

As a profession, journalism rarely considers its impact directly. There’s a good recent exception: a series of public media “impact summits” held in 2010, which identified five key needs for journalistic impact measurement. The last of these needs nails the problem with almost all existing analytics tools:

While many Summit attendees are using commercial tools and services to track reach, engagement and relevance, the usefulness of these tools in this arena is limited by their focus on delivering audiences to advertisers. Public interest media makers want to know how users are applying news and information in their personal and civic lives, not just whether they’re purchasing something as a result of exposure to a product.

Or as Ethan Zuckerman puts it in his own smart post on metrics and civic impact, “measuring how many people read a story is something any web administrator should be able to do. Audience doesn’t necessarily equal impact.” Not only that, but it might not always be the case that a larger audience is better. For some stories, getting them in front of particular people at particular times might be more important.

Measuring audience knowledge

Pre-Internet, there was usually no way to know what happened to a story after it was published, and the question seems to have been mostly ignored for a very long time. Asking about impact gets us to the idea that the journalistic task might not be complete until a story changes something in the thoughts or actions of the user.

If journalism is supposed to inform, then one simple impact metric would ask: Does the audience know the things that are in this story? This is an answerable question. A survey during the 2010 U.S. mid-term elections showed that a large fraction of voters were misinformed about basic issues, such as expert consensus on climate change or the predicted costs of the recently passed healthcare bill. Though coverage of the study focused on the fact that Fox News viewers scored worse than others, that missed the point: No news source came out particularly well.

In one of the most limited, narrow senses of what journalism is supposed to do — inform voters about key election issues — American journalism failed in 2010. Or perhaps it actually did better than in 2008 — without comparable metrics, we’ll never know.

While newsrooms typically see themselves in the business of story creation, an organization committed to informing, not just publishing, would have to operate somewhat differently. Having an audience means having the ability to direct attention, and an editor might choose to continue to direct attention to something important even if it’s “old news”; if someone doesn’t know it, it’s still new news to them. Journalists will also have to understand how and when people change their beliefs, because information doesn’t necessarily change minds.

I’m not arguing that every news organization should get into the business of monitoring the state of public knowledge. This is only one of many possible ways to define impact; it might only make sense for certain stories, and to do it routinely we’d need good and cheap substitutes for large public surveys. But I find it instructive to work through what would be required. The point is to define journalistic success based on what the user does, not the publisher.

Other fields have impact metrics too

Measuring impact is hard. The ultimate effects on belief and action will mostly be invisible to the newsroom, and so tangled in the web of society that it will be impossible to say for sure that it was journalism that caused any particular effect. But neither is the situation hopeless, because we really can learn things from the numbers we can get. Several other fields have been grappling with the tricky problems of diverse, indirect, not-necessarily-quantifiable impact for quite some time.

Academics wish to know the effect of their publications, just as journalists do, and the academic publishing field has long had metrics such as citation count and journal impact factor. But the Internet has upset the traditional scheme of things, leading to attempts to formulate wider-ranging, web-inclusive measures of impact such as Altmetrics or the article-level metrics of the Public Library of Science. Both combine a variety of data, including social media.

Social science researchers are interested not only in the academic influence of their work, but in its effects on policy and practice. They face many of the same difficulties as journalists do in evaluating their work: unobservable effects, long timelines, complicated causality. Helpfully, lots of smart people have been working on the problem of understanding when social research changes social reality. Recent work includes the payback framework, which looks at benefits from every stage in the lifecycle of research, from intangibles such as increasing the human store of knowledge to concrete changes in what users do after they’ve been informed.

NGOs and philanthropic organizations of all types also use effectiveness metrics, from soup kitchens to international aid. A research project at Stanford University is looking at the use and diversity of metrics in this sector. We are also seeing new types of ventures designed to produce both social change and financial return, such as social impact bonds. The payout on a social impact bond is contractually tied to an impact metric, sometimes measured as a “social return on investment.”

Data beyond numbers

Counting the countable because the countable can be easily counted renders impact illegitimate.

- John Brewer, “The impact of impact”

Numbers are helpful because they allow standard comparisons and comparative experiments. (Did writing that explainer increase the demand for the spot stories? Did investigating how the zoning issue is tied to developer profits spark a social media conversation?) Numbers can be also compared at different times, which gives us a way to tell if we’re doing better or worse than before, and by how much. Dividing impact by cost gives measures of efficiency, which can lead to better use of journalistic resources.

But not everything can be counted. Some events are just too rare to provide reliable comparisons — how many times last month did your newsroom get a corrupt official fired? Some effects are maddeningly hard to pin down, such as “increased awareness” or “political pressure.” And very often, attributing cause is hopeless. Did a company change its tune because of an informed and vocal public, or did an internal report influence key decision makers?

Fortunately, not all data is numbers. Do you think that story contributed to better legislation? Write a note explaining why! Did you get a flood of positive comments on a particular article? Save them! Not every effect needs to be expressed in numbers, and a variety of fields are coming to the conclusion that narrative descriptions are equally valuable. This is still data, but it’s qualitative (stories) instead of quantitative (numbers). It includes comments, reactions, repercussions, later developments on the story, unique events, related interviews, and many other things that are potentially significant but not easily categorizable. The important thing is to collect this information reliably and systematically, or you won’t be able to make comparisons in the future. (My fellow geeks may here be interested in the various flavors of qualitative data analysis.)
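
As a minimal sketch of what collecting this qualitative data “reliably and systematically” could look like in practice, consider an append-only log; the field names and categories below are illustrative, not a standard.

import csv
from datetime import date

def log_impact(path, story_id, kind, note):
    # kind might be 'comment-flood', 'legislative-citation', 'follow-up'...
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), story_id, kind, note])

log_impact("impact-log.csv", "zoning-2011-08", "legislative-citation",
           "Council member cited the series during the rezoning hearing.")

Even a flat file like this preserves the over-time comparisons the essay argues for.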

Qualitative data is particularly important when you’re not quite sure what you should be looking for. With the right kind, you can start to look for the patterns that might tell you what you should be counting.

Metrics for better journalism

Can the use of metrics make journalism better? If we can find metrics that show us when “better” happens, then yes, almost by definition. But in truth we know almost nothing about how to do this.

The first challenge may be a shift in thinking, as measuring the effect of journalism is a radical idea. The dominant professional ethos has often been uncomfortable with the idea of having any effect at all, fearing “advocacy” or “activism.” While it’s sometimes relevant to ask about the political choices in an act of journalism, the idea of complete neutrality is a blatant contradiction if journalism is important to democracy. Then there is the assumption, long invisible, that news organizations have done their job when a story is published. That stops far short of the user, and confuses output with effect.

The practical challenges are equally daunting. Some data, like web analytics, is easy to collect but doesn’t necessarily coincide with what a news organization ultimately values. And some things can’t really be counted. But they can still be considered. Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics, comments, inbound links, social media discussion, and other reactions. With that sort of extensive data set, we stand a chance of figuring out not only what the journalism did, but how best to evaluate it in the future. But nothing so elaborate is necessary to get started. Every newsroom has some sort of content analytics, and qualitative effects can be tracked with nothing more than notes in a spreadsheet.
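
One hypothetical shape for that integrated database, sketched as SQLite tables; the schema is illustrative, not a description of any existing newsroom system.

import sqlite3

conn = sqlite3.connect("impact.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS stories (id TEXT PRIMARY KEY, headline TEXT, published TEXT);
-- automatically collected, quantitative
CREATE TABLE IF NOT EXISTS metrics (story_id TEXT REFERENCES stories(id),
                                    day TEXT, pageviews INTEGER, shares INTEGER);
-- hand-entered, qualitative: notes, reactions, repercussions
CREATE TABLE IF NOT EXISTS notes   (story_id TEXT REFERENCES stories(id),
                                    day TEXT, note TEXT);
""")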

Most importantly, we need to keep asking: Why are we doing this? Sometimes, as I pass someone on the street, I ask myself if the work I am doing will ever have any effect on their life — and if so, what? It’s impossible to evaluate impact if you don’t know what you want to accomplish.

May 04 2012

19:22

comScore's Goodman: We Have Brought "a Standardization" to YouTube

comScore has been working closely with YouTube since August to provide third party data around consumption and demographics.

As a result of these efforts, it has brought "a standardization" to the giant video site, providing data that is critical for content creators, YouTube, and brand advertisers, says Eli Goodman, Media Evangelist at comScore, in this interview with Beet.TV.

We interviewed him on Wednesday at the International Academy of Web Television (IAWTV) conference, where he was a featured speaker.

Andy Plesser

May 03 2012

13:44

Selling Online Video Ads Is Becoming TV-Like, AOL Video Chief

Selling online video on the basis of gross rating points (GRPs), the reach-and-frequency measure common in the television business, will become increasingly important as consumers and the media stop differentiating between online video and other forms of video and instead view video holistically across platforms, says Ran Harnevo, SVP, Video, AOL On, in this interview with Beet.TV.

AOL recently formed a partnership with Nielsen to begin selling video based on GRPs.

Selling in this fashion is important, Harnevo tells us in this interview, because it gives TV buyers a common currency and the familiar metrics of reach and frequency. With more video moving to connected TVs and other devices, publishers need to make video an easier purchase for TV buyers. "There may be other ways with more data, but in order to blend these industries that are already blending, [online GRPs] help," he says.
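
For readers new to the currency: a GRP boils down to reach times frequency. A quick illustration with invented numbers:

population = 1_000_000    # size of the target demographic
unique_viewers = 200_000  # people exposed to the campaign at least once
impressions = 600_000     # total ad views delivered

reach_pct = unique_viewers / population * 100  # 20.0 (% reach)
avg_frequency = impressions / unique_viewers   # 3.0 (views per person reached)
grps = reach_pct * avg_frequency               # 60.0 GRPs
print(grps)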

He adds that AOL has already started running some campaigns that have been bought on online GRPs.

Google has said it'll sell online ads on a GRP basis, while ad network Tremor Video also has an alliance with Nielsen to incorporate GRPs.

Recently we spoke with Kelly Train of digital media shop PHD Media, who says that the agency business is finally embracing GRPs as a means to buy online video ads.

Daisy Whitney

April 15 2012

20:48

eMarketer's David Hallerman on the Paradoxical State of Online Video

While online video advertising is expected to jump 54 percent this year, faster than any other form of media, according to eMarketer, the industry faces multiple challenges around the emerging medium, says David Hallerman, Principal Analyst at eMarketer, in this interview with Beet.TV.

He sizes up the state of the industry as a paradox: viewers are consuming more video on more devices, "converging" around the new medium, while at the same time the media landscape is fragmenting across those multiple devices.

We spoke with Hallerman at the IAB Digital Video summit last week in New York.

April 13 2012

15:51

Brian Morrissey: Video Ad Tech Not Providing Basics to Advertisers

Technology providers are racing way ahead of advertisers and not providing them with the basics they need to make reasoned online video media buys, says Brian Morrissey, editor-in-chief of Digiday, in this interview with Beet.TV.

This was his principal takeaway from the BRX Video Summit this week, where he was a moderator.

Technology has long foiled the ad business, he says, citing the development of the "click" in the early days of the Internet, which he calls "the original sin."

Morrissey is gearing up for a big week of conferences with Digiday's Video Upfront program in New York on Monday and in LA on Thursday.


March 29 2012

21:20

Explosive Online Video Growth Coming from Premium Programming, comScore's Eli Goodman

The growth in online video consumption in the United States is currently being driven by increased viewing of long-form programming, such as TV shows and movies, says Eli Goodman, comScore Media Evangelist, in this session from the Beet.TV Executive Retreat in Vieques, Puerto Rico.

Goodman provided a comprehensive overview of the state of the online video marketplace and consumer viewing habits in this presentation.

About 180 million Web users in the United States are watching about 44 billion videos each month, he explained. While those numbers fluctuate a bit month to month, they've mostly been steady. However, growth is coming in engagement, with more time spent viewing.

YouTube is still dominant with 50% of video views coming from that site, followed by Vevo and Hulu, he says in his talk.

Interestingly, the number of ads viewed in online video has doubled year over year, according to comScore's most recent data. In February, Americans watched 7.5 billion video ads online, up from 3.8 billion a year ago.

Goodman emphasized that mobile and tablet viewing is incremental to desktop viewing. "A rising tide lifts all boats," he said. "The entire pie is growing and will continue to grow for some time."

Goodman shares many more data points and insight in this video, so check it out in its entirety for a deep dive into online video habits and advertising.

Daisy Whitney


February 10 2012

17:34

Videology Inks Deals with More Data Providers

In a pair of deals that expand its slate of data providers, online video ad technology firm Videology has partnered with database marketing service I-Behavior and WPP's shopping data firm Kantar Shopcom for additional online audience insight. We spoke with Brad Herman, Chief Supply Officer at Videology, and he shared more details on Videology's approach to the online video market.

These new deals help Videology tap into in-store behavior and demographic data. Videology draws on partnerships with about 20 third-party data providers to help marketers target audiences online, Herman said. "If advertisers want to reach men 18 to 34 who are sedan buyers, they can go into the platform and identify them in a granular manner and price and execute," Herman explained. Videology works with a range of companies including ad exchanges, ad networks, syndicators, content aggregators, premium publishers and others.

The company just rolled out a new product in its publisher audience and analysis platform that is designed to "bring the supply side and the demand side" closer together for media planners, he said. Videology recently released a study on effectiveness of cross-platform media buys.

January 04 2012

11:06

Social Interest Positioning – Visualising Facebook Friends’ Likes With Data Grabbed Using Google Refine

What do my Facebook friends have in common in terms of the things they have Liked, or in terms of their music or movie preferences? (And does this say anything about me?!) Here’s a recipe for visualising that data…

After discovering via Martin Hawksey that the recent (December, 2011) 2.5 release of Google Refine allows you to import JSON and XML feeds to bootstrap a new project, I wondered whether it would be able to pull in data from the Facebook API if I was logged in to Facebook (Google Refine does run in the browser after all…)

Looking through the Facebook API documentation whilst logged in to Facebook, it’s easy enough to find exemplar links to things like your friends list (https://graph.facebook.com/me/friends?access_token=A_LONG_JUMBLE_OF_LETTERS) or the list of likes someone has made (https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS); replacing me with the Facebook ID of one of your friends should pull down a list of their friends, or likes, etc.

(Note that validity of the access token is time limited, so you can’t grab a copy of the access token and hope to use the same one day after day.)
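
For the curious, the same call can be made outside Refine; here’s a minimal Python sketch, assuming a valid (and, as noted, time-limited) access token and the 2011-era Graph API endpoints quoted above:

import json
import urllib.request

ACCESS_TOKEN = "A_LONG_JUMBLE_OF_LETTERS"  # grab whilst logged in; it expires
url = "https://graph.facebook.com/me/friends?access_token=" + ACCESS_TOKEN

with urllib.request.urlopen(url) as resp:
    friends = json.load(resp)["data"]  # a list of {'name': ..., 'id': ...}
print(len(friends), "friends")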

Grabbing the link to your friends on Facebook is simply a case of opening a new project, choosing to get the data from a Web Address, and then pasting in the friends list URL:

Google Refine - import Facebook friends list

Click on next, and Google Refine will download the data, which you can then parse as a JSON file, and from which you can identify individual record types:

Google Refine - import Facebook friends

If you click the highlighted selection, you should see the data that will be used to create your project:

Google Refine - click to view the data

You can now click on Create Project to start working on the data – the first thing I do is tidy up the column names:

Google Refine - rename columns

We can now work some magic – such as pulling in the Likes our friends have made. To do this, we need to create the URL for each friend’s Likes using their Facebook ID, and then pull the data down. We can use Google Refine to harvest this data for us by creating a new column containing the data pulled in from a URL built around the value of each cell in another column:

Google Refine - new column from URL

The Likes URL has the form https://graph.facebook.com/me/likes?access_token=A_LONG_JUMBLE_OF_LETTERS which we’ll tinker with as follows:

Google Refine - crafting URLs for new column creation

The throttle control tells Refine how often to make each call. I set this to 500ms (that is, half a second), so it takes a few minutes to pull in my couple of hundred or so friends (I don’t use Facebook a lot;-). I’m not sure what rate limit the Facebook API is happy with; if you hit it too fast (i.e. set the throttle time too low), you may find the Facebook API stops returning data to you for a cooling-down period…
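
Scripting the same harvest by hand shows what the throttle is doing; the 500ms sleep below mirrors the Refine setting, and the placeholder IDs would come from the friends list fetched earlier (again a sketch against the old Graph API, not a guaranteed-current recipe):

import json
import time
import urllib.request

ACCESS_TOKEN = "A_LONG_JUMBLE_OF_LETTERS"
friend_ids = ["FRIEND_ID_1", "FRIEND_ID_2"]  # from the friends list

all_likes = {}
for fb_id in friend_ids:
    url = "https://graph.facebook.com/" + fb_id + "/likes?access_token=" + ACCESS_TOKEN
    with urllib.request.urlopen(url) as resp:
        all_likes[fb_id] = json.load(resp)["data"]
    time.sleep(0.5)  # the 500ms throttle, by hand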

Having imported the data, you should find a new column:

Google Refine - new data imported

At this point, it is possible to generate a new column from each of the records/Likes in the imported data… in theory (or maybe not…). I found this caused Refine to hang though, so instead I exported the data using the default Templating… export format, which produces some sort of JSON output…

I then used this Python script to generate a two column data file where each row contained a (new) unique identifier for each friend and the name of one of their likes:

import csv
import simplejson  # the standard library json module works just as well here

# Each row of the Templating export corresponds to one friend; the
# 'interests' cell holds that friend's Likes as a JSON string.
fn = 'my-fb-friends-likes.txt'
data = simplejson.load(open(fn, 'r'))

writer = csv.writer(open('fbliketest.csv', 'w', newline=''),
                    quoting=csv.QUOTE_ALL)

uid = 0
for d in data['rows']:
    uid = uid + 1  # a (new) unique identifier for each friend
    # 'interests' is the column name containing the Likes data
    interests = simplejson.loads(d['interests'])
    for i in interests['data']:
        print(uid, i['name'], i['category'])
        writer.writerow([uid, i['name']])

I could then import this data into Gephi and use it to generate a network diagram of what they commonly liked:

Sketching common likes amongst my facebook friends
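
If you’d rather script this step too, here is one way to build the same bipartite friend-Like network with networkx and save it in a format Gephi opens directly; it assumes the two-column fbliketest.csv produced by the script above:

import csv
import networkx as nx

G = nx.Graph()
with open("fbliketest.csv") as f:
    for friend_id, like in csv.reader(f):
        G.add_node(friend_id, kind="friend")
        G.add_node(like, kind="like")
        G.add_edge(friend_id, like)  # friend -> thing they Like

nx.write_gexf(G, "fb-likes.gexf")  # Gephi reads GEXF natively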

Rather than returning Likes, I could equally have pulled back lists of the movies, music or books they like, their own friends lists (permissions settings allowing), etc etc, and then generated friends’ interest maps on that basis.

PS dropping out of Google Refine and into a Python script is a bit clunky, I have to admit. What would be nice would be to be able to do something like a “create new rows with new column from column” pattern that would let you set up an iterator through the contents of each of the cells in the column you want to generate the new column from, and for each pass of the iterator: 1) duplicate the original data row to create a new row; 2) add a new column; 3) populate the cell with the contents of the current iteration state. Or something like that…
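
For what it’s worth, today’s pandas has more or less this pattern built in; a rough sketch of the “new rows from a multi-record column” idea (toy data, and obviously not a Refine feature):

import json
import pandas as pd

df = pd.DataFrame({
    "friend": ["f1", "f2"],
    "interests": ['[{"name": "Chess"}, {"name": "Go"}]',
                  '[{"name": "Tea"}]'],
})
df["interests"] = df["interests"].map(json.loads)  # JSON string -> list
df = df.explode("interests")                       # one row per record
df["like"] = df["interests"].map(lambda r: r["name"])
print(df[["friend", "like"]])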

PPS Related to the PS request, there is a sort of related feature in the 2.5 release of Google Refine that lets you merge data from across rows with a common key into a newly shaped data set: Key/value Columnize. Seeing this got me wondering what a fusion of Google Refine and RStudio might be like (or even just R support within Google Refine?)


September 16 2011

10:56

comScore Introduces Video Reporting on Audience Duplication

Online audience measurement service comScore has released reporting that lets clients see how video audiences between sites overlap and relate to each other, the research firm told Beet.TV in an interview.

The audience duplication and cross-viewing reports in the Video Metrix service give comScore customers a snapshot into unique video viewers on various sites, as well as the crossover between those viewers and among specific demographics, said comScore product manager Dan Piech.

This could be helpful if an advertiser is placing a buy across three sports sites and wants to know the unduplicated reach of men 18 to 34 across those sites, as an example, Piech said.
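
The arithmetic behind unduplicated reach is just a set union over viewer IDs; a toy sketch with made-up IDs (comScore’s actual methodology is panel-based):

# Viewer IDs seen on each of three sports sites.
site_a = {"u1", "u2", "u3"}
site_b = {"u2", "u3", "u4"}
site_c = {"u3", "u5"}

summed_audiences = len(site_a) + len(site_b) + len(site_c)  # 8, double-counts overlap
unduplicated_reach = len(site_a | site_b | site_c)          # 5 unique viewers
print(summed_audiences, unduplicated_reach)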

Daisy Whitney

September 13 2011

20:49

comScore Now Ranking All YouTube Partner Channels

comScore, which first broke out metrics around the top 50 YouTube partner channels for the month of July, is now publishing analytics for all YouTube partner channels (many thousands of them), says Dan Piech, Product Manager, in this interview with Beet.TV.

These numbers will first be available for August; the data will be published later this week, a comScore spokeswoman tells us.

Andy Plesser

August 28 2011

21:10

Nielsen Rolls out GRP Measurement for Online Video

Nielsen has introduced a new system for publishers and advertisers to measure gross rating points (GRPs) across web properties, social networks, and online video.

At the Beet.TV Leadership webcast last week, we spoke with Jon Gibs, SVP of Analytics and Insight at Nielsen. He tells us about the value of the new product, along with trends in online video vs. television viewing.

Andy Plesser

August 01 2011

14:00

Newsbeat, Chartbeat’s news-focused analytics tool, places its bets on the entrepreneurial side of news orgs

Late last week, Chartbeat released a new product: Newsbeat, a tool that takes the real-time analytics it already offers and tailors them even more directly to the needs of news orgs. Chartbeat is already famously addictive, and Newsbeat will likely up the addiction ante: It includes social sharing information — including detailed info about who has been sharing stories on Twitter — and, intriguingly, notifications when stories’ traffic patterns deviate significantly from their expected path. (For more on how it works, Poynter has a good overview, and GigaOm’s Mathew Ingram followed up with a nice discussion of the decision-making implications of the tool.)

What most stood out to me, though, both when I chatted with Tony Haile, Chartbeat’s general manager, and when I poked around Newsbeat, is what the tool suggests about the inner workings of an increasingly online-oriented newsroom. Chartbeat, the parent product, offers an analytic overview of an entire site — say, Niemanlab.org — and provides a single-moment snapshot of top-performing stories site-wide. Newsbeat, on the other hand, can essentially break down the news site into its constituent elements via a permissioning system that provides personalized dashboards for individual reporters and editors. Newsbeat allows those individual journalists to see, Haile notes, “This is how my story’s doing right now. This is how my people are doing right now.”

On the one hand, that’s a fairly minor thing, an increasingly familiar shift in perspective from organization to person. Still, though, it’s worth noting the distinction Newsbeat is making between news org and news brand. Newsbeat emphasizes the individual entities that work together, sometimes in sync and sometimes not so much, under the auspices of a particular journalistic brand. So, per Newsbeat, The New York Times is The New York Times, yes…but it’s also, and to some extent more so, the NYT Business section and the NYT Politics page and infographics and blogs and Chris Chivers and David Carr and Maureen Dowd. It’s a noisy, newsy amalgam, coherent but not constrained, its components working collectively — but not, necessarily, concertedly.

That could be a bad thing: Systems that lack order tend to beget all the familiar problems — redundancy, wasted resources, friction both interpersonal and otherwise — that disorder tends to produce. For news orgs, though, a little bit of controlled chaos can be, actually, quite valuable. And that’s because, in the corporate context, the flip side of fragmentation is often entrepreneurialism: Empower individuals within the organization — to be creative and decisive and, in general, expert — and the organization overall will be the better for it. Analytics, real-time and otherwise, serve among other things as data points for editorial decision-making; the message implicit in Newsbeat’s design is that, within a given news org, several people (often, many, many, many people) will be responsible for a brand’s moment-by-moment output.

Which is both obvious and important. News has always been a group effort; until recently, though, it’s also been a highly controlled group effort, with an organization’s final product — a paper, a mag, a broadcast — determined by a few key players within the organization. News outlets haven’t just been gatekeepers, as the cliché goes; they’ve also had gatekeepers, individuals who have had the ultimate responsibility over the news product before it ships.

Increasingly, though, that’s not the case. Increasingly, the gates of production are swinging open to journalists throughout, if not fully across, the newsroom. That’s a good thing. It’s also a big thing. And Newsbeat is reflecting it. With its newest tool, Chartbeat is self-consciously trying to help organize “the newsroom of the future,” Haile told me — and that newsroom is one that will be dynamic and responsive and, more than it’s ever been before, collaborative.

July 27 2011

17:30

NYTimes.com’s most looked-up words for 2011: Even more morose than last year’s list

One of the cooler-but-lesser-known functions of NYTimes.com is its word “look up” feature: Double-click on any word in the text of an article — insouciance, say, or omerta — and a little question mark will pop up. Click the question mark, and you’ll get a definition of the highlighted word directly from the American Heritage Dictionary. Ooooh!

Since 2009, the Times’ analytics department has been tracking the words that its users look up via the feature; this week, it released results for the first half-and-change of 2011. And they are…multisyllabic. And anything but perfunctory.

They are also pretty…depressing. Though journalism, as an institution, isn’t especially renowned for its sunny outlook on the world, it’s still remarkable how pessimistic and generally morose (hubris! feckless! dyspeptic!) the looked-up words tend to be. While they’re nothing, of course, like a representative sample of all the words used in the Times — they don’t account for NYT blog posts, for one thing, but mostly they represent only the words that have confused people and/or sparked their interest enough to get people to click on them — the negativity here is noteworthy nonetheless. If a good newspaper is a cultural product, a nation talking to itself and all that, then the preponderance of profligacy and hauteur and duplicity and blasphemy that populate the list don’t bode terribly well for our collective conversation. If some future civilization were to come across the Times list and assume it’s representative of The Times We Live In, they’d probably feel sorry for us. Or, you know, schadenfreudically avuncular.

The negativity is a trend, actually; Josh pointed out the same thing for last year’s list. It’s worth noting, too, the tension that the words represent: the classic disconnect between respecting readers’ intelligence by challenging it…and giving them a pleasant reading experience. The line between education and pretension — for the Times, in particular, which aspires to an intellectualism that is, first and foremost, accessible — is a thin one. “As always, we should remember that our readers are harried and generally turn to us for news, not SAT prep,” notes Philip Corbett, the Times’ standards editor. At the same time, though, as one commenter put it, “This is why I love this newspaper: It not only dispenses the news, it keeps me on my toes and prods me to keep my literary silverware polished and my linguistic cutlery honed. :-D”

Anyway, we’ve listed the looked-up words on a spreadsheet, above, if you want to play around with the set. (A couple notes on the data: As in previous years, the Times’ blogs, among other parts of the site, aren’t counted in these data. And the data account for word-lookups on NYTimes.com, not the Times’ various apps. Plus, Corbett notes, “This year, we arranged the list by how many times a word was looked up per use, rather than by total number of look-ups. That highlights the most baffling words of all.”) If you notice anything interesting about them, let us know.
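
The re-ranking Corbett describes is easy to picture in miniature; the counts below are invented.

# word: (times looked up, times it appeared in the paper)
words = {
    "panegyric":  (500, 60),
    "profligacy": (900, 400),
    "hubris":     (1200, 2000),
}
for w, (lookups, uses) in sorted(words.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1],
                                 reverse=True):
    print(f"{w}: {lookups / uses:.2f} look-ups per use")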

July 23 2011

19:34

Google+ (Plus) business profiles to include analytics & more

Venture Beat :: If businesses can contain themselves for just a few more months, they’ll have much better Google+ tools than the ones that currently exist. In fact, Google will be unveiling specially tweaked profiles with analytics and more sophisticated sharing options, all coming during (or shortly after) Q3 2011.

Continue to read Jolie O'Dell, venturebeat.com

June 05 2011

13:00

TubeMogul delivers 5 billion daily video ad auctions

Beet.TV :: TubeMogul, the San Francisco-based company known for providing video analytics to publishers, recently launched a demand side platform (DSP) for advertisers to buy online video inventory in real time. David Burch says that the new platform is the "first DSP for online video advertising" and that it has quickly taken off. The platform now offers advertisers the opportunity to bid on 5 billion ad auctions daily.

Beet.TV spoke with David Burch last month at the Brightcove global customer summit. Coverage of the Brightcove conference was part of Beet.TV's sponsorship agreement.

Continue to read Andy Plesser, www.beet.tv

May 05 2011

18:43

WPP's Kantar Video Teams Open Amplify to Track Social Buzz Around Video

Kantar Video, the video analytics unit of WPP, is teaming with semantic Web services company Open Amplify to gather and organize social media chatter as part of Kantar's product offering.

For an overview of Open Amplify, and how its tools will provide valuable data around video in the social media sphere, we spoke with Michael T. Petit, co-founder and CIO.

We interviewed him at the WPP Global Video Summit, which was hosted by Kantar in New York earlier this week.

Andy Plesser
