
March 29 2012


The Difference Between 'Invention' and 'Innovation'

Two and a half years ago, I co-founded Stroome, a collaborative online video editing and publishing platform and 2010 Knight News Challenge winner.

From its inception, the site received a tremendous amount of attention. The New School, USC Annenberg, the Online News Association and, ultimately, the Knight Foundation all saw something interesting in what we were doing. We won awards; we were invited to present at conferences; we were written about in the trades and featured in over 150 blogs. Yet despite all the accolades, not once did the word "invention" creep in. "Innovation," it turns out, was the word on everyone's lips.

Like so many up-and-coming entrepreneurs, I was under the impression that invention and innovation were one and the same. They aren't. And, as I have discovered, the distinction is an important one.

Recently, I was asked by Jason Nazar, founder of Docstoc and a big supporter of the L.A. entrepreneurial community, if I would help define the difference between the two. A short, 3-minute video response can be found at the bottom of this post, but I thought I'd share some key takeaways with you here:


In its purest sense, "invention" can be defined as the creation of a product or introduction of a process for the first time. "Innovation," on the other hand, occurs if someone improves on or makes a significant contribution to an existing product, process or service.

Consider the microprocessor. Someone invented the microprocessor. But by itself, the microprocessor was nothing more than another piece on the circuit board. It's what was done with that piece -- the hundreds of thousands of products, processes and services that evolved from the invention of the microprocessor -- that required innovation.


If ever there were a poster child for innovation, it would be former Apple CEO Steve Jobs. And when people talk about innovation, Jobs' iPod is cited as an example of innovation at its best.


But let's take a step back for a minute. The iPod wasn't the first portable music device (Sony popularized the "music anywhere, anytime" concept 22 years earlier with the Walkman); the iPod wasn't the first device that put hundreds of songs in your pocket (dozens of manufacturers had MP3 devices on the market when the iPod was released in 2001); and Apple was actually late to the party when it came to providing an online music-sharing platform. (Napster, Grokster and Kazaa all preceded iTunes.)

So, given those sobering facts, is the iPod's distinction as a defining example of innovation warranted? Absolutely.

What made the iPod and the music ecosystem it engendered innovative wasn't that it was the first portable music device. It wasn't that it was the first MP3 player. And it wasn't that Apple was the first company to make thousands of songs immediately available to millions of users. What made Apple innovative was that it combined all of these elements -- design, ergonomics and ease of use -- in a single device, and then tied it directly into a platform that effortlessly kept that device updated with music.

Apple invented nothing. Its innovation was creating an easy-to-use ecosystem that unified music discovery, delivery and device. And, in the process, it revolutionized the music industry.


Admittedly, when it comes to corporate culture, Apple and IBM are worlds apart. But Apple and IBM aren't really as different as innovation's poster boy would have had us believe.

Truth is, if it hadn't been for one of IBM's greatest innovations -- the personal computer -- there would have been no Apple. Jobs owes a lot to the introduction of the PC. And IBM was the company behind it.

Ironically, the IBM PC didn't contain any new inventions per se (see iPod example above). Under pressure to complete the project in less than 18 months, the team actually was under explicit instructions not to invent anything new. The goal of the first PC, code-named "Project Chess," was to take off-the-shelf components and bring them together in a way that was user friendly, inexpensive, and powerful.

And while the world's first PC was an innovative product in the aggregate, the device the team created -- a personal machine that put powerful computing in the hands of the people -- was no less impactful than Henry Ford's Model T, which transformed the automobile industry by putting affordable transportation in the hands of the masses.


Given the choice to invent or innovate, most entrepreneurs would take the latter. Let's face it, innovation is just sexier. Perhaps a few engineers at M.I.T. can name the members of "Project Chess." But virtually everyone on the planet knows who Steve Jobs is.

But innovation alone isn't enough. Too often, companies focus on a technology instead of the customer's problem. But in order to truly turn a great idea into a world-changing innovation, other factors must be taken into account.

According to Venkatakrishnan Balasubramanian, a research analyst with Infosys Labs, the key to ensuring that innovation is successful is aligning your idea with the strategic objectives and business models of your organization.

In a recent article that appeared in Innovation Management, he offered five considerations:

1. Competitive advantage: Your innovation should provide a unique competitive position for the enterprise in the marketplace;
2. Business alignment: The differentiating factors of your innovation should be conceptualized around the key strategic focus of the enterprise and its goals;
3. Customers: Knowing the customers who will benefit from your innovation is paramount;
4. Execution: Identifying resources, processes, risks, partners and suppliers and the ecosystem in the market for succeeding in the innovation is equally important;
5. Business value: Assessing the value (monetary, market size, etc.) of the innovation and how the idea will bring that value into the organization is a critical underlying factor in selecting which idea to pursue.

Said another way, smart innovators frame their ideas to stress the ways in which a new concept is compatible with the existing market landscape, and their company's place in that marketplace.

This adherence to the "status quo" may sound completely antithetical to the concept of innovation. But an idea that requires too much change in an organization, or too much disruption to the marketplace, may never see the light of day.


While they tend to be lumped together, "invention" and "innovation" are not the same thing. There are distinctions between them, and those distinctions are important.

So how do you know if you are inventing or innovating? Consider this analogy:

If invention is a pebble tossed in the pond, innovation is the rippling effect that pebble causes. Someone has to toss the pebble. That's the inventor. Someone has to recognize the ripple will eventually become a wave. That's the entrepreneur.

Entrepreneurs don't stop at the water's edge. They watch the ripples and spot the next big wave before it happens. And it's the act of anticipating and riding that "next big wave" that drives the innovative nature in every entrepreneur.

This article is the seventh of 10 video segments in which digital entrepreneur Tom Grasty talks about his experience building an Internet startup, and is part of a larger initiative sponsored by docstoc.videos, which features advice from small business owners on how to launch a new business or grow an existing one.

May 04 2011


MIT management professor Tom Malone on collective intelligence and the “genetic” structure of groups

Do groups have genetic structures? If so, can they be modified?

Those are two central questions for Thomas Malone, a professor of management and an expert in organizational structure and group intelligence at MIT’s Sloan School of Management. In a talk this week at IBM’s Center for Social Software, Malone explained the insights he’s gained through his research and as the director of the MIT Center for Collective Intelligence, which he launched in 2006 in part to determine how collective intelligence might be harnessed to tackle problems — climate change, poverty, crime — that are generally too complex to be solved by any one expert or group. In his talk, Malone discussed the “genetic” makeup of collective intelligence, teasing out the design differences between, as he put it, “individuals, collectively, and a collective of individuals.”

The smart group

First is the question of whether general cognitive ability — what we think of, when it comes to individuals, as “intelligence” — actually exists for groups. (Spoiler: it does.) Malone and his colleagues (fellow MIT researchers Sandy Pentland and Nada Hashmi, Carnegie Mellon’s Anita Williams Woolley, and Union College’s Christopher Chabris) assembled 192 groups — groups of two to five people each, with 699 subjects in all — and assigned to them various cognitive tasks: planning a shopping trip for a shared house, sharing typing assignments in Google Docs, tackling Raven’s Matrices as a group, brainstorming different uses for a brick. (For you social science nerds, the team chose those assignments based on Joe McGrath’s taxonomy of group tasks.) Against the results of those assignments, the researchers compared the results of the participants’ individual intelligence tests, as well as the varying qualities of the group, from the easily quantifiable (participants’ gender) to the less so (participants’ general happiness).

And what they found is telling. “The average intelligence of the people in the group and the maximum intelligence of the people in the group doesn’t predict group intelligence,” Malone said. Which is to say: “Just getting a lot of smart people in a group does not necessarily make a smart group.” Furthermore, the researchers found, group intelligence is also only moderately correlated with qualities you’d think would be pretty crucial when it comes to group dynamics — things like group cohesion, satisfaction, “psychological safety,” and motivation. It’s not just that a happy group or a close-knit group or an enthusiastic group doesn’t necessarily equal a smart group; it’s also that those psychological elements have only some effect on groups’ ability to solve problems together.

So how do you engineer groups that can problem-solve effectively? First of all, seed them with, basically, caring people. Group intelligence is correlated, Malone and his colleagues found, with the average social sensitivity — the openness, and receptiveness, to others — of a group’s constituents. The emotional intelligence of group members, in other words, serves the cognitive intelligence of the group overall. And this means that — wait for it — groups with more women tend to be smarter than groups with more men. (As Malone put it: “More females, more intelligence.”) That’s largely mediated by the researchers’ social sensitivity findings: Women tend to be more socially sensitive than men — per Science! — which means that, overall, more women = more emotional intelligence = more group intelligence.

Which, yay. And it’s easy to see a connection between these findings and the work of journalists — who, whether through crowdsourcing or commentary, are trying to figure out the most productive ways to amplify, and generally benefit from, the wisdom of crowds. News outfits are experimenting not just with inviting group participation in their work, but also with, intriguingly, defining the groups whose participation they invite — the starred commenters, the “brain trust” of readers, etc. Those experiments are based, in turn, on a basic insight: that the “who” of groups matters as much as the “how.” Attention to the makeup of groups on a more granular, person-to-person level may extend the benefits even further.

The group genome

But where Professor Malone’s ideas get especially interesting from the Lab’s perspective is in another aspect of his work: the notion that groups have, in their structural elements, a kind of dynamic DNA. Malone and his colleagues — in this case, Robert Laubacher and Chrysanthos Dellarocas — are essentially trying to map the genome of human collectivity, the underlying structure that determines groups’ outcomes. The researchers break the “genes” of groups down to interactions among four basic (and familiar) categories: what, who, why, and how. Or, put another way: what the project is, who’s working to enact it, why they’re working to enact it, and what methods they’re using to enact it. (So the “genetic structure” of the Linux community, for example, breaks down to the relationship among the what of creating new tools and shaping existing ones; the who of the crowd combined with Linus Torvalds and his lieutenants; the why of love, glory, and, to an extent, financial gain; and the how of both collaboration and hierarchical ordering. The interplay among all those factors determines the community’s outward expression and outcomes.)
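For readers who think in data structures, the four-category framing can be sketched as a simple mapping. The snippet below is purely illustrative — the field names and the Linux-community values follow the description above, but the representation itself is my own reading aid, not an artifact of the MIT research:

```python
# An illustrative sketch of the "group genome" categories
# (what / who / why / how), populated with the Linux community
# example described above. The structure is hypothetical -- a
# reading aid, not anything produced by Malone's group.

linux_genome = {
    "what": ["create new tools", "shape existing ones"],
    "who": ["the crowd", "Linus Torvalds", "his lieutenants"],
    "why": ["love", "glory", "financial gain"],
    "how": ["collaboration", "hierarchical ordering"],
}

def describe(genome):
    """Render a group's 'genes' as one line per category."""
    return "\n".join(f"{k}: {', '.join(v)}" for k, v in genome.items())

print(describe(linux_genome))
```

The point of the exercise is the disaggregation itself: once the four categories are pulled apart, two communities with the same "what" can be compared on their differing "why" and "how."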

That all seems simple and obvious — because it is — but what makes the approach so interesting and valuable from the future-of-news perspective is, among other things, its disaggregation of project and method and intention. Groups form for all kinds of reasons, but we generally pay little attention to the discrete factors that lead them to form and flourish. Just as understanding humans’ genetic code can lead us to a molecular understanding of ourselves as individuals, mapping the genome of groups may help us understand ourselves as we behave within a broader collective.

And that knowledge, just as with the human genome, might help us gain an ability to manipulate group structures. When it comes to individuals, intelligence is measurable — and, thus, it has a predictive element: A smart kid will most likely become a smart adult, with all the attendant implications. Individual intelligence is fairly constant, and, in that, almost impossible to change. Group intelligence, though, Malone’s findings suggest, can be manipulated — and so, if you understand what makes groups smart, you can adjust their factors to make them even smarter. The age-old question in sociology is whether groups are somehow different, and greater, than the sum of their parts. And the answer, based on Malone’s and other findings, seems to be “yes.” The trick now is figuring out why that’s so, and how the mechanics of the collective may be put to productive use. Measuring group intelligence, in other words, is the first step in increasing group intelligence.

Malone and his colleagues have identified 16 “genes” so far, as expressed in groups like Wikipedia contributors, YouTube uploaders, and eBay auctioneers. “We don’t believe this is the end, by any means, but we think it’s a start,” he said — a way to rethink, and perhaps even revolutionize, the design of groups. Organizational design theory in the 20th century, he noted, generally focused on traditional, hierarchical corporations. But as digital tools give way to new kinds of collectives, “it seems to me,” the professor said, that “it’s time to update organizational design theory for these new organizations.”

Image via ynse used under a Creative Commons license.


December 08 2010


Nicholas Christakis on the networked nature of Twitter

Earlier this fall, Alyssa Milano — known for being on “Who’s the Boss” and, more recently, for being on Twitter — sent out a somewhat surprising tweet to her nearly 1.2 million followers: a link to the Amazon page of a book called Connected: The Surprising Power of Our Social Networks & How They Shape Our Lives.

For a book like Connected, penned by two social scientists and built on longitudinal research and academic inquiry — a book, in other words, that may hope to achieve influence over our thinking, but doesn’t aspire to huge sales numbers — you’d think that a message broadcast from a heavily followed Twitter account would lead to a proportionally large spike in sales. Amplification, after all, comes from size: The more followers a person has, the more people who will see a message and who will, potentially, retweet it — and, thus, the more people who will potentially act on it. We know it intuitively: In general, the greater the numbers, the greater the viral power.

So, then, how many extra books did Connected’s authors, Nicholas Christakis and James Fowler, sell in the wake of their million-follower tweet?

None. Literally, not a one. In fact — insult, meet injury! — in the days and weeks following Milano’s tweet, the book’s sales actually declined. The actress’ follower numbers, in this case, hadn’t been a force for much of anything. “At least with respect to the influence of behavior,” Christakis noted, “these links — these Twitter links — are weak.”

But, hey, maybe it was just an Alyssa Milano thing: It’s pretty fair to figure that the overlap between her followers and the universe of people who might buy a sciency book by two professors would be, you know, low. So Christakis and Fowler asked Tim O’Reilly (nearly 1.5 million followers, with, ostensibly, more book-interest overlap) to send the Connected link out to his feed.

The result? “We sold one extra copy of the book.”

Same experiment, with Pew’s Susannah Fox (4,960 followers)? Three extra copies.

If you’re interested in the way information spreads online — and if you’re interested in the future of news, you probably are — then the low volume-to-impact rate the authors found (which, though completely anecdotal, flies in the face of so much conventional wisdom) is fascinating. And it begs a question that appears so often in academic inquiry: What’s up?

In a talk yesterday evening at IBM’s T.J. Watson Research Center in Cambridge (we wrote about another IBM event, with dataviz guru Jer Thorp, this summer), Christakis, a professor both at Harvard Medical School and its Faculty of Arts and Sciences, dove into that question, discussing the particular (and peculiar) ways that social networks — online and off — work.

The talk focused on the epidemiology of action — how and whether certain behaviors spread through a population. (More on that here.) Though we often talk about social connections in terms of simple binaries — friend vs. not-friend, weak ties versus strong — the ties that bind people together, Christakis’ research suggests, are nowhere near as simple as we often assume. There’s the obvious — your Facebook friend may not be your friend friend — but also, more murkily but more fascinatingly, the complex of connections that affect our behavior in surprising ways.

For the Lab’s purposes, one especially intriguing element of the discussion focused on Twitter — and the extent to which ideas spread through Twitter’s network actually catch on and have impact. One binary that might actually be relevant in that regard, Christakis suggested: influencer versus influence-ee. “If we’re really going to advance this field, we need to figure out how to identify not just influential people, but also influenceable people,” the professor noted. “We need not just shepherds, but sheep.” And “if we’re going to exploit online ties,” Christakis said — say, by creating communities of interest around news content, and potentially monetizing those communities — then “measures of meaningful interactions will be needed”: We need metrics, in particular, to determine “which online interactions represent real relationships, where an influence might possibly be exerted.”

For that, he continued, “we need to distinguish between influential, or real, ties online, and uninfluential, or weak, ties online.”

The next question: How do you do that? How do you look beyond standard (and, per Christakis’ anecdotal evidence, misleading) metrics like Twitter follower/Facebook friend counts and find more meaningful metrics of influence? One benefit of social networks’ movement online is that their dynamics are (relatively) easily trackable: We’re able as never before to put data behind the interactions that define society as a whole, and, in that, understand them better. (Connected, on the other hand — whose conclusions are based on data sets of social flow that were cultivated, over a period of years, from physical documents — didn’t have that luxury.)

And while Christakis’ talk raised as many questions as it answered — we’re still in early days when it comes to measuring behavioral influences online — one of his core ideas is an insight that several news organizations are already putting to practice: the power of the niche. Much more significant and influential than single celebrities — individual nodes in a network — are the “niches within the network where you have the particular assemblage of influential people and their followers.” When influence is layered — when its fabric is made stronger by tight connections across a smaller network — it’s more predictable, and more powerful.

And that has big implications not only for news organizations, but also for the platforms that are hoping to translate their ubiquity into financial and social gain. If you want your work to have impact, then targeting a bundle of closely connected networks — with news, with links, with messages — may make more sense than going for numbers alone. Spreading a conversation is not the same as affecting it. “I’m not saying that Twitter is useless,” Christakis said, “but I think that the ability of Twitter to disseminate information is different than its ability to influence behavior.”

October 22 2010



Simon Waldman writes:

“The challenge isn’t just to do something smart and new on the edge of a traditional business – but to transform the business as a whole, and to position it for sustainable, profitable growth in the future.”

What a lesson for the print versus online talibans!

And said by the former director of digital strategy of the Guardian Group.

(Thanks to Guillermo Nagore in NY)

June 03 2010


Is 70 percent of what we read online really by our friends?

Last month, we tweeted a remarkable stat:

Of everything under 40 year olds read online, about 70% was created by someone they know http://j.mp/bb0jgN

Our source was this article citing a recent panel discussion at an SEO conference in New York. Here’s how the stat was presented, in a piece in the newsletter Publishing Trends, as a product of Forrester Research:

In one of several panels on social media and search, Patricia Neuray of Business.com cited the Forrester research finding that 70% of the content read online by under-40-year-olds was written by someone they know.

(Someone who livetweeted the panel seemed to also attribute it to Forrester, although with a cryptic hint of IBM.)

It’s obviously a remarkable statistic if true, but I wanted to get a little more detail — like how the study defined “someone they know” and “content read online.” Are they talking websites, or are they including things like email? Does “someone they know” mean someone they know in real life, or does an Internet friend count? I engaged in some vigorous Googling, but couldn’t find the original study. Then I emailed Forrester to see if they could produce it. A spokesperson got back to me:

That statistic does not come from a Forrester study. We heard about it and investigated it as well to find out that the original author of the article that used that statistic was in error. I just rechecked his article – he removed Forrester as the source but did not cite another source other than a speaker from IBM at this conference: http://www.publishingtrends.com/2010/04/making-search-convert-search-engine-strategies-2010/

And indeed, now the reference in the original article is thus:

In one of several panels on social media and search, Leslie Reiser of IBM cited the recent finding that “70% of the content read online by under-40-year-olds was written by someone they know.”

I contacted Reiser last week to see if she has a cite for it; my very quick Googling didn’t turn up an obvious IBM reference for the number, either, but that doesn’t mean much. I’ll let you know if I hear back from her. In any event, since by tweeting it we played a part in spreading the number, I thought we should note that the original source is still a bit up in the air.

March 30 2010


Portability, Participation Rule for New Media Consumer

We're spoiled by technology. Today, we expect more from our media than we can get from print, radio or linear TV.

If you're like me -- and, increasingly, evidence shows people are -- you crave portability, fungibility, the ability to listen to a book or article, to watch a TV show or movie or YouTube clip whenever and wherever you want. You may even, like me, want to chop off pieces and show them elsewhere, tag them, mash them up.

Consuming media the way it used to be provided (and sometimes still is) can be so woefully inefficient. Who wants to have to sit down and consume at the provider's convenience, rather than their own? Who has time for appointment TV any more? Just look at the research that finds more and more of us using DVRs, avoiding commercials and otherwise changing viewing habits.

It's not necessarily that we object to a reasonable level of advertising or fees. We're increasingly using services like Hulu or Netflix that let us watch shows and movies on demand, even if we have to suffer ads, or pay for the privilege. It's worth the price in order to not be at the mercy of whatever happens to be available, either in real-time or on-demand through a cable. It's great to have the choice of what screen to use, too. And who doesn't enjoy being able to zoom back a minute or two and catch something they liked or missed?

During the Winter Olympics, I couldn't be bothered to sit through tape-delayed events that had happened hours earlier or that I didn't care about. I not only recorded the shows off the air, using an EyeTV device mentioned in this MediaShift story on cutting the cord to cable, but also set the program to automatically convert the broadcasts to iTunes clips that took up less space on my hard drive and also made them easy to transfer to computers and other devices.

Shifting from Eyes to Ears and Back

If you're like me, you also enjoy reading and listening to books you're interested in. I may read a chapter or two, then listen to a chapter while doing the dishes. I get through the book faster and enjoy the continuity. When an audiobook doesn't exist -- which is surprisingly often -- I'll try to get the digital edition and have my computer's speech-synthesis application read it to me. Even with the distortions and glitches, it's good enough to give a good rendering of what's in print.

I'll do that for newspaper and magazine articles, blogs and research papers, too. It's a great way to not have to stop reading because I have something else to do that requires the use of my hands or eyes. If I'm going to be traveling, I might record the audio into an iPod so I can listen while standing in line or taking a taxi to the hotel. I'll certainly access books remotely via computer, Blackberry or iPod Touch.

By now, you may be thinking: What's this got to do with trends in media or the media business, at large? This guy is a huge geek, and he's unlike 90 percent of humanity.

But that really isn't the case. Yes, I am reasonably comfortable with technology, but I don't use it for its own sake. I use the technology because it is liberating; it lets me do things I've always wanted to do. I know I'm not the only person who's engaged in time- and place-shifting by using a timer and tape recorder to grab favorite radio shows, for example. It's no secret why audio cassette decks used to be sold with two slots for tapes, only one of which had a "record" button. I still record things on a videotape when I want to bring them over to someone else's house to watch.

Our time is valuable, and the more we can control it the more value it has. So, too, does media become more valuable when we can better weave it into our relationships. If we can snag a piece of something and blog or tweet about it or email it to a friend, it makes it easier to have a meaningful conversation and be engaged.

Age of the Participatory Consumer

A recent study from IBM media research found that we're moving from "traditional devices" to "connected experiences," that media consumers from all generations, but especially the younger ones, are moving from passive to "involved" consumption of media, and from limited to open access. Consumers around the world, it finds, increasingly expect to control and participate in their media.

There's a lesson here amid debates about what media consumers will pay for, and which distribution channels and levels of access can be controlled. Device makers, too, need to figure out a balance between portability and access, as the iPod's makers showed they had learned by finally offering DRM-free versions of songs. I also predict the Kindle will do the same as competitors with more open devices gain market share.

Anyone who produces media or the devices to consume them will have to provide enough value for us to put up with any restrictions. More importantly, they need to understand that technology has made us into new kinds of consumers.

Dorian Benkoil is consulting sales manager, and has devised marketing strategy for MediaShift. He is SVP at Teeming Media, a strategic media consultancy focused on helping digital media companies identify and meet business objectives. He has devised strategies, business models and training programs for websites, social media, blog networks, events companies, startups, publications and TV shows. He Tweets at @dbenk.


January 19 2010


IBM funds my research into visualisation in journalism


I am very pleased to say that I have received a $10,000 award from IBM to support research into visualisation in journalism.

Big thank you to the research team at IBM Canada for the grant.  We’ve been talking about doing some research into best practices in journalism for the use of IBM’s ManyEyes.

Here’s the announcement posted to the UBC Graduate School of Journalism website:

UBC journalism professor Alfred Hermida has been awarded $10,000 from IBM to support his research into the application of visualization in journalism.

The 2010 IBM Centre for Advanced Studies (CAS) Faculty Award is designed to support Prof Hermida’s research in the area of business intelligence for journalism. IBM said the award was in recognition of the quality of his research and its importance to the industry.

Prof Hermida is researching the application of visual analytics to develop new digital story-telling techniques that take advantage of the ability to manipulate and represent complex datasets in visual and compelling ways.

Visual analytics is an emerging area of research that applies the science of analytical reasoning supported by highly interactive visual interfaces.

It is increasingly being adopted in journalism to communicate information clearly and effectively through computer-generated images, drawing insight and knowledge from data and its inherent patterns and relationships.

The UBC journalism school is developing a Visualization Lab for Civic and Investigative Journalism to research and develop innovative approaches to data manipulation, integration and querying for the purpose of understanding complex public policy issues using investigative journalism and visual analytics.

The aim of the visualization project is to improve access to essential data on public policy issues, to the benefit of all Canadians. Reducing barriers to information access is valuable because the democratization of data plays a meaningful role in fostering civic engagement. It is a response to the growing need for credible information on key quality-of-life issues brought directly to Canadians in a visual form that is both easily accessed and understood.

The IBM Centre for Advanced Studies (CAS), established in 1990 at the IBM Toronto Software Laboratory, brings together IBM researchers and technical leaders with academic and government research organizations from around the world.

The IBM CAS awards recognize individuals who best epitomize the mission of CAS to facilitate the exchange of knowledge between academic research and real-world industry.


January 04 2010


News orgs’ goal for 2010: Imagine tomorrow’s media world today

The legacy press — or the traditional media, or whatever we’re calling newspapers these days — has one main challenge for 2010, and it’s not finding a new business model. It has to do with vision. It has to do with being able to imagine a world that does not yet exist.

While the news media’s woes come from lagging ad rates and content that’s scooped up by aggregators, those are symptoms of the main problem: an inability to imagine what media consumption will look like in one, five, 10 years.

It’s a problem that’s not new or unique to the news business. Two examples illustrate my point.

Personal computers

In the early ’60s, IBM, the king of computers at the time, couldn’t imagine a need for personal computers, according to Robert X. Cringely’s 1992 book, “Accidental Empires.” (The famous quote from IBM chief Thomas Watson — “I think there is a world market for maybe five computers” — appears to be apocryphal, though.) In those days, computers were mainframes that filled a room. Executives didn’t type; they had secretaries for that. Watch an episode of “Mad Men,” and you’ll get the idea.

Cringely writes in his book that top IBM executives were briefed on a plan for video-display terminals in those days, but they didn’t get it. “These were intelligent men, but they had a firmly fixed concept of what a computer was supposed to be, and it didn’t include video-display terminals,” he wrote. “To invent a particular type of computer, you have to want to use it, and the leaders of America’s computer companies did not want a computer on their desks.”

Imagine that: a computer company that could not foresee that people might want to harness the power of a mainframe computer, plunk it on their desk or lap, and use it all by themselves. Today it seems preposterous; my laptop gets turned on as early each morning as my coffee maker.

IBM and others couldn’t imagine a world that didn’t exist then. Of course, others did — including later bosses at IBM — and the personal computer was born. But the inability to imagine delayed the process and changed the computer industry forever. Ask your typical 20-something who rules the computer business, and IBM won’t be on their list.


Microwave ovens

The first commercial microwave hit the market in 1947, according to Microtech’s history of the microwave. But it wasn’t until the 1970s that they caught on in the home. I remember when my family got our first: We all watched as my mom boiled her first cup of water for tea in this mammoth machine. “I can’t imagine what I’ll do with this,” I remember my mother saying, noting that making tea water in a stovetop kettle seemed easier.

Then think about today. My microwave died on Christmas Day, when not a store was open to replace it. Our family barely made it to Saturday, when I rushed to Target to buy a new one. What we couldn’t imagine a use for 30 years ago, we can’t live without today.

What this means for the news business

My point is that news organizations need to imagine how people will consume news in the future — even though it might not make sense to them today. Newspaper owners may want ink on their fingers, and a paper they can feel, but many of their customers don’t now — or won’t in five years. And they may think a newspaper web site should look like a newspaper, but it shouldn’t. (It’s normal to build something new based on something old. That happened in the computer world, too, with the first microcomputers modeled on a mainframe.)

The challenge for the news biz is to look ahead and imagine how people may want their news and information. It’s about format (online, by phone, through social media) and content (aggregated, local, tailored to their needs). For local news operations, this means “organizing a community’s information so the community can organize itself,” as Jeff Jarvis puts it.

For all media organizations, it means adding more value to what they offer readers, according to Jay Rosen. What it doesn’t mean is forsaking the journalistic mission in search of the “almighty hit,” as Lehigh University journalism professor Jeremy Littau puts it.

This doesn’t mean news organizations should be inventing technology. I think that’s probably out of the purview of most journalists. What I’m talking about is envisioning a new way to use technology, in this case the Internet and the cell phone and likely other tools that others will invent. The news business doesn’t need to invent the tools — just figure out how to use them to best serve its readers.

Newspaper readers, at least those who read small or mid-sized papers, have always expected the newspaper to make sense of the world for them. If you’ve spent Saturday night shifts at the city desk of a mid-sized newspaper as I have, you know what I mean. People would call with seemingly inane questions: On what channel will the local college basketball game be shown? Is there trash pickup tomorrow because of the holiday? If I mail a package today, will it make it to my grandchildren by Christmas?

A wise editor of mine explained that we should be proud readers came to us with these questions because it meant the newspaper was so intrinsic to people’s lives that it was the first place they went for answers. Newspapers still need to be that today. It’s still their job to explain the changing world to readers. And it’s also their job to imagine what the world will look like, so they can serve the readers of tomorrow.

Over the summer, I blogged on this site and on Save the Media about artisanal news, my concept of small batches of news tailored to tight niches of readers. A commenter noted that he or she couldn’t imagine what I was talking about until it happens. Fair enough. But to survive, news organizations must imagine. The question is: What’s next? That’s the challenge for news organizations — to figure out what readers won’t be able to live without tomorrow. And then the money will come. Because, really, making money is a simple formula: Make something people can’t live without, and they’ll be willing to pay for it.

Photo by David used under a Creative Commons license.
