
October 07 2011

11:13

Hi, my name’s Zarino

So when Nicola asked me to write a Friday post introducing myself on the ScraperWiki blog, I never thought I’d be writing it during such a momentous few days. I was meant to entertain and beguile you with talk of my MSc research into Open Data at Oxford, tease and tantalise with news of how we’re making ScraperWiki cleaner, faster and more intuitive.

But suddenly, all of that seemed pretty unimportant. In the small hours of Thursday morning, people all over the UK woke up to find that Steve Jobs, one of the greatest and most controversial legends of the technology world, had passed away. The news rocked Twitter — there was pretty much nothing else in my stream all day — flowers and candles were laid outside Apple Stores around the world, people published poems and pictures and stories and bittersweet obituaries. Many of the highest-traffic pages on the web displayed humble banners with his name. Some simply shut down and devoted every pixel to his memory.

And it got me wondering — what does a guy do to cause such a stir? Books will no doubt be written (in fact, they already have) answering that question. They’ll talk about Steve the college dropout, Steve the child of the Sixties, Steve the garage marketeer. They’ll picture him bowtied and bespectacled grinning above the first Macintosh, clad in the inimitable blue and black at one of a hundred keynotes, and worryingly gaunt at the height of his battle with cancer. They’ll talk about how he revolutionised not just the computing industry, not just the software industry and not just the music industry, but also the animation industry, the movie industry, and the technology retail industry. And they’ll be right.

But if you ask me, the real reason why we’re all laying flowers for this guy, writing poems for him, even talking about him at all, is because he put the user at the live, beating heart of everything he did. Steve didn’t invent the mouse, or the GUI or the personal computer, but in a world of green-on-black, of FORTRAN and BASIC, he had the foresight, the passion and the balls to back these weird, unpopular and user-centric technologies, because he knew, once normal people had access to the liberating power of the silicon chip, their lives would change forever. It’s only a matter of time until someone (hopefully us!) does the same with data.

I’m no Steve Jobs (I look terrible in turtle-necks). But if I can do anything here at ScraperWiki, it’s to try and bring some of that user focus to the world of data science. Life is too short to spend it puzzling over debug console output, or commenting out lines of code one by one. And most of all, life’s too short to be doing all of that alone. I have two goals as the ScraperWiki UX guy: to make the experience of using our services as smooth, as intuitive and as integrated as possible, and also to make it as social as possible—not in a Facebook way, but in a hackday way—so you can all benefit from the wealth of experience, backgrounds and talents around you, right now, on this very site. There’s some amazing work being done by our members, and it’s my job to make sure you can keep on doing it, keep on getting the scoops, informing the public, serving your clients, no matter how hideous the HTML or unstructured the PDF.

Like I said, I’m no Steve Jobs. Who could even try to compete? But like Steve, I have an email address – zarino@scraperwiki.com – and I want to hear from you. Yes, you, right now. And in the future, whenever you have a problem. Whenever you think of something ScraperWiki should be doing for you, or whenever it fails to do something it says it should. Drop me an email and we’ll work on a solution (I promise my responses won’t be as famously acerbic as Steve’s).

And with that, I’ll leave you. Our brilliant new Editor interface isn’t going to design itself, you know. But before I go, I should take one last chance to say thank you. To you amazing ScraperWiki diggers, to Francis and the ScraperWiki team, but most of all, to Steve, for making all of this possible. I hope we can do him proud.


September 16 2011

13:16

Driving the Digger Down Under

G’day,

Henare here from the OpenAustralia Foundation – Australia’s open data, open government and civic hacking charity. You might have heard that we were planning to have a hackfest here in Sydney last weekend. We decided to focus on writing new scrapers to add councils to our PlanningAlerts project, which allows you to find out what is being built or knocked down in your local community. During the two afternoons over the weekend, seven of us were able to write nineteen new scrapers, which cover an additional 1,823,124 Australians – a huge result.

There are a number of reasons why we chose to work on new scrapers for PlanningAlerts. ScraperWiki lowers the barrier to entry for new contributors by allowing them to get up and running quickly with no setup – just visit a web page. New scrapers are also relatively quick to write, which is perfect for a weekend hackfest. And finally, because we have a number of working examples and ScraperWiki’s documentation, it’s conceivable that someone with no programming experience could come along and get started.

It’s also easy to support people writing scrapers in different programming languages using ScraperWiki. PlanningAlerts has always allowed people to write scrapers in whatever language they choose by using an intermediate XML format. With ScraperWiki this is even simpler because as far as our application is concerned it’s just a ScraperWiki scraper – it doesn’t even know what language the original scraper was written in.

Once someone has written a new scraper and formatted the data according to our needs, it’s a simple process for us to add it to our site. All they need to do is let us know; we add it to our list of planning authorities and then automatically start asking for the data daily using the ScraperWiki API.
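
For the curious, the daily fetch on our side boils down to something like the sketch below. It’s only a rough illustration in Python – the endpoint and parameters are the ScraperWiki datastore API as I remember it, and the scraper name is made up rather than one of the real PlanningAlerts scrapers.

    # Rough sketch of pulling a scraper's data via the ScraperWiki datastore API.
    # The endpoint format and the scraper name are illustrative assumptions,
    # not PlanningAlerts' actual code.
    import json
    import urllib.parse
    import urllib.request

    SCRAPER_NAME = "example_planning_authority"  # hypothetical scraper short name
    API_URL = "https://api.scraperwiki.com/api/1.0/datastore/sqlite"

    params = urllib.parse.urlencode({
        "format": "jsondict",             # one dict per row
        "name": SCRAPER_NAME,
        "query": "select * from swdata",  # swdata is the default table name
    })

    with urllib.request.urlopen(API_URL + "?" + params) as response:
        applications = json.load(response)

    for application in applications:
        # Each row carries whatever fields the scraper saved,
        # e.g. council_reference, address, description, info_url.
        print(application)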

Another issue is maintenance of these scrapers after the hackfest is over. Lots of volunteers only have the time to write a single scraper, maybe to support their local community. What happens when there’s an issue with that scraper but they’ve moved on? With ScraperWiki anyone can now pick up where they left off and fix the scraper – all without us ever having to get involved.

It was a really fun weekend and hopefully we’ll be doing this again some time. If you’ve got friends or family in Australia, don’t forget to tell them to sign up for PlanningAlerts.

Cheers,

Henare
OpenAustralia Foundation volunteer


July 29 2011

14:44

Meet the User – Claire Miller

Here at ScraperWiki we like programmers and journalists. We’re also interested in helping to bridge the gap between the two, so without further ado let me present to you: Claire Miller. She’s a reporter and data journalist for Media Wales.

She’s come to ScraperWiki from the non-coding side but with a journalistic eye for the story. She’s new to the ScraperWiki way of doing things but says “I’m just starting to appreciate how handy a tool ScraperWiki can be for journalists. At the moment when I come across a dataset on a government website, like the Food Standards Agency ratings or jobcentre vacancies, I’ve taken to checking ScraperWiki to see if someone is trying to scrape them – this is how I ended up with a collection of scrapers gathering up ratings of dodgy takeaways in Wales.”

She’s hoping there will be some stories for the paper, but she has also gone over to the ScraperWiki side and wants to make the data useful for people living in Wales. Like me and many others in the data journalism fold, she’s decided to just jump in by forking useful-looking scrapers and pointing them at Welsh datasets.

She says: “I’m interested in how I can use ScraperWiki to find data I can use to find stories. I’ve already used some from a series of scrapers gathering data from the jobcentre vacancies search to analyse the sort of vacancies that are on offer for people in Wales. I’m also working on gathering up lots of data on Welsh schools, and found the Welsh School Finder, which saved me so much time in linking census and financial data to addresses and locations.”

Her ultimate goal is to start scraping PDFs, as FOI requests are constantly being answered in that most evil of formats. We’re working on our documentation and tutorials, and PDFs are most definitely on our list. For the budding data journalists out there, I’d say walk before you can run. PDFs are hard. So start with HTML web scraping and CSVs. But remember, where there’s a ScraperWiki digger, there’s a way!


July 22 2011

15:53

Meet the User – Ben Harris

Ben Harris is one of the few ScraperWikians I’ve come across who actually codes for a day job (I’m sure there are lots – send them pizza). He’s a sysadmin (I’m surprised a ninja/evangelist isn’t in some way attached to the title), which means he writes quite a lot of little hacky scripts to do useful things (just what we like!). A couple of years ago he started trying to find (and publish) the traffic regulation orders applying to Cambridge, and this led him into the world of Freedom of Information. Now he helps maintain the list of public authorities on WhatDoTheyKnow, which involves pulling together information from legislation, official registers and lots of websites. They have a wiki, FOIwiki, to help them keep track of things.

He says:

The concept of a wiki for running code is brilliant, and ScraperWiki does a pretty good job of implementing it.  While the scraping facilities are obviously good, the important thing is having somewhere public to keep (and link to) runnable code and the databases that go with it – Ben Harris

One of his experiments has been to write a ScraperWiki view that suggests updates to a page on FOIwiki based on the output of a scraper, automatically generating links to WhatDoTheyKnow where they already know about authorities.  The idea here is that an FOIwiki page would mirror a ScraperWiki dataset but with added notes, links and general explanation. He says it needs a lot more work, but it’s already proved useful for spotting new NHS Foundation Trusts.

A long-term project is to scrape contact details for English parish councils from various district council websites.  As for the data landscape along his ScraperWiki digger road trip, surprise surprise, there’s very little consistency! He’s written scrapers that ingest HTML, PDF, CSV and UK Legislation XML.  So having them all in the same kind of table in ScraperWiki means that he can then apply a common set of tools to the results.  A consequence of the multiplicity of sources is that they often use variant names for the same public authority, so he’s trying to improve our fuzzy matching rules.
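
To give a flavour of what fuzzy matching means here, the sketch below shows one very simple approach using Python’s standard difflib. It’s a toy illustration only, not Ben’s actual matching rules.

    # Toy illustration of fuzzy-matching variant names for the same public authority:
    # normalise the names, then compare them with difflib's similarity ratio.
    from difflib import SequenceMatcher

    STOP_WORDS = {"the", "of", "and", "council"}

    def normalise(name: str) -> str:
        """Lower-case, strip punctuation, drop filler words and sort the tokens."""
        cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
        tokens = [t for t in cleaned.split() if t not in STOP_WORDS]
        return " ".join(sorted(tokens))

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

    # Variant names for the same body come out identical after normalisation...
    print(similarity("Cambridge City Council", "The City Council of Cambridge"))  # 1.0
    # ...while names of different bodies score noticeably lower.
    print(similarity("Cambridge City Council", "Oxfordshire County Council"))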

Sadly, a lot of the information he wants is only available on paper, and ScraperWiki can’t help.  In consequence, he’s spent hours in the offices of Cambridgeshire County Council with a scanner, and dug through the collections of legislation in Cambridge University Library, the British Library, and the National Archives.

He says the most accessible information is probably that from the Scottish Information Commissioner, who provides a big CSV file with the email addresses of most of the Schedule 1 Scottish Public Authorities.  Of course he’s imported it into ScraperWiki.

Strangely enough, many of the other things he spends his free time on also involve collecting and cataloguing things, be they grid squares, kilometres cycled, or railway stations (a friend’s project, but he helps out).

For being a collecting, cataloguing, council-document-scanning-crazed ScraperWikian (there’s a title!), we salute you, Ben Harris (*honking of digger horn*).


July 15 2011

12:58

An Ouseful Person to Know – Tony Hirst

We love to teach at ScraperWiki but we love people who teach even more. That being said, may I introduce you to Tony Hirst, Open University lecturer and owner of the ‘ouseful’ blog. He teaches in the Department of Communication and Systems and recently worked on course units relating to information skills, data visualisation and game design.

What are your particular interests when it comes to collecting data?

I spend a lot of time trying to make sense of the web, particularly in how we can appropriate and combine web applications in interesting and useful ways, as well as trying to identify applications and approaches that might be relevant to higher and distance education on the one hand, “data journalism” and civic engagement on the other. I’m an open data and open education advocate, and try to make what contribution I can by identifying tools and techniques that lower the barrier to entry in terms of making use of public open data for people who aren’t blessed with the time that enables them to try out such things. I guess what I’m trying to do is contribute towards making data, and its analysis, more accessible than it often tends to be.

How have you found using ScraperWiki and what do you find it useful for?  

ScraperWiki removed the overhead of having to set up and host a scraping environment, and associated data store, and provided me with a perfect gateway for creating my own scrapers (I use the Python route). I haven’t (yet) started playing with ScraperWiki views, but that’s certainly on my to-do list. I find it really useful for “smash and grab” raids, though I have got a few scrapers that I really should tweak to run as scheduled scrapers. Being able to browse other scrapers, as well as put out calls for help to the ScraperWiki community, is a great way to bootstrap problem-solving when writing scrapers, though I have to admit as often as not I resort to StackOverflow (and on occasion GetTheData) for my Q&A help. [Disclaimer: I helped set up GetTheData with the OKF's Rufus Pollock]

Are there any data projects you’re working on at the moment?

I have an idea in mind for a database of audio/podcast/radio interviews and discussions around books that would allow a user to look up a (probably recent) book by ISBN and then find book talks and author interviews associated with it. I’ve started working on several different scrapers that separately pull book and audio data from various sites: Tech Nation (IT Conversations), Authors@Google (YouTube), and various BBC programmes (though I’m not sure of rights issues there!). I now really need to revisit them all to see if I can come up with some sort of normalised view over the data I might be able to get from each of those sources, and a set of rules for parsing out book and author data from the free-text descriptions that contain that information. [Read his blog post here]

I’m also tempted to start scraping university course prospectus web pages to try and build up a catalogue of courses from UK universities, in part because UCAS seem reluctant to release their aggregation of this data, and in part because the universities seem to be taking such a long time to get round to releasing course data in a structured way using the XCRI course marketing information XML standard.

Anything else you’d like to add? A little about your passions and hobbies?

I’ve started getting into the world of motorsport data, and built a set of scripts to parse the FIA/Formula 1 timing and results sheets (PDFs). Looking back over the scraper code, I wish I’d documented it… I think I must have been “in the flow” when I wrote it! Every couple of weeks, I go in and run each script separately by hand. I really should automate it to give me a one-click dump of everything, but I take a guilty pleasure in scraping each document separately! I copy each set of data as a Python array by hand and put it into a text file, which I then process using a series of other Python scripts and ultimately dump into CSV files. I’m not sure why I don’t just try to process the data in ScraperWiki and pop it into the database there… Hmmm…?!


July 08 2011

14:10

Meet the User – Pall Hilmarsson

Our digger has been driven around colder climes by one of our star users, Icelander Pall Hilmarsson. Driving such a heavy vehicle on icy surfaces and through volcanic ash may seem daunting to most people, but Pall has not only ventured forth undeterred, he has given passers-by a lift. One such hitch-hiker is Chris Taggart with his OpenCorporates project. I caught up (electronically) with Pall.

What’s your background and what are your particular interests when it comes to collecting data?

My experience in design is work-related – I started working as a designer 12 years ago, almost by accident. At one point I thought I’d study it and I did try, for the whole of ten days! Fortunately I quit and went for a B.A. degree in anthropology. Somehow I’ve ended up doing design again. Currently I work for the Reykjavík Grapevine magazine.

I’m particularly interested in freeing data that has some social relevance, something that gives us a new way of seeing and understanding society. That comes from the anthropology. Data that has social meaning.

How have you found using ScraperWiki and what do you find it useful for?

ScraperWiki has been a fantastic tool for me. I had written scrapers before, mostly small scripts to make RSS feeds, and only in Perl. ScraperWiki has led me to teach myself Python and write more complex scrapers. It has opened up a whole new set of possibilities. I really like being able to study other people’s scrapers and helping others with theirs. I’ve learned so much from ScraperWiki.

Are there any data projects you’re working on at the moment?

Right now I’m involved in scraping some national company registers for the brilliant OpenCorporates site. I’m also compiling a rather large dataset on foreclosures in Iceland over the last 10 years – trying to get a picture of where the financial meltdown is hitting hardest. I’m hoping to make it into an interactive map application. So far the data shows some interesting things – going into the project I had a notion that the Reykjavík suburbs, with their new apartment buildings, would account for the bulk of foreclosures. It seems, though, that the old downtown area is actually where most apartments are going up for auction.

How is the data landscape in the area you’re interested in? Is it accessible, formatted, consistent?

Governmental data over here is not easily accessible, but that might change. A new bill introduced in Parliament aims to free a lot of data and considerably strengthen citizens’ right to access information. But of course it will never be enough. Data begets more data.

So watch out Iceland – you’re being ScraperWikied!


July 01 2011

15:24

Meet the User – Kevin Curry

The ScraperWiki digger has been driven far and wide. Wherever there’s open data, you’ll find us parked nearby. I’ve noticed it travelling across the Atlantic and have contacted the driver – Kevin Curry. He has the grand title (as well as digger driver!) of Chief Scientist and Co-founder of Bridgeborn, Inc., and is a computer science graduate of Virginia Polytechnic Institute and State University. Our American friends may also know him for CityCamp.

He’s currently interested in United States Department of Agriculture (USDA) data for a variety of reasons, only some of which have to do with food. He’s been involved with Gov 2.0 from its beginnings. USDA happens to be one of the biggest publishers of open data in the US federal government. But in the case of the commodities data, it’s highly granular: you can get XML for each and every commodity, but that is the only way you can get the data. There’s no aggregate collection. So he decided to create one. In his day job he doesn’t get to write much code anymore, so he also had that itch to scratch. Hence the decision to take the wheel of the ScraperWiki digger.
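
To picture what that aggregation might look like, here’s the rough shape such a scraper could take on ScraperWiki – a sketch only, with a made-up URL pattern, commodity list and XML field names rather than anything from Kevin’s real code.

    # Hypothetical sketch of aggregating per-commodity XML into one ScraperWiki table.
    # The URL pattern, commodity list and element names are invented for illustration.
    import urllib.request
    import xml.etree.ElementTree as ET

    import scraperwiki  # available by default in the ScraperWiki editor

    COMMODITIES = ["corn", "soybeans", "wheat"]  # illustrative only
    URL_TEMPLATE = "https://example.usda.gov/commodities/{}.xml"  # not the real endpoint

    for commodity in COMMODITIES:
        xml_data = urllib.request.urlopen(URL_TEMPLATE.format(commodity)).read()
        root = ET.fromstring(xml_data)
        for record in root.findall(".//record"):  # hypothetical element name
            row = {
                "commodity": commodity,
                "date": record.findtext("date"),
                "price": record.findtext("price"),
            }
            # unique_keys stops repeated runs from piling up duplicate rows
            scraperwiki.sqlite.save(unique_keys=["commodity", "date"], data=row)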

I am a huge fan of ScraperWiki. Publishing scrapers on ScraperWiki is a great way to share what’s been done and hopefully get others to create new things with it. Publishing data through ScraperWiki can serve local government, community organizations and news providers. It bothers me when governments and newspapers cite facts based on data but don’t link to the data. Usually there are other interesting questions that can be asked and answered, and other views that can be created, that are not in the published report or news article. But this requires access to the data. At the same time, data without context isn’t useful. What can cities learn from the data they collect? What interesting stories can journalists tell? ScraperWiki is a great way to start answering those questions. – Kevin Curry

At the moment, the USDA project is just a hobby. He’s also interested in local food data. USDA publishes a table of farm markets and farms. But if you want to know who has fresh blue crabs today, what sizes and how much they cost then you have to call around to a half-dozen local providers. That’s a different sort of data challenge. He says the next thing he will tackle with ScraperWiki is data published by the city government or regional transportation authority.

USDA watch out – you’re being ScraperWikied!


May 27 2011

14:23

Meet the User – Tim McNamara

You may have noticed a Kiwi driving our digger around of late (image is purely metaphorical). A New Zealander by the name of Tim McNamara has been unearthing earthquakes, government bills, historic places, clinical trials and even companies.

I had to enquire after such a scraping wizard (trying to get a Lord of the Rings reference in here, not working too well). It turns out, Mr McNamara has an open data pedigree. He’s just started contracting for the Open Knowledge Foundation and is working on improving opengovernmentdata.org and opendatamanual.org. Part of his work involves advising governments about opening their data. Which is why he’s such a  star ScraperWiki user.

“I had a hunch that governments don’t need to spend millions of dollars on rebuilding a system to have a fancy web API. So, I decided to validate that hunch. I found that I was able to extract hundreds of thousands of data points very quickly. Moreover, all of that data is available in a consistent API between different domains. Perfect! With a bit of community engagement, governments could use ScraperWiki to provide that web API for their legacy systems. That’s a really exciting prospect.” – Tim McNamara, Open Knowledge Foundation

What I find truly amazing and inspiring is that Tim doesn’t consider himself a programmer. He thinks of himself as “more of a writer and a thinker. When I code, it’s because I think it’s the best thing that I can do with my time to support a goal at a given time.” Which is the sort of civic hacking, public interest coding we like seeing on ScraperWiki.

So watch out New Zealand – you’re being ScraperWikied! 

Tim is in charge of this year’s PyCon for New Zealand. So if you’re in the area please say hello.


May 13 2011

15:26

There’s More Than One Way to Scrape a Site

A request came in to ScraperWiki to scrape information on the Members of the European Parliament. I put it out on Twitter and Facebook, hoping a kind member of the ScraperWiki community would have spent so much time at the computer that he/she had no life at all. I had to turn people away!

Within minutes, two tweeters wanted to give it a go and I got a reply on Facebook. In fact, Tim Green had already scraped the names and URLs of MEPs by the time I got back to him to say it had already been claimed on Twitter by Pall Hilmarsson.

Although both scrapers are looking at the same site, Tim’s is less than 20 lines of code and, with only 8 revisions, it’s a very quick scrape. Pall’s, on the other hand, went for the whole shebang, scraping opinions and speeches and generally drilling down into the data a whole lot more. Hence the nearly 200 lines of code!
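
To give you an idea of just how little code a quick scrape like Tim’s needs, here’s a minimal sketch in the usual ScraperWiki Python style. It isn’t Tim’s actual scraper – the listing URL and CSS selector below are placeholders.

    # Minimal names-and-URLs scraper sketch (placeholder URL and selector).
    import scraperwiki
    import lxml.html

    LISTING_URL = "https://www.europarl.europa.eu/meps/en/full-list"  # placeholder

    html = scraperwiki.scrape(LISTING_URL)
    root = lxml.html.fromstring(html)

    for link in root.cssselect("a.mep-name"):  # placeholder selector
        scraperwiki.sqlite.save(
            unique_keys=["url"],
            data={"name": link.text_content().strip(), "url": link.get("href")},
        )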

So if you’re a code junkie, take a look at what it takes to scrape, and then scrape further, by comparing scrapers/meps with scrapers/meps_2. Also, Tim kindly scraped the next request: the National Historic Ships Register. To Tim and Pall I say: if the ScraperWiki digger were capable of emotion, you would both be receiving a diesel-greasy kiss!

European Parliament Members and National Historic Ships – you’ve been ScraperWikied! (with help from your friendly neighbourhood programmers)


May 11 2011

15:07

Access government in a way that makes sense to you? Surely not!

alpha.gov.uk uses ScraperWiki, a cutting-edge data-gathering tool, to deliver the results that citizens want. And, radically for government, rather than tossing a finished product out onto the web with a team of defenders, this is an experiment in customer engagement.

If you’re looking to renew your passport, find out about student loans or complete a tax return, it’s usually easier to use Google than to navigate through government sites. That was the insight of Tom Loosemore, director of the Alphagov project, and his team of developers. This is a government project run by government.

Alpha.gov.uk is not a traditional website; it’s a developer-led but citizen-focused experiment in engaging with government information.

It abandons the direct.gov.uk approach of forcing you to think the way they thought. Instead, it provides a simple “ask me a question” interface and learns from customer journeys, starting with the 80 most popular searches that led to a government website.

But how would they get information from all those government websites into the new system?

I spoke to Tom about the challenges behind the informational architecture of the site and he noted that: “Without the dynamic approach that ScraperWiki offers we would have had to rely on writing lots of redundant code to scrape the websites and munch together the different datasets. Normally that would have taken our developers a significant amount of time, would have been a lot of hassle and would have been hard to maintain. Hence we were delighted to use ScraperWiki, it was the perfect tool for what we needed. It avoided a huge headache.”

Our very own ScraperWiki CEO Francis Irving says: “It’s fantastic to see Government changing its use of the web to make it less hassle for citizens. Just as developers need data in an organised form to make new applications from it, so does Government itself. ScraperWiki is an efficient way to maintain lots of converters for data from diverse places, such as Alpha.gov.uk have here from many Government departments. This kind of data integration is a pattern we’re seeing, meeting people’s expectations for web applications oriented around the user getting what they want done fast. I look forward to seeing the Alpha.gov.uk project rolling out fully – if only so I can renew my passport without having to read lots of text first!”

Just check out the AlphaGov tag. Because government sites weren’t built to speak to one another there’s no way their data would be compatible to cut and paste into a new site. So this is another cog in the ScraperWiki machine: merging content from systems that cannot talk to each other.

Alpha.gov.uk is an experimental prototype (an ‘alpha’) of a new, single website for UK Government, developed in line with the recommendations of Martha Lane Fox’s Review. The site is a demonstration, and whilst it’s public it’s not permanent and is not replacing any other website.  It’s been built in three months by a small team in the Government Digital Service, part of the Cabinet Office.

May 03 2011

11:23

It’s all a matter of trust

According to the latest Ipsos MORI poll on trust in people, only 1 in 5 people think journalists tell the truth. They’re still more trusted than politicians generally and government ministers, though! Phew.

But telling the truth and being trustworthy are not the same thing. There’s not believing what they say and then there’s knowing that what they say is wrong and doing something about it. Which is why we have the Press Complaints Commission.

Here at ScraperWiki we also have a group of developers who don’t just complain when sites don’t work – they do something about it. That’s what Ben Campbell did for the Press Complaints Commission. He scraped the PCC to produce this site for the Media Standards Trust.

‘Trying to work out basic stuff, like which newspapers are the most complained about, is virtually impossible on the existing PCC site. So we scraped the data to make it easier (oh, and it’s the Daily Mail)’ – Martin Moore (Media Standards Trust)

Just as a news story can be presented in myriad ways, so too can data. Some representations are more useful than others. Many have different purposes, or a different audience. Others are so buried behind web forms and code that they can’t reveal a story unless liberated.

Scraping creates a data wire service. And our developers are showing how even a simple league table (with real-time updates) can tell a completely different story.

Press Complaints Commission – you’ve been ScraperWikied!


April 21 2011

13:15

Meet the User – Brewing up a data storm

By taking part in BigClean we got some very interesting users sharing space with you here on ScraperWiki. The event being hosted in Prague meant we got to show off our installation of Unicode! So meet (takže sa môžete zoznámiť – ‘so you can get acquainted’) Stefan Urbanek.

His project, Data Brewery, is a Python framework for data mining. It’s like a coder’s version of Yahoo Pipes, where you link up nodes that stream in, process and output data. Something like this could be used for general ETL (extract, transform, load) business applications, but Data Brewery specialises in discovering what is in the data, and measuring its quality.
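
If the linked-up nodes idea sounds abstract, the little sketch below shows the general shape in plain Python. To be clear, this is not Data Brewery’s actual API – just an illustration of a source node streaming records through an audit node into an output node.

    # Plain-Python illustration of a node pipeline (not Data Brewery's real API):
    # a source streams records, an audit node measures quality, an output collects.
    from typing import Iterable, Iterator

    def source_node() -> Iterator[dict]:
        """Stream in some records (in practice: a CSV, a database, a ScraperWiki table)."""
        yield {"name": "Alice", "age": "34"}
        yield {"name": "Bob", "age": ""}

    def audit_node(records: Iterable[dict]) -> Iterator[dict]:
        """Flag records with missing values – a crude data-quality measure."""
        for record in records:
            record["complete"] = all(record.values())
            yield record

    def output_node(records: Iterable[dict]) -> list:
        """Collect the processed stream."""
        return list(records)

    # Link the nodes together, much like wiring up boxes in Yahoo Pipes.
    print(output_node(audit_node(source_node())))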

He’s blogged about creating a ScraperWiki backend for his data analysis framework. He even drew a pretty picture (much better than I could), which you can see in his blog post.

He says the whole idea of ScraperWiki has made his life easier (glad to know our hard work is not for nothing!). And he plans to create more ScraperWiki plugins for analytical processing in Brewery.


April 11 2011

18:22

Scrape it – Save it – Get it

I imagine I’m talking to a load of developers. Which is odd seeing as I’m not a developer. In fact, I decided to lose my coding virginity by riding the ScraperWiki digger! I’m a journalist interested in data as a beat so all I need to do is scrape. All my programming will be done on ScraperWiki, as such this is the only coding home I know. So if you’re new to ScraperWiki and want to make the site a scraping home-away-from-home, here are the basics for scraping, saving and downloading your data:
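
In code, the three steps boil down to something like this – a minimal sketch using the ScraperWiki Python helpers, with a placeholder URL and CSS selector standing in for whichever site you actually want to scrape:

    import scraperwiki
    import lxml.html

    # 1. Scrape it: fetch a page and pull out the bits you want.
    #    (The URL and selector are placeholders – point them at your own target.)
    html = scraperwiki.scrape("https://example.com/some-listing")
    root = lxml.html.fromstring(html)

    for item in root.cssselect("li.item"):
        record = {"title": item.text_content().strip()}

        # 2. Save it: keying on "title" means re-runs update rows instead of duplicating them.
        scraperwiki.sqlite.save(unique_keys=["title"], data=record)

    # 3. Get it: the saved table ("swdata" by default) can be queried here,
    #    downloaded as CSV, or fetched by anyone through the external API.
    print(scraperwiki.sqlite.select("* from swdata limit 5"))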

With these three simple steps you can take advantage of what ScraperWiki has to offer – writing, running and debugging code in an easy-to-use editor; collaborative coding with chat and user viewing functions; a dashboard with all your scrapers in one place; examples, cheat sheets and documentation; a huge range of libraries at your disposal; a datastore with API callback; and email alerts to let you know when your scrapers break.

So give it a go and let us know what you think!


February 20 2010

01:48

Android Users Have Increased by 200% in the Past Three Months

According to foreign media reports, the latest data published by the research firm ChangeWave Research shows that in the past few months the number of Android mobile phone users has risen sharply, and users’ attitudes towards Android have also turned more positive.

ChangeWave Research’s survey data shows that by December 2009, 4% of smartphone users were running the Android operating system, an increase of 200% since September.

The firm also surveyed users’ smartphone purchasing plans. Among interviewees planning to buy a smartphone within the coming 90 days, 21% said they would choose an Android phone, a share 6 percentage points higher than in September. In September, Android and Palm were the two least popular smartphone operating systems, but Android has now risen to second in that ranking.

The growth in Android users affects the iPhone to some degree. The iPhone is still the most popular smartphone, but the proportion of interviewees wishing to buy one has fallen from 32% to 28%. Its market share nevertheless increased by 1 percentage point to 31%. RIM and Palm hold market shares of 39% and 6% respectively.

Although Android phones are developing rapidly, sales of Google’s first own-brand Android phone, the Nexus One, did not meet expectations: it sold only 20,000 units in its first week on sale. So, even though users are gradually accepting Android, it still has a long way to go to catch the iPhone 3GS.

However, comparing the Nexus One and the iPhone directly isn’t really appropriate, because the Nexus One is only one of many recently launched Android phones. Others, such as the HTC myTouch 3G and the Motorola Droid, have attracted a great many users. Generally speaking, Android’s market share is rising, which is good news for HTC and Motorola.

Epathchina.com always pays close attention to Android phones, not only Google’s own but those from other makers too. As a complete electronics dropshipping company, epathchina.com also sells some Android phones, and it is certainly pleased to see such exciting growth.


December 17 2009

07:26

In defence of the “user”

Matt Kelly doesn’t like the term ‘users’. In a keynote speech to the World Newspaper Congress in Hyderabad, he bemoaned the sterility of the word:

“What a word! “Users.” Not readers, or viewers. Certainly not customers – not unless we are being deeply ironic. For the fact is the word “user” is, for the vast majority of people consuming our products online, entirely accurate.

“We’d never choose such a sterile word to describe the people who buy our newspapers. But online, “users” is about right. They find our content in a search engine, they devour it, then they move back to Google, or wherever, and go looking for more. Often, they have no idea which website it was they found the content on. This was the audience we’ve been chasing all that time. A swarm of locusts.”

He’s not alone. Many others have expressed – if not in such forthright terms, and often on very different bases – similar objections to the term.

But I like it.

I like it because it makes very plain how people use the medium. They were readers in print, an audience for broadcasters, and customers and consumers for businesses, but online… they ‘use’. Not in the exploitative sense that Matt Kelly insinuates, but in an instrumental fashion.

People use the web as a tool – to communicate, to find, to play, and to do a hundred other things. So many of the success stories online are tools: Google, Facebook, Flickr, YouTube. And one of the changes in mindset required when you publish for an online audience, it seems to me, is to recognise that people will want to ‘do’ something with your content online. Share it. Comment on it. Remix it. Correct it. Rate it. Search it. Annotate it. Tag it. Store it. Compare it. Contextualise it. Analyse it. Mash it.

In his book The Wealth of Networks, Yochai Benkler suggested that the ‘user’ was emerging as “a new category of relationship to information production and exchange.

“Users are individuals who are sometimes consumers and sometimes producers. They are substantially more engaged participants, both in defining the terms of their productive activity and in defining what they consume and how they consume it. In these two great domains of life—production and consumption, work and play—the networked information economy promises to enrich individual autonomy substantively by creating an environment built less around control and more around facilitating action.”

That quote should be at the heart of any online journalism training. Different medium, different rules.
