Tumblelog by Soup.io

July 01 2013

13:56

Open Data Directory, Barriers to Open Government Data and a Distributed Data Supply Chain

This week's data digest features plans for an Open Data Directory and attempts to understand barriers to open government data. It also highlights some great examples of how open data is transforming journalism, along with thoughts on the need to build a distributed data supply chain.


June 27 2013

15:52

Nonprofits Learn How to Tell Untold Stories at Data Day 2013

Data can be integrated seamlessly into stories that benefit communities, presenters told nonprofits and journalists at a conference on June 21. The event demonstrated how one can tap into information sources about communities whose voices are often unheard.

Data Day 2013, held at Northeastern University in Boston, showcased how successful data-based stories engage people on an emotional level. Metropolitan Area Planning Council, Northeastern University’s School of Public Policy & Urban Affairs, and The Boston Foundation co-hosted the conference.


June 17 2013

13:58

Global Open Data Initiative, OpenData Latinoamérica, Big Data for Disaster Response and more...

This week, check out the launch of the new Global Open Data Initiative, the push for the release and use of open data in Latin America, and thoughts on open data and democracy in Africa. Also featured: a list of wrong assumptions about the use of big data for humanitarian relief, and a look at some of the big data presentations from the recent Personal Democracy Forum in New York.


June 11 2013

17:00

OpenData Latinoamérica: Driving the demand side of data and scraping towards transparency

“There’s a saying here, and I’ll translate, because it’s very much how we work,” Miguel Paz said to me over a Skype call from Chile. “Here, it’s ‘It’s better to ask forgiveness than to ask permission.’ But that doesn’t mean that it’s illegal.”

Paz is a veteran of the digital news business. The saying has to do with his approach to scraping public data from governments that may be slow to share it. He’s also a Knight International Journalism Fellow, the founder of Hacks/Hackers Chile, and a recent Knight News Challenge winner. A few years ago, he founded Poderopedia, a database of Chilean politicians and their many connections to political organizations, government offices, and businesses.

But freeing, organizing, and publishing data in Chile alone is not enough for Paz, which is why his next project, in partnership with Mariano Blejman of Argentina’s Hacks/Hackers network, is aimed at freeing data from across Latin America. Their project is called OpenData Latinoamérica. Paz and Blejman hope to build a centralized home where all regional public data can be stored and shared.

Their mutual connection through Hacks/Hackers is key to the development of OpenData Latinoamérica. The network will make itself available, to whatever extent possible, for troubleshooting and training as the project gets off the ground and civic hackers and media types learn both how to upload datasets and how to make use of the information they find there.

Another key partnership helping make OpenData Latinoamérica possible is with the World Bank Institute’s Global Media Development program, which is run by Craig Hammer. Hammer believes the data age is revolutionizing government, non-government social projects, and how we make decisions about everyday life.

“The question for us, is, What are we gonna do with the data? Data for what? Bridging that space between opening the data and how it translates into improving the quality of people’s lives around the world requires a lot of time and attention,” he says. “That’s really where the World Bank Institute and our programmatic work is focused.”

A model across the Atlantic

Under Hammer, the World Bank helped organize and fund Africa Open Data, a similar project launched by another Knight fellow, Justin Arenstein. “The bank’s own access-to-information policy provides for a really robust opportunity to open its own data,” Hammer says, “and in so doing, provide support to countries across regions to open their own data.”

Africa Open Data is still in beta, but bringing together hackers, journalists, and information in training bootcamps has already led to reform-producing journalism. In a post about the importance of equipping the public for the data age, Hammer tells the story of Irene Choge, a journalist from Kenya who attended a training session hosted by the World Bank in conjunction with Africa Open Data.

She…examined county-level expenditures on education infrastructure — specifically, on the number of toilets per primary school…Funding allocated for children’s toilet facilities had disappeared, resulting in high levels of open defecation (in the same spaces where they played and ate). This increased their risk of contracting cholera, giardiasis, hepatitis, and rotavirus, and accounted for low attendance, in particular among girls, who also had no facilities during their menstruation cycles. The end result: poor student performance on exams…Through Choge’s analysis and story, open data became actionable intelligence. As a result, government is acting: ministry resources are being allocated to correct the toilet deficiency across the most underserved primary schools and to identify the source of the misallocation at the root of the problem.

Hammer calls Africa Open Data a useful “stress test” for OpenData Latinoamérica, but Paz says the database was also a natural next step in a series of frustrations he and Blejman had encountered in their other work.

“Usually, the problem you have is: Everything is cool before the hackathon, and during the hackathon,” says Paz. “But after, it’s like, who are the people who are working on the project? What’s the status of the project? Can I follow the project? Can I be a part of the project?” The solution to this problem ended up being Hackdash, which was actually Blejman’s brainchild — an interface that helps hackers keep abreast of the answers to those questions and thereby shore up the legacy of various projects.

So thinking about ways that international hackers can organize and communicate across the region is nothing new to Paz and Blejman. “One hackathon, we would do something, and another person who didn’t know about that would do something else. So when we saw the Open Data Africa platform, we thought it was a really great idea to do in Latin America,” he says.

Blejman says the contributions of the World Bank have been essential to the founding of OpenData Latinoamérica, especially in organizing the data bootcamps. Hammer sees the role of the bank as building a bridge between civic hackers and media. “More than a platform,” he says, it’s “an institution in and of itself to help connect sources of information to government and help transform that data into knowledge and that knowledge into action.”

Giving people the tools to understand the power of data is an important tenet of Hammer’s open data philosophy. He believes the next step for Big Data is global data literacy, which he says is most immediately important for “very specific and arguably strategic public constituencies — journalists, media, civic hackers, and civil society.” Getting institutions, like newspapers, to embrace the importance of data literacy rather than relying on individual interest is just one goal Hammer has in mind.

“I’m not talking about data visualization skills for planet Earth,” he says. “I’m saying, it’s possible — or it should be possible — for anybody that wants to have these skills to have them. If we’re talking about data as the real democratizer — open data as meaningful democratization of information — then it has to be digestible and accessible and consumable by everyone and everybody who wants to access and digest and consume it.”

Increasing the desire of the public for more, freer data is what Hammer calls stoking the demand side. He says it’s great if governments are willingly making information accessible, but for it to be useful, people have to understand its power and seek to unleash it.

“What’s great about OpenData Latinoamérica is it’s in every way a demand-side initiative, where the public is liberating its own data — it’s scraping data, it’s cleaning it,” he says. “Open data is not solely the purview of the government. It’s something that can be inaugurated by public constituencies.”

For example, in Argentina, where the government came late to the open data game, Blejman says he saw a powerful demand for information spring up in hackers and journalists around him. When they saw what other neighboring countries had and what they could do with that information, they demanded the same, and Argentina’s government began to release some of that data.

“We need to think about open data as a service, because no matter how much advocacy from NGOs, people don’t care about ‘open data’” per se, Paz says. “They care about data because it affects their life, in a good or bad way.”

Another advantage Blejman and Paz had when heading into OpenData Latinoamérica was the existence of Junar, a Chilean software platform founded by Javier Pajaro, who was a frustrated analyst when he decided to embrace open data platforms and help others do the same. Blejman said that, while Africa Open Data opted for CKAN, using a local, Spanish-language company already familiar to members of the Hacks/Hackers network has strengthened the project, making it easier to troubleshoot problems as they arise. He also said Junar's ability to give participating organizations more control fits nicely into their hands-off, crowd-managed vision for the future day-to-day operation of the database.

Organizing efforts

Paz and Blejman have high hopes for the stories and growth that will come from OpenData Latinoamérica. “What we expect from these events is for people to start using data, encourage newspapers to organize around data themes, and have the central hub for what they want to consume,” Blejman said.

They hope to one day bring in data from every country in Latin America, but they acknowledge that some will be harder to reach than others. “Usually, with the federated governments, it’s harder to get standardized data. So, in a country like Argentina, which is a federated state with different authorities on different levels, it’s harder to get standardized data than in a republic where there’s one state and no federated government,” says Paz. “But then again, in Chile, we have really great open data, open government and transparency laws, but we don’t have great data journalism.” (Chile is a republic.)

Down the road, they’d also like to provide a secure way for anonymous sources to dump data to the site. Paz says in his experience as a news editor, 20–25 percent of scoops come from anonymous tips. But despite developments like The New Yorker’s recent release of Strongbox, OpenData Latinoamérica is still working out a secure method that doesn’t require downloading Tor, but is more secure than email. Blejman also added that, for now, whatever oversight they have over the quality and accuracy of the original data they’re working with is minimal: “At the end, we cannot control the original sources, and we are just trusting the organizations.”

But more than anything, Paz is excited about seeing the beginnings of the stories they'll be able to tell. He plans to use documents about public purchases made by Chile's government to build an app that lets citizens track what their government is spending money on, and which companies are receiving those contracts.

Another budding story exemplifies the extent to which Paz has taken to heart Craig Hammer’s emphasis on building demand. In Chile, there is currently a significant outcry from students over the rising cost of education. Protests in favor of free education are ongoing. In response, Paz decided to harness this focus, energy, and frustration into a scrape-a-thon (or #scrapaton) to be held June 29 in Santiago. They will focus on scraping data on the owners of universities, companies that contract with universities, and who owns private and subsidized schools.
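A scrape-a-thon script in this spirit can be surprisingly small. As a sketch only (the table markup and the university and owner names below are invented for illustration, not Chile's actual registry), here is how name/owner pairs might be pulled out of an HTML table using nothing but Python's standard library:

```python
# Hypothetical scrape-a-thon helper: extract (university, owner) pairs
# from an HTML table. SAMPLE_PAGE stands in for a real registry page,
# which would instead be fetched over HTTP and have its own markup.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<table id="universidades">
  <tr><td>Universidad A</td><td>Holding X</td></tr>
  <tr><td>Universidad B</td><td>Grupo Y</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []        # start collecting a fresh row
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(tuple(self._row))

    def handle_data(self, data):
        # Only keep text that sits inside a table cell.
        if self._in_td and data.strip():
            self._row.append(data.strip())

scraper = TableScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.rows)  # [('Universidad A', 'Holding X'), ('Universidad B', 'Grupo Y')]
```

Once the rows are in a plain Python list, cleaning them and uploading the result to a shared portal is the easy part; agreeing on who owns what is where the scrape-a-thon's collective domain knowledge comes in.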

“There’s a joke that says if you put five gringos — and I don’t mean gringos in a disrespectful way — if you put five U.S. people in a room, they’re probably going to invent a rocket,” says Paz. “If you put five Chileans in a room, they’re probably going to fight each other. So one of the things — we’re not just building tools, we’re also building ways of working together, and making people trust each other.” Blejman added that he hopes the recent release of a Spanish-language version of the Open Data Handbook (El manual de Open Data) will further facilitate collaboration between hackers in various Latin American countries.

With a project of this size and scope, there are also some ambitious designs around measurement. Paz hopes to track how many stories and projects originate with datasets from OpenData Latinoamérica. Craig Hammer wants to quantify the social good of open data, a project he says is already underway via the World Wide Web Foundation’s collaboration with the Open Data for Development Camp.

“If there is a cognizable and evidentiary link between open data and boosting shared prosperity,” Hammer says, “then I think that would be, in many cases, the catalytic moment for open data, and would enable broad recognition of why it’s important and why it’s a worthwhile investment, and broad diffusion of data literacy would really explode.”

Hammer wants people to take ownership of data and realize it can help inform decisions at all levels, even for individuals and families. Once that advantage is made clear to the majority of the population, he says, the demand will kick in, and all kinds of organizations will feel pressured to share their information.

“There’s this visceral sense that data is important, and that it’s good. There’s recognition that opening information and making it broadly accessible is in and of itself a global public good. But it doesn’t stop there, right? That’s not the end,” he says. “That’s the beginning.”

Photo of Santiago student protesters walking as police fire water cannons and tear gas fills the air, Aug. 8, 2012, by AP/Luis Hidalgo.

May 30 2013

13:49

More details on ‘news:rewired plus’ training days on 19 September

Image by Mark Hakansson

Did you know that you can sign up to attend a one-day intensive workshop the day before news:rewired?

The next news:rewired digital journalism conference is on 20 September. We also offer a ‘news:rewired plus’ option so that you can attend one of three one-day training workshops the day before the conference, on Thursday 19 September.

If you are coming from overseas and want to make the most of your time in the UK, or if you just want to learn a new skill, signing up for a one-day course will allow you to really get to grips with one of the subjects on offer.

There are three news:rewired plus one-day workshops to choose from. If you are a regular at news:rewired you will recognise some or all of the trainers. They have all been involved with the event in the past; for example, both Luke and Glen delivered workshops at our last event in April. We have invited them to lead one-day courses based both on their expertise in the field and the positive feedback from news:rewired delegates.

The three options are below. Click the links for full details.

Creating a buzz: How to grow active social media communities. This course is led by Luke Lewis, editor of BuzzFeed UK.
Mobile journalism: How to create quality video and audio on an iPhone and iPad. This course is led by Glen Mulcahy, innovation lead at Ireland’s national broadcaster RTE.
Introduction to open data for journalists: finding stories in data. This course is led by Kathryn Corrick from the Open Data Institute.


The first 50 news:rewired tickets (whether standard or ‘plus’) are available at an early bird discount rate. We only have a few left – so hurry!

This means early bird news:rewired plus tickets cost £280 (+VAT), while standard, conference-only news:rewired tickets cost £95 (+VAT). Tickets include lunch, refreshments and after-event drinks on the day of the conference.

The early bird discount will only apply to the first 50 tickets sold, or until the end of Friday (31 May), whichever comes first. After this point, standard tickets will rise to £130 (+VAT) and ‘news:rewired plus’ tickets will rise to £310 (+VAT).

You can buy standard conference tickets at this link. If you select a news:rewired PLUS ticket Journalism.co.uk will contact you to confirm which training course you would like to attend on the Thursday (19 September) and provide further details.

May 28 2013

16:35

Big Data Stories, Open Data on the Web & Mobile Data - Data Digest #17

If you are looking for some practical examples of how big data relates to you, have a look at this week's data digest. Guidelines for publishing open data sets are also provided, along with a number of papers and research on the generation and use of data, and some interesting insight into the importance of mobile data.


May 20 2013

15:24

"Moneyball" Giving, Data Philanthropy Review and "Seeing like a State" - Data Digest #16

This week, the revolutionary data-driven philanthropy of the Laura and John Arnold Foundation is featured, and a call goes out for beneficiary inclusion in Markets for Good. Data philanthropy efforts are reviewed and advice for writing about data is offered. The rationale for "seeing data like a state" rather than as a donor in Africa, to help answer policymakers' questions, is also explained.


May 18 2013

18:56

Net2 Houston: Using the City of Houston's Open Data to Do Good

Houston open data portal

The City of Houston’s Bruce Haupt led a discussion at Net2 Houston’s May 14 meetup.
The topic? The City of Houston has a big heap of data and they want your ideas on how to use it!


April 15 2013

12:04

April 09 2013

12:39

April 03 2013

11:54

July 30 2012

18:37

Netsquared Regional Event for Cameroon and Nigeria

The Netsquared Regional Conference for Cameroon and Nigeria is a multi-stakeholder event that will bring together actors from local Netsquared groups, the Internet Society, civil society, diplomatic institutions, government and the tech world to discuss issues related to the social web and non-governmental diplomacy. Over two days, citizens from three neighboring countries (Cameroon, Nigeria and the Central African Republic) will seek to resolve the following challenges:

- The difficulties faced in introducing the social web for social development in the sub region


April 30 2012

14:00

How to Contribute to OpenStreetMap and Grow the Open Geodata Set

Hundreds of delegates from government, civil society, and business gathered in Brasilia recently for the first Open Government Partnership meetings since the inception of this initiative. Transparency, accountability, and open data as fundamental building blocks of a new, open form of government were the main issues debated. With the advent of these meetings, we took the opportunity to expand an open data set by adding street names to OpenStreetMap.

Getting ready to survey the Cruzeiro neighborhood in Brasilia.

OpenStreetMap, sometimes dubbed the "Wikipedia of maps," is an open geospatial database. Anyone can go to openstreetmap.org, create an account, and add to the world map. The accessibility of this form of contribution, paired with the openness of its common data repository, holds a powerful promise of commoditized geographic data.

As this data repository evolves, along with corresponding tools, many more people gain access to geospatial analysis and publishing -- which previously was limited to a select few.

When Steve Coast founded OpenStreetMap in 2004, the proposition to go out and crowdsource a map of the world must have sounded ludicrous to most. After pivotal growth in 2008 and the widely publicized rallying around mapping Haiti in 2010, the OpenStreetMap community has proven how incredibly powerful a free-floating network of contributors can be. There are more than 500,000 OpenStreetMap contributors today. About 3 percent (that's still a whopping 15,000 people) contribute a majority of the data, with roughly 1,300 contributors joining each week. Around the time when Foursquare switched to OpenStreetMap and Apple began using OpenStreetMap data in iPhoto, new contributors jumped to about 2,300 per month.

As the Open Government Partnership meetings took place, we wanted to show people how easy it is to contribute to OpenStreetMap. So two days before the meetings kicked off, we invited attendees to join us for a mapping party, where we walked and drove around neighborhoods surveying street names and points of interest. This is just one technique for contributing to OpenStreetMap, one that is quite simple and fun.

Here's a rundown of the most common ways people add data to OpenStreetMap.

Getting started

It takes two minutes to get started with contributing to OpenStreetMap. First, create a user account on openstreetmap.org. You can then immediately zoom to your neighborhood, hit the edit button, and get to work. We recommend that you also download the JOSM editor, which is needed for more in-depth editing.

Once you start JOSM, you can download an area of OpenStreetMap data, edit it, and then upload it. Whatever you do, it's crucial to add a descriptive commit message when uploading -- this is very helpful for other contributors to figure out the intent and context of an edit. Common first edits are adding street names to unnamed roads, fixing typos, and adding points of interest like a hospital or a gas station. Keep in mind that any information you add to OpenStreetMap must be observed fact or taken from data in the public domain -- so, for instance, copying street names from Google is a big no-no.
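If you end up scripting uploads against the OSM API rather than using JOSM, that descriptive commit message travels as a "comment" tag on the changeset. Below is a minimal Python sketch of building that changeset XML; the comment text and editor name are placeholder values, and an actual upload would still require authenticated HTTP calls to the public API endpoints.

```python
# Sketch: the XML body a client sends to open an OpenStreetMap
# changeset. The "comment" tag is the commit message other mappers
# will see; values here are illustrative placeholders.
import xml.etree.ElementTree as ET

def build_changeset_xml(comment, created_by="my-editor/0.1"):
    """Build the XML body used when creating a changeset."""
    osm = ET.Element("osm")
    cs = ET.SubElement(osm, "changeset")
    ET.SubElement(cs, "tag", k="created_by", v=created_by)
    # A descriptive comment tells other contributors the intent
    # and context of the edit.
    ET.SubElement(cs, "tag", k="comment", v=comment)
    return ET.tostring(osm, encoding="unicode")

xml_body = build_changeset_xml("Add street names surveyed in Cruzeiro, Brasilia")
print(xml_body)
```

Whether the message arrives via JOSM's upload dialog or a script like this, the point is the same: every changeset should say what was changed and why.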

Satellite tracing and GPS data

JOSM allows for quick tracing of satellite images. You can simply turn on a satellite layer and start drawing the outlines of features that can be found there, such as streets, building footprints, rivers, and forests. Using satellite imagery is a great way to create coverage fast. We've blogged before about how to do this. Here's a look at our progress tracing Brasilia in preparation for the OGP meetings:


OpenStreetMap contributions in Brasilia between April 5 and April 12.

In places where good satellite imagery isn't available, a GPS tracker goes a long way. OpenStreetMap offers a good comparison of GPS units. Whichever device you use, the basics are the same -- you track an area by driving or walking around and later load the data into JOSM, where you can clean it up, classify it, and upload it into OpenStreetMap.
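Before cleaning a track up in JOSM, it can help to peek inside the file yourself: most GPS units export GPX, which is ordinary XML. The snippet below is a small sketch using only the Python standard library, with an invented two-point track standing in for a real survey file.

```python
# Sketch: read track points from a GPX file like those produced by a
# handheld GPS unit. The sample track below is made up; a real file
# would come straight off the device.
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

SAMPLE_GPX = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="-15.7942" lon="-47.8822"><ele>1172</ele></trkpt>
    <trkpt lat="-15.7940" lon="-47.8819"><ele>1171</ele></trkpt>
  </trkseg></trk>
</gpx>"""

def track_points(gpx_text):
    """Return (lat, lon) tuples for every <trkpt> in the GPX document."""
    root = ET.fromstring(gpx_text)
    return [(float(pt.get("lat")), float(pt.get("lon")))
            for pt in root.iter(GPX_NS + "trkpt")]

points = track_points(SAMPLE_GPX)
print(points)  # the surveyed positions, in track order
```

Seeing the raw points this way makes it obvious when a track has GPS noise worth trimming before you classify and upload it.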

Synchronizing your camera with your tracker

Synchronizing your camera with the GPS unit.

Walking papers

For our survey in Brasilia, we used walking papers, which are simple printouts of OpenStreetMap that let you jot down notes on paper. This is a great tool for on-the-ground surveys to gather street names and points of interest. It's as simple as you'd imagine. You walk or drive around a neighborhood and write up information that you see that's missing in OpenStreetMap. Check out our report of our efforts doing this in Brasilia on our blog.

Walking papers for Brasilia.

Further reading

For more details on how to contribute to OpenStreetMap, check out Learn OSM -- it's a great resource with step-by-step guides for the most common OpenStreetMap tasks. Also feel free to send us questions directly via @mapbox.

April 20 2012

14:00

How Ushahidi Deals With Data Hugging Disorder

At Ushahidi, we have interacted with various organizations around the world. The key thing we remember from reaching out to some NGOs (non-governmental organizations) in Kenya when we began in 2008 is that we faced a lot of resistance: organizations were unwilling to share data, which was often locked in PDFs rather than machine-readable formats.

This was especially problematic as we were crowdsourcing information about the events that happened that year in Kenya. Our partners in other countries have had similar challenges in gathering relevant and useful data that is locked away in cabinets, yet was paid for by taxpayers. The progress in the Gov 2.0 and open data space around the world has greatly encouraged our team and community.

When you've had to deal with data hugging disorder of NGOs, open data is a welcome antidote and opportunity. Our role at Ushahidi is to provide software to help collect data, and visualize the near real-time information that's relevant for citizens. The following are some thoughts from our team and what I had hoped to share at OGP in Brazil.


Government Data is important

  • Comprehensive - It covers the entire country. A national census, for example, samples the whole population, whereas most questionnaires have a far smaller sample.
  • Verified - Government data is "clean" data; it has been verified -- for example, the number of schools in a particular region. Crowdsourcing projects done by government can be quite dependable. (Read this example of how Crowdmap was used by the Ministry of Agriculture in Afghanistan to collect commodity prices.)
  • Official - Government data forms the basis of government decision making and policy. If you want to influence government policy and interventions, your case needs to be based on official data.
  • Expensive - Because it is comprehensive and verified, government data is expensive to collect -- an expense covered by the taxpayer.

Platforms are important

Libraries were built before people could read. Libraries drove the demand for literacy. It therefore makes sense for data and data platforms to exist before citizens have become literate in data. As David Eaves wrote on the Open Knowledge Foundation blog:

It is worth remembering: We didn't build libraries for an already literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have a data or public policy literate citizenry, we build them so that citizens may become literate in data, visualization, coding and public policy.

Some countries, like Kenya, now have the data, and open-source platforms are now available not just in Kenya but worldwide. What are we missing?

Platforms like Ushahidi are like fertile land, and having open data is like having good seeds. (Good data equals very good seeds.) But fertile land and seeds are not much without people working that very land. We often say that technology is 10 percent of what goes into a deployment project -- the rest is partnership, hard work and, most of all, community. Ordinary citizens can be the farmers of this land; we need to put ordinary citizens at the heart of open government for it to be powerful.

Ushahidi's role

Accessible data: The ownership debate has been settled as we agree government data belongs to the citizens. However, ownership is useless without access. If you own a car that you do not have access to, that car is useless to you. In the same way, if our citizens own data they have no access to, it's useless to them. Ownership is exercised through access. Ushahidi makes data accessible -- our technology "meets you where you are." No new devices are needed to interact with the data.

Digestible data: Is Africa overpopulated? If Africa is overpopulated or risks overpopulation, what intervention should we employ? Some have suggested sterilization. However, the data shows us that the more education a woman has, the fewer babies she has. Isn't increasing education opportunities for women a better intervention? It also has numerous additional advantages for a country: more educated people are usually more economically productive.

Drive demand for relevant data: Governments are frustrated that the data they have released is not being used. Is this because data release is driven mainly by the supply side, not the demand side -- governments release what they want to release, not what is wanted? How do we identify data that will be useful to the grassroots? We can crowdsource demand for data. For example: The National Taxpayer Alliance in Kenya has shown that when communities demand and receive relevant data, they become more engaged and empowered. There are rural communities suing MPs for misusing constituency development funds. They knew the funds were misused because of the availability of relevant data.

Closing the feedback loop: The key to behavioral change lies in feedback loops. These are very powerful, as exemplified by the incredible success of platforms like Facebook, which are dashboards of our social lives and those of our networks. What if we had a dashboard of accountability and transparency for the government? How about a way to find out whether the services funded and promised to the public were indeed delivered, and at what service level? For example, the concept of Huduma in Kenya showed an early prototype of what such a dashboard could look like. We are working on more ways of using the Ushahidi platform to support this specific use case; partnership announcements will be made in due course.

All this, to what end? Efficiency and change

If we as citizens can point out what is broken, and if the governments can be responsive to the various problems there are, we can perhaps see a delta in corruption and service provision.

Our role at Ushahidi is making sure there's no lack of technology to address citizens' concerns. Citizens can also be empowered to assist each other if the data is provided in an open way.

Open Data leading to Open Government

It takes the following to bridge open data and open government:

  • Community building - Co-working spaces allow policy makers, developers and civic hackers to congregate, have conversations, and build together. Examples are places like the iHub in Kenya, Bongo Hive in Zambia, and Code For America meetups in San Fransisco, just to name a few.
  • Information gathering and sharing - Crowdsourcing plus traditional methods give not only static data but a near real-time view of what's going on on the ground.
  • Infrastructure sharing - Build capacity once, reuse many times -- e.g., Crowdmap.
  • Capacity building - If it works in Africa, it can work anywhere. Developing countries have a particularly timely opportunity of building an ecosystem that is responsive to citizens and can help to leapfrog by taking open data, adding real-time views, and most of all, acting upon that data to change the status quo.
  • Commitment from government - We can learn from Chicago (a city with a history of graft and fraud), where current CTO John Tolva and Mayor Rahm Emmanuel have been releasing high-value data sets, running hackathons, and putting up performance dashboards. The narrative of Chicago is changing to one of a startup haven! What if we could do that for cities with the goal of making smart cities truly smart from the ground up? At the very least, surfacing the real-time view of conditions on the ground, from traffic, energy, environment and other information that can be useful for urban planners and policy makers. Our city master plans need a dose of real-time information so we can build for our future and not for our past.
  • Local context - Always include local context and collaboration in the building, implementation and engagement with citizens.

We would love to hear from you about how Ushahidi can continue to partner with you, your organization or community to provide tools for processing data easily and, most importantly, collaboratively.

Daudi Were, programs director for Ushahidi, contributed to this post.

A longer version of this story can be found on Ushahidi's blog.

January 04 2012

11:03

2011: the UK hyper-local year in review

In this guest post, Damian Radcliffe highlights some topline developments in the hyper-local space during 2011. He also asks for your suggestions of great hyper-local content from 2011. His more detailed slides looking at the previous year are cross-posted at the bottom of this article.

2011 was a busy year across the hyper-local sphere, with a flurry of activity online as well as on more traditional platforms such as TV, radio and newspapers.

The Government’s plans for Local TV have developed considerably since the Shott Review just over a year ago. We now have a clearer indication of which areas will be first in line for these new services and how Ofcom might award the licences. What we don’t know is who will apply for these licences, or what their business models will be. But this should become clear in the second half of the year.

Whilst the Leveson Inquiry hasn’t directly been looking at local media, it has been a part of the debate. Claire Enders outlined some of the challenges facing the regional and local press in a presentation showing declining revenue, jobs and advertising over the past five years. Her research suggests that the impact of “the move to digital” has been greater at a local level than at the nationals.

Across the board, funding remains a challenge for many. But new models are emerging, with Daily Deals starting to form part of the revenue mix alongside money from foundations and franchising.

And on the content front, we saw Jeremy Hunt cite a number of hyper-local examples at the Oxford Media Convention, as well as record coverage for regional press and many hyper-local outlets as a result of the summer riots.

I’ve included more on all of these stories in my personal retrospective for the past year.

One area where I’d really welcome feedback is examples of hyper-local content you produced – or read – in 2011. I’m conscious that a lot of great material may not necessarily reach a wider audience, so do post your suggestions below and hopefully we can begin to redress that.


December 08 2011

18:30

Mapping the Story of Climate Change

For this week's climate meetings in Durban, the World Bank released a series of maps showing the predicted impact of climate change on the world between now and 2100.

The data is dismal. If climate change continues unmitigated as it has for the past century, temperatures around the world will increase 5 degrees Celsius (9 degrees Fahrenheit) by 2100 -- an increase equivalent to the difference between today's climate and that of the last ice age. This change won't affect the world equally: local changes will vary from almost none to more than 10 degrees Celsius, depending on scenario, location and season.
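That 5-degree-Celsius figure converts to 9 degrees Fahrenheit by simple scaling: a temperature difference, unlike an absolute temperature, carries no +32 offset. A quick sketch:

```python
def delta_c_to_f(delta_celsius):
    """Convert a temperature change (not an absolute temperature)
    from Celsius to Fahrenheit. A difference needs no +32 offset:
    each degree Celsius spans 9/5 of a degree Fahrenheit."""
    return delta_celsius * 9 / 5

# The projected 5 degree C rise corresponds to a 9 degree F rise.
print(delta_c_to_f(5))  # → 9.0
```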

All of these maps were designed using Development Seed's TileMill, an easy-to-use open-source map design tool that we've written about here before, and hosted on MapBox Hosting. TileMill is free to download and has loads of documentation to help people get started making maps. For design tips on map making, check out a blog post from Development Seed's AJ Ashton on the thinking behind the design of these maps.

Preparing for climate change

These maps tell the story of the anticipated impact of climate change, from the basics of where we'll see the biggest increase in temperature and fluctuation in precipitation levels to larger societal impacts on food security, countries' economies, and people's vulnerability to natural disasters. With these maps, the World Bank aims to not only show the urgency in preparing for climate changes, but also to target efforts to the countries and regions that will be most affected.

This map shows the expected worldwide temperature increases, assuming that global population continues to increase and regionally oriented economic growth is slower than in other scenarios.

Agriculture is expected to be one of the industries most affected, with knock-on effects for countries' economies -- all the more so for countries whose GDP (gross domestic product) is made up largely of agriculture-related business. For example, agriculture is 61.3 percent of Liberia's GDP and 47.68 percent of Ethiopia's, while it's just 1.24 percent of the U.S. GDP.

Low-lying coastal areas will likely be more vulnerable to increased flooding, with countries such as Bangladesh, Myanmar and India at highest risk due to the huge populations that live there.

More details on the maps are available in this blog post by Development Seed's Alex Barth.

The data powering the maps is all publicly available from the World Bank, as part of its larger open data push with data.worldbank.org. This and other related climate data is all housed in its Open Data Resources for Climate Change. The World Bank is encouraging people to use this data and is hosting an Apps for Climate challenge to promote and reward this use. Check out the details, and be sure to submit your app by March 16.
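The same World Bank data can also be queried programmatically through its public REST API. As a minimal sketch, here is one way to build a query URL against the v2 endpoint; the indicator code shown (NV.AGR.TOTL.ZS, agriculture's share of GDP) is an assumption here and should be checked against the API's indicator catalogue:

```python
def worldbank_url(country_iso, indicator, years="2000:2011", fmt="json"):
    """Build a query URL for the World Bank v2 data API."""
    return (
        f"http://api.worldbank.org/v2/country/{country_iso}"
        f"/indicator/{indicator}?date={years}&format={fmt}"
    )

# Agriculture, value added (% of GDP) for Liberia; fetch the URL with
# any HTTP client to retrieve the JSON series.
url = worldbank_url("LR", "NV.AGR.TOTL.ZS")
print(url)
```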

July 15 2011

07:19

When information is power, these are the questions we should be asking

Various commentators over the past year have made the observation that “Data is the new oil”. If that’s the case, journalists should be following the money. But they’re not.

Instead it’s falling to the likes of Tony Hirst (an Open University academic), Dan Herbert (an Oxford Brookes academic) and Chris Taggart (a developer who used to be a magazine publisher) to fill the scrutiny gap. All three have recently published pieces shining a light on the move towards transparency and open data, which anyone with an interest in information would be well advised to read.

Hirst wrote a particularly detailed post breaking down the results of a consultation about higher education data.

Herbert wrote about the publication of the first Whole of Government Accounts for the UK.

And Taggart made one of the best presentations I’ve seen on the relationship between information and democracy.

What all three highlight is how control of information still represents the exercise of power, and how shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.

Control, Cost, Confusion

Hirst, for example, identifies the potential for data about higher education to be monopolised by one organisation – UCAS, or HEFCE – at extra cost to universities, resulting in less detailed information for students and parents.

His translation of the outcomes of a HEFCE consultation brings to mind the situation that existed for years around Ordnance Survey data: taxpayers were paying for the information up to 8 times over, and the prohibitive cost of accessing that data ended up inspiring the Free Our Data campaign. As Hirst writes:

“The data burden is on the universities?! But the aggregation – where the value is locked up – is under the control of the centre? … So how much do we think the third party software vendors are going to claim for to make the changes to their systems? And hands up who thinks that those changes will also be antagonistic to developers who might be minded to open up the data via APIs. After all, if you can get data out of your commercially licensed enterprise software via a programmable API, there’s less requirement to stump up the cash to pay for maintenance and the implementation of “additional” features…”

Meanwhile Dan Herbert analyses another approach to data publication: the arrival of commercial-style accounting reports for the public sector. On the surface this all sounds transparent, but it may be just the opposite:

“There is absolutely no empiric evidence that shows that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information then there is no point in making the reports better. If there are no users more technically correct reports do nothing to improve the understanding of public finances. In effect all that better reports do is legitimise the role of professional accountants in the accountability process.”

Like Hirst, he argues that the raw data – and the ability to interrogate that – should instead be made available because (quoting Anthony Hopwood): “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.”

This is a characteristic of transparency initiatives that journalists need to be sharper about. The Manchester Evening News discovered this when it wanted to look at spending cuts. What it found was a dataset that had been ‘spun’ to make it harder to see the story hidden within, and to answer its question it first had to unspin the data – or, in data journalism parlance, clean it. Likewise, having granular data – ideally from more than one source – allows us to better judge the quality of the information itself.
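‘Unspinning’ a release of this kind usually comes down to a few mechanical steps: drop pre-aggregated rows, normalise inconsistent labels, and parse numbers out of formatted strings. A minimal sketch, using an entirely hypothetical spending table (the department names and figures are invented for illustration):

```python
# Hypothetical "spun" spending release: an aggregate row mixed in with
# line items, inconsistent department labels, amounts stored as text.
raw = [
    {"department": "Highways",  "amount": "1,200"},
    {"department": "HIGHWAYS ", "amount": "800"},
    {"department": "Total",     "amount": "2,500"},
    {"department": "Leisure",   "amount": "500"},
]

totals = {}
for row in raw:
    dept = row["department"].strip().title()      # normalise labels
    if dept == "Total":                           # drop the aggregate row
        continue
    amount = int(row["amount"].replace(",", ""))  # "1,200" -> 1200
    totals[dept] = totals.get(dept, 0) + amount

print(totals)  # → {'Highways': 2000, 'Leisure': 500}
```

Only once the aggregate row is gone and the labels agree does a simple group-and-sum reveal the line-level picture the release obscured.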

Chris Taggart meanwhile looks at the big picture: friction, he says, underpins society as we know it. Businesses such as real estate are based on it; privacy exists because of it; and democracies depend on it. As friction is removed through access to information, we get problems such as “jurisdiction failure” (corporate lawyers having “hacked” local laws to international advantage), but also issues around the democratic accountability of ad hoc communities and how we deal with different conceptions of privacy across borders.

Questions to ask of ‘transparency’

The point isn’t about the answers to the questions that Taggart, Herbert and Hirst raise – it’s the questions themselves, and the fact that journalists are, too often, not asking them when we are presented with yet another ‘transparency initiative‘.

If data is the new oil those three posts and a presentation provide a useful introduction to following the money.

(By the way, for a great example of a journalist asking all the right questions of one such initiative, see The Telegraph’s Conrad Quilty-Harper on the launch of Police.uk.)

Data is not just some opaque term, something for geeks: it’s information, the raw material we deal in as journalists. Knowledge. Power. The site of a struggle for control. And considering it’s a site that journalists have always fought over, it’s surprisingly placid as we enter one of the most important ages in the history of information control.

As Heather Brooke writes today of the hacking scandal:

“Journalism in Britain is a patronage system – just like politics. It is rare to get good, timely information through merit (eg by trawling through public records); instead it’s about knowing the right people, exchanging favours. In America reporters are not allowed to accept any hospitality. In Britain, taking people out to lunch is de rigueur. It’s where information is traded. But in this setting, information comes at a price.

“This is why there is collusion between the elites of the police, politicians and the press. It is a cartel of information. The press only get information by playing the game. There is a reason none of the main political reporters investigated MPs’ expenses – because to do so would have meant falling out with those who control access to important civic information. The press – like the public – have little statutory right to information with no strings attached. Inside parliament the lobby system is an exercise in client journalism that serves primarily the interests of the powerful. Freedom of information laws bust open the cartel.”

But laws come with loopholes and exemptions, red tape and ignorance. And they need to be fought over.

One bill to extend the FOI law to “remove provisions permitting Ministers to overrule decisions of the Information Commissioner and Information Tribunal; to limit the time allowed for public authorities to respond to requests involving consideration of the public interest; to amend the definition of public authorities” and more, for example, was recently put on indefinite hold. How many publishers and journalists are lobbying to un-pause this?

So let’s simplify things. And in doing so, there’s no better place to start than David Eaves’ 3 laws of government data.

This is summed up as the need to be able to “Find, Play and Share” information. For the purposes of journalism, however, I’ll rephrase them as 3 questions to ask of any transparency initiative:

  1. If information is to be published in a database behind a form, then it is hidden in plain sight. It cannot easily be found by a journalist, and only simple questions will be answered.
  2. If information is to be published in PDFs or JPEGs, or some format that requires proprietary software to view, then it cannot easily be questioned by a journalist.
  3. If you have to pass a test to use the information, then obstacles will be placed between the journalist and that information.

The next time an organisation claims that they are opening up their information, tick those questions off. (If you want more, see Gurstein’s list of 7 elements that are needed to make effective use of open data).

At the moment, the history of information is being written without journalists.


May 24 2011

14:59

Data, Data Everywhere — But How Does It Relate to You And Your Work?

By Keisha Taylor. This was originally posted on the GuideStar International blog

As Internet and mobile access grows, more data is being made open online. It is being used and analyzed by the media, the private sector, governments, and civil society organizations to inform their decisions. Open data, real-time data, and linked data are being discussed in many forums, as are the ways in which governments, civil society organizations, and intergovernmental organizations (IGOs) can work with the private sector to use data analysis for the public benefit. Data-related events are highlighting the value of data and addressing the technical, design, political, reliability, validity, and inclusion issues that arise with its disclosure.


An interactive example of data visualisation - OECD Better Life Index © OECD (2011) www.oecdbetterlifeindex.org

Hal Varian, Google’s Chief Economist, says “The ability to take data — to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it — that’s going to be a hugely important skill in the next decades, not only at the professional level but even at the educational level for elementary school kids, for high school kids, for college kids. Because now we really do have essentially free and ubiquitous data. So the complementary scarce factor is the ability to understand that data and extract value from it.” This post highlights some of the organizations involved in this type of work and points to some of the forums discussing this topic.

The European Public Sector Information Platform has a great list of open data events. And for those of you interested in open government data events, have a look at the events calendar that is being updated by the Open Knowledge Foundation. A London-based nonprofit, Open Knowledge Foundation is at the forefront of promoting open knowledge to help citizens and society.

A few of the many notable events are:

These kinds of events, however, still tend to be dominated by technology geeks, statisticians, and government officials, though civil society organizations and organizations working in cultural fields are also exploring the potential of open data. For civil society organizations on the sidelines of this data movement, the media's everyday use of data for reporting provides a practical demonstration of just how useful it can be. (I would recommend having a look at some really cool videos featured by Stanford on Journalism in the Age of Data.) Many Eyes provides not only visualizations but also a forum for anyone to upload data and create their own, and Flowing Data illustrates how designers, programmers, and statisticians are making good use of data. A few practical examples of the use of data for reporting are listed below.

These are just a few of what are arguably limitless examples of how data is being used to help us understand our world. The National Council for Voluntary Organisations (NCVO) in London recently hosted the workshop "Civil Society 2.0: how open data will change your organisation and what you can do about it," and the presentations have been made available online. If indeed "Data is the New Oil," civil society organizations (CSOs) should be learning how to generate, find, and use data to help inform and improve their work. The appropriate use of data can help all CSOs to advance the overall well-being of individuals and their local communities.
