August 16 2012

08:27

Hyperlocal Voices: Matt Brown, Londonist

The fifth in our new series of Hyperlocal Voices explores the work done by the team behind the Londonist. Despite having a large geographic footprint – Londonist covers the whole of Greater London – the site is full of ultra-local content, as well as featuring stories and themes which span the whole of the capital.

Run by two members of staff and a raft of volunteers, Editor Matt Brown gave Damian Radcliffe an insight into the breadth and depth of the site.

1. Who were the people behind the blog?

Everyone in London! We’re a very open site, involving our readers in the creation of many articles, especially the imagery. But more prosaically, we have an editorial team of 5 or 6 people, plus another 20 or so regular contributors. I act as the main content editor for the site.

We’re more than a website, though, with a weekly podcast (Londonist Out Loud, ably presented and produced by N Quentin Woolf), a separate Facebook presence, a daily e-newsletter, 80,000 Twitter followers, the largest Foursquare following in London (I think), a Flickr pool with 200,000 images, several e-books, occasional exhibitions and live events every few weeks. The website is just one facet of what we do.

2. What made you decide to set up the blog?

I actually inherited it from someone else, but it was originally set up as a London equivalent of certain sites in the US like Gothamist and Chicagoist, which were riding the early blogging wave, providing news and event tips for citizens. There was nothing quite like it in London, so my predecessor wanted to jump into the gap and have some fun.

3. When did you set up the blog and how did you go about it?

It dates back to 2004, when it was originally called the Big Smoker. Before too long, it joined the Gothamist network, changing its name to Londonist.

We now operate independently of that network, but retain the name. It was originally set up on the Movable Type publishing platform, but we moved to WordPress a couple of years ago.

4. What other blogs, bloggers or websites influenced you?

Obviously, the Gothamist sites originally. But we’re now more influenced by the wonderful ecosystem of London blogs out there, all offering their own take on life in the capital.

The best include Diamond Geezer (an incisive and often acerbic look at London), Ian Visits (a mix of unusual site visits and geeky observation) and Spitalfields Life (a daily interview with a local character). These are just three of the dozens of excellent London sites in my RSS reader.

5. How did – and do – you see yourself in relation to a traditional news operation?

Complementary rather than competitors. We cover three or four news stories a day, sometimes journalistically, but our forte in this area is more in commentary, features and reader involvement around the news.

And news is just a small part of what we do — most of the site is event recommendation, unusual historical insights, street art, food and drink, theatre reviews and the like. As an example of our diversity, a few months back we ran a 3,000-word essay on the construction of Hammersmith flyover by an engineering PhD candidate, and the very next item was about a beauty pageant for chubby people in Vauxhall.

6. What have been the key moments in the blog’s development editorially?

I think most of these would be technologically driven. For example, when Google mapping became possible, our free wifi hotspots and V2 rocket maps greatly increased site traffic.

Once Twitter reached critical mass we were able to reach out to tens of thousands of people, both for sourcing information for articles and pushing our finished content.

The other big thing was turning the site into a business a couple of years ago, so we were able to bring a little bit of money in to reinvest in the site. The extra editorial time the money pays for means our output is now bigger and better.

7. What sort of traffic do you get and how has that changed over time?

We’re now seeing about 1.4 million page views a month. It’s pretty much doubling year on year.

8. What is / has been your biggest challenge to date?

Transforming from an amateur site into a business.

We started taking different types of advertising, including advertorial content, and had to make sure we didn’t alienate our readers. It was a tricky tightrope, but I’d hope we’ve done a fairly good job of selecting paid-for content only if it’s of interest to a meaningful portion of our readers, and then making sure we’re open and clear about what is sponsored content and what is editorially driven.

9. What story, feature or series are you most proud of? 

I’m rather enjoying our A-Z pub crawl at the moment, and not just because of the booze.

Basically, we pick an area of town each month beginning with the next letter of the alphabet (so, Angel, Brixton, City, Dalston, etc.). We then ask our readers to nominate their favourite pubs and bars in the area, via Twitter, Facebook or comments.

We then build a Google map of all the suggestions and arrange a pub crawl around the top 4.

Everyone’s a winner because (a) we get a Google-friendly article called, for example, ‘What’s the best pub in Farringdon?’, with a map of all the suggestions; (b) we get the chance to use our strong social media channels to involve a large number of people – hundreds of votes every time; (c) we get the chance to meet some of our readers, who are invited along on the pub crawl, and who get a Londonist booze badge as a memento; and (d) everyone gets a really fun night out round some very good pubs.

The next part (G for Greenwich) will be announced in early September.

10. What are your plans for the future?

We’re playing around with ebooks at the moment, as a way to sustain the business directly through content. We’ve published a book of London pub crawls (spotting a theme here?), and a history of the London Olympics by noted London author David Long. Our next ebook will be a collection of quiz questions about the capital, drawn from the numerous pub quizzes we’ve run over the years.

Basically, we’re looking to be the best organisation for finding out about London in any and every medium we can get our hands on.

August 01 2012

13:46

Can Google Maps + Fusion Tables Beat OpenBlock?

WRAL.com, North Carolina's most widely read online news site, recently published a tool that allows you to search concealed weapons permits down to the street level. It didn't use OpenBlock to do so. Why?

Or, if you're like many journalistically and technically savvy people I've spoken to over the last few months, you could ask: why would they? There's plenty of evidence out there to suggest the OpenBlock application is essentially a great experiment and proof of concept, but a dud as a useful tool for journalists. Many of the public records portions of Everyblock.com -- OpenBlock's commercial iteration -- are months if not years out of date. OpenBlock can't be found anywhere on the public sites of the two news organizations in which the Knight Foundation invested $223,625. There are only three sites running the open-source code -- two of those are at universities, and only one was created without funding from the Knight Foundation.

And, you, Thornburg. You don't have a site up and running yet, either.

All excellent points, dear friends. OpenBlock has its problems -- it doesn't work well in multi-city installations, some search functions don't work as you'd expect, there's no easy way to correct incorrect geocoding or even identify possible failures, among other obstacles that I'll describe in greater detail in a later blog post. But the alternatives also have shortcomings. And deciding whether to use OpenBlock depends on which shortcomings will be more tolerable to your journalists, advertisers and readers.

SHOULD I USE OPENBLOCK?

If you want to publish news from multiple cities or from unincorporated areas, or if you serve a rural community, I'd hold off for now. If you visit our public repositories on GitHub you can see the good work the developers at Caktus have been doing to remove these limitations, and I'm proud to say that we have a private staging site up and running for our initial partner site. But until we make the set-up process easier, you're going to have to hire a Django developer (at anywhere from $48,000 a year to $150 an hour) to customize the site with your logo, your geographic data, and your news items.

The other limitation to OpenBlock right now is that it isn't going to be cheap to maintain once you do get it up and running. The next priority for me is to make the application scale better to multiple installations and therefore lower the maintenance costs. Within the small OpenBlock community, there's debate about how large a server it requires. The very good developers at OpenPlans, who did a lot of heavy lifting on the code between the time it was open sourced and now, say it should run nicely on a "micro" instance of Amazon's EC2 cloud hosting service -- about $180 a year.

But we and Tim Shedor, the University of Kansas student who built LarryvilleKU, find OpenBlock a little too memory intensive for the "micro" instance. We're on an Amazon Web Services "small" instance, and LarryvilleKU is on a similar sized virtual server at MediaTemple. That costs more like $720 a year. And if you add a staging server to make sure your code changes break in private instead of public, you're looking at hosting costs of nearly $1,500 a year.

And that's before your scrapers start breaking. Depending on how conservative you are, you'll want to set aside a budget for fixing each scraper somewhere between one and three times a year. Each fix might be an hour or maybe up to 12 hours of work for a Python programmer (or the good folks at ScraperWiki). If you have three data sources -- arrests, restaurant inspections and home sales, let's say -- then you may get away with a $300 annual scraper maintenance cost, or it may set you back as much as $15,000 a year.

I've got some ideas on how to reduce those scraper costs, too, but more on that later as well.

Of course, if you have someone on staff who does Python programming, who's done some work with public records and published a few Django sites, and who has time to spare, then your costs will go down significantly.

But just in case you don't have such a person on staff or aren't ready to make this kind of investment, what are your alternatives?

GOOGLE MAPS AND FUSION TABLES

Using a Google Map on your news website is a little like playing the saxophone. It's probably the easiest instrument to learn how to play poorly, but pretty difficult to make it really sing. Anyone can create a Google Map of homicides or parking garages or whatever, but it's going to be a static map of only one schema, and it won't be searchable or sortable.

On the other hand, you can also use Google Maps and Fusion Tables to build some really amazing applications, like the ones you might see in The Guardian or on The Texas Tribune or WNYC or The Bay Citizen. You can do all this, but it takes some coding effort and probably a bit more regular care and feeding to keep the site up to date.

I've taken a look at how you might use Google's data tools to replicate something like OpenBlock, although I've not actually done it. If you want to give it a whirl and report back, here's my recipe.

A RECIPE FOR REPLICATING OPENBLOCK

Step 1. Create one Google Docs spreadsheet for each schema, up to a maximum of four spreadsheets. And create one Google Fusion Table for each schema, up to a maximum of four tables.

Step 2. If the data you want is in a CSV file that's been published to the web, you can populate the spreadsheet with a Google Docs function called ImportData. This function -- as well as its sister functions ImportHTML and ImportXML -- will only update 50 records at a time. And I believe this function will pull in new data from the CSV about once an hour. I don't know whether it will append the new rows or overwrite them, or what it would do if only a few of the fields in a record change. If you're really lucky, the data would be in an RSS feed and you could use the ImportFeed function to get past this 50-record limit.

Of course, in the real world almost none of your data will be in these formats. None of mine are. And in that case, you'd have to either re-enter the data into Google Docs by hand or use something like ScraperWiki to scrape a datasource and present it as a CSV or a feed.
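If you go the scraping route, the job is usually small. Here's a minimal sketch of the kind of scraper I mean, in Python; the URL and the column layout of the hypothetical arrests page are invented for illustration, and a real one would need to match whatever your agency actually publishes:

    # Fetch a (hypothetical) HTML page of arrest records and re-publish it
    # as CSV so Google Docs' ImportData function can read it.
    import csv
    import requests
    from bs4 import BeautifulSoup

    SOURCE_URL = "http://example.gov/arrests"  # placeholder data source

    def scrape_to_csv(out_path="arrests.csv"):
        html = requests.get(SOURCE_URL, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        table = soup.find("table")  # assumes records live in the first table
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["date", "name", "charge", "address"])
            for row in table.find_all("tr")[1:]:  # skip the header row
                cells = [td.get_text(strip=True) for td in row.find_all("td")]
                if len(cells) >= 4:
                    writer.writerow(cells[:4])

    if __name__ == "__main__":
        scrape_to_csv()

Publish the resulting CSV somewhere web-accessible and ImportData can pick it up from there.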

Step 3. Use a modification of this script to automatically pull the data -- including updates -- from the Google Docs spreadsheet into the corresponding Fusion table you created for that schema.

Step 4. Find the U.S. Census or local county shapefiles for any geographies you want -- such as ZIP codes or cities or school districts -- and convert them to KML.
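For the conversion itself, one low-effort option (an assumption on my part, not the only way) is to shell out to GDAL's ogr2ogr utility, which reads shapefiles and writes KML. The file names below are placeholders for whatever Census or county shapefile you download:

    # Convert a shapefile to KML by calling GDAL's ogr2ogr tool,
    # which must be installed separately.
    import subprocess

    def shapefile_to_kml(shp_path="zipcodes.shp", kml_path="zipcodes.kml"):
        subprocess.run(
            ["ogr2ogr", "-f", "KML", kml_path, shp_path],
            check=True,  # raise an error if the conversion fails
        )

    shapefile_to_kml()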

Step 5. Upload that geographic information into another Fusion Table.

Step 6. Merge the Fusion table from Step 3 with the Fusion table from Step 5.

Step 7. This is really a thousand little steps, each depending on which of OpenBlock's user interface features you'd like to replicate. And, really, it should be preceded by step 6a -- learn JavaScript, SQL, CSS and HTML. Once you've done that, you can build tools so that users can:

And there's even at least one prototype that uses server-side scripting and Google's APIs to build a relatively full-featured GIS-type web application: https://github.com/odi86/GFTPrototype

After all that, you will have some of the features of OpenBlock, but not others.

Some key OpenBlock features you can replicate with Google Maps and Fusion Tables:

  • Filter by date, street, city, ZIP code or any other field you choose. Fusion Tables is actually a much better interface for searching and filtering -- or doing any kind of reporting work -- than OpenBlock.
  • Show up to four different kinds of news items on one map (five if you don't include a geography layer).
  • Conduct proximity searches. "Show me crimes reported within 1 mile of a specific address." (A sketch of such a query follows below.)
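For the proximity-search case, here is roughly what such a query might look like, assuming the Fusion Tables SQL dialect and public query endpoint as they existed at the time; the table ID, column name and coordinates are placeholders:

    # Ask Fusion Tables for rows within one mile of a geocoded address,
    # using its spatial filter (ST_INTERSECTS + CIRCLE).
    import requests

    TABLE_ID = "1234567"          # hypothetical Fusion Table ID
    LAT, LNG = 35.9132, -79.0558  # the address you geocoded
    RADIUS_M = 1609               # one mile, in meters

    sql = (f"SELECT * FROM {TABLE_ID} WHERE ST_INTERSECTS(location, "
           f"CIRCLE(LATLNG({LAT}, {LNG}), {RADIUS_M}))")

    resp = requests.get("https://www.google.com/fusiontables/api/query",
                        params={"sql": sql})
    print(resp.text)  # results come back as CSV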

WHAT YOU CAN'T REPLICATE

The OpenBlock features you can't replicate with Google:

  • Use a data source that is anything other than an RSS feed, HTML table, CSV or TSV. That's right, no XLS files unless you manually import them.
  • Use a data source for which you need to combine two CSV files before import. This is the case with our property transactions and restaurant inspections.
  • Update more than 50 records at a time. Definitely a problem for police reports in all but the smallest towns.
  • Use a data source that doesn't store the entire address in a single field. That's a problem for all the records with which we're working.
  • Map more than 100,000 rows in any one Fusion table. In rural counties, this probably wouldn't be a concern. In Columbus County, N.C., there are only 45,000 parcels of land and 9,000 incidents and arrests a year.
  • Use data sources that are larger than 20MB or 400,000 cells. I don't anticipate this would be a problem for any dataset in any county where we're working.
  • Plot more than 2,500 records a day on a map. I don't anticipate hitting this limit either, especially after the initial upload of data.
  • Parse text for an address -- so you can't map news articles, for example.
  • Filter to the block level. If Main Street runs for miles through several neighborhoods, you're not going to be able to narrow your search to anything relevant.
  • Create a custom RSS feed, or email alert.

THE SEO ADVANTAGE

And there's one final feature of OpenBlock that you can't replicate using Google tools without investing a good deal of manual, rote set-up work -- taking advantage of SEO or social media sharing by having a unique URL for a particular geography or news item type. Ideally, if someone searches for "home sales in 27514" I want them to come to my site. And if someone wants to post to Facebook a link to a particular restaurant that was scolded for having an employee with a finger-licking tendency (true story), I'd want them to be able to link directly to that specific inspection incident without forcing their friends to hunt through a bunch of irrelevant perfect scores of 100.

To replicate OpenBlock's URL structure using Google Maps and Fusion Tables, you'd have to create a unique web page and a unique Google map for each city and ZIP code. The geography pages would display a polygon of the selected geography, whether it's a ZIP code or city or anything else, and all of the news items for that geography (up to four schemas, such as arrests, incidents, property sales, and restaurant inspections). That's 55 map pages.

Then you'd have to create a map and a page for each news item type. That's four pages, four Fusion tables, and four Google Docs spreadsheets.

Whew. I'm going to stick with our work on improving the flexibility and scalability of OpenBlock. But it's still worth looking at Google Maps and Fusion Tables for some small and static data use cases. Other tools such as Socrata's Open Data, Caspio and Tableau Public are also worth your time as you begin to think about publishing public data. Each of those has some maintenance costs and its own strengths and weaknesses, but the real obstacle to using all of these tools is public data that isn't in any usable format. We're looking hard at solving that problem with a combination of scraping and crowdsourcing, and I'll report what we've found in an upcoming post.

Ryan Thornburg researches and teaches online news writing, editing, producing and reporting as an assistant professor in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. He has helped news organizations on four continents develop digital editorial products and use new media to hold powerful people accountable, shine light in dark places and explain a complex world. Previously, Thornburg was managing editor of USNews.com, managing editor for Congressional Quarterly's website and national/international editor for washingtonpost.com. He has a master's degree from George Washington University's Graduate School of Political Management and a bachelor's from the University of North Carolina at Chapel Hill.

January 11 2012

15:20

NextDrop's Dashboards Look Great, But Mobile Content Would Be Better

One year ago, when we were just a team of graduate students with a big idea, our teammate Thejo Kote came to Hubli, India and demoed a web-based dashboard to the executive engineer and commissioner here. The dashboard uses Google Maps to show the status of valves and other system components in real time, using information provided via voice or SMS.

Building that dashboard marked a turning point for NextDrop, which informs residents in India about the availability of piped water in order to help them lead more productive, less stressful lives. It was our first real "pivot," as we moved decisively away from crowdsourcing information from residents, which wasn't working. It was also the way to make progress with the utility, partner with them, and ultimately, win competitions that would enable us to get our company off the ground.

Implementing that dashboard is part of the larger vision of how NextDrop can ultimately revolutionize information flow in water utilities. But based on what we've learned so far, it's not clear that it's the low-hanging fruit in terms of how to make the lives of engineers easier today.

In Hubli, utility engineers have the computers and Internet access you need to follow the day's supply cycle through a live dashboard, but they're not quite there yet in terms of integrating that technology into their day-to-day routines.

But there's a different technology they are using -- everyone in the utility has a mobile phone, and they are incredibly adept at handling calls from hundreds of people each day, as they do things as varied as managing valvemen, dealing with customer complaints, coordinating tanker deliveries, overseeing pipe damage repairs, and interfacing with other engineers.

A Day in the Life of an Engineer

Last week, a team member and I went to the field with Mr. Santosh, one of the two section officers in Hubli's North Zone. While he was showing us the NR Betta Tank, we got to see first-hand the volume of calls he deals with.

Like all the engineers in the utility, Santosh's number is public, so even customers in his area can call him directly with complaints. Here are some notes from my interview with him.

I asked Santosh how many calls he gets, and this was his response:

  • 30 to 40 calls per day from NR Betta Tank, the major reservoir tank he is responsible for, where he checks on the reservoir level and chlorine levels.
  • 15 to 20 calls per day from his valvemen updating him on where they provided water.
  • 20 calls per day from the public inquiring about new connections.
  • 40 calls per day about tanker truck deliveries.

While we're still learning a lot about the utility, we think the products that will make the lives of utility engineers easier today will have the following qualities:

  • Reduce the volume of calls the engineers get.
  • Provide them information through the mobile phone, the medium they already use.
  • Generate clear electronic records that can be studied over time.

With this in mind, we're launching a daily SMS that will inform utility engineers whether water was delivered to all the areas they're responsible for, and notify them of any exceptions to the set schedule. Beyond that, we're looking at opportunities to help engineers track the status of pipe damage repairs and tanker deliveries.
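As a toy illustration of how such a daily digest might be assembled (the area names and delivery log below are invented stand-ins for our real records):

    # Compose the body of the daily exception SMS from a schedule and a
    # delivery log. Area names here are hypothetical examples.
    scheduled = {"Gokul Road", "Vidyanagar", "Old Hubli", "Keshwapur"}
    delivered = {"Gokul Road", "Old Hubli"}  # areas valvemen reported today

    def daily_digest(scheduled, delivered):
        missed = sorted(scheduled - delivered)
        if not missed:
            return "All scheduled areas received water today."
        return "Water NOT delivered to: " + ", ".join(missed)

    # This string would then be handed to an SMS gateway for each engineer.
    print(daily_digest(scheduled, delivered))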

More news on new utility products soon to follow!

A version of this post first appeared on the NextDrop blog.

July 11 2011

16:02

How TileMill Improved Ushahidi Maps to Protect Children in Africa

In May I worked with Plan Benin to improve its Violence Against Children (VAC) reporting system. The system uses FrontlineSMS and Ushahidi to collect and visualize reports of violence against children. Ushahidi develops open-source software for information collection, visualization and interactive mapping. While in Benin, I was frustrated by the lack of local data available through Google Maps, Yahoo, and even OpenStreetMap -- the three mapping applications Ushahidi allows administrators to use without customization.

While these mapping services are great for places rich in geographic data, many places -- like Benin and other countries in the developing world -- are poorly represented by the major mapping services. Making matters worse is the fact that even when good data is available, slow and unreliable Internet access turns geolocating incidents and browsing the map into a frustrating, time-consuming challenge for staff and site visitors in-country.

In an effort to create a custom map with more local data, I tested out TileMill, Development Seed's open-source map design studio, with successful results.

An area of northwest Benin shown with Google Maps (left) and a custom map built with TileMill (right). Note the number of towns and villages that appear in the map at right.

With little hands-on experience with map design or GIS (geographic information systems), I was happy to find TileMill's Carto-based code intuitive and easy to use.

Because of the lack of data on Benin available through the major mapping services, I thought it would be interesting to visualize the VAC Benin data on a custom map using geographic data obtained by Plan Benin through CENATEL, the National Centre of Remote Sensing and Forest Cover Observation in Benin. I exported reports of violence from Ushahidi into a CSV file using Ushahidi's built-in export functionality. From there, I used Quantum GIS -- an open-source GIS tool -- to convert the data into GeoJSON, an open standard for data interchange that works very well with TileMill.
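For those without a GIS tool handy, the same CSV-to-GeoJSON conversion can be sketched in a few lines of Python; the column names here are guesses at the Ushahidi export format, so adjust them to match your own file:

    # Turn an Ushahidi CSV export into a GeoJSON FeatureCollection.
    import csv
    import json

    def csv_to_geojson(csv_path="ushahidi_export.csv",
                       out_path="reports.geojson"):
        features = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                features.append({
                    "type": "Feature",
                    "geometry": {
                        "type": "Point",
                        # GeoJSON coordinates are [longitude, latitude]
                        "coordinates": [float(row["LON"]), float(row["LAT"])],
                    },
                    "properties": {"description": row["DESCRIPTION"]},
                })
        with open(out_path, "w") as f:
            json.dump({"type": "FeatureCollection",
                       "features": features}, f)

    csv_to_geojson()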

I then used TileMill to create a map that includes only the data relevant to Plan Benin's activities on this particular project, which helps users focus on the information they need. The map includes geographic data for Atacora and Couffo, the two "Program Units" where Plan Benin operates. (These are highlighted in light blue on the map.)

I also included labels for the important cities in both Program Units and, if you zoom in several levels, village names in Atacora. The red dots indicate reports of violence, and if you mouse over or click on a dot, you can see a summary of the incident. The reports were geolocated by hand using information sent via text message. The map also incorporates MapBox's open-source World Bright base-layer map, adding country borders, custom labels, population centers (in light yellow/brown tones), and other information to the map.

The Tip of the Iceberg

This is really the tip of the iceberg in terms of what TileMill can do. It would also be possible to add as many cities and villages as there are in the dataset, include multimedia-rich interactivity, use a choropleth scheme to indicate hotspots of violence, cluster reports, and so on.

With just a few design choices, this custom map dramatically improves the experience of interacting with data collected through Ushahidi. Highlighting the Program Units draws the eye to the important areas; using deep datasets and custom map labels solves the problem of missing local data; and the built-in interactivity means that visitors don't need to browse to multiple pages (a killer in low-bandwidth environments) to view information on individual reports.

Compositing, which was just rolled out on TileStream Hosting, helps the map load quickly, even in low-bandwidth environments (the maps are now faster than Google Maps), and this map can also be used offline via either the MapBox Appliance or the MapBox iPad app. Finally, TileStream Hosting makes it easy to host the map and generates embed code so the map can be widely shared.

Take a look at the map below and feel free to click over to the VAC Benin Ushahidi site to see the difference for yourself.

VAC Benin data collected with Ushahidi and visualized with TileMill:

Paul Goodman is a master's student at the UC-Berkeley School of Information and is spending the summer working with Development Seed.

April 22 2011

12:49

How to Design Fast, Interactive Maps Without Flash

Until recently if you wanted to create a fast interactive map to use on your website you had two main options - design it in Flash, or use Google. With the prevalence of mobile devices, for many users Flash isn't an option, leaving Google and a few competitors (like Bing). But we are developing open source technologies in this space that provide viable alternatives for serving fast interactive maps online - ones that often give users more control over map design and the data displayed on it.

TileMill, our open source map design studio, now provides interactivity in the latest head version on GitHub. Once you design a map with TileMill, you can enable certain data in the shapefile to be interactive.

Map interactivity in the latest version of TileMill

When you export a map to MBTiles, a file format that makes it easy to manage and share map tiles (and which any map made in TileMill can easily be exported to), all the interaction is stored within the MBTiles file. This allows us to host interactive maps that are completely custom designed - including the look and feel and the data points - and that are as fast as Google Maps.
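Part of what makes MBTiles so portable is that it is just an SQLite database with a tiles table (the interaction data lives in companion grid tables, as I understand the spec). A minimal sketch of pulling a single tile out, with an invented file name:

    # Read one tile image out of an MBTiles file using only the
    # standard library's sqlite3 module.
    import sqlite3

    def read_tile(mbtiles_path, zoom, col, row):
        # Note: MBTiles stores rows in TMS order, i.e. flipped on the
        # y axis relative to the Google/OSM tile convention.
        conn = sqlite3.connect(mbtiles_path)
        cur = conn.execute(
            "SELECT tile_data FROM tiles "
            "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (zoom, col, row),
        )
        result = cur.fetchone()
        conn.close()
        return result[0] if result else None  # PNG bytes, or None

    png = read_tile("interactive_map.mbtiles", 3, 2, 5)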

An example of an interactive map using TileMill is the map in NPR's I Heart NPR Facebook App, an app that asks users to choose and map their favorite member station.

NPR Using TileMill

Yesterday, Tom MacWright gave a talk at the Where 2.0 Conference, a leading annual geo conference, about designing fast maps and other emerging open source interactive mapping technologies, specifically comparing them to Google. If you're interested in learning more and weren't at the conference, check out his slides, which are posted on our blog.

March 24 2011

15:43

Using Technology to Aid Disaster Relief for Japan and Beyond

The March 11 earthquake in Japan triggered a flurry of concern in the Media Lab community at MIT. The natural desire to help was amplified by the fact that the disaster had hit many of our friends close to home in a very literal sense. Most messages suggested donations to support relief organizations -- a worthy cause indeed -- but there was also a more unique reaction: A call for relief technology.

It turns out that the use of digital tools in crisis situations is a concept with rich communities and plenty of solid examples. Within the Media Lab there are a handful of projects designed to assist in crisis situations and outside of the Lab there are hundreds more. Because I just started exploring the area, I want to share my initial observations along with a quick description of the projects I'm most familiar with.

Framing the Challenge

I always thought that social systems were complicated, but I will never complain again. The issues surrounding a crisis tool are almost unreal. Take a minute to think of the types of challenges that a relief technology project needs to consider up front, before even targeting a particular problem. Here are a few examples to get your brain running:

  • Limited technology access -- The primary stakeholders of a relief technology are those living in a disaster area. Maybe the area is a developing country with no digital infrastructure or a more developed country whose digital infrastructure just got annihilated. Maybe people have the Internet, maybe they just have cell phones, or maybe they really have nothing on the ground. Nothing is a given here.
  • Shifting impact windows -- What are the phases of disaster relief, how long do they last, and what changes between them? I like the general structure presented in this document [PDF]. It starts with an immediate response, which focuses on things like saving lives, cleaning up, and tending to basic human needs. This is followed by mid-term planning, which involves getting the basics back such as temporary housing and lifeline utilities. Finally the long term reconstruction phase begins. Each of these phases has drastically different needs, requirements, and timelines.
  • Global-scale requirements -- National disaster relief is a problem that even government-scale organizational structures struggle to deal with. What are the core needs and how can they be met? Who is being impacted? Who is going to participate and how? What are the implications of your solution? What is already being done and how can you fit in effectively? How many different cultures are going to be using your tool? What languages are involved? These are all vital questions which have to be thought through.
  • Lives are at stake -- Behind this entire process is an ultimate fact: These technologies are dealing with matters of life and death. If an organization relies on a tool for some portion of its operations and the tool fails, there could be very real and serious consequences.

What exactly do these issues mean from a system design standpoint? How do these concerns end up shaping a project? Hopefully some examples can help illustrate that.

Example 1: Konbit

Greg Elliott and Aaron Zinman, two students at the MIT Media Lab, noticed a major problem with the way reconstruction efforts were being approached after the 2010 Haiti earthquake. The issue was simple: Haiti lacked the information infrastructure needed to effectively identify local skill sets at scale and hire Haitians to assist in the rebuilding tasks.

For instance, instead of hiring a local plumber or electrician, someone might be flown in from the United States to get the job done. Plumbers, bricklayers, drivers, nurses, and translators all live within the crisis areas, but without a way for foreign NGOs to easily discover them and their skill sets they simply aren't hired. To make the problem more challenging, access to the Internet in Haiti is uncommon, meaning a purely web-based solution would offer no help at all. Additionally, 50% of the country is illiterate, so even SMS-based solutions are not appropriate.

They created Konbit to address this problem. It provides a phone-based interface that allows Haitians to register their skills and life experiences via an automated interview process. The interviews are then translated and categorized, resulting in an online searchable directory that employers and NGOs can use to discover local workers. They have more than 3,000 workers ready to be hired right now and are looking for NGOs and employers who can use their database.

Example 2: Ushahidi

While Konbit focuses on the rebuilding phase of a disaster, Ushahidi, a 2008 Knight News Challenge winner, has proven how powerful an information-mapping platform can be in the immediate response to a crisis. Within days of the tsunami in Japan, an instance of the platform was set up to track reports and needs from the ground. This particular map (see above) has an aggregation of reports with labels such as "Wanted!" "Disaster Area," and "Available Service."

As those of you who are familiar with the Ushahidi platform already know, it is brilliant because the open tools are general enough to be easily and quickly adapted for use in new situations. Information transfer based on geography is going to be needed in any crisis situation and many non-crisis situations as well. This helps separate the technology from the context, which means that at the very least a general information flow can be set up almost immediately after disaster strikes.

Example 3: Grassroots Mapping

One of the projects that has come out of the Center for Future Civic Media is a set of tools and techniques for community-driven maps called Grassroots Mapping. The tools allow individuals to create high resolution maps using what boils down to a kite and a camera. Unlike Konbit and Ushahidi, this project is much more focused on the documentation of geographic change.

The project had immediate application after the oil spill in the Gulf of Mexico in 2010, where thousands of miles of coastline were contaminated with oil over a period of several weeks. Satellite imagery gave a sense of the damage, but grassroots maps made it possible to create high resolution maps of the damage, which are now part of the public record for anyone to view and use.

Designing for a Need

I want to wrap up the post with a story about my own miniature attempt to contribute to the world of relief technology. As reactors were beginning to overheat in Japan, I heard someone comment on a desire to better understand what was actually needed and how they could help. That night I threw together a quick Google Maps mash-up with the hope to make it easy for people to help log needs and support organizations on the ground:

The next day I learned about the Ushahidi instance and put my project on temporary hold; a few days of rushed hacking wasn't going to save Japan and I needed some giant shoulders to stand on. One week later, it seems that the original need I was approaching -- making it possible for American donors to understand how and where they could help -- is still not being met. Ushahidi has the information buried inside of its maps but the interface is simply not designed for that purpose and the reports are in Japanese.

I want to tap into Ushahidi by creating a layer which can frame the information for charitable supporters rather than for NGOs and survivors. The goal is a system that turns information into action by helping people with resources understand where needs are being met, who is meeting them, and how they can help. In the meantime, though, I would love to hear about any type of relief technology that you have seen which stood out as successful or unique.

January 03 2011

08:00

Hyperlocal voices: James Hatts, SE1

This week’s Hyperlocal Voices interview looks at the long-running SE1 website, which boasts half a million visits every month. Despite being over 12 years old, the site remains at the cutting edge of online journalism; it was among the first experimenters with the Google Maps API and Audioboo.

Who were the people behind the site, and what were their backgrounds?

The London SE1 website is a family-run enterprise. My father, Leigh Hatts, has a background in outdoors, arts and religious affairs journalism. I was still studying for A-levels when we started the website back in 1998. I went on to study History and Spanish at Royal Holloway, University of London, and continued to run the SE1 website even whilst living and studying in Madrid.

What made you decide to set up the site?

My father was editing a monthly what’s on guide for the City of London (ie the Square Mile) with an emphasis on things that City workers could do in their lunch hour such as attending free lectures and concerts. The publication was funded by the City of London Corporation and in later years by the Diocese of London because many of these events and activities happened in the City churches.

Our own neighbourhood – across the Thames from the City – was undergoing a big change. Huge new developments such as Tate Modern and the London Eye were being planned and built. There was lots of new cultural and community activity in the area, but no-one was gathering information about all of the opportunities available to local residents, workers and visitors in a single place.

In the 1970s and 1980s there was a community newspaper called ‘SE1’ but that had died out, and our neighbourhood was just a small part of the coverage areas of the established local papers (South London Press and Southwark News).

We saw that there was a need for high quality local news and information and decided that together we could produce something worthwhile.

When did you set up the site and how did you go about it?

We launched an ad-funded monthly printed what’s on guide called ‘in SE1’ in May 1998. At the same time we launched a website which soon grew into a product that was distinct from (but complementary to) the printed publication.

The earliest version of the site was hosted on free web space from Tripod (a Geocities rival) and was very basic.

By the end of 1998 we had registered the london-se1.co.uk domain and the site as it is today began to evolve.

In 2001 we moved from flat HTML files to a news CMS called WMNews. We still use a much-customised version. The current incarnation of our forum dates from a similar time, and our events database was developed in 2006.

What other websites influenced you?

When we started there weren’t many local news and community websites.

chiswickw4.com started at about the same time as we did and I’ve always admired it. There used to be a great site for the Paddington area called Newspad (run by Brian Jenner) which was another example of a good hyperlocal site before the term was coined.

More recently I’ve enjoyed following the development at some of the local news and listings sites in the USA, like Pegasus News and Edhat.

I also admire Ventnor Blog for the way it keeps local authorities on their toes.

How did – and do – you see yourself in relation to a traditional news operation?

I think we have quite old-fashioned news values – we place a strong emphasis on local government coverage and the importance of local democracy. That means a lot of evenings sitting in long meetings at Southwark and Lambeth town halls.

Quite often the main difference is simply speed of delivery – why should people wait a week for something to appear in a local paper when we can publish within hours or minutes?

We are able to be much more responsive to changes in technology than traditional news operations – we were one of the first news sites in the UK to integrate the Google Maps API into our content management system, and one of the earliest users of Audioboo.

What have been the key moments in the blog’s development editorially?

It’s very difficult to pinpoint ‘key moments’. I think our success has more to do with quiet persistence and consistency of coverage than any particular breakthrough. Our 12-year track record gives us an advantage over the local papers because their reporters covering our patch rarely last more than a year or two before moving on, so they’re constantly starting again from a clean slate in terms of contacts and background knowledge.

There are also several long-running stories that we’ve followed doggedly for a long time – for example the stop-start saga of the regeneration of the Elephant & Castle area, and various major developments along the riverside.

Twitter has changed things a lot for us, both in terms of newsgathering, and being able to share small bits of information quickly that wouldn’t merit writing a longer article.

Some of the key moments in our 12-year history have been as much about technical achievement as editorial.

In 2006 I developed our CMS for events listings. Since then we have carried details of more than 10,000 local events from jumble sales to public meetings and exhibitions of fine art. As well as powering a large part of the website, this system can also output InDesign tagged text ready to be imported straight onto the pages of our printed publication. How many publications have such an integrated online and print workflow?

What sort of traffic do you get and how has that changed over time?

The site consistently gets more than 500,000 page views a month.

We have a weekly email newsletter which has 7,200 subscribers, and we have about 7,500 followers on Twitter.

For us the big growth in traffic came four or five years ago. Since then there have been steady, unspectacular year-on-year increases in visitor numbers.

December 02 2010

16:50

Dotspotting Launches to Make City Data Mappable

Dotspotting is the first project Stamen is releasing as part of Citytracking, a project funded by the Knight News Challenge. We're making tools to help people gather data about cities and make that data more legible. Our hope is to do this in a way that's simple enough for regular people to get involved, but robust enough for real research to happen along the way.

There's currently a whole chain of elements involved in building digital civic infrastructure for the public, and these are represented by various Stamen projects and those of others. At the moment, the hodgepodge of bits -- including APIs and official sources, scraped websites, sometimes-reusable data formats and datasets, visualizations, embeddable widgets etc. -- is fractured, overly technical and obscure, and requires considerable expertise to harness. That is, unless you're willing to use generic tools like Google Maps. We want to change this. Visualizing city data shouldn't be this hard, or this generic.

So the first part of this project is to start from scratch, in a "clean room" environment. We've started from a baseline that's really straightforward, tackling the simplest part: Getting dots on maps, without legacy code or any baggage. Just that, to start. Dots on maps.

More Than Dots

But dots on maps implies a few other things: Getting the locations, putting them on there, working with them, and -- crucially -- getting them out in a format that people can work with.

We've had several interactions with different city agencies so far, and while the situation has changed a lot in the last few years, we've realized that, for the foreseeable future, people aren't going to stop using Word and Excel and Pages and Numbers to work with their data, or even stop using paper. It's made us think that if this stuff is really going to work out in the long run, we need to focus our thinking on projects that can consume as well as export things that cities and people actually use now. This is instead of going with projects that have to rely on fancy APIs or the latest database flavor.

It's great that San Francisco and New York are releasing structured XML data, but Oakland is still uploading Excel spreadsheets (it's actually awesome that they do), and the Tenderloin police lieutenants are printing out paper maps and hand-placing colored stickers on them. At some point, if this really is the way things are going, we're going to need to meet the needs of actual functioning city agencies; and while APIs are great and necessary, for now that means Excel spreadsheets and Word docs. It also means being able to easily read in data that people have uploaded to Google Maps, and to interface with SMS systems like those Ushahidi is pioneering. And it means being able to export to things like PowerPoint and Keynote, scary as that may seem.

What we've launched with is the baseline work that's being done to make this stuff internet-native. There's a login and permissions system that pretty much works. Uploading .csv files full of dots works. Each dot has an HTML page of its own, for example, like they do on Crimespotting. Collections of dots (we're calling them sheets) work, and you can export them. And there are dots on maps.

Easter Egg

What's up with the funny map, above, you ask? That's an undocumented Easter egg that allows you to change the default base map for Dotspotting on the fly using a templated URL. If that last sentence sounds like gibberish, just think: Fun! And a bit dorky. But fun!

Speaking of which, the code for Dotspotting is available for download on GitHub, and licensed for use under the GNU General Public License. We're planning on releasing the code as we work on the project, in the hope that working in this kind of transparent manner from the beginning will both benefit the project and serve as an example of the way we'd like to work with foundations on this kind of work.

September 03 2010

16:00

An open and shut case: At the new TimesOpen, different models for attracting developers to a platform

One phone rings, then another, then four more, now a dozen. The 15th-floor conference room is suddenly abuzz with an eclectic mix of song snippets and audio bits, an intimate peek at their owners before each is picked up or silenced. Having impressed the audience with the telephony technology behind the product, the presenter moves on to the next demo.

The intersection of mobile and geolocation is still an unknown world, waiting to be invented by hackers like the ones at round 2.0 of TimesOpen, The New York Times’ outreach to developers, which launched Thursday night. We wrote about the first TimesOpen event last year: It’s an attempt to open the doors of the The Times to developers, technologists, designers, and entrepreneurs, who can use Times tools to help answer some of the field’s big questions. This iteration of TimesOpen is a five-event series this fall, each focusing on a different topic: mobile/geolocation, open government, the real-time web, “big data,” and finally a hack day in early December.

On the docket Thursday were Matt Kelly of Facebook, John Britton of Twilio, Manu Marks of Google, and John Keefe of WNYC. Kelly presented Facebook Places; Britton gave one of his now New York-famous live demos of the Twilio API; Marks dove deep into the various flavors of the Google Maps API; Keefe — the only non-programmer of the bunch — discussed lessons learned from a community engagement project with The Takeaway.

Building community around an API

An API, or application programming interface, allows applications to communicate easily with one another. For example, any iPhone or Android application that pulls information from a web-based database is most likely doing so through an API. If you search local restaurants through Yelp, your location and query are passed to Yelp and results are given in return. For any company with an API, like the three at TimesOpen, the challenge is to convince developers they should spend their time innovating on top of your platform. Strategically, when there’s an entire ecosystem living on top of your platform, your platform becomes indispensable and valuable.
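To make the request-and-response dance concrete, here is what such an exchange might look like in Python against a made-up endpoint (this is not Yelp's actual API, just the shape of the interaction):

    # Send a search query plus a location to a hypothetical web API
    # and print the businesses it returns.
    import requests

    params = {
        "term": "restaurants",
        "lat": 40.7484,   # the caller's location
        "lon": -73.9857,
    }
    resp = requests.get("https://api.example.com/v1/search", params=params)
    for business in resp.json().get("businesses", []):
        print(business["name"], business["rating"])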

What’s most fascinating to me, however, are the approaches each company is taking to build a community around its API. The community is the most important key to the success of an API and a major source of innovation. One of the keys to Twitter’s explosive growth has been its API; rather than depending on its own developers for all new innovation, Twitter inadvertently created an entire ecosystem of value on top of its platform.

Let’s contrast Facebook and Twilio, for example. Facebook hopes Places, launched in mid-August, will become the definitive platform for all location data. Interoperability can happen, but it should happen over Facebook’s infrastructure. Facebook envisions a future where, in addition to showing you where your friends are in real time, Places will also offer historical social context to location. Remember the trip through South America your friend was telling you about? Now you don’t have to; all of the relevant information is accessible through Places.

At the moment, though, Facebook’s only public location API is read-only. It can give a developer a single check-in, all check-ins for a given user, or check-in data for a given location. They have a closed beta for the write API with no definitive timeline for opening it publicly. Expanded access to the API is done through partnerships reserved for the select few.

Twilio’s demo power

Twilio, on the other hand, is a cloud-based telephony company which offers voice and SMS functionality as a service, and whose business depends wholly on extensive use of its API. Developer evangelist John Britton made a splash at the NY Tech Meetup when, in front of hundreds, he wrote a program and did a live demo that elegantly communicated the full scope of what their product offers. On Thursday, he impressed again: Using the Twilio API, he procured a phone number, and had everyone in the audience dial into it. When connected, callers were added to one of three conference rooms. Dialing into the party line also meant your phone number was logged, and the application could then follow up by calling you back. All of this was done with close to a dozen lines of code.
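I didn't copy down Britton's code, but here is a rough guess at the shape of the demo: a tiny Flask app that Twilio's webhook calls when someone dials the number, logging the caller and dropping them into one of three conference rooms via plain TwiML. The route and room names are invented:

    # A sketch of a dial-in demo: Twilio POSTs to /voice on each call;
    # we log the caller's number and return conference-room TwiML.
    import random
    from flask import Flask, request

    app = Flask(__name__)
    callers = []  # phone numbers logged for the follow-up call-back

    TWIML = """<?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Dial><Conference>{room}</Conference></Dial>
    </Response>"""

    @app.route("/voice", methods=["POST"])
    def voice():
        callers.append(request.form.get("From"))  # caller's number
        room = random.choice(["room-1", "room-2", "room-3"])
        return TWIML.format(room=room), 200, {"Content-Type": "text/xml"}

    if __name__ == "__main__":
        app.run(port=5000)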

At TimesOpen, Britton stressed that API providers need to keep a keen ear to their community. Community members often have ideas for how you can improve your service to solve the intermediate problems they have. For instance, up until a week ago, Twilio didn’t have the functionality to block phone numbers from repeatedly dialing in. For one company using the platform, the absence of this feature became a significant financial liability. Once rolled out, the feature made Twilio a much more valuable service because the company could tailor it more closely to its needs. To make experimentation even easier, Twilio also has an open source product called OpenVBX and brings together its community with regular meetups.

Facebook already has the scale and the social graph to make any new API it produces a player. But for wooing the hackers — at least when you’re a small and growing platform — open and inclusive seems to win out over closed and exclusive.

September 01 2010

10:57

NEW DIGITAL NARRATIVES: FACTS, GRAPHIC IDEAS AND FAST QUALITY JOURNALISM


INNOVATION’s Chiqui Esteban leads the New Narratives department of lainformacion.com in Madrid, Spain.

The graphic work done by his small team (Chiqui and Carlos Gamez, plus Sarah Potts, an intern) is astonishing.

They have the support of lainformacion.com's art director, Antonio Pasagali, and its video and web developers and HTML designers, but they do 90% of the final work themselves.

They use Flash, Illustrator, Photoshop, Cinema 4D, Dreamweaver, Soundbooth, Premiere and Click2Map for Google Maps…

Chiqui and Carlos are as fast as the best writers at the best newswire services.

They are quick but accurate, always focusing on “the news behind the news”: trying to explain, to find out, to discover new angles, and to deliver not just facts but graphic ideas and fast, quality journalism.

This is a 24/7 full-time team, permanently in deadline mode.

Chiqui and Carlos have limited resources but unlimited creativity, and their work shows what you can do when you have real journalists.

Here you can review some of the work done during the last 12 months at lainformacion.com.


An amazing showcase of almost 400 pieces of first-class visual journalism!

Keep in mind that lainformacion.com, a new pure digital media company, was founded by Mario Tascon, the most influential Spanish infographics journalist. Today it has a new editor, Carlos Salas, a very visual journalist and the founding editor of El Economista, the financial newspaper launched by INNOVATION and one of the Best Designed Newspapers of the World.

And more:

Chiqui has been able to lead all this effort while also working with many INNOVATION clients around the world, teaching and preaching the New Visual Journalism Gospel.

Follow his Spanish/English blog. It's a must-read.

August 16 2010

18:43

GoMap Helps Communities Map Local Events, News

GoMap is a map-based interface for local news, initiatives, building projects, public hearings and tweets. Our project, which won a 2010 Knight News Challenge grant, is meant to turn a city into a neighborhood: a place where everybody can see and hear their friends, communicate with each other, and have fun based on their geographical location. Here's how the project was described by the Knight Foundation:

To inspire people to get involved in their community, this project will create a live, online map with local news and activities. GoMap Riga will pull some content from the web and place it automatically on the map. Residents also will be able to add their own news, pictures and videos while also discussing what is happening around them. GoMap Riga will be integrated with the major existing social networks and allow civic participation through mobile technology. The project will be tested in Riga, Latvia, and ultimately be applicable in other cities.

You can also watch a video about GoMap:

Knight News Challenge: GoMap Riga from Knight Foundation on Vimeo.

Below is a piece-by-piece overview of the elements that will be incorporated into the project.

Project Elements

News -- GoMap will automatically read news from online sources and place it on the map. GoMap will notify people about news related to their home or area of interest, so that they won't miss it when something is going on in their local community.

Initiatives -- Issues like "this fountain needs to get fixed" or "let's have an artist wall here" could take their place on the city map. People could create initiatives on the map, gather signatures from fellow citizens, and bring an initiative to the attention of the local municipality, media, police, etc. in order to get things done.

Building projects -- GoMap will automatically place all local building projects on the map, notify locals about them, and in effect host an online public hearing about each project.

Twitter -- Tweets like "check out this bar" or "let's meet right here" will be incorporated into GoMap. With just two clicks, people can have their tweets placed on the map.

We're currently in the very early stages of development, and you can keep an eye on our progress at http://gomapdev.appspot.com. Our key challenge is to master the Google Maps API and write a lot of new code to get things to work and look the way we need.
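As a rough sketch of the "read news and place it on the map" pipeline, here is what a first pass could look like in Python, assuming the feedparser and requests packages and Google's Geocoding API; the feed URL and the assumption that a headline names a place are invented for illustration.

    import feedparser
    import requests

    FEED_URL = "http://example.com/riga-news.rss"  # hypothetical local feed
    GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

    def news_markers(api_key):
        markers = []
        for entry in feedparser.parse(FEED_URL).entries:
            geo = requests.get(GEOCODE_URL, params={
                "address": entry.title,  # naive: assume the headline names a place
                "key": api_key,
            }).json()
            if geo.get("results"):
                loc = geo["results"][0]["geometry"]["location"]
                markers.append({"title": entry.title,
                                "lat": loc["lat"], "lng": loc["lng"]})
        return markers  # ready to hand to the map front end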

Let us know what you think about our project, and thanks for reading!


June 28 2010

08:00

#Tip of the day from Journalism.co.uk – local map widgets

Hyperlocal Google maps: mySociety has a simple step-by-step guide showing you how to display the most recent reports from its FixMyStreet service on a local map widget, which can then be embedded on your blog or site. Tipster: Judith Townend. To submit a tip to Journalism.co.uk, use this link; we will pay a fiver for the best ones published.
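If you would rather pull the raw reports than embed the widget, FixMyStreet also publishes local RSS feeds. A minimal sketch, assuming the feedparser package and the site's /rss/l/lat,lon URL pattern (an assumption on my part; check mySociety's guide for what is actually supported):

    import feedparser

    def recent_reports(lat, lon):
        # Each entry is one problem report near the given coordinates.
        feed = feedparser.parse(f"https://www.fixmystreet.com/rss/l/{lat},{lon}")
        return [(entry.title, entry.link) for entry in feed.entries]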


June 04 2010

15:29

Mapping stories and historical images on Google Street View

Historypin, a site that overlays historical images and related stories on Google Street View, describes itself as “like a digital time machine”:

It uses Google Maps and Street View technology and hopes to become the largest user-generated archive of the world’s historical images and stories.

Historypin asks the public to dig out, upload and pin their own old photos, as well as the stories behind them, onto the Historypin map. Uniquely, Historypin lets you layer old images onto modern Street View scenes, giving a series of peeks into the past.

It has been developed by We Are What We Do, the “social movement” and campaign that was behind the book ‘Teach your Granny to Text and Other Ways to Change the World’, in partnership with Google.

If the technology behind it were opened up, this would be a fascinating way to publish ‘nostalgia’ pictures from local newspapers and news archives, or to map historic stories.

(via Mapperz)


May 18 2010

21:11

A journalistic tour of the Argentinian Bicentenary

On May 25th we celebrate the Argentinian Bicentenary. And while the big media outlets aren’t showing any really interesting initiatives, we have Tu Bicentenario, an independent, experimental journalism project that aims to provide real-time coverage of the main events of the celebrations using social tools and user collaboration.

With a highly customizable website that integrates different movable boxes, including Facebook, Twitter, YouTube, Vimeo, Google Maps and mobile streaming, they are trying to make it easy for both the creators and the audience to produce and publish content.

The most interesting content to come out of the project so far, in my opinion, is the survey in pictures and videos of historic sites, contrasted with old images to show how the cities have changed. This material is being geolocated on Google Maps.

Some Argentinean Google Maps users have also uploaded 3D models of the most important sights, so you can take a virtual tour of the country.


November 21 2009

08:03

Google Latitude’s Location History provides more opportunities for mobile journalism

This was originally published in Poynter’s E-Media Tidbits last week

Google Latitude, a service that allows people to see where you are, has launched two new services, Location History and Location Alerts, which offer some interesting potential for mobile journalism.

[Screenshot: Location History in Google Latitude]

Location History (shown above) allows you to “store, view, and manage your past Latitude locations. You can visualize your history on Google Maps and Earth or play back a recent trip in order.”

There are obvious possibilities here for editing a map with editorial information: if you’re covering a parade, a marathon or a demonstration, you could edit placemarks to add relevant reports as you posted them (or someone else with access to the account could do so from the newsroom).
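A minimal sketch of that placemark-editing idea: turning reports filed along a route into KML that Google Maps or Earth can display on top of the Latitude trail. The report structure here is invented for illustration.

    def reports_to_kml(reports):
        # Each report is a dict with 'headline', 'text', 'lat' and 'lon'
        # keys (an invented structure for this sketch).
        placemarks = "".join(
            f"""
      <Placemark>
        <name>{r['headline']}</name>
        <description>{r['text']}</description>
        <Point><coordinates>{r['lon']},{r['lat']},0</coordinates></Point>
      </Placemark>"""
            for r in reports
        )
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
                + placemarks + '\n</Document></kml>')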

Location Alerts is less obviously useful: it sends you a notification (by email and/or text) when you are near a friend’s location, although, as Google explains, it’s a little cleverer than that:

“Using your past location history, Location Alerts can recognize your regular, routine locations and not create alerts when you’re at places like home or work. Alerts will only be sent to you and any nearby friends when you’re either at an unusual place or at a routine place at an unusual time. Keep in mind that it may take up to a week to learn your “unusual” locations and start sending alerts.”

There is potential here for making serendipitous contact with readers or contacts, but until Latitude has widespread adoption (its biggest issue for me, and one that may never be resolved), it’s not likely to be useful in the immediate future.

The good thing about Latitude is that you can enable and disable it to suit you; in my own experience, I only enable it when I want to meet someone using GPS on my phone. To sign up to Google Latitude, go here. To enable the new features, go to google.com/latitude/apps.

Those are two uses I can think of, and I’ve yet to have a serious play with it. Can you think of any others?
