
July 15 2011

07:19

When information is power, these are the questions we should be asking

Various commentators over the past year have observed that “Data is the new oil”. If that’s the case, journalists should be following the money. But they’re not.

Instead it’s falling to the likes of Tony Hirst (an Open University academic), Dan Herbert (an Oxford Brookes academic) and Chris Taggart (a developer who used to be a magazine publisher) to fill the scrutiny gap. Recently all three have shone a light on the move towards transparency and open data; anyone with an interest in information would be well advised to read their work.

Hirst wrote a particularly detailed post breaking down the results of a consultation about higher education data.

Herbert wrote about the publication of the first Whole of Government Accounts for the UK.

And Taggart made one of the best presentations I’ve seen on the relationship between information and democracy.

What all three highlight is how control of information still represents the exercise of power, and how shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.

Control, Cost, Confusion

Hirst, for example, identifies the potential for data about higher education to be monopolised by one organisation – UCAS, or HEFCE – at extra cost to universities, resulting in less detailed information for students and parents.

His translation of the outcomes of a HEFCE consultation brings to mind the situation that existed for years around Ordnance Survey data: taxpayers were paying for the information up to 8 times over, and the prohibitive cost of accessing that data ended up inspiring the Free Our Data campaign. As Hirst writes:

“The data burden is on the universities?! But the aggregation – where the value is locked up – is under the control of the centre? … So how much do we think the third party software vendors are going to claim for to make the changes to their systems? And hands up who thinks that those changes will also be antagonistic to developers who might be minded to open up the data via APIs. After all, if you can get data out of your commercially licensed enterprise software via a programmable API, there’s less requirement to stump up the cash to pay for maintenance and the implementation of “additional” features…”

Meanwhile Dan Herbert analyses another approach to data publication: the arrival of commercial-style accounting reports for the public sector. On the surface this all sounds transparent, but it may be just the opposite:

“There is absolutely no empiric evidence that shows that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information then there is no point in making the reports better. If there are no users more technically correct reports do nothing to improve the understanding of public finances. In effect all that better reports do is legitimise the role of professional accountants in the accountability process.”

Like Hirst, he argues that the raw data – and the ability to interrogate that – should instead be made available because (quoting Anthony Hopwood): “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.”

This is a characteristic of the transparency initiative that we need to be sharper about as journalists. The Manchester Evening News discovered this when they wanted to look at spending cuts. What they found was a dataset that had been ‘spun’ to make it harder to see the story hidden within; to answer their question they first had to unspin it – or, in data journalism parlance, clean it. Likewise, having granular data – ideally from more than one source – allows us to better judge the quality of the information itself.
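Cleaning a ‘spun’ dataset mostly means normalising it until like can be compared with like: figures published as formatted strings, and categories split by stray whitespace, have to be repaired before the story becomes visible. A minimal sketch, with invented department names and figures:

```python
# Minimal sketch of "unspinning" a published spending dataset.
# The column names and values here are hypothetical.

import re

def clean_amount(raw):
    """Convert a published figure like '£1.2m' or '1,200' to a number."""
    s = raw.strip().replace(",", "").lstrip("£")
    multiplier = 1_000_000 if s.lower().endswith("m") else 1
    s = re.sub(r"[mM]$", "", s)
    return float(s) * multiplier

rows = [
    {"department": "Leisure", "cut": "£1.2m"},
    {"department": "Leisure ", "cut": "300,000"},  # trailing space splits one dept in two
]

totals = {}
for row in rows:
    dept = row["department"].strip()  # merge spuriously distinct categories
    totals[dept] = totals.get(dept, 0) + clean_amount(row["cut"])

print(totals)  # {'Leisure': 1500000.0}
```

Trivial as each step is, a dataset released with enough of these quirks can hide its totals from anyone without the time or skills to do this.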

Chris Taggart meanwhile looks at the big picture: friction, he says, underpins society as we know it. Businesses such as real estate are based on it; privacy exists because of it; and democracies depend on it. As friction is removed through access to information, we get problems such as “jurisdiction failure” (corporate lawyers having “hacked” local laws to international advantage), but also issues around the democratic accountability of ad hoc communities and how we deal with different conceptions of privacy across borders.

Questions to ask of ‘transparency’

The point isn’t about the answers to the questions that Taggart, Herbert and Hirst raise – it’s the questions themselves, and the fact that journalists are, too often, not asking them when we are presented with yet another ‘transparency initiative’.

If data is the new oil, those three posts and a presentation provide a useful introduction to following the money.

(By the way, for a great example of a journalist asking all the right questions of one such initiative, see The Telegraph’s Conrad Quilty-Harper on the launch of Police.uk.)

Data is not just some opaque term, something for geeks: it is information, the raw material we deal in as journalists. Knowledge. Power. The site of a struggle for control. And considering it’s a site that journalists have always fought over, the profession is surprisingly placid as we enter one of the most important ages in the history of information control.

As Heather Brooke writes today of the hacking scandal:

“Journalism in Britain is a patronage system – just like politics. It is rare to get good, timely information through merit (eg by trawling through public records); instead it’s about knowing the right people, exchanging favours. In America reporters are not allowed to accept any hospitality. In Britain, taking people out to lunch is de rigueur. It’s where information is traded. But in this setting, information comes at a price.

“This is why there is collusion between the elites of the police, politicians and the press. It is a cartel of information. The press only get information by playing the game. There is a reason none of the main political reporters investigated MPs’ expenses – because to do so would have meant falling out with those who control access to important civic information. The press – like the public – have little statutory right to information with no strings attached. Inside parliament the lobby system is an exercise in client journalism that serves primarily the interests of the powerful. Freedom of information laws bust open the cartel.”

But laws come with loopholes and exemptions, red tape and ignorance. And they need to be fought over.

One bill to extend the FOI law to “remove provisions permitting Ministers to overrule decisions of the Information Commissioner and Information Tribunal; to limit the time allowed for public authorities to respond to requests involving consideration of the public interest; to amend the definition of public authorities” and more, for example, was recently put on indefinite hold. How many publishers and journalists are lobbying to un-pause this?

So let’s simplify things. And in doing so, there’s no better place to start than David Eaves’ 3 laws of government data.

This is summed up as the need to be able to “Find, Play and Share” information. For the purposes of journalism, however, I’ll rephrase them as 3 questions to ask of any transparency initiative:

  1. If information is to be published in a database behind a form, then it’s hidden in plain sight. It cannot be easily found by a journalist, and only simple questions will be answered.
  2. If information is to be published in PDFs or JPEGs, or some format that you need proprietary software to see, then it cannot easily be questioned by a journalist.
  3. If you have to pass a test to use the information, then obstacles will be placed between the journalist and that information.
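The first two questions are really about friction. When figures arrive in a machine-readable format, interrogating them takes a few lines of code; the same table locked in a PDF or behind a search form only answers the questions its publisher chose to allow. A sketch, with invented supplier data:

```python
# Data published as CSV can be asked arbitrary questions in a few lines.
# The suppliers and amounts below are invented for illustration.

import csv, io

published = """supplier,amount
Acme Ltd,5000
Acme Ltd,72000
Initech,1500
"""

reader = csv.DictReader(io.StringIO(published))
over_10k = [row["supplier"] for row in reader if float(row["amount"]) > 10000]
print(over_10k)  # ['Acme Ltd']
```

A journalist handed the same three rows as a scanned PDF would first have to re-type or OCR them before asking even this simple question.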

The next time an organisation claims that they are opening up their information, tick those questions off. (If you want more, see Gurstein’s list of 7 elements that are needed to make effective use of open data).

At the moment, the history of information is being written without journalists.


December 21 2010

15:26

Videos: Linked data and the semantic web

Courtesy of the BBC College of Journalism, we’ve got video footage from all of our sessions at news:rewired – beyond the story, 16 December 2010.

We’ll be grouping the video clips by session – you can view all footage by looking at the multimedia category on this site.

Martin Moore

Martin Belam

Simon Rogers

Silver Oliver

December 19 2010

18:00

Games, systems and context in journalism at News Rewired

I went to News Rewired on Thursday, along with dozens of other journalists and folk concerned in various ways with news production. Some threads that ran through the day for me were discussions of how we publish our data (and allow others to do the same), how we link our stories together with each other and the rest of the web, and how we can help our readers to explore context around our stories.

One session focused heavily on SEO for specialist organisations, but included a few sharp lessons for all news organisations. Frank Gosch spoke about the importance of ensuring your site’s RSS feeds are up to date and allow other people to easily subscribe to and even republish your content. Instead of clinging tight to content, it’s good for your search rankings to let other people spread it around.

James Lowery echoed this theme, suggesting that publishers, like governments, should look at providing and publishing their data in re-usable, open formats like XML. It’s easy for data journalists to get hung up on how local councils, for instance, are publishing their data in PDFs, but to miss how our own news organisations are putting out our stories, visualisations and even datasets in formats that limit or even prevent re-use and mashup.

Following on from that, in the session on linked data and the semantic web, Martin Belam spoke about the Guardian’s API, which can be queried to return stories on particular subjects and which is starting to use unique identifiers – MusicBrainz IDs and ISBNs, for instance – to allow lists of stories to be pulled out not simply by text string but using a meaningful identification system. He added that publishers have to licence content in a meaningful way, so that it can be reused widely without running into legal issues.
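The difference between a text search and an identifier-based query can be sketched as follows. This follows the general pattern of the Guardian’s Open Platform content API, but the exact endpoint and parameters are not guaranteed current, and the API key is a placeholder:

```python
# Sketch of querying a content API by tag/identifier rather than text string.
# Endpoint and parameter names follow the Guardian Open Platform's documented
# pattern at the time of writing; treat them as assumptions, not gospel.

from urllib.parse import urlencode

def search_url(tag, api_key="YOUR-KEY"):
    """Build a search URL pinned to one tag, not a fuzzy text match."""
    params = {"tag": tag, "show-fields": "headline", "api-key": api_key}
    return "https://content.guardianapis.com/search?" + urlencode(params)

# A tag is a stable identifier: every story carrying it is about the same
# thing, which a keyword search for the same words cannot promise.
print(search_url("books/series/digestedread"))
```

The same principle applies to MusicBrainz IDs and ISBNs: the identifier names a thing, where a text string merely resembles one.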

Silver Oliver said that semantically tagged data, linked data, creates opportunities for pulling in contextual information for our stories from all sorts of other sources. And conversely, if we semantically tag our stories and make it possible for other people to re-use them, we’ll start to see our content popping up in unexpected ways and places.

And in the long term, he suggested, we’ll start to see people following stories completely independently of platform, medium or brand. Tracking a linked data tag (if that’s the right word) and following what’s new, what’s interesting, and what will work on whatever device I happen to have in my hand right now and whatever connection I’m currently on – images, video, audio, text, interactives; wifi, 3G, EDGE, offline. Regardless of who made it.

And this is part of the ongoing move towards creating a web that understands not only objects but also relationships, a world of meaningful nouns and verbs rather than text strings and many-to-many tables. It’s impossible to predict what will come from these developments, but – as an example – it’s not hard to imagine being able to take a photo of a front page on a newsstand and use it to search online for the story it refers to. And the results of that search might have nothing to do with the newspaper brand.

That’s the down side to all this. News consumption – already massively decentralised thanks to the social web – is likely to drift even further away from the cosy silos of news brands (with the honourable exception of paywalled gardens, perhaps). What can individual journalists and news organisations offer that the cloud can’t?

One exciting answer lies in the last session of the day, which looked at journalism and games. I wrote some time ago about ways news organisations were harnessing games, and could do in the future – and the opportunities are now starting to take shape. With constant calls for news organisations to add context to stories, it’s easy to miss the possibility that – as Philip Trippenbach said at News Rewired – you can’t explain a system with a story:

Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work.

Many of the issues we cover – climate change, government cuts, the deficit – at macro level are systems that could be interestingly and interactively explored with games. (Like this climate change game here, for instance.) Other stories can be articulated and broadened through games in a way that allows for real empathy between the reader/player and the subject because they are experiential rather than intellectual. (Like Escape from Woomera.)

Games allow players to explore systems, scenarios and entire universes in detail, prodding their limits and discovering their flaws and hidden logic. They can be intriguing, tricky, challenging, educational, complex like the best stories can be, but they’re also fun to experience, unlike so much news content that has a tendency to feel like work.

(By the by, this is true not just of computer and console games but also of live, tabletop, board and social games of all sorts – there are rich veins of community journalism that could be developed in these areas too, as the Rochester Democrat and Chronicle is hoping to prove for a second time.)

So the big things to take away from News Rewired, for me?

  • The systems within which we do journalism are changing, and the semantic web will most likely bring another seismic change in news consumption and production.
  • It’s going to be increasingly important for us to produce content that both takes advantage of these new technologies and allows others to use these technologies to take advantage of it.
  • And by tapping into the interactive possibilities of the internet through games, we can help our readers explore complex systems that don’t lend themselves to simple stories.

Oh, and some very decent whisky.

Cross-posted at Metamedia.

December 16 2010

15:05

LIVE: Linked data and the semantic web

We’ll have Matt Caines and Nick Petrie from Wannabe Hacks liveblogging for us at news:rewired all day. Follow individual posts on the news:rewired blog for up to date information on all our sessions.

We’ll also have blogging over the course of the day from freelance journalist Rosie Niven.

September 06 2010

20:35

Charities data opened up – journalists: say thanks.

Having made significant inroads in opening up council and local election data, Chris Taggart has now opened up charities data from the less-than-open Charity Commission website. The result: a new website – Open Charities.

The man deserves a round of applause. Charity data is enormously important in all sorts of ways – and is likely to become more so as the government leans on the third sector to take on a bigger role in providing public services. Making it easier to join the dots between charitable organisations, the private and public sector, contracts and individuals – which is what Open Charities does – will help journalists and bloggers enormously.

A blog post by Chris explains the site and its background in more depth. In it he explains that:

“For now, it’s just the simplest of things, a web application with a unique URL for every charity based on its charity number, and with the basic information for each charity available as data (XML, JSON and RDF). It’s also searchable, and sortable by most recent income and spending, and for linked data people there are dereferenceable Resource URIs.

“The entire database is available to download and reuse (under an open, share-alike attribution licence). It’s a compressed CSV file, weighing in at just under 20MB for the compressed version, and should probably only be attempted by those familiar with manipulating large datasets (don’t try opening it up in your spreadsheet, for example). I’m also in the process of importing it into Google Fusion Tables (it’s still churning away in the background) and will post a link when it’s done.”
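A unique URL per charity number means a script can address any charity’s record directly. The sketch below assumes the URL pattern Chris describes (charity number plus a `.json` suffix) and invents the field names of the returned record, so treat both as illustrative rather than documented:

```python
# Sketch of consuming one charity's record from a per-charity URL.
# The URL pattern and JSON field names are assumptions for illustration.

import json

def charity_url(charity_number):
    """One dereferenceable URL per charity, keyed on its registered number."""
    return f"http://opencharities.org/charities/{charity_number}.json"

# What such a record might look like once fetched and parsed
# (sample data, not a live response):
sample = json.loads('{"charity": {"charity_number": "216401", "title": "Example Charity"}}')

print(charity_url(216401))
print(sample["charity"]["title"])
```

That addressability – every charity a stable URL, every record available as data – is exactly what makes it possible to join the dots between charities, contracts and individuals programmatically.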

Chris promises to add more features “if there’s any interest”.

Well, go on…

September 03 2010

10:56

Why the US and UK are leading the way on semantic web

Following his involvement in the first Datajournalism meetup in Berlin earlier this week, Martin Belam, the Guardian’s information architect, looks at why the US and UK may have taken the lead in semantic web, as one audience member suggested on the day.

In an attempt to try and answer the question, he puts forward four themes on his currybet.net blog that he feels may play a part. In summary, they are:

  • The sharing of a common language which helps both nations access the same resources and be included in comparative datasets.
  • Competition across both sides of the pond driving innovation.
  • Successful business models already being used by the BBC and even more valuably being explained on their internet blogs.
  • Open data and a history of freedom of information court cases which makes official information more likely to be made available.

On his full post here he also has tips for how to follow the UK’s lead, such as getting involved in hacks-and-hackers-type events.

May 07 2010

14:00

An involuntary Facebook for reporters and their work: Martin Moore on the U.K.’s Journalisted

In the era of big media, our conceptions of trust were tied up in news organizations. If a story was on page 1 of The New York Times, that fact alone conjured up different associations of quality, truthfulness, and trustworthiness than if it were on page 1 of The National Enquirer. Those associations weren’t consistent — many Fox News viewers would have different views on the trustworthiness of the Times than I do — but they still largely lived at the level of the news organization.

But in an era of big-media regression and splintered news — when news can be delivered online by someone you hadn’t even heard of 10 seconds ago — how does trust evolve? Does it trickle down to the individual journalist: Do we decide who to trust not based on the news organization they work for but on the reporter? Are there ways to build metadata around those long-faceless bylines that can help us through the trust thicket?

It’s a question that’s getting poked at by Journalisted, the project of the U.K.’s Media Standards Trust. You can think of Journalisted as an involuntary Facebook for British reporters — at the moment, those who work for the national newspapers and the BBC, but with hopes to expand. It tracks their work across news organizations, cataloging it and drawing what data-based conclusions it can.

So if you run across an article by Richard Norton-Taylor and have pangs of doubt about his work, you can go see what else he’s written about the subject or anything else. There’s also a bit of metadata around his journalism: a tag cloud tells you he writes more about MI5 than anything else, although lately he’s been more focused on NATO. You can see what U.K. bloggers wrote about each of his stories, and you can find other journalists who write about similar topics. And for journalists who choose to provide it, you can learn biographical information, like the fact that Simon Rothstein is an award-winning writer about professional wrestling, so maybe his WWE stories are more worth your time.

It is very much a first step — Journalisted is not yet the vaunted distributed trust network that will help us decide who to pay attention to and who we can safely ignore. The journalist-matching metadata is really interesting, but it still doesn’t go very far in determining merit: No one’s built those tools yet. But it’s a significant initiative toward placing journalists in the context of their work and their peers, and in the new splintered world, that context is going to be important.

Our friend Martin Moore of the Media Standards Trust dropped by our spare-shelved office not long ago and I asked him to talk about Journalisted. Video above, transcript below.

Journalisted is essentially a directory of all the journalists who are published in the UK national press and on the BBC, and in the future other sites as well. Each journalist has their own profile page, a little bit like Facebook or LinkedIn, but the difference being that that page is automatically updated with links to their most recent articles. It has some basic analysis of the content of those articles, so what they write an awful lot about, and what they don’t. And, it has links to other information to give context to the journalist, so if they have a profile in the paper, or if they have a Wikipedia page, or if they have their own personal blog or website. And as of a couple of weeks ago, they can add further information themselves if they’d like to.

[...]

If you’re interested in a particular journalist and you want to know more about what they write about, again to give you context, then obviously that’s a very good way of doing it. It tells you if they come from a particular perspective, it tells you if they’ve written an awful lot about a subject. If you, for example, read a piece strongly recommending against multiple vaccinations, you might want to know if this person has a history of being anti-multiple vaccinations, or if they have particular qualifications in science that make them very good reporting on this issue, etc. So, it gives you that context.

It also, on a simpler level, can give you contact details. So, where a journalist has published their email address, we automatically serve it up. But equally they can themselves put in further contact information, if you want to follow up on a story. And we also have some interesting analytics which lead you on to journalists who write about similar topics, or if you read an article, similar articles on the same topic. So again, it’s to contextualize the news and to help you to navigate and have more reason to trust a piece.

[...]

Initially, there was a bit of shock, I think. An awful lot of journalists don’t expect the spotlight to be turned around and put on them, so we had some very interesting exchanges. Since then, it’s now been around long enough that a lot of journalists have actually started to almost use it as their online CV. They’re adding their own stuff, they’re asking us to add stuff on their behalf, and they’re seeing that it can be of benefit to them, either with sources, so that they can allow sources to contact them, and to engage with them, or, equally, with employers. Quite a number of journalists have told us that editors have looked at their Journalisted profile and made a decision as to whether to offer them some work.

[...]

There are a number of goals. The initial one that we’re working on now is to flesh out the profiles much more. So to give people much more depth around the person so that they can have a much better impression as to who this journalist is, what they write about, their qualifications, the awards that they’ve won, and the books that they’ve written, etc. So, really flesh out the individual profile.

Following on from that, we’d love to expand it. We’d love to bring in more journalists, more publications — if possible, even go international. Our hope is that in the future, it will start to become a central resource, if you like, a junction point, a linked data resource, so that it will be the place you’ll come to from either the news site, from a blog, from wherever, in order to find out more about a journalist.

March 17 2010

09:32

MediaShift: Why news organisations should use ‘linked data’

Director of the Media Standards Trust Martin Moore gives 10 reasons why news organisations should use “linked data” – “a way of publishing information so that it can easily – and automatically – be linked to other, similar data on the web”.

[Moore's recommendations follow the News Linked Data Summit and you can read more about the event at this link.]

It’s worth reading the list in full, but some of the top reasons include:

  • Linked data can boost search engine optimisation;
  • It helps you and other people build services around your content;
  • It helps journalists with their work:

As a news organisation publishes more of its news content in linked data, it can start providing its journalists with more helpful information to inform the articles they’re writing. Existing linked data can also provide suggestions as to what else to link to.

Full post at this link…

February 25 2010

11:24

Experiments in online journalism

Last month the first submissions by students on the MA in Online Journalism landed on my desk. I had set two assignments. The first was a standard portfolio of online journalism work as part of an ongoing, live news project. But the second was explicitly branded ‘Experimental Portfolio’ – you can see the brief here. I wanted students to have a space to fail. I had no idea how brave they would be, or how successful. The results, thankfully, surpassed any expectations I had. They included:

There were a number of things I found positive about the results. Firstly, the sheer variety – students seemed either instinctively or explicitly to choose areas distinct from each other. The resulting reservoir of knowledge and experience, then, has huge promise for moving into the second and final parts of the MA, providing a foundation for learning from each other.

Secondly, by traditional standards a couple of students did indeed ‘fail’ to produce a concrete product. But that was what the brief allowed – in fact, encouraged. They were not assessed on success, but research, reflection and creativity. The most interesting projects were those that did not produce anything other than an incredible amount of learning on the part of the student. In other words, it was about process rather than product, which seems appropriate given the nature of much online journalism.

Process, not product

One of the problems I sought to address with this brief was that students are often result-focused and – like journalists and news organisations themselves – minimise risk in order to maximise efficiency. So the brief took away those incentives and introduced new ones that rewarded risk-taking because, ultimately, MA-level study is as much about testing new ideas as it is about mastering a set of skills and area of knowledge. In addition, the whole portfolio was only worth 20% of their final mark, so the stakes were low.

Some things can be improved. There were 3 areas of assessment – the third, creativity, was sometimes difficult to assess in the absence of any product. There is the creativity of the idea, and how the student tackles setbacks and challenges, but that could be stated more explicitly perhaps.

Secondly, the ‘evaluation’ format would be better replaced by an iterative, blog-as-you-go format which would allow students to tap into existing communities of knowledge, and act as a platform for ongoing feedback. The loop of research-experiment-reflect-research could be integrated into the blog format – perhaps a Tumblelog might be particularly useful here? Or a vlog? Or both?

As always, I’m talking about this in public to invite your own ideas and feedback on whether these ideas are useful, and where they might go next. I’ll be inviting the students to contribute their own thoughts too.

February 24 2010

14:02

A history of linked data at the BBC

Martin Belam, information architect for the Guardian and CurryBet blogger, reports from today’s Linked Data meet-up in London, for Journalism.co.uk.

You can read the first report, ‘How media sites can use linked data’ at this link.

There are many challenges when using linked data to cover news and sport, Silver Oliver, information architect in the BBC’s journalism department, told delegates at today’s Linked Data meet-up session at ULU, part of a wider dev8d event for developers.

Initially newspapers saw the web as just another linear distribution channel, said Silver. That meant we ended up with lots and lots of individually published news stories online that needed information architects to gather them up into useful piles.

He believes we’ve hit the boundaries of that approach, and something like the data-driven approach of the BBC’s Wildlife Finder is the future for news and sport.

But the challenge is to find similar models for sport, journalism and news.

A linked data ecosystem is built out of a content repository, a structure for that content, and then the user experience that is laid over that content structure.

But how do you populate these datasets in departments and newsrooms that barely have the resource to manage small taxonomies or collections of external links, let alone populate a huge ‘ontology of news’, asked Silver.

Silver says the BBC has started with sport, because it is simpler. The events and the actors taking part in those events are known in advance. For example, even this far ahead you know the fixture list, venues, teams and probably the majority of the players who are going to take part in the 2010 World Cup.

News is much more complicated, because of the inevitable time lag in a breaking news event taking place, and there being canonical identifiers for it. Basic building blocks do exist, like Geonames or DBpedia, but there is no definitive database of ‘news events’.

Silver thinks that if all news organisations were using common IDs for a ‘story’, this would allow the BBC to link out more effectively and efficiently to external coverage of the same story.
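The mechanics of Silver’s point are simple: with a shared canonical ID per news event, aggregating external coverage becomes a dictionary lookup rather than fuzzy text matching. A sketch, with invented IDs and URLs:

```python
# Sketch of canonical story IDs shared across outlets.
# The event ID scheme, outlet names and URLs are invented for illustration.

coverage = {}

def register(story_id, outlet, url):
    """Each outlet files its story under the shared event identifier."""
    coverage.setdefault(story_id, []).append((outlet, url))

register("event:2010-world-cup-final", "BBC", "https://bbc.example/wc-final")
register("event:2010-world-cup-final", "Guardian", "https://guardian.example/final")

# Linking out to all external coverage of one story is now a single lookup.
print(coverage["event:2010-world-cup-final"])
```

Without the shared ID, each outlet would have to guess from headlines and timestamps which of the other’s stories describe the same event – which is exactly the time-lag and canonical-identifier problem Silver describes for breaking news.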

Silver also presented at the recent news metadata summit, and has blogged about the talk he gave that day, which specifically addressed how the news industry might deal with some of these issues.

12:32

How media sites can make use of linked data

Martin Belam, information architect for the Guardian and CurryBet blogger, reports from today’s Linked Data meet-up in London, for Journalism.co.uk.

The morning Linked Data meet-up session at ULU was part of a wider dev8d event for developers, described as ‘four days of 100 per cent pure software developer heaven’. That made it a little bit intimidating for the less technical in the audience – the notices on the rooms to show which workshops were going on were labelled with 3D barcodes, there were talks about programming ‘nanoprojectors’, and a frightening number of abbreviations like RDF, API, SPARQL, FOAF and OWL.

What is linked data?

‘Linked data’ is all about moving from a web of interconnected documents, to a web of interconnected ‘facts’. Think of it like being able to link to and access the relevant individual cells across a range of spreadsheets, rather than just having a list of spreadsheets. It looks a good candidate for being a step-change in the way that people access information over the internet.
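The “web of facts” idea can be shown in miniature: facts become (subject, predicate, object) triples, with identifiers naming concepts, so that ‘pebbledash’ is one addressable thing rather than a text string scattered across documents. The identifiers below are invented stand-ins for real URIs:

```python
# Linked data in miniature: facts as subject-predicate-object triples.
# The prefixed names stand in for full URIs and are invented for illustration.

triples = [
    ("ex:pebbledash", "rdf:type", "ex:WallFinish"),
    ("ex:pebbledash", "ex:appliedTo", "ex:ExteriorWall"),
    ("dbpedia:Roughcast", "owl:sameAs", "ex:pebbledash"),  # a link across datasets
]

def facts_about(node):
    """Everything asserted about one concept, regardless of source document."""
    return [(pred, obj) for subj, pred, obj in triples if subj == node]

print(facts_about("ex:pebbledash"))
```

The `owl:sameAs` triple is the crucial one: it is how two datasets that name the same concept differently get stitched into one queryable web, instead of two pages a human has to read and reconcile.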

What are the implications for journalism and media companies?

For a start it is important to realise that linked data can be consumed as well as published. Tom Heath from Talis gave the example of trying to find out about ‘pebbledash’ when buying a house.

At the moment, to learn about this takes a time-consuming exploration of the web as it stands, probably pogo-sticking between Google search results and individual web pages that may or may not contain useful information about pebbledash.

In a linked data web, finding facts about the ‘concept’ of pebbledash would be much easier. Now, replace ‘pebbledash’ as the example with the name of a company or a person, and you can see how there is potential for journalists in their research processes. A live example of this at work is the sig.ma search engine. Type your name in and be amazed / horrified about how much information computers are already able to aggregate about you from the structured data you are already scattering around the web.

Tom Heath elaborates on this in a paper he wrote in 2008: ‘How Will We Interact with the Web of Data?’. However, as exciting as some people think linked data is, he struggled to name a ‘whizz-bang’ application that has been built yet.

Linked data at the BBC

The BBC have been the biggest media company so far involved in using and publishing linked data in the UK. Tom Scott talked about their Wildlife Finder, which uses data to build a website that brings together natural history clips, the BBC’s news archive, and the concepts that make up our perception of the natural world.

Simply aggregating the data is not enough, and the BBC hand-builds ‘collections’ of curated items. Scott said ‘curation is the process by which aggregate data is imbued with personalised trust’, citing a collection of David Attenborough’s favourite clips as an example.

Tom Scott argued that it didn’t make sense for the BBC to spend money replicating data sources that are already available on the web, and so Wildlife Finder builds pages using existing sources like Wikipedia, WWF, ZSL and the University of Michigan Museum of Zoology. A question from the floor asked him about the issues of trust around the BBC using Wikipedia content. He said that a review of the content before the project went live showed that it was, on the whole, ‘pretty good’.

As long as the BBC was clear on the page where the data was coming from, he didn’t see there being an editorial issue.

Other presentations during the day are due to be given by John Sheridan and Jeni Tennison from data.gov.uk, Georgi Kobilarov of Uberblic Labs and Silver Oliver from the BBC. The afternoon is devoted to a more practical series of workshops allowing developers to get to grips with some of the technologies that underpin the web of data.
