
October 08 2011

12:59

Semantic web - rNews 1.0 is an official standard now

Semantic Web :: At a gathering of the International Press Telecommunications Council (IPTC), rNews took the step from being a proposal to being a formal standard. rNews was created by the IPTC and made its public debut earlier this year as a proposal for using RDFa to annotate news-specific metadata in HTML documents.

Continue reading Eric Franzon at semanticweb.com.
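
For readers who want to see what this looks like in practice, here is a minimal, hedged sketch of rNews-style RDFa and one way a machine might read it, using only Python's standard library. The vocabulary URI and the property names (headline, datePublished, creator) follow the rNews 1.0 documentation as I understand it, and the article markup itself is invented, so verify the details against the IPTC spec before relying on them.

```python
# A minimal sketch of rNews-style RDFa markup and a toy consumer for it.
# The vocab URI and property names are taken from the rNews 1.0 docs as
# published by the IPTC (treat them as assumptions); the article is invented.
from html.parser import HTMLParser

RNEWS_HTML = """
<article vocab="http://iptc.org/std/rNews/2011-10-07#" typeof="Article">
  <h1 property="headline">rNews becomes a formal IPTC standard</h1>
  <time property="datePublished" content="2011-10-08">8 October 2011</time>
  <span property="creator">Example Reporter</span>
</article>
"""

class RNewsExtractor(HTMLParser):
    """Collect (property, value) pairs from RDFa 'property' attributes."""

    def __init__(self):
        super().__init__()
        self.pending = None   # property whose text content we still need
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("property")
        if prop:
            if "content" in attrs:
                # RDFa allows an explicit @content value, as on <time>.
                self.fields[prop] = attrs["content"]
            else:
                self.pending = prop

    def handle_data(self, data):
        if self.pending and data.strip():
            self.fields[self.pending] = data.strip()
            self.pending = None

parser = RNewsExtractor()
parser.feed(RNEWS_HTML)
print(parser.fields)
# -> {'headline': 'rNews becomes a formal IPTC standard',
#     'datePublished': '2011-10-08', 'creator': 'Example Reporter'}
```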

December 21 2010

15:26

Videos: Linked data and the semantic web

Courtesy of the BBC College of Journalism, we’ve got video footage from all of our sessions at news:rewired – beyond the story, 16 December 2010.

We’ll be grouping the video clips by session – you can view all footage by looking at the multimedia category on this site.

Martin Moore

Martin Belam

Simon Rogers

Silver Oliver

December 19 2010

18:00

Games, systems and context in journalism at News Rewired

I went to News Rewired on Thursday, along with dozens of other journalists and folk concerned in various ways with news production. Some threads that ran through the day for me were discussions of how we publish our data (and allow others to do the same), how we link our stories together with each other and the rest of the web, and how we can help our readers to explore context around our stories.

One session focused heavily on SEO for specialist organisations, but included a few sharp lessons for all news organisations. Frank Gosch spoke about the importance of ensuring your site’s RSS feeds are up to date and allow other people to easily subscribe to and even republish your content. Instead of clinging tight to content, it’s good for your search rankings to let other people spread it around.

James Lowery echoed this theme, suggesting that publishers, like governments, should look at providing and publishing their data in re-usable, open formats like XML. It’s easy for data journalists to get hung up on how local councils, for instance, are publishing their data in PDFs, but to miss how our own news organisations are putting out our stories, visualisations and even datasets in formats that limit or even prevent re-use and mashup.

Following on from that, in the session on linked data and the semantic web, Martin Belam spoke about the Guardian’s API, which can be queried to return stories on particular subjects and which is starting to use unique identifiers – MusicBrainz IDs and ISBNs, for instance – to allow lists of stories to be pulled out not simply by text string but using a meaningful identification system. He added that publishers have to license content in a meaningful way, so that it can be reused widely without running into legal issues.
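
As an illustration of the kind of query Belam described, here is a small sketch against the Guardian’s Content API. The endpoint, parameter names and response shape follow the Open Platform documentation as I understand it (the free tier has historically accepted “test” as an API key), but treat all of them as assumptions to check against the current docs rather than as a definitive client.

```python
# A hedged sketch of querying the Guardian Content API for stories on a
# subject. Endpoint and parameters follow the public docs; verify before use.
import json
import urllib.parse
import urllib.request

def guardian_search(query, api_key="test"):
    """Return titles of Guardian stories matching a free-text query."""
    params = urllib.parse.urlencode({
        "q": query,          # e.g. a name resolved via a MusicBrainz ID, or an ISBN
        "format": "json",
        "api-key": api_key,  # the free tier has historically accepted "test"
    })
    url = "https://content.guardianapis.com/search?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [item["webTitle"] for item in data["response"]["results"]]

for title in guardian_search("Radiohead"):
    print(title)
```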

Silver Oliver said that semantically tagged data, linked data, creates opportunities for pulling in contextual information for our stories from all sorts of other sources. And conversely, if we semantically tag our stories and make it possible for other people to re-use them, we’ll start to see our content popping up in unexpected ways and places.
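
To make that concrete, here is a minimal sketch of pulling contextual facts about a story entity from one linked-data source, DBpedia’s public SPARQL endpoint. The endpoint and the dbo:abstract property are standard DBpedia conventions, but the query is illustrative only; public endpoints are rate-limited and change over time.

```python
# A sketch of fetching context for a story entity from DBpedia via SPARQL.
import json
import urllib.parse
import urllib.request

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/BBC> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
LIMIT 1
"""

params = urllib.parse.urlencode({
    "query": QUERY,
    "format": "application/sparql-results+json",
})
request = urllib.request.Request(
    "https://dbpedia.org/sparql?" + params,
    headers={"User-Agent": "context-demo/0.1"},  # be polite to the endpoint
)
with urllib.request.urlopen(request) as resp:
    bindings = json.load(resp)["results"]["bindings"]

for row in bindings:
    print(row["abstract"]["value"][:200], "...")
```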

And in the long term, he suggested, we’ll start to see people following stories completely independently of platform, medium or brand. Tracking a linked data tag (if that’s the right word) and following what’s new, what’s interesting, and what will work on whatever device I happen to have in my hand right now and whatever connection I’m currently on – images, video, audio, text, interactives; wifi, 3G, EDGE, offline. Regardless of who made it.

And this is part of the ongoing move towards creating a web that understands not only objects but also relationships, a world of meaningful nouns and verbs rather than text strings and many-to-many tables. It’s impossible to predict what will come from these developments, but – as an example – it’s not hard to imagine being able to take a photo of a front page on a newsstand and use it to search online for the story it refers to. And the results of that search might have nothing to do with the newspaper brand.

That’s the downside to all this. News consumption – already massively decentralised thanks to the social web – is likely to drift even further away from the cosy silos of news brands (with the honourable exception of paywalled gardens, perhaps). What can individual journalists and news organisations offer that the cloud can’t?

One exciting answer lies in the last session of the day, which looked at journalism and games. I wrote some time ago about ways news organisations were harnessing games, and could do in the future – and the opportunities are now starting to take shape. With constant calls for news organisations to add context to stories, it’s easy to miss the possibility that – as Philip Trippenbach said at News Rewired – you can’t explain a system with a story:

Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work.

Many of the issues we cover – climate change, government cuts, the deficit – at macro level are systems that could be interestingly and interactively explored with games. (Like this climate change game here, for instance.) Other stories can be articulated and broadened through games in a way that allows for real empathy between the reader/player and the subject because they are experiential rather than intellectual. (Like Escape from Woomera.)

Games allow players to explore systems, scenarios and entire universes in detail, prodding their limits and discovering their flaws and hidden logic. They can be intriguing, tricky, challenging, educational and complex, like the best stories can be, but they’re also fun to experience, unlike so much news content that has a tendency to feel like work.

(By the by, this is true not just of computer and console games but also of live, tabletop, board and social games of all sorts – there are rich veins of community journalism that could be developed in these areas too, as the Rochester Democrat and Chronicle is hoping to prove for a second time.)

So the big things to take away from News Rewired, for me?

  • The systems within which we do journalism are changing, and the semantic web will most likely bring another seismic change in news consumption and production.
  • It’s going to be increasingly important for us to produce content that both takes advantage of these new technologies and allows others to use these technologies to take advantage of it.
  • And by tapping into the interactive possibilities of the internet through games, we can help our readers explore complex systems that don’t lend themselves to simple stories.

Oh, and some very decent whisky.

Cross-posted at Metamedia.

December 16 2010

15:05

LIVE: Linked data and the semantic web

We’ll have Matt Caines and Nick Petrie from Wannabe Hacks liveblogging for us at news:rewired all day. Follow individual posts on the news:rewired blog for up-to-date information on all our sessions.

We’ll also have blogging over the course of the day from freelance journalist Rosie Niven.

November 16 2010

15:27

Extractiv: crawl webpages and make semantic connections

[Image: Extractiv screenshot]

Here’s another data analysis tool which is worth keeping an eye on. Extractiv “lets you transform unstructured web content into highly-structured semantic data.” Eyes glazing over? Okay, over to ReadWriteWeb:

“To test Extractiv, I gave the company a collection of more than 500 web domains for the top geolocation blogs online and asked its technology to sort for all appearances of the word “ESRI.” (The name of the leading vendor in the geolocation market.)

“The resulting output included structured cells describing some person, place or thing, some type of relationship it had with the word ESRI and the URL where the words appeared together. It was thus sortable and ready for my analysis.

“The task was partially completed before being rate limited due to my submitting so many links from the same domain. More than 125,000 pages were analyzed, 762 documents were found that included my keyword ESRI and about 400 relations were discovered (including duplicates). What kinds of patterns of relations will I discover by sorting all this data in a spreadsheet or otherwise? I can’t wait to find out.”

What that means in even plainer language is that Extractiv will crawl thousands of webpages to identify relationships and attributes for a particular subject.

This has obvious applications for investigative journalists: give the software a name (of a person or company, for example) and a set of base domains (such as news websites, specialist publications and blogs, industry sites, etc.) and set it going. At the end you’ll have a broad picture of what other organisations and people have been connected with that person or company.

It won’t answer your questions, but it will suggest some avenues of enquiry, and potential sources of information. And all within an hour.
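
Extractiv’s own API isn’t documented in this post, so the sketch below is a hypothetical, much-simplified stand-in for the workflow just described: fetch a set of pages, find sentences that mention your keyword, and record capitalised phrases that co-occur with it as crude candidate relations. Real entity and relation extraction is far more sophisticated, and the seed URL here is a placeholder.

```python
# A hypothetical stand-in for the Extractiv-style workflow described above:
# scan pages for a keyword and record co-occurring capitalised phrases as
# crude candidate "relations". Not Extractiv's actual API.
import re
import urllib.request

KEYWORD = "ESRI"
SEED_URLS = ["https://example.com/blog/post-1"]   # placeholder seed list

def crude_relations(url, keyword=KEYWORD):
    """Yield (keyword, candidate_entity, url) triples found on one page."""
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    except OSError:
        return                                     # unreachable page: skip it
    text = re.sub(r"<[^>]+>", " ", html)           # strip tags, crudely
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if keyword in sentence:
            # Capitalised multi-word phrases as candidate entities.
            for name in re.findall(r"\b(?:[A-Z][a-z]+\s){1,3}[A-Z][a-z]+", sentence):
                yield keyword, name.strip(), url

for url in SEED_URLS:
    for relation in crude_relations(url):
        print(relation)
```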

Time and cost

ReadWriteWeb reports that the process above took around an hour “and would have cost me less than $1, after a $99 monthly subscription fee. The next level of subscription would have been performed faster and with more simultaneous processes running at a base rate of $250 per month.”

As they say, the tool represents “commodity level, DIY analysis of bulk data produced by user generated or other content, sortable for pattern detection and soon, Extractiv says, sentiment analysis.”

Which is nice.

September 03 2010

10:56

Why the US and UK are leading the way on semantic web

Following his involvement in the first Datajournalism meetup in Berlin earlier this week, Martin Belam, the Guardian’s information architect, looks at why the US and UK may have taken the lead in the semantic web, as one audience member suggested on the day.

In an attempt to answer the question, he puts forward four themes on his currybet.net blog that he feels may play a part. In summary, they are:

  • The sharing of a common language which helps both nations access the same resources and be included in comparative datasets.
  • Competition across both sides of the pond driving innovation.
  • Successful business models already being used by the BBC and, even more valuably, being explained on their internet blogs.
  • Open data and a history of freedom of information court cases which make official information more likely to be made available.

In his full post here he also has tips on how to follow the UK’s lead, such as getting involved in hacks-and-hackers-type events.



August 10 2010

08:00

#Tip of the day from Journalism.co.uk – HTML5 tagging

Get up to date with HTML5 tagging using Currybet.net's blog post outlining some of the 30 new tags for coding page structure, article structure and semantic mark-up. Tipster: Rachel McAthy. To submit a tip to Journalism.co.uk, use this link - we will pay a fiver for the best ones published.


June 17 2010

14:00

“A super sophisticated mashup”: The semantic web’s promise and peril

[Our sister publication Nieman Reports is out with its latest issue, and its focus is the new digital landscape of journalism. There are lots of interesting articles, and we're highlighting a few. Here, former Knight Fellow Andrew Finlayson explains the role of journalists in the semantic web. —Josh]

In the movie Terminator, humanity started down the path to destruction when a supercomputer called Skynet started to become smarter on its own. I was reminded of that possibility during my research about the semantic web.

Never heard of the semantic web? I don’t blame you. Much of it is still in the lab, the plaything of academics and computer scientists. To hear some of them debate it, the semantic web will evolve, like Skynet, into an all-powerful thing that can help us understand our world or create various crises when it starts to develop a form of connected intelligence.

Intrigued? I was. Particularly when I asked computer scientists about how this concept could change journalism in the next five years. The true believers say the semantic web could help journalists report complex, ever-changing stories and reach new audiences. The critics doubt the semantic web will be anything but a high-tech fantasy. But even some of the doubters are willing to speculate that computers using pieces of the semantic web will increasingly report much of the news in the not too distant future.

Keep reading at Nieman Reports »

April 05 2010

16:03

Build Your Own NYT Linked Data Application

Learn how to build an application with linked data from The Times.

February 24 2010

14:02

A history of linked data at the BBC

Martin Belam, information architect for the Guardian and CurryBet blogger, reports from today’s Linked Data meet-up in London, for Journalism.co.uk.

You can read the first report, ‘How media sites can use linked data’ at this link.

There are many challenges when using linked data to cover news and sport, Silver Oliver, information architect in the BBC’s journalism department, told delegates at today’s Linked Data meet-up session at ULU, part of a wider dev8d event for developers.

Initially, newspapers saw the web as just another linear distribution channel, said Silver. That meant we ended up with lots and lots of individually published news stories online that needed information architects to gather them up into useful piles.

He believes we’ve hit the boundaries of that approach, and something like the data-driven approach of the BBC’s Wildlife Finder is the future for news and sport.

But the challenge is to find models for sport, journalism and news.

A linked data ecosystem is built out of a content repository, a structure for that content, and then the user experience that is laid over that content structure.

But how do you populate these datasets in departments and newsrooms that barely have the resource to manage small taxonomies or collections of external links, let alone populate a huge ‘ontology of news’, asked Silver.

Silver says the BBC has started with sport, because it is simpler. The events and the actors taking part in those events are known in advance. For example, even this far ahead you know the fixture list, venues, teams and probably the majority of the players who are going to take part in the 2010 World Cup.

News is much more complicated, because of the inevitable time lag in a breaking news event taking place, and there being canonical identifiers for it. Basic building blocks do exist, like Geonames or DBpedia, but there is no definitive database of ‘news events’.
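
As a small example of leaning on one of those building blocks, the sketch below resolves a free-text place name to a stable GeoNames identifier that different publishers could share. The searchJSON endpoint is GeoNames’ public API; “demo” is their sample account and heavily rate-limited, so substitute a registered username.

```python
# A sketch of using GeoNames as a shared-identifier building block:
# turn a free-text place name into a stable geonameId.
import json
import urllib.parse
import urllib.request

def geoname_id(place, username="demo"):
    """Return the GeoNames ID of the best match for a place name, or None."""
    params = urllib.parse.urlencode({
        "q": place,
        "maxRows": 1,
        "username": username,   # "demo" is rate-limited; register your own
    })
    url = "http://api.geonames.org/searchJSON?" + params
    with urllib.request.urlopen(url) as resp:
        matches = json.load(resp)["geonames"]
    return matches[0]["geonameId"] if matches else None

print(geoname_id("Johannesburg"))   # a stable ID any newsroom could reuse
```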

Silver thinks that if all news organisations were using common IDs for a ‘story’, this would allow the BBC to link out more effectively and efficiently to external coverage of the same story.

Silver also presented at the recent news metadata summit, and has blogged about the talk he gave that day, which specifically addressed how the news industry might deal with some of these issues.


12:32

How media sites can make use of linked data

Martin Belam, information architect for the Guardian and CurryBet blogger, reports from today’s Linked Data meet-up in London, for Journalism.co.uk.

The morning Linked Data meet-up session at ULU was part of a wider dev8d event for developers, described as ‘four days of 100 per cent pure software developer heaven’. That made it a little bit intimidating for the less technical in the audience – the notices on the rooms to show which workshops were going on were labelled with 3D barcodes, there were talks about programming ‘nanoprojectors’, and a frightening number of abbreviations like RDF, API, SPARQL, FOAF and OWL.

What is linked data?

‘Linked data’ is all about moving from a web of interconnected documents, to a web of interconnected ‘facts’. Think of it like being able to link to and access the relevant individual cells across a range of spreadsheets, rather than just having a list of spreadsheets. It looks a good candidate for being a step-change in the way that people access information over the internet.
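
To make the spreadsheet analogy concrete, here is a toy example using the rdflib library (`pip install rdflib`): each triple is one addressable statement, like a single cell, and shared URIs let statements in different datasets point at the same concept. The resource URI is DBpedia-style, but the facts and the ex: vocabulary are invented for the example.

```python
# A toy "web of facts": individual, addressable statements rather than
# whole documents. The ex: vocabulary is invented for illustration.
from rdflib import Graph, Literal, Namespace

DBR = Namespace("http://dbpedia.org/resource/")
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("dbr", DBR)
g.bind("ex", EX)

# Three "cells" of data about one concept, each independently linkable.
g.add((DBR.Pebbledash, EX.isA, Literal("exterior wall render")))
g.add((DBR.Pebbledash, EX.madeOf, Literal("mortar and small pebbles")))
g.add((DBR.Pebbledash, EX.commonIn, Literal("UK housing stock")))

print(g.serialize(format="turtle"))
```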

What are the implications for journalism and media companies?

For a start it is important to realise that linked data can be consumed as well as published. Tom Heath from Talis gave the example of trying to find out about ‘pebbledash’ when buying a house.

At the moment, to learn about this takes a time-consuming exploration of the web as it stands, probably pogo-sticking between Google search results and individual web pages that may or may not contain useful information about pebbledash. [Image below: secretlondon123 on Flickr]

In a linked data web, finding facts about the ‘concept’ of pebbledash would be much easier. Now, replace ‘pebbledash’ as the example with the name of a company or a person, and you can see how there is potential for journalists in their research processes. A live example of this at work is the sig.ma search engine. Type your name in and be amazed / horrified about how much information computers are already able to aggregate about you from the structured data you are already scattering around the web.

Tom Heath elaborates on this in a paper he wrote in 2008: ‘How Will We Interact with the Web of Data?’. However, as exciting as some people think linked data is, he struggled to name a ‘whizz-bang’ application that has been built yet.

Linked data at the BBC

The BBC have been the biggest media company so far involved in using and publishing linked data in the UK. Tom Scott talked about their Wildlife Finder, which uses data to build a website that brings together natural history clips, the BBC’s news archive, and the concepts that make up our perception of the natural world.

Simply aggregating the data is not enough, and the BBC hand-builds ‘collections’ of curated items. Scott said ‘curation is the process by which aggregate data is imbued with personalised trust’, citing a collection of David Attenborough’s favourite clips as an example.

Tom Scott argued that it didn’t make sense for the BBC to spend money replicating data sources that are already available on the web, and so Wildlife Finder builds pages using existing sources like Wikipedia, WWF, ZSL and the University of Michigan Museum of Zoology. A question from the floor asked him about the issues of trust around the BBC using Wikipedia content. He said that a review of the content before the project went live showed that it was, on the whole, ‘pretty good’.

As long as the BBC was clear on the page where the data was coming from, he didn’t see there being an editorial issue.

Other presentations during the day are due to be given by John Sheridan and Jeni Tennison from data.gov.uk, Georgi Kobilarov of Uberblic Labs and Silver Oliver from the BBC. The afternoon is devoted to a more practical series of workshops allowing developers to get to grips with some of the technologies that underpin the web of data.


January 22 2010

09:35

The Media Consortium: Media organisations should share more metadata

Great post here looking at the semantic web and how it will influence the future of online journalism.

The next phase of the semantic web will be “a step beyond aggregation that aims to make information more meaningful and useful”, and journalists and media organisations can aid this development by sharing metadata more broadly and focusing more on users’ long-term experiences of their websites.

Together, such data may be more valuable than if media organisations reserved data for their own purposes. Pooling metadata can help improve artificial intelligence, which drives the automated aspects of discovering new information on the semantic web.

The benefits for media organisations? Better websites for users, the capacity for news to challenge readers and bottom-up rather than top-down approaches to journalism and making meaning, suggests the post.

via The Media Consortium » Radical New Ways of Meaning-Making and Filtering.


December 16 2009

17:10

KNC 2010: NewsGraf wants to slap a search box on journalists’ brains

[EDITOR'S NOTE: The Knight News Challenge closed submissions for the 2010 awards last night at midnight, which means that another batch of great ideas, interesting concepts, and harebrained schemes got their chance to convince the Knight Foundation they deserve funding. (Trust us — great, interesting, and harebrained are all well represented at this stage each year.) We've been picking through the applications available for public inspection the past few weeks, and over the next few days Mac is going to highlight some of the ideas that struck us as worthy of a closer look — starting today with NewsGraf, below.

But we also want your help. Do you know of a really interesting News Challenge application? Did you submit one yourself? Let us know about it. Either leave a comment on this post or email Mac Slocum. In either case, keep your remarks brief — 200 words or less. We'll run some of the ones you think are noteworthy in a post later this week. —Josh]

The most eye-catching thing about NewsGraf’s proposal is its price tag: $950,000 over two years. That stands out in a sea of $50,000 and $100,000 requests.

But if you spend a little time digging into the intricacies of NewsGraf, that big price becomes downright reasonable. Cheap even. That’s because with NewsGraf, Mike Aldax and John Marshall want to digitally duplicate the knowledge, connections and synapses of a veteran journalist. That kind of audacity doesn’t come cheap.

Technologically speaking, NewsGraf ventures into the murky world of semantic tagging and social graphs. Unless you’ve got a computer science degree, it’s hard to get a handle on exactly what NewsGraf is. It’s a database, it’s a search engine, but it’s also a connectivity machine.

It’s easier to compare NewsGraf to a person — think of it as a veteran reporter. Someone who carries around a vast collection of interviews, research, and general knowledge gleaned from years working a beat. All this info is tucked neatly into her memory, and she taps this personal database whenever she’s assembling a story, searching for red flags, patterns, and relationships. It’s an editorial sixth sense.

But there’s a big problem with this brain-based model: It disappears when the brain — and its associated owner — get laid off. With news organizations already running smaller and faster, how can they possibly overcome this growing knowledge gap?

Enter NewsGraf. The project is still on the drawing board, but the idea is to capture all that connective information in a format that’s accessible to anyone with a web browser. A visitor can enter the name of a local newsmaker and see the threads that bind that person to others in the community. It’s like Facebook, as designed by a beat reporter.

Data will come from government databases, local newspapers, blogs, and other sources. After running a query, a user can click through to the originating stories for deeper information. NewsGraf is merely the conduit here; Marshall said they want to send users to the information, not keep them locked within NewsGraf’s walls. As the application puts it:

As newspapers find it increasingly difficult to send reporters to monitor local politics and public discourse, communities will need alternative mechanisms to ensure transparency and good government. Local journalists and citizens will be able to draw upon NewsGraf’s data as a starting point for further investigation, uncovering important relationships that may be influencing decisions being made in their community.
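
Since NewsGraf is still on the drawing board, the sketch below is hypothetical and much simplified, but it illustrates the data structure being described: people and organisations as nodes, shared coverage as edges, and every edge keeping pointers back to its originating stories. All names and URLs are invented.

```python
# A hypothetical, simplified sketch of a NewsGraf-style connection store:
# an edge list keyed on pairs of names, each edge carrying source URLs.
from collections import defaultdict

edges = defaultdict(list)   # (name_a, name_b) -> list of source URLs

def connect(a, b, source):
    """Record that two newsmakers appear together in a sourced story."""
    edges[tuple(sorted((a, b)))].append(source)

connect("Jane Mayor", "Acme Developers LLC", "https://example.com/story-1")
connect("Jane Mayor", "Harbor Commission", "https://example.com/story-2")

def threads(person):
    """Every connection binding `person` to others, with the source stories."""
    for (a, b), sources in edges.items():
        if person in (a, b):
            yield (b if person == a else a), sources

for other, sources in threads("Jane Mayor"):
    print(other, "->", ", ".join(sources))
```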

The team behind the idea combines journalism (Aldax covers city hall for The San Francisco Examiner) and tech (Marshall is a software developer and a former VP at AOL). NewsGraf will focus on San Francisco and the Bay Area if it wins a News Challenge grant. But if funding doesn’t come through, Aldax hopes someone else runs with the idea. “We just want to see this happen,” he said.

December 10 2009

15:39

Next year’s news about the news: What we’ll be fighting about in 2010

I’ve helped organize a lot of future of journalism conferences this year, and have done some research for a few policy-oriented “future of journalism” white papers. And let’s face it: as Alan Mutter told On the Media this weekend, we’re edging close to the point of extreme rehash.

This isn’t to say there won’t be more such confabs, or that I won’t be attending most of them; journalists (blue-collar and shoe-leather types that they are) may not realize that such “talking” is actually the lifeblood of academia, for better or worse. However, as 2009 winds down, I do think that it might be worthwhile to try to summarize a few of the things we’ve more or less figured out this year, and point towards a few of the newer topics I see looming on the horizon. In other words, maybe there are some new things we should be having conferences about in 2010.

In the first section of this post, I summarize what I think we “kinda-sorta” learned over the past year. In the next, I want to point us towards some of the questions we should be asking in 2010.

To summarize, I think we’re reaching consensus on (1) the role of professional and amateur journalists in the new media ecosystem, (2) the question of what kind of news people will and won’t “pay” for, and (3) the inevitable shrinking and nicheification of news organizations. And I think the questions we should be asking next year include (1) the way changes in journalism are changing our politics, (2) the relationship between journalism, law, and public policy, (3) what kind of news networks we’ll see develop in this new ecosystem, (4) the future of j-school, and (5) the role of journalists, developers, data, and “the algorithm.”

But first, here’s what we know.

What we kinda-sorta know

As Jay Rosen has tweeted a number of times over the past few months, what’s remarkable about the recent wave of industry and academic reports on journalism is the degree to which they consolidate the “new conventional wisdom” in ways that would have seemed insane even a few years ago. In other words, we now kinda-sorta know things that we didn’t before, and maybe we’re even close to putting some old arguments to bed. Here are some (big) fights that may be tottering toward their expiration date.

1. “Bloggers” versus “journalists” is (really, really) over. Yes, yes. We’ve been saying it for years. But maybe this time it’s actually true. One of the funny things about recent pieces like this one in Digital Journalist or this one from Fast Company is just how old-fashioned they seem, how concerned they are with fighting yesterday’s battles. The two pieces, of course, show that the fighting won’t actually ever go away…but maybe we need to start ignoring most of it.

2. Some information won’t be free, but probably not enough to save big news organizations. If “bloggers vs. journalists” was the battle of 2006, the battle of 2009 was over that old canard, “information wants to be free.” We can expect this fight to go on for a while, too, but even here there seems to be an emerging, rough consensus. In short: Most people won’t pay anything for traditional journalism, but a few people will pay something, most likely for content they (1) care about and (2) can’t get anywhere else. Whether or not this kind of money will be capable of sustaining journalism as we’ve known it isn’t clear, but it doesn’t seem likely. All of the current battles — Microsoft vs. Google, micropayments vs. metered paywalls, and so on — are probably just skirmishes around this basic point.

3. The news will increasingly be produced by smaller, de-institutionalized organizations. If “bloggers vs. journalists” is over, and if consumers won’t ever fully subsidize the costs of old-style news production, and if online journalism advertising won’t ever fully equal its pulp and airwaves predecessors, then the journalism will still get produced. It will just get produced differently, most likely by smaller news organizations focusing more on niche products. Indeed, I think this is the third takeaway from 2009. Omnibus is going away. Something different — something smaller — is taking its place.

What we might be fighting about next year

So that’s what we’ve (kinda sorta) learned. If we pretend (just for a moment) that all those fights are settled, what might be some new, useful things to argue about in 2010? I’ve come up with a list of five, though I’m sure there are others.

1. What kind of politics will be facilitated by this new world? In the old world, the relationship between journalism and politics was fairly clear, and expressed in an endless series of (occasionally meaningful) cliches. But changes on one side of the equation inevitably mean changes on the other. The most optimistic amongst us argue that we might be headed for a new era of citizen participation. Pessimists see the angry town halls unleashed this summer and lament the days when the passions of the multitude could be moderated by large informational institutions. Others, like my colleague Rasmus Kleis Nielsen at Columbia, take a more nuanced view. Whatever the eventual answer, this is a question we should be trying to articulate.

2. What kind of public policies and laws will govern this new world? Law and public policy usually move a few steps “behind” reality, often to the frustration of those on the ground floor of big, social changes. There’s a reason why people have been frustrated with the endless congressional debates over the journalism shield law, and with the FTC hearings on journalism — we’re frustrated because, as far as we’re concerned (and as I noted above), we think we have it all figured out. But our government and legal system don’t work that way. Instead, they act as “consolidating institutions,” institutions that both ratify a social consensus that’s already been achieved and also tilt the playing field in one direction or another — towards incumbent newspapers, for example. So the FTC, the FCC, the Congress, the Supreme Court — all these bodies will eventually be weighing in on what they want this new journalistic world to look like. We should be paying attention to that conversation.

3. What kind of networks will emerge in this new media ecosystem? It’s a strong tenet amongst most journalism futurists that “the future of news is networked,” that the new media ecosystem will be the kind of collaborative, do-what-you-do-best-and-link-to-the-rest model most recently analyzed by the CUNY “New Business Models” project. But what if the future of news lies in networks of a different kind? What if the news networks we’re starting to see emerge are basically the surviving media companies (or big portals) diversifying and branding themselves locally? This is already going on with the Huffington Post local initiative, and we can see national newspapers like The New York Times trying out variations of this local strategy. A series of “local networks,” ultimately accountable to larger, centralized, branded organizations, may not be what “networked news” theorists have in mind when they talk about networks, but it seems just as likely to happen as a more “ecosystem-esque” approach.

4. What’s the future of journalism school? This one’s fairly self-explanatory. But as the profession it serves mutates, what’s in store for the venerable institution of j-school? Dave Winer thinks we might see the emergence of journalism school for all; Cody Brown thinks j-school might someday look like the MIT Center For Collective Intelligence. Either way, though, j-school probably won’t look like it does now. Even more profoundly, perhaps, the question of j-school’s future is inseparable from questions about the future of the university in general, which, much like the news and music industries, might be on the verge of its own massive shake-up.

5. Human beings, data, and “the algorithm.” This one fascinates me, and it seems more important every day. In a world of Demand Media, computational journalism, and AOL’s news production strategy, questions about the lines between quantitative, qualitative, and human journalism seem ever more pressing. If we are moving towards some kind of semantic web, what does that mean for the future of news? What role are programmers and developers playing? How will they interact with journalists? Is journalism about data, about narrative, or both? Is journalism moving from a liberal art to an information science? And so on.

These are all big, big questions. They get to the heart of democracy, public policy, law, organizations, economics, education, and even what it means to be a human being. They may not be the same questions we’ve been debating these past several years, but maybe it’s time to start pondering something new.

Photo by Kate Gardiner used under a Creative Commons license.
