SemanticWeb

ThoughtStorms Wiki

June 2009 : Apparently LinkedData is the future of the SemWeb : https://web.archive.org/web/20110124022640/http://www.semanticsincorporated.com/2009/05/tying-web-30-the-semantic-web-and-linked-data-together-part-23-linked-data-is-a-medium.html

Smells awfully like SynWeb to me (as in http://platformwars.blogspot.com/2007/10/im-keeping-open-mind-on-new-round-of.html )

And that guy's attempt to explain what's wrong with RelationalDatabases is hilariously back-to-front.

Nov.2007 : TimBernersLee : http://dig.csail.mit.edu/breadcrumbs/node/215

Actually, I'm still sceptical ...

June 2004 : Quick admission : I may be wrong in my scepticism about the Semantic Web.

Just read a draft of GuillaumeBarreau's recent paper which suggests an application for the SW I think is plausible and where I see the potential benefit of using FOAF to integrate with other marked up stuff. Maybe more so with DOAP

PhilJones

My recent criticisms and discussions :

At a glance (from http://www.semaview.com/c/SW.html)

http://www.semaview.com/d/SWIllustrated_1280x1024.jpg

Intro : http://www.altova.com/semantic_web.html

Reference card : http://ebiquity.umbc.edu/v2.1/resource/html/id/94/

Criticism

  • SeanMcGrath : IT people don't do metadata - unless it's for source code. Beyond source code, you need to put metadata creation in the hands of content specialists. Otherwise, it simply won't get created. What you will get instead will be more and more abstract models for how to manipulate the metadata if only you had it.

: http://seanmcgrath.blogspot.com/archives/20041017seanmcgratharchive.html#109834474309275262

An ALife perspective

I put on my ALife hat for a minute. What's the Semantic Web for? To explicitly mark up documents with information in machine readable and processable form. But, as Shirky points out, it may flounder on the fact that this is a very expensive / impossible thing to do.

But what the semantic web is really about is allowing computers to do various forms of processing on data : to search, classify, make inferences from it. From an ALife perspective I don't try to give my agents an unambiguous, complete internal representation of my data. (Or wrap it in an unambiguous external one.) I simply allow them to respond to cues in the environment. What bits of documents afford certain kinds of interpretation or manipulation? (OnAffordance)

Then I hack a whole number of tropisms into my software, to take advantage of these environmental cues. I build my agents up, layer after layer, to do smarter and more interesting things with the world (web) as it really is. (See SubsumptionArchitecture)

Also StigmergicSystems, finding and leaving cues in the environment.
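A toy sketch of what a "tropism" might look like in code. All the names here are hypothetical illustrations, not anything from the wiki: the idea is just that each tropism is a cheap pattern matcher that reacts to a cue in raw, unmarked-up text, and the agent is a layered pile of them.

```python
import re

# Each "tropism" is a cheap heuristic that reacts to a cue found in raw
# text, rather than requiring the author to have added explicit markup.
# The agent is simply a stack of these, layered SubsumptionArchitecture-style.
TROPISMS = [
    ("email", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")),
    ("url", re.compile(r"https?://\S+")),
    ("iso_date", re.compile(r"\b\d{4}-\d{2}-\d{2}\b")),
]

def react(text):
    """Return whatever cues the agent's tropisms pick up from a page."""
    return {name: pattern.findall(text) for name, pattern in TROPISMS}

hits = react("Posted 2004-06-12 by phil@example.com, see http://thoughtstorms.info")
# hits["iso_date"] → ["2004-06-12"]
```

Nothing here understands anything; the page simply affords date-ness or link-ness to whichever layer is looking for it.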

OTOH maybe ALife failed?

DareObasanjo has an interesting point comparing the Semantic Web with mere syntactic processing :

However there are further drawbacks to using the semantics based approach than using the XML-based syntactic approach. In certain cases, where the mapping isn't merely a case of showing equivalencies between the semantics of similarly structured elements (e.g. the equivalent of element renaming such as stating that a url and link element are equivalent) an ontology language is insufficient and a Turing complete transformation language like XSLT is not. A good example of this is another example from RSS Bandit. In various RSS 2.0 feeds there are two popular ways to specify the date an item was posted, the first is by using the pubDate element which is described as containing a string in the RFC 822 format while the other is using the dc:date element which is described as containing a string in the ISO 8601 format. Thus even though both elements are semantically equivalent, syntactically they are not. This means that there still needs to be a syntactic transformation applied after the semantic transformation has been applied if one wants an application to treat pubDate and dc:date as equivalent. This means that instead of making one pass with an XSLT stylesheet to perform the transformation in the XML-based solution, two transformation techniques will be needed in the RDF-based solution and it is quite likely that one of them would be XSLT.

http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=5b0e3e66-71af-4c12-903b-cde6e9c7d439

and

http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=5b31837c-49cc-4d1d-9f14-fd25df8b54f2

continues the theme.
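The pubDate / dc:date point is easy to see concretely. A minimal sketch (the function name and sample values are mine, not Dare's): even after an ontology has told you the two elements "mean the same thing", you still need per-format syntactic parsing before the values compare equal.

```python
from email.utils import parsedate_to_datetime  # RFC 822 date parser
from datetime import datetime                  # fromisoformat handles ISO 8601

def parse_item_date(element, value):
    # Semantically equivalent elements, syntactically different formats:
    # pubDate carries an RFC 822 string, dc:date an ISO 8601 string.
    if element == "pubDate":
        return parsedate_to_datetime(value)
    if element == "dc:date":
        return datetime.fromisoformat(value)
    raise ValueError(f"unknown date element: {element}")

d1 = parse_item_date("pubDate", "Sat, 07 Sep 2002 00:00:01 GMT")
d2 = parse_item_date("dc:date", "2002-09-07T00:00:01+00:00")
assert d1 == d2  # same instant, but only after two different parsers ran
```

Which is exactly the extra syntactic pass he says the RDF-based solution can't avoid.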

: Aside. Hey, I didn't know my wiki automatically turned RFC references into links. Cool! Good UseMod. PhilJones

See also :