News and Updates on the KRR Group

Source: Semantic Web world for you

The LOD cloud as rendered by Gephi

One year ago, we posted on the LarKC blog a first network model of the LOD cloud. Network analysis software can highlight aspects of the cloud that are not directly visible otherwise, in particular the presence of dense sub-groups and of several hubs, whereas in the classical picture DBpedia is easily perceived as being the only hub.

Computing network measures such as centralities, the clustering coefficient or the average path length can reveal much more about the content of a graph and the interplay of its nodes. As shown since that blog post, this information can be used to assess the evolution of the Web of Data and devise actions to improve it (see the WoD analysis page for more information about our research on this topic). Unfortunately, the picture provided by Richard and Anja on lod-cloud.net cannot be fed directly into network analysis software, which expects .net or CSV files instead. Fortunately, thanks to the very nice API of CKAN.net it is easy to write a script that generates such files. We made such a script and thought it would be a good idea to share it :-)

The script is hosted on GitHub. It produces a “.net” file following the Pajek format and two CSV files, one for the nodes and one for the edges. These CSVs can then easily be imported into Gephi, for instance, or any other software of your choice. We also made a dump of the cloud as of today and packaged the resulting files.
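
For readers curious about what such a script can look like, here is a minimal sketch (not the actual script from GitHub) that talks to the CKAN REST API and writes the two CSV files. The API paths and the "links:<target>" extras convention are assumptions about how the LOD cloud metadata was recorded on CKAN at the time:

# Minimal sketch (not the actual GitHub script): harvest the members of the
# "lodcloud" group from the CKAN REST API and write Gephi-friendly CSV files.
# The API paths and the "links:<target>" extras convention are assumptions.
import csv
import json
import urllib2

API = "http://ckan.net/api/rest"

def get_json(url):
    return json.load(urllib2.urlopen(url))

# List the data sets belonging to the LOD cloud group (assumed group id).
group = get_json("%s/group/lodcloud" % API)
packages = group["packages"]

edges = []
for name in packages:
    pkg = get_json("%s/package/%s" % (API, name))
    # Outgoing links are assumed to be recorded as extras named "links:<target>",
    # whose value is the number of triples linking the two data sets.
    for key, value in pkg.get("extras", {}).items():
        if key.startswith("links:"):
            target = key.split(":", 1)[1]
            if target in packages:
                edges.append((name, target, value))

# Nodes file: one row per data set.
with open("nodes.csv", "wb") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "Label"])
    for name in packages:
        writer.writerow([name, name])

# Edges file: one row per link between two data sets.
with open("edges.csv", "wb") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Weight"])
    writer.writerows(edges)

The two CSVs can then be loaded in Gephi through the Data Laboratory's spreadsheet import, for instance.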

Have fun analysing the graph and let us know if you find something interesting ;-)

 

Source: Semantic Web world for you

Wayan recently blogged about the SemanticXO project, asking about its current status. Unfortunately, I couldn’t comment on his blog, so I’d like to answer his question here. Daniel also expressed some doubts about the Semantic Web, so I’ll try to clarify what this is all about.

To be honest, I’m not sure what that really means. Is this a database project? Is it to help translation of the Sugar User Interface? Or are children somehow to use SemanticXO in their language acquisition?

Semantic technologies are knowledge representation tools used to model factual information – for instance, “Amsterdam, isIn, Netherlands”. These facts are stored in optimised databases called triple stores. So, yes, it is a kind of database project which aims at installing such a triple store and providing an API for using it. The technologies developed for the Semantic Web are particularly suited to storing and querying multilingual data, so activities that need to store text in different languages would directly benefit from this feature. The triple store could indeed eventually be used instead of the .po files to store multilingual data for Sugar.
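
As a small illustration of what storing such facts looks like in code, here is a sketch using the rdflib Python library (the actual SemanticXO API may differ); it records the example fact together with labels in several languages, which is the multilingual feature mentioned above:

# Small illustration with the rdflib Python library (the SemanticXO API itself
# may look different): store a fact plus labels in several languages.
from rdflib import Graph, Literal, Namespace, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.add((EX.Amsterdam, EX.isIn, EX.Netherlands))  # "Amsterdam, isIn, Netherlands"
g.add((EX.Amsterdam, RDFS.label, Literal("Amsterdam", lang="en")))
g.add((EX.Netherlands, RDFS.label, Literal("The Netherlands", lang="en")))
g.add((EX.Netherlands, RDFS.label, Literal("Nederland", lang="nl")))
g.add((EX.Netherlands, RDFS.label, Literal("Pays-Bas", lang="fr")))

# Retrieve the label in the language of the user interface.
for label in g.objects(EX.Netherlands, RDFS.label):
    if label.language == "nl":
        print label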

The goal of SemanticXO is not only to provide an API for using a triple store on the XO but also to provide access to the data published using Semantic Web technologies. Many data sets have been published on the Web, forming a network of more than 27 billion facts that can be queried and combined. Although not exhaustive, the Linked Open Data (LOD) cloud gives a good idea of the amount of data out there. With SemanticXO an activity developer will be able to simply get the population of Amsterdam, the exact location of Paris, the population of London, or whatever. The LOD cloud can be queried just like a database and it contains a lot of information about many topics. And because the XO will itself be able to use the same publication system, the kids using Sugar will be able to publish their data on the cloud directly from an activity.

Currently, it is hard, if not impossible, to get such atomic information and just insert it somewhere into an activity with a few lines of code…
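
To give an idea of what those “few lines of code” could look like once such data is reachable, here is a sketch that asks DBpedia's public SPARQL endpoint for the population of Amsterdam; the property used (dbpedia-owl:populationTotal) is an assumption about how DBpedia models this information:

# Sketch of fetching one atomic fact from the LOD cloud: the population of
# Amsterdam, asked of DBpedia's public SPARQL endpoint. The property used
# (dbpedia-owl:populationTotal) is an assumption about DBpedia's modelling.
import json
import urllib
import urllib2

query = """
SELECT ?population WHERE {
  <http://dbpedia.org/resource/Amsterdam>
      <http://dbpedia.org/ontology/populationTotal> ?population .
}
"""

params = urllib.urlencode({"query": query,
                           "format": "application/sparql-results+json"})
results = json.load(urllib2.urlopen("http://dbpedia.org/sparql?" + params))

for binding in results["results"]["bindings"]:
    print "Population of Amsterdam:", binding["population"]["value"]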

Regardless of its purpose, it seems that SemanticXO development has come to a halt. The only other post from Christophe Guéret detailed RedStore running on the XO. There he noted the challenges of installing a triple store on an XO using RedStore: RedStore depends on some external libraries that are not yet packaged for Fedora 11, and since it is not so easy to compile directly on the XO, a second computer is required.

This post was published on 11 April 2011. To date, there have been three posts about SemanticXO: the introduction (posted on December 15, 2010), the installation of a triple store (posted on December 20, 2010) and a first activity using the triple store (posted on April 5, 2011). So there has been one other post since the installation of the triple store. That first step of installing a triple store was indeed important for what I want to do with SemanticXO, and it was not easy to find one that would fit the low specs of an XO-1. Then, the installation was a bit challenging because of the dependencies, but nothing really exceptional there. Ideally, the triple store will come installed by default on the OLPC OS releases some day :-)

Once installed, the XO didn’t return query results quickly. It failed on a number of the benchmark queries used to test triple stores, even after being executed over a full night.

I was pleased, surprised and relieved to see that the triple store worked in the first place! From what I know, it was the first time a triple store was running on such low-spec hardware, and I wanted to see how far I could push it. So I loaded a significant amount of triples (50k) and ran one of the testing suites we typically use to test triple store performance. As expected, the response time was long and most complex queries just failed. But these evaluation systems are aimed at testing big triple stores on big hardware, and the queries are designed to see how the triple store deals with extreme cases. Considering that on the oldest generation of XO the triple store managed to answer queries way more complex than the ones it is expected to deal with, I found the results acceptable and decided to move on to the next steps.

So Christophe, what does this mean? Is a Semantic Web for children using the XO possible?

Yes, it is possible and I’m still actively working on it! The development is going slower than I would like, as like many contributors I work on this project in my spare time, but it is going on. The last post on this blog shows an activity using the store for its internal data and contains a pointer to a technical report that, I hope, will shed more light on the project goals and status. Right now, I’m working on extending this activity and implementing a drop-in replacement for the data store that would use the triple store to store metadata about the different entries (see the sketch below). The clustering activity only shows how activities in Sugar can store data using the triple store, so I’m also working on an activity that will show the other aspect: how the same concepts can be used to get data from the LOD cloud and display it.
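
To make the idea of that drop-in data store replacement a bit more concrete, here is a purely hypothetical sketch of how the metadata of a journal entry could be turned into triples; the namespace, predicates and identifiers are invented for illustration and are not necessarily the ones SemanticXO will use:

# Purely hypothetical sketch: turning the metadata dictionary of a Sugar
# journal entry into triples. The namespace, predicates and identifiers are
# invented here, they are not necessarily the ones SemanticXO will use.
import uuid
from rdflib import Graph, Literal, Namespace, RDF

OLPC = Namespace("http://example.org/olpc/")  # placeholder namespace

def metadata_to_triples(graph, metadata):
    # Mint an identifier for the entry and attach each metadata field to it.
    entry = OLPC["resource/" + uuid.uuid4().hex[:8]]
    graph.add((entry, RDF.type, OLPC.Entry))
    for key, value in metadata.items():
        graph.add((entry, OLPC[key], Literal(value)))
    return entry

g = Graph()
metadata_to_triples(g, {"title": "My drawing",
                        "mime_type": "image/png",
                        "activity": "org.laptop.Paint"})
print g.serialize(format="nt")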

I have been able to detect no clear correlation between use of the term “Semantic Web” and knowledge of what it means. I think everybody just read it in Wired in 1999 and filed it away as a really good thing to put on a square of your Buzzword Bingo card.

Since 1999, and until some years ago, the Semantic Web has been searching for its own identity and meaning. It started out as a vision of having data published on the Web just as the Web as we know it allows for the publication of documents. Translating a vision into concrete technologies is a lengthy process, subject to debates and trial-and-error phases, before you get to something everyone can see and play with. Now we are getting on track, with data sets being published on the Web using Semantic Web technologies (the LOD cloud, Linked Open Commerce), dedicated high-end conferences (ISWC, ESWC, SemTech, …) and journals (JWS, SWJ, …). Outside of academia, there is also an increasing number of Semantic Web applications, but most of them are invisible to the end user. Have you noticed that Facebook is using Semantic Web technologies to mark up pages for its famous “Like” button? Or that the NYTimes uses the same technologies to tag its articles? And these are only two examples out of many more.

As highlighted by Tom Ilube from Garlik (another company using Semantic Web technology), the Semantic Web is a change in the infrastructure of the Web itself that you won’t even see happening.


Source: Semantic Web world for you

In the past few years many data sets have been published and made public in what is now often called the Web of Linked Data, taking a step towards “Web 3.0”: a Web combining a network of documents and data, suitable for both human and machine processing. In this Web 3.0, programs are expected to give more precise answers to queries as they will be able to associate a meaning (the semantics) to the information they process. Sugar, the graphical environment found on the XO, is currently Web 2.0 enabled – it can browse web sites – but has no dedicated tools to interact with the Web 3.0. The goal of the SemanticXO project, introduced earlier in this blog, is to make Sugar Web 3.0 ready by adding semantic software to the XO.

One cornerstone of this project is to get a triple store, the software in charge of storing the semantic data, running on the limited hardware of the machine (in our case, an XO-1). As this proved to be feasible, we can now go further and start building activities making use of it. To begin with, a simple clustering activity: the goal is to sort items into boxes using drag & drop. The user can create as many boxes as needed, and items may be moved from box to box. Here is a screenshot of the application, showing Amerindian items:

Prototype of the clustering activity

The most interesting aspect of this activity is actually under its hood and is not visible in the screenshot. Here are some of the triples generated by the application (note that the URIs have been shortened for readability):

subject                  predicate          object
olpc:resource/a05864b4   rdf:type           olpc:Item
olpc:resource/a05864b4   olpc:name          "image114"
olpc:resource/a05864b4   olpc:hasDepiction  "image114.jpg"
olpc:resource/a82045c2   rdf:type           olpc:Box
olpc:resource/a82045c2   olpc:hasItem       olpc:resource/a05864b4
olpc:resource/78cbb1f0   rdf:type           olpc:Box

It is worth noting here the flexibility of this data model: the assignment of one item to a box is stated by a triple using the predicate “hasItem”, and one of the boxes is empty simply because there is no such statement linking it to an item. Any number of similar triples can be added without any constraint, and the same goes for virtually all the triples in the system. There is no requirement for a fixed set of predicates that all the items must have. Let’s see what can be done with this data through three different SPARQL queries, from the simplest to the most sophisticated:

  • List the URIs of all the boxes and the items they contain
    SELECT ?box ?item WHERE {
      ?box rdf:type olpc:Box.
      ?box olpc:hasItem ?item.
    }

  • List the items and their attributes
    SELECT ?item ?property ?val WHERE {
      ?item rdf:type olpc:Item.
      ?item ?property ?val.
    }

  • List the items that are not in a box
    SELECT ?item WHERE {
      ?item rdf:type olpc:Item.
      OPTIONAL {
        ?box rdf:type olpc:Box.
        ?box olpc:hasItem ?item.
      }
      FILTER (!bound(?box))
    }

These three queries are just examples; the really nice thing about this query mechanism is that (almost) anything can be asked through SPARQL. There is no need to define a set of API calls covering a list of anticipated needs: as soon as the SPARQL end point is made available, every activity may ask whatever it wants to ask! :)
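
As an illustration, here is a sketch of how an activity written in Python could run the last query above. The endpoint URL (a RedStore instance assumed to listen on http://localhost:8080/sparql), the format parameter and the olpc: prefix URI are assumptions made for the example:

# Sketch of an activity querying the local triple store. The endpoint URL,
# the "format" parameter and the olpc: prefix URI are assumptions.
import json
import urllib
import urllib2

ENDPOINT = "http://localhost:8080/sparql"

query = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX olpc: <http://example.org/olpc/>
SELECT ?item WHERE {
  ?item rdf:type olpc:Item.
  OPTIONAL {
    ?box rdf:type olpc:Box.
    ?box olpc:hasItem ?item.
  }
  FILTER (!bound(?box))
}
"""

params = urllib.urlencode({"query": query,
                           "format": "application/sparql-results+json"})
results = json.load(urllib2.urlopen(ENDPOINT + "?" + params))

for binding in results["results"]["bindings"]:
    print "Item not in a box:", binding["item"]["value"]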

We are not done yet as there is still a lot to develop to finish the application (game mechanism, sharing of items, …). If you are interested in knowing more about the clustering prototype, feel free to drop a comment on this post and/or follow this activity on GitHub. You can also find more information in this technical report about the current achievements of SemanticXO and the ongoing work.

Source: Semantic Web world for you

This post is a re-blog of a post published on semanticweb.com

Some weeks ago, a first version of a wrapper for the GoogleArt project from Google was put on line (see also this blog post). This wrapper, which at first offered data only for individual paintings, has now been extended to museums. The front page of GoogleArt is also available as RDF, providing a machine-readable list of museums. This index page makes it possible, and easy, to download an entire snapshot of the data set, so let’s see how to do that ;-)

Downloading the data set from a wrapper

Wrappers around web services offer an RDF representation of the content available at the original source. For instance, the SlideShare wrapper provides an RDF representation of a presentation page from the SlideShare web site. The GoogleArt wrapper takes the same approach for paintings and museums listed on the GoogleArt site. Typically, these wrappers work by mimicking the URI scheme of the site they wrap: changing the hostname, and part of the path, of the URL of the original resource to that of the wrapper gives you access to the sought data.
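
As a purely hypothetical illustration of that rewriting (the exact paths used by the GoogleArt wrapper may differ), going from a GoogleArt page to its RDF counterpart is just a matter of swapping prefixes:

# Hypothetical illustration of the URI rewriting done by a wrapper: swap the
# host (and path prefix) of the original page for that of the wrapper.
# The exact paths used by the GoogleArt wrapper may differ from this sketch.
ORIGINAL_PREFIX = "http://www.googleartproject.com/"
WRAPPER_PREFIX = "http://linkeddata.few.vu.nl/googleart/"

def wrap(original_url):
    """Return the wrapper URI serving RDF for a GoogleArt page."""
    return original_url.replace(ORIGINAL_PREFIX, WRAPPER_PREFIX, 1)

print wrap("http://www.googleartproject.com/museums/vangogh")
# -> http://linkeddata.few.vu.nl/googleart/museums/vangogh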

From a linked data perspective, wrappers do a valid job at providing de-referenceable URIs for the entities they describe. However, the “de-referencing only” scheme makes them more difficult to query. Wrappers don’t offer SPARQL end points, as they don’t store the data they serve; that data is computed on the fly when the URIs are accessed. To query a wrapper, one has to rely on an indexing service that harvests the different documents and indexes them, something reminiscent of the way Web documents are found and for which the semantic web index Sindice is the state-of-the-art solution.

But such an external indexing service may not provide you with the entire set of triples, or may not allow you to download big chunks of its harvested data. In that case, the best way to get the entire data set locally is to use a spider to download the content published under the different URIs.

LDSpider, an application developed by Andreas Harth (AIFB), Juergen Umbrich (DERI), Aidan Hogan and Robert Isele, is the perfect tool for doing that. LDSpider crawls linked data resources and stores the triples it finds in an N-Quads file. N-Quads are triples to which a named graph has been added; by using it, LDSpider keeps track of the source of the triples in the final result.

Using a few simple commands, it is possible to harvest all the triples published by the GoogleArt wrapper. As of the time of writing, there seems to be a bug in the latest release of LDSpider (1.1d) that prevented us from downloading the data. However, everything works fine with the trunk version, which can be downloaded and compiled this way:

svn checkout http://ldspider.googlecode.com/svn/trunk/ ldspider-read-only
cd ldspider-read-only
ant build

Once we have LDSpider ready to go, point it to the index page (“-u http://linkeddata.few.vu.nl/googleart/index.rdf”), ask for a load-balanced crawl (“-c”) and request to stay within the same domain name (“-y”) as the starting resource. This last option is very important! Since the resources published by the wrapper are connected to DBpedia resources, omitting “-y” would allow the crawler to download the content of the resources pointed to in DBpedia, then the content of the resources DBpedia points to, and so on… The last parameter to set is the name of the output file (“-o data.nq”) and you are ready:

java -jar dist/ldspider-trunk.jar -u http://linkeddata.few.vu.nl/googleart/index.rdf -y -c -o data.nq

After some time (24 minutes in our case), you get a file with all the data plus some header triples with extra information about each downloaded resource:

<http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> <http://code.google.com/p/ldspider/ns#headerInfo> _:header1087646481301043174989 <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#responseCode> "200"^^<http://www.w3.org/2001/XMLSchema#integer> <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#date> "Fri, 25 Mar 2011 08:51:04 GMT" <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#server> "TornadoServer/1.0" <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#content-length> "5230" <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#content-type> "application/rdf+xml" <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .
_:header1087646481301043174989 <http://www.w3.org/2006/http#connection> "Keep-Alive" <http://linkeddata.few.vu.nl/googleart/museums/vangogh/orchards-in-blossom-view-of-arles-30> .

To filter these out and get only the data contained in the documents, simply use grep:

grep -v "_:header" data.nq > gartwrapper.nq

The final document “gartwrapper.nq” contains around 37k triples, out of which 1.6k are links to DBpedia URIs. More information about the data set is available through its CKAN package description. That description also contains a link to a pre-made dump.
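
Those figures can be checked locally; here is a small sketch that counts the statements in the cleaned file and those whose object is a DBpedia resource (a rough count based on simple string matching):

# Small sketch to reproduce the counts on the cleaned N-Quads file: the total
# number of statements and how many of them point to DBpedia resources.
# This is a rough, string-matching based count.
total = 0
dbpedia_links = 0
with open("gartwrapper.nq") as quads:
    for line in quads:
        if not line.strip():
            continue
        total += 1
        # Keep subject and predicate apart, then look for a DBpedia object.
        if "<http://dbpedia.org/resource/" in line.split(" ", 2)[2]:
            dbpedia_links += 1

print "Triples:", total
print "Links to DBpedia:", dbpedia_links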

Concluding remarks

This download technique is applicable to the content provided by any wrapper or, in general, to any data set for which only de-referenceable URIs are provided. However, we should stress that completeness depends on a seed URI listing all (or most of) the published resources: the spider works by following links, so be sure to start from well-connected resources. If several seeds are needed to cover the entire data set, repeat the same process starting at every one of them, or use the dedicated option of LDSpider (“-d”).
