News and Updates on the KRR Group


We have opened a Data2Semantics GitHub organisation for publishing all (open source) code produced within the Data2Semantics project. Point your browser (or Git client) to the organisation page for the latest and greatest!


The COMMIT programme was officially kicked off by Maxime Verhagen, Minister of Economic Affairs, Agriculture and Innovation, at the ICTDelta 2011 event held at the World Forum in The Hague on November 16.

Throughout the day, members of the Data2Semantics project manned a very busy stand in the foyer, featuring prior and current work by the project partners such as the AIDA toolkit, OpenPHACTS, LarKC and the MetaLex Document Server.


Source: Semantic Web world for you

Scaling is often a central question for data-intensive projects, whether they use Semantic Web technologies or not, and SemanticXO is no exception. The triple store is used as a back end for Sugar's Journal, a central component that records the usage of the different activities. This short post discusses the results found for two questions: “how many Journal entries can the triple store sustain?” and “how much disk space is used to store the Journal entries?”

Answering these questions means loading some Journal entries and measuring the read and write performance along with the disk space used. This is done by a script which randomly generates Journal entries and inserts them into the store. A text sampler and the real names of activities are used to make these entries realistic in terms of size. An example of such a generated entry, serialised in HTML, can be seen here. The following graphs show the results obtained for inserting 2000 Journal entries. The figures have been averaged over 10 runs, each of them starting with a freshly created store. The triple store used is “RedStore”, configured with a hash-based BerkeleyDB backend. The test machine is an XO-1 running software release 11.2.0.
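The measurement loop can be sketched roughly as follows. This is a simplified reconstruction, not the actual script: the endpoint URL, the predicate names and the entry fields are all assumptions made for illustration.

```python
import random
import string
import time
import urllib.request

# Assumed endpoint; RedStore's actual port and path may differ on the XO.
ENDPOINT = "http://localhost:8080/sparql"

# The original script uses the real names of Sugar activities; a few examples:
ACTIVITIES = ["Browse", "Paint", "Write", "Record", "TamTam"]

def random_text(words):
    """Pseudo-random lowercase words, standing in for the text sampler."""
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 8)))
        for _ in range(words))

def make_entry(i):
    """Build a SPARQL update inserting one synthetic Journal entry."""
    return ('INSERT DATA {{ '
            '<urn:journal:entry-{i}> <urn:journal:activity> "{act}" ; '
            '<urn:journal:title> "{title}" ; '
            '<urn:journal:description> "{desc}" . }}').format(
        i=i, act=random.choice(ACTIVITIES),
        title=random_text(5), desc=random_text(50))

def timed_update(update):
    """POST one update to the store, returning the elapsed wall-clock time."""
    request = urllib.request.Request(
        ENDPOINT, data=update.encode(),
        headers={"Content-Type": "application/sparql-update"})
    start = time.time()
    urllib.request.urlopen(request).read()
    return time.time() - start

def run_benchmark(entries=2000):
    """Insert synthetic Journal entries, logging the write delay of each."""
    for i in range(entries):
        print(i, timed_update(make_entry(i)))
```

Disk usage can then be sampled between inserts by checking the size of the BerkeleyDB files on disk.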

The disk space used is minimal for up to 30 entries, grows rapidly between 30 and 70 entries, and increases linearly from that point on. The maximum space occupied is a bit less than 100 MB, which is only a small fraction of the 1 GB of storage on the XO-1.


Amount of disk space used by the triple store

The results for the read and write delays are less encouraging. Write operations take constant time, always around 0.1 s. Retrieving an entry, however, proves to get linearly slower as the triple store fills up. It can be noted that for up to 600 entries the retrieval time of an entry stays below one second, which should provide a reasonable response time. With 2000 entries stored, however, the retrieval time climbs as high as 7 seconds :-(

Read and write access time

The answer to the question we started with (“Does it scale?”) is then “yes, for up to 600 entries”, considering a first-generation device and the current status of the software components (SemanticXO/RedStore/…). This answer also yields new questions, among which: Are 600 entries enough for typical usage of the XO? Is it possible to improve the software to get better results? How do the results look on more recent hardware?

I would appreciate some help in answering all of these, and especially the last one. I only have an XO-1 and thus cannot run my script on an XO-1.5 or XO-1.75. If you have such a device and are willing to help me gather results, please download the package containing the performance script and the triple store and follow the instructions for running it. After a day or so of execution, the script will generate three CSV files that I can then post-process into curves similar to the ones shown above.
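Producing the averaged curves from such runs is straightforward. The sketch below assumes each CSV holds `entry_index,seconds` rows; the files actually generated by the script may be laid out differently.

```python
import csv
from collections import defaultdict

def average_runs(paths):
    """Average a per-entry measurement over several benchmark runs.

    Each CSV is assumed to contain rows of the form: entry_index,seconds
    (the real column layout of the generated files may differ).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                idx, value = int(row[0]), float(row[1])
                totals[idx] += value
                counts[idx] += 1
    # One averaged value per entry index, ready for plotting.
    return {idx: totals[idx] / counts[idx] for idx in totals}
```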


Source: Think Links

The Journal of Web Semantics recently published a special issue on Using Provenance in the Semantic Web edited by myself and Yolanda Gil. (Vol 9, No 2 (2011)). All articles are available on the journal’s preprint server.

The issue highlights top research at the intersection of provenance and the Semantic Web. The papers address a range of topics, including:

  • tracking provenance of DBpedia back to the underlying Wikipedia edits [Orlandi & Passant];
  • how to enable reproducibility using Semantic techniques [Moreau];
  • how to use provenance to effectively reason over large amounts (1 billion triples) of messy data [Bonatti et al.]; and
  • how to begin to capture semantically the intent of scientists [Pignotti et al.].
Our editorial highlights a common thread between the papers and sums them up as follows:

A common thread through these papers is the use of already existing provenance ontologies. As the community comes to an increasing agreement on the commonalities of provenance representations through efforts such as the W3C Provenance Working Group, this will further enable new research on the use of provenance. This continues the fruitful interaction between standardization and research that is one of the hallmarks of the Semantic Web.

Overall, this set of papers demonstrates the latest approaches to enabling a Web that provides rich descriptions of how, when, where and why Web resources are produced, and shows the sorts of reasoning and applications that these provenance descriptions make possible.

Finally, it’s important to note that this issue wouldn’t have been possible without the quick and competent reviews done by the anonymous reviewers. This is my public thank you to them.

I hope you take a chance to take a look at this interesting work.

Filed under: academia, linked data Tagged: journal, linked data, provenance, semantic web

The Botari application from the LarKC project has won the Open Track of the Semantic Web Challenge.

Botari is a LarKC workflow running on servers in Seoul, plus a user frontend that runs on a Galaxy Tab.

The workflow combines open data from the city of Seoul (OpenStreetMap, POIs) with Twitter traffic, and brings together stream processing, machine learning and querying over RDF datasets and streams to give personalised restaurant information and recommendations, presented in an augmented-reality interface on the Galaxy Tab.

For more info on Botari, see the website, the demo movie, the slide deck, or the paper.


Source: Semantic Web world for you

Over the last couple of years, we have engineered a fantastic data sharing technology based on open standards from the W3C: Linked Data. Using Linked Data, it is possible to express knowledge as a set of facts and connect the facts together to build a network. Having such networked data openly accessible is a source of economic and societal benefits. It enables sharing data in an unambiguous, open and standard way, just as the Web enabled document sharing. Yet, the way we designed it prevents the majority of the world's population from using it.

Doing “Web-less” Linked Data?

The problem may lie in the fact that Linked Data is based on Web technologies, or in the fact that Linked Data has been designed and engineered by individuals with easy access to the Web, or maybe a combination of both. Nowadays, Linked Data rhymes with cloud-hosted data storage services, a set of (web-based) applications to interact with these services, and the infrastructure of the Web. As a result, if you don't have access to this Web infrastructure, you cannot use Linked Data. Which is a pity, because an estimated 4.5 billion persons don't have access to it for various reasons (lack of infrastructure, cost of access, literacy issues, …). Wouldn't it be possible to adjust our design choices to ensure they could also benefit from Linked Data, even if they don't have the Web? The answer is yes, and the best news is that it wouldn't even be that hard. But for it to happen, we need to adapt both our mindset and our technologies.

Changing our mindset

We have a tendency to think that any data sharing platform is a combination of a cloud-based data store, some client applications to access the data, and forms to feed new data into the system. This is not always applicable, as central hosting of data may not be possible, or its access from client applications may not be guaranteed. We should also think of the part of the world which is illiterate and for which Linked Data, and the Web, are not accessible. In short, we need to think de-centralised, small and vocal in order to widen access to Linked Data.

Think de-centralised

Star-shaped networks can be hard to deploy. They imply setting up a central producer of resources somewhere and connecting all the clients to it. Electricity networks have already found a better alternative: microgrids. Microgrids are small networks of producers/consumers (the “prosumers”) of electricity that locally manage the electricity needs. We could, and should, copy this approach to manage local data production and consumption. For example, think of a decentralised DBpedia whose content would be the aggregation of several data sources, each producing part of the content – most likely, the content that is locally relevant to them.

Think small

Big servers require more energy and more cooling. They usually end up racked into big cabinets that in turn are packed into cooled data centers. These data centers need to be big in order to cope with scale. Thinking decentralised allows us to think small, and we need to think small to provide alternatives to data centers where these are not available. As content production and creation become decentralised, several small servers can be used. To continue the analogy with microgrids, we can call these small servers taking care of locally relevant content “micro-servers”.

Think vocal

Unfortunately, not everyone can read and type. In some African areas, knowledge is shared over vocal channels (mobile phone, meetings, …) because there is no other alternative. Access to knowledge exchanged that way cannot be gained through form-based data acquisition systems. We need to think of exploiting vocal conversations through Text To Speech (TTS) and Automatic Speech Recognition (ASR) rather than staying focused on forms.

Changing our technologies

Changing mindsets is not enough; if we aim at decoupling Linked Data from the Web, we also need to pay attention to our technologies and adapt them. In particular, there are five upcoming challenges that can be phrased as research questions:

  1. Dereferenceability: How do you get a route to the data if you want to avoid using the routing system provided by the Web? For instance, how do you dereference host-name-based URIs if you don't have access to the DNS network?
  2. Consistency: In a decentralised setting where several publishers produce parts of a common data set, how do you ensure URIs are re-used and non-colliding? There is a chance that two different producers would use the same URI to describe different things.
  3. Reliability: Unlike centrally hosted data servers, micro-servers cannot be expected to provide 99% availability. They may go on and off unexpectedly. The first thing to know is whether that is an issue or not. The second is, if their data should remain available, how do we achieve this?
  4. Security: This is also related to having a swarm of micro-servers serving a particular dataset. If any micro-server can produce a chunk of that dataset, how do you avoid having a spammer get in and start producing falsified content? If we want to avoid centralised networks, authority-based solutions such as a Public Key Infrastructure (PKI) are not an option. We need to find decentralised authentication mechanisms.
  5. Accessibility: How do we make Linked Data accessible to those who are illiterate? As highlighted earlier, not everyone can read and write, but illiterate persons can still talk. We need to take vocal technologies more into account in order to make Linked Data accessible to them. We can also investigate graphics-based data acquisition techniques with visual representations of information.
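To make the first two challenges concrete, one conceivable direction is content-based naming: derive the identifier from a canonical form of the description itself, so that neither a DNS lookup nor a central URI registry is involved. The scheme below is purely an illustrative sketch of that idea, not something proposed in the post:

```python
import hashlib

def mint_uri(description):
    """Mint a location-independent name for a resource by hashing a
    canonical (sorted) form of its property/value pairs. Two producers
    describing the same thing identically obtain the same name without
    consulting any central registry or the DNS.
    (An illustrative sketch, not an established scheme.)
    """
    canonical = "\n".join(sorted("%s %s" % (p, o) for p, o in description))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return "urn:hash:sha256:" + digest
```

Such names are stable and collision-resistant, but they trade away human readability and make updating a description equivalent to minting a new name, which illustrates why these challenges are genuine research questions rather than engineering details.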

More about this

This is a presentation that Stefan Schlobach gave at ISWC2011 on this topic:

You are also invited to read the associated paper “Is data sharing the privilege of a few? Bringing Linked Data to those without the Web” and to check out two projects working on the mentioned challenges: SemanticXO and Voices.

Source: Think Links

Yesterday, I had the pleasure of giving a tutorial at the NBIC PhD Course on Managing Life Science Information. This week-long course focused on applying Semantic Web technologies to get to grips with integrating heterogeneous life science information.

The tutorial I gave focused on exposing relational databases to the web using the awesome D2R Server. It's really a great piece of software that shows results right away. Perfect for a tutorial. I also covered how to get going with LarKC and where that software fits in the whole Semantic Web data management space.
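For readers who haven't seen D2R in action: the server is driven by a D2RQ mapping file that declares how tables and columns become classes and properties. A minimal mapping might look like the following; the database, table and column names here are invented for the example and are not the actual GPCR schema used in the course.

```turtle
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix map:  <#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/gpcr#> .

map:database a d2rq:Database ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" ;
    d2rq:jdbcDSN    "jdbc:mysql://localhost/gpcr" ;
    d2rq:username   "reader" .

# Every row of the (hypothetical) receptors table becomes an ex:Receptor.
map:Receptor a d2rq:ClassMap ;
    d2rq:dataStorage map:database ;
    d2rq:uriPattern  "receptor/@@receptors.id@@" ;
    d2rq:class       ex:Receptor .

# The name column becomes an rdfs:label on each receptor.
map:receptorName a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Receptor ;
    d2rq:property rdfs:label ;
    d2rq:column   "receptors.name" .
```

With a handful of such declarations, D2R serves the whole database as dereferenceable RDF and as a SPARQL endpoint, which is why results show up so quickly in a tutorial setting.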

On to the story…

The students easily exposed our test database (GPCR receptors) as RDF using D2R. Now the cool part: I found out just before the start of my tutorial that the day before they had set up an RDF repository (Sesame) with some personal profile information. So, on the fly, I had them take the RDF produced by the database conversion and load it into the repository from the day before. This took a couple of clicks. They were then able to query over both their personal information and this new GPCR dataset. Without much work we munged together two really different data sets.

This is old hat to any Semantic Web person, but it was a great reminder of how the flexibility of RDF makes it easy to mash up data. No messing about with tables or figuring out if the schema is right; just load it up into your triple store and start playing.
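The mashup can be mimicked in a few lines: take the union of two independently produced triple sets and query across them. The sketch below uses plain Python tuples in place of a real repository such as Sesame, and all the names and predicates in it are made up.

```python
# Two triple sets produced independently: personal profiles (day 1)
# and the D2R-exported GPCR data (day 2). All identifiers are invented.
profiles = [
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:alice", "ex:studies", "ex:OPRM1"),
]
gpcr = [
    ("ex:OPRM1", "rdfs:label", "Mu-type opioid receptor"),
    ("ex:OPRM1", "ex:family", "Class A"),
]

# "Loading" both data sets is just taking their union -- no schema
# alignment step is needed before querying.
store = profiles + gpcr

def match(store, s=None, p=None, o=None):
    """Naive triple-pattern matching, standing in for a SPARQL query."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which receptors does Alice study, and what are they called?
labels = [match(store, s=obj, p="rdfs:label")[0][2]
          for (_, _, obj) in match(store, s="ex:alice", p="ex:studies")]
```

The join across the two data sets falls out of the shared identifier `ex:OPRM1`, which is exactly the point: in RDF the graph is the schema.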

Filed under: academia, linked data Tagged: mashup, rdf, semantic web

Source: Think Links

I’ve been a bit quiet for the past couple of months. First I was on vacation, and then we were finishing up the following demo for the Open PHACTS project. This is one of the main projects I’ll be working on for the next 2.5 years. The project is about integrating and exposing data for pharmacology. The demo below shows the first results of what we’ve done in the first 6 months of the project. Eventually, we aim to have the platform we’re developing be fully provenance-enabled, so all the integrated results can be checked and filtered based on their sources. Check it out and let me know what you think. Sorry for the poor voice-over… it’s me :-)

Original Post

Filed under: academia, linked data

The LarKC project’s development team would like to announce a new release (v3.0) of the LarKC platform, which is available for download here. The new release is a considerable improvement over the previous release (v2.5), with the following distinctive features. Platform: a new (plain) plug-in registry; light-weight plug-in loading and thus a very low platform start-up time [...]

With effect from November 15th there is a vacancy for a

PhD student CEDAR – Linked data

38 hours a week (1.0 fte) (for 48 months in total)

(Vacancy number DANS-2011-CEDARPhD1) (repeated call)

DANS, in collaboration with the IISH and the VU University Amsterdam, is working on a project of the Computational Humanities programme of the KNAW: “Census data open linked – From fragment to fabric – Dutch census data in a web of global cultural and historic information (CEDAR)”.

Project Background

This project builds a semantic data web of historical information, taking Dutch census data as a starting point. With such a web we will answer questions such as:

  • What kind of patterns can we identify and interpret in expressions of regional identity?
  • How to relate patterns of changes in skills and labor to technological progress and patterns of geographical migration?
  • How to trace changes of local and national policies in the structure of communities and individual lives?

This project also applies a specific web-based data model – exploiting the Resource Description Framework (RDF) technology – to make census data inter-linkable with other hubs of historical socio-economic and demographic data and beyond. The project will result in generic methods and tools to weave historical and socio-economic datasets into an interlinked semantic data web. Further synergy will be created by linking CEDAR to Data2Semantics, a COMMIT project.
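To give a flavour of what census data modelled in RDF could look like, here is a purely illustrative observation; the vocabulary, terms and figures below are invented for the example and are not the project's actual schema.

```turtle
@prefix cedar: <http://example.org/cedar#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .

# One (invented) cell of a census table, expressed as an RDF resource.
cedar:obs-1889-utrecht-bakers
    a cedar:CensusObservation ;
    cedar:censusYear      "1889"^^xsd:gYear ;
    cedar:municipality    cedar:Utrecht ;
    cedar:occupation      cedar:Baker ;
    cedar:populationCount "312"^^xsd:integer .

# Linking the municipality to an external hub is what makes the figure
# inter-linkable with other historical and demographic data sets.
cedar:Utrecht owl:sameAs <http://dbpedia.org/resource/Utrecht> .
```

Once census cells are named resources like this, questions about regional identity, labor and migration become graph queries across linked data sets rather than manual cross-tabulations.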

Information on the position

The PhD project, titled “Linked Open Data curation model in social science history – the case of Dutch census data”, will be supervised by Professor Frank van Harmelen (VU University Amsterdam). You will work in a project team consisting of another PhD student (working on the project “Theory and Practice of data harmonization in social history”, under the supervision of Professor Kees Mandemakers, Erasmus School of History, Culture and Communication, Erasmus University Rotterdam; IISH) and a postdoc experienced in complex network analysis and visualization. The project will be coordinated by Dr Andrea Scharnhorst (DANS, e-humanities group). It is part of the Computational Humanities Programme of the KNAW, hosted at the e-humanities group, in which further projects (with PhD students, postdocs and senior staff) in the area of computational and digital humanities will be carried out.

You will conduct research on:

  • Review of existing data models of census data, adaptation and modification, construction of the RDF model, links to other Semantic Web sources
  • Query design (specific to different user communities)
  • Development of RDF models of census data (historical variables)
  • Mapping of different ontologies across domains and along time
  • Development of best practices to enable take-up of linking and re-use of data in other scientific disciplines and in other KNAW institutes
  • Visual navigation through RDF-modelled information spaces


Position requirements

You preferably should have the following qualifications:

  • Master in computer science, artificial intelligence, information science or related areas
  • Interest in and knowledge of semantic technologies and their deployment on the Web
  • Fluency in spoken English and excellent written and verbal communication skills
  • Knowledge of Dutch would be an advantage
  • Willingness and proven ability to work in a team and to liaise with colleagues in an international and interdisciplinary research environment

Appointment and Salary

The position involves a temporary appointment with DANS for 4 years with a 2-month period of probation. Applicants should have the right to work in the Netherlands for the duration of the contract. The gross salary will be € 2.042,- per month in the first year, rising to € 2.612,- per month in the fourth year for a full-time appointment (scale P, for a PhD position, CAO-Dutch Universities).

DANS offers an extensive package of fringe benefits, such as an 8,3% year-end bonus, 8% holiday pay, a good pension scheme, 6 weeks holiday on an annual basis and the possibility to buy or sell vacation hours.

Place of employment will be DANS – Data Archiving and Networked Services. The main working location will be at the e-Humanities Group of the KNAW (location Meertens Institute, Joan Muyskenweg, Amsterdam).

For the text of the CEDAR proposal follow the link from HSN News. For further information please contact: Dr. Andrea Scharnhorst or mobile phone (+31) (0)6 23 63 32 93.

Please send a letter of application including:

  1. letter of motivation
  2. CV, copy of Master thesis and list of M.Sc. courses and grades
  3. the names and addresses of two referees

before October 15th, 2011 to DANS, t.a.v. Hetty Labots, Personnel Department, P.O. Box 95366, 2509 CJ Den Haag, The Netherlands or (preferably) by e-mail to

Interviews will probably take place at the end of October 2011 in Amsterdam. If you have already applied for this position, please do not apply again.
