News and Updates on the KRR Group

Benchmarking can stimulate technological progress. Check out the latest Berlin SPARQL Benchmark report for RDF- and SPARQL-compliant DBMS engines: http://bit.ly/Yf5etP and http://bit.ly/12UsFbu


Source: Think Links

 

For the past couple of days (April 8 – 10, 2013), I attended the UKSG conference. UKSG is an organization for academic publishers and librarians. The conference itself has over 700 attendees and is focused on these two groups. I hadn’t heard of it until I was invited by Mike Taylor from Elsevier Labs to give a session with him on altmetrics.

The session was designed to both introduce altmetrics and give a state-of-the-art update on it to publishers and librarians. You can see what I had to say in the clip above, but my main point was that altmetrics is at a stage where it can be advantageously used by scholars, projects and institutions not to rank but instead to tell a story about their research. This is particularly important now that many scientific artifacts beyond the article (e.g. data, posters, blog posts, videos) are becoming increasingly trackable and can help scholars tell their story.

The conference itself was a bit weird for me, as it was a completely different crowd than I would normally connect with… I was one of the few “actual” academics there, which led to my first-day tweet:

being at #uksglive as an academic is interesting – talking to people who talk about me in the abstract is seriously meta

— Paul Groth (@pgroth) April 8, 2013

It was fun to randomly go up to the ACM and IEEE stands and introduce myself not as a librarian or another publisher but as an actual member of their organizations. Overall, though, people were quite receptive to my comments and were keen to get my views on what publishers and librarians could be doing to help me out as a researcher. I do have to say that it was a fairly well-funded operation (there is money in academia somewhere)… I came away with a lot of free t-shirts and USB sticks, and I have never been to a conference that had bumper cars for the evening entertainment:

UKSG bumper cars

In addition to (hopefully) contributing to the conference, I learned some things myself. Here are some bullet points in no particular order:

  • Outrageous talk by @textfiles – the Archive Team is super important
  • I talked a lot to Geoffrey Bilder from CrossRef. Topics included but not limited to:
    • why and when indirection is important for permanence in url space
    • the need for a claims (i.e. nanopublications) database referencing ORCID
    • the need for consistent url policies on sites and a “living will” for sites of importance
    • when will scientists get back to being scientists and stop being marketers (is this statement true, false, in between, or is it even a bad thing?)
    • the coolness of labs.crossref.org
  • It’s clear that librarians are the publishers’ customers; academics come second. I think this particular indirection badly impacts the market.
  • Academic content output is situated in a network – why do we de-link it all the time?
  • The open access puppy
  • Wasting no time, #mendeley already up in the #elsevier booths at #uksglive twitter.com/MarkHahnel/sta…

    — Mark Hahnel (@MarkHahnel) April 9, 2013

     

  • It was interesting to see the business of academic publishing going on. I witnessed lots of pretty intense-looking dealings going down in the cafe.
  • Bournemouth looks like it could have some nice surfing conditions.

Overall, UKSG was a good experience to see, from the inside, this completely other part of the academic complex.

Filed under: academia, altmetrics Tagged: #altmetrics, #uksglive, trip report

Data2Semantics wins COMMIT/ Valorization Award!

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam

Source: Data2Semantics


During the COMMIT/ Community event, April 2–3 in Lunteren, the Data2Semantics project won one of three COMMIT/ Valorization awards. The award is a €10,000 subsidy to encourage the project to bring one of its products closer to use outside academia.

At the event, we presented and emphasized the philosophy of Data2Semantics: to embed new enrichment tools in the current workflow of individual researchers. We are working closely with both Figshare.com (with our Linkitup tool) and Elsevier Labs to bring semantics to the fingertips of the researcher.

Source: Think Links

The rise of Fair Trade food and other products has been amazing over the past 4 years. Indeed, it’s great to see how certification for the origins (and production processes) of products is becoming both prevalent and expected. For me, it’s nice to know where my morning coffee was grown; indeed, knowing that lets me figure out the quality of the coffee (is it single origin or a blend?).

I now think it’s time we did the same for data. As we work in environments where our data is aggregated from multiple sources and processed along complex digital supply chains, we need the same sort of “fair trade” style certificate for our data. I want to know that my data was grown and nurtured and treated with care, and it would be great to have a stamp that lets me understand that at a glance without having to do a lot of complex digging.

In a just published commentary in IEEE Internet Computing, I go into a bit more detail about how provenance and linked data technologies are laying the ground work for fair trade data. Take a look and let me know what you think.

 

 

Filed under: provenance, supply chains Tagged: data, fair trade, provenance, supply chain

Source: Think Links

In the context of the Open PHACTS and the Linked Data Benchmark Council projects, Antonis Loizou and I have been looking at how to write better SPARQL queries. In the Open PHACTS project, we’ve been writing super complicated queries to integrate multiple data sources, and from experience we realized that different formulations of the same query can dramatically impact performance. With this experience, we decided to do something more systematic and test how the techniques we came up with map to database theory and work in practice. We just submitted a paper for review on the outcome. You can find a preprint (On the Formulation of Performant SPARQL Queries) on arxiv.org at http://arxiv.org/abs/1304.0567. The abstract is below. The fancy graphs are in the paper.

But if you’re just looking for ways to write better queries, here are the main rules-of-thumb that we found.

  1. Minimise optional triple patterns: Reduce the number of OPTIONAL triple patterns by identifying those triple patterns for a given query that will always be bound, using dataset statistics.
  2. Localise SPARQL subpatterns: Use named graphs to specify the subset of triples in a dataset that portions of a query should be evaluated against.
  3. Replace connected triple patterns: Use property paths to replace connected triple patterns where the object of one triple pattern is the subject of another.
  4. Reduce the effects of cartesian products: Use aggregates to reduce the size of solution sequences.
  5. Specify alternative URIs: Consider different ways of specifying alternative URIs beyond UNION.
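To make a couple of these concrete, here is a small sketch of heuristics 2 and 3 on a made-up dataset (the `ex:` namespace, the graph names, and the predicates are hypothetical, purely for illustration; the actual queries from our experiments are in the paper):

```sparql
PREFIX ex: <http://example.org/>

# Heuristic 3, before: two connected triple patterns, where the object
# of the first (?org) is the subject of the second.
SELECT ?city WHERE {
  ex:alice ex:worksFor  ?org .
  ?org     ex:locatedIn ?city .
}

# Heuristic 3, after: a SPARQL 1.1 sequence property path removes the
# intermediate variable entirely.
SELECT ?city WHERE {
  ex:alice ex:worksFor/ex:locatedIn ?city .
}

# Heuristic 2: localise each subpattern to the named graph that holds
# the relevant triples, so the store only scans those subsets.
SELECT ?city WHERE {
  GRAPH ex:hrData  { ex:alice ex:worksFor  ?org }
  GRAPH ex:geoData { ?org     ex:locatedIn ?city }
}
```

Whether each rewrite actually helps depends on the store and the data, which is exactly why the test-test-test advice below applies.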

Finally, one thing we did learn was test, test, test. The performance of the same query can vary dramatically across stores.

Title: On the Formulation of Performant SPARQL Queries
Authors: Antonis Loizou and Paul Groth

The combination of the flexibility of RDF and the expressiveness of SPARQL provides a powerful mechanism to model, integrate and query data. However, these properties also mean that it is nontrivial to write performant SPARQL queries. Indeed, it is quite easy to create queries that tax even the most optimised triple stores. Currently, application developers have little concrete guidance on how to write “good” queries. The goal of this paper is to begin to bridge this gap. It describes 5 heuristics that can be applied to create optimised queries. The heuristics are informed by formal results in the literature on the semantics and complexity of evaluating SPARQL queries, which ensures that queries following these rules can be optimised effectively by an underlying RDF store. Moreover, we empirically verify the efficacy of the heuristics using a set of openly available datasets and corresponding SPARQL queries developed by a large pharmacology data integration project. The experimental results show improvements in performance across 6 state-of-the-art RDF stores.

Filed under: linked data Tagged: heuristics, performance, sparql, triple store