News and Updates on the KRR Group

Update: TabLinker & UnTabLinker

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam - (Comments Off on Update: TabLinker & UnTabLinker)

Source: Data2Semantics

TabLinker, introduced in an earlier post, is a spreadsheet to RDF converter. It takes Excel/CSV files as input, and produces enriched RDF graphs with cell contents, properties and annotations using the DataCube and Open Annotation vocabularies.

TabLinker interprets spreadsheets based on hand-made markup using a small set of predefined styles (e.g. it needs to know what the header cells are). Work package 6 is currently investigating whether and how we can perform this step automatically.

Features:

  • Raw, model-agnostic conversion from spreadsheets to RDF
  • Interactive spreadsheet marking within Excel
  • Automatic annotation recognition and export with OA
  • Round-trip conversion: revive the original spreadsheet files from the produced RDF (UnTabLinker)
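
To give a flavour of the output, here is a minimal sketch of the kind of RDF TabLinker produces (one DataCube observation per data cell). This is not TabLinker's actual code: it assumes rdflib and openpyxl, and the input file name and EX namespace are hypothetical.

```python
# Minimal sketch of TabLinker-style output (one qb:Observation per data
# cell); NOT TabLinker's actual code. Assumes rdflib and openpyxl; the
# input file and the EX namespace are hypothetical.
from openpyxl import load_workbook
from rdflib import Graph, Literal, Namespace, RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/census/")  # hypothetical base namespace

g = Graph()
g.bind("qb", QB)

wb = load_workbook("census.xlsx")  # hypothetical input spreadsheet
sheet = wb.active
headers = [c.value for c in sheet[1]]  # assume row 1 is marked as the header

for row in sheet.iter_rows(min_row=2):
    for header, cell in zip(headers, row):
        if cell.value is None or header is None:
            continue
        # One observation per non-empty data cell, named after its position.
        obs = EX["obs/" + cell.coordinate]
        g.add((obs, RDF.type, QB.Observation))
        g.add((obs, EX[str(header).replace(" ", "_")], Literal(cell.value)))

print(g.serialize(format="turtle"))
```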

In Data2Semantics, we have used TabLinker to publish linked socio-historical data, converting the historical Dutch censuses (1795-1971) to RDF (see slides).


Social historians actively use these datasets in their research, producing rich annotations that correct or reinterpret the data; these annotations are very useful for checking dataset quality and consistency (see model). The published RDF is ready to query and visualize via SPARQL.
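
For example, a query along the following lines retrieves a handful of observations. This is a sketch assuming SPARQLWrapper; the endpoint URL is hypothetical, and the exact vocabulary depends on the conversion.

```python
# Sketch of querying the published census RDF; the endpoint URL is
# hypothetical and the vocabulary depends on the actual conversion.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/census/sparql")  # hypothetical
sparql.setQuery("""
    PREFIX qb: <http://purl.org/linked-data/cube#>
    SELECT ?obs ?property ?value
    WHERE { ?obs a qb:Observation ; ?property ?value . }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["obs"]["value"], row["property"]["value"], row["value"]["value"])
```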


Update: Machine Learning and Linked Data

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam - (Comments Off on Update: Machine Learning and Linked Data)

Source: Data2Semantics


Part of work package 2 is developing machine learning techniques to automatically enrich linked data. The web of data has become so large that maintaining it by hand is no longer possible. In contrast to existing techniques for learning on the semantic web, we aim to apply our techniques directly to the linked data.

We use kernel-based machine learning techniques, which deal well with structured data such as RDF graphs. A variety of graph kernels exist, typically developed in the bioinformatics domain, so which kernels are best suited to RDF is still an open question. A big advantage of the graph kernel approach is that it requires relatively little preprocessing or feature selection of the RDF graph, and graph kernels can be applied to a wide range of tasks, such as property prediction, link prediction, node clustering, and node ranking.
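
As a toy illustration of the intuition (not one of the kernels from our experiments): describe each resource by the multiset of labels in its RDF neighbourhood, and take the overlap of two such multisets as a crude similarity. The sketch below assumes rdflib.

```python
# Toy illustration of the graph-kernel intuition on RDF; NOT one of the
# kernels from our experiments. Assumes rdflib.
from collections import Counter
from rdflib import Graph, URIRef

def neighbourhood_labels(g, node, depth=2):
    """Multiset of predicate/object labels reachable from node within depth."""
    labels = Counter()
    frontier = {node}
    for _ in range(depth):
        next_frontier = set()
        for n in frontier:
            for p, o in g.predicate_objects(n):
                labels[str(p)] += 1
                labels[str(o)] += 1
                if isinstance(o, URIRef):
                    next_frontier.add(o)
        frontier = next_frontier
    return labels

def toy_kernel(g, a, b, depth=2):
    """Overlap of two label multisets: a crude similarity between a and b."""
    return sum((neighbourhood_labels(g, a, depth) &
                neighbourhood_labels(g, b, depth)).values())
```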

Currently our research focuses on:

  • which graph kernels are best suited to RDF,
  • which parts of the RDF graph are needed for the graph kernel,
  • which tasks are well suited to solving with kernels.

A paper with the most recent results is currently under submission at SDM 2013. Code for the different graph kernels and for reproducing our experiments is available at https://github.com/Data2Semantics/d2s-tools.


Linkitup: Lightweight Data Enrichment

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam - (Comments Off on Linkitup: Lightweight Data Enrichment)

Source: Data2Semantics

Linkitup is a Web-based dashboard for enrichment of research output published via the Figshare.com repository service. For license terms, see below.

Linkitup currently does two things:

  • It takes metadata entered through Figshare.com and tries to find equivalent terms, categories, persons, or entities in the Linked Data cloud and several Web 2.0 services.
  • It extracts references from publications in Figshare.com and tries to find the corresponding Digital Object Identifiers (DOIs).

We developed Linkitup as part of our strategy to put technology for adding semantics to research data into the hands of actual users.

Linkitup currently contains five plugins:

  • Wikipedia/DBPedia linking to tags and categories
  • Linking of authors to the DBLP bibliography
  • CrossRef linking of papers to DOIs through bibliographic reference extraction
  • Elsevier Linked Data Repository linking to tags and categories
  • ORCID linking to authors
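
To illustrate what such a plugin does, here is a rough sketch of the core lookup behind the first plugin, assuming the public DBpedia Lookup service; Linkitup's actual plugin code and API wiring may differ.

```python
# Rough sketch of a Wikipedia/DBpedia linking step, assuming the public
# DBpedia Lookup service; NOT Linkitup's actual plugin code.
import requests

def dbpedia_candidates(tag, max_hits=5):
    """Return candidate DBpedia URIs for a Figshare tag or category."""
    resp = requests.get(
        "https://lookup.dbpedia.org/api/search",
        params={"query": tag, "maxResults": max_hits},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return [doc["resource"][0] for doc in resp.json().get("docs", [])]

print(dbpedia_candidates("semantic web"))
```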

Using Figshare allows Data2Semantics to:

  • tap into a wealth of research data already published,
  • provide state-of-the-art data enrichment services on a prominent platform with a significant user base,
  • bring RDF data publishing to a mainstream platform, and
  • remove the need for a separate Data2Semantics data repository.

Linkitup feeds the enriched metadata back as links to the original article in Figshare, but it also builds an RDF representation of the metadata that can be downloaded separately or published as research output on Figshare.

We aim to extend Linkitup to connect to other research repositories such as EASY and the Dataverse Network.

A live version of Linkitup is available at http://linkitup.data2semantics.org. Note that the software is still in beta! You will need a Figshare login and some data published in Figshare to get started.

More information, including installation instructions, is available from GitHub.


Update: Provenance Reconstruction

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam - (Comments Off on Update: Provenance Reconstruction)

Source: Data2Semantics

Work package 5 of Data2Semantics focuses on the reconstruction of provenance information. Provenance is a hot topic in many domains, as it is believed that accurate provenance information can benefit measures of trust and quality. In science this is certainly true: provenance in the form of citations is a centuries-old practice. However, citations alone are not enough:

  • What data is a scientific conclusion based on?
  • How was that data gathered?
  • Was the data altered in the process?
  • Did the authors cite all sources?
  • What part of the cited paper is used in the article?

Detailed provenance of scientific articles, presentations, data and images is often missing. Data2Semantics investigates the possibilities for reconstructing provenance information.

Over the past year, we have implemented a pipeline for provenance reconstruction (see picture) based on four steps: preprocessing, hypothesis generation, hypothesis pruning, and aggregation and ranking. Each of these steps performs multiple analyses on a document, and these analyses can be run in parallel.
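
Schematically, the pipeline looks something like the sketch below. The analysis functions and the pruning threshold are hypothetical placeholders, not the actual Data2Semantics implementation.

```python
# Schematic of the four-step pipeline; hypothetical placeholders, NOT the
# actual implementation. Analyses map preprocessed documents to candidate
# dependency relations, represented here as dicts with a 'score' entry.
from concurrent.futures import ThreadPoolExecutor

def reconstruct_provenance(documents, analyses, threshold=0.5):
    # Step 1, preprocessing (e.g. text and metadata extraction), is assumed
    # to have produced `documents` already.
    hypotheses = []
    with ThreadPoolExecutor() as pool:
        # Step 2: hypothesis generation; independent analyses run in parallel.
        for result in pool.map(lambda analysis: analysis(documents), analyses):
            hypotheses.extend(result)
    # Step 3: hypothesis pruning; drop weakly supported candidates.
    surviving = [h for h in hypotheses if h["score"] >= threshold]
    # Step 4: aggregation and ranking by score.
    return sorted(surviving, key=lambda h: h["score"], reverse=True)
```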


Our first results, from running this pipeline on a Dropbox folder (including version history), are encouraging: we achieve an F-score of 0.7 when comparing the generated dependency relations to manually specified ones (our gold standard).
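
For reference, the F-score here is the standard harmonic mean of precision and recall over the generated and gold-standard relation sets; a minimal sketch:

```python
# Standard F-score of generated relations against the manual gold standard.
def f_score(predicted, gold):
    """`predicted` and `gold` are sets of dependency relations."""
    true_positives = len(predicted & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)
```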

Sara Magliacane has a paper accepted at the ISWC 2012 Doctoral Consortium, where she explains her approach and methodology.


Update: Linked Data Replication

Posted by data2semantics in collaboration | computer science | large scale | semantic web | vu university amsterdam - (Comments Off on Update: Linked Data Replication)

Source: Data2Semantics

What if Dolly was only a partial replica?

Work package 3 of Data2Semantics centers around the problem of ranking Linked Data.

Over the past year, we have identified partial replication as a core use case for ranking.

The amount of linked data is growing rapidly, especially in the life sciences and medical domain. At the same time, this data is increasingly dynamic: information is added and removed at a much more granular level than before. Where until recently the Linked Data cloud grew one dataset at a time, these datasets are now ‘live’ and can change as fast as every minute.

For most Linked Data applications, the cloud as a whole, but also the individual datasets, are too big to handle. In some cases, such as the clinical decision support use case of Data2Semantics, one may need to support offline access from a tablet computer. Another use case is semantic web development, which requires a testing environment: running an experiment against a full dataset simply takes too much time.

The question, then, is how we can select a subset of the original dataset that is sufficiently representative for the application's purpose, while at the same time ensuring timely synchronisation with the original dataset whenever updates take place.
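
As a naive illustration of the selection step, the sketch below ranks triples by node degree and keeps only the top slice; it assumes rdflib, and degree is merely a stand-in for a real ranking measure (our actual selection criteria are still under investigation).

```python
# Naive sketch of selection for partial replication, assuming rdflib and
# using node degree as a stand-in for a real ranking measure.
from collections import Counter
from rdflib import Graph

def partial_replica(g, fraction=0.1):
    """Keep only the `fraction` of triples touching the best-connected nodes."""
    degree = Counter()
    for s, p, o in g:
        degree[s] += 1
        degree[o] += 1
    ranked = sorted(g, key=lambda t: degree[t[0]] + degree[t[2]], reverse=True)
    replica = Graph()
    for triple in ranked[: int(len(ranked) * fraction)]:
        replica.add(triple)
    return replica
```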

A first version of the partial replication implementation was integrated in the latest version of the Hubble CDS Prototype.

Laurens Rietveld has a paper accepted at the ISWC 2012 Doctoral Consortium, where he explains his research approach and methodology. You can find the paper at http://www.mendeley.com/research/replication-for-linked-data/


Source: Data2Semantics

For over a year now, Data2Semantics has been organizing biweekly lunches for all COMMIT projects running at VU University Amsterdam under the header ‘COMMIT@VU’. These projects are SEALINCMedia, EWIDS, METIS, e-Infrastructure Virtualization for e-Science Applications, Data2Semantics, and eFoodLab.

On October 29th, we had a lively discussion about opportunities to collaborate across projects (see photo).

Concretely:

  • SEALINCMedia and Data2Semantics will collaborate on trust and provenance. Trust and provenance analysis technology developed in SEALINCMedia can benefit from the extensive provenance graphs constructed in Data2Semantics, and vice versa.
  • There is potential for integrating the Amalgame vocabulary alignment toolkit (SEALINCMedia) into the Linkitup service of Data2Semantics. Amalgame can also play a role in the census use case of Data2Semantics (aligning vocabularies of occupations through time).
  • Both the SEALINCMedia and Data2Semantics projects are working on annotation methodologies and tools. Both adopt the Open Annotation model, and require multi-layered annotations (i.e. annotations on annotations).
  • eFoodLab and Data2Semantics are both working on annotating and interpreting spreadsheet data. We already had a joint meeting on this topic last year, and it makes sense to revive that collaboration.
  • We decided to gather the vocabularies and datasets used by the various projects, to make clearer where expertise in using them lies. Knowing at a glance who else is using a particular dataset or vocabulary can be very helpful: you know whose door to knock on if you have questions or want to share experiences.
A first version of the COMMIT Data and Vocabulary catalog is already online at:

Source: Data2Semantics

A quick heads up on the progress of Data2Semantics over the course of the third quarter of 2012.

Management summary: we have made headway in developing data enrichment and analysis tools that will be useful in practice.

First of all, we developed a first version of a tool for enriching and interlinking data stored in the popular Figshare data repository. This tool, called Linkitup, takes metadata provided by the data publisher (usually the author) and can link it to DBPedia/Wikipedia, DBLP, ORCID, ScopusID, and the Elsevier Linked Data Repository. These links are fed back to the publication in Figshare, but can also be published separately as RDF. This way, we use Web 2.0 mechanisms that are popular amongst researchers to let them enrich their data and reap immediate benefit. Our plan is to integrate increasingly elaborate enrichment and analysis tools into this dashboard (e.g. annotation, complexity analysis, provenance reconstruction, etc.).

Linkitup is available from http://github.com/Data2Semantics/linkitup. We are aiming for a first release soon!

Furthermore, we have made a good start at reconstructing provenance information from documents stored in a Dropbox folder. This technology can be used to trace back research ideas through chains of publications in a way that augments and extends the citation network. The provenance graph does not necessarily span just published papers, but can be constructed for a variety of document types (posters, presentations, notes, documents, spreadsheets, etc.).

Finally, we have a first implementation of (partial) linked data replication, which will make dealing with large volumes of linked data much more manageable. The crux of partial replication lies in the selection (i.e. ranking) of a suitable subset of the data to replicate. We will use information measures on the data, graph analysis, and statistical methods to perform this selection. The clinical decision support use case will be the primary testing ground for this technology.


Source: Semantic Web world for you
Here it is: the first fully featured release of SemanticXO! Use it in your activities to store and share any kind of structured information with other XOs. The installation procedure is easy and only requires an XO-1 running operating system version 12.1.0. Go to the GIT repository and download the files “setup.sh” and “semanticxo.tar.gz” […]


Source: Semantic Web world for you
Reblogged from The World Wide Semantic Web: Opening of the festival (see more photos). This year the Open Knowledge Festival took place in Helsinki from September 17 to September 22. Victor de Boer and myself (Christophe Guéret) went to this huge conference (1,000 participants from 100 nations) to speak about Open Development, one of the 13 […]