News and Updates on the KRR Group

TabLinker is experimental software for converting manually annotated Microsoft Excel workbooks to the RDF Data Cube vocabulary. It is used in the context of the Data2Semantics project to investigate the use of Linked Data for humanities research (Dutch census data produced by DANS).

TabLinker was designed to convert Excel or CSV files to RDF (triplification, RDF-izing) when they have a complex layout and cannot be handled by fully automatic csv2rdf scripts.

A presentation about Linked Census Data, including TabLinker, is available on SlideShare.

Please consult the GitHub page for the latest release information.

Using TabLinker

TabLinker takes annotated Excel files (located via the srcMask option in the config.ini file) and converts them to RDF. The resulting RDF is serialized to the folder specified by the targetFolder option in config.ini.
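For reference, the relevant part of config.ini might look roughly like this; only the srcMask and targetFolder option names come from the description above, so the section name and the example paths are assumptions:

  [paths]
  # srcMask is a glob pattern selecting the annotated source workbooks;
  # targetFolder is where the serialized RDF is written.
  # Both example values below are made up.
  srcMask      = ../input/*_marked.xls
  targetFolder = ../output/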

Annotations in the Excel file should be made using Excel's built-in cell style functionality (you can define these styles by hand). TabLinker currently recognises seven styles:

  • TabLink Title - The cell containing the title of a sheet
  • TabLink Data - A cell that contains data, e.g. a number for the population size
  • TabLink ColHeader - Used for the headers of columns
  • TabLink RowHeader - Used for row headers
  • TabLink HierarchicalRowHeader - Used for multi-column row headers with subsumption/taxonomic relations between the values of the columns
  • TabLink Property - Typically used for the header cells directly above RowHeader or HierarchicalRowHeader cells; the cell values are the properties that relate Data cells to RowHeader and HierarchicalRowHeader cells.
  • TabLink Label - Used for cells that contain a label for one of the HierarchicalRowHeader cells.

An eighth style, TabLink Metadata, is currently ignored (see #3).

An example of such an annotated Excel file is provided in the input directory. There are ways to import the styles defined in that file into your own Excel files.
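
If you want to verify which TabLink style each cell ended up with, the named styles can be read back programmatically. The snippet below is only a minimal sketch (it is not TabLinker's actual code); it assumes the xlrd library, an .xls workbook, and a made-up file name:

  import xlrd

  # Open the annotated workbook; formatting_info=True keeps the style records.
  # The path is a hypothetical example, not a file shipped with TabLinker.
  book = xlrd.open_workbook('input/example.xls', formatting_info=True)
  sheet = book.sheet_by_index(0)

  # Map style XF indices back to their user-visible names (e.g. 'TabLink Data').
  style_by_xf = dict((xf_index, name)
                     for name, (built_in, xf_index) in book.style_name_map.items())

  for rowx in range(sheet.nrows):
      for colx in range(sheet.ncols):
          cell_xf = book.xf_list[sheet.cell_xf_index(rowx, colx)]
          style = style_by_xf.get(cell_xf.parent_style_index, '')
          if style.startswith('TabLink'):
              print(rowx, colx, style, sheet.cell_value(rowx, colx))

Running this over an annotated workbook should simply list every annotated cell together with its TabLink style.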

Tip: If your table contains totals for HierarchicalRowHeader cell values, use a non-TabLink style to mark the cells between the level to which the total belongs and the cell that contains the name of the total. Have a look at the example annotated Excel file (up to row 428) to see how this is done.

Once you’re all set, start TabLinker by cd-ing to the src folder and running:

python tablinker.py

Requirements

TabLinker was developed under the following environment:

Source: Think Links

Update: A version of this post appeared in SURF magazine (on the back page) in their trendwatching column.

Technology at its best lets us do what we want to do without being held back by time-consuming or complex processes. We see this in great consumer technology: your phone giving you directions to the nearest cafe, your calendar reminding you of a friend’s birthday, or a website telling you what films are on. Good technology removes friction.

While attending the SURF Research Day, I was reminded that this idea of removing friction through technology shouldn’t be limited to consumer or business environments but should also be applied in academic research settings. The day showcased a variety of developments in information technology that help researchers do better research. Because SURF is a Dutch organization, there was a particular focus on developments here in the Netherlands.

The day began with a fantastic keynote from Cameron Neylon outlining how networks qualitatively change how research can be communicated. A key point was that to create the best networks we need to make research communication as frictionless as possible. You can find his longer argument here. After Cameron’s talk, Jos Engelen, the chairman of the NWO (the Dutch NSF), gave some remarks. For me, the key take-away was that in every one of the Dutch Government’s 9 Priority Sectors, technology has a central role in smoothing both the research process and its transition to practice.

After the opening session, there were four parallel sessions on text analysis, dealing with data, profiling research, and technology for research education. I managed to attend parts of three of the sessions. In the profiling session, the recently released SURF report on tracking the impact of scholarly publications in the 21st century sparked my interest. Finding new, faster, and broader ways of measuring impact (i.e. altmetrics) is a way of reducing friction in science communication. The ESCAPE project showed how enriched publications can make it easy to collate and browse related content around traditional articles; the project won SURF’s enriched publication of the year award. Again, the key was simplifying the research process. Beyond these presentations, there were talks ranging from making it easier to do novel chemistry to helping religious scholars understand groups through online forms. In each case, the technology was successful because it eliminated friction in the research process.

The SURF Research Day presented not just technology, but also how, when it’s done right, technology can make research just a bit smoother.

Filed under: academia, altmetrics Tagged: events, ozdag, surffounation

Source: Think Links

I had a nice opportunity to start out this year with a visit to the Information Sciences Institute (ISI) in Southern California’s beautiful Marina del Rey. I did my postdoc with Yolanda Gil at ISI, and we have continued to have an active collaboration, most recently on using workflows to expose networks from linked data.

I always get a jolt of information visiting ISI. Here are five pointers to things I learned this time:

1. The Karma system [github] is really leading the way on bringing data integration techniques to linked data. I’ll definitely be looking at Karma with respect to our development of the Open PHACTS platform.

2. I’m excited about change detection algorithms, in particular edit-distance-related measures, for figuring out how to generate rich provenance information in the Data2Semantics project. These are pretty well-studied algorithms, but I think we should be able to apply them differently. A good place to start is the paper:
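
(For concreteness, the basic measure in this family is the Levenshtein edit distance; here is a purely illustrative textbook sketch in Python, not something taken from that paper or from Data2Semantics.)

  def edit_distance(a, b):
      # Levenshtein distance: the minimum number of insertions, deletions
      # and substitutions needed to turn sequence a into sequence b.
      prev = list(range(len(b) + 1))
      for i, x in enumerate(a, start=1):
          curr = [i]
          for j, y in enumerate(b, start=1):
              curr.append(min(prev[j] + 1,              # delete x
                              curr[j - 1] + 1,          # insert y
                              prev[j - 1] + (x != y)))  # substitute (or keep)
          prev = curr
      return prev[-1]

  # e.g. quantify how much a record changed between two versions
  print(edit_distance("provenance", "provenience"))  # -> 2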

3. Also with respect to provenance, after talking with Greg Ver Steeg, I think Granger Causality and some of the other associated statistical models are worth looking at. Some pointers:

4. Tran Thanh gave a nice overview of his work on Semantic Search. I liked how he combined and extended the information retrieval and database communities’ work using Semantic Web techniques. Keyword: Steiner Trees.

5. MadSciNetwork is a site where scientists answer questions from the public. It has been around since 1995 and has collected over 40,000 answered science questions. This corpus of questions is available at MadSci Network Research. Very cool.

Finally… it’s nice to visit Southern California in January when you live in cold Amsterdam :-)

Filed under: academia, linked data, provenance markup Tagged: pointers

Source: Semantic Web world for you
The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog. Here’s an excerpt: A San Francisco cable car holds 60 people. This blog was viewed about 2,800 times in 2011. If it were a cable car, it would take about 47 trips to carry that many people. Click here to see the [...]

Source: Think Links

The VU University Amsterdam computer science department has been a pioneer in putting structured data and the Semantic Web into the undergraduate curriculum through our Web-based Knowledge Representation course. I’ve had the pleasure of teaching the class for the past 3 years. The class is done in a short block of 8 weeks (7 weeks if you give them a week for exams). It’s a fairly complicated class for second-year undergraduates, but each year the technology becomes easier, making it easier for the students to ground the concepts of KR and Web-based data in applications.

The class involves 6 lectures covering the main ground of Semantic Web technologies and KR. We then give the students 3 1/2 weeks to design and, hopefully, build a Semantic Web application in pairs. During this time we give one-on-one support through appointments. For most students, this is the first time they’ve come into contact with Semantic Web technologies.

This year they built applications based on the Times Higher Education 2011 World University Rankings. They converted databases to RDF, developed their own ontologies, integrated data from the linked data cloud, and visualized data using SPARQL. I was impressed with all the work they did and I wanted to share some of their projects. Here are four screencasts of the applications the students built.

Points of Interest Around Universities

Guess Which University

Find Universities by Location

SPARQL Query Builder for University Info



Filed under: academia, linked data Tagged: education, linked data, semantic web, student, vu university amsterdam, web-based knowledge representation

Source: Think Links

It’s nice to see where I work (VU University Amsterdam) putting out some nifty promotional videos on YouTube. Here are two from the Computer Science department and the Network Institute, both of which I’m happy to be part of.

Filed under: academia Tagged: computer science, network institute, vu university amsterdam
