News and Updates on the KRR Group

Source: Think Links

Update: A version of this post appeared in SURF magazine (on the back page) in their trendwatching column.

Technology at its best lets us do what we want to do without being held back by time-consuming or complex processes. We see this in great consumer technology: your phone giving you directions to the nearest cafe, your calendar reminding you of a friend’s birthday, or a website telling you what films are on. Good technology removes friction.

While attending the SURF Research Day, I was reminded that this idea of removing friction through technology shouldn’t be limited to consumer or business environments but should also be applied in academic research settings. The day showcased a variety of developments in information technology that help researchers do better research. Because SURF is a Dutch organization, there was a particular focus on developments here in the Netherlands.

The day began with a fantastic keynote from Cameron Neylon outlining how networks qualitatively change how research can be communicated. A key point was that to create the best networks we need to make research communication as frictionless as possible. You can find his longer argument here. After Cameron’s talk, Jos Engelen, the chairman of NWO (the Dutch NSF), gave some remarks. For me, the key takeaway was that in every one of the Dutch Government’s 9 Priority Sectors, technology has a central role in smoothing both the research process and its transition to practice.

After the opening session, there were four parallel sessions on text analysis, dealing with data, profiling research, and technology for research education. I managed to attend parts of three of the sessions. In the profiling session, the recently released SURF report on tracking the impact of scholarly publications in the 21st century sparked my interest. Finding new, faster, and broader ways of measuring impact (i.e. altmetrics) is a way of reducing friction in science communication. The ESCAPE project showed how enriched publications can make it easy to collate and browse related content around traditional articles; the project won SURF’s enriched publication of the year award. Again, the key was simplifying the research process. Beyond these presentations, there were talks ranging from making it easier to do novel chemistry to helping religious scholars understand groups through online forums. In each case, the technology was successful because it eliminated friction in the research process.

The SURF Research Day presented not just technology but also how, when done right, technology can make research just a bit smoother.

Filed under: academia, altmetrics Tagged: events, ozdag, surffoundation

Source: Think Links

I had a nice opportunity to start out this year with a visit to the Information Sciences Institute (ISI) in Southern California’s beautiful Marina del Rey. I did my postdoc with Yolanda Gil at ISI, and we have continued an active collaboration, most recently working on using workflows to expose networks from linked data.

I always get a jolt of information visiting ISI. Here are five pointers to things I learned this time:

1. The Karma system [github] is really leading the way on bringing data integration techniques to linked data. I’ll definitely be looking at Karma with respect to our development of the Open PHACTS platform.

2. I’m excited about change detection algorithms, in particular edit-distance-related measures, for figuring out how to generate rich provenance information in the Data2Semantics project. These are pretty well-studied algorithms, but I think we should be able to apply them differently. A good place to start is the paper:
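To make the idea a bit more concrete, here’s a minimal sketch (my own illustration, not Data2Semantics code) of using plain Levenshtein edit distance to flag changed records between two versions of a dataset, which could then trigger emitting a provenance derivation record; the records here are invented:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

# Compare two versions of a record set: any record whose value drifted
# is a candidate for recording a provenance derivation step.
old = {"rec1": "Amsterdam", "rec2": "Marina del Rey"}
new = {"rec1": "Amsterdam, NL", "rec2": "Marina del Rey"}
for key in old:
    d = edit_distance(old[key], new[key])
    if d > 0:
        print(f"{key} changed (edit distance {d})")
```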

3. Also with respect to provenance, after talking with Greg Ver Steeg, I think Granger Causality and some of the other associated statistical models are worth looking at. Some pointers:
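For readers who want to experiment, here’s a minimal sketch of a Granger causality test using statsmodels on synthetic data (the series are invented; this isn’t from the ISI discussion):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Two synthetic series where y follows x with a one-step lag,
# so x should be found to "Granger-cause" y.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 1) + 0.1 * rng.normal(size=200)

# Column order matters: the test asks whether the *second* column
# helps predict the first beyond the first column's own history.
data = np.column_stack([y, x])
grangercausalitytests(data, maxlag=2)  # prints F-test results per lag
```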

4. Tran Thanh gave a nice overview of his work on Semantic Search. I liked how he combined and extended the information retrieval and database communities’ work using Semantic Web techniques. Keyword: Steiner Trees
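The Steiner tree connection, roughly: each keyword matches nodes in a data graph, and a small tree connecting the matched nodes is one candidate answer structure. A toy sketch using the networkx approximation (my own illustration, not the system presented; the graph is invented):

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy "data graph": nodes are entities, edges are relations.
G = nx.Graph()
G.add_edges_from([
    ("paper:42", "author:gil"), ("paper:42", "topic:provenance"),
    ("author:gil", "org:isi"), ("topic:provenance", "org:isi"),
    ("org:isi", "place:marina_del_rey"),
])

# Keyword search: each keyword matched a node; an (approximately)
# minimal tree connecting the matches is a candidate answer.
terminals = ["author:gil", "topic:provenance", "place:marina_del_rey"]
answer = steiner_tree(G, terminals)
print(sorted(answer.edges()))
```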

5. MadSci Network is a site where scientists answer questions from the public. It has been around since 1995 and has collected over 40,000 answered science questions. This corpus of questions is available at MadSci Network Research. Very cool.

Finally… it’s nice to visit Southern California in January when you live in cold Amsterdam :-)

Filed under: academia, linked data, provenance markup Tagged: pointers

Source: Semantic Web world for you
The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog. Here’s an excerpt: A San Francisco cable car holds 60 people. This blog was viewed about 2,800 times in 2011. If it were a cable car, it would take about 47 trips to carry that many people. Click here to see the […]

Source: Think Links

The VU University Amsterdam computer science department has been a pioneer in putting structured data and the Semantic Web into the undergraduate curriculum through our Web-based Knowledge Representation course. I’ve had the pleasure of teaching the class for the past 3 years. The class is done in a short block of 8 weeks (7 weeks if you give them a week for exams). It’s a fairly complicated class for second-year undergraduates, but each year the technology becomes easier to use, making it easier for the students to ground the concepts of KR and Web-based data in applications.

The class involves 6 lectures covering the major ground of Semantic Web technologies and KR. We then give them 3 1/2 weeks to design and hopefully build a Semantic Web application in pairs. During this time we give one-on-one support through appointments. For most students, this is the first time they’ve come into contact with Semantic Web technologies.

This year they built applications based on the Times Higher Education 2011 World University Rankings. They converted databases to RDF, developed their own ontologies, integrated data from the linked data cloud, and visualized data using SPARQL. I was impressed with all the work they did, and I wanted to share some of their projects. Here are four screencasts from the applications the students built; a small sketch of the underlying pipeline follows the list.

Points of Interest Around Universities

Guess Which University

Find Universities by Location

SPARQL Query Builder for University Info
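As promised above, here’s a minimal rdflib sketch of the convert-then-query pipeline the students followed (the URIs, names, and ranks are invented for illustration; this is not the students’ code):

```python
import rdflib

# Invented mini-dataset in the spirit of the student projects:
# ranking records converted to RDF.
data = """
@prefix ex: <http://example.org/rankings#> .
ex:vu  ex:name "VU University Amsterdam" ; ex:rank 159 .
ex:uva ex:name "University of Amsterdam" ; ex:rank 92 .
"""

g = rdflib.Graph()
g.parse(data=data, format="turtle")

# SPARQL over the converted data: universities ordered by rank.
query = """
PREFIX ex: <http://example.org/rankings#>
SELECT ?name ?rank
WHERE { ?u ex:name ?name ; ex:rank ?rank . }
ORDER BY ?rank
"""
for name, rank in g.query(query):
    print(name, rank)
```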



Filed under: academia, linked data Tagged: education, linked data, semantic web, student, vu university amsterdam, web-based knowledge representation

Source: Think Links

It’s nice to see where I work (VU University Amsterdam) putting out some nifty promotional videos on YouTube. Here are two from the Computer Science department and the Network Institute, both of which I’m happy to be part of.

Filed under: academia Tagged: computer science, network institute, vu university amsterdam

Source: Semantic Web world for you

I’m currently spending some time at Yahoo Labs in Barcelona working with Peter Mika and his team on data analysis. Last week, I was invited to give a seminar on how we perform network-based analysis of Linked Data at the VU. The slides are embedded at the end of this post.

Essentially, we observe that focusing only on the triples (cf., for instance, a BTC snapshot) is not enough to explain some of the patterns observed in the Linked Data ecosystem. In order to understand what’s really going on, one has to take into account the data, its publishers/consumers, and the machines that serve it. Time also plays an important role and shouldn’t be neglected. This brings us to studying this ecosystem as a Complex System, and that’s one of the things keeping Paul, Frank, Stefan, Shenghui, and myself busy these days ;-)
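For those unfamiliar with the approach, here’s a minimal sketch of the basic projection step, turning a set of triples into a graph that standard network measures can be applied to (the triples are invented; this is not our analysis code):

```python
import networkx as nx

# Project RDF triples onto a directed graph of subjects and objects,
# keeping the predicate as an edge attribute.
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:alice", "ex:worksAt", "ex:vu"),
    ("ex:carol", "ex:worksAt", "ex:vu"),
]

G = nx.DiGraph()
for s, p, o in triples:
    G.add_edge(s, o, predicate=p)

# Standard network measures then apply directly.
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("pagerank:", nx.pagerank(G))
```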

Exploring Linked Data content through network analysis