News and Updates on the KRR Group

Source: Think Links

It’s been about two weeks since we had the altmetrics11 workshop at Web Science 2011, but I was swamped with the ISWC conference deadline, so I am only now getting around to posting about it.

The aim of the workshop was to bring together the people working on next-generation, Web-based measures of science. Importantly, as organizers, Jason, Dario and I wanted to encourage the growth of the scientific side of altmetrics.

The workshop turned out to be far better than I expected: we had roughly 36 attendees, well beyond our expectations.

There was nice representation from my institution (VU University Amsterdam), including talks by my collaborators Peter van den Besselaar and Julie Birkholtz, but we also had attendees from Israel, the UK, the US and all over Europe. People were generally excited about the event, and the discussions went well (although the room was really warm). I think we all had a good time at the restaurant, the Alt-Coblenz (highly recommended, by the way, and an appropriate name). Thanks to the WebSci organizing team for putting this together.

We had a nice mix of social scientists and computer scientists (roughly 16 and 20, respectively), including representation from the bibliometrics community, social studies of science, and computer science.

For an emerging community, there was also a real honesty about the research: good results were shown, but almost every author discussed where the gaps were in their own work.

Two discussions stood out for me. The first was on how we evaluate altmetrics. Mike Thelwall, who gave the keynote (great job, by the way), suggested using correlations with the journal impact factor to help demonstrate that there is something scientifically valid in what you’re measuring. What you want is not perfect correlation but correlation with a gap, and that gap is what your new alternative metric is then measuring. Peter van den Besselaar added that we should look more closely at how our metrics match what scientists do in practice (i.e., qualitative studies): for example, do our metrics correlate with promotions or hiring?

The second discussion was around where to go next with altmetrics. In particular, there was a discussion on how to position altmetrics as a research field, and it seemed natural to position it within and across the fields of science studies (i.e., scientometrics, webometrics, virtual ethnography). Importantly, it was felt that we needed a good common corpus of information in order to do comparative studies of metrics. Altmetrics has a data-acquisition problem: some people are interested in acquiring data, while others want to focus on metric generation and evaluation. A corpus of traces of science online was felt to be a good way to interconnect data acquisition and metric generation and to allow for such comparative studies. But how to build the corpus… Suggestions welcome.
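As a rough sketch of the correlation-based evaluation Thelwall suggested, the Python snippet below computes a rank correlation between a new altmetric and journal impact factors. All numbers are invented for illustration, and this is just one way one might operationalize the idea:

```python
# A minimal sketch of the evaluation idea described above: correlate a
# new altmetric against the journal impact factor (JIF).
# The data here is invented; in practice you would load real per-article metrics.
from scipy.stats import spearmanr

# Hypothetical paired observations per article: (JIF of venue, altmetric score)
jif = [2.1, 4.5, 0.9, 7.8, 3.3, 1.2, 5.6, 2.8]
altmetric = [14, 40, 22, 51, 30, 35, 48, 20]

rho, p_value = spearmanr(jif, altmetric)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# A moderate positive rho suggests the altmetric tracks something related
# to established impact; the imperfect part of the correlation (the "gap")
# is what the new metric may be capturing beyond the JIF.
```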

The attendees wanted to have an altmetrics12, so I’m pretty sure we will do that. Additionally, we will have some exciting news soon about a journal special issue on altmetrics.

Some more links:

Abstracts of all talks

Community Notes

Also, could someone leave a link to the twitter archive in the comments? That would be great.


Chris Welty


On Monday, June 27th from 4-5 pm, Chris Welty, of the IBM T.J. Watson Research Center, will give a talk on Watson in room HG-01A05 of the VU main building.

Title: Inside the mind of Watson

Abstract:
Watson is a computer system capable of answering rich natural-language questions, and of estimating its confidence in those answers, at the level of the best humans at the task. On Feb 14-16, in a historic event, Watson triumphed over the best human players of all time on the American television quiz show Jeopardy! In this talk I will discuss, at a high level, how Watson works, with examples from the show.
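As a toy illustration of the confidence estimation the abstract mentions, the sketch below shows the general "answer only when confident" scheme, in the spirit of deciding whether to buzz in. This is not Watson's actual architecture; the candidates, scores and threshold are all invented:

```python
# A toy "answer only when confident" scheme: pick the best-scored
# candidate answer, but abstain if confidence is below a threshold.
# NOT Watson's real pipeline; purely illustrative.

def answer_question(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs from upstream scorers."""
    best_answer, best_conf = max(candidates, key=lambda c: c[1])
    if best_conf >= threshold:
        return best_answer, best_conf
    return None, best_conf  # abstain rather than risk a wrong answer

# Hypothetical candidate answers for a single clue:
candidates = [("Toronto", 0.14), ("Chicago", 0.72), ("Boston", 0.09)]
print(answer_question(candidates))  # -> ('Chicago', 0.72)
```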

Bio:
Chris Welty is a Research Scientist at the IBM T.J. Watson Research Center in New York. Previously, he taught Computer Science at Vassar College, taught at and received his Ph.D. from Rensselaer Polytechnic Institute, and accumulated over 14 years of teaching experience before moving to industrial research. Chris’s principal area of research is Knowledge Representation, specifically ontologies and the Semantic Web, and he spends most of his time applying this technology to Natural Language Question Answering as a member of the DeepQA/Watson team and, in the past, to Software Engineering. Dr. Welty is a co-chair of the W3C Rule Interchange Format (RIF) Working Group, serves on the steering committee of the Formal Ontology in Information Systems conferences, is president of KR.ORG, serves on the editorial boards of AI Magazine, the Journal of Applied Ontology, and the Journal of Web Semantics, and was an editor in the W3C Web Ontology Working Group. While on sabbatical in 2000, he co-developed the OntoClean methodology with Nicola Guarino. Chris Welty’s work on ontologies and ontology methodology has appeared in CACM and numerous other publications.


The LarKC folk at the German High Performance Computing Centre in Stuttgart did a rather nice write-up on LarKC from a high-performance computing perspective, intended for their own community. Find the relevant pages here.


A LarKC workflow for traffic-aware route-planning has won the 1st prize in the AI Mashup Challenge at the ESWC 2011 conference, held this week on Crete.

The details of “Traffic_LarKC” can be found at https://sites.google.com/a/fh-hannover.de/aimashup11/home/traffic_larkc, but in brief:

Four different datasets are used:

  • the traffic sensor data, obtained from the Milano Municipality
  • the Milano street topology
  • historical weather data from the Italian website ilMeteo.it
  • calendar information (weekdays and weekend days, holidays, etc.) from the Milano Municipality and from the Mozilla Calendar project.

These are used in a batch-time workflow to predict the traffic situation over the next two hours and in a runtime workflow to respond to route-planning queries from users.
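As a very rough sketch of how such a two-stage setup could look, the Python snippet below separates a batch-time prediction step from a runtime route-planning step. It uses networkx in place of the actual LarKC plug-ins, and every dataset, function name and number is invented for illustration:

```python
# A rough sketch of the batch-time / runtime split described above,
# with networkx standing in for the actual LarKC workflow components.
import networkx as nx

# Batch-time: combine sensor, weather and calendar inputs into predicted
# travel times (in seconds) per street segment for the coming hours.
def predict_travel_times(sensor_data, weather, calendar):
    # Stand-in for the real prediction model; returns edge -> seconds.
    return {("A", "B"): 120, ("B", "C"): 300, ("A", "C"): 600}

# Runtime: answer a route-planning query against the latest predictions.
def plan_route(predictions, origin, destination):
    g = nx.DiGraph()
    for (src, dst), seconds in predictions.items():
        g.add_edge(src, dst, weight=seconds)
    return nx.shortest_path(g, origin, destination, weight="weight")

predictions = predict_travel_times(sensor_data=None, weather=None, calendar=None)
print(plan_route(predictions, "A", "C"))  # -> ['A', 'B', 'C'] (420 s beats 600 s direct)
```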

This LarKC workflow shows that Linked Open Data and the corresponding technologies are now getting good enough to compete with what’s possible in closed commercial systems.

Congratulations to the entire team that has made this possible!

LarKC traffic demo