Source: Think Links
Since moving to Europe I’ve been attending ESWC (the Extended, formerly European, Semantic Web Conference), and I always get something out of the event. There are plenty of familiar faces but also quite a few new people, and it’s a great environment for having chats. In addition, the quality of the content is always quite good. This year the event was held in Montpellier and was for the most part well organized: the main conference wifi worked!
- 300 participants
- 42 accepted papers from 162 submissions
- 26% acceptance rate
- 11 workshops + 7 tutorials
So what was I doing there?
- I presented a paper in the Sepublica workshop co-authored with Sara Magliacane on Repurposing Benchmarks for Provenance Reconstruction.
- With Jun Zhao and Olaf Hartig, we gave a half-day tutorial on PROV.
- I presented Open PHACTS at the EU Project Networking Session.
The VU Semantic Web group also had a strong showing:
- Albert Meroño-Peñuela won the best PhD symposium paper for his work on digital humanities and the semantic web.
- Datasets from the USEWOD workshop (led by Laura Hollink) were used by a number of main track papers for evaluation.
- Stefan Schlobach and Laura Hollink were on the organizing committee, and we organized a couple of workshops and tutorials.
- Albert Meroño-Peñuela, Rinke Hoekstra, Andrea Scharnhorst, Christophe Guéret and Ashkan Ashkpour. Longitudinal Queries over Linked Census Data.
- Niels Ockeloen, Victor de Boer and Lora Aroyo. LDtogo: A Data Querying and Mapping Framework for Linked Data Applications.
- Several workshop papers.
I’ll try to pull out what I thought were the highlights of the event.
What is a semantic web application?
The keynotes from Enrico Motta and David Karger focused on trying to define what a semantic web application is. This starts with the question: does a Semantic Web application need to use the Semantic Web set of standards (e.g. RDF, OWL, etc.)? From my perspective, the answer is no. These standards are great infrastructure for building such applications, but they are not necessary (see the Google Knowledge Graph). So then what is a semantic web application?
From what I could gather, Motta would define it as an application that is scalable, uses the web, and embraces model-theoretic semantics. For me that’s rather limiting: there are many other semantics that may be appropriate, and we can ground meaning in something other than model theory. I think a good example of this is the work on Pragmatic Semantics that my colleague Stefan Schlobach presented at the Artificial Intelligence meets the Semantic Web workshop. Or we can reach back into AI and see the discussion in Brooks’ classic paper Elephants Don’t Play Chess. I felt that Karger’s definition (in what was a great keynote) was getting somewhere. He defined a semantic web application essentially as:
An application whose schema is expected to change.
This seems to me to capture the semantic portion of the definition, in the sense that the semantics need to be understood on the fly. However, I think we need to roll the web back into this definition… Overall, I thought this discussion was worth having, and it helps the field define what it is that we are aiming at. To be continued…
As I said, I thought Karger’s keynote was great. He gave a talk within a talk, on the subject of homebrew databases from this paper in CHI 2011:
Amy Voida, Ellie Harmon, and Ban Al-Ani. 2011. Homebrew databases: complexities of everyday information management in nonprofit organizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 915-924. DOI=10.1145/1978942.1979078 http://doi.acm.org/10.1145/1978942.1979078
They define a homebrew database as “an assemblage of information management resources that people have pieced together to satisfice their information management needs.” This is just what we see all the time: the combination of Excel, Word, email, databases and, don’t forget, plain paper, brought together to attack information management problems. A number of our use cases from the pharma industry, as well as from science, reflect essentially this practice. It’s great to see a good definition of this problem grounded in ethnographic studies.
The Concerns of Linking
There were a couple of good papers on generating linkage across datasets (the central point of linked data). In Open PHACTS, we’ve been dealing with the notion of essentially context-dependent linkages. I think this notion is becoming more prevalent in the community; we had a lot of positive response on it in the poster session when presenting Open PHACTS. Probably my favorite paper was on linking the Smithsonian American Art Museum to the Linked Data cloud. They use PROV to drive their link generation, essentially proposing links to humans, who then verify the connections. See:
- Pedro Szekely, Craig Knoblock, Fengyu Yang, Xuming Zhu, Eleanor Fink, Rachel Allen and Georgina Goodlander. Connecting the Smithsonian American Art Museum to the Linked Data Cloud.
I also liked the following paper on which hardware environment you should use when doing link discovery. Result: use GPUs, they’re fast!
- Axel-Cyrille Ngonga Ngomo, Lars Kolb, Norman Heino, Michael Hartung, Sören Auer and Erhard Rahm. When to Reach for the Cloud: Using Parallel Hardware for Link Discovery.
Additionally, I think the following paper is cool because they use network statistics not just to measure but to do something, namely create links:
- Bernardo Pereira Nunes, Stefan Dietze, Marco Antonio Casanova, Ricardo Kawase, Besnik Fetahu and Wolfgang Nejdl. Combining a Co-occurrence-based and a Semantic Measure for Entity Linking.
APIs were a growing theme of the event, with things like the Linked Data Platform working group and the successful SALAD workshop (fantastic acronym), although I was surprised people in the workshop hadn’t heard of the Linked Data API. We had a lot of good feedback on the Open PHACTS API. It’s just the case that there is more developer expertise for using web service APIs than for semweb tech. I’ve actually seen a lot of demand for semweb skills, and while we are doing our best to train people, there is still this gap. It’s good, then, that we are thinking about how these two technologies can play together nicely.
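One concrete place where web APIs and semweb tech meet is HTTP content negotiation: the same resource URI can serve HTML to a browser and RDF to a semweb client, depending on the Accept header. As a minimal sketch (the supported media types below are hypothetical, not the actual Open PHACTS configuration), the server-side choice boils down to ranking the client’s preferences by q-value against what the service can produce:

```python
# Minimal sketch of server-side content negotiation for a Linked Data API.
# The SUPPORTED list is a hypothetical example, not any real service's config.
SUPPORTED = ["text/html", "application/rdf+xml", "text/turtle", "application/json"]

def negotiate(accept_header, supported=SUPPORTED):
    """Pick the best supported media type for an HTTP Accept header,
    honoring q-values; returns None if nothing matches."""
    prefs = []
    for i, part in enumerate(accept_header.split(",")):
        pieces = part.strip().split(";")
        media = pieces[0].strip()
        q = 1.0
        for param in pieces[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs.append((media, q, i))
    # Highest q wins; ties broken by order of appearance in the header.
    prefs.sort(key=lambda p: (-p[1], p[2]))
    for media, q, _ in prefs:
        if q <= 0:
            continue
        if media == "*/*":
            return supported[0]
        if media.endswith("/*"):
            prefix = media.split("/")[0] + "/"
            for s in supported:
                if s.startswith(prefix):
                    return s
        elif media in supported:
            return media
    return None
```

A client sending `Accept: text/turtle` gets Turtle; a browser sending `*/*` falls back to HTML. Real servers layer caching, `Vary` headers, and more careful wildcard precedence on top of this.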
- We should support content negotiation in dev.openphacts.org.
- The Semantic Graph Management Tutorial had good introductory slides (ask me if you want them).
- The Benchmark Handbook edited by Jim Gray
- http://sdshare.org is worth a look for publishing updates to RDF-based info.
- Bio2RDF version 2 is looking great. Lots of updates and they use PROV.
- Tobias Kuhn talked about nanopublications.
- I’m getting more and more convinced that you can get good results with SPARQL to SQL translators.
- IPTC news subject codes in SKOS.
- LOD cloud colored by license. Huge and neglected issue.
- I should stop having high hopes for panels.
- Golden rule of session organization: be on time.