Two very different disciplines meet: professor and philosopher Arianna Betti and computer scientist Stefan Schlobach are conducting joint research. Want to know more about this particular collaboration? Read more here
NWO has awarded 12 million euros to CLARIAH, a project to build a digital infrastructure for software, data, enrichment, search and analytics in the Humanities. Frank van Harmelen, Maarten de Rijke and Cees Snoek are among the nine scientists who form the core team of the project. See http://clariah.nl/, http://bit.ly/TERNC0 and http://bit.ly/1mWtnje for more details.
Recently the Linked Data Benchmark Council (LDBC) launched its portal http://ldbcouncil.org.
LDBC is an organization for benchmarking graph and RDF data management systems that grew out of an EU FP7 project of the same name (ldbc.eu).
LDBC will outlive the EU project: it is industry-supported and will operate with ldbcouncil.org as its web presence.
LDBC has also announced public drafts of its first two benchmarks. "Public draft" means that implementations of the benchmark software and technical specification documents are available and ready for public testing and comments. The two benchmarks are:
– the Semantic Publishing Benchmark (SPB – http://ldbcouncil.org/benchmarks/spb), which is based on the BBC use case and ontologies, and
– the Social Network Benchmark (SNB – http://ldbcouncil.org/benchmarks/snb), for which an interactive workload has been released. A Business Intelligence workload and a Graph Analytics workload will follow later on the same dataset. The SNB data generator was recently used in the ACM SIGMOD programming contest, which was about graph analytics.
The ldbcouncil.org website also hosts a blog with news and technical background on LDBC. The most recent post, by Peter Boncz, is about “Choke-Point based Benchmark Design”.
YASGUI
This year we again have a nice group of students (almost 70) following the third-year bachelor Semantic Web course. Until this year it was quite a hassle to combine the bits and pieces into a complete workflow, from ontology creation (in Protégé) to a nice SPARQL endpoint that reasons in OWL over the ontology and its instances.
Like in previous years, the instructors (Stefan Schlobach and Ronald Siebes) updated the assignment to reflect the latest developments in available toolkits and software. We were surprised at how easy it was to integrate these latest tools, and we can now do the following within 30 minutes on any machine:
– create a simple ontology in Protégé
– install a SPARQL endpoint with OWL reasoning (Jena Fuseki)
– import the ontology
– connect this local endpoint to YASGUI (http://yasgui.laurensrietveld.nl)
– run a federated query via YASGUI, combining results from our local endpoint with results from other endpoints (e.g. DBpedia); a sketch of such a query is shown below the list
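To make the last step concrete, here is a minimal sketch of such a federated query, written in Python with the SPARQLWrapper library. The local endpoint URL (http://localhost:3030/ds/sparql), the class ex:Lecturer and the owl:sameAs links are hypothetical placeholders; substitute whatever your own ontology defines. The first triple pattern is answered by the local Fuseki endpoint, while the SPARQL 1.1 SERVICE clause delegates the second pattern to DBpedia; the same query text can also be pasted into YASGUI and sent to the local endpoint.

```python
# Minimal sketch: a federated SPARQL query against a local Fuseki endpoint,
# pulling extra data from DBpedia via the SPARQL 1.1 SERVICE keyword.
# The endpoint URL, ex:Lecturer class and owl:sameAs links are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX ex:  <http://example.org/ontology#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?lecturer ?abstract WHERE {
  ?lecturer a ex:Lecturer ;              # answered by the local endpoint
            owl:sameAs ?dbpediaResource .
  SERVICE <http://dbpedia.org/sparql> {  # answered by DBpedia
    ?dbpediaResource dbo:abstract ?abstract .
    FILTER (lang(?abstract) = "en")
  }
}
LIMIT 10
"""

sparql = SPARQLWrapper("http://localhost:3030/ds/sparql")  # local Fuseki endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["lecturer"]["value"], "-", row["abstract"]["value"][:80])
```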
Conclusion: it is now within reach of many people to get a nice Semantic Web infrastructure up and running without a lot of suffering, and to connect it to the vast amount of external Linked Data available from various endpoints.
The Semantic Web works!
Here you can find a manual to do this yourself and, hopefully, come to share our conclusion.
On 27.03.2012, 04.00pm CET, the LOD2 project (http://lod2.eu) will offer the next free one-hour webinar, on LIMES. LIMES is a tool for time-efficient and lossless discovery of links across knowledge bases. It is an extensible, declarative framework that encapsulates manifold algorithms dedicated to the processing of structured data of any sort. Built with extensibility and easy integration in mind, LIMES allows implementing applications that integrate, consume and/or generate Linked Data. Within LOD2, it will be used for discovering links between knowledge bases.
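As a rough illustration of what link discovery means (this is not the LIMES API, just a toy sketch of the underlying idea), the snippet below compares resource labels from a source and a target knowledge base with a string-similarity measure and prints owl:sameAs candidate links that exceed an acceptance threshold. Frameworks such as LIMES express this declaratively and avoid the naive pairwise comparison through time-efficient filtering; all URIs, labels and the threshold here are made up for the example.

```python
# Toy sketch of link discovery between two knowledge bases (not the LIMES API):
# compare labels with a string-similarity measure and emit owl:sameAs candidates.
# Real frameworks avoid this naive O(n*m) loop with time-efficient filtering.
from difflib import SequenceMatcher

# Hypothetical (URI, label) pairs extracted from two knowledge bases.
source = [("http://example.org/city/adam", "Amsterdam"),
          ("http://example.org/city/rdam", "Rotterdam")]
target = [("http://dbpedia.org/resource/Amsterdam", "Amsterdam"),
          ("http://dbpedia.org/resource/Rotterdam", "Rotterdam, Netherlands")]

THRESHOLD = 0.8  # acceptance threshold for a candidate link

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for s_uri, s_label in source:
    for t_uri, t_label in target:
        if similarity(s_label, t_label) >= THRESHOLD:
            print(f"<{s_uri}> owl:sameAs <{t_uri}> .")
```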
The LOD2 webinar series is powered by the LOD2 project and is organised and produced by the Semantic Web Company (Austria). If you are interested in Linked (Open) Data principles and mechanisms, LOD tools & services, and concrete use cases that can be realised using LOD, then join us in the LOD2 webinar series!
When: 27.03.2012, 04.00pm – 05.00pm CET
Information & Registration: https://www2.gotomeeting.com/register/369667514
The LOD2 team is looking forward to meeting you at the webinar! All the best and have a nice day!
With effect from November 15th there is a vacancy for a
PhD student CEDAR – Linked data
38 hours a week (1.0 fte) (for 48 months in total)
(Vacancy number DANS-2011-CEDARPhD1) (repeated call)
DANS, in collaboration with the IISH and VU University Amsterdam, is working on a project within the Computational Humanities programme of the KNAW: “Census data open linked – From fragment to fabric – Dutch census data in a web of global cultural and historic information” (CEDAR).
This project builds a semantic data-web of historical information, taking Dutch census data as a starting point. With such a web we will answer questions such as:
- What kind of patterns can we identify and interpret in expressions of regional identity?
- How to relate patterns of changes in skills and labor to technological progress and patterns of geographical migration?
- How to trace changes of local and national policies in the structure of communities and individual lives?
The project also applies a specific web-based data model – exploiting Resource Description Framework (RDF) technology – to make census data interlinkable with other hubs of historical socio-economic and demographic data and beyond. The project will result in generic methods and tools to weave historical and socio-economic datasets into an interlinked semantic data-web. Further synergy will be created by linking CEDAR to Data2Semantics, a COMMIT project.
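As an illustration only (the actual CEDAR data model is defined by the project), the following Python/rdflib sketch shows how a single census observation could be expressed as RDF so that it becomes linkable to other data hubs. The cedar: namespace, the property names and the numbers are hypothetical placeholders.

```python
# Minimal sketch (not the CEDAR data model): one census observation as RDF.
# The cedar: vocabulary, property names and values are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

CEDAR = Namespace("http://example.org/cedar/")  # hypothetical vocabulary
g = Graph()
g.bind("cedar", CEDAR)

obs = CEDAR["observation/1889-utrecht-total"]
g.add((obs, RDF.type, CEDAR.CensusObservation))
g.add((obs, CEDAR.censusYear, Literal(1889, datatype=XSD.gYear)))
g.add((obs, CEDAR.municipality, CEDAR["place/Utrecht"]))
g.add((obs, CEDAR.population, Literal(85111, datatype=XSD.integer)))  # illustrative number

# Serialize as Turtle; linking CEDAR.municipality to external place hubs
# (e.g. via owl:sameAs) is what makes the data interlinkable.
print(g.serialize(format="turtle"))
```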
Information on the position
The PhD project, titled “Linked Open Data curation model in social science history – the case of Dutch census data”, will be supervised by Professor Frank van Harmelen (VU Amsterdam). You will work in a project team consisting of another PhD student (PhD project “Theory and Practice of data harmonization in social history”, under the supervision of Professor Kees Mandemakers (Erasmus School of History, Culture and Communication, Erasmus University Rotterdam; IISH)) and a postdoc experienced in complex network analysis and visualization. The project will be coordinated by Dr Andrea Scharnhorst (DANS, e-Humanities Group). It is part of the Computational Humanities Programme of the KNAW, which will be hosted at the e-Humanities Group (ehumanities.nl) and in which further projects (with PhD students, postdocs and senior staff) in the area of computational and digital humanities will be carried out.
You will conduct research on:
- Review of existing data models of census data, adaptation and modification, construction of the RDF model, links to other semantic web sources
- Query design (specific to different user communities)
- Development of RDF models of census data (historical variables)
- Mapping of different ontologies across domains and along time
- Development of best practices to enable take-up of linking and re-use of data in other scientific disciplines and take-up in other KNAW institutes.
- Visual navigation through RDF modelled information spaces
You should preferably have the following qualifications:
- Master's degree in computer science, artificial intelligence, information science or a related area
- Interest in and knowledge of semantic technologies and their deployment on the Web
- Fluency in spoken English and excellent written and verbal communication skills
- Knowledge of Dutch would be an advantage
- Willingness and proven ability to work in a team and to liaise with colleagues in an international and interdisciplinary research environment
Appointment and Salary
The position involves a temporary appointment with DANS for 4 years, with a 2-month probation period. Applicants should have the right to work in the Netherlands for the duration of the contract. The gross salary will be €2,042 per month in the first year, rising to €2,612 per month in the fourth year for a full-time appointment (scale P for a PhD position, CAO Dutch Universities).
DANS offers an extensive package of fringe benefits, such as an 8.3% year-end bonus, 8% holiday pay, a good pension scheme, 6 weeks of holiday per year and the possibility to buy or sell vacation hours.
Place of employment will be DANS – Data Archiving and Networked Services. The main working location will be at the e-Humanities Group of the KNAW (location Meertens Institute, Joan Muyskenweg, Amsterdam).
For the text of the CEDAR proposal, follow the link from HSN News.
Please send a letter of application including:
- letter of motivation
- CV, copy of Master Thesis and list of M.Sc. courses and grades
- the names and addresses of two referees
before October 15th, 2011 to DANS, attn. Hetty Labots, Personnel Department, P.O. Box 95366, 2509 CJ Den Haag.
Interviews will probably take place at the end of October 2011 in Amsterdam. If you have already applied for this position, please do not apply again.
Title: Inside the mind of Watson
Watson is a computer system capable of answering rich natural-language questions and estimating its confidence in those answers at the level of the best humans at the task. On Feb 14-16, in an historic event, Watson triumphed over the best human players of all time on the American television quiz show Jeopardy! In this talk I will discuss, at a high level, how Watson works, with examples from the show.
Chris Welty is a Research Scientist at the IBM T.J. Watson Research Center in New York. Previously, he taught Computer Science at Vassar College, taught at and received his Ph.D. from Rensselaer Polytechnic Institute, and accumulated over 14 years of teaching experience before moving to industrial research. Chris’ principal area of research is Knowledge Representation, specifically ontologies and the Semantic Web, and he spends most of his time applying this technology to Natural Language Question Answering as a member of the DeepQA/Watson team and, in the past, to Software Engineering. Dr. Welty is a co-chair of the W3C Rule Interchange Format (RIF) Working Group, serves on the steering committee of the Formal Ontology in Information Systems conferences, is president of KR.ORG, serves on the editorial boards of AI Magazine, the Journal of Applied Ontology and the Journal of Web Semantics, and was an editor in the W3C Web Ontology Working Group. While on sabbatical in 2000, he co-developed the OntoClean methodology with Nicola Guarino. Chris Welty’s work on ontologies and ontology methodology has appeared in CACM and numerous other publications.
The KR&R group investigates modelling and representation of different forms of knowledge and reasoning, as found in a large variety of AI systems. We have an interest in both applications and theory. We study theoretical properties of knowledge representation and reasoning formalisms, but are also involved in developing practical knowledge-based systems. Recently, we have been very active in developments around the Semantic Web.
Posts on this website are continuously aggregated from project and member blogs.