News and Updates on the KRR Group

Source: Think Links

This past Tuesday, I had the opportunity to give a webinar for Elsevier Labs providing an overview of altmetrics. It was a fun opportunity to talk to people who have a great chance to influence the next generation of academic measurement. The slides are embedded below.

At the VU, we are also working with Elsevier Labs on the Data2Semantics project, where we are trying to enrich data with additional machine-understandable metadata. How does this relate to metrics? I believe that metrics (access, usage, etc.) can be a key piece of additional semantics for datasets. I’m keen to see how metrics can make our data more useful, findable and understandable.
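To make that idea a bit more concrete, here is a minimal sketch (not Data2Semantics code) of what metric-enriched dataset metadata could look like as RDF, using Python and rdflib. The metrics vocabulary, dataset URI and numbers are invented for illustration.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

DCAT = Namespace("http://www.w3.org/ns/dcat#")
METRICS = Namespace("http://example.org/metrics#")  # hypothetical metrics vocabulary

g = Graph()
dataset = URIRef("http://example.org/datasets/example-dataset")

# Describe the dataset itself, then attach usage metrics as plain triples so
# that machines can find and filter datasets by how they are actually used.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, METRICS.downloads, Literal(1542, datatype=XSD.integer)))
g.add((dataset, METRICS.citations, Literal(37, datatype=XSD.integer)))

print(g.serialize(format="turtle"))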

 

Filed under: altmetrics Tagged: #altmetrics, data2semantics, presentation

Source: Semantic Web world for you

Scaling is often a central question for data-intensive projects, whether they make use of Semantic Web technologies or not, and SemanticXO is no exception. The triple store is used as a back end for the Journal of Sugar, which is a central component recording the usage of the different activities. This short post discusses the results found for two questions: “how many journal entries can the triple store sustain?” and “how much disk space is used to store the journal entries?”

Answering these questions means loading some Journal entries and measuring the read and write performance along with the disk space used. This is done by a script which randomly generates Journal entries and inserts them into the store. A text sampler and the real names of activities are used to make these entries realistic in terms of size. An example of such a generated entry, serialised in HTML, can be seen here. The following graphs show the results obtained for inserting 2000 journal entries. These figures have been averaged over 10 runs, each of them starting with a freshly created store. The triple store used is “RedStore”, running with a hash-based BerkeleyDB backend. The test machine is an XO-1 running the software release 11.2.0.
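For the curious, the benchmark works roughly like the sketch below. This is not the actual SemanticXO script: the endpoint URL, the graph naming scheme and the use of SPARQL Update over HTTP are assumptions, and the real script may insert its data differently.

import csv, random, string, time, urllib.parse, urllib.request

ENDPOINT = "http://127.0.0.1:8080/sparql"   # assumed triple store endpoint on the XO

def random_text(length):
    # crude stand-in for the text sampler mentioned above
    return "".join(random.choice(string.ascii_lowercase + " ") for _ in range(length))

def run_update(update):
    data = urllib.parse.urlencode({"update": update}).encode()
    urllib.request.urlopen(urllib.request.Request(ENDPOINT, data=data)).read()

def run_query(query):
    request = urllib.request.Request(
        ENDPOINT + "?" + urllib.parse.urlencode({"query": query}),
        headers={"Accept": "application/sparql-results+json"})
    return urllib.request.urlopen(request).read()

with open("journal_bench.csv", "w", newline="") as output:
    writer = csv.writer(output)
    writer.writerow(["entry", "write_seconds", "read_seconds"])
    for i in range(2000):
        graph = "http://example.org/journal/entry/%d" % i
        insert = 'INSERT DATA { GRAPH <%s> { <%s> <http://purl.org/dc/terms/title> "%s" . } }' % (
            graph, graph, random_text(20))

        start = time.time()
        run_update(insert)                       # write one journal entry
        write_seconds = time.time() - start

        start = time.time()
        run_query("SELECT ?p ?o WHERE { GRAPH <%s> { ?s ?p ?o } }" % graph)  # read it back
        read_seconds = time.time() - start

        writer.writerow([i, write_seconds, read_seconds])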

The disk space used is minimal for up to 30 entries, grows rapidly between 30 and 70 entries, and grows linearly from there on. The maximum space occupied is a bit less than 100MB, which is only a small fraction of the 1GB of storage on the XO-1.

 

Amount of disk space used by the triple store

The results for the read and write delays are less good news. Write operations are constant in time and always take around 0.1 s. Getting an entry from the triple store, however, proves to get linearly slower as the store fills up. It can be noticed that for up to 600 entries, the retrieval time of an entry stays below a second, which should provide a reasonable response time. However, with 2000 entries stored, the retrieval time goes as high as 7 seconds :-(

Read and write access time

The answer to the question we started with (“Does it scale?”) is then “yes, for up to 600 entries”, considering a first-generation device and the current status of the software components (SemanticXO/RedStore/…). This answer also yields new questions, among which: Are 600 entries enough for typical usage of the XO? Is it possible to improve the software to get better results? How do the results look on more recent hardware?

I would appreciate a bit of help with answering all of these, and especially the last one. I only have an XO-1 and thus cannot run my script on an XO-1.5 or XO-1.75. If you have such a device and are willing to help me get the results, please download the package containing the performance script and the triple store and follow the instructions for running it. After a day of execution or so, this script will generate three CSV files that I can then post-process to get curves similar to the ones shown above.

Source: Think Links

The Journal of Web Semantics recently published a special issue on Using Provenance in the Semantic Web edited by myself and Yolanda Gil. (Vol 9, No 2 (2011)). All articles are available on the journal’s preprint server.

The issue highlights top research at the intersection of provenance and the Semantic Web. The papers addressed a range of topics including:

  • tracking provenance of DBpedia back to the underlying Wikipedia edits [Orlandi & Passant];
  • how to enable reproducibility using Semantic techniques [Moreau];
  • how to use provenance to effectively reason over large amounts (1 billion triples) of messy data [Bonatti et al.]; and
  • how to begin to capture semantically the intent of scientists [Pignotti et al.].
Our editorial highlights a common thread between the papers and sums them up as follows:

A common thread through these papers is the use of already existing provenance ontologies. As the community comes to an increasing agreement on the commonalities of provenance representations through efforts such as the W3C Provenance Working Group, this will further enable new research on the use of provenance. This continues the fruitful interaction between standardization and research that is one of the hallmarks of the Semantic Web.

Overall, this set of papers demonstrates the latest approaches to enabling a Web that provides rich descriptions of how, when, where and why Web resources are produced, and shows the sorts of reasoning and applications that these provenance descriptions make possible.

Finally, it’s important to note that this issue wouldn’t have been possible without the quick and competent reviews done by the anonymous reviewers. This is my public thank you to them.

I hope you take a moment to look at this interesting work.

Filed under: academia, linked data Tagged: journal, linked data, provenance, semantic web

Source: Semantic Web world for you

Over the last couple of years, we have engineered a fantastic data sharing technology based on open standards from the W3C: Linked Data. Using Linked Data, it is possible to express some knowledge as a set of facts and connect the facts together to build a network. Having such networked data openly accessible is a source of economic and societal benefits. It enables sharing data in an unambiguous, open and standard way, just as the Web enabled document sharing. Yet, the way we designed it deprives the majority of the World’s population from using it.

Doing “Web-less” Linked Data?

The problem may lie in the fact that Linked Data is based on Web technologies, or in the fact that Linked Data has been designed and engineered by individuals with easy access to the Web, or maybe a combination of both. Nowadays, Linked Data rhymes with cloud-hosted data storage services, a set of (web-based) applications to interact with those services, and the infrastructure of the Web. As a result, if you don’t have access to this Web infrastructure, you cannot use Linked Data. This is a pity, because an estimated 4.5 billion people don’t have access to it for various reasons (lack of infrastructure, cost of access, literacy issues, …). Wouldn’t it be possible to adjust our design choices to ensure they could also benefit from Linked Data, even if they don’t have the Web? The answer is yes, and the best news is that it wouldn’t be that hard either. But for it to happen, we need to adapt both our mindset and our technologies.

Changing our mindset

We have a tendency to think that any data sharing platform is a combination of a cloud-based data store, some client applications to access the data, and forms to feed new data into the system. This is not always applicable, as central hosting of data may not be possible, or its access from client applications may not be guaranteed. We should also think of the part of the World that is illiterate and for which Linked Data, and the Web, are not accessible. In short, we need to think de-centralised, small and vocal in order to widen access to Linked Data.

Think de-centralised

Star-shaped networks can be hard to deploy. They imply setting up a central producer of resources somewhere and connecting all the clients to it. Electricity networks have already found a better alternative: microgrids. Microgrids are small networks of producers/consumers (the “prosumers”) of electricity that manage electricity needs locally. We could, and should, copy this approach to manage local data production and consumption. For example, think of a decentralised DBpedia whose content would be made of the aggregation of several data sources, each producing part of the content – most likely, the content that is locally relevant to them.
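As a rough illustration of that aggregation idea (just a sketch, with invented micro-server addresses), a consumer could simply merge the locally produced chunks into one queryable graph:

from rdflib import Graph

# Hypothetical micro-servers, each publishing its locally relevant data.
MICRO_SERVERS = [
    "http://village-a.example.org/local-data.ttl",
    "http://village-b.example.org/local-data.ttl",
]

merged = Graph()
for url in MICRO_SERVERS:
    merged.parse(url, format="turtle")   # fetch and merge each local contribution

# The merged graph can then be queried as if it were a single, central data set.
for row in merged.query("SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"):
    print("triples aggregated:", row.n)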

Think small

Big servers require more energy and more cooling. They usually end up racked into big cabinets that are in turn packed into cooled data centers. These data centers need to be big in order to cope with scale. Thinking decentralised allows us to think small, and we need to think small to provide alternatives to data centers where these are not available. As content production and creation go decentralised, several small servers can be used. To continue the microgrid analogy, we can call these small servers taking care of locally relevant content “micro-servers”.

Think vocal

Unfortunately, not everyone can read and type. In some African areas, knowledge is shared through vocal channels (mobile phone, meetings, …) because there is no other alternative. Getting access to knowledge exchanged that way cannot be done with form-based data acquisition systems. We need to think of exploiting vocal conversations through Text-To-Speech (TTS) and Automatic Speech Recognition (ASR) rather than staying focused on forms.

Changing our technologies

Changing our mindset is not enough: if we aim at decoupling Linked Data from the Web, we also need to pay attention to our technologies and adapt them. In particular, there are five upcoming challenges that can be phrased as research questions:

  1. Dereferencability: How do you get a route to the data if you want to avoid using the routing system provided by the Web? For instance, how do you dereference a host-name-based URI if you don’t have access to the DNS network?
  2. Consistency: In a decentralised setting where several publishers produce parts of a common data set, how do you ensure URIs are re-used and do not collide? There is a chance that two different producers would use the same URI to describe different things.
  3. Reliability: Unlike centrally hosted data servers, micro-servers cannot be expected to provide 99% availability. They may go on and off unexpectedly. The first thing to know is whether that is an issue at all. The second, assuming their data should remain available, is how we achieve this.
  4. Security: This is also related to having a swarm of micro-servers serving a particular dataset. If any micro-server can produce a chunk of that dataset, how do you prevent a spammer from getting in and starting to produce falsified content? If we want to avoid centralised networks, authority-based solutions such as a Public Key Infrastructure (PKI) are not an option. We need to find decentralised authentication mechanisms.
  5. Accessibility: How do we make Linked Data accessible to those who are illiterate? As highlighted earlier, not everyone can read and write, but illiterate persons can still talk. We need to take vocal technologies more into account in order to make Linked Data accessible to them. We can also investigate graphics-based data acquisition techniques with visual representations of information.

More about this

This is a presentation that Stefan Schlobach gave at ISWC2011 on this topic:

You are also invited to read the associated paper “Is data sharing the privilege of a few? Bringing Linked Data to those without the Web” and to check out two projects working on the mentioned challenges: SemanticXO and Voices.

Source: Think Links

Yesterday, I had the pleasure of giving a tutorial at the NBIC PhD Course on Managing Life Science Information. This week-long course focused on applying Semantic Web technologies to get to grips with integrating heterogeneous life science information.

The tutorial I gave focused on exposing relational databases to the web using the awesome D2R Server. It’s really a great piece of software that shows results right away. Perfect for a tutorial. I also covered how to get going with LarKC and where that software fits in the whole Semantic Web data management space.

On to the story…

The students easily exposed our test database (GPCR receptors) as RDF using D2R. Now the cool part: I found out just before the start of my tutorial that, the day before, they had set up an RDF repository (Sesame) with some personal profile information. So, on the fly, I had them take the RDF produced by the database conversion and load it into the repository from the day before. This took a couple of clicks. They were then able to query over both their personal information and this new GPCR dataset. With not much work, we munged together two really different data sets.
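For readers who want to reproduce something like this without the point-and-click interfaces, the sketch below approximates the same workflow over HTTP: pull RDF out of a D2R Server, push it into a Sesame repository, then query across both data sets. The port numbers, repository name and endpoint paths are assumptions, not the exact setup used in the tutorial.

import urllib.parse, urllib.request

D2R_SPARQL = "http://localhost:2020/sparql"                                  # assumed D2R endpoint
SESAME_REPO = "http://localhost:8080/openrdf-sesame/repositories/students"   # assumed repository

# 1. Ask D2R for everything it exposes, serialised as RDF/XML.
construct = "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }"
request = urllib.request.Request(
    D2R_SPARQL + "?" + urllib.parse.urlencode({"query": construct}),
    headers={"Accept": "application/rdf+xml"})
rdf_xml = urllib.request.urlopen(request).read()

# 2. Add those statements to the Sesame repository that already holds the
#    personal profile data.
request = urllib.request.Request(
    SESAME_REPO + "/statements", data=rdf_xml,
    headers={"Content-Type": "application/rdf+xml"})
urllib.request.urlopen(request).read()

# 3. Query across both data sets in one go.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
request = urllib.request.Request(
    SESAME_REPO + "?" + urllib.parse.urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"})
print(urllib.request.urlopen(request).read().decode())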

This is old hat to any Semantic Web person, but it was a great reminder of how the flexibility of RDF makes it easy to mash up data. No messing about with tables or figuring out whether the schema is right; just load it into your triple store and start playing.

Filed under: academia, linked data Tagged: mashup, rdf, semantic web

Source: Think Links

I’ve been a bit quiet for the past couple of months. First, I was on vacation, and then we were finishing up the following demo for the Open PHACTS project. This is one of the main projects I’ll be working on for the next 2.5 years. The project is about integrating and exposing data for pharmacology. The demo below shows the first results of what we’ve done after the first 6 months of the project. Eventually, we aim to have the platform we’re developing be fully provenance-enabled, so all the integrated results can be checked and filtered based on their sources. Check it out and let me know what you think. Sorry for the poor voice-over… it’s me :-)

Filed under: academia, linked data

Source: Semantic Web world for you

With the last post about SemanticXO dating back to April, it’s time for an update, isn’t it? ;-)

A lot of things have happened since April. First, a paper about the project was accepted for presentation at the First International Conference on e-Technologies and Networks for Development (ICeND2011). Then, I spoke about the project during the symposium of the Network Institute as well as during SugarCamp #2. Lastly, a first release of a triple-store-powered Journal is now available for testing.

Publication

The paper, entitled “SemanticXO: connecting the XO with the World’s largest information network”, is available from Mendeley. It explains what the goal of the project is and then reports on some performance assessment and a first test activity. Most of the information it contains has actually been blogged here before (c.f. there and there), but if you want a global overview of the project, the paper is still worth a read. The conference itself was very nice and I did some networking. I came back with a lot of business cards and the hope of keeping in touch with the people I met there. The slides from the presentation are available from SlideShare.

Presentations

On May 10, the Network Institute of Amsterdam organised a one-day symposium to strengthen the ties between its members and to stimulate further collaboration. This institute is a long-term collaboration between groups from the Department of Computer Science, the Department of Mathematics, the Faculty of Social Sciences and the Faculty of Economics and Business Administration. I presented a poster about SemanticXO and an abstract went into the proceedings of the event.

More recently, I spent the 10th and 11th of September in Paris for Sugar Camp #2, organised by OLPC France. Bastien managed to get me a bit of time on Sunday afternoon to re-do the presentation from ICeND2011 (thanks again for that!) and get some feedback. This was a very well organised event held at a cool location (“La Cité des Sciences”). It was also the first time I met so many other people working on Sugar, and I could finally put faces to the names I had seen so many times on the mailing lists and in the Git logs :)

First SemanticXO prototype

The project development effort is split into three parts: a common layer hiding the complexity of SPARQL, a new implementation of the journal datastore, and the coding of diverse activities making use of the new semantic capabilities. All three are progressing more or less in parallel, at different speeds, as, for instance, the work on activities directs what the common layer will contain. I’ve focused my efforts on the journal datastore to get something ready to test. It’s a very first prototype, coded by starting from the original datastore 0.92 and replacing the part in charge of the metadata; the code taking care of the files remains the same. This new datastore is available from Gitorious, but because installing the triple store and replacing the journal is a tricky manual process, I bundled all of that ;-)

Installation

The installation bundle consists of two files, “semanticxo.tgz” and a script “patch-my-xo.sh”. To install SemanticXO, download both, put them in the same location somewhere on your machine, and then type (as root):

sh ./patch-my-xo.sh setup

This will install a triple store, add it to the daemons started at boot time, and replace the default journal with one using the triple store. Be careful to make backups if needed, as this will remove all the content previously stored in the journal! Once the script has been executed, reboot the machine to start using the new software.

The bundle has been tested on an XO-1 running software release 11.2.0, but it should work with any software release on both the XO-1 and XO-1.5. The bundle won’t work on the XO-1.75, as it contains a binary (the triple store) not compiled for ARM.

What now?

Now that you have everything installed, open the browser and go to http://127.0.0.1:8080. You will see the web interface of the triple store, which allows you to run SPARQL queries and see which named graphs are stored. If you are not fluent in SPARQL, the named graph interface is the most interesting part to play with. Every entry in the journal gets its own named graph; after populating the journal with some entries, you will see the list of named graphs grow. Click on one of them and the content of the journal entry will be displayed. Note that this web interface is also accessible from any other machine on the same network as the XO. This yields new opportunities in terms of backup and information gathering: a teacher can query the journal of any XO directly from a school server, or from another XO.
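As a small illustration of that remote-querying idea, the sketch below lists the named graphs (one per journal entry) of an XO from another machine on the network. The XO's hostname and the SPARQL endpoint path are assumptions and will depend on your setup.

import json, urllib.parse, urllib.request

XO_SPARQL = "http://xo-1234.example.org:8080/sparql"   # hypothetical XO on the school network

# One named graph per journal entry, so listing the graphs lists the entries.
query = "SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } }"
request = urllib.request.Request(
    XO_SPARQL + "?" + urllib.parse.urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"})
results = json.loads(urllib.request.urlopen(request).read())

for binding in results["results"]["bindings"]:
    print(binding["g"]["value"])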

Removing

The patch script also comes with a remove function if you want to revert the XO to its original setup. To use it, simply type (as root):

sh ./patch-my-xo.sh remove

and then reboot the machine.

Source: Think Links

Here’s an interesting TED talk by cognitive psychologist Paul Bloom about the origins of pleasure. What’s cool to me is that he uses the same anecdotes (Han van Meegeren, Joshua Bell) that I’ve used previously to illustrate the need for provenance. I often make a technical case for provenance for automated systems. He makes a compelling case that provenance is fundamental for people. Check out the video below… and let me know what you think.

Thanks to Shiyong Lu for the pointer.

Filed under: communicating provenance, interdisciplinary research Tagged: people, provenance, ted

Source: Think Links

Two articles in recent issues of Wired (June/July) got me thinking about the potential role of PhD programs, in particular with respect to facilitating business. The first article was about Y Combinator, the highly successful Silicon Valley start-up incubator founded by Paul Graham. The second was about the emergence of a vibrant hacker community around the Microsoft Kinect (the sensor system that allows the Xbox to recognize and track people within a room without a remote).

The program that Graham has created with Y Combinator, in my opinion, resembles in part a PhD program. Like a good PhD program, Graham focuses on identifying and then developing people. The goal is to create founders who can develop a great business. Similarly, the goal of a PhD program is to develop researchers who can do great research. In both cases, the idea is to provide the tools, support and environment that make great business founders or great researchers. I think in both cases we are in the business of people development – not product development.

However, I think Y Combinator offers more than just people development [1]: it creates a creative, intense, social atmosphere. It’s designed to create not only talent but teams of talent. There are not many places in the world where talented people just work on the projects they’re interested in – together.

Graham compares the effect to a coral reef, a self-generating ecosystem whose members provide nourishment for one another. “Pick the right founders and help them—and the coral reef will just happen,” he says.

This brings me to the next article, about the Kinect. There were several anecdotes in the article about how PhD students quickly embraced the Kinect to do interesting and novel stuff. Here’s an example:

A group from UC Berkeley strapped a Kinect to a quadrotor—a small helicopter with four propellers—enabling it to fly autonomously around a room. A couple of students at the University of Bundeswehr Munich attached a Kinect to a robotic car and sent it through an obstacle course.

You need two things to pull off something like that:

  1. free time
  2. talent

PhD programs tend to have both of these properties: talented people with enough spare bandwidth to try new things (although I have a feeling PhD students are becoming too busy lately…). As you noticed from the quote, it was a team of students that did the work. From my own personal experience, this happens all the time in PhD programs. Indeed, there are great examples such as Google, Yahoo and Dropbox. But I think in general we in academia see this as a happy side effect and don’t encourage it, or even look at it as a goal of our PhD programs.

Instead, I believe that we should actively encourage such projects. PhD programs could become talent incubators. We in the academy can encourage the generation of talented teams, for example by making group projects a key part of PhD programs. Indeed, one could see a future where universities specifically advertise or promote project teams (groups of students) to businesses or investors as success-ready talent. Additionally, this would be an interesting way of providing alternative careers to PhD students who want to do something outside of academia.

Thoughts?

[1] Clearly *people development* is one of the key services that universities provide. In general, I think universities are good at people development (although we could always be better). However, as the demand for higher education increases along with the pressure on universities to be more cost-effective, I think it’s pertinent to begin thinking about where universities can add value that other higher-education delivery mechanisms can’t. This is what this post is about.

Filed under: academia Tagged: incubator, wired magazine inspiration