Reboot of “Using interface encapsulation to listen to linked data predicates”


Three years ago I wrote a blog post as a submission to the ISWC 2014 Developers Workshop. The idea was implemented with the Enyo framework, which at the time was still a viable ecosystem, and the implementation has been used in production in various installations of our systems.
Time has passed and the EnyoJS framework is no longer supported; its successor is very much tied to React and would therefore be framework-specific. To arrive at a simple solution to the encapsulation problem, I decided to reimplement the whole idea using WebComponents.

Design Principle

One of the major issues with using linked data nowadays is that people still perceive the whole tool chain as rather complex, even though JSON-LD has dramatically improved the usability of Linked Data in various front-end application frameworks. The central problem described in my previous post still remains: there are no semantics at the UI level. Front-end frameworks are able to consume JSON-LD, but only by ignoring the semantics and treating it just like any other data stream. On the other side of the spectrum are UI building tools that manifest themselves as classic monoliths: “So you want a front-end framework? You’ll need to use our triple store as well.” This position is even stranger once you realize that in the Linked Data world everything can be done according to open standards.

RDF, SPARQL, the SPARQL Protocol, LDP and, more recently, SHACL are all open standards that interoperate brilliantly with each other and provide the building blocks for developing components to create applications.

In this article I’ll describe the ideas behind our new set of WebComponents, which allow you to absorb linked data in your application front-end without installing a monolith.

The end goal

With the introduction of WebComponents came a new way of extending HTML applications and their semantics while still being able to use any traditional UI toolkit. The second major element is the SHACL standard: there is now a standard way to describe which elements (triples) you want to select and use in your application. Inspired by SHACL and WebComponents, I came up with the following desired structure:

<node-shape target-class='foaf:Person'>
    <property-shape path='foaf:name'></property-shape>
    <property-shape path='foaf:img' bind-to='img[src]'><img></property-shape>
    <property-shape path='foaf:knows'>
        <property-shape path='foaf:name' bind-to='.name'>
            <span class='name'></span>
        </property-shape>
    </property-shape>
</node-shape>

As you can see, we map SHACL terminology onto relatively standard HTML layout, which allows you to apply any standard markup and styling through CSS. It will not interfere with any other standard DOM interactions on the elements encapsulated within the WebComponents.

To load the Linked Data into a <node-shape> we use rdf-ext, a set of JavaScript libraries which again adheres to open standards. The <node-shape> and <property-shape> elements coordinate the propagation of the data through the HTML automatically; even dereferencing is taken care of. When multiple values are found, the encapsulated component is automatically repeated.
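The selection-and-propagation idea can be sketched as follows. This is a minimal, illustrative simulation using a toy in-memory triple list; the real components operate on rdf-ext datasets, and the helper names here are hypothetical:

```javascript
// Toy in-memory triples; the real components use rdf-ext datasets.
const triples = [
  { s: 'ex:alice', p: 'foaf:name',  o: 'Alice' },
  { s: 'ex:alice', p: 'foaf:knows', o: 'ex:bob' },
  { s: 'ex:bob',   p: 'foaf:name',  o: 'Bob' },
];

// What a <property-shape> conceptually does: select all object values
// for a given subject and SHACL-like path.
function valuesFor(dataset, subject, path) {
  return dataset.filter(t => t.s === subject && t.p === path).map(t => t.o);
}

// A nested <property-shape> repeats for every value of the outer path,
// which is how multi-valued properties produce repeated components.
const friends = valuesFor(triples, 'ex:alice', 'foaf:knows');
const friendNames = friends.flatMap(f => valuesFor(triples, f, 'foaf:name'));
console.log(friendNames); // → ['Bob']
```

In the actual components this selection runs inside each element's lifecycle callbacks, so the DOM underneath is populated (and repeated) without any application code.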


Although we use SHACL as guidance, there is no validation in place yet; we only use SHACL’s selection process to pick those parts of our graph that we want to display in our front-end. This also means that we might display incomplete graphs, which is an issue we still need to work on.


This whole idea is a work in progress; the current implementation based on WebComponents is now being tested in a proof of concept, and the results look great! We are planning to publish the components as open source in our GitHub repository.

Some more detailed articles on how this works internally will be published as well!

Using IBM DB2 NoSQL Graph Store in Websphere Application Server Community Edition

This is the first of a series of blog posts about our experience with the IBM DB2 Express-C NoSQL Graph Store (hereafter DB2 RDF) in combination with IBM WebSphere Application Server Community Edition (hereafter WASCE).

The DB2 RDF product allows the storage and manipulation of RDF data. The data can be stored in graphs, all according to W3C Recommendations, and DB2 RDF uses the Apache Jena programming model to interact with the underlying store. The very detailed documentation outlines the products and tools needed to get a basic DB2 RDF programming environment going.

This series of articles is specifically about using these tools inside the WASCE environment. While developing our RESC.Info product we gathered a lot of experience which we would like to share with the community in these articles. We will also be presenting our experience at this year’s IBM Information On Demand 2013 in Las Vegas.

The series will cover the following topics:

  • Configuring WASCE data sources
  • Assembling the correct Jena distribution
  • Dealing with transactions

This first article is about configuring WASCE data sources for use with DB2 RDF and the Jena programming model. It is NOT meant to be an extensive installation guide for these products; refer to the respective product documentation for more information on installation.
It is very important to select the correct versions of the various products:

  • IBM DB2 Express-C 10.1.2
  • IBM WebSphere Application Server Community Edition 3.0.4

Creating a DB2 database and graph store

To be able to use the DB2 RDF features we need to create a standard database with some specific parameters. The DB2 documentation contains extensive information for this task. To create a database ‘STORE’ that supports DB2 RDF we issue the following commands:
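A minimal sketch of such a command, assuming the DB2 RDF prerequisite of a 32 KB page size documented by IBM; any further options are illustrative:

```shell
# Create the database with a 32 KB page size, which DB2 RDF requires
# for its backing tables.
db2 "CREATE DATABASE STORE PAGESIZE 32 K"
```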


For the correct administration we also need to execute:
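A plausible sketch of this housekeeping step, assuming the tuning suggested by the DB2 RDF documentation; the parameter and its value are illustrative:

```shell
# Enlarge the transaction log: createrdfstore creates many objects
# in a single unit of work (the value shown is illustrative).
db2 "UPDATE DB CFG FOR STORE USING LOGFILSIZ 20000"
```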


Now that the database is created we still need to create a graph store inside the database. This is done with the following command:

createrdfstore rdfStore  -db STORE -user db2admin -password XXX -schema public 

This can take a while. After completion you will have a graph store ‘rdfStore’ inside your database ‘STORE’. To check the presence of this store issue the following command when connected to ‘STORE’:
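A minimal sketch of such a check, assuming DB2 RDF registers its stores in the SYSTOOLS.RDFSTORES catalog table:

```shell
# List all RDF stores registered in this database.
db2 "SELECT * FROM SYSTOOLS.RDFSTORES"
```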


The resulting table should contain a reference to our store ‘rdfStore’ in schema ‘public’.

WebSphere Application Server Community Edition installation

Install WASCE with the installer, but do not start it yet. WASCE is distributed with older DB2 JDBC drivers which interfere with the DB2 JDBC4 drivers needed for the DB2 RDF interface. In the repository directory of WASCE, look for the path


and delete the db2 sub-directory. Then run WASCE with the -clean parameter, which causes WASCE to clean up all references to the included DB2 JDBC drivers.

geronimo.[sh/bat] run -clean

Installing db2jcc4.jar

Now it is time to install the JDBC4 driver into the WASCE repository. In the advanced mode of the console you will find the Resources/Repository tab, where you can add new jars to the repository. Select db2jcc4.jar from your <DB2_INST>/java directory, fill out the fields as shown in the image and click ‘Install’.

Creating a Database Pool

Once the correct jar is installed, creating the connection to the database is the same as for any other regular database connection. Select DB2 XA as ‘Database type’ and fill out the connection information. You should see only one JDBC driver here: the one we just installed. Fill out the details of your regular DB2 database ‘STORE’ and click ‘Deploy’.

After the data source object is created we can use it in the simple SQL entry field: select the newly created data source and issue the following command:
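Assuming the same catalog check as on the command line, against the SYSTOOLS.RDFSTORES table:

```sql
-- List the RDF stores registered in the connected database.
SELECT * FROM SYSTOOLS.RDFSTORES
```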


The result should be the same as the result we had after issuing this query from the command line.


Now we have set up a DB2 RDF connection inside WASCE with the correct versions of both products and connecting drivers. The next step will be to create a simple Jena-based application to interact with the store.

Simple Application of Linked Data Principles for Firefighters

One way firefighters worldwide assist themselves while navigating through towns and cities is by recording ‘distinctive points’ (that is actually how we navigate through smoke as well). So instead of ‘turn left at the third traffic light’ we often record ‘turn left at ABC Pharmacy’. So where does linked data come into play?

In the current economic situation shops come and go, so ‘ABC Pharmacy’ from the example might be long gone, leaving a driver clueless as to where to turn left!
What stays the same is the address, but then again navigating by addresses is not really trustworthy, so how can we use this address to generate a distinctive point?

In the Netherlands we have a public data set called the ‘Building and Address Base Administration’: an official nationwide register of all buildings, dwellings and addresses throughout the country. One of its interesting features is that the data is internally interlinked and uses nationwide unique identifiers, an absolutely ideal situation for generating linked data. If you inspect the dataset more closely you will find that an address and a building are loosely coupled: if for some reason a full restructuring of addresses takes place, the building identifiers stay the same.
Back to our problem: we want to know the business located in a specific building so we can use it to navigate. For this we use a semi-official open chamber of commerce dataset which uses the building identifiers as location specifiers. This means that to generate the distinctive points, we only need to locate a building based on its address and then query the chamber of commerce data to see which business is located there!
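The two-step lookup described above can be sketched as follows; the datasets, identifiers and addresses are purely illustrative stand-ins for the real registers:

```javascript
// Step 1: the national register links an address to a stable building id
// (data shown here is hypothetical).
const addressToBuilding = { 'Main St 1, Utrecht': 'building:0344100000001' };

// Step 2: the chamber of commerce dataset links a building id to the
// business currently located there (also hypothetical data).
const buildingToBusiness = { 'building:0344100000001': 'ABC Pharmacy' };

// Resolve an address to an up-to-date distinctive point via the
// building identifier, which survives address restructuring.
function distinctivePoint(address) {
  const buildingId = addressToBuilding[address];
  return buildingId ? buildingToBusiness[buildingId] : undefined;
}

console.log(distinctivePoint('Main St 1, Utrecht')); // → 'ABC Pharmacy'
```

Because only the intermediate mapping changes when a shop moves in or out, re-running the second query is all it takes to refresh the distinctive point.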

So no matter which business is located in the building, as long as the building doesn’t move we can automatically find the businesses in that building and generate up-to-date distinctive points with linked data!