Reboot of “Using interface encapsulation to listen to linked data predicates”

Introduction

Three years ago I wrote a blog post as a submission to the ISWC 2014 Developers Workshop. The idea was implemented with the Enyo framework, which at the time was still a viable ecosystem, and the implementation has been used in production in various installations of our systems.
Time has passed and the EnyoJS framework is no longer supported; its successor is tied closely to React and would therefore be framework specific. To find a simple, framework-neutral solution to the encapsulation problem, I decided to reimplement the whole idea using WebComponents.

Design Principle

One of the major issues with using linked data today is that people still perceive the whole tool chain as rather complex, even though JSON-LD has dramatically improved the usability of Linked Data in various front-end application frameworks. The central problem described in my previous post still remains: there are no semantics at the UI level. Front-end frameworks are able to consume JSON-LD, but only by ignoring the semantics and treating it just like any other data stream. On the other side of the spectrum are UI building tools that manifest themselves as classic monoliths: “So you want a front-end framework? Then you’ll need to use our triple store as well.” This position is even stranger once you realize that in the Linked Data world everything can be done according to open standards.

RDF, SPARQL, the SPARQL Protocol, LDP and, more recently, SHACL are all open standards that interoperate brilliantly with each other and provide the building blocks for developing the components to create applications.

In this article I’ll describe the ideas behind our new set of WebComponents, which allow you to consume linked data in your application front-end without installing a monolith.

The end goal

With the introduction of WebComponents came a new way of extending HTML applications and their semantics while still being able to use any traditional UI toolkit. The second major element is the SHACL standard: there is now a standard way to describe which elements (triples) you want to select and use in your application. Inspired by SHACL and WebComponents, I came up with the following desired structure:

<html><body>
<node-shape target-class='foaf:Person'>
  <section>
    <property-shape path='foaf:name'></property-shape>
    <property-shape path='foaf:img' bind-to='img[src]'><img></property-shape>
    <ul>
      <property-shape path='foaf:knows'>
        <li>
          <property-shape path='foaf:name' bind-to='.name'>
            <span class='name'></span>
          </property-shape>
        </li>
      </property-shape>
    </ul>
  </section>
</node-shape>
</body></html>

As you can see, we map SHACL terminology onto a relatively standard HTML layout. This allows you to use any standard markup and styling through CSS, and it does not interfere with any other standard DOM interactions on the elements encapsulated within the WebComponents.

To load the Linked Data into a <node-shape> we use rdf-ext, a set of JavaScript libraries that, again, adheres to open standards. The <node-shape> and <property-shape> elements coordinate the propagation of the data through the HTML automatically; even dereferencing is taken care of. When multiple values are found, the encapsulated component is automatically repeated.

SHACL

Although we use SHACL as guidance, there is no validation in place yet; we only use the SHACL selection process to select those parts of our graph that we want to display in the front-end. This also means that we might display incomplete graphs, an issue we still need to work on.

Status

This whole idea is a work in progress. The current implementation based on WebComponents is now being tested in a proof of concept, and the results look great! We are planning to publish the components as open source in our GitHub repository.

Some more detailed articles on how this works internally will be published as well!

Central Version reporting of deployed artifacts with JEE

While building our RESC.Info solution stack we faced the challenge of querying the build versions of the installed components. After searching the web and consulting various communities, we found there is no ‘standard’ way of solving this.

What we actually wanted was a way to query the installed versions of the various modules deployed to the Java container, without any predefined knowledge of which modules are actually installed, and without relying on any platform- or container-specific solution.

The approach taken

Java ships with JMX, the Java Management Extensions, and it is preferable to use such an existing infrastructure instead of inventing our own.
So we decided the technical approach would be JMX. Next we needed to define what to expose, and we decided to stay very close to our original goal: extract version information from the installed artifacts programmatically.

We identified four main attributes:

  • Name
  • Full name
  • Artifact
  • Version

Since we use Maven for our build process, this information should ideally come from the build itself; there are plenty of references on the web on how to do this. We decided to use Maven resource filtering to generate a properties file. In our pom.xml we have:

<resources>
  <resource>
    <directory>src/main/resources</directory>
    <filtering>true</filtering>
  </resource>
</resources>

In src/main/resources we placed a version.properties file which looks like:

version=${project.version}
name=${project.build.finalName}
fullname=${project.name}
artifact=${project.groupId}.${project.artifactId}

During the Maven build the variables are replaced with the actual data from the POM file, which we can then read from the version reporting code.
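For example, for a hypothetical project with version 1.0.0 (the values below are illustrative, not from our build), the filtered version.properties packaged into the artifact would look something like:

version=1.0.0
name=myapp-1.0.0
fullname=My Application
artifact=com.example.myapp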

Drop-in version reporting

One of the other requirements was a ‘drop-in’ version reporting module. You don’t want to code version reporting over and over again, since the version information lives in the same place every time anyway. So we created a Maven artifact that we can simply add as a dependency to our projects. This leaves only the task of creating and modifying a few files in the project, and from that moment on everything happens automatically.
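Consuming the module is then a single dependency entry in the project’s pom.xml. A minimal sketch, assuming the module is published under the hypothetical coordinates com.example:version-reporting:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>version-reporting</artifactId>
    <version>1.0.0</version>
</dependency>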

Exposing the information

To expose information through JMX, the easiest approach is to register an MBean with the container’s MBeanServer. This is a two-step process:

  1. Create an interface whose name ends with MBean.
  2. Create the actual implementation of that interface.

Since we are actually interested in some metadata of our artifact, we will call our interface MetaDataMBean:

package com.example;

/**
 * Interface for Metadata exposure
 */
public interface MetaDataMBean {
    public String getVersion();

    public String getName();

    public String getFullName();

    public String getArtifact();
}

Next we implement the interface in our MetaData class:

package com.example;

import java.util.MissingResourceException;
import java.util.ResourceBundle;

/**
 * Exposes the build metadata from version.properties as an MBean.
 */
public class MetaData implements MetaDataMBean {

    /*
     * Look up a single key in the version resource bundle; returns null
     * when the bundle (or the key) is missing.
     */
    private String lookup(String key) {
        try {
            return ResourceBundle.getBundle("version").getString(key);
        } catch (MissingResourceException e) {
            // the version.properties file is not present
            return null;
        }
    }

    @Override
    public String getVersion() {
        return lookup("version");
    }

    @Override
    public String getName() {
        return lookup("name");
    }

    @Override
    public String getFullName() {
        return lookup("fullname");
    }

    @Override
    public String getArtifact() {
        return lookup("artifact");
    }
}

As you can see, there is not much rocket science here; we simply return the strings from the version.properties file that was created during the build.

Hooking up to JMX

In the web.xml of your application you can define a <listener> tag that will be called as soon as the application is deployed. The <listener> tag requires the name of a class within your WAR that implements javax.servlet.ServletContextListener. When your WAR is deployed, the contextInitialized override you define gets called; this is the ideal location for one-time initializations like the version reporting.
We create a class, LifeCycleListener, which implements ServletContextListener; on initialization we create the MBean and register it with the container’s MBeanServer. When the context is destroyed we deregister it.
As the registration name we use the full package name of the artifact plus a type identifier, for example com.example.myapp:type=EXVersion. This identifier is the key by which you can later retrieve the version information of multiple artifacts with a single JMX query. In our example we chose the type name ‘EXVersion’, which is an arbitrary name, but it needs to be the same for all your projects!

The context listener looks like this:

package com.example;

import java.lang.management.ManagementFactory;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import javax.management.InstanceAlreadyExistsException;
import javax.management.InstanceNotFoundException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

/**
 * Registers the version MetaData MBean on deployment and
 * deregisters it on undeployment.
 */
public class LifeCycleListener implements ServletContextListener {
    private MetaData metaData;
    private ObjectName beanName;

    /*
     * Deregister the MBean when the application is undeployed.
     */
    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        try {
            mbs.unregisterMBean(beanName);
        } catch (MBeanRegistrationException | InstanceNotFoundException e) {
            e.printStackTrace();
        }
    }

    /*
     * Create the MBean and register it with the platform MBeanServer.
     */
    @Override
    public void contextInitialized(ServletContextEvent arg0) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        try {
            this.beanName = new ObjectName(ResourceBundle.getBundle("version")
                    .getString("artifact") + ":type=EXVersion");
            this.metaData = new MetaData();
            mbs.registerMBean(this.metaData, this.beanName);
        } catch (MalformedObjectNameException | InstanceAlreadyExistsException
                | MBeanRegistrationException | NotCompliantMBeanException e) {
            // registration failed; report it and continue without version reporting
            e.printStackTrace();
        } catch (MissingResourceException e) {
            // the version.properties file is not present
            System.out.println("version.properties is not present!!");
        }
    }
}

In contextInitialized we already use the version.properties file to get the artifact name. This is also a direct test of whether the version.properties file is present at all; if not, we don’t even bother registering the MBean.
We save the MBean name so we can deregister it when the application gets undeployed.

To have this code called on application deployment, we need to register the listener in web.xml:

<listener>
    <listener-class>com.example.LifeCycleListener</listener-class>
</listener>

Packing it up

If you create a standard Maven project containing the files above and deploy it to your local repository, you can use it as a dependency in your other projects.

In each project you then need to do three things:

  1. add the version.properties template in src/main/resources (sample above)
  2. add the resource-filtering lines to the pom.xml (sample above)
  3. add the listener lines to web.xml (sample above)

Querying the information

So now we get to what it was all about: how do we extract the version information without any predefined knowledge of what is installed? We use the plain JMX API to query for our EXVersion type. A code snippet looks like this:

MBeanServer thisServer = ManagementFactory.getPlatformMBeanServer();
try {
    // match every MBean registered with our common type identifier
    ObjectName query = new ObjectName("*:type=EXVersion");

    for (ObjectName name : thisServer.queryNames(query, null)) {
        // 'out' is any PrintWriter/PrintStream, e.g. a servlet response writer
        out.println("module: " + thisServer.getAttribute(name, "Name")
                + "\tName: " + thisServer.getAttribute(name, "FullName")
                + "\tVersion: " + thisServer.getAttribute(name, "Version"));
    }
} catch (Exception e) {
    e.printStackTrace();
}

This will print a line of information for ALL deployed artifacts that expose an EXVersion-type bean.

Rounding up

This is a very simple approach to centralized version reporting without too much infrastructure. It is easy to include in all your projects with a minimal amount of configuration. It does have some limitations: for example, two deployments of the same artifact in different contexts will not work, since both would try to register an MBean under the same ObjectName.
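One possible workaround, not part of our implementation, is to add the servlet context path as an extra key property of the ObjectName so that each deployment registers under a unique name. A sketch of the idea inside contextInitialized:

String artifact = ResourceBundle.getBundle("version").getString("artifact");
// the context path distinguishes the deployments; ObjectName.quote()
// escapes characters that are illegal in an ObjectName value
String context = ObjectName.quote(arg0.getServletContext().getContextPath());
this.beanName = new ObjectName(artifact + ":type=EXVersion,context=" + context);

Note that the query pattern then needs a trailing wildcard, "*:type=EXVersion,*", to keep matching names with the extra key property.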

For our internal use we did extend the MetaData class to have some extra properties, but those were very specific to our project.

Using IBM DB2 NoSQL Graph Store in Websphere Application Server Community Edition

This is the first of a series of blog posts about our experience with the IBM DB2 Express-C NoSQL Graph Store (hereafter DB2 RDF) in combination with IBM WebSphere Application Server Community Edition (hereafter WASCE).

The DB2 RDF product allows the storage and manipulation of RDF data. The data can be stored in graphs, all according to W3C Recommendations. DB2 RDF uses the Apache Jena programming model to interact with the underlying store. The very detailed documentation outlines the products and tools needed to get a basic DB2 RDF programming environment going.
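To give a feel for that programming model, here is a minimal, self-contained Jena sketch (a plain in-memory model, using the Jena 2.x package names of that time; the class name is arbitrary). Connecting the same API to the DB2 RDF store is the subject of a later article:

package com.example;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class JenaExample {
    public static void main(String[] args) {
        // build a small graph in memory
        Model model = ModelFactory.createDefaultModel();
        Resource person = model.createResource("http://example.com/person/1");
        Property name = model.createProperty("http://xmlns.com/foaf/0.1/", "name");
        person.addProperty(name, "John Doe");

        // serialize the graph to standard out as Turtle
        model.write(System.out, "TURTLE");
    }
}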

This series of articles is specifically about using these tools inside the WASCE environment. While developing our RESC.Info product we gathered a lot of experience that we would like to share with the community in these articles. We will also be presenting our experience at this year’s IBM Information On Demand 2013 in Las Vegas.

The series will cover the following topics:

  • Configuring WASCE data sources
  • Assembling the correct Jena distribution
  • Dealing with transactions

This first article is about configuring WASCE data sources for use with DB2 RDF and the Jena programming model. It is NOT meant to be an extensive installation guide for these products; refer to the respective product documentation for more information on installation.
It is very important to select the correct versions of the various products:

  • IBM DB2 Express-C 10.1.2
  • IBM WebSphere Application Server Community Edition 3.0.4

Creating a DB2 database and graph store

To be able to use the DB2 RDF features we need to create a standard database with some specific parameters. The DB2 documentation contains extensive information for this task. To create a database ‘STORE’ that supports DB2 RDF, we issue the following commands:

db2 CREATE DATABASE STORE PAGESIZE 32 K
db2 UPDATE DATABASE CONFIGURATION FOR STORE USING LOGFILSIZ 20000
db2 CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 4

For the correct administration we also need to execute:

db2set DB2_ATS_ENABLE=YES
db2 UPDATE DB CFG FOR STORE USING AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_RUNSTATS ON
db2 alter bufferpool IBMDEFAULTBP IMMEDIATE SIZE 15000 AUTOMATIC

Now that the database is created we still need to create a graph store inside the database. This is done with the following command:

createrdfstore rdfStore -db STORE -user db2admin -password XXX -schema public

This can take a while. After completion you will have a graph store ‘rdfStore’ inside your database ‘STORE’. To check the presence of this store, issue the following command while connected to ‘STORE’:

db2 "SELECT * FROM SYSTOOLS.RDFSTORES"

The resulting table should contain a reference to our store ‘rdfStore’ in schema ‘public’.

WebSphere Application Server Community Edition installation

Install WASCE with the installer, but do not start it yet. WASCE is distributed with some older DB2 JDBC drivers, which interfere with the DB2 JDBC4 drivers needed for the DB2 RDF interface. In the repository directory of WASCE, look for the path

<wasce_install>/repository/com/ibm

and delete the db2 sub-directory. Then run WASCE with the -clean parameter, which causes WASCE to clean up all references to the included DB2 JDBC drivers:

geronimo.[sh/bat] run -clean

Installing db2jcc4.jar

Now it is time to install the JDBC4 driver into the WASCE repository. In the advanced mode of the console you will find the Resources/Repository tab, where you can add new JARs to the repository. Select db2jcc4.jar from your <DB2_INST>/java directory, fill out the fields as shown in the image, and click ‘Install’.
[Image: the WASCE Resources/Repository install form]

Creating a Database Pool

Once the correct JAR is installed, creating the connection to the database is the same as for any other regular database connection. Select DB2 XA as the ‘Database type’ and fill out the connection information. You should see only one JDBC driver here, the one we just installed. Fill out the details of your DB2 database ‘STORE’ and click ‘Deploy’.

After the data source is created we can use it from the simple SQL entry field: select the newly created data source and issue the following query:

SELECT * FROM SYSTOOLS.RDFSTORES

The result should be the same as the one we got when issuing this query from the command line.

Conclusion

We have now set up a DB2 RDF connection inside WASCE with the correct versions of both products and the connecting driver. The next step will be to create a simple Jena-based application to interact with the store.