Darren Mothersele

Software Developer

Warning: You are viewing old, legacy content. Kept for posterity. Information is out of date. Code samples probably don't work. My opinions have probably changed. Browse at your own risk.

Kasabi Hack Day and Drupal SPARQL Views

Aug 17, 2011


I recently attended a Hack Day hosted by Kasabi and BrightLemon on open government data and the semantic web. We played around with the Kasabi services and the available datasets. During the day I quickly put together a Drupal 7 website to demonstrate querying a Kasabi SPARQL endpoint using Drupal and SPARQL Views. If you've not already done so, head over to the Kasabi beta, register for an API key, and find an interesting dataset you want to use. Read on for full details of how to set up a Drupal site to talk to the Kasabi services...

I had to hack around a bit on the day to make everything work, including applying some patches from the Drupal.org issue queues and generally hacking around with the code. You can read up on some of the issues via the issue queues, but luckily all these patches have since made it into the official releases of the Drupal modules, in particular the patch that makes all this possible: the ability to specify GET parameters for SPARQL endpoints. If you use the versions of the modules specified below, this all just works!

Download and install Drupal

Here's a make file you can use to download a Drupal 7 installation with the required modules for this guide. If you don't know how to use Drush Make, check the documentation, or download Drupal core and the modules separately from drupal.org.

core = 7.x
api = 2
projects[] = drupal
projects[ctools][version] = 1.0-rc1
projects[ctools][subdir] = contrib
projects[views][version] = 3.0-rc1
projects[views][subdir] = contrib
projects[entity][version] = 1.0-beta10
projects[entity][subdir] = contrib
projects[rdfx][version] = 2.0-alpha3
projects[rdfx][subdir] = contrib
projects[sparql][version] = 2.0-alpha3
projects[sparql][subdir] = contrib
projects[sparql_views][version] = 2.0-beta1
projects[sparql_views][subdir] = contrib

Install the Drupal site as you normally would. I find the quickest way is the command line, using Drush...

drush make kasabi.make kasabi
mysql -uUSERNAME -pPASSWORD -e"create database kasabi;"
cd kasabi
drush si --db-url=mysql://USERNAME:[email protected]/kasabi
chmod -R a+w sites/default/files

The first line uses Drush Make to download Drupal and the required modules into a folder called "kasabi". The second line creates a database. We then change into the site folder. The next line uses Drush to run the Drupal installer against the database we just created. Finally, we give the webserver permission to write to the files folder.

You now have a Drupal 7 site up and running, which you can log in to with username admin and password admin. I'm doing all this on a dev box, but you should go ahead and change these login details after your first login.

Enable and configure the required modules

In this example I've turned off some core modules to clear up the interface a bit, including things like Overlay. I've then enabled the contributed modules downloaded above: CTools, Views (and Views UI), Entity API, RDFx, SPARQL, and SPARQL Views.

I then need to download the ARC2 library required by the RDFx module. To do this, run the following back at the command line:

cd sites/all/modules/contrib/rdfx
mkdir vendor
cd vendor
wget http://github.com/semsol/arc2/tarball/master
mv master arc.tgz
tar xzf arc.tgz
mv semsol-arc2-495d10b arc

You can now navigate to admin/reports/status to check that everything installed successfully and the Drupal site is ready to go.

Configure RDFx and SPARQL Endpoints

In the "Structure" section of the Drupal admin, go to the "SPARQL Endpoint Registry" and select "Add SPARQL endpoint". In this configuration form enter the details of the Kasabi SPARQL endpoint you want to use, and your API key for the Kasabi service.

As you can see in the image above, for this example, I've selected the endpoint for reading data about NHS Organisations. I've also entered a query parameter for apikey=whateveryourapikeyis.
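Under the hood, once the GET-parameter patch is in place, SPARQL Views sends your query to the endpoint over HTTP with the API key appended as an ordinary query-string parameter. Here is a rough sketch of the request it builds; the endpoint URL and key below are placeholders for illustration, not real values:

```python
from urllib.parse import urlencode

# Placeholder values: substitute your own Kasabi endpoint URL and API key.
ENDPOINT = "http://api.kasabi.com/dataset/nhs-organisations/apis/sparql"
API_KEY = "whateveryourapikeyis"
query = "SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10"

# The apikey travels as a plain GET parameter alongside the query itself,
# which is exactly what the endpoint registry configuration enables.
request_url = ENDPOINT + "?" + urlencode({"apikey": API_KEY, "query": query})
print(request_url)
```

If you want to sanity-check your endpoint and key outside Drupal, you can paste a URL of this shape into a browser or fetch it with curl.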

Add prefix mapping for namespaces

The next step is to add the mappings for the prefixes for any RDF namespaces you want to use. To find this configuration, in the "Configuration" section of the Drupal admin, click "RDF publishing settings" and then the tab labelled "RDF namespaces". At the bottom of this page you can add extra mappings. I've added the following mappings:

nhs -> http://data.kasabi.com/dataset/nhs-organization/def/
org -> http://www.w3.org/ns/org#
os  -> http://data.ordnancesurvey.co.uk/ontology/postcode/
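These mappings simply associate a short prefix with a namespace URI, so that a compact term like nhs:healthArea stands in for the full predicate URI. A minimal sketch of the expansion (the expand helper here is illustrative, not part of any Drupal module):

```python
# Prefix-to-namespace mappings, mirroring those entered in the Drupal admin.
PREFIXES = {
    "nhs": "http://data.kasabi.com/dataset/nhs-organization/def/",
    "org": "http://www.w3.org/ns/org#",
    "os": "http://data.ordnancesurvey.co.uk/ontology/postcode/",
}

def expand(curie: str) -> str:
    """Expand a CURIE like 'nhs:healthArea' into a full URI."""
    prefix, local_name = curie.split(":", 1)
    return PREFIXES[prefix] + local_name

print(expand("nhs:healthArea"))
# http://data.kasabi.com/dataset/nhs-organization/def/healthArea
```

This is the same substitution the PREFIX declarations perform in the generated SPARQL queries shown later in this post.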

Add SPARQL Views resource

SPARQL Views allows you to use Drupal's powerful query builder (Views) to create SPARQL queries. In order to do this SPARQL Views needs to know what kind of resources we will be querying.

Back in the "Structure" section of the Drupal admin, go to "SPARQL Views resource types" and select "Add SPARQL Views resource". Give your resource a name and select the SPARQL endpoint this applies to.

Now, back on the "SPARQL Views resource type" page, select "manage fields" on your newly created resource type. Add a field for each attribute we are interested in, giving each field a human-readable name and an internal identifier. You can use the field type "text" for each one; what matters is the RDF mapping you specify for each field. When you add a field, the configuration page for that field includes a section to enter the RDF mapping:

When you're finished, you'll have all the fields configured with their mappings to RDF predicates.

Configure Views

In the Structure section of the Drupal admin, go to "Views". You might want to adjust some Views settings under the "Settings" tab. I've turned off automatic updating of the preview and turned on "Show the SQL query", which is useful for debugging.

Now, back on the Views list, click "add new view". Give the view a name, and select "SPARQL Views: NHS Organization" (or whatever entity you created) as the View type. Click "Continue & edit".

This takes you to the main View admin:

If you click "Update preview" you can see that a very basic query has been created for you as a starting point. The query is:

SELECT DISTINCT ?nhs_organization
WHERE { ?nhs_organization ?p ?o }

You can start to adapt this by adding fields and filters, changing settings such as the pager settings. Note that some Views options don't work yet. Paging for example is not fully functional because it's not possible to do the required queries in SPARQL yet. When we get support for aggregate functions in SPARQL 1.1 this should be possible.

Here's an example query I created. I created a page display, using the table style plugin, and added the following fields. This is the query generated:

PREFIX org: <http://www.w3.org/ns/org#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX nhs: <http://data.kasabi.com/dataset/nhs-organization/def/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT DISTINCT ?nhs_organization_field_org_id ?nhs_organization_field_org_name ?nhs_organization_field_org_healtharea ?nhs_organization_field_org_healthauthority
WHERE {
  ?nhs_organization rdfs:label ?nhs_organization_field_org_name;
                    rdf:type <http://data.kasabi.com/dataset/nhs-organization/def/GeneralPractice>.
  OPTIONAL {?nhs_organization org:identifier ?nhs_organization_field_org_id}
  OPTIONAL {?nhs_organization nhs:healthArea ?nhs_organization_field_org_healtharea}
  OPTIONAL {?nhs_organization nhs:healthAuthority ?nhs_organization_field_org_healthauthority}
}

And the result looks like this:

Notice how I have created a second view, which is linked to from this view. I pass an argument to the second view, which then filters on the memberOf property to show all members of the organisation. You can achieve this by using Views' powerful field rewriting to output a field as a link, using the ID as a token in the link that is substituted when displayed. The second view has a "Contextual filter" defined (look in the "Advanced" section of your views admin).

This is just the start of what is possible using SPARQL endpoints and Views, all without writing a single line of SPARQL code! If you are already familiar with Views 3 then hopefully you know how to start using this stuff. If not, I suggest looking up some of the excellent guides on using Views that can be found on the web. If you want to look more into what is possible with these modules, then this article is a good starting point.