Transforming EAD XML into RDF/XML using XSLT

This is a (brief!) second post revisiting my “process” diagram from an early post. Here I’ll focus on the “transform” process on the left of the diagram:

Diagram showing process of transforming EAD to RDF and exposing as Linked Data

The “transform” process is currently performed using XSLT to read an EAD XML document and output RDF/XML, and the current version of the stylesheet (dated 20110630) is now available.

(The data currently available via http://data.archiveshub.ac.uk/ was actually generated using the previous version http://data.archiveshub.ac.uk/xslt/20110502/ead2rdf.xsl. The 20110630 version includes a few tweaks and bug fixes which will be reflected when we reload the data, hopefully within the next week.)
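
To give a flavour of the approach – this is a minimal sketch rather than an extract from the actual stylesheet, and the element paths, property choices and base URI are purely illustrative – a template along these lines reads the collection-level description from an EAD document and emits a single RDF/XML description:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:dct="http://purl.org/dc/terms/">

      <xsl:output method="xml" indent="yes"/>

      <!-- Illustrative base URI only; the real URI patterns are a separate design decision -->
      <xsl:param name="baseuri" select="'http://example.org/id/archivalresource/'"/>

      <!-- Map the collection-level description to a single RDF/XML description -->
      <xsl:template match="/ead">
        <rdf:RDF>
          <rdf:Description rdf:about="{concat($baseuri, normalize-space(archdesc/did/unitid))}">
            <dct:title>
              <xsl:value-of select="normalize-space(archdesc/did/unittitle)"/>
            </dct:title>
          </rdf:Description>
        </rdf:RDF>
      </xsl:template>

    </xsl:stylesheet>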

As I’ve noted previously, we initially focused our efforts on processing the set of EAD documents held by the Archives Hub, and on the particular set of markup conventions recommended by the Hub for data contributors – what I sometimes referred to as the Archives Hub EAD “profile” – though in practice, the actual dataset we’ve worked with encompasses a good degree of variation. But it remains the case that the transform is really designed to handle the set of EAD XML documents within that particular dataset rather than EAD in general. (I admit that it also remains somewhat “untidy” – the date handling is particularly messy! And parts of it were developed in a rather ad hoc fashion as I amended things as I encountered new variations in new batches of data. I should try to spend some time cleaning it up before the end of the project.)

Over the last few months, I’ve also been working on another JISC-funded project, SALDA, with Karen Watson and Chris Keene of the University of Sussex Library, focusing on making available their catalogue data for the Mass Observation Archive as Linked Data.

I wrote a post over on the SALDA blog on how I’d gone about applying and adapting the transform we developed in LOCAH for use with the SALDA data. That work has prompted me to think a bit more about the different facets of the data and how they are reflected in aspects of the transform process:

  • aspects which are generic/common to all EAD documents
  • aspects which are common to some quite large subset of EAD documents (like the Archives Hub dataset, with its (more or less) common set of conventions)
  • aspects which are “generic” in some way, but require some sort of “local” parameterisation – here, I’m thinking of the sort of “name/keyword lookup” techniques I describe in the SALDA post: the technique is broadly usable but the “lookup tables” used would vary from one dataset to another (there’s a small sketch of this just after the list)
  • aspects which reflect very specific, “local” characteristics of the data – e.g., some of the SALDA processing is based on testing for text patterns/structures which are very particular to the Mass Observation catalogue data
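
To illustrate the third bullet, here is a minimal sketch of the lookup approach. It isn’t lifted from the actual stylesheet – the file name, element names and URIs are hypothetical, and it’s intended as a component that would sit inside a fuller transform – but it shows the idea: consult a small per-dataset table and generate a link only when an indexed name has an entry in it.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:dct="http://purl.org/dc/terms/">

      <!-- Hypothetical per-dataset lookup table, held in a separate file (names.xml):
           <names>
             <name key="Mass Observation Archive"
                   uri="http://example.org/id/agent/mass-observation-archive"/>
           </names> -->
      <xsl:variable name="lookup" select="document('names.xml')/names"/>

      <!-- Emit a link only when the indexed name has an entry in the table -->
      <xsl:template match="controlaccess/corpname | controlaccess/persname">
        <xsl:variable name="hit"
                      select="$lookup/name[@key = normalize-space(current())]"/>
        <xsl:if test="$hit">
          <dct:subject rdf:resource="{$hit/@uri}"/>
        </xsl:if>
      </xsl:template>

    </xsl:stylesheet>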

What I’d like to do (but haven’t done yet) is to reorganise the transform to try to make it a little more “modular” and to separate the “general”/”generic” from the “local”/”specific”, so that it might be easier for other users to “plug in” components more suitable for their own data.
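
The sort of structure I have in mind (the file names here are hypothetical) is a generic “core” stylesheet plus a thin dataset-specific wrapper that imports it and overrides or parameterises the local bits:

    <!-- salda2rdf.xsl: a hypothetical dataset-specific wrapper -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Imported templates have lower precedence, so templates defined here
           override the generic behaviour of the core stylesheet -->
      <xsl:import href="ead2rdf-core.xsl"/>

      <!-- Dataset-specific parameterisation, e.g. where to find the lookup table -->
      <xsl:param name="name-lookup" select="'salda-names.xml'"/>

      <!-- Templates for dataset-specific text patterns would go here -->

    </xsl:stylesheet>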

Some thoughts on architecture and workflows

This is an attempt to sketch out some of my/our initial thoughts on the approaches the project is considering to exposing data as Linked Data. I should emphasise that these are very much initial thoughts, and things may change as we progress.

The project is dealing with two main data sources, and at the moment two different approaches are being considered to those sources.

The first data source is the collection of archival finding aids describing the holdings of the archives of educational and research institutions in the UK, aggregated by the JISC Archives Hub service. This data takes the form of XML documents in the Encoded Archival Description (EAD) format, created by archivists in the various institutions, and submitted to the Hub.

Currently, the aggregated data is indexed using the Cheshire 3 application, and exposed as HTML pages on the archiveshub.ac.uk site for search and browse. (SRU and Z39.50 targets and an OAI-PMH repository are also available.)

To expose (probably a subset of) the Hub EAD finding aids as Linked Data, the workflow is expected to look something like that represented in Figure 1 below:

Figure 1: Diagram showing process of transforming EAD to RDF and exposing as Linked Data
  1. Transform: EAD XML documents are transformed to an RDF format. We’ll write more about our current thinking on this in a subsequent post, as working out how best to represent the EAD data in RDF as the target for the transform is in itself a significant chunk of work (and an area I’m particularly interested in). This is likely to be something of an “iterative” process: we’ll start with a fairly basic transform that captures some subset of the content of the input documents, and perhaps refine things later to generate more data (and correct errors we’ll no doubt make in the first cut!).
  2. Enhance: RDF data from the previous step is “enhanced” and augmented. This step might include processes to (i) generally “clean up” the data (e.g. normalise some literals, identify internal co-references, etc.); (ii) add links to resources in other datasets; (iii) (maybe) pull in some useful data from other datasets, either data held by the Hub but not included in the EAD docs or data from other sources. (There’s a small sketch of what the “clean up” part might look like just after this list.) Again, this will probably be a process which we extend and refine over time.
  3. Upload: Load the RDF data from the previous step to an instance of the Talis Platform triple store, which Talis are kindly making available to the project.
  4. Expose: Expose a set of linked “bounded descriptions” from the triple store over HTTP, as documents in both human-readable and RDF formats, following the principles of the W3C TAG httpRange-14 resolution/Cool URIs for the Semantic Web. The use of the Platform also provides us with a SPARQL endpoint for the data – which we can make available to others to use – and which also means we can consider layering other Web interfaces over that endpoint. For example, I’d be interested in trying out the Linked Data API, which I talked about over on eFoundations a while ago.
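
By way of a (purely hypothetical) illustration of the first kind of “enhance” process mentioned in step 2, the sketch below is an identity transform over the RDF/XML produced by step 1: it normalises whitespace in title literals and passes everything else through unchanged. The dct:title property is just an assumption about what the transform generates.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:dct="http://purl.org/dc/terms/">

      <!-- Identity transform: copy everything through unchanged by default -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- One example "enhancement": tidy stray whitespace in title literals -->
      <xsl:template match="dct:title/text()">
        <xsl:value-of select="normalize-space(.)"/>
      </xsl:template>

    </xsl:stylesheet>
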

It may be that the second and third steps are reversed, and we upload the data to the triple store and perform the “enhance” step on the data there, i.e. something closer to Figure 2:

Figure 2: Diagram showing process of transforming EAD to RDF and exposing as Linked Data

Or indeed that a “hybrid” of the two is appropriate, and some “enhance” processes take place before upload and others take place afterwards.

We’ll also need to integrate some provision for “version control” and “provenance”/”attribution” (e.g. to track which data comes directly from the EAD sources, and which is added from elsewhere) into this process.

So for the Hub data, the plan is that the data is “exported” from the existing EAD dataset, and that the Platform triplestore provides the “back-end” for the app that serves up the “Linked Data” document views and provides a SPARQL endpoint.

The second data source of interest is the collection of bibliographic metadata aggregated into the Copac catalogue from the member libraries of Research Libraries UK and from other specialist libraries. This data is also held as XML, in the MODS format. (Bethan Ruddock has a couple of posts on the Copac Development blog which describe the processes by which data is transferred from the contributor libraries to the Copac catalogue.)

As in the case of the Archives Hub data, the first stage will be to design an appropriate RDF representation and an algorithm for transforming the MODS data to RDF (or to select – and maybe adapt, if necessary – an existing one).
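
Purely as a sketch of the shape such a transform might take – the property choices, element paths and base URI here are illustrative assumptions, not decisions – the equivalent starting point for MODS might look like this:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:mods="http://www.loc.gov/mods/v3"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:dct="http://purl.org/dc/terms/">

      <xsl:output method="xml" indent="yes"/>

      <!-- Illustrative base URI only; the real URI patterns are still to be decided -->
      <xsl:param name="baseuri" select="'http://example.org/id/bibliographicresource/'"/>

      <!-- Map one MODS record to a single RDF/XML description -->
      <xsl:template match="/mods:mods">
        <rdf:RDF>
          <rdf:Description rdf:about="{concat($baseuri, normalize-space(mods:recordInfo/mods:recordIdentifier))}">
            <dct:title>
              <xsl:value-of select="normalize-space(mods:titleInfo/mods:title)"/>
            </dct:title>
          </rdf:Description>
        </rdf:RDF>
      </xsl:template>

    </xsl:stylesheet>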

In contrast to the case of the Hub I outlined above, the plan is to serve the RDF data from the existing Copac database, rather than upload it to a triplestore. This will probably require the development of a small additional application (or maybe just the configuration of an HTTP server) to service the new URIs coined for resources, to support content negotiation, and to redirect to the URIs of appropriate pages.

One of the questions raised by this approach is how to handle the process I described above as “enhance”, and in particular how to accommodate the addition of new data – at a minimum, links to existing resources described in other Linked Data datasets – assuming that we aren’t going to be able to update the source MODS XML documents. In some cases, it may be trivial to incorporate this in the MODS-to-RDF transform (e.g., to generate links to languages described by lexvo.org). Another approach might be to generate simple “seeAlso” links to an additional set of documents (which could be simple static documents or could be served from an RDF store). Hmm. As you can probably tell, I’ve thought about this rather less than I’ve thought about the Hub case! Anyway, the suggested approach is sketched in Figure 3:

Figure 3: Diagram showing process of transforming MODS to RDF and exposing as Linked Data
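
To make the lexvo.org and “seeAlso” ideas above a little more concrete, here is a minimal sketch. Again, it’s hypothetical: it assumes the MODS record carries a three-letter language code matching the ISO 639-3 identifier that lexvo.org uses (some ISO 639-2 bibliographic codes would need mapping first), and the “links” URI pattern for the companion documents is invented.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:mods="http://www.loc.gov/mods/v3"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
        xmlns:dct="http://purl.org/dc/terms/">

      <!-- Point the language at the lexvo.org URI for its code -->
      <xsl:template match="mods:language/mods:languageTerm[@type = 'code']">
        <dct:language
            rdf:resource="{concat('http://lexvo.org/id/iso639-3/', normalize-space(.))}"/>
      </xsl:template>

      <!-- A "seeAlso" pointer to a separately maintained document of additional links -->
      <xsl:template match="mods:recordInfo/mods:recordIdentifier">
        <rdfs:seeAlso
            rdf:resource="{concat('http://example.org/links/', normalize-space(.))}"/>
      </xsl:template>

    </xsl:stylesheet>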

Another constraint of this approach would be that, although we can serve the set of linked documents, it doesn’t provide a SPARQL endpoint.

One of the expectations for the project is that it “explores and reports on the opportunities and barriers in making content structured and exposed”, and an assessment of the pros and cons of the different approaches to hosting the data should contribute to that report.