Some more “things”: some extensions to the Hub model

Having had a little more time to experiment with the Archives Hub EAD data, and to think about what sort of operations on the RDF data we might wish to perform or enable others to perform, I’ve introduced a few small extensions to the model I described a few weeks ago.

Extents

At our last project meeting, we talked about some of the possibilities for visualisations of the data. One of the ideas (suggested by Jane) is to explore representing relative sizes of collections, perhaps on a map, so that, for example, a researcher could provide a geographic location and a subject area and get a visual representation of the relative sizes of collections within that area.

The EAD XML format provides an element called <extent> for “information about the quantity of the materials being described or an expression of the physical space they occupy”. Although the EAD Tag Library provides guidelines to try to encourage some uniformity of the content, the data in the Hub EAD documents is quite variable. Examples of the content in the samples I’ve looked at include:

  • 6.5 linear metres
  • 2.04 metres
  • 0.48m
  • 190 archive boxes
  • 13 boxes
  • One sheet of paper
  • 13 lever arch files, 48 sound tape reels, 490 audio cassette tapes (1 filing cabinet)

In the initial model, this was represented in RDF as a single triple, with the URI of the unit of description (an archival collection or some part of it) as subject and this string as a literal object. I’m suggesting changing this to treat the “extent” as a resource with its own URI, rather than simply as a literal. Doing that enables us – for at least some of these cases – to make explicit that it is a value measured in some “unit” (linear metres, archival boxes), to “normalise” the way those units are represented (so e.g. “linear metres”, “metres” and “m” can be mapped to a single form in the RDF data), and possibly to make comparisons, albeit approximate ones, between extents measured in different units (for example, “archival boxes” and “linear metres”).

So we end up with patterns in the RDF graph like:

unit:123 dcterms:extent extent:123 .

extent:123 ex:metres "2.04"^^xsd:decimal .

Having said that, I recognise that the nature of the input data means such techniques are usefully applicable only to a subset of the data; I’m not sure there’s a great deal we can do with “composite” strings like the last one in the list above, other than present them to a human reader.
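As a rough illustration of the sort of normalisation step involved, here’s a minimal sketch in Python (our actual transform work is planned as XSLT; Python, and these patterns and unit names, are assumptions purely for illustration), mapping some of the sample strings above to a (value, unit) pair:

    import re

    # Illustrative patterns only, based on the sample strings listed above;
    # a real transform would need to handle many more variants.
    UNIT_PATTERNS = [
        (re.compile(r'^(\d+(?:\.\d+)?)\s*(?:linear\s+)?m(?:etres?)?$', re.I), 'metres'),
        (re.compile(r'^(\d+(?:\.\d+)?)\s*archiv(?:e|al)\s+box(?:es)?$', re.I), 'archival boxes'),
        (re.compile(r'^(\d+(?:\.\d+)?)\s*box(?:es)?$', re.I), 'boxes'),
    ]

    def normalise_extent(text):
        """Return (value, unit) for simple extent strings, or None for
        "composite" strings we can only present to a human reader."""
        for pattern, unit in UNIT_PATTERNS:
            match = pattern.match(text.strip())
            if match:
                return float(match.group(1)), unit
        return None

    print(normalise_extent('6.5 linear metres'))   # (6.5, 'metres')
    print(normalise_extent('0.48m'))               # (0.48, 'metres')
    print(normalise_extent('190 archive boxes'))   # (190.0, 'archival boxes')
    print(normalise_extent('One sheet of paper'))  # None (irregular/composite)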

Events and Times

One of the other ideas for presenting data we’ve chewed around is that of some sort of “timeline” view. It’s something I’ve been quite keen to explore – though I’m conscious that much of the most useful information in the EAD documents exists only as prose in the “biographical/administrative histories” provided for the originators of the archives.

As a first tentative step in this direction, I’ve introduced a notion of “event” into the model, where, in the first instance:

  • the Creation of a unit of description is modelled as an event taking place during a period of time
  • (where birth/death dates are provided in the input) the Birth and Death of a person are modelled as events taking place during a period of time

It’s possible to generate this just from simple processing of the input data. It may be possible to go further and generate a richer range of “events” through the use of some flavour of intelligent text analysis/”entity extraction” tools on the biographical/administrative history text, but that’s something for us to consider in the future.
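By way of illustration, here’s a minimal sketch (Python with rdflib) of the kind of triples we might generate for a person’s birth event; the hub: and ev: namespaces, class and property names, URI patterns and dates are all invented for the example rather than settled vocabulary choices:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    HUB = Namespace('http://example.org/hub/')          # invented
    EV = Namespace('http://example.org/vocab/event/')   # invented

    g = Graph()
    person = HUB['person/p123']
    birth = HUB['event/p123-birth']
    period = HUB['time/1757']

    # The Birth of a person, modelled as an event taking place during a period
    g.add((birth, RDF.type, EV.Birth))
    g.add((birth, EV.agent, person))
    g.add((birth, EV.during, period))
    g.add((period, RDF.type, EV.Period))
    g.add((period, EV.beginning, Literal('1757-01-01', datatype=XSD.date)))
    g.add((period, EV.end, Literal('1757-12-31', datatype=XSD.date)))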

Postcodes

Finally – and as I noted in the previous post, this is something which goes beyond the content of the EAD documents themselves – prompted mainly by the recent announcement by John Goodwin that the Ordnance Survey had extended their linked data dataset to include “post code units”, I’ve added in a notion of “Postcode Unit” so that we can make links to resources from that dataset (and also to the UK Postcodes dataset).

So the revised model looks something like the figure below:

Diagram showing data model for EAD data

Figure 1

So, I’m hoping that – bug fixes aside – I can stop tinkering with this for a while 🙂 and that we can work with this version of the model, and test out what is possible and where any “pain points” are, and then think about where further changes might be useful.

The “things” in EAD: a first cut at a model

As mentioned by Jane in a couple of previous posts, she, Bethan and I met up in Manchester in August to share our thoughts about how to model the Archives Hub EAD data in a form that can be represented in RDF.

RDF in a nutshell

For the purposes of this discussion, the main point to bear in mind is that the “grammatical principle” underpinning RDF is one of making simple three-part statements, each of which makes an assertion of a relationship (of some particular type) between two things. So for example, in RDF I can “say” things like:

Document 123 has-title “Arthur and George”

or

Document 123 is-authored-by Person P
Person P has-name “Julian Barnes”
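For concreteness, those statements might be written like this with Python’s rdflib library (the example.org URIs and property names are invented for the example):

    from rdflib import Graph, Literal, Namespace

    EX = Namespace('http://example.org/')  # invented URIs for the example
    g = Graph()

    g.add((EX['document/123'], EX.title, Literal('Arthur and George')))  # has-title
    g.add((EX['document/123'], EX.author, EX['person/P']))               # is-authored-by
    g.add((EX['person/P'], EX.name, Literal('Julian Barnes')))           # has-name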

When considering how to represent EAD data in RDF, then, the first step is to try to take a step back from the “nitty-gritty” of the EAD XML markup, and think about the three part statements we might construct to represent the “information content” of that document. We need to think in terms, not of XML documents and elements and attributes and nesting/containment, but rather of what an EAD document is “saying” about “things in the world” (perhaps more accurately, in the “world” as conceptualised by the creator of the archival finding aid, shaped by archival description practices in general) and what sort of questions we want to answer about those “things”. What are the “things” – and here I use the term in a general sense to include concepts and abstractions as well as material objects – that an EAD document provides information about? What are the relationships between these things? What else does an EAD document say about those things?

Note: The discussion here does not cover the “document”/”description” side of the “Linked Data” picture i.e. for each “thing”, we’ll be providing a “description” of that “thing” in the form of a “document”. Metadata describing that “document” will be important in providing information about provenance and currency, for example, but that is not discussed here.

EAD as used by the Archives Hub

The EAD XML format was designed to cope with the “encoding” of a wide range of archival finding aids, including those constructed according to the (slightly different) cataloguing practices and traditions of different communities.

Further, many features of the EAD format are optional: one can construct a valid EAD document using only a fairly minimal level of markup, or one can use more detailed markup to represent more information.

This flexibility can be something of a “double-edged sword”: on the one hand, it enables data creators to accommodate a wide range of data, and it provides choice in the level of detail of markup (and human resources in creating that markup!) to be applied; on the other hand, it can make working with EAD data quite complex for a consumer, particularly when processing data from a range of sources which perhaps use a range of different conventions and features of the language.

In part to address this sort of issue (as well as to make things simpler for data providers by insulating them from the detail of EAD markup), the Archives Hub provides a forms-based EAD editor, based primarily on the information categories enumerated by the ISAD(G) archival description standard, which generates EAD documents following a consistent set of markup conventions. (I sometimes think of this as a “profile” of EAD, a narrower set of constraints than that imposed by the EAD DTD/schema itself, but I’m not sure that sort of terminology is in widespread use in this context.)

So, we made the “pragmatic” decision to work, in the first instance at least, on the basis of this particular set of EAD markup conventions, rather than trying to address the full EAD format, which means we can limit the number of variants we need to deal with. Having said that, even for the case of data created using the Hub editor, an element of variation is present, because although the data entry form generates a common high-level structure, data creators can apply different markup within those high-level structural components. In this first cut at a model, we have focused on analysing those common structural elements, with the intention of extending and refining our approach at a later stage.

In the course of this (or in thinking about it afterwards) we’ve come up with a few questions, which I’ll try to highlight in the course of the discussion below. Any feedback on these points (or indeed on any other aspect of the post!) would be very welcome.

The “world” as seen by EAD

Jane and I had both done some doodling before our meeting, and we started out by walking through our ideas, highlighting both those aspects which seemed pretty clear and uncontroversial, and aspects where we were uncertain or several alternatives seemed possible (and reasonable). Although we were using slightly different terminology, I think we had come up with quite similar notions, and after a bit of discussion, we arrived at a first cut at a “core” model which I’m representing graphically in Figure 1 below. This isn’t intended as a formal UML or E-R diagram, but each box represents a type of “thing” (a class) and each arrow represents a type of relationship between individual things (“instances” of those classes):

Diagram showing draft data model for EAD data (1)

Figure 1

So the “core” types of things identified in this first stage were:

  • Unit of Description: these are the “units” of archival material, a document or set of documents, the actual stuff held in the repository and described by the finding aid. It’s a “generic” class to reflect the archival description principle of “multi-level description”. An archival finding aid typically has a “hierarchical” structure, in which one “unit of description” is (described as logically forming) “part of” another “unit of description”. A finding aid may provide only a “collection-level” description of a collection which contains many thousands of individual records, without describing those records individually at all; or it may include descriptions of various component groupings and sub-groupings of records; or it may indeed go as far as describing individual records within such groupings. For each Unit of Description, information relevant to that particular unit is provided. EAD and ISAD(G) allow for the provision of more or less the same set of information whatever the “level” of unit described, though in practice some elements are more commonly used for “aggregate/group” units.
  • Archival Finding Aid: these are the documents created by archival cataloguers to describe the archival materials. Often a single finding aid describes (or has as its topic/subject) several units of description, but it may be the case that a finding aid describes only a single unit – where only a description of the collection as a whole is provided.
  • Repository (Agent): the organisations who curate and provide access to the archival material, and who create and maintain the archival finding aids. (EAD allows for the possibility that two different agencies perform these two roles; the Hub EAD Editor works on the basis that a single agent is responsible for both).
  • Origination (Agent): the entity (individual, organisation or family) “responsible for the creation, accumulation, or assembly of the described materials before their incorporation into an archival repository” (from the description of the EAD <origination> element). Jane analysed the rather complex nature of the ISAD(G) Creator/EAD origination relationship, which encompasses notions of both “item creator” and “collector”, in an earlier post on the Archives Hub blog (http://archiveshub.ac.uk/blog/?p=2401).
  • “Things” which are referenced in the form of names used as “access points” or “index terms” using the EAD <controlaccess> element. The Hub EAD Editor supports the provision of the following as <controlaccess> terms, and recommends the use of a number of thesauri or “authority files” from which they should be drawn: Names of “Subjects” (topics); Personal Names; Family Names; Corporate Names; Place Names; Book Titles; Names of Genres or Forms; Names of Functions. So the corresponding “things named” are: Concepts, Persons, Families, Organisations, Places, Books, Genres or Forms, and Functions. As Jane notes in her recent post, the relationship between the Unit of Description and the entity named in the <controlaccess> element is not necessarily a relationship of “about”-ness, but a rather less specific one, which for the moment we’ve labelled as simply “associated with” (though a better label might be preferable!).

(I’ve shown the Origination and Repository as distinct classes in the diagram, rather than as a single Agent class, because, as I hope will become clearer below, it ends up that they participate in a slightly different set of relationships).

We went on to extend and refine this core model to accommodate more of the information from the EAD document.

First, we refined the way the “access points” are represented. I’d discussed this aspect of the model with Leigh Dodds of Talis and he suggested that we consider modelling the named entities here as concepts, in turn related to the physical entities they conceptualise, i.e. that we represent the “conceptualisation” of a person, family, organisation or place captured in a thesaurus entry or authority file record, as distinct from the actual physical entity. So, to take an example which I think Bethan used during our conversation, we can distinguish between a conceptualisation of William Blake as a poet and one of William Blake as an artist, each in turn related to William Blake the person.

Although I don’t plan to discuss the specifics of RDF vocabulary in this post, it’s worth noting that the FOAF RDF vocabulary has recently been extended with the addition of a property, foaf:focus, to represent the relationship between the conceptualisation and the thing conceptualised (person, place etc), to support exactly this convention.

For some of the <controlaccess> named entities – like the topics, genres/forms and functions – there is no “other thing conceptualised” and it is sufficient to model them simply as concepts (or as instances of a subclass); and for the book case, we’ll just treat it as a “book” (and for the moment, at least, sidestep any FRBR-ish issues).

In both cases, the notion that the concept is a member of a specific thesaurus/authority file can be captured by introducing the notion (from SKOS) of a “Concept Scheme”.

Question 1: One question raised by this approach is whether, for the cases where there is a distinct entity involved, in transforming an EAD document into RDF, we should:

  1. Coin URIs for, and generate “descriptions” of, both the concept and the person/family/organisation/place conceptualised (with a triple with a foaf:focus predicate relating the two)? Or:
  2. Coin a URI for, and generate a “description” of only the concept, and leave the relationship with the person/family/organisation/place conceptualised “out of scope” at the transform stage (though that relationship might be obtained at a later stage by linking the concept to external data)?

My inclination is to do the former, on the grounds that this enables us to capture more of the information present in the EAD document i.e. to capture the information that where a <persname> element is used, this is the name of a conceptualisation of a person, where a <corpname> element is used, this is the name of a conceptualisation of an organisation, and so on.
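So, under the first option, the generated data might look something like the following sketch (Python/rdflib; the URIs, labels and concept scheme here are all invented for the example):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    FOAF = Namespace('http://xmlns.com/foaf/0.1/')
    HUB = Namespace('http://example.org/hub/')  # invented

    g = Graph()
    concept = HUB['concept/person/blake-william']  # the thesaurus entry
    person = HUB['person/blake-william']           # the person conceptualised

    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal('Blake, William, 1757-1827')))
    g.add((concept, SKOS.inScheme, HUB['scheme/some-authority-file']))  # invented scheme
    g.add((concept, FOAF.focus, person))  # conceptualisation -> thing conceptualised
    g.add((person, RDF.type, FOAF.Person))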

Question 2: Is it necessary/useful to also model the name itself as a distinct resource? I think we can manage without that, but we may revisit that point in the future.

Second, having made this choice for the <controlaccess> entities, we decided to apply it also to the case of the “origination” agent discussed above, with the “origination” relationship becoming one between a Unit of Description and a conceptualisation of an agent, rather than between a Unit of Description and the agent itself. I admit I’m still not completely sure this is necessary/useful/”the right thing to do”. The use of the <origination> element in the Hub EAD profile is described in the guidelines here. It allows for names to be presented in “the commonly used form of name”, rather than the form specified by an authority record (and indeed a survey of the data reveals a good deal of variation), so it’s a bit more difficult to argue that this corresponds directly to the name of an entry (concept) listed in an “authority file”.

Question 3: Is it necessary/useful to introduce a “conceptualisation” of the agent who “originated” the Unit of Description? For now, we’re working on the basis that it is, but we may revisit that choice.

This extended model is represented graphically in Figure 2:

Diagram showing draft data model for EAD data (2)

Figure 2

A final stage of refinement gave us a few further extensions.

First the EAD Document is introduced as a particular “encoding of” the Finding Aid.

Second, I’ve suggested that we model the Biographical or Administrative History associated with each Unit of Description as a resource in its own right, distinct from the Finding Aid as a whole. I’m not sure this is strictly necessary, and again it’s something that we may revisit in the future. But it enables us to provide information about the Biographical History as a distinct resource. One of the reasons this may be useful is that we’ve discussed (albeit somewhat vaguely at this point!) analysing/mining the text of the Biographical History as a source of further information, and having a URI for the Biographical History enables us to be explicit about the source of that data. We can also make the Biographical History the subject of triples to indicate that it is related not just to the Unit of Description but also to the entity who “originated” that unit (or, given the discussion above, to the conceptualisation of that entity). Also, we could associate it with different literal expressions (e.g. the original EAD fragment as XML Literal, but also an XHTML or plain text derivative). It also, of course, makes the Biographical History into a resource that others can refer to in their own assertions in their own data.

Third, we introduced the “level” of the Unit of Description as a distinct resource, a concept. This means that each “level” within the (relatively small) set used within the Hub data can be assigned a distinct URI, described in its own right, and – again – referenced by others.

Fourth, similarly, the “language” of the Unit of Description is treated as a distinct resource. (The plan here is that we’ll try to simply reference resources within an existing Linked Data dataset, such as lexvo.org.)

Fifth, the EAD <dao> and <daogrp> elements are mapped into a relationship between the Unit of Description and an external digital object (or group of objects). I’ve labelled the relationship here as “is represented by” as that is the description provided by the EAD documentation, but I think Jane and Bethan felt that in practice in the Hub data, the relationship might sometimes be rather less specific than that.

For the moment, the other EAD elements corresponding to ISAD(G) elements (i.e. to textboxes in the Hub data entry form) will be treated as properties with XML Literal values (though we could follow the <bioghist> approach and generate individual URI-identified resources if that proves to be useful).
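For illustration, such a property with an XML Literal value might be generated along these lines (rdflib again; the hub:scopecontent property name and the URIs are invented):

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    HUB = Namespace('http://example.org/vocab/')      # invented
    g = Graph()
    unit = URIRef('http://example.org/hub/unit/123')  # invented

    # Keep the original EAD fragment as an XML Literal
    fragment = '<scopecontent><p>Correspondence, notebooks and papers.</p></scopecontent>'
    g.add((unit, HUB.scopecontent, Literal(fragment, datatype=RDF.XMLLiteral)))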

Sixth – and here we stepped slightly beyond the scope of the EAD document itself (so I’ve greyed it out in the diagram below) – we’ve added a notion of the location of the Repository and a relationship between the Repository-as-Agent and that Place. Although details of repository location aren’t included in the Hub EAD documents, Jane and Bethan said they do have that data available, and it should be fairly easy to integrate it.

So we’ve ended up with the model illustrated in Figure 3.

Diagram showing draft data model for EAD data (3)

Figure 3

Question 4: Are we missing any obvious “things” that we need to treat as resources?

Note: In this post, I haven’t gone as far as enumerating all the properties that will be used to describe instances of each of those classes, but I’ll provide that in a future post.

Multi-level description, context, “completeness” and “inheritance”

The one remaining question – and perhaps one of the thorniest to address fully – is that arising from one of the fundamental characteristics of the nature of archival description. As noted above, archival description is typically based on a “hierarchical”, “multi-level” approach, in which, within a single finding aid, information is provided about an aggregation of records, and then about component parts of that aggregation, and so on, perhaps down to the level of providing descriptions of individual records, but often stopping short of that.

The ISAD(G) standard presents principles of moving from the general to the specific, and providing information relevant to the particular unit of description (ISAD(G) 2.2):

Provide only such information as is appropriate to the level being described. For example, do not provide detailed file content information if the unit of description is a fonds; do not provide an administrative history for an entire department if the creator of a unit of description is a division or a branch.

And of “non repetition” (ISAD(G) 2.4):

At the highest appropriate level, give information that is common to the component parts. Do not repeat information at a lower level of description that has already been given at a higher level.

In some cases, it may indeed be the case that if some descriptive attribute is not explicitly provided for the unit of description, then the information provided for its “parent” unit in the hierarchy is applicable; however, this is often not the case. The elements of the ISAD(G) Identity Statement Area (or the EAD <did> child elements), for example, are specific to the unit of description and do not apply to its “child” units; and for many other descriptive elements, a simple rule of “direct inheritance” may not be appropriate. For the <controlaccess> elements, for example, a “blunt” inference rule that the named entities “associated with” a unit of description are also “associated with” every “child” unit (and so on “down the tree”) may result in associations that are simply not useful to the consumer of the data.

In a post on the Archives Hub blog, Jane emphasised the value of the “Linked Data” approach in making things mentioned in our data into “first-class citizens”. One consequence of the multi-level approach in archival description practice is a strong sense of the importance of “context”, and that the descriptions of the “lower level” units should be read and interpreted in the context of the higher levels of description (perhaps even that they are in some sense “incomplete” without that “contextual” data). In contrast, the “Linked Data” approach typically involves exposing “bounded descriptions” of individual resources. Now, certainly, yes, those “bounded descriptions” include assertions of relationships with other resources (including the sort of part-whole/member-of/component-of relationships present here), and those links can be followed by consumers to obtain further information on the other resources – however, there is no requirement or expectation that consumers will do so. So, there is arguably a (perhaps unavoidable) element of tension between the strongly “contextual” emphasis of EAD and ISAD(G) and the “bounded descriptions” of “Linked Data”. Rather than seeing that as an insurmountable hurdle, however, I think it provides an area that the project can usefully explore and evaluate.

(If I remember correctly) we made the decision that, for now at least, the only piece of information for which we would implement an explicit “inheritance” from a “higher-level” Unit of Description to a “lower” one (and generate additional RDF triples in the data) would be that of the repository which provides access to the material (i.e. the EAD <repository> element).
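A sketch of how that single inheritance rule might be applied over the generated graph follows (Python/rdflib; the hub:isPartOf and hub:accessProvidedBy property names are placeholders for whatever terms we actually settle on):

    from rdflib import Graph, Namespace

    HUB = Namespace('http://example.org/vocab/')  # placeholder terms

    def propagate_repository(g):
        """Copy each unit's repository triple down to its component units,
        iterating so that multi-level hierarchies are fully covered."""
        changed = True
        while changed:
            changed = False
            additions = []
            for child, _, parent in g.triples((None, HUB.isPartOf, None)):
                for repo in g.objects(parent, HUB.accessProvidedBy):
                    if (child, HUB.accessProvidedBy, repo) not in g:
                        additions.append((child, HUB.accessProvidedBy, repo))
            for triple in additions:
                g.add(triple)
                changed = True
        return g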

Conclusion

As I said above, the model I’ve outlined here is intended as very much a first cut, not the “last word”, and something we’ll most likely revisit and refine further in the future, particularly as we see in practice what it enables us (and others) to do with the data generated, and where we might require some further tweaks to enable us to do more. For now, we feel it provides a basis for our initial work on transforming EAD data into RDF.

The next steps are:

  1. to decide on URI patterns for the URIs we will be generating (i.e. URIs for instances of the classes in the diagram above)
  2. to select terms from existing RDF vocabularies and to define any additional RDF terms required to create “descriptions” of these things based on information from the EAD document
  3. to create a transformation that implements the model (in the first instance, an XSLT transform)

I’ve already done some work on all of these, and I’ll write about them in a separate post here – which hopefully will be rather shorter than this one and will take me rather less time to write!

Some thoughts on architecture and workflows

This is an attempt to sketch out some of my/our initial thoughts on the approaches the project is considering to exposing data as Linked Data. I should emphasise that these are very much initial thoughts, and things may change as we progress.

The project is dealing with two main data sources, and at the moment two different approaches are being considered to those sources.

The first data source is the collection of archival finding aids describing the holdings of the archives of educational and research institutions in the UK, aggregated by the JISC Archives Hub service. This data takes the form of XML documents in the Encoded Archival Description (EAD) format, created by archivists in the various institutions, and submitted to the Hub.

Currently, the aggregated data is indexed using the Cheshire 3 application, and exposed as HTML pages on the archiveshub.ac.uk site for search and browse. (SRU and Z39.50 targets and an OAI-PMH repository are also available.)

To expose (probably a subset of) the Hub EAD finding aids as Linked Data, the workflow is expected to look something like that represented in Figure 1 below:

Diagram showing process of transforming EAD to RDF and exposing as Linked Data (1)
  1. Transform: EAD XML documents are transformed to an RDF format. We’ll write more about our current thinking on this in a subsequent post, as working out how best to represent the EAD data in RDF as the target for the transform is in itself a significant chunk of work (and an area I’m particularly interested in). This is likely to be something of an “iterative” process: we’ll start with a fairly basic transform that captures some subset of the content of the input documents, and perhaps refine things later to generate more data (and correct errors we’ll no doubt make in the first cut!)
  2. Enhance: RDF data from the previous step is “enhanced” and augmented. This step might include processes to (i) generally “clean up” the data (e.g. normalise some literals, identify internal co-references etc); (ii) add links to resources in other datasets; (iii) (maybe) pull in some useful data from other datasets, either data held by the Hub but not included in the EAD docs or data from other sources. Again this will probably be a process which we extend and refine over time.
  3. Upload: Load the RDF data from the previous step to an instance of the Talis Platform triple store, which Talis are kindly making available to the project.
  4. Expose: Expose a set of linked “bounded descriptions” from the triple store over HTTP, as documents in both human-readable and RDF formats, following the principles of the W3C TAG httpRange-14 resolution/Cool URIs for the Semantic Web. The use of the Platform also provides us with a SPARQL endpoint for the data – which we can make available to others to use – and which also means we can consider layering other Web interfaces over that endpoint. For example, I’d be interested in trying out the Linked Data API, which I talked about over on eFoundations a while ago.
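To give a feel for what the SPARQL endpoint enables, here’s a sketch of a query a consumer might run (Python with the SPARQLWrapper library; the endpoint URL and the vocabulary terms are invented placeholders, not the project’s actual choices):

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper('http://example.org/locah/sparql')  # placeholder endpoint
    sparql.setQuery('''
        PREFIX dcterms: <http://purl.org/dc/terms/>
        PREFIX hub: <http://example.org/vocab/>

        # Titles of units of description associated with a given concept
        SELECT ?unit ?title WHERE {
            ?unit hub:associatedWith <http://example.org/hub/concept/c42> ;
                  dcterms:title ?title .
        }
        LIMIT 10
    ''')
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()['results']['bindings']:
        print(row['unit']['value'], row['title']['value'])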

It may be that the second and third steps are reversed and we upload the data to the triple store and perform the “enhance” step on the data there, i.e. something closer to Figure 2:

Diagram showing process of transforming EAD to RDF and exposing as Linked Data (2)

Or indeed that a “hybrid” of the two is appropriate, and some “enhance” processes take place before upload and others take place afterwards.

We’ll also need to integrate some provision for “version control” and “provenance”/”attribution” (e.g. to track which data comes directly from the EAD sources, and which is added from elsewhere) into this process.

So for the Hub data, the plan is that the data is “exported” from the existing EAD dataset, and that the Platform triplestore provides the “back-end” for the app that serves up the “Linked Data” document views and provides a SPARQL endpoint.

The second data source of interest is the collection of bibliographic metadata aggregated into the Copac catalogue from the member libraries of Research Libraries UK and from other specialist libraries. This data is also held as XML, in the MODS format. (Bethan Ruddock has a couple of posts on the Copac Development blog which describe the processes by which data is transferred from the contributor libraries to the Copac catalogue).

As for the case of the Archives Hub data, the first stage will be to design an appropriate RDF representation and an algorithm for transforming the MODS data to RDF (or to select – and maybe adapt, if necessary – an existing one).

In contrast to the case of the Hub I outlined above, the plan is to serve the RDF data from the existing Copac database, rather than upload it to a triplestore. This will probably require the development of a small additional application (or maybe just the configuration of an HTTP server) to service the new URIs coined for resources, to support content negotiation and redirect to URIs of appropriate pages.
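As a sketch of what that “small additional application” might look like, here’s a minimal content-negotiation handler in Python with Flask (Flask, the URI patterns and the redirect targets are all assumptions for illustration; a real implementation would also parse Accept q-values properly):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    # A "thing" URI 303-redirects to a document describing the thing,
    # choosing the representation from the Accept header (httpRange-14 style).
    @app.route('/id/bibliographicresource/<rid>')
    def thing(rid):
        if 'application/rdf+xml' in request.headers.get('Accept', ''):
            return redirect(f'/data/bibliographicresource/{rid}.rdf', code=303)
        return redirect(f'/doc/bibliographicresource/{rid}', code=303)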

One of the questions raised by this approach is how to handle the process I described above as “enhance”, and in particular how to accommodate the addition of new data – at a minimum, links to existing resources described in other Linked Data datasets – assuming that we aren’t going to be able to update the source MODS XML documents. For some cases, it may be trivial to incorporate this in the MODS-to-RDF transform (e.g., to generate links to languages described by lexvo.org). Another approach might be to generate simple “seeAlso” links to an additional set of documents (which could be simple static documents or could be served from an RDF store). Hmm. As you can probably tell, I’ve thought about this rather less than I’ve thought about the Hub case! Anyway, the suggested approach is sketched in Figure 3:

Diagram showing process of transforming MODS to RDF and exposing as Linked Data

Another constraint of this approach would be that although we can serve the set of linked documents, it doesn’t provide a SPARQL endpoint.

One of the expectations for the project is that it “explores and reports on the opportunities and barriers in making content structured and exposed”, and an assessment of the pros and cons of the different approaches to hosting the data should contribute to that report.

LOCAH Project – Projected Timeline, Workplan & Overall Project Methodology

Project Plan

WP1:  Project Management.

  • Project management to support the project, the relationships with project partners, and with the funders.

WP2:  Data Modelling

  • Model Archives Hub EAD data and Copac data to RDF

WP3:  Technical Development – Linked Data Interface

  • Transform the modelled Hub and Copac data to RDF/XML.
  • Enrich Hub and Copac data with data/links from sources such as DBpedia, BBC, LOC, VIAF, MusicBrainz, Freebase
  • Provide both RDF and HTML documents for Archives Hub and Copac resources with stable, well-designed URIs
  • Provide a SPARQL endpoint for the Hub Linked Data resources
  • Look at feasibility of providing RESTful API interface to the Hub and Copac Linked Data resources

WP4: Prototype Development

  • Test and refine requirements for proposed prototypes
  • Design user interfaces for prototype
  • Technical development and testing of the user interfaces

WP5: ‘Opportunities and Barriers’ Reporting

  • Design and implement procedures for logging ongoing project issues
  • Analyse and synthesise logged issues around known Linked Data issues
  • Report on opportunities and barriers using the project blog, outlining methods and recommendations on how to overcome, mediate or mitigate the issues identified wherever possible.

WP6: Advocacy and Dissemination

  • Report on ongoing project progress and findings at JISC programme events
  • Demonstrate project outputs and report to communities on the findings of the opportunities and barriers reporting at relevant conferences and workshops

Timetable

WP / Month 1 2 3 4 5 6 7 8 9 10 11 12
WP1 X X X X X X X X X X X X
WP2 X X X X
WP3 X X X X X
WP4 X X X X X X X
WP5 X X X X X X X X X X
WP6 X X X X X X X X X

Project Management and Staffing

Adrian Stevenson will project manage LOCAH to ensure that the workplan is carried out to the timetable, and that effective dissemination and evaluation mechanisms are implemented according to the JISC Project Management guidelines. Consortium agreements in line with JISC guidelines will be established for the project partners. UKOLN will lead on all the workpackages. Staff who will work on LOCAH are already in post.

Support for Standards, Accessibility and Other Best Practices

LOCAH will adhere to the guidance and good practice provided by JISC in the Standards Catalogue and JISC Information Environment. The primary technology methodologies, standards and specifications adopted for this project will be:

  • XML, XSLT, RDF/XML, RDFa, FOAF, SKOS, SPARQL, N3, JSON, RSS/Atom
  • Metadata standards: EAD, MODS, Dublin Core
  • Berners-Lee, T. (2006). ‘Linked Data – Design Issues’
  • Berners-Lee, T. (1998). ‘W3C Style: Cool URIs don’t change’
  • Cabinet Office’s ‘Designing URI Sets for the UK Public Sector’
  • Dodds, L. and Davis, I., ‘Linked Data Patterns’
  • W3C Web Accessibility Initiative (WAI)

LOCAH Project – Project Team Relationships and End User Engagement

Project Team

Adrian Stevenson

Adrian Stevenson is a project manager and researcher at UKOLN. He has managed the highly successful SWORD project since May 2008 and also manages the JISC Information Environment Technical Review project. He has extensive experience of the implementation of interoperability standards, and has a long-standing interest in Linked Data. Adrian will manage LOCAH, and will be involved in the data modelling work, testing and the opportunities and barriers reporting.

Jane Stevenson

Jane Stevenson is the Archives Hub Coordinator at Mimas. In this role, she manages the day-to-day running of the Archives Hub service. She is a registered archivist with substantial experience of cataloguing, implementation of data standards, dissemination and online service provision. She has expertise in the use of Encoded Archival Description for archives, and will be involved in the data modelling work, mapping EAD to RDF, testing, as well as the opportunities and barriers reporting.

Pete Johnston

Pete Johnston is a Technical Researcher at Eduserv. His work has been primarily in the areas of metadata/resource description, with a particular interest in the use of Semantic Web technologies and the Linked Data approach. He participates in a number of standards development activities, and is an active contributor to the work of the Dublin Core Metadata Initiative. He was also a co-editor of the Open Archives Initiative Object Reuse and Exchange (OAI ORE) specifications.

Pete joined Eduserv in May 2006 from UKOLN, University of Bath, where he advised the UK education and cultural heritage communities on strategies for the effective exchange and reuse of information. Pete will be involved in the data modelling work, mapping EAD and MODS to RDF, software testing and the opportunities and barriers report.

Bethan Ruddock

Bethan Ruddock is involved in content development activity for both the Archives Hub and Copac. She is currently working on a year-long project to help expand the coverage of the Archives Hub through the refinement of our automated data import routines. Bethan also undertakes a range of outreach and promotional activities, collaborating with Lisa on a number of publications. Bethan will be involved in the modelling work of transforming MODS to RDF.

Julian Cheal

Julian Cheal is a software developer at UKOLN. He is currently working on the analysis and visualisation of UK open access repository metadata from the RepUK project. He has experience of writing software to process metadata at UKOLN, and has previous development experience at Aberystwyth University. Julian will be mainly involved in developing the prototype and visualisations.

Ashley Sanders

Ashley Sanders is the Senior Developer for Copac, and has been working with the service since its inception. He is currently leading the technical work involved in the Copac Re-Engineering project, which involves a complete overhaul of the service. Ashley will be involved in the development work of transforming MODS to RDF.

Shirley Cousins is a Coordinator for the Copac service. Shirley will be involved in the work of transforming MODS to RDF.

An additional Mimas developer will provide the development work for transforming the Archives Hub EAD data to RDF. This person will be allocated from existing Mimas staff in post.

Talis are our technology partner on the project, kindly providing us with access to store our data in the Talis Store. Leigh Dodds is our main contact at the company. Talis is a privately owned UK company that is amongst the first organisations to be applying leading edge Semantic Web technologies to the creation of real-world solutions. Talis has significant expertise in semantic web and Linked Data technologies, and the Talis Platform has been used by a variety of organisations including the BBC and UK Government as part of data.gov.uk.

OCLC are also partnering us, mainly to help out with VIAF. Our contacts at OCLC are John MacColl, Ralph LeVan and Thom Hickey. OCLC is a worldwide library cooperative, owned, governed and sustained by members since 1967. Its public purpose is to work with its members to improve access to the information held in libraries around the globe, and find ways to reduce costs for libraries through collaboration. Its Research Division works with the community to identify problems and opportunities, prototype and test solutions, and share findings through publications, presentations and professional interactions.

Engagement with the Community

Stakeholders

Several key stakeholder groups have been identified: end users, particularly historical researchers, students & educators; data providers, including RLUK and the libraries & archives that contribute data to the services; the developer community; the library community; the archival sector and more broadly, the cultural heritage sector.

End users

Copac and the Archives Hub services are heavily used by historical researchers and educators. Copac is one of JISC’s most heavily used services, averaging around one million sessions per month. Around 48% of HE research usage can be attributed to historical research. Both services can directly engage relevant end users, and have done so successfully in the past to conduct market research or solicit feedback on service developments. In addition, channels such as Twitter can be used to reach end users, particularly the digital humanities community.

Data providers; Library Community; Archival Community; Cultural Heritage Sector

Through the Copac and Archives Hub Steering Committees we have the means to consult with a wide range of representatives from the library and archival sectors. The project partners have well-established links with stakeholders such as RLUK, SCONUL, and the UK Archives Discovery Network, which represents all the key UK archives networks including The National Archives and the Scottish Archives Networks. The Archives Hub delivers training and support to the UK archives community, and can effectively engage its contributors through workshops, fora, and social media. OCLC’s community engagement channels will also provide a valuable means of sharing project outputs for feedback internationally. The key project partners are also engaged in the Resource Discovery Taskforce Vision implementation planning, as well as the JISC/SCONUL Shared Services Proposal. Outputs from this project will be shared in both these contexts. In addition, we will proactively share information with bodies such as the MLA, Collections Trust and Culture24.

Developer Community

As a JISC innovation support centre, UKOLN is uniquely placed to engage the developer community through initiatives such as the DevCSI programme, which is aimed at helping developers in HE to realise their full potential by creating the conditions for them to be able to learn, to network effectively, to share ideas and to collaborate.

Dissemination

The primary channel for disseminating the project outputs will be the UKOLN hosted blog. End users will be primarily engaged for survey feedback via the Copac and Archives Hub services. Social media will be used to reach subject groups with active online communities (e.g. Digital Humanities). Information aimed at the library and archival community, including data providers, will be disseminated through reports to service Steering Group meetings, UKAD meetings, the Resource Discovery Taskforce Vision group, the JISC/SCONUL Shared Services Proposal Group, as well as professional listservs. Conference presentations and demonstrations will be proposed for events such as ILI, Online Information, and JISC conferences. An article will be written for Ariadne. The developer community will be engaged primarily through the project blog, Twitter, developer events & the Linked Data competition.

LOCAH Project – Intellectual Property Rights (IPR)

The project will be managed according to JISC guidelines for intellectual property. Any custom-built prototype outputs will be made available under open-source license free of charge to the UK HE and FE community. There may be some rights restrictions relating to the Copac and Hub data content due to data licensing issues. These will be explored and addressed as part of the project.

LOCAH Project – Risk Analysis, Evaluation and Impact

Risk Analysis

Each risk is listed below with its probability, severity and score, followed by the action/mitigation:

  • Difficulties recruiting or retaining staff (probability 2, severity 4, score 8): key members of staff are already in post at UKOLN, Mimas and Eduserv.
  • Project is over-ambitious (probability 2, severity 2, score 4): the project plan will ensure that deliverables are delivered in a timely fashion and that the project does not divert from agreed goals.
  • Failure to meet deadlines within the project timescale (probability 2, severity 4, score 8): a clear project plan with all relevant tasks outlined, with continuous review and rescheduling of work as necessary.
  • Failure to disseminate best practices effectively (probability 2, severity 2, score 4): UKOLN has very effective dissemination channels. The involvement of partners who can gain clear benefits from this work will allow them to be involved in dissemination activities.
  • Project partners fail to work effectively (probability 1, severity 3, score 3): UKOLN has good links with all the partners, many through previous joint projects and recent consultancy work. A consortium agreement will address potential concerns.

Evaluation

LOCAH will be evaluated by a number of means including qualitative and quantitative methods, and will look at both the tangible and intangible outputs of the project. We will regularly check progress against the project plan and requirements, and we will engage with users through the blog, social media, questionnaires and events. The project manager will lead the evaluation, liaising with relevant parties and drawing on contacts within the JISC community and wider HE community.

Impact

Several members of the project team are closely involved with current Linked Data activities, and are fully aware of the current ‘state of the art’ against which the impact of the project will be evaluated. The immediate impact of the project will be to provide two new enriched and quality assured data sets to the UK HE and global data graph. It will also provide a prototype that highlights the potential of Linked Data for enhancing learning, teaching and research. The long-term impact will be to help Linked Data gain traction and achieve a critical mass in the UK HE community, as well as providing invaluable experience and insight on a range of issues. Mimas intends to sustain the Linked Data sets, and will ensure that the resources have stable URIs for two years beyond the life of the project. The project may be able to transition to using the Talis Connected Commons scheme if the licensing situation can be clarified. This would then provide long-term sustainability for the data publishing.



LOCAH Project – Wider Benefits to Sector & Achievements for Host Institution

Meeting a need

High quality research and teaching relies partly on access to a broad range of resources. Archive and library materials inform and enhance knowledge and are central to the JISC strategy. JISC invests in bibliographic and archival metadata services to enable discovery of, and access to, those materials, and we know the research, teaching and learning communities value those services.

As articulated in the Resource Discovery Taskforce Vision, that value could be increased if the data can be made to “work harder”, to be used in different ways and repurposed in different contexts.

Providing bibliographic and archive data as Linked Data creates links with other data sources, and allows the development of new channels into the data. Researchers are more likely to discover sources that may materially affect their research outcomes, and the ‘hidden’ collections of archives and special collections are more likely to be exposed and used.

Archive data is by its nature incomplete and often sources are hidden and little known. User studies and log analyses indicate that Archives Hub users frequently search laterally through the descriptions; this gives them a way to make serendipitous discoveries. Linked Data is a way of vastly expanding the benefits of lateral search, helping users discover contextually related materials. Creating links between archival collections and other sources is crucial – archives relating to the same people, organisations, places and subjects are often widely dispersed. By bringing these together intellectually, new discoveries can be made about the life and work of an individual or the circumstances surrounding important historical events. New connections, new relationships, new ideas about our history and society. Put this together with other data sources, such as special collections, multimedia repositories and geographic information systems, and the opportunities for discovery are significantly increased.

Similarly, by making Copac bibliographic data available as Linked Data we can increase the opportunities for developers to provide contextual links to primary and secondary source material held within the UK’s research libraries and an increasing number of specialist libraries, including the British Museum, the National Trust, and the Royal Society. The provision of library and special collections content as Linked Data will allow developers to build interfaces to link contextually related historical sources that may have been curated and described using differing methodologies. The differences in these methodologies and the emerging standards for description and access have resulted in distinct challenges in providing meaningful cross-searching and interlinking of this related content – a Linked Data approach offers potential to overcome that significant hurdle.

Researchers and teachers will have the ability to repurpose data for their own specific use. Linked Data provides flexibility for people to create their own pathways through Archives Hub and Copac data alongside other data sources. Developers will be able to provide applications and visualisations tailored to the needs of researchers, learning environments, institutional and project goals.

Innovation

Archives are described hierarchically, and this presents challenges for the output of Linked Data. In addition, descriptions are a combination of structured data and semi-structured data. As part of this project, we will explore the challenges in working with semi-structured data, which can potentially provide a very rich source of information. The biographical histories for creators of archives may provide unique information that has been based on the archival source. Extracting event-based data from this can really open up the potential of the archival description to be so much more than the representation of an archive collection. It becomes a much more multi-faceted resource, providing data about people, organisations, places and events.

The library community is beginning to explore the potential of Linked Data. The Swedish and Hungarian National Libraries have exposed their catalogues as Linked Data, the Library of Congress has exposed subject authority data (LCSH), and OCLC is now involved in making the Virtual International Authority File (VIAF) available in this way.

By treating the entities (people, places, concepts etc) referred to in bibliographic data as resources in their own right, links can be made to other data referring to those same resources. Those other sources can be used to enrich the presentation of bibliographic data, and the bibliographic data can be used in conjunction with other data sources to create new applications.

Copac is the largest union catalogue of bibliographic data in the UK, and one of the largest in the world, and its exposure as Linked Data can provide a rich data source, of particular value to the research, learning and teaching communities.

In answering the call, we will be able to report on the challenges of the project, and how we have approached them. This will be of benefit to all institutions with bibliographic and archival data looking to maximise its potential. We are very well placed within the research and teaching communities to share our experiences and findings.

LOCAH Project – Aims, Objectives and Final Outputs

This is the first of a number of posts outlining our project plan in line with the requirements of the call document. So here we are – our aims, objectives and intended final outputs:

The LOCAH project aims to make records from the JISC-funded Archives Hub service, and records from the JISC-funded Copac service, available as Linked Data. In each case, the aim is to provide persistent URIs for the key entities described in that data, dereferencing to documents describing those entities. The information will be made available as XHTML web pages containing RDFa, and also as Linked Data in RDF/XML. SPARQL endpoints will be provided to enable the data to be queried. In addition, consideration will be given to the provision of a simple query API for some common queries.

Making resources available as structured data

The work will involve:

  1. Analysis & modelling of the current data and the selection (or definition) of appropriate RDF vocabularies.
  2. Design of suitable URI patterns (based on the current guidelines for UK government data).
  3. Development of procedures to transform existing data formats to RDF. Either:
    • uploading of that transformed data to an RDF store (such as a Talis Platform instance) and development of an application to serve data from that store, or
    • development of an application to serve RDF data from an existing data store.
  The former will be the case for the Hub data; the latter is likely to be used for Copac.

  4. We intend to enhance the source data with links between these two datasets and with existing Linked Data sets made available by other parties (e.g. DBpedia, GeoNames, the Virtual International Authority File, Library of Congress Subject Headings). This process may include simple name lookups and also the use of services such as EDINA Unlock, OpenCalais and Muddy to identify entities from text fragments. Given that Copac is in a transition phase to a new database during the project, we will be taking a more lightweight approach to structuring and enhancing Copac data. We will then be able to make a comparison between the outcomes of a lightweight rapid approach to producing Linked Data for Copac, and the relatively resource-intensive data enrichment approach for the Archives Hub.
  5. We will look to provide resources such as dataset-level descriptions (using VoID and/or DCAT) and semantic sitemaps.
  6. The project will adopt a lightweight iterative approach to the development and testing of the exposed structured content. This will involve the rapid development of interfaces to Hub and Copac data that will be tested against existing third-party Linked Data tools and data sets. The evaluated results will feed into the further phases of development.

The result will be the availability of two new quality-assured datasets which are “meshable” with other global Linked Data sources. In addition, the documents made available will be accessible to all the usual web search and indexing services such as Google, contributing to their searchability and findability, and thereby raising the profile of these Mimas JISC services to research users. In common parlance, the resources will have more “Google juice”.

Prototype Data Visualisations

We also suggest a number of end-user prototype ideas, which would provide attractive and compelling data visualisations based around several visualisation concepts. We intend to produce one prototype, using the ideas suggested as its basis, though given the iterative nature of the project it may end up being something quite different. We will produce additional prototypes if time and resources allow.

The project intends to hold a small developer competition, run by the UKOLN DevCSI team on behalf of the project, to gather further end-use cases and prototype ideas.

Opportunities and Barriers Reporting

We will log ongoing project issues as they arise to inform our opportunities and barriers reporting, which we will deliver via posts on the LOCAH project blog. We will outline and discuss the methods and solutions we have adopted to overcome, mediate or mitigate these, wherever this has been possible.

The methods and solutions we establish will iteratively feed into the ongoing development process. This will mean that we are able to work out solutions to issues as they arise, and implement them in the next phase of rapid development.

We are keen to engage with the other projects funded as part of the jiscExpo call, and any additional UK HE projects working at implementing Linked Data solutions. The project team has very strong links with the Linked Data community: we will look to engage the community by stimulating debate about implementation problems via the project blog. We will also set up a project Twitter feed to generate discussion on the project #locah tag. In addition, we will engage via relevant JISCmail lists as well as the UK Government Data Developers and the Linked Data API Google discussion groups that several members of the team are already part of.

RDFa – from theory to practice

Adrian Stevenson will be talking about the LOCAH project at IWMW 2010 in Sheffield in a session that looks at implementing RDFa. The session will:

  1. provide an introduction to what’s happening now in Linked Data and RDFa
  2. demonstrate recent work exposing repository metadata as RDFa
  3. explain how integration of RDFa within a content management system such as Drupal can enrich semantic content – and in some cases help significantly boost search engine ranking.

Information about the session can be found at http://iwmw.ukoln.ac.uk/iwmw2010/sessions/bunting-dewey-stevenson/
