Some thoughts on architecture and workflows

This is an attempt to sketch out some of my/our initial thoughts on the approaches the project is considering for exposing its data as Linked Data. I should emphasise that these are very much initial thoughts, and things may change as we progress.

The project is dealing with two main data sources, and at the moment two different approaches are being considered for those sources.

The first data source is the collection of archival finding aids describing the holdings of the archives of educational and research institutions in the UK, aggregated by the JISC Archives Hub service. This data takes the form of XML documents in the Encoded Archival Description (EAD) format, created by archivists in the various institutions, and submitted to the Hub.

Currently, the aggregated data is indexed using the Cheshire 3 application, and exposed as HTML pages on the archiveshub.ac.uk site for search and browse. (SRU and Z39.50 targets and an OAI-PMH repository are also available.)

To expose (probably a subset of) the Hub EAD finding aids as Linked Data, the workflow is expected to look something like that represented in Figure 1 below:

Figure 1: Diagram showing the process of transforming EAD to RDF and exposing it as Linked Data
  1. Transform: EAD XML documents are transformed into an RDF representation. We’ll write more about our current thinking on this in a subsequent post, as working out how best to represent the EAD data in RDF as the target for the transform is in itself a significant chunk of work (and an area I’m particularly interested in). This is likely to be an iterative process: we’ll start with a fairly basic transform that captures some subset of the content of the input documents, and refine things later to generate more data (and correct errors we’ll no doubt make in the first cut!). A rough, illustrative sketch of steps 1–3 follows this list.
  2. Enhance: RDF data from the previous step is “enhanced” and augmented. This step might include processes to (i) generally “clean up” the data (e.g. normalise some literals, identify internal co-references etc); (ii) add links to resources in other datasets; (iii) (maybe) pull in some useful data from other datasets, either data held by the Hub but not included in the EAD docs or data from other sources. Again, this will probably be a process which we extend and refine over time.
  3. Upload: Load the RDF data from the previous step to an instance of the Talis Platform triple store, which Talis are kindly making available to the project.
  4. Expose: Expose a set of linked “bounded descriptions” from the triple store over HTTP, as documents in both human-readable and RDF formats, following the principles of the W3C TAG httpRange-14 resolution/Cool URIs for the Semantic Web. The use of the Platform also provides us with a SPARQL endpoint for the data – which we can make available to others to use – and which also means we can consider layering other Web interfaces over that endpoint. For example, I’d be interested in trying out the Linked Data API, which I talked about over on eFoundations a while ago.
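To make this a little more concrete, here is a very rough sketch of what steps 1–3 might look like, assuming Python with rdflib. I should stress that the EAD element paths, the minted URIs and the RDF properties used here are placeholders for illustration only – the actual RDF model is the subject of a later post – and the final upload step is represented simply by serialising the graph ready for loading into the store.

```python
from xml.etree import ElementTree as ET

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF, RDFS

BASE = "http://example.org/id/"                  # hypothetical URI space
HUB = Namespace("http://example.org/def/hub#")   # hypothetical vocabulary

def transform(ead_file):
    """Step 1: pull a few fields from an EAD 2002 finding aid into RDF.
    Assumes the document uses no XML namespace; adjust the paths if it does."""
    did = ET.parse(ead_file).find(".//archdesc/did")
    unitid = did.findtext("unitid", default="").strip()
    unittitle = did.findtext("unittitle", default="").strip()

    g = Graph()
    fonds = URIRef(BASE + "archivalresource/" + unitid.replace(" ", ""))
    g.add((fonds, RDF.type, HUB.ArchivalResource))   # placeholder class
    g.add((fonds, DCTERMS.identifier, Literal(unitid)))
    g.add((fonds, DCTERMS.title, Literal(unittitle)))
    return g

def enhance(g):
    """Step 2: a token 'enhance' pass - normalise title literals and add one
    outbound link. Co-reference detection and lookups against external
    datasets would slot in here."""
    for s, p, o in list(g.triples((None, DCTERMS.title, None))):
        g.set((s, DCTERMS.title, Literal(" ".join(str(o).split()))))
    for s in g.subjects(RDF.type, HUB.ArchivalResource):
        g.add((s, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Archive")))
    return g

if __name__ == "__main__":
    graph = enhance(transform("findingaid.xml"))
    # Step 3: serialise the result as RDF/XML, ready for loading into the store.
    graph.serialize("findingaid.rdf", format="xml")
```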

It may be that the second and third steps are reversed, and we upload the data to the triple store and perform the “enhance” step on the data there, i.e. something closer to Figure 2:

Figure 2: Diagram showing the process of transforming EAD to RDF and exposing it as Linked Data

Or indeed that a “hybrid” of the two is appropriate, and some “enhance” processes take place before upload and others take place afterwards.

We’ll also need to integrate some provision for “version control” and “provenance”/”attribution” (e.g. to track which data comes directly from the EAD sources, and which is added from elsewhere) into this process.

So for the Hub data, the plan is that the data is “exported” from the existing EAD dataset, and that the Platform triplestore provides the “back-end” for the app that serves up the “Linked Data” document views and provides a SPARQL endpoint.

The second data source of interest is the collection of bibliographic metadata aggregated into the Copac catalogue from the member libraries of Research Libraries UK and from other specialist libraries. This data is also held as XML, in the MODS format. (Bethan Ruddock has a couple of posts on the Copac Development blog which describe the processes by which data is transferred from the contributor libraries to the Copac catalogue.)

As with the Archives Hub data, the first stage will be to design an appropriate RDF representation and an algorithm for transforming the MODS data to RDF (or to select – and maybe adapt, if necessary – an existing one).

In contrast to the approach outlined above for the Hub, the plan is to serve the RDF data from the existing Copac database, rather than upload it to a triplestore. This will probably require the development of a small additional application (or maybe just the configuration of an HTTP server) to serve the new URIs coined for resources, supporting content negotiation and redirecting to the URIs of the appropriate documents; a rough sketch of what such a component might look like follows below.
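For illustration only, here is a minimal sketch of that small application, written as a Python WSGI app and using the sort of /id/ and /doc/ URI split suggested by the Cabinet Office URI guidelines. The paths are entirely hypothetical, and in practice a couple of HTTP server rewrite rules might do the same job.

```python
# A rough sketch of the "URI servicing" component described above: a WSGI app
# that takes the /id/... URIs coined for Copac resources and, following the
# httpRange-14 / Cool URIs pattern, 303-redirects to an RDF or HTML document
# according to the Accept header. All paths here are hypothetical.

from wsgiref.simple_server import make_server

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    accept = environ.get("HTTP_ACCEPT", "")

    if path.startswith("/id/"):
        # "Thing" URI: redirect to a document about the thing.
        slug = path[len("/id/"):]
        if "application/rdf+xml" in accept or "text/turtle" in accept:
            location = "/doc/" + slug + ".rdf"
        else:
            location = "/doc/" + slug + ".html"
        start_response("303 See Other", [("Location", location)])
        return [b""]

    # Document URIs (/doc/...) would be generated from the existing Copac
    # database here; this stub just returns a placeholder response.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"document for " + path.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```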

One of the questions raised by this approach is how to handle the process I described above as “enhance”, and in particular how to accommodate the addition of new data – at a minimum, links to existing resources described in other Linked Data datasets – assuming that we aren’t going to be able to update the source MODS XML documents. For some cases, it may be trivial to incorporate this in the MODS-to-RDF transform (e.g., to generate links to languages described by lexvo.org). Another approach might be to generate simple “seeAlso” links to an additional set of documents (which could be simple static documents or could be served from an RDF store). Hmm. As you can probably tell, I’ve thought about this rather less than I’ve thought about the Hub case! Anyway, the suggested approach is sketched in Figure 3:

Figure 3: Diagram showing the process of transforming MODS to RDF and exposing it as Linked Data
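To make those two “enhance” options slightly more concrete, here is a minimal sketch (again Python/rdflib, with purely illustrative URIs and properties) of (a) baking a lexvo.org language link into the transform itself, and (b) emitting an rdfs:seeAlso pointer to a separate “enhancement” document that could be maintained outside the MODS sources.

```python
# A minimal sketch of the two "enhance without touching the source" options.
# The record structure, URI patterns and property choices are illustrative only.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDFS

BASE = "http://example.org/copac/"   # hypothetical Copac URI space

def enhance(record_uri, language_code):
    g = Graph()

    # Option 1: bake simple links into the MODS-to-RDF transform itself,
    # e.g. map a MODS language code onto a lexvo.org language URI
    # (assuming lexvo's ISO 639-3 URI pattern).
    g.add((URIRef(record_uri), DCTERMS.language,
           URIRef("http://lexvo.org/id/iso639-3/" + language_code)))

    # Option 2: point at a separate "enhancement" document - static, or served
    # from an RDF store - which holds any further links added later.
    g.add((URIRef(record_uri), RDFS.seeAlso,
           URIRef(BASE + "links/" + record_uri.rsplit("/", 1)[-1])))
    return g

if __name__ == "__main__":
    print(enhance(BASE + "id/record/123", "eng").serialize(format="turtle"))
```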

Another constraint of this approach is that, although we can serve the set of linked documents, it doesn’t provide a SPARQL endpoint.

One of the expectations for the project is that it “explores and reports on the opportunities and barriers in making content structured and exposed”, and an assessment of the pros and cons of the different approaches to hosting the data should contribute to that report.

LOCAH Project – Projected Timeline, Workplan & Overall Project Methodology

Project Plan

WP1:  Project Management.

  • Project management to support the project and its relationships with the project partners and the funders.

WP2:  Data Modelling

  • Model Archives Hub EAD data and Copac MODS data as RDF

WP3:  Technical Development – Linked Data Interface

  • Transform the source data to RDF/XML according to the models developed in WP2
  • Enrich Hub and Copac data with data/links from sources such as DBpedia, the BBC, the Library of Congress, VIAF, MusicBrainz and Freebase
  • Provide both RDF and HTML documents for Archives Hub and Copac resources, with stable, well-designed URIs
  • Provide a SPARQL endpoint for the Hub Linked Data resources
  • Look at the feasibility of providing a RESTful API interface to the Hub and Copac Linked Data resources

WP4: Prototype Development

  • Test and refine requirements for proposed prototypes
  • Design user interfaces for the prototype
  • Technical development and testing of the user interfaces

WP5: ‘Opportunities and Barriers’ Reporting

  • Design and implement procedures for logging ongoing project issues
  • Analyse and synthesise logged issues around known Linked Data issues
  • Report on opportunities and barriers via the project blog, outlining methods and recommendations for overcoming, mediating or mitigating the issues identified wherever possible

WP6: Advocacy and Dissemination

  • Report on ongoing project progress and findings at JISC programme events
  • Demonstrate project outputs and report to communities on the findings of the opportunities and barriers reporting at relevant conferences and workshops

Timetable

WP / Month 1 2 3 4 5 6 7 8 9 10 11 12
WP1 X X X X X X X X X X X X
WP2 X X X X
WP3 X X X X X
WP4 X X X X X X X
WP5 X X X X X X X X X X
WP6 X X X X X X X X X

Project Management and Staffing

Adrian Stevenson will project manage LOCAH to ensure that the workplan is carried out to the timetable, and that effective dissemination and evaluation mechanisms are implemented according to the JISC Project Management guidelines. Consortium agreements in line with JISC guidelines will be established for the project partners. UKOLN will lead on all the workpackages. Staff who will work on LOCAH are already in post.

Support for Standards, Accessibility and Other Best Practices

LOCAH will adhere to the guidance and good practice provided by JISC in the Standards Catalogue and JISC Information Environment. The primary technology methodologies, standards and specifications adopted for this project will be:

  • XML, XSLT, RDF/XML, RDFa, FOAF, SKOS, SPARQL, N3, JSON, RSS/Atom
  • Metadata standards: EAD, MODS, Dublin Core
  • Berners-Lee, T. (2006). ‘Linked Data – Design Issues’
  • Berners-Lee, T. (1998). ‘W3C Style: Cool URIs don’t change’
  • Cabinet Office, ‘Designing URI Sets for the UK Public Sector’
  • Dodds, L. & Davis, I., ‘Linked Data Patterns’
  • W3C Web Accessibility Initiative (WAI)

LOCAH Project – Risk Analysis, Evaluation and Impact

Risk Analysis

Each risk is scored as probability × severity, with the planned action/mitigation noted:

  • Difficulties recruiting or retaining staff (probability 2, severity 4, score 8): key members of staff are already in post at UKOLN, Mimas and Eduserv.
  • Project is over-ambitious (probability 2, severity 2, score 4): the project plan will ensure that deliverables arrive in a timely fashion and that the project does not divert from agreed goals.
  • Failure to meet deadlines within the project timescale (probability 2, severity 4, score 8): a clear project plan with all relevant tasks outlined, with continuous review and rescheduling of work as necessary.
  • Failure to disseminate best practices effectively (probability 2, severity 2, score 4): UKOLN has very effective dissemination channels, and the involvement of partners who can gain clear benefits from this work will allow them to be involved in dissemination activities.
  • Project partners fail to work effectively (probability 1, severity 3, score 3): UKOLN has good links with all the partners, many through previous joint projects and recent consultancy work; a consortium agreement will address potential concerns.

Evaluation

LOCAH will be evaluated by a number of means, including both qualitative and quantitative methods, looking at both the tangible and intangible outputs of the project. We will regularly check progress against the project plan and requirements, and we will engage with users through the blog, social media, questionnaires and events. The project manager will lead the evaluation, liaising with relevant parties and drawing on contacts within the JISC community and the wider HE community.

Impact

Several members of the project team are closely involved with current Linked Data activities, and are fully aware of the current ‘state of the art’ against which the impact of the project will be evaluated. The immediate impact of the project will be to provide two new enriched and quality assured data sets to the UK HE and global data graph. It will also provide a prototype that highlights the potential of Linked Data for enhancing learning, teaching and research. The long-term impact will be to help Linked Data gain traction and achieve a critical mass in the UK HE community, as well as providing invaluable experience and insight on a range of issues. Mimas intends to sustain the Linked Data sets, and will ensure that the resources have stable URIs for two years beyond the life of the project. The project may be able to transition to using the Talis Connected Commons scheme if the licensing situation can be clarified. This would then provide long-term sustainability for the data publishing.



LOCAH Project – Wider Benefits to Sector & Achievements for Host Institution

Meeting a need

High-quality research and teaching rely partly on access to a broad range of resources. Archive and library materials inform and enhance knowledge and are central to the JISC strategy. JISC invests in bibliographic and archival metadata services to enable discovery of, and access to, those materials, and we know the research, teaching and learning communities value those services.

As articulated in the Resource Discovery Taskforce Vision, that value could be increased if the data can be made to “work harder”, to be used in different ways and repurposed in different contexts.

Providing bibliographic and archive data as Linked Data creates links with other data sources, and allows the development of new channels into the data. Researchers are more likely to discover sources that may materially affect their research outcomes, and the ‘hidden’ collections of archives and special collections are more likely to be exposed and used.

Archive data is by its nature incomplete and often sources are hidden and little known. User studies and log analyses indicate that Archives Hub users frequently search laterally through the descriptions; this gives them a way to make serendipitous discoveries. Linked Data is a way of vastly expanding the benefits of lateral search, helping users discover contextually related materials. Creating links between archival collections and other sources is crucial – archives relating to the same people, organisations, places and subjects are often widely dispersed. By bringing these together intellectually, new discoveries can be made about the life and work of an individual or the circumstances surrounding important historical events. New connections, new relationships, new ideas about our history and society. Put this together with other data sources, such as special collections, multimedia repositories and geographic information systems, and the opportunities for discovery are significantly increased.

Similarly, by making Copac bibliographic data available as Linked Data we can increase the opportunities for developers to provide contextual links to primary and secondary source material held within the UK’s research libraries and an increasing number of specialist libraries, including the British Museum, the National Trust, and the Royal Society. The provision of library and special collections content as Linked Data will allow developers to build interfaces to link contextually related historical sources that may have been curated and described using differing methodologies. The differences in these methodologies and the emerging standards for description and access have resulted in distinct challenges in providing meaningful cross-searching and interlinking of this related content – a Linked Data approach offers potential to overcome that significant hurdle.

Researchers and teachers will have the ability to repurpose data for their own specific use. Linked Data provides flexibility for people to create their own pathways through Archives Hub and Copac data alongside other data sources. Developers will be able to provide applications and visualisations tailored to the needs of researchers, learning environments, institutional and project goals.

Innovation

Archives are described hierarchically, and this presents challenges for the output of Linked Data. In addition, descriptions are a combination of structured data and semi-structured data. As part of this project, we will explore the challenges in working with semi-structured data, which can potentially provide a very rich source of information. The biographical histories for creators of archives may provide unique information that has been based on the archival source. Extracting event-based data from this can really open up the potential of the archival description to be so much more than the representation of an archive collection. It becomes a much more multi-faceted resource, providing data about people, organisations, places and events.

The library community is beginning to explore the potential of Linked Data. The Swedish and Hungarian National Libraries have exposed their catalogues as Linked Data, the Library of Congress has exposed subject authority data (LCSH), and OCLC is now involved in making the Virtual International Authority File (VIAF) available in this way.

By treating the entities (people, places, concepts etc) referred to in bibliographic data as resources in their own right, links can be made to other data referring to those same resources. Those other sources can be used to enrich the presentation of bibliographic data, and the bibliographic data can be used in conjunction with other data sources to create new applications.
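As a toy illustration of what that means in practice, the sketch below (Python/rdflib) describes a book and its author as distinct resources and points them at external identifiers. All URIs are placeholders that merely follow the general shape of VIAF and id.loc.gov identifiers, not real ones.

```python
# A toy illustration of treating the entities in a bibliographic record as
# resources in their own right, rather than as strings. All URIs below are
# placeholders, not real identifiers.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, OWL

g = Graph()
book = URIRef("http://example.org/copac/id/work/origin-of-species")
author = URIRef("http://example.org/copac/id/person/darwin-charles")

g.add((book, DCTERMS.title, Literal("On the Origin of Species")))
g.add((book, DCTERMS.creator, author))   # the author is a resource, not a string
g.add((author, OWL.sameAs,
       URIRef("http://viaf.org/viaf/0000000")))                      # placeholder VIAF ID
g.add((book, DCTERMS.subject,
       URIRef("http://id.loc.gov/authorities/subjects/sh00000000"))) # placeholder LCSH ID

print(g.serialize(format="turtle"))
```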

Copac is the largest union catalogue of bibliographic data in the UK, and one of the largest in the world, and its exposure as Linked Data can provide a rich data source, of particular value to the research, learning and teaching communities.

In answering the call, we will be able to report on the challenges of the project, and how we have approached them. This will be of benefit to all institutions with bibliographic and archival data looking to maximise its potential. We are very well placed within the research and teaching communities to share our experiences and findings.

LOCAH Project – Aims, Objectives and Final Outputs

This is the first of a number of posts outlining our project plan in line with the requirements of the call document. So here we are – our aims, objectives and intended final outputs:

The LOCAH project aims to make records from the JISC-funded Archives Hub and Copac services available as Linked Data. In each case, the aim is to provide persistent URIs for the key entities described in that data, dereferencing to documents describing those entities. The information will be made available as XHTML web pages containing RDFa, and also as Linked Data in RDF/XML. SPARQL endpoints will be provided to enable the data to be queried. In addition, consideration will be given to the provision of a simple query API for some common queries.
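Purely by way of illustration, a query against such an endpoint might look something like the following (using Python and SPARQLWrapper). The endpoint URL and the vocabulary terms are hypothetical placeholders, since neither the endpoint nor the RDF model exists yet.

```python
# Illustrative only: how a researcher might query a LOCAH SPARQL endpoint once
# one exists. The endpoint URL and the property choices are placeholders.

from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/locah/sparql")  # placeholder URL
endpoint.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?collection ?title WHERE {
        ?collection dcterms:title ?title ;
                    dcterms:subject ?subject .
        ?subject rdfs:label "Chemistry"@en .
    } LIMIT 20
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["collection"]["value"], "-", row["title"]["value"])
```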

Making resources available as structured data

The work will involve:

  1. Analysis & modelling of the current data and the selection (or definition) of appropriate RDF vocabularies.
  2. Design of suitable URI patterns (based on the current guidelines for UK government data).
  3. Development of procedures to transform existing data formats to RDF, followed by either:
    • upload of the transformed data to an RDF store (such as a Talis Platform instance) and development of an application to serve data from that store, or
    • development of an application to serve RDF data from an existing data store.
    The former will be the case for the Hub data; the latter is likely to be used for Copac.

  4. We intend to enhance the source data with links between these two datasets and with existing Linked Data sets made available by other parties (e.g. DBpedia, GeoNames, the Virtual International Authority File (VIAF), Library of Congress Subject Headings). This process may include simple name lookups and also the use of services such as EDINA Unlock, OpenCalais and Muddy to identify entities from text fragments; a rough sketch of one such lookup follows this list. Given that Copac is in a transition phase to a new database during the project, we will be taking a more lightweight approach to structuring and enhancing Copac data. We will then be able to compare the outcomes of a lightweight, rapid approach to producing Linked Data for Copac with the relatively resource-intensive data enrichment approach for the Archives Hub.
  5. We will look to provide resources such as dataset-level descriptions (using VoID and/or DCAT) and semantic sitemaps; a minimal VoID sketch also follows this list.
  6. The project will adopt a lightweight, iterative approach to the development and testing of the exposed structured content. This will involve the rapid development of interfaces to Hub and Copac data that will be tested against existing third-party Linked Data tools and data sets. The evaluated results will feed into the further phases of development.
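As a (deliberately naive) illustration of the kind of simple name lookup mentioned in point 4, the following sketch queries the public DBpedia SPARQL endpoint for resources whose English label matches a name taken from a record. Real matching would need disambiguation (dates, context), and services such as VIAF or EDINA Unlock would each need their own lookup code.

```python
# A naive illustration of a "simple name lookup": find DBpedia resources whose
# English label matches a personal name from a Hub or Copac record, so that an
# owl:sameAs / rdfs:seeAlso link could be added. Real matching needs more care.

from SPARQLWrapper import SPARQLWrapper, JSON

def dbpedia_lookup(name):
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?resource WHERE {
            ?resource rdfs:label "%s"@en .
        } LIMIT 5
    """ % name)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["resource"]["value"] for b in results["results"]["bindings"]]

if __name__ == "__main__":
    print(dbpedia_lookup("Charles Darwin"))
```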
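And here is a minimal sketch of the kind of dataset-level description mentioned in point 5, built with rdflib and the VoID vocabulary. The dataset URI, endpoint location and example resource are placeholders.

```python
# A minimal VoID dataset description, as a sketch. URIs are placeholders.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

VOID = Namespace("http://rdfs.org/ns/void#")

g = Graph()
g.bind("void", VOID)
g.bind("dcterms", DCTERMS)

dataset = URIRef("http://example.org/locah/dataset/hub")   # placeholder URI
g.add((dataset, RDF.type, VOID.Dataset))
g.add((dataset, DCTERMS.title, Literal("Archives Hub Linked Data (example)")))
g.add((dataset, DCTERMS.description,
       Literal("Descriptions of archival collections held in UK institutions.")))
g.add((dataset, FOAF.homepage, URIRef("http://archiveshub.ac.uk/")))
g.add((dataset, VOID.sparqlEndpoint, URIRef("http://example.org/locah/sparql")))
g.add((dataset, VOID.exampleResource,
       URIRef("http://example.org/locah/id/archivalresource/gb1234-example")))

print(g.serialize(format="turtle"))
```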

The result will be the availability of two new quality-assured datasets which are “meshable” with other global Linked Data sources. In addition, the documents made available will be accessible to all the usual web search and indexing services such as Google, contributing to their searchability and findability, and thereby raising the profile of these Mimas JISC services to research users. In common parlance, the resources will have more “Google juice”.

Prototype Data Visualisations

We also suggest a number of end-user prototype ideas, which would provide attractive and compelling data visualisations based around a number of visualisation concepts. We intend to produce one prototype, using the ideas suggested as its basis, though given the iterative nature of the project it may end up being something quite different. We will produce additional prototypes if time and resources allow.

The project intends to hold a small developer competition, run by the UKOLN DevCSI team on behalf of the project, to gather further end-use cases and prototype ideas.

Opportunities and Barriers Reporting

We will log ongoing project issues as they arise to inform our opportunities and barriers reporting, which we will deliver via posts on the LOCAH project blog. We will outline and discuss the methods and solutions we have adopted to overcome, mediate or mitigate these, wherever this has been possible.

The methods and solutions we establish will iteratively feed into the ongoing development process. This will mean that we are able to work out solutions to issues as they arise, and implement them in the next phase of rapid development.

We are keen to engage with the other projects funded as part of the jiscExpo call, and with any additional UK HE projects working on implementing Linked Data solutions. The project team has very strong links with the Linked Data community: we will look to engage the community by stimulating debate about implementation problems via the project blog. We will also set up a project Twitter feed to generate discussion around the project’s #locah tag. In addition, we will engage via relevant JISCmail lists, as well as the UK Government Data Developers and Linked Data API Google discussion groups that several members of the team are already part of.

Who is the creator?

I recently posted on the Archives Hub blog about the tricky issue of the ‘creator’ in an archival context. This is something we need to think about when modelling EAD data to create RDF triples. It has generated some discussion about the definition of a creator of archives, and some of the issues surrounding this. The post can be found at http://archiveshub.ac.uk/blog/?p=2401.

RDFa – from theory to practice

Adrian Stevenson will be talking about the LOCAH project at IWMW 2010 in Sheffield, in a session that looks at implementing RDFa. The session will:

  1. provide an introduction to what’s happening now in Linked Data and RDFa
  2. demonstrate recent work exposing repository metadata as RDFa
  3. explain how integration of RDFa within a content management system such as Drupal can enrich semantic content – and in some cases help significantly boost search engine ranking.

Information about the session can be found at http://iwmw.ukoln.ac.uk/iwmw2010/sessions/bunting-dewey-stevenson/

[Embedded SlideShare presentation: ‘RDFa – from theory to practice’, slideshare id 4734746]