Some thoughts on architecture and workflows

This is an attempt to sketch out some of my/our initial thoughts on the approaches the project is considering for exposing data as Linked Data. I should emphasise that these are very much initial thoughts, and things may change as we progress.

The project is dealing with two main data sources, and at the moment two different approaches are being considered for those sources.

The first data source is the collection of archival finding aids describing the holdings of the archives of educational and research institutions in the UK, aggregated by the JISC Archives Hub service. This data takes the form of XML documents in the Encoded Archival Description (EAD) format, created by archivists in the various institutions, and submitted to the Hub.

Currently, the aggregated data is indexed using the Cheshire 3 application, and exposed as HTML pages on the archiveshub.ac.uk site for search and browse. (SRU and Z39.50 targets and an OAI-PMH repository are also available.)

To expose (probably a subset of) the Hub EAD finding aids as Linked Data, the workflow is expected to look something like that represented in Figure 1 below:

[Figure 1: Transforming EAD to RDF and exposing as Linked Data]
  1. Transform: EAD XML documents are transformed to an RDF format (a minimal sketch of such a transform follows this list). We’ll write more about our current thinking on this in a subsequent post, as working out how best to represent the EAD data in RDF as the target for the transform is in itself a significant chunk of work (and an area I’m particularly interested in). This is likely to be something of an “iterative” process: we’ll start with a fairly basic transform that captures some subset of the content of the input documents, and perhaps refine things later to generate more data (and correct errors we’ll no doubt make in the first cut!)
  2. Enhance: RDF data from the previous step is “enhanced” and augmented. This step might include processes to (i) generally “clean up” the data (e.g. normalise some literals, identify internal co-references etc); (ii) add links to resources in other datasets; (iii) (maybe) pull in some useful data from other datasets, either data held by the Hub but not included in the EAD docs or data from other sources. Again this will probably be a process which we extend and refine over time.
  3. Upload: Load the RDF data from the previous step to an instance of the Talis Platform triple store, which Talis are kindly making available to the project.
  4. Expose: Expose a set of linked “bounded descriptions” from the triple store over HTTP, as documents in both human-readable and RDF formats, following the principles of the W3C TAG httpRange-14 resolution/Cool URIs for the Semantic Web. The use of the Platform also provides us with a SPARQL endpoint for the data – which we can make available to others to use – and which also means we can consider layering other Web interfaces over that endpoint. For example, I’d be interested in trying out the Linked Data API, which I talked about over on eFoundations a while ago.
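
By way of illustration, here’s a minimal sketch (in Python, using lxml and rdflib) of the sort of thing the “Transform” step might do. Dublin Core stands in here as a placeholder target vocabulary, and the base URI pattern is hypothetical – as noted above, settling on the real RDF representation is still work to be done.

```python
# A sketch only: Dublin Core stands in as the target vocabulary, and the
# base URI pattern is hypothetical; the real representation is still open.
from lxml import etree
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

EAD_NS = {"ead": "urn:isbn:1-931666-22-9"}   # EAD 2002 namespace
BASE = "http://example.org/id/findingaid/"   # hypothetical URI pattern

def text_of(element):
    """All descendant text of an element, whitespace-normalised."""
    return " ".join("".join(element.itertext()).split()) if element is not None else ""

def ead_to_rdf(ead_path):
    """Transform one EAD finding aid into a small RDF graph."""
    tree = etree.parse(ead_path)
    g = Graph()
    # Use <eadid> as the basis for a stable URI for the finding aid.
    eadid = text_of(tree.find(".//ead:eadheader/ead:eadid", namespaces=EAD_NS))
    finding_aid = URIRef(BASE + eadid)
    # Collection-level title and creator from <archdesc>/<did>.
    title = text_of(tree.find(".//ead:archdesc/ead:did/ead:unittitle", namespaces=EAD_NS))
    if title:
        g.add((finding_aid, DC.title, Literal(title)))
    creator = text_of(tree.find(".//ead:archdesc/ead:did/ead:origination", namespaces=EAD_NS))
    if creator:
        g.add((finding_aid, DC.creator, Literal(creator)))
    return g
```

A first cut would capture only a handful of fields like this; later iterations would walk the full hierarchy of components and generate richer descriptions.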

It may be that the second and third steps are reversed, and we upload the data to the triple store and perform the “enhance” step on the data there, i.e. something closer to Figure 2:

[Figure 2: Transforming EAD to RDF and exposing as Linked Data, with enhancement after upload]

Or indeed that a “hybrid” of the two is appropriate, and some “enhance” processes take place before upload and others take place afterwards.

We’ll also need to integrate some provision for “version control” and “provenance”/”attribution” (e.g. to track which data comes directly from the EAD sources, and which is added from elsewhere) into this process.

So for the Hub data, the plan is that the data is “exported” from the existing EAD dataset, and that the Platform triplestore provides the “back-end” for the app that serves up the “Linked Data” document views and provides a SPARQL endpoint.

The second data source of interest is the collection of bibliographic metadata aggregated into the Copac catalogue from the member libraries of Research Libraries UK and from other specialist libraries. This data is also held as XML, in the MODS format. (Bethan Ruddock has a couple of posts on the Copac Development blog which describe the processes by which data is transferred from the contributor libraries to the Copac catalogue.)

As with the Archives Hub data, the first stage will be to design an appropriate RDF representation and an algorithm for transforming the MODS data to RDF (or to select – and adapt, if necessary – an existing one).

In contrast to the approach for the Hub outlined above, the plan is to serve the RDF data from the existing Copac database, rather than upload it to a triplestore. This will probably require the development of a small additional application (or maybe just the configuration of an HTTP server) to serve the newly coined resource URIs, support content negotiation, and redirect to the URIs of appropriate documents.
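
As a rough illustration of how small that application might be, here’s a sketch (Python/Flask; the /id/, /doc/ and /data/ URI patterns are hypothetical placeholders, not settled project URIs) of the 303-redirect behaviour the httpRange-14 resolution calls for:

```python
# A sketch only: the URI patterns are hypothetical placeholders.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/id/<path:thing>")
def thing_uri(thing):
    # /id/... identifies the thing itself (a book, a person, ...), so we
    # answer with a 303 See Other redirect to a *document* about it,
    # choosing the representation from the Accept header.
    accept = request.headers.get("Accept", "")
    if "application/rdf+xml" in accept:
        return redirect("/data/" + thing, code=303)  # RDF/XML description
    return redirect("/doc/" + thing, code=303)       # human-readable page
```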

One of the questions raised by this approach is how to handle the process I described above as “enhance”, and in particular how to accommodate the addition of new data – at a minimum, links to existing resources described in other Linked Data datasets – assuming that we aren’t going to be able to update the source MODS XML documents. For some cases, it may be trivial to incorporate this in the MODS-to-RDF transform (e.g., to generate links to languages described by lexvo.org). Another approach might be to generate simple “seeAlso” links to an additional set of documents (which could be simple static documents or could be served from an RDF store). Hmm. As you can probably tell, I’ve thought about this rather less than I’ve thought about the Hub case! Anyway, the suggested approach is sketched in Figure 3:

[Figure 3: Transforming MODS to RDF and exposing as Linked Data]
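
To make the “seeAlso” idea sketched above a little more concrete, here’s a small hedged sketch (Python/rdflib). The lexvo.org URI pattern is real; the record URIs and the location of the extra documents are hypothetical placeholders:

```python
# A sketch only: record URIs and the /extra/ pattern are hypothetical.
from rdflib import URIRef
from rdflib.namespace import DCTERMS, RDFS

def enhance(g, record_uri, iso639_3_code, record_id):
    record = URIRef(record_uri)
    # Link the record's language to its lexvo.org identifier,
    # e.g. http://lexvo.org/id/iso639-3/eng for English.
    g.add((record, DCTERMS.language,
           URIRef("http://lexvo.org/id/iso639-3/" + iso639_3_code)))
    # Point to a separately served document of additional data about the
    # record, which could be static or served from an RDF store.
    g.add((record, RDFS.seeAlso,
           URIRef("http://example.org/extra/" + record_id)))
    return g
```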

Another constraint of this approach is that, while we can serve the set of linked documents, it doesn’t provide a SPARQL endpoint.

One of the expectations for the project is that it “explores and reports on the opportunities and barriers in making content structured and exposed”, and an assessment of the pros and cons of the different approaches to hosting the data should contribute to that report.

LOCAH Project – Projected Timeline, Workplan & Overall Project Methodology

Project Plan

WP1: Project Management

  • Project management to support the project and its relationships with project partners and funders.

WP2: Data Modelling

  • Model Archives Hub EAD data and Copac data as RDF

WP3: Technical Development – Linked Data Interface

  • Transform the modelled data to RDF/XML
  • Enrich Hub and Copac data with data/links from sources such as DBpedia, the BBC, the Library of Congress, VIAF, MusicBrainz and Freebase
  • Provide both RDF and HTML documents for Archives Hub and Copac resources, with stable, well-designed URIs
  • Provide a SPARQL endpoint for the Hub Linked Data resources
  • Look at feasibility of providing RESTful API interface to the Hub and Copac Linked Data resources

WP4: Prototype Development

  • Test and refine requirements for proposed prototypes
  • Design user interfaces for the prototype
  • Technical development and testing of the user interfaces

WP5: ‘Opportunities and Barriers’ Reporting

  • Design and implement procedures for logging ongoing project issues
  • Analyse and synthesise logged issues around known Linked Data issues
  • Report on opportunities and barriers via the project blog, outlining methods and recommendations on how to overcome, mediate or mitigate the issues identified wherever possible.

WP6: Advocacy and Dissemination

  • Report on ongoing project progress and findings at JISC programme events
  • Demonstrate project outputs and report to communities on the findings of the opportunities and barriers reporting at relevant conferences and workshops

Timetable

WP / Month 1 2 3 4 5 6 7 8 9 10 11 12
WP1 X X X X X X X X X X X X
WP2 X X X X
WP3 X X X X X
WP4 X X X X X X X
WP5 X X X X X X X X X X
WP6 X X X X X X X X X

Project Management and Staffing

Adrian Stevenson will project manage LOCAH to ensure that the workplan is carried out to the timetable, and that effective dissemination and evaluation mechanisms are implemented according to the JISC Project Management guidelines. Consortium agreements in line with JISC guidelines will be established for the project partners. UKOLN will lead on all the workpackages. Staff who will work on LOCAH are already in post.

Support for Standards, Accessibility and Other Best Practices

LOCAH will adhere to the guidance and good practice provided by JISC in the Standards Catalogue and JISC Information Environment. The primary technology methodologies, standards and specifications adopted for this project will be:

  • XML, XSLT, RDF/XML, RDFa, FOAF, SKOS, SPARQL, N3, JSON, RSS/Atom
  • Metadata standards: EAD, MODS, Dublin Core
  • Berners-Lee, T. (2006). ‘Linked Data – Design Issues’
  • Berners-Lee, T. (1998). ‘W3C Style: Cool URIs don’t change’
  • Cabinet Office. ‘Designing URI Sets for the UK Public Sector’
  • Dodds, L. and Davis, I. ‘Linked Data Patterns’
  • W3C Web Accessibility Initiative (WAI)

LOCAH Project – Project Team Relationships and End User Engagement

Project Team

Adrian Stevenson

Adrian Stevenson is a project manager and researcher at UKOLN. He has managed the highly successful SWORD project since May 2008 and also manages the JISC Information Environment Technical Review project. He has extensive experience of the implementation of interoperability standards, and has a long-standing interest in Linked Data. Adrian will manage LOCAH, and will be involved in the data modelling work, testing and the opportunities and barriers reporting.

Jane Stevenson

Jane Stevenson is the Archives Hub Coordinator at Mimas. In this role, she manages the day-to-day running of the Archives Hub service. She is a registered archivist with substantial experience of cataloguing, implementation of data standards, dissemination and online service provision. She has expertise in the use of Encoded Archival Description for archives, and will be involved in the data modelling work, mapping EAD to RDF, and testing, as well as the opportunities and barriers reporting.

Pete Johnston

Pete Johnston is a Technical Researcher at Eduserv. His work has been primarily in the areas of metadata/resource description, with a particular interest in the use of Semantic Web technologies and the Linked Data approach. He participates in a number of standards development activities, and is an active contributor to the work of the Dublin Core Metadata Initiative. He was also a co-editor of the Open Archives Initiative Object Reuse and Exchange (OAI ORE) specifications.

Pete joined Eduserv in May 2006 from UKOLN, University of Bath, where he advised the UK education and cultural heritage communities on strategies for the effective exchange and reuse of information. Pete will be involved in the data modelling work, mapping EAD and MODS to RDF, software testing and the opportunities and barriers report.

Bethan Ruddock

Bethan Ruddock is involved in content development activity for both the Archives Hub and Copac. She is currently working on a year-long project to help expand the coverage of the Archives Hub through the refinement of our automated data import routines. Bethan also undertakes a range of outreach and promotional activities, collaborating with Lisa on a number of publications. Bethan will be involved in the modelling work of transforming MODS to RDF.

Julian Cheal

Julian Cheal is a software developer at UKOLN. He is currently working on the analysis and visualisation of UK open access repository metadata from the RepUK project. He has experience of writing software to process metadata at UKOLN, and has previous development experience at Aberystwyth University. Julian will be mainly involved in developing the prototype and visualisations.

Ashley Sanders

Ashley Sanders is the Senior Developer for Copac, and has been working with the service since its inception. He is currently leading the technical work on the Copac Re-Engineering project, which involves a complete overhaul of the service. Ashley will be involved in the development work of transforming MODS to RDF.

Shirley Cousins is a Coordinator for the Copac service. Shirley will be involved in the work of transforming MODS to RDF.

An additional Mimas developer will provide the development work for transforming the Archives Hub EAD data to RDF. This person will be allocated from existing Mimas staff in post.

Talis are our technology partner on the project, kindly providing us with access to a Talis Platform store for our data. Leigh Dodds is our main contact at the company. Talis is a privately owned UK company, amongst the first organisations to apply leading-edge Semantic Web technologies to the creation of real-world solutions. Talis has significant expertise in Semantic Web and Linked Data technologies, and the Talis Platform has been used by a variety of organisations including the BBC and the UK Government as part of data.gov.uk.

OCLC are also partnering us, mainly to help out with VIAF. Our contacts at OCLC are John MacColl, Ralph LeVan and Thom Hickey. OCLC is a worldwide library cooperative, owned, governed and sustained by members since 1967. Its public purpose is to work with its members to improve access to the information held in libraries around the globe, and find ways to reduce costs for libraries through collaboration. Its Research Division works with the community to identify problems and opportunities, prototype and test solutions, and share findings through publications, presentations and professional interactions.

Engagement with the Community

Stakeholders

Several key stakeholder groups have been identified: end users, particularly historical researchers, students & educators; data providers, including RLUK and the libraries & archives that contribute data to the services; the developer community; the library community; the archival sector and more broadly, the cultural heritage sector.

End users

Copac and the Archives Hub services are heavily used by historical researchers and educators. Copac is one of JISC’s most heavily used services, averaging around one million sessions per month. Around 48% of HE research usage can be attributed to historical research. Both services can directly engage relevant end users, and have done so successfully in the past to conduct market research or solicit feedback on service developments. In addition, channels such as Twitter can be used to reach end users, particularly the digital humanities community.

Data providers; Library Community; Archival Community; Cultural Heritage Sector

Through the Copac and Archives Hub Steering Committees we have the means to consult with a wide range of representatives from the library and archival sectors. The project partners have well-established links with stakeholders such as RLUK, SCONUL, and the UK Archives Discovery Network, which represents all the key UK archives networks including The National Archives and the Scottish Archives Networks. The Archives Hub delivers training and support to the UK archives community, and can effectively engage its contributors through workshops, fora, and social media. OCLC’s community engagement channels will also provide a valuable means of sharing project outputs for feedback internationally. The key project partners are also engaged in the Resource Discovery Taskforce Vision implementation planning, as well as the JISC/SCONUL Shared Services Proposal. Outputs from this project will be shared in both these contexts. In addition, we will proactively share information with bodies such as the MLA, Collections Trust and Culture24.

Developer Community

As a JISC innovation support centre, UKOLN is uniquely placed to engage the developer community through initiatives such as the DevCSI programme, which aims to help developers in HE realise their full potential by creating the conditions for them to learn, network effectively, share ideas and collaborate.

Dissemination

The primary channel for disseminating the project outputs will be the UKOLN-hosted blog. End users will be engaged for survey feedback primarily via the Copac and Archives Hub services. Social media will be used to reach subject groups with active online communities (e.g. Digital Humanities). Information aimed at the library and archival community, including data providers, will be disseminated through reports to service Steering Group meetings, UKAD meetings, the Resource Discovery Taskforce Vision group and the JISC/SCONUL Shared Services Proposal Group, as well as professional listservs. Conference presentations and demonstrations will be proposed for events such as ILI, Online Information, and JISC conferences. An article will be written for Ariadne. The developer community will be engaged primarily through the project blog, Twitter, developer events & the Linked Data competition.

LOCAH Project – Intellectual Property Rights (IPR)

The project will be managed according to JISC guidelines for intellectual property. Any custom-built prototype outputs will be made available under an open-source licence, free of charge, to the UK HE and FE community. There may be some rights restrictions relating to the Copac and Hub data content due to data licensing issues. These will be explored and addressed as part of the project.

LOCAH Project – Risk Analysis, Evaluation and Impact

Risk Analysis

Risk | Probability | Severity | Score | Action/Mitigation
Difficulties recruiting or retaining staff | 2 | 4 | 8 | Key members of staff already in post at UKOLN, Mimas and Eduserv
Project is over-ambitious | 2 | 2 | 4 | The project plan will ensure that deliverables are delivered in a timely fashion and the project does not divert from agreed goals
Failure to meet deadlines within the project timescale | 2 | 4 | 8 | Clear project plan with all relevant tasks outlined; continuous review and rescheduling of work as necessary
Failure to disseminate best practices effectively | 2 | 2 | 4 | UKOLN has very effective dissemination channels. The involvement of partners who can gain clear benefits from this work will allow them to be involved in dissemination activities
Project partners fail to work effectively | 1 | 3 | 3 | UKOLN has good links with all the partners, many through previous joint projects and recent consultancy work. A consortium agreement will address potential concerns

Evaluation

LOCAH will be evaluated by a number of means including qualitative and quantitative methods, and will look at both the tangible and intangible outputs of the project. We will regularly check progress against the project plan and requirements, and we will engage with users through the blog, social media, questionnaires and events. The project manager will lead the evaluation, liaising with relevant parties and drawing on contacts within the JISC community and wider HE community.

Impact

Several members of the project team are closely involved with current Linked Data activities, and are fully aware of the current ‘state of the art’ against which the impact of the project will be evaluated. The immediate impact of the project will be to provide two new enriched and quality assured data sets to the UK HE and global data graph. It will also provide a prototype that highlights the potential of Linked Data for enhancing learning, teaching and research. The long-term impact will be to help Linked Data gain traction and achieve a critical mass in the UK HE community, as well as providing invaluable experience and insight on a range of issues. Mimas intends to sustain the Linked Data sets, and will ensure that the resources have stable URIs for two years beyond the life of the project. The project may be able to transition to using the Talis Connected Commons scheme if the licensing situation can be clarified. This would then provide long-term sustainability for the data publishing.



LOCAH Project – Wider Benefits to Sector & Achievements for Host Institution

Meeting a need

High quality research and teaching relies partly on access to a broad range of resources. Archive and library materials inform and enhance knowledge and are central to the JISC strategy. JISC invests in bibliographic and archival metadata services to enable discovery of, and access to, those materials, and we know the research, teaching and learning communities value those services.

As articulated in the Resource Discovery Taskforce Vision, that value could be increased if the data can be made to “work harder”, to be used in different ways and repurposed in different contexts.

Providing bibliographic and archive data as Linked Data creates links with other data sources, and allows the development of new channels into the data. Researchers are more likely to discover sources that may materially affect their research outcomes, and the ‘hidden’ collections of archives and special collections are more likely to be exposed and used.

Archive data is by its nature incomplete and often sources are hidden and little known. User studies and log analyses indicate that Archives Hub users frequently search laterally through the descriptions; this gives them a way to make serendipitous discoveries. Linked Data is a way of vastly expanding the benefits of lateral search, helping users discover contextually related materials. Creating links between archival collections and other sources is crucial – archives relating to the same people, organisations, places and subjects are often widely dispersed. By bringing these together intellectually, new discoveries can be made about the life and work of an individual or the circumstances surrounding important historical events. New connections, new relationships, new ideas about our history and society. Put this together with other data sources, such as special collections, multimedia repositories and geographic information systems, and the opportunities for discovery are significantly increased.

Similarly, by making Copac bibliographic data available as Linked Data we can increase the opportunities for developers to provide contextual links to primary and secondary source material held within the UK’s research libraries and an increasing number of specialist libraries, including the British Museum, the National Trust, and the Royal Society. The provision of library and special collections content as Linked Data will allow developers to build interfaces to link contextually related historical sources that may have been curated and described using differing methodologies. The differences in these methodologies and the emerging standards for description and access have resulted in distinct challenges in providing meaningful cross-searching and interlinking of this related content – a Linked Data approach offers potential to overcome that significant hurdle.

Researchers and teachers will have the ability to repurpose data for their own specific use. Linked Data provides flexibility for people to create their own pathways through Archives Hub and Copac data alongside other data sources. Developers will be able to provide applications and visualisations tailored to the needs of researchers, learning environments, institutional and project goals.

Innovation

Archives are described hierarchically, and this presents challenges for the output of Linked Data. In addition, descriptions are a combination of structured data and semi-structured data. As part of this project, we will explore the challenges in working with semi-structured data, which can potentially provide a very rich source of information. The biographical histories for creators of archives may provide unique information that has been based on the archival source. Extracting event-based data from this can really open up the potential of the archival description to be so much more than the representation of an archive collection. It becomes a much more multi-faceted resource, providing data about people, organisations, places and events.

The library community is beginning to explore the potential of Linked Data. The Swedish and Hungarian National Libraries have exposed their catalogues as Linked Data, the Library of Congress has exposed subject authority data (LCSH), and OCLC is now involved in making the Virtual International Authority File (VIAF) available in this way.

By treating the entities (people, places, concepts etc) referred to in bibliographic data as resources in their own right, links can be made to other data referring to those same resources. Those other sources can be used to enrich the presentation of bibliographic data, and the bibliographic data can be used in conjunction with other data sources to create new applications.

Copac is the largest union catalogue of bibliographic data in the UK, and one of the largest in the world, and its exposure as Linked Data can provide a rich data source, of particular value to the research, learning and teaching communities.

In answering the call, we will be able to report on the challenges of the project, and how we have approached them. This will be of benefit to all institutions with bibliographic and archival data looking to maximise its potential. We are very well placed within the research and teaching communities to share our experiences and findings.

LOCAH Project – Aims, Objectives and Final Outputs

This is the first of a number of posts outlining our project plan in line with the requirements of the call document. So here we are – our aims, objectives and intended final outputs:

The LOCAH project aims to make records from the JISC-funded Archives Hub service, and records from the JISC-funded Copac service, available as Linked Data. In each case, the aim is to provide persistent URIs for the key entities described in that data, dereferencing to documents describing those entities. The information will be made available as XHTML web pages containing RDFa, and as Linked Data RDF/XML. SPARQL endpoints will be provided to enable the data to be queried. In addition, consideration will be given to the provision of a simple query API for some common queries.
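
For illustration, once an endpoint is available, a query might look something like this sketch (Python with SPARQLWrapper); the endpoint URL and the vocabulary are placeholders, since neither has been finalised:

```python
# A sketch only: endpoint URL and vocabulary are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/locah/sparql")  # hypothetical
sparql.setQuery("""
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?resource ?title WHERE {
        ?resource dc:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["resource"]["value"], "-", row["title"]["value"])
```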

Making resources available as structured data

The work will involve:

  1. Analysis & modelling of the current data and the selection (or definition) of appropriate RDF vocabularies.
  2. Design of suitable URI patterns (based on the current guidelines for UK government data).
  3. Development of procedures to transform existing data formats to RDF, and either:
    • uploading of that transformed data to an RDF store (such as a Talis Platform instance) and development of an application to serve data from that store, or
    • development of an application to serve RDF data from an existing data store.
  The former will be the case for the Hub data; the latter is likely to be used for Copac.

  4. We intend to enhance the source data with links between these two datasets and with existing Linked Data sets made available by other parties (e.g. DBpedia, Geonames, the Virtual International Authority File, Library of Congress Subject Headings). This process may include simple name lookups (a sketch of one such lookup follows this list) and also the use of services such as EDINA Unlock, OpenCalais and Muddy to identify entities from text fragments. Given that Copac is in a transition phase to a new database during the project, we will be taking a more lightweight approach to structuring and enhancing Copac data. We will then be able to make a comparison between the outcomes of a lightweight, rapid approach to producing Linked Data for Copac, and the relatively resource-intensive data enrichment approach for the Archives Hub.
  5. We will look to provide resources such as dataset-level descriptions (using VoID and/or DCAT) and semantic sitemaps.
  6. The project will adopt a lightweight iterative approach to the development and testing of the exposed structured content. This will involve the rapid development of interfaces to Hub and Copac data that will be tested against existing third-party Linked Data tools and data sets. The evaluated results will feed into the further phases of development.
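
As a concrete illustration of the “simple name lookups” mentioned in item 4, here is a hedged sketch (Python) using VIAF’s AutoSuggest service; the exact shape of the JSON response (the "result", "term" and "viafid" fields) should be treated as an assumption to verify against the live service:

```python
# A sketch only: verify the AutoSuggest response fields before relying on them.
import json
import urllib.parse
import urllib.request

def viaf_candidates(name):
    """Return (label, VIAF id) candidates for a personal or corporate name."""
    url = "http://viaf.org/viaf/AutoSuggest?query=" + urllib.parse.quote(name)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [(r.get("term"), r.get("viafid")) for r in (data.get("result") or [])]

# e.g. viaf_candidates("Rylands, John") might yield candidate VIAF
# identifiers for linking a Hub creator name to viaf.org URIs.
```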

The result will be the availability of two new quality-assured datasets which are “meshable” with other global Linked Data sources. In addition, the documents made available will be accessible to all the usual web search and indexing services such as Google, contributing to their searchability and findability, and thereby raising the profile of these Mimas JISC services to research users. In common parlance, the resources will have more “Google juice”.

Prototype Data Visualisations

We also suggest a number of end-user prototype ideas, which would provide attractive and compelling data visualisations based around a number of visualisation concepts. We intend to produce one prototype, using the ideas suggested as its basis, though given the iterative nature of the project it may end up being something quite different. We will produce additional prototypes if time and resources allow.

The project intends to hold a small developer competition, run by the UKOLN DevCSI team on behalf of the project, to gather further end-user use cases and prototype ideas.

Opportunities and Barriers Reporting

We will log ongoing project issues as they arise to inform our opportunities and barriers reporting, which we will deliver via posts on the LOCAH project blog. We will outline and discuss the methods and solutions we have adopted to overcome, mediate or mitigate these issues, wherever this has been possible.

The methods and solutions we establish will iteratively feed into the ongoing development process. This will mean that we are able to work out solutions to issues as they arise, and implement them in the next phase of rapid development.

We are keen to engage with the other projects funded as part of the jiscExpo call, and any additional UK HE projects working on implementing Linked Data solutions. The project team has very strong links with the Linked Data community: we will look to engage the community by stimulating debate about implementation problems via the project blog. We will also set up a project Twitter feed to generate discussion around the project’s #locah tag. In addition, we will engage via relevant JISCmail lists, as well as the UK Government Data Developers and Linked Data API Google discussion groups that several members of the team are already part of.

RDFa – from theory to practice

Adrian Stevenson will be talking about the LOCAH project at IWMW 2010 in Sheffield, in a session that looks at implementing RDFa. The session will:

  1. provide an introduction to what’s happening now in Linked Data and RDFa
  2. demonstrate recent work exposing repository metadata as RDFa
  3. explain how integration of RDFa within a content management system such as Drupal can enrich semantic content – and in some cases help significantly boost search engine ranking.

Information about the session can be found at http://iwmw.ukoln.ac.uk/iwmw2010/sessions/bunting-dewey-stevenson/

[Slides: ‘RDFa – from theory to practice’, available on SlideShare]

The power of connections: unlocking the Web of data

Welcome to our project blog, just set up today. Lots more to come, but here’s a news item Jane Stevenson from Mimas has written to get us going:

Mimas and UKOLN are working together on an exciting JISC-funded project to make our Archives Hub and Copac data available as structured Linked Data, for the benefit of education and research. We will also be working in partnership with Eduserv, Talis and OCLC, leading experts within their fields. We want to put archival and bibliographic data at the heart of the Linked Data Web, enabling new links to be made between diverse content sources and enabling the free and flexible exploration of data, so that researchers can make new connections between subjects, people, organisations and places to reveal more about our history and society.

Linked Data uses the RDF data model to identify concepts and to describe relationships between those concepts. It promotes the idea of a Web of data rather than a Web of documents. The more document-centric approach, based on Web pages, does not readily expose data within the text in a way that applications can process, so the wealth of information within a page is of limited value. Both the Archives Hub and Copac have a great deal of rich data within them, and with Linked Data it can be brought to the fore by structuring concepts within the data in a way that identifies them and facilitates linking to them. Data can be combined in a way that results in new correlations, new perspectives and new discoveries.
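
For readers new to RDF, here is a toy illustration (Python/rdflib) of the data model described above; the example.org URIs are placeholders rather than the project’s actual identifiers:

```python
# A toy example only: the example.org URIs are placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, OWL, RDF

g = Graph()
person = URIRef("http://example.org/id/person/john-rylands")
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("John Rylands")))
# The same resource can be linked to descriptions elsewhere on the
# Web of data, so applications can follow links to further detail:
g.add((person, OWL.sameAs, URIRef("http://dbpedia.org/resource/John_Rylands")))
print(g.serialize(format="turtle"))
```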

Image: http://www.flickr.com/photos/reedsturtevant/4288406572/

Mimas is keen to explore new ways to open up data for the benefit of our users. Providing bibliographic and archive data as Linked Data enables links with other data sources and creates new channels into the data. Researchers are more likely to discover sources that may materially affect their research outcomes. It means that we can give researchers the potential to combine data sources for themselves, so that we do not need to predict the use of the data.

We know that researchers using the Hub or Copac are sometimes looking for a particular piece of information, such as a photograph of a library, or the birth date of a writer, or the location of an event. Linked Data can be valuable here because it helps to pin down concepts. If a researcher is looking for a photograph of John Rylands Library in Manchester, for example, Linked Data can clarify the concepts – a photograph, the library, ‘John Rylands’ as the name of a library, ‘John Rylands’ as a Victorian philanthropist, ‘Manchester’ as a place in England. It enables us to link across to other sources that can provide further information about these concepts. If a researcher is gathering information around a subject area, they can benefit from this linking of concepts and explore the Web much more fully, because the data is no longer held within silos.

Archive data is by its nature incomplete and often potentially valuable sources are difficult to identify.  Bibliographic data is vast and it can be difficult to make useful connections. Researchers frequently search laterally through the descriptions, giving them a way to make serendipitous discoveries. Linked Data could potentially vastly expand the benefits of lateral search, helping users discover contextually related materials. Creating links just between cultural heritage collections can bring great benefits – archives, artifacts and published works relating to the same people, organisations, places and subjects are often widely dispersed. By bringing these together intellectually, new discoveries can be made about the life and work of an individual or the circumstances surrounding important historical events. New connections, new relationships, new ideas about our history and society. Put this together with other data sources, such as special collections, multimedia repositories and geographic information systems, and the opportunities for discovery are significantly increased.  A Linked Data approach offers potential to overcome differences in methodologies and standards for description and access which can hinder meaningful cross-searching and interlinking of related content.

Linked Data can enable researchers and teachers to repurpose data for their own specific use. It provides flexibility for people to create their own pathways through Archives Hub and Copac data alongside other data sources. Developers will be able to provide applications and visualisations tailored to the needs of researchers, learning environments, institutional and project goals.

This project, named LOCAH (Linked Open Copac and Archives Hub), is exploratory, and real-world applications of Linked Data are still in the early stages. Whilst the benefits could be extensive, we know that there are challenges, in particular concerns about the resources required to create Linked Data and the availability of tools to make use of it. A number of key data sources are now available as Linked Data, such as BBC data, Wikipedia and Government datasets. In addition, developers are busy creating tools to make the data easy to query and process. By getting involved in creating Linked Data ourselves, we can explore the benefits and pitfalls of exposing archival and bibliographic data in this way. This is a project that enables us to contribute to a global effort to unlock the enormous potential within our data, for the benefit of researchers and society as a whole.