
Thursday, 10 April 2014

Hydra: multiple heads are better than one

Trinity College, Dublin
I spent a couple of days this week in Dublin at the Hydra Europe Symposium. Hydra has been on my radar for a little while but these two days really gave me an opportunity to focus on what it is and what it can do for us. This is timely for us as we are currently looking at systems for fulfilling our repository and digital archiving functions. 

At York we currently use Fedora for our digital library so developments within the Hydra community are of particular interest because of its relationship to Fedora.

Chris Awre from the University of Hull stated that the fundamental assumptions on which Hydra was built were that:

1. No single system can provide the full range of repository based solutions for an institution's needs
2. No single institution can resource development of a full range of solutions on its own

This chimes well with our recent work at York trying to propose a technical architecture that could support deposit, storage, curation and access to research data (among other things). There is no one solution for this and building our own bespoke system from scratch or based purely on Fedora would clearly not be the best use of our resources.

The solution that Hydra provides is a technical framework that multiple institutions can share and build upon, with adopting institutions developing custom elements tailored to their local workflows. Hydra has one body but many heads supporting many different workflows.

We were told pretty early on within the proceedings that for Hydra, the community is key. Hydra is as much about knowledge sharing as sharing bits of code.

“If you want to go fast, go alone; if you want to go far, go together” – This African proverb was used to help explain the Hydra concept of community. In working together you can achieve more and go further. However, some of the case studies that were presented during the Symposium clearly showed that for some, it is possible to go both far and fast using Hydra and with very little development required. Both Trinity College Dublin and the Royal Library of Denmark commented on the speed with which a repository solution based on Hydra could be up and running. Speed is of course largely dependent on the complexity or uniqueness of the workflows you need to put in place. Hydra does not provide a one-size-fits-all solution but should be seen more as a toolkit with building blocks that can be put together in different ways.

Dermot Frost from Trinity College Dublin summed up their reasons for joining the Hydra community, saying that they had had experience with both Fedora and DSpace and neither suited their needs. Fedora is highly configurable and in theory does everything you need, but you need a team of rocket scientists to work it out. DSpace is a more out-of-the-box solution but you cannot configure it in the way you need to conform to local requirements. Hydra sits between the two, providing a solution that is highly configurable but easier to work with than Fedora.

Anders Conrad from the Royal Library of Denmark told us that for their repository solution, 10-20% of material is deemed worthy of proper long term preservation and is pushed to the national repository. The important thing here is that Hydra can support these different workflows, allowing an organisation to put a single repository in place that supports different types of material, with different values placed on the content and thus different workflows operating within it. The 'one repository - multiple workflows' model is very much the approach that the University of Hull have taken with their Hydra implementation. Richard Green described how data comes into the repository through different routes and how different types of data are treated and displayed in different ways depending on the content type.

And what about digital preservation? This is of course my main interest in all of this. One thing that is worth watching is Archivesphere, a Hydra head being created by Penn State, designed to "create services for preserving, managing, and providing access to digital objects, in a way that is informed by archival thinking and practices" and including support for both PREMIS and EAD metadata. This is currently being tested by Hydra partners and it will be interesting to see how it develops.

Another thing to think about is how Hydra could meet my digital preservation requirements that I published last year (note they have changed a little bit since then). I think the answer is that it probably could meet most of them if we wanted to develop the solutions on top of existing Hydra components. Archivesphere is already starting to introduce some of the required functionality to Hydra, for example file characterisation, normalisation and fixity checking. I guess the bigger question for me is whether this is the best approach for us or whether it would be preferable to make use of existing digital archiving software (Archivematica for example) and ensure the systems can talk to each other effectively.
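As an aside, some of that functionality is easy to prototype outside of any particular system. Below is a minimal sketch of file characterisation in Python (purely illustrative, and nothing to do with how Archivesphere or Archivematica actually do it) that shells out to the Unix 'file' command to report a MIME type for everything in a directory; a real workflow would use a dedicated tool such as DROID or FITS. The directory name is invented for the example.

```python
import subprocess
from pathlib import Path

def characterise(directory):
    """Report a MIME type for every file in a directory tree.

    A rough stand-in for proper characterisation tools (DROID, FITS,
    Siegfried) - here we simply shell out to the Unix 'file' command.
    """
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            mime = subprocess.run(
                ["file", "--mime-type", "-b", str(path)],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            print(f"{path}: {mime}")

if __name__ == "__main__":
    characterise("incoming_deposit")  # hypothetical directory name
```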

Friday, 28 March 2014

Discovering archives: it's all about the standards

Yesterday at the UK Archives Discovery Forum we mostly talked about standards.*

Specifically, metadata standards for resource discovery of archives, both physical and digital. Standards are key to making archival data discoverable and of course this is our main reason for being - we preserve things so that they can be reused - and they can only be reused if they can be discovered.

The day was really relevant to work we are currently doing at the Borthwick Institute, with the installation of a new archival management system (AtoM) underway and scoping work ongoing for a retroconversion project which will help us move our legacy catalogues into this new system - both major initiatives intended to make our catalogue data more widely discoverable.

Nick Poole from the Collections Trust talked about user-focused design (both for physical buildings and digital interfaces) and how we should avoid putting barriers between our users and the information they need. The gov.uk website is an obvious example of how this approach to design can work in a digital sphere, and their design principles are online. This is something I think we can all learn from.

He also touched on the Open Data agenda and how the principles of making data ‘open by default’ are sometimes seen as being at odds with traditional models for income generation in the archives sector. Nick argues that by opening up data we are allowing more people to find us and making way for new opportunities and transactions as they engage further with the other services we have to offer.

He also mentioned that we can be ‘digitally promiscuous’ - making our data available in many different ways via many different platforms. We do not need to keep our data close to our chests but should be signposting what we have and drawing people in. We can only really do this if we make use of data standards. Standards help us to exchange and share our data and allow others to find and interpret it.

Jane Stevenson talked about the importance of standards to the Archives Hub.  Aggregating data from multiple sources would be very tricky if no-one used metadata standards. The problem is that the standards that we have are not perfect. Encoded Archival Description (EAD), the XML realisation of ISAD(G), can be too flexible and thus is realised in different ways by different institutions. Even those archives using CALM as their archival cataloguing system may have individual differences in how they use the metadata fields available to them. This does make life as an aggregator more challenging.
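To make the aggregation problem a little more concrete, here is a small illustrative sketch (not Archives Hub code) of the kind of defensive processing an aggregator ends up doing: pulling a collection title out of EAD files that may or may not use a namespace and may nest elements differently. The folder name is invented.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def collection_titles(ead_dir):
    """Yield (filename, title) for the first <unittitle> in each EAD file.

    Contributors encode EAD differently (with or without a namespace,
    at different levels of nesting), so we match on the local element
    name rather than assuming one fixed path.
    """
    for ead_file in sorted(Path(ead_dir).glob("*.xml")):
        tree = ET.parse(ead_file)
        for elem in tree.iter():
            if elem.tag.split("}")[-1] == "unittitle":  # strip any namespace
                yield ead_file.name, (elem.text or "").strip()
                break  # first unittitle is usually the collection-level title

for filename, title in collection_titles("ead_exports"):  # hypothetical folder
    print(filename, "->", title)
```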

Once data is standardised into the Archives Hub flavour of EAD it can be transformed again into other data standards allowing it to be cross searchable beyond the UK archives sector. Jane touched on their work with RDF and linked data and the opportunities this can bring.

We should make use of opportunities to join the European stage. The Archives Hub are 'country manager' for Archives Portal Europe (APE) thus making it a simple matter for Hub contributors to push their data out beyond national borders. For those archival descriptions that link directly to a digital object, the opportunity exists to make this data available through Europeana. This takes our data beyond the archives sector, allowing our collections to be cross-searched alongside other European digital cultural heritage resources. In my mind, this really is the start of ‘digital promiscuity’ and an opportunity I feel we should be embracing (if we can accept the necessity to open up our metadata with a CC0 licence).

Geoff Browell from King's College London talked about what we as archivists can offer our users over and above what they can get by visiting Google. He highlighted our years of experience at indexing data and pointed out that approximately half of the users of the AIM25 search interface appreciate our efforts in this area and use the index terms provided to browse for data in preference to the Google-style free text search. He thinks that we should be talking more closely with both our users and the interface developers to ensure we are giving people what they need. He mentioned that delivery of data to users should be a conversation, not a one-sided process.

The National Archives asked us for comment on a beta version of the new Discovery interface, which will provide a new portal into selected UK archival holdings. They are encouraging conversation with users by inviting ‘tagging’ of pages within the search interface.

Malcolm Howitt from Axiell discussed how systems and software can support standards. Standards are a topic that is often raised, and vendors are asked to support many of them. They are keen to help where they can and need to work with the community to ensure that they know what is required of them. The different flavours of EAD were again raised as an issue, but Malcolm pointed out that when standards work, the user doesn’t even need to be aware of them.

The National Archives, Kew, London by Jim Linwood on Flickr CC BY 2.0


Reflections

I think we are all in agreement that metadata standards are necessary and we need to work with them in order to make our catalogue data more visible. Some further issues were picked out in the final session of the day where attendees were invited to share their thoughts on the standards they use and the ones they would like to know more about.


  1. Do we need a standard for accessions data? Would this be a specific subset of ISAD(G) or does it need further definition? The next step in our planned implementation of AtoM is to populate it with accessions data from various different sources and I expect there will be some issues to deal with along the way as a result of lack of standards in this area.
  2. How do we describe digital material? Is ISAD(G) fit for this purpose? As born-digital material becomes more and more prevalent in our collections this will become more of an issue. The use of PREMIS to hold technical preservation metadata will be essential alongside the resource discovery metadata, but is this enough? This is undoubtedly an area for future exploration (a rough sketch of what a minimal PREMIS record might contain follows this list).
  3. Does the hierarchical nature of ISAD(G) and EAD hold us back? If we can’t create detailed resource discovery metadata for an archive until we know both the hierarchy and its place in the hierarchy does this slow us down in getting the information out there?
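On point 2, it is worth remembering that a PREMIS record for a single file does not need to be elaborate. The sketch below builds a minimal, illustrative PREMIS-style object description in Python: an identifier, a SHA-256 fixity value and a file size. The element names follow the PREMIS data dictionary, but the identifier scheme, the file path and the choice of SHA-256 are all assumptions made for the example.

```python
import hashlib
import xml.etree.ElementTree as ET
from pathlib import Path

PREMIS_NS = "http://www.loc.gov/premis/v3"  # PREMIS 3.x namespace

def premis_object(path):
    """Build a minimal PREMIS-style object record for one file."""
    ET.register_namespace("premis", PREMIS_NS)
    obj = ET.Element(f"{{{PREMIS_NS}}}object")

    ident = ET.SubElement(obj, f"{{{PREMIS_NS}}}objectIdentifier")
    ET.SubElement(ident, f"{{{PREMIS_NS}}}objectIdentifierType").text = "local"
    ET.SubElement(ident, f"{{{PREMIS_NS}}}objectIdentifierValue").text = Path(path).name

    chars = ET.SubElement(obj, f"{{{PREMIS_NS}}}objectCharacteristics")
    fixity = ET.SubElement(chars, f"{{{PREMIS_NS}}}fixity")
    ET.SubElement(fixity, f"{{{PREMIS_NS}}}messageDigestAlgorithm").text = "SHA-256"
    ET.SubElement(fixity, f"{{{PREMIS_NS}}}messageDigest").text = hashlib.sha256(
        Path(path).read_bytes()).hexdigest()
    ET.SubElement(chars, f"{{{PREMIS_NS}}}size").text = str(Path(path).stat().st_size)

    return ET.tostring(obj, encoding="unicode")

print(premis_object("example_deposit/report.doc"))  # hypothetical file
```

This only covers the technical end of the problem, of course - it says nothing about how such a record sits alongside ISAD(G) resource discovery metadata.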



*…mostly standards - with the addition of a surprisingly entertaining session on copyright from Ronan Deazley – check out the CREATe project for more on this topic

Monday, 17 March 2014

'Routine encounters with the unexpected' (or what we should tell our digital depositors)


I was very interested a few months back to hear about the release of a new and much-needed report on acquiring born-digital archives: Born Digital: Guidance for Donors, Dealers, and Archival Repositories published by the Council on Library and Information Resources. I read it soon after it was published and have been mulling over its content ever since.

The quote within the title of this post "routine encounters with the unexpected" is taken from the concluding section of the report and describes the stewardship of born-digital archival collections. The report intends to describe good practices that can help reduce these archival surprises.

The publication takes an interesting and inclusive approach, being aimed both at archivists who will be taking in born-digital material, and at those individuals and organisations involved with offering born-digital material to an archive or repository.

It appeared at a time when I was developing content for our new website aimed specifically at donors and depositors, and also a couple of weeks before I went on my first trip to collect someone's digital legacy for inclusion in our archive. Over the last few months, alongside archivist colleagues, I have also been planning and documenting our own digital accessions workflow. This report has been a rich source of information and advice and has helped inform all of these activities.

There is lots of food for thought within the publication but what I like best are the checklists at the end which neatly summarise many of the key issues highlighted within the report and provide a handy quick reference guide.

Much as I find this a very useful and interesting publication, it got me thinking about the alternative and apparently conflicting advice that I give to depositors and how the two relate.

I have always thought that one of the most important things that anyone can do to ensure that their digital legacy survives into the future is to put into practice good data management strategies. These strategies are often just simple common sense rules, things like weeding out duplicate or unnecessary files, organising your data into sensible and logical directory structures and naming them well.
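Even the duplicate-weeding step can be scripted. The sketch below is purely an illustration (not something we would hand to depositors as-is): it groups files with byte-identical content by comparing SHA-256 checksums, leaving a human to decide what, if anything, to delete. The folder name is invented.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files under 'root' that have byte-identical content,
    using a SHA-256 checksum of each file as the grouping key."""
    by_digest = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_digest[digest].append(path)
    return [paths for paths in by_digest.values() if len(paths) > 1]

for group in find_duplicates("my_documents"):  # hypothetical folder
    print("Identical files:", ", ".join(str(p) for p in group))
```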

Where we have depositors who wish to give us born-digital material for our archive, I would like to encourage them to follow rules like these to help ensure that we can make better sense of their data when it comes our way. This also helps fulfil the OAIS responsibility to ensure the independent utility of data - the more we know about data from the original source, the greater the likelihood that others will be able to make sense of it in the future. I have put guidance to this effect on our new website which is based on an advice sheet from the Archaeology Data Service.

Screenshot of the donor and depositor FAQ page on the Borthwick Institute's new website

However, this goes against the advice in the 'Born Digital' report which states that "...donors and dealers should not manipulate, rearrange, extract, or copy files from their original sources in anticipation of offering the material for gift or purchase."

In a blog post last year I talked about a digital rescue project I had been working on, looking at the data on some 5 1/4 inch floppy disks from the Marks and Gran archive. This project would not have been nearly as interesting if someone had cleaned up the data before deposit - rationalising and re-naming files and deleting earlier versions. There would have been no detective story and information about the creative process would have been lost. However, if all digital deposits came to us like this would we be able to resource the amount of work required to make sense of them?

So, my question is as follows. What do we tell our depositors? Is there room for both sets of advice - the 'organise your data before deposit' approach aimed at those organisations who regularly deposit their administrative information with us, and the 'leave well alone' approach for the digital legacies of individuals? This is the route I have tried to take on our new website; however, I have concerns as to whether it will be clear enough to donors and depositors which advice they should follow, especially where there are areas of cross-over. I'm interested to hear how other archives handle this question.


Tuesday, 17 December 2013

Updating my requirements

Last week I published my digital preservation Christmas wishlist. A bit tongue in cheek really but I saw it as my homework in advance of the latest Digital Preservation Coalition (DPC) day on Friday which was specifically about articulating requirements for digital preservation systems.

This turned out to be a very timely and incredibly useful event. Along with many other digital preservation practitioners I am currently thinking about what I really need a digital preservation system to do and which  systems and software might be able to help.

Angela Dappert from the DPC started off the day with a very useful summary of requirements gathering methodology. I have since returned to my list and tidied it up a bit to get my requirements in line with her SMART framework – specific, measurable, attainable, relevant and time-bound. I also realised that by focusing on the framework of the OAIS model I have omitted some of the ‘non-functional’ requirements that are essential to having a working system – requirements related to the quality of a service, its reliability and performance for example.

As Carl Wilson of the Open Planets Foundation (OPF) mentioned, it can be quite hard to create sensible, measurable requirements for digital preservation when we are talking about time frames so far in the future. How do we measure the fact that a particular digital object will still be readable in 50 years' time? In digital preservation we regularly use phrases such as ‘always’, ‘forever’ and ‘in perpetuity’. Use of these terms in a requirements document inevitably leads us to requirements that cannot be tested, and this can be problematic.

I was interested to hear Carl describing requirements as being primarily about communication - communication with your colleagues and communication with the software vendors or developers. This idea tallies well with the thoughts I voiced last week. One of my main drivers for getting the requirements down in writing was to communicate my ideas with colleagues and stakeholders.

The Service Providers Forum at the end of the morning with representatives from Ex Libris, Tessella, Arkivum, Archivematica, Keep Solutions and the OPF was incredibly useful. Just hearing a little bit about each of the products and services on offer and some of the history behind their creation was interesting. There was lots of talk about community and the benefits of adopting a solution that other people are also using. Individual digital preservation tools have communities that grow around them and feed into their development. Ed Fay (soon to be of the OPF) made an important point that the wider digital preservation community is as important as the digital preservation solution that you adopt. Digital preservation is still not a solved problem. The community is where standards and best practice come from and these are still evolving outside of the arena of digital preservation vendors and service providers.

Following on from this discussion about community there was further talk about how useful it is for organisations to share their requirements. Are one organisation's needs really going to differ wildly from another's? There is likely to be a core set of digital preservation requirements that will be relevant for most organisations.

Also discussed was how we best compare the range of digital preservation software and solutions that are available. This can be hard to do when each vendor markets themselves or describes their product in a different way. Having a grid from which we can compare products against a base line of requirements would be incredibly useful. Something like the excellent tool grid provided by POWRR with a higher level of detail in the criteria used would be good.


I am not surprised that after spending a day learning about requirements gathering I now feel the need to go back and review my previous list. I was comforted by the fact that Maite Braud from Tessella stated that “requirements are never right first time round” and Susan Corrigall from the National Records of Scotland informed us that requirements gathering exercises can take months and will often go through many iterations before they are complete. Going back to the drawing board is not such a bad thing.

Wednesday, 11 December 2013

My digital preservation Christmas wish list

All I want for Christmas is a digital archive.

By paparutzi on Flickr CC BY 2.0
Since I started at the Borthwick Institute for Archives I have been keen to adopt a digital preservation solution. Up until this point, exploratory work on the digital archive has been overtaken by other priorities, perhaps the most important of these being an audit of digital data held at the Borthwick and an audit of research data management practices across the University. The outcome is clear to me – we hold a lot of data and if we are to manage this data effectively over time, a digital archiving system is required.

In a talk at the SPRUCE end of project workshop a couple of weeks ago both Ed Fay and Chris Fryer spoke about the importance of the language that we use when we talk about digital archiving. This is a known problem for the digital preservation community and one I have myself come up against on a number of different levels.

In an institution relatively new to digital preservation, the term ‘digital archiving’ can mean a variety of different things. On the most basic IT level it implies static storage: a conceptual box we can put data in, a place where we put data when we have finished using it, a place where data will be stored but no longer maintained.

Those of us who work in digital preservation have a different understanding of digital archiving. We see digital archiving as the continuous active management of our digital assets, the curation of data over its whole life cycle, the systems that ensure data remains not only preserved, but fit for reuse over the long term. Digital archiving is more than just storage and needs to encompass activities as described within the Open Archival Information System reference model such as preservation planning and data management. Storage should be seen as just one part of a digital preservation solution.

To this end, and to inform discussions about what digital preservation really is, I pulled together a list of digital preservation requirements against which any digital preservation system or software should be assessed. This became my wish list for a digital preservation system. I do not really expect to have a system such as this unwrapped and ready to go on Christmas morning this year, but maybe some time in the future!

In order to create this list of requirements I looked at the OAIS reference model and the main functional entities within this model. The list below is structured around these entities. 

I also bravely revisited ISO16363: Audit and Certification of Trustworthy Digital Repositories. This is the key (and most rigorous) certification route for those organisations who would like to become Trusted Digital Repositories. It goes into great detail about some of the activities which should be taking place within a digital archive and many of these are processes which would be most effectively carried out by an automated system built into the software or system on which the digital archive runs.

This list of requirements I have come up with has a slightly different emphasis from other lists of this nature due to the omission of the OAIS entity for Access. 

Access should be a key part of any digital archive. What is the point of preserving information if we are not going to allow others to access it at some point down the line? However, at York we already have an established system for providing access to digital data in the shape of York Digital Library. Any digital preservation system we adopt would need to build on and work alongside this existing repository not replace it. 

Functional requirements for access have also been well articulated by colleagues at Leeds University as part of their RoaDMaP project and I was keen not to duplicate effort here.

As well as helping to articulate what I actually mean when I talk about my hypothetical ‘digital archive’, one of the purposes of this is to provide a grid for comparing the functionality of different digital preservation systems and software.


Thanks to Julie Allinson and Chris Fryer for providing comment thus far. Chris's excellent case study for the SPRUCE project helped inform this exercise.

My requirements are listed below. Feedback is most welcome


INGEST
I1 - The digital archive will enable us to record/store administrative information relating to the Submission Information Package (information and correspondence relating to receipt of the SIP)
I2 - The digital archive will include a means for recording decisions regarding selection/retention/disposal of material from the Submission Information Package
I3 - The digital archive will be able to identify and characterise data objects (where appropriate tools exist)
I4 - The digital archive will be able to validate files (where appropriate tools exist)
I5 - The digital archive will support automated extraction of metadata from files
I6 - The digital archive will incorporate virus checking as part of the ingest process
I7 - The digital archive will be able to record the presence and location of related physical material

DATA MANAGEMENT
DM1 - The digital archive will generate persistent, unique internal identifiers
DM2 - The digital archive will ensure that preservation description information (PDI) is persistently associated with the relevant content information. The relationship between a file and its metadata/documentation should be permanent
DM3 - The digital archive will support the PREMIS metadata schema and use it to store preservation metadata
DM4 - The digital archive will enable us to describe data at different levels of granularity – for example metadata could be attached to a collection, a group of files or an individual file
DM5 - The digital archive will accurately record and maintain relationships between different representations of a file (for example, from submitted originals to dissemination and preservation versions and subsequent migrations)
DM6 - The digital archive must store technical metadata extracted from files (for example that created as part of the ingest process)

PRESERVATION PLANNING
PP1 - The digital archive will allow preservation plans (such as file migration or refreshment) to be enacted on individual or groups of files
PP2 - Automated checking of significant properties of files will be carried out post-migration to ensure they are preserved (where tools exist)
PP3 - The digital archive will record actions, migrations and administrative processes that occur whilst the digital objects are contained within the digital archive

ADMINISTRATION
A1 - The digital archive will allow for disposal of data where appropriate. A record must be kept of this data and when disposal occurred
A2 - The digital archive will have reporting capabilities so statistics can be gathered on numbers of files, types of files etc.

ARCHIVAL STORAGE
AS1 - The digital archive will actively monitor the integrity of digital objects with the use of checksums (a rough sketch of this idea follows the list)
AS2 - Where problems of data loss or corruption occur, the digital archive will have a reporting/notification system to prompt appropriate action
AS3 - The digital archive will be able to connect to, and support, a range of storage systems

GENERAL
G1 - The digital archive will be compliant with the Open Archival Information System (OAIS) reference model
G2 - The digital archive will integrate with the access system/repository
G3 - The digital archive will have APIs or other services for integrating with other systems
G4 - The digital archive will be able to incorporate new digital preservation tools (for migration, file validation, characterisation etc.) as they become available
G5 - The digital archive will include functionality for extracting and exporting the data and associated metadata in standards-compliant formats
G6 - The software or system chosen for the digital archive will be supported and technical help will be available
G7 - The software or system chosen for the digital archive will be under active development
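To give a flavour of what requirement AS1 means in practice, here is a minimal sketch of checksum-based integrity monitoring: record a manifest of SHA-256 checksums at ingest, then re-check it on a schedule and report anything missing or altered (the notification side of AS2). It is purely illustrative; any real digital preservation system would build this into its storage layer, and the paths and manifest format are invented.

```python
import hashlib
import json
from pathlib import Path

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def make_manifest(storage_root, manifest_file="manifest.json"):
    """Record a SHA-256 checksum for every file at ingest time."""
    manifest = {str(p.relative_to(storage_root)): sha256(p)
                for p in Path(storage_root).rglob("*") if p.is_file()}
    Path(manifest_file).write_text(json.dumps(manifest, indent=2))

def check_manifest(storage_root, manifest_file="manifest.json"):
    """Re-check the stored files and report anything missing or altered."""
    manifest = json.loads(Path(manifest_file).read_text())
    for relative_path, expected in manifest.items():
        path = Path(storage_root) / relative_path
        if not path.exists():
            print(f"MISSING: {relative_path}")
        elif sha256(path) != expected:
            print(f"CORRUPTED: {relative_path}")

# Hypothetical usage: run make_manifest once at ingest,
# then check_manifest on a schedule (e.g. from cron).
# make_manifest("archival_storage")
# check_manifest("archival_storage")
```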

Friday, 29 November 2013

COPTR: It's short for "Making my Thursday much easier"

This is a guest post from Nathan Williams, Archives Assistant.
For four days of a working week I can largely be found on the front desk of the Borthwick Institute, assisting people with their research, fetching documents from our vast holdings, and helping them to interpret the materials they have in front of them. Part of the role of an Archives Assistant is one of providing researchers with the tools of discovery.

On the fifth day of a working week I don a different cap altogether, for on Thursday I head on up to Jen Mitcham’s office to help with a different challenge altogether: digital preservation.

So it was somewhat of a pleasant surprise when I received an email circulated through the jisc-digital-preservation list regarding the beta launch of COPTR or the Community Owned digital Preservation Tool Registry. Ok, so my title is silly, but here’s why it really should stand for “Making my Thursday much easier”:

  • As an institutional repository with strong University, Research, Diocesan and local and national collections of import, we have varying and ever-increasing demands on our ability to manage digital objects.
  • We don’t currently have an overarching OAIS-compliant preservation system, but we still have to take action on digital objects both in our care and yet to be created.
  • We have to act but resources are limited and the correct tools, used properly, can help us to act now instead of risking our digital assets.
  • Sometimes finding those tools, especially for the entry level practitioner, isn’t easy - COPTR should help to make it easy.
COPTR is not the first such ‘tool registry’ to exist, but its aim is to collate the contents of five previously used registries (among them those maintained by the Digital Curation Centre and the Open Planets Foundation, to name just two).

Here are just a few potentially great things about it:


  1. It’s working to collate all the information that’s currently out there into one place.
  2. It’s managed by us.
  3. Its browse function already looks really promising - show all the tools, or tools by functional category, or even tools by the content they act upon. I don’t think it can be overstated how useful this is for the entry level practitioner!
  4. It brings together advanced and entry level practitioners and allows for collaboration across the digital preservation spectrum.
  5. User experiences go beyond just descriptions but actually provide use cases and general experiences from people who have used a tool. These sections will hopefully get a lot more material added to them as time goes on.
  6. There is already quite a bit to get your teeth into and new entries are being added all the time - the activity log already looks promising.
I’ve already found some potential tools for investigation for my second look at finding us a temporary fixity solution. It’s also great to just browse and see what else is out there. What tools will you discover through COPTR?

Tuesday, 26 November 2013

Fund it, Solve it, Keep it – a personal perspective on SPRUCE

Yesterday I attended the SPRUCE end of project event at the fabulous new Library of Birmingham. The SPRUCE project was lauded by Neil Grindley as one of the best digital preservation projects that JISC has funded, and it is easy to see why. Over the two years it has run, SPRUCE has done a great deal for the digital preservation community, not least in bringing people together to come up with solutions to some of our digital preservation problems. The SPRUCE project is perhaps most well known for its mash-up events* but should also be praised for its involvement and leadership in other community-based digital preservation initiatives such as the recently launched tool registry COPTR (more about this in a future blog post).
Library of Birmingham by KellyNicholls27 on Flickr

SPRUCE can’t fix all the problems of the digital preservation community but what it has done very effectively is what William Kilbride describes as “productive small scale problem solving”. 

This event was a good opportunity to learn more about some of the tools and resources that have come out of the SPRUCE project. 

I was interested to hear Toni Sant of the Malta Music Memory Project describing their tool for extracting data from audio CDs, which was made available last week. I have not had a chance to investigate this in any detail yet but think it could be exactly what we need in order to move us forward from our audit of audio formats at the Borthwick Institute earlier this year to a methodology for ensuring their long term preservation, in line with the proposed 15 year digitisation strategy described last month. Obviously this deals only with audio CDs so its scope is limited, but given that audio CDs are a high priority for digital preservation this is an important development.

Another interesting tool, described by Eleonora Nicchiarelli at Nottingham University, allowed them to embed XMP metadata in the headers of TIFF images produced by their digitisation team. This avoids the separation of the images from the contextual information that is so important in making sense of them.
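For anyone wanting to experiment with the same idea, embedding descriptive metadata in an image header can be done with ExifTool. The sketch below is a minimal illustration (it has nothing to do with Nottingham's actual implementation, which I have not seen) that writes a couple of Dublin Core fields into a TIFF's XMP packet; it assumes ExifTool is installed, and the file name and field values are invented.

```python
import subprocess

def embed_xmp(tiff_path, title, creator):
    """Write basic Dublin Core metadata into a TIFF's XMP header
    by shelling out to ExifTool (which must be installed)."""
    subprocess.run(
        ["exiftool",
         f"-XMP-dc:Title={title}",
         f"-XMP-dc:Creator={creator}",
         "-overwrite_original",
         tiff_path],
        check=True,
    )

embed_xmp("scan_0001.tif", "Borthwick digitised image", "Digitisation team")  # hypothetical values
```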

It was also good to hear Ray Moore from the Archaeology Data Service talk about the ReACT (Resource Audit and Comparison) tool, the proposal for which I wrote in my last few weeks at the ADS. It is a simple tool written in VBA with a friendly Excel GUI, capable of automatically checking for the presence of related files in different directories. Originally created for those situations where you want to ensure a dissemination or preservation version of a file is present for each of your archival originals, it could have many use cases in alternative scenarios. As Ray articulated, “simple solutions are sometimes the best solutions”. Thanks are due to Ray and Andrew Amato of LSE for seeing that project through.
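The underlying check ReACT performs is simple enough to sketch in a few lines of Python (ReACT itself is a VBA/Excel tool; this is just an illustration of the same idea, with invented directory names): for every archival original, confirm that a file with the same name exists in the dissemination or preservation directory.

```python
from pathlib import Path

def missing_counterparts(originals_dir, derived_dir):
    """Return archival originals that have no matching file (same base
    name, any extension) in the dissemination/preservation directory."""
    derived_stems = {p.stem for p in Path(derived_dir).iterdir() if p.is_file()}
    return [p for p in Path(originals_dir).iterdir()
            if p.is_file() and p.stem not in derived_stems]

# Hypothetical directory names
for original in missing_counterparts("archival_originals", "dissemination_copies"):
    print("No dissemination copy for:", original.name)
```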

Chris Fryer of Northumberland Estates described some great work he has done (along with Ed Pinsent of ULCC) on defining digital preservation requirements and assessing a number of solutions against these requirements. He has produced a set of resources that could be widely re-used by others going through a similar process.

When I attended the first SPRUCE mash-up in Glasgow early last year, participants did a bit of work on defining the business case for digital preservation in the context of their own organisations and roles. At the time this seemed barely relevant to me, working as I was within an organisation whose very reason for being was digital preservation and for which the business case had already been well defined using the Keeping Research Data Safe model. Since Glasgow I have moved to a different job within the University of York, so it was useful yesterday to have a reminder of this work from Ed Fay, who was able to summarise some of the key tools and techniques and highlight why a business case is so important in order to get senior buy-in for digital preservation. This is something I need to go back to and review. The recently published Digital Preservation Business Case Toolkit should be a great resource to help me with this.

The need to have a well prepared elevator pitch to persuade senior managers that more resources should be put into digital preservation has also become more real for me. The one I wrote at the time in Glasgow was a good start but perhaps needs to be a little bit less tongue in cheek!


* as an ex-archaeologist I see SPRUCE mash-ups as being the digital preservation equivalent of Channel 4's Time Team but without the TV cameras, and with Paul Wheatley ably taking on  the role of Tony Robinson. Instead of 3 days to excavate an archaeological site we have 3 days to solve a selection of digital preservation problems and issues.