Thursday, 7 November 2019

Preserving emails. How hard can it be?


By Jim Costin, Bridging the Digital Gap Trainee
Emails. Most of us have at least one email account, with some people having as many as eight or nine. Since the introduction of the internet, email has been seen as an easy way of communicating between two parties and has now, for most of us, replaced traditional posted letters and fax machines.


However, emails are not without their downsides. For example, how often have you had to search back through a giant email thread for something, only to find it’s in an entirely different email account? Or how often have you tried to find that one email from someone you need to contact again, only to discover it has been accidentally deleted and cannot be recovered?

Those are just some of the issues which present themselves when working with emails as opposed to letters. From an archival standpoint, however, things are much more difficult.
I thought you could just take an inbox and preserve that?
Well, it’s not that easy. Whilst it might seem easy to just print out all of the emails and stick them in a box, that does not really solve the problem and, in many ways, creates more. Actually exporting the emails from an inbox can be very difficult, depending on which company is hosting them. Microsoft is one of the more difficult, due to its use of the proprietary .pst format, whereas Google uses the open .mbox format and offers a facility to export all email data in a convenient zipped file.
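One advantage of the .mbox format is that it can be read with standard tools; for example, Python’s standard-library mailbox module opens it directly. A minimal sketch (the filename below is a placeholder, not a real export):

```python
import mailbox
import os

def summarise(mbox_path):
    """Yield (date, sender, subject) for each message in an mbox file."""
    for message in mailbox.mbox(mbox_path):
        yield message["Date"], message["From"], message["Subject"]

# "takeout-inbox.mbox" is a hypothetical Google Takeout export.
if os.path.exists("takeout-inbox.mbox"):
    for date, sender, subject in summarise("takeout-inbox.mbox"):
        print(date, sender, subject)
```

A quick listing like this can help an archivist appraise an export before deciding how (or whether) to preserve it.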

But just exporting the emails is only half the problem. Now that they are out, do you preserve them as you would any other digital file, or do you adopt a new process?
But why should we preserve them?
Preserving emails is not a small undertaking, despite how it might seem on the surface. Given how widespread email is and the vast size some threads can reach, some organisations may decide that the cost of preservation is too high for something which might not be used for many years.

However, just like paper archives, email archives present a multitude of evidential benefits and can help to substantiate a story of decision making. In environments where quick decisions are required, it can often be a simple email which determines the outcome. One example might be the decision by an organisation to appoint a certain contractor for a job. If it is later alleged that an incentive was offered for choosing one contractor over another, a losing party may claim that a fair competition was not carried out. By preserving its emails and ensuring they remain accessible, the organisation will be able to justify its position and show whether or not protocols were followed.

Had the organisation not preserved its emails, it would instead have to rely on other means of justification, which would likely not stand up as well if the case went to court. Admittedly this is an extreme example, but just like paper archives, emails have now gained significant evidential value and in some cases can make the difference between costly lawsuits and dismissals.
What work has been done?
At present the sector as a whole is aware of the need to preserve emails; however, aside from one or two institutions, very few places can truly say they are ‘preserving emails’. That said, work on researching the topic and suggesting potential approaches has been done, with the Digital Preservation Coalition releasing a Technology Watch report on the subject back in 2011. Furthermore, earlier this year a collaborative research project was launched between the University of Hull and this archive to investigate what software packages exist for email archiving and whether they can assist in long-term preservation.

In celebration of World Digital Preservation Day, the report from the project is now available to be viewed after having undergone a small update since the time of writing.



Thursday, 29 November 2018

To reorganise or to not reorganise?

A blog post from Jim Costin our Bridging the Digital Gap trainee - written for International Digital Preservation Day

Last year, Jenny Mitcham, our former digital archivist, posted about saving your digital stuff from becoming files which can no longer be opened, and about how to manage a personal digital archive. What I’m going to talk about follows on from that: a unique issue which we both came across recently when putting some new items into our digital archive. This phenomenon can be referred to as the folders within folders within folders...

I’ll forgive you if you’ve never heard that term before. To put it simply, the phrase refers to having multiple folders nested within each other. The picture below, created just for this post, gives an example:


Now, whilst the example above shows how not to name your folders, it does show how easily you can end up nesting folders. Whilst this might be a very good way of organising data and knowing exactly what it is, it can create problems when you are trying to preserve the data in an archival sense. The operating systems we use at the moment to manage our digital archive can only go so deep. So now that I have explained a little bit about nesting, let me explain the problem that can occur.

“I can’t delete?”
As that section title says, that was the problem. You may never have come across being unable to delete a file for the reason I am about to explain, and you might well never do, but this was an issue for us, as we needed to remove that file. The traditional method of file deletion would have worked in any other circumstance: simply click on the file and press delete. Simple, right? Well, not in this case.


Windows decided to throw an error saying:


‘Unable to find file’


and as such Jen and I were a little confused as to why the issue was occurring. Neither of us had come across it before, until I remembered hearing something a while ago about reaching the maximum path length. When you nest folders within folders within folders, at a certain point Windows Explorer stops being able to find the file. The maximum path length (the path is where the file lives, e.g. C:\Users\John\Documents\Letters\2018) for Explorer is 260 characters (the Windows MAX_PATH limit), which as you can imagine can cause issues when items are nested so deeply. Thankfully our IT team were able to fix the issue, but how can we avoid it in future? To answer that question, we need to answer another question first, one which is a lot harder.
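One way to catch this before it bites is to scan incoming material for over-long paths. A small sketch of how that might be done (the directory name is a placeholder):

```python
import os

MAX_PATH = 260  # Windows Explorer's historical path-length limit

def find_long_paths(root, limit=MAX_PATH):
    """Walk a directory tree and list any file paths at or over the limit."""
    offenders = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= limit:
                offenders.append((len(full), full))
    return sorted(offenders, reverse=True)

# Longest offenders first; "incoming-deposit" is a hypothetical folder.
for length, path in find_long_paths("incoming-deposit"):
    print(length, path)
```

Running a check like this on a new deposit flags deeply nested items before they reach a system that cannot handle them.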

Do you preserve the structure or not?
From a paper records perspective, preserving a structure is a little easier, as you have catalogues and indexes which can be used to record where an item is and what it is. When dealing with digital data, however, things become a little more complicated. The data we receive might arrive as an incoherent mess which makes no sense to anyone but the donor. We then have to decide whether to preserve the data in that structure or to rearrange it into something more logical. But just what is more logical?


That question is a little easier to answer, as it can come down to personal preference and is governed by principles very similar to those for traditional archival items. The issue then comes when you try to catalogue and preserve the data. Do you keep it nested within folders and run the risk of Windows Explorer being unable to find it, or do you take the approach of using labels and tags?

The latter approach might make a lot of sense in certain circumstances where you are dealing with a large amount of data. To enable this, you would organise the data into sectors and build relationships between the items. The easiest way to describe this is with a diagram.

Relationships



The above diagram shows how a typical business might split its records down. It will have employee records and records on its assets. The overarching hierarchy defines the data set, and tags on the data can then be used to help build the relationships between records. For example, a part-time staff member might help with a project that creates a physical asset. A tag would therefore be added to that member’s record saying that they worked on project X. By using those tags, a database can be created which allows easy access to the files, meaning that not only are researchers able to find what they want more easily, we can also avoid the issue of excessively long path names.
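The tagging idea can be sketched very simply: records carry tags, and a reverse index lets us find everything linked to a project without any deep nesting. All the record identifiers and tags below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical records: flat identifiers instead of nested folders.
records = {
    "staff/jane-doe": {"type": "employee", "tags": {"part-time", "project-x"}},
    "assets/model-01": {"type": "physical-asset", "tags": {"project-x"}},
    "staff/john-smith": {"type": "employee", "tags": {"full-time"}},
}

# Build a reverse index: tag -> every record carrying that tag.
index = defaultdict(set)
for record_id, record in records.items():
    for tag in record["tags"]:
        index[tag].add(record_id)

# One lookup finds everything related to project X, however it is stored.
print(sorted(index["project-x"]))  # ['assets/model-01', 'staff/jane-doe']
```

In practice this is what a proper database does for you, but the principle is the same: relationships live in the metadata, not in the folder hierarchy.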

Now, whilst that might seem like problem solved, it isn’t really. What happens when the business decides to reorganise and, as a result, that part-time staff member is reassigned and no longer works on that project? Do you preserve the data saying that they worked on the project, or do you modify your records to suit what the business is now like?

Those are all questions which a digital archivist must answer, and they make the task of preserving digital data all the more challenging!


Monday, 19 November 2018

The sustainability of a digital preservation blog...

So this is a topic pretty close to home for me.

Oh the irony of spending much of the last couple of months fretting about the future preservation of my digital preservation blog...!

I have invested a fair bit of time over the last 6 or so years writing posts for this blog. Not only has it been useful for me as a way of documenting what I've done or what I've learnt, but it has also been of interest to a much wider audience.

The most viewed posts have been on the topic of preserving Google Drive formats and disseminating the outputs of the Filling the Digital Preservation Gap project. Access statistics show that the audience is truly international.

When I decided to accept a job elsewhere I was of course concerned about what would happen to my blog. I hoped that all would be well, given that Blogger is a Google supported solution and part of the suite of Google tools that University of York staff and students use. But what would happen when my institutional Google account was closed down?

Initially I believed that as long as I handed over ownership of the blog to another member of staff who remained at the University, then all would be well. However, I soon realised that there were going to be some bigger challenges.

The problem

Once I leave the institution and my IT account is closed, Blogger will no longer have a record of who I am.

All posts that have been written by me will be marked as 'Unknown'. They will no longer have my name automatically associated with them. Not ideal from my perspective, and also not ideal for anyone who might want to cite the blog posts in the future.

The other problem is the fact that once my account is closed down, all images within blog posts that I have posted will completely disappear.

This is pretty bad news!

When a member of staff adds images to a blog post, the usual method is to select an image from the local PC or network drive. What Google then does is store a copy of that image at https://get.google.com/albumarchive/ (in a location that is tied to that individual's account). When the account is closed, all of these blog-related images are also wiped. The images are not recoverable.

So, I could make copies of all my images now and hand them to my colleagues, so that they could put them all back in again once I leave...but who is going to want to do that?

A solution of sorts

I asked IT Support to help me, and a colleague has had some success at extracting the contents of my blog, amending the image urls in the XML and importing the posts back into a test Blogger account with images hosted in a location that isn't associated with an individual staff account.

There is a description of how this result was achieved here and I'm hugely grateful for all of the time that was spent trying to fix this problem for me.

The XML was also amended directly to add the words 'Jenny Mitcham, Digital Archivist' to the end of every blog post, to save me having to open each of the 120 posts in turn and adding my name to them. That was a big help too.
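A rough sketch of the kind of bulk amendment described above (this is not the actual script that was used; the hosts and sample entry are illustrative):

```python
def amend_export(xml, old_host, new_host, byline):
    """Repoint image URLs and append a byline inside each <content> element."""
    xml = xml.replace(old_host, new_host)
    # Blogger post bodies live in <content> elements of the Atom export.
    return xml.replace("</content>", f"<br/><i>{byline}</i></content>")

sample = ('<entry><content type="html">Hello '
          '&lt;img src="https://get.google.com/albumarchive/abc/pic.png"/&gt;'
          '</content></entry>')
print(amend_export(sample,
                   "https://get.google.com/albumarchive/abc",
                   "https://example.org/blog-images",
                   "Jenny Mitcham, Digital Archivist"))
```

Editing the exported XML once, before re-import, is far quicker than opening 120 posts by hand.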

So, in my last couple of weeks at work I have been experimenting with importing the tweaked XML file back into blogger.

Initially, I just imported the XML file back into the blog without deleting the original blog posts. I had understood that the imported blogs would merge with the original ones and that all would be well. Unfortunately though, I ended up with two versions of each blog post - the original one and the new one at a slightly different url.

So, I held my breath, took the plunge and deleted everything and then re-imported the amended XML.

I had envisaged that the imported blog posts would be assigned their original urls but was disappointed to see that this was not the case. Despite the url being included within the XML, blogger clearly had a record that these urls had already been used and would not re-use them.

I know some people link to the blog posts from other blogs and websites. I also interlink between blog posts from within the blog, so a change to all the urls will lead to lots of broken links. Bad news!

I tried going into individual posts and changing the permalink by hand back to the original link, but Blogger would not accept this and kept adding a different number to the end of the url to ensure it did not replicate the url of one of my deleted posts. Hugely frustrating!

Luckily my colleague in IT came up with an alternative solution, adding some clever code into the header of the blog which carries out a search every time a page is requested. This seems to work well, serving up one or more posts based on the url that is requested. Given that the new urls are very similar to the old ones (essentially the same but with some numbers added to the end), the search is very effective and the right post is served up at the top of the page. Hopefully this will work for the foreseeable future and should lead to minimal impact for users of the blog.


Advice for Blogger users

If you are using Blogger from an institutional Google account, think about what will happen to your posts after your account is closed down.

There are a few things you can do to help future proof the blog:
  • Host images externally in a location that isn't tied to your institutional account - for example a Google Team Drive or an institutional website - link to this location from the blog post rather than uploading images directly.
  • Ensure that your name is associated with the blog posts you write by hard coding it in to the text of your blog post - don't rely on blogger knowing who you are forever.
  • Ensure that there are others who have administrative control of the blog so that it continues after your account has been closed.
And lastly - if just starting out, consider using a different blogging platform. Presumably they are not all this unsustainable...!

Apologies...

Unfortunately, with the tweak that has been made to how the images are hosted and pulled in to the posts, some of them appear to have degraded in quality. I began editing each post and resizing the images (which appears to fix the problem), but I have run out of time to work through 120 posts before my account is closed.

Generally, if an image looks bad in the blog, you can see a clearer version of it if you click on it so this isn't a disaster.

Also, there may be some images that are out of place - I have found (and fixed) one example of this but have not had time to check all of them.

Apologies to anyone who subscribes to this blog - I understand you may have had a lot of random emails as a result of me re-importing or republishing blog posts over the last few weeks!

Thanks to...

As well as thanking Tom Smith at the University of York for his help with fixing the blog, I'd also like to thank the Web Archiving team at the British Library who very promptly harvested my blog before we started messing around with it. Knowing that it was already preserved and available within a web archive did give me comfort as I repeatedly broke it!

A plea to Google

Blogger could (and should) be a much more sustainable blogging platform. It should be able to handle situations where someone's account closes down. It should be possible to make the blogs (including images) more portable. It should be possible for an institution to create a blog that can be handed from one staff member to another without breaking it. A blog should be able to outlive its primary author.

I genuinely don't think these things would be that hard for a clever developer at Google to fix. The current situation creates a very real headache for those of us who have put a lot of time and effort into creating content within this platform.

It really doesn't need to be this way!


Thursday, 15 November 2018

Goodbye and thanks

This is my last day as Digital Archivist for the University of York.

Next week I will be taking on a brand new post as Head of Standards and Good Practice at the Digital Preservation Coalition. This is an exciting move for me but it is with some sadness that I leave the Borthwick Institute and University of York behind.

I have been working in digital preservation at the University of York for the last 15 years. Initially with the Archaeology Data Service as part of the team that preserves and disseminates digital data produced by archaeologists in the UK; and since 2012, branching out to work with many other types of digital material at the Borthwick Institute.

This last six years has been both interesting and challenging and I have learnt a huge amount.

Perhaps the biggest change for me was moving from being one of a team of digital archivists to being a lone digital archivist. I think this is one of the reasons I started this blog. I missed having other digital archivists around who were happy to endlessly discuss the merits of different preservation file formats and tools!

Blogging about my work at the Borthwick became a helpful way for me to use the wider digital preservation community as a sounding board and for sense checking what I was doing. I have received some really helpful advice in the comments and the blogs have led to many interesting discussions on Twitter.

In a discipline where resources are often scarce, it makes no sense for us all to quietly invent the same wheel in our own local contexts. Admittedly there is no one-size-fits-all solution to digital preservation, but talking about what we do and learning from each other is so very important.

Of course there have been challenges along the way...

It is difficult to solve a problem that does not have clear boundaries. The use cases for digital preservation in a large institution are complex and ever growing.

I began by focusing on the born digital archives that come to the Borthwick from our donors and depositors. Perhaps if that were the only challenge, we would be further down the line of solving it...

However, we also have the complexities of research data to consider, the huge volumes of digitised content we are producing, the need to digitise audio-visual archives and preserve them in digital formats, the need to preserve the institutional record (including websites, social media, email), and the desire to preserve theses in digital formats. On top of this, is the increasing need to be able to provide access to digital resources. 

The use cases overlap and are not neatly bounded. Multiple integrations with other systems are required to ensure that preservation processes are seamless and can be implemented at scale.

I have frequently reached the limit of my own technical ability. I am an archaeologist with probably above average IT skills but I can only get so far with the knowledge I have. Getting the right level of technical support to move digital preservation forward is key. 

So, I’ve made some mistakes, I’ve changed my mind about some things, I’ve often tried to do too much, but ultimately I've had the freedom to try things out and to share those experiences with the wider community.

Some lessons learned from my 6 years at the Borthwick:
  • Doing something is normally better than doing nothing
  • Accept solutions that are 'good enough' ...don't hold out for 'perfect'
  • Try things out. Research and planning are important, but it is hard to fully understand things without diving in and having a go
  • Digital continuity actually begins quite close to home - consider the sustainability of your blogging platform!

The biggest lesson for me perhaps has been that I have spent much of my 6 years chasing the somewhat elusive dream of an all-singing-all-dancing 'digital preservation system', but in actual fact, the interim measures I have put in place at the Borthwick might be just about ‘good enough’ for the time being.

It is not always helpful to think about digital preservation in 'forever' terms. It is more realistic to consider our role to be to keep digital archives safe to hand over to the next person. Indeed, digital preservation has frequently been likened to a relay race.

So I hereby finish my leg of this particular race and hand over the baton to the next digital archivist...

A big thank you and goodbye to all my colleagues at the Borthwick Institute and across Information Services. It has been fun! :-)




Thursday, 8 November 2018

Testing manual normalisation workflows in Archivematica

This week I traveled to Warwick University for the UK Archivematica meeting. As usual, it was a really interesting day. I’m not going to blog about all of it (I’ll leave that to our host, Rachel MacGregor) but I will blog about the work I presented there.

Followers of my blog will be aware that I recently carried out some file migration work on a batch of WordStar 4 files from screenwriting duo Marks and Gran.

The final piece of work I wanted to carry out was to consider how we might move the original files along with the migrated versions of those files into Archivematica (if we were to adopt Archivematica in the future).

I knew the migration I had carried out was a bit of an odd one so I was particularly interested to see how Archivematica would handle it.

It was odd for a number of reasons.

1. Firstly, I ended up creating 3 different versions of each WordStar file – DOCX, PDF/A and ASCII TXT. After an assessment of the significant properties of the files (essentially, the features that I wanted to preserve) and some intense QA of the files, it was clear that all versions were imperfect. None of them preserved all the properties I wanted them to. They all had strengths and weaknesses and between them, they pretty much captured everything.


2. Secondly I wasn’t able to say whether the files I had created were for preservation or access purposes. Typically, a file migration process will make a clear distinction between those files that are for dissemination to users and those files that are preservation copies to keep behind the scenes.

After talking to my colleagues about the file migrations and discussing the pros and cons of the resulting files, it was agreed that we could potentially use any (or all) of the formats to provide access to future users, depending on user needs. I’m not particularly happy with any of the versions I created being preservation files, given that none of them captures all the elements of the file that I considered to be of value, but they may well need to become preservation versions in future if WordStar 4 files become impossible to read.


3. Thirdly, the names of the migrated versions of the files did not exactly match up with the original files. The original WordStar files were created in the mid-1980s. In this early period of home computing, file extensions appeared to be optional. The WordStar 4 manual actually suggests using the 3 characters available for the file extension to record additional information that won't fit in the 8-character filename.

For many of the WordStar files in this archive, there is no file extension at all. For other files, the advice in the manual has been followed, in that an extension has been used which gives additional context to the filename. For many of the files, WordStar has also created a backup file (essentially an earlier version of the file saved with a .BAK extension, but still a WordStar file). There are therefore scenarios where we need to save information about the original file extension in the migrated version, to ensure that we don’t create filenaming conflicts and don’t lose information from the original filename.
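One way to handle this is to fold the original extension into the new filename, so that files differing only by extension don’t collide after migration. The naming convention and example filenames below are invented for illustration, not the scheme actually used in this project.

```python
import os

def migrated_name(original, new_ext):
    """Fold an information-bearing original extension into the new filename."""
    stem, ext = os.path.splitext(original)
    ext = ext.lstrip(".")
    # "SCRIPT.EP1" and "SCRIPT.EP2" must not both become "SCRIPT.docx".
    return f"{stem}_{ext}.{new_ext}" if ext else f"{stem}.{new_ext}"

print(migrated_name("SCRIPT.EP1", "docx"))  # SCRIPT_EP1.docx
print(migrated_name("SCRIPT.EP2", "docx"))  # SCRIPT_EP2.docx
print(migrated_name("SCRIPT.BAK", "docx"))  # SCRIPT_BAK.docx
print(migrated_name("NOTES", "docx"))       # NOTES.docx
```

The original extension survives as part of the stem, so no information from the 1980s filenames is lost and no two migrated files clash.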



Why not just normalise within Archivematica?

  • These WordStar files are not recognised by the file identification tools in Archivematica (I don’t think they are recognised by any file identification tools). The Format Policy Registry only acts on files that have been identified and for which a rule or policy has been set up
  • Even if they were identifiable, it would not be possible to replicate some of the manual steps we went through to create the migrated versions with command line tools called by Archivematica. 
  • As part of the migration process itself, several days were spent doing QA and checking of the migrated files against the originals as viewed in WordStar 4 and detailed documentation (to be stored alongside the files) was created. Archivematica does give you a decision point after a normalisation so that checking or QA can be carried out, but we’d need to find a way of doing the QA, creating the necessary documentation and associating it with the AIP half way through the ingest process.



How are manually normalised files handled in Archivematica?

I’d been aware for a while that there is a workflow for manually normalised files in Archivematica and I was keen to see how it would work. Reading the documentation, it was clear that the workflow allows for a number of different approaches (for example, normalising files before ingest or at a later date) but is also quite specific about the assumptions on which it is based.

There is an assumption that the names of original and normalised files will be identical (apart from the file extension which will have changed). However, there is a workaround in place which allows you to include a csv file with your transfer. The csv file should provide information about how the originals are related to the preservation and/or access files. Given the filename issues described above, this was something I would need to include.
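As an illustration, such a csv might contain one row per original, relating it to its access and preservation copies. The filenames below are hypothetical (reusing the WordStar examples above), and the exact column layout and file location should be checked against the Archivematica documentation for the version in use:

```csv
objects/SCRIPT.EP1,objects/manualNormalization/access/SCRIPT_EP1.pdf,objects/manualNormalization/preservation/SCRIPT_EP1.txt
objects/NOTES,objects/manualNormalization/access/NOTES.pdf,objects/manualNormalization/preservation/NOTES.txt
```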

There is an assumption that you will know whether the migrated versions of files are intended for preservation or for access. This is a fair assumption – and very much in line with digital preservation thinking, but does it reflect the imperfect real world?

There is also an assumption that there will be no more than one preservation file and no more than one access file for each original (happy to be corrected on this if I am wrong).



Testing the manual normalisation workflow in Archivematica

Without a fully working test version of Archivematica to try things out on, my experimentation was limited. However, a friendly Archivematica guru (thanks Matthew) was able to push a couple of test files through for me and provide me with the AIP and the DIP to inspect.

The good news is that the basic workflow did work – we were able to push an already normalised ‘access’ file and ‘preservation’ file into Archivematica along with the original as part of the ingest process. The preservation files appeared in the AIP and the access files appeared in the DIP as expected.

We also investigated the workflow for adding additional metadata about the migration.

Archivematica creates PREMIS metadata as part of the ingest process, recording the details and outcomes of events (such as virus checks and file normalisations) that it carries out. The fact that Archivematica creates PREMIS events automatically has always been a big selling point for me. As I have mentioned before – who wants to create PREMIS by hand?

Where files are included using the manual normalisation workflow, Archivematica will always create a PREMIS event for the normalisation and if you set up your processing configuration in the right way, it will stop and prompt you to add additional information into the PREMIS eventDetail field. This is a good start but it would be great if a more detailed level of information could be included in the PREMIS metadata.

I wondered what would happen to all the documentation I had created. I concluded that the best way of keeping it alongside the files would be to put it in the SubmissionDocumentation directory as described in the manual and submit it along with the transfer. This information will be stored with the AIP, but the link between the documentation and the normalised files may not be immediately apparent.

What I didn’t test was whether it is possible to push more than one access file into Archivematica using this workflow. I'm assuming that Archivematica will not support this scenario.


Some suggested improvements to the manual normalisation workflow in Archivematica

So, I wanted to highlight a few improvements that could be made.

  1. Allow the user to add PREMIS eventDetail in bulk – at the moment you have to enter the information one file at a time
  2. Allow the user to add information into more than one PREMIS field. Being able to add the actual date of the manual migration into the date field would be a good start. Being able to add an event outcome would also be helpful (by default event outcome is 'None' for files added using the manual normalisation workflow).
  3. Even better – allow the user to be able to add this more detailed PREMIS information through a csv import. Another spreadsheet containing information about the normalisation event could be included in the transfer and be automatically associated with the files and included in the METS.
  4. Allow Archivematica to accept more than one access or preservation file


In conclusion

Testing shows that Archivematica's workflow for manual normalisation will work for a standard scenario and will cope well with changes in filenames (with the addition of a normalization.csv file). However, do not assume it will currently handle more complex scenarios.

I accept that it is impossible for a digital preservation system to do everything we might want it to do. It can’t be flexible and adaptable to every imperfect use case. As a baseline it does have to know whether to put a manually migrated file into the DIP for access or AIP for preservation and it would perhaps not be fair to suggest it should cope with uncertainty.

In his forthcoming book, Trevor Owens makes the interesting point that "Specialized digital preservation tools and software are just as likely to get in the way of solving your digital preservation problems as they are to help." I do therefore wonder how we should address the gap between somewhat rigid, rule-based tools and the sometimes imperfect real-world scenarios we face.



Jenny Mitcham, Digital Archivist

Monday, 22 October 2018

Probably my last UK AtoM user group meeting

This week the 3rd UK AtoM users group meeting was held at the Honourable Artillery Company (HAC) in London. A packed and interesting programme had been put together by Justine Taylor and it was great to see how well attended it was. Indeed a room change was required to accommodate the number of people who wanted to attend.

Elizabeth Wells from Westminster School Archives started off the presentations by talking about how she is using AtoM to catalogue objects and artefacts. Several of us in the room have items in our care that are not archives, but I think Westminster School were the only archive to be looking after a 92-year-old pancake! Being able to catalogue such items in AtoM is a high priority for many AtoM users, given that they don’t want to manage multiple systems.

It is really interesting to hear how different institutions use AtoM and in particular the workarounds they use to resolve specific problems. Elizabeth talked us through the processes she has put in place for storing additional information about objects (such as valuations) that she doesn't want to make available to the wider public. She mentioned how useful a previous UK AtoM meeting had been in highlighting that information within an archival description that is hidden from view within the AtoM interface will still be available to users if they download the EAD. As this was a concern, she is now using the accessions module of AtoM to store information that is confidential.

She also mentioned that she was using the RAD template for describing the objects in her collections. These can sit within an ISAD(G) hierarchy, but the RAD standard gives more flexibility to record different types of items. I had not realised that AtoM allowed you to chop and change between the templates in this way so this was really interesting to hear.

Victoria Peters from Strathclyde University talked to us about their work to link AtoM with their Library Catalogue interface Primo (SUPrimo - the best name ever for a Primo library catalogue!). Following on from York’s own work in this area, they enabled top level records from AtoM to be harvested into Primo and this allows staff and students to more easily discover things that are available in the archives.

They have also been thinking about how to best surface special collections. Special collections are catalogued at item level within the library catalogue but there is no overarching record describing each of the collections (for example who collected the material and why), and no obvious way to enter this information into the library catalogue, which doesn't support hierarchical descriptions. Information about special collections isn't discoverable from AtoM and there is no way to cross link with information that is held by the archives even though there are obvious links between material held in the archives and special collections.

The solution they have come up with is to add a description of each of the special collections into AtoM. This allows links to be made between related archives and special collections and will really help those users who are browsing the archives catalogue to see which special collections they may also be interested in. The description within AtoM then links back to the individual items within SUPrimo for more detailed item level information.

Victoria summed this work up by saying that it isn’t perfect but was a pretty quick and effective way of solving a problem. As a consequence, both archives and special collections are more discoverable and the links between them are clearer. Users do not need to know whether they should go to the library catalogue or the archives catalogue as both archives and special collections are signposted from both systems.

I then updated the group on work to enable EAD harvesting in AtoM. I have previously blogged about phase 1 of the project and wanted to talk about more recent testing since we have upgraded to AtoM 2.4 and future plans to make the harvesting functionality better. This may be the subject of a future blog post….if I have time!

Caroline Catchpole from The National Archives followed on from my presentation to tell us about Discovery and their future plans. The ability to harvest EAD from systems like AtoM is still very much on their wishlist but the development resource is not currently available. She has however extracted some EAD from various AtoM sites in the UK so that she can explore how easy it would be to incorporate it into Discovery. She talked through some of the problems with the “unwieldy beast” that is EAD and how different implementations and lack of consistency can cause problems for aggregators.

After lunch Justine Taylor our host talked us through how she is using the Function entity in AtoM. She has been experimenting with AtoM’s functions as a way to create a useful company structure to hold information about what key activities HAC carries out. This will be another useful way for users to browse the catalogue and find information that is of interest to them.

Lucy Shepherd from Imperial College gave us a brief overview of preparatory work around establishing AtoM and Archivematica. They have not yet got this up and running but she is thinking about how it will be used and what deposit workflows will be put in place. She sees the AtoM community as a key selling point, but mentioned that there were potential challenges around finding the time to complete this exploratory work and around which systems their IT department would support.

Matthew Addis from Arkivum gave us a nice demo of the integration between AtoM and Archivematica and talked through an issue around how the two systems share metadata (or not as the case may be). He has been investigating this because Arkivum's Perpetua service includes both AtoM and Archivematica and a good integration between the two products is something that is required by their customers. He described the use case where clients have digital objects and metadata to add in batches. They want automated preservation using Archivematica, the master copy protected in long term storage and an access version accessible in AtoM with rich and hierarchical metadata to give context and enable search and retrieval.

AtoM supports bulk imports and hierarchical description, but when digital objects are passed through Archivematica, the metadata within the Dissemination Information Package (DIP) is flattened - only Dublin Core metadata is passed to AtoM through the DIP. Archivematica however, will accept various types of metadata and will store them in its Archival Information Package (AIP). This is a potential problem because valuable metadata that is stored in Archivematica will not be associated with the dissemination copy in AtoM unless it is Dublin Core.

Matthew demonstrated a workaround he has been using to get the right level of metadata into AtoM. Once digital objects have been transferred from Archivematica to AtoM at the right point in an existing hierarchy, he imports additional metadata directly into AtoM using the CSV import to enhance the basic Dublin Core metadata that has come through from Archivematica. He suggested that configuring AtoM to generate slugs from the identifier field makes this process easier to automate. He is still thinking about this issue, and in particular whether the AIP in Archivematica could be enhanced by metadata from AtoM.
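To give a flavour of what such an enhancement import might look like: AtoM's CSV import uses templates whose columns map onto descriptive standard fields. The fragment below is a hypothetical sketch (the identifier and values are invented, and the column set is only a small subset of AtoM's ISAD(G) template), so check the AtoM documentation for the full list of supported columns:

```csv
legacyId,identifier,title,scopeAndContent
001,GB-1234-ABC/1,Committee minutes,Minutes of the governing committee with manuscript annotations
```

Matching rows to the descriptions already created by the DIP upload is where slugs derived from the identifier field help, as the identifier then gives a predictable hook for automation.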

Geoff Browell from King's College London talked to us about an ambitious project to create an AtoM catalogue for the whole of Africa. The Archives Africa project has been working with The National Archives of Madagascar and exploring a lightweight way of getting local descriptions into an AtoM instance hosted in the UK using spreadsheets and email.

Lastly, we had an update from Dan Gillean from Artefactual Systems which included some news about initial technical planning for AtoM 3 and an update on the AtoM Foundation. The Foundation has been set up to oversee and support the development, sustainability and adoption of AtoM, specifically in relation to AtoM 3. Dan talked about the benefits in moving the governance of AtoM outside of Artefactual Systems and establishing a more diverse ecosystem. The Foundation will be collecting information from AtoM users about the functionality that is required in AtoM 3 at some point in the future. Dan also revealed that AtoM version 2.4.1 should be with us very soon and that the next UK AtoM Camp will be held at the University of Westminster in July 2019.

I anticipate this will be my last UK AtoM user group meeting given that I am moving on to pastures new next month. It has been really encouraging to see how much the user community in the UK has grown since my first involvement in AtoM back in 2014 and it is great to see the active knowledge sharing and collaboration in the UK user group. Long may it continue!



This post was written by Jenny Mitcham, Digital Archivist

Friday, 28 September 2018

Auditing the digital archive filestore

A couple of months ago I blogged about checksums and the methodology I have in place to ensure that I can verify the integrity and authenticity of the files within the digital archive.

I was aware that my current workflows for integrity checking were 'good enough' for the scale at which I'm currently working, but that there was room for improvement. This is often the case when there are humans involved in a process. What if I forget to create checksums for a directory? What happens if I forget to run the checksum verification?

I am also aware that checksum verification does not solve everything. For example, read all about The mysterious case of the changed last modified dates, or When checksums don't match. The checksum verification process doesn't tell you what has changed, who changed it or when it was changed; it just tells you that something has changed. So perhaps we need more information.
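To illustrate the limitation, here is a minimal sketch of checksum verification in Python (not the actual tool I use - just an illustration of the general approach). Note that all it can report is *which* files no longer match their recorded checksum, nothing about who changed them or when:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest):
    """Compare files against a {path: expected_checksum} manifest.

    Returns a list of paths whose checksum no longer matches (or
    which are missing). It cannot tell you why they changed.
    """
    failures = []
    for path, expected in manifest.items():
        p = Path(path)
        if not p.is_file() or sha256_of(p) != expected:
            failures.append(path)
    return failures
```

Audit logs are a way of filling in exactly the gap this leaves: the who, when and what.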

A colleague in IT Services here at York mentioned to me that after an operating system upgrade on the filestore server last year, there is now auditing support (a bit more information here). This wasn't being widely used yet but it was an option if I wanted to give it a try and see what it did.

This seemed like an interesting idea so we have given it a whirl. With a bit of help (and the right level of permissions on the filestore), I have switched on auditing for the digital archive.

My helpful IT colleague showed me an example of the logs that were coming through. It has been a busy week in the digital archive. I have ingested 11 memory sticks, 24 CD-ROMs and a pile of floppy disks. The logs were extensive and not very user friendly in the first instance.

That morning I had wanted to find out the total size of the born digital archives in the digital archive filestore and had right clicked on the folder and selected 'properties'. This had produced tens of thousands of lines of XML in the filestore logs as the attributes of each individual file had to be accessed by the server in order to make the calculation. The audit logs really are capable of auditing everything that happens to the files!

...but do I really need that level of information? Too much information is a problem if it hides the useful stuff.

It is possible to configure the logging so that it looks for specific types of events. So, while I am not specifically interested in accesses to the files, I am interested in changes to them. We configured the auditing to record only certain types of events (as illustrated below). This cuts down the size of the resulting logs and restricts it just to those things that might be of interest to me.




There is little point in switching this on if it is not going to be of use. So what do I intend to do with the output?

The format this is created in is XML, but this would be more user-friendly in a spreadsheet. IT have worked out how to pull out the relevant bits of the log into a tab delimited format that I can then open in a spreadsheet application.

What I have is some basic information about the date and time of the event, who initiated it, the type of event (eg RENAME, WRITE, ATTRIBUTE|WRITE) and the folder/file that was affected.
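A transformation along these lines could be sketched in Python. This is not the script IT actually wrote, and the element names (event, time, user, type, path) are invented - real audit logs will use different names depending on the server - but it shows the general shape of flattening event XML into tab-delimited rows:

```python
import xml.etree.ElementTree as ET

def audit_events_to_tsv(xml_text):
    """Flatten audit log events into tab-separated rows.

    The element names used here are illustrative placeholders;
    adjust them to match the structure of your server's logs.
    """
    root = ET.fromstring(xml_text)
    rows = ["time\tuser\ttype\tpath"]  # header row
    for event in root.iter("event"):
        rows.append("\t".join(
            event.findtext(field, default="")
            for field in ("time", "user", "type", "path")
        ))
    return "\n".join(rows)
```

The resulting text file opens cleanly in any spreadsheet application, one event per row.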

As I can view this in a spreadsheet application, it is so easy to reorder the columns to look for unexpected or unusual activity.

  • Was there anyone other than me working on the filestore? (there shouldn't be right now)
  • Was there any activity on a date I wasn't in the office?
  • Was there any activity in a folder I wasn't intentionally working on?
The current plan is that these logs will be emailed to me on a weekly basis and I will have a brief check to ensure all looks OK. This will sit alongside my regular integrity checking as another means of assuring that all is as it should be.

We'll review how this is working in a few weeks to see if it continues to be a valuable exercise or should be tweaked further.

In my Benchmarking with the NDSA Levels of Preservation post last year, I put us at level 2 for Information Security (as highlighted in green below).



See the full NDSA levels here

Now we have switched on this auditing feature and have a plan in place for regular checking of the logs, does this now take us to level 4 or is more work required?

I'd be really interested to find out whether other digital archivists are utilising filestore audit logs and what processes and procedures are in place to monitor these.

Final thoughts...

This was a quick win and hopefully will prove a useful tool for the digital archive here at the University of York. It is also a nice little example of collaboration between IT and Archives staff.

I sometimes think that IT people and digital preservation folk don't talk enough. If we take the time to talk and to explain our needs and use cases, then the chances are that IT might have some helpful solutions to share. The tools that we need to do our jobs effectively are sometimes already in place in our institutions. We just need to talk to the right people to get them working for us.

Jenny Mitcham, Digital Archivist