In this scenario, we have lost some of the archival files, and we have no backups for them, either in ADPNet or otherwise.
1) The first task is identification of what's missing:
a) One method of identifying what is lost is to use the list of files for which we have captured checksums (stored in flat files on libcontent and in a database on encompass). Comparing against what's in Acumen would not be sufficient, because Acumen does not include redacted content. (The DS Unit Head or Repository Manager could do this.)
b) The monthly Acumen collection list (emailed to several), or the Tracking Filenames spreadsheet on the share drive, can help determine which collections are missing. This method would be more difficult, as someone would have to look through every directory in turn.
c) Jeremiah’s count script on the share drive has a record of derivatives in Acumen. This record could be compared against the archive via script by the Digitization Manager, Repository Manager, or DS Unit Head.
d) April and Marina have copies of the original spreadsheets on the share drive. This method would be even more difficult, as there's a single spreadsheet for each of hundreds of collections.
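The checksum-based comparison in method (a) could be scripted. The sketch below is a minimal illustration only: the manifest format (`md5  relative/path` lines), file names, and paths are assumptions, not the actual layout of the flat files on libcontent.

```python
import hashlib
from pathlib import Path

def load_manifest(manifest_path):
    """Parse a flat checksum manifest of 'md5  relative/path' lines.
    (Format assumed; adjust to the actual libcontent flat files.)"""
    manifest = {}
    for line in Path(manifest_path).read_text().splitlines():
        if not line.strip():
            continue
        checksum, _, relpath = line.partition("  ")
        manifest[relpath.strip()] = checksum.strip()
    return manifest

def find_lost_or_corrupt(manifest, archive_root):
    """Compare the manifest against what survives on disk.
    Returns (lost, corrupt): files absent entirely, and files whose
    current MD5 no longer matches the captured checksum."""
    root = Path(archive_root)
    lost, corrupt = [], []
    for relpath, expected in manifest.items():
        f = root / relpath
        if not f.exists():
            lost.append(relpath)
        elif hashlib.md5(f.read_bytes()).hexdigest() != expected:
            corrupt.append(relpath)
    return sorted(lost), sorted(corrupt)
```

The resulting lists would then go to the archivists for prioritization in step 2.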
2) We determined that after the loss is identified, the archivists would set the priorities for re-scanning the lost content.
3) The Unit Head (or Repository Manager) would modify the MD5 checksum database entries for all the lost content.
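Updating the checksum records in step 3 amounts to recomputing MD5s for the replacement files. A minimal sketch follows; the directory layout is hypothetical, and writing the results back to the flat files on libcontent and the encompass database is not shown, since that schema is not documented here.

```python
import hashlib
from pathlib import Path

def new_checksums(rescanned_root):
    """Compute fresh MD5s for every re-digitized file under rescanned_root,
    keyed by path relative to that root. The mapping can then be written
    back to the checksum stores (flat files and database) as appropriate."""
    root = Path(rescanned_root)
    return {
        str(f.relative_to(root)): hashlib.md5(f.read_bytes()).hexdigest()
        for f in root.rglob("*")
        if f.is_file()
    }
```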
4) Digital Services would then digitize the content as identified. Old metadata spreadsheets would be pulled from the DS share drive store for this purpose, but if remediated MODS for the content are already online, new MODS would not be created to overwrite them.
5) Recreated content would be restored to the archive. Existing MODS, EADs, transcripts, and collection XML would be pulled from Acumen into the archive.