Watching Our Backs
Currently, we are storing archival content on a Linux server, in the directory structure described here: Organization_of_completed_content_for_long-term_storage. Content which we want included in LOCKSS (ADPNET) is linked into Manifests, as described here: File_Naming_and_Linking_for_LOCKSS.
Once this archival content is placed into storage, anything linked into the Manifest.html pages should NOT be changed. However, since our new delivery platform derives from this stored content, we need to be able to update the metadata as needed. Hence, the primary metadata file is copied as a versioned file, and the versioned file is what is linked into the Manifests for LOCKSS pickup. The metadata file which is NOT versioned is the most recent, overwritable copy.
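The versioning step can be sketched as follows in Python. The filenames and numbering scheme here are hypothetical, for illustration only; the point is that the frozen, versioned copy is what the Manifests link to, while the unversioned file stays overwritable.

```python
import shutil
from pathlib import Path

def version_metadata(meta_path, version):
    """Copy the overwritable metadata file to a frozen, versioned name.
    The versioned copy is what gets linked into the Manifests for LOCKSS
    pickup, and is never changed again."""
    meta = Path(meta_path)
    versioned = meta.with_name(f"{meta.stem}.v{version}{meta.suffix}")
    shutil.copy2(meta, versioned)  # preserve timestamps on the frozen copy
    return versioned
```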
For the content which is not allowed to change, we have scripts running weekly to verify that the md5 checksum has not changed, prior to the full tape backup. If there's an error, we are notified in time to retrieve a good copy from a previous backup, before the corrupted item can be written to tape.
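The core of this weekly verification is straightforward: recompute each file's MD5 and compare it to the stored sum. A minimal sketch in Python (the actual script is described below; this just illustrates the check):

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    """Stream the file in chunks so large archival masters
    are not read into memory all at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify(stored):
    """stored: {path: expected_md5}.
    Return the paths whose checksums no longer match."""
    return [p for p, expected in stored.items() if md5_of(p) != expected]
```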
On libcontent, in /srv/scripts/md5/cya/, the script which calculates and checks sums is called "md5check".
First it looks for new holder areas (such as u0001, u0003, etc. in the archive). If there are any, it sets up a new location for checksums for that holder. Within each holder directory in the checksums area, there's a file for each collection. Each collection's file contains all the checksums for that collection.
Second, md5check goes through each holder area in turn, checking collections one at a time. If the collection exists, it checks the checksums for each file. If they have changed, it outputs an error. While traversing the archive it notes new content, and generates checksums for those before going on to the next collection.
Third, if any new collections were noted in the holder area, the script creates collection md5 files and generates checksums for the content before going on to the next holder area.
Fourth, the script goes through the new holder areas, finds new collections, and notes any new content in existing collections.
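The per-collection pass described above amounts to: walk the collection directory, flag any file whose checksum has changed, and record checksums for new content. A sketch of that logic (function and variable names are illustrative, not taken from md5check itself):

```python
import os, hashlib

def check_collection(coll_dir, sums):
    """Compare files under coll_dir against sums ({relative_path: md5}).
    Returns (changed, new). New files get their checksums added to sums;
    changed files are errors, since archival content must not change."""
    changed, new = [], []
    for root, _, files in os.walk(coll_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, coll_dir)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            if rel not in sums:
                sums[rel] = digest   # new content: record its checksum
                new.append(rel)
            elif sums[rel] != digest:
                changed.append(rel)  # corruption or unauthorized change
    return changed, new
```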
The script then records its run in the checkscripts database: that it ran, the timestamp, and any errors.
Following this script, another script named storeFileSums runs. This one goes through all the gathered checksums, compares them to the ones in the md5sums MySQL database, verifies (again) that nothing has changed, and enters any new checksums that are found, with a timestamp.
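The compare-and-insert logic of storeFileSums might look like the following. This is a sketch only: sqlite3 stands in for the MySQL database, and the table and column names (path, md5, entered) are assumptions, not the actual md5sums schema.

```python
import sqlite3, datetime

def store_sums(conn, gathered):
    """gathered: {path: md5}. Verify each sum against the md5sums table;
    return paths that mismatch, and insert rows (with a timestamp) for
    paths not yet in the database."""
    errors = []
    now = datetime.datetime.now().isoformat()
    cur = conn.cursor()
    for path, digest in gathered.items():
        row = cur.execute("SELECT md5 FROM md5sums WHERE path = ?",
                          (path,)).fetchone()
        if row is None:
            cur.execute("INSERT INTO md5sums (path, md5, entered) VALUES (?, ?, ?)",
                        (path, digest, now))
        elif row[0] != digest:
            errors.append(path)  # second line of defense against corruption
    conn.commit()
    return errors
```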
This database is backed up monthly, and regular copies kept on libcontent.lib.ua.edu.
Errors are sent to email@example.com and firstname.lastname@example.org.
This script also records its run in the checkscripts database: that it ran, the timestamp, and any errors.
Once a month, makeMix runs through the archive looking for TIFF files that do not yet have technical metadata captured. FITS is run against each such TIFF, and a MIX metadata file is generated from the output and additional information that we deemed necessary or important. We also keep a copy of the FITS output, as a record of what tools were used and the results of each test.
This script also adds some of the information to the InfoTrack:md5sums imageTechMed table for administrative management purposes.
If a serious problem is encountered when testing the TIFF, such as "invalid", "not well-formed", or the file not actually being a TIFF, then the technical metadata is versioned, and the FITS file is written to the share drive for review; the image file name and location are added to a tiffList for retrieval for repair. Once repaired, the TIFFs will be re-archived and the technical metadata regenerated.
Because this script uses a good bit of CPU power, it is set at a low priority level (using "nice") and only runs between 9 pm and 8 am when the indexer and md5 scripts aren't running.
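The time-window guard and the low-priority FITS invocation could be sketched like this. The FITS flags (`-i`, `-o`) follow the standard fits.sh command line, but treat them, and the nice level, as assumptions rather than a record of what makeMix actually does:

```python
import datetime, subprocess

def in_run_window(now=None):
    """makeMix only works between 9 pm and 8 am, when the
    indexer and md5 scripts aren't running."""
    now = now or datetime.datetime.now()
    return now.hour >= 21 or now.hour < 8

def run_fits(tiff, out_xml):
    """Run FITS against one TIFF at low CPU priority via nice.
    Assumed invocation; not taken from the makeMix source."""
    subprocess.run(["nice", "-n", "19", "fits.sh", "-i", tiff, "-o", out_xml],
                   check=True)
```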
Automated Script Verifications
Once a week, a script called "checkscripts" on libcontent.lib.ua.edu in /srv/scripts/cya looks through the entries in the checkscripts database for the past week, and compares them with the list of entries of existing cron scripts and when they are due to run.
The checkscripts database consists of two tables: "ran" and "scripts." Ran contains an auto-incrementing primary key (num), scriptid (a numerical identifier for the script), datestamp (when the script ran), and errors.
Scripts contains a numerical id (which corresponds to the scriptid in "ran"), scriptname, server, directory where the script resides, the cron job that calls it, and a textual description of when it runs ("runswhen") and what it does ("doeswhat") as well as names of any scripts that it precedes (preceeds) or succeeds (succeeds) in order to work properly.
If any scheduled scripts did NOT run, or any of them logged errors, this script sends emails to notify us of problems. It also sends email to reassure us that all scripts ran on time (and notes if any extra runs were logged). Of course, this script also records its own run in the checkscripts database, with its timestamp.
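The schema and the weekly "did everything run?" query can be sketched as follows. sqlite3 stands in for the MySQL database here, and the query logic is an illustration of the comparison described above, not the checkscripts source:

```python
import sqlite3

# In-memory stand-in for the checkscripts MySQL database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ran (
    num      INTEGER PRIMARY KEY AUTOINCREMENT,
    scriptid INTEGER,
    datestamp TEXT,
    errors   TEXT
);
CREATE TABLE scripts (
    id INTEGER PRIMARY KEY,
    scriptname TEXT, server TEXT, directory TEXT, cronjob TEXT,
    runswhen TEXT, doeswhat TEXT, preceeds TEXT, succeeds TEXT
);
""")

def missing_runs(conn, since):
    """Return (scripts with no 'ran' entry since the given date,
    any error messages logged since that date)."""
    missed = conn.execute("""
        SELECT s.scriptname FROM scripts s
        WHERE NOT EXISTS (SELECT 1 FROM ran r
                          WHERE r.scriptid = s.id AND r.datestamp >= ?)""",
        (since,)).fetchall()
    errs = conn.execute(
        "SELECT scriptid, errors FROM ran WHERE datestamp >= ? AND errors <> ''",
        (since,)).fetchall()
    return [m[0] for m in missed], errs
```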
Since the checksums are backed up in a MySQL database, and a second MySQL database manages the scripts, there's a cron script (called "backups") on both content.lib.ua.edu (in /home/jlderidder/scripts/cya; backups go into /contentdbs/backups/databases/) and libcontent.lib.ua.edu (in /srv/scripts/cya/; backups go into /srv/backups/ ) which backs up selected databases monthly. Both scripts delete backups over a year old. Currently the list of databases backed up on content.lib.ua.edu is this:
And the list of databases currently backed up on libcontent.lib.ua.edu is this:
- InfoTrack ( see Tracking_for_the_long_term)
- md5sums (for more info, see Tracking_for_the_long_term and Image_Tech_Metadata)
- acumen_staging (the test database for Acumen)
- steve_museum (used for Tagit social tagging of content)
- transcribe (with trmediawiki below, used for Transcribe user transcriptions of content)
These lists are subject to change.
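The pruning step of the backups script, deleting dumps over a year old, amounts to something like this (a sketch under the assumption that each backup is a file whose modification time reflects when it was made):

```python
import os, time

def prune_backups(backup_dir, max_age_days=365):
    """Delete database dumps older than max_age_days from the
    backups directory; return the names of the files removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(backup_dir):
        path = os.path.join(backup_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```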