Watching Our Backs

From UA Libraries Digital Services Planning and Documentation
Revision as of 09:08, 6 August 2013 by Kgmatheny (reflecting server switch from libcontent1 to libcontent)


Currently, we are storing archival content (described here: formats) on a Linux server, in the directory structure described here: Organization_of_completed_content_for_long-term_storage. Content which we want included in LOCKSS (ADPNET) is linked into Manifests (as described here: File_Naming_and_Linking_for_LOCKSS).

Once this archival content is placed into storage, anything linked into the Manifest.html pages should NOT be changed. However, since our new delivery platform derives from this stored content, we need to be able to update the metadata as needed. Hence, the primary metadata file is copied as a versioned file, and the versioned file is what is linked into the Manifests for LOCKSS pickup. The metadata file which is NOT versioned is the most recent, overwritable copy.

MD5 Checksums

For the content which is not allowed to change, we have scripts running weekly, prior to the full tape backup, to verify that the MD5 checksums have not changed. If there's an error, we are notified in time to retrieve a good copy from a previous backup before the corrupted item can be written to tape.

On libcontent, in /srv/scripts/md5/cya/, the script which calculates and checks sums is called "md5check".

First it looks for new holder areas (such as u0001, u0003, etc. in the archive). If there are any, it sets up a new location for checksums for that holder. Within each holder directory in the checksums area, there's a file for each collection. Each collection's file contains all the checksums for that collection.

Second, md5check goes through each holder area in turn, checking collections one at a time. If the collection exists, it verifies the checksums for each file. If any have changed, it outputs an error. While traversing the archive it notes new content, and generates checksums for that content before going on to the next collection.

Third, if any new collections were noted in the holder area, the script creates collection md5 files and generates checksums for the content before going on to the next holder area.

Fourth, the script goes through the new holder areas, finds new collections, and notes any new content in existing collections.

The script then logs its run in the checkscripts database, recording that it ran, the timestamp, and any errors.
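The verify-and-extend pass over a single collection might look like the following sketch. The directory layout, variable names, and the `check_collection` helper are illustrative assumptions, not the actual md5check code:

```shell
#!/bin/sh
# Illustrative sketch of one md5check pass: verify recorded checksums
# for a collection, then record checksums for any new files.
ARCHIVE=${ARCHIVE:-/srv/archive}    # holder areas: u0001, u0003, ...
SUMDIR=${SUMDIR:-/srv/checksums}    # one .md5 file per collection

check_collection() {
    # $1 = holder area (e.g. u0001), $2 = collection directory name
    coll="$ARCHIVE/$1/$2"
    sumfile="$SUMDIR/$1/$2.md5"     # assumes the holder dir exists
    if [ -f "$sumfile" ]; then
        # Verify every recorded checksum; report corruption.
        (cd "$coll" && md5sum -c --quiet "$sumfile") \
            || echo "ERROR: checksum mismatch in $1/$2"
    fi
    # Generate checksums for files not yet recorded.
    find "$coll" -type f | while read -r f; do
        rel=${f#"$coll/"}
        grep -qF "  $rel" "$sumfile" 2>/dev/null \
            || (cd "$coll" && md5sum "$rel") >> "$sumfile"
    done
}
```

Keeping one checksum file per collection, as described above, means a corrupted collection can be rechecked or regenerated in isolation.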


Following this script, another script named storeFileSums runs. It goes through all the gathered checksums, compares them to the ones in the md5sums MySQL database, verifies (again) that nothing has changed, and enters any new checksums with a timestamp.
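In spirit, storeFileSums turns the gathered checksum files into rows for the md5sums database. A simplified sketch follows; the `sums` table, its columns, and the `emit_sql` helper are assumptions, not the real schema, and the real script also compares each value against what is already stored:

```shell
#!/bin/sh
# Sketch of the storeFileSums idea: convert gathered checksum files
# into SQL for the md5sums MySQL database, stamped with the run time.
SUMDIR=${SUMDIR:-/srv/checksums}

emit_sql() {
    now=$(date '+%Y-%m-%d %H:%M:%S')
    # Each .md5 file holds lines of the form "<checksum>  <filename>".
    find "$SUMDIR" -name '*.md5' | while read -r sumfile; do
        while read -r sum name; do
            printf "INSERT INTO sums (path, md5, seen) VALUES ('%s', '%s', '%s');\n" \
                "$name" "$sum" "$now"
        done < "$sumfile"
    done
}
# emit_sql | mysql md5sums
```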

This database is backed up monthly, and regular copies kept on

Errors are sent to and

This script also logs its run in the checkscripts database, recording the timestamp and any errors.

Automated Script Verifications

Once a week, a script called "checkscripts" in /srv/scripts/cya looks through the entries in the checkscripts database for the past week, and compares them with the list of existing cron scripts and when they are due to run.

  The checkscripts database consists of 2 tables: "ran" and "scripts."
  "ran" contains an auto-incrementing primary key (num), a scriptid (a numeric identifier for the script),
  a datestamp (when the script ran), and errors.
  "scripts" contains a numeric id (corresponding to scriptid in "ran"), scriptname, server, the directory
  where the script resides, the cron job that calls it, textual descriptions of when it runs ("runswhen")
  and what it does ("doeswhat"), and the names of any scripts it precedes (preceeds) or succeeds (succeeds)
  in order to work properly.
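The description above might correspond to DDL along these lines. The column types and the `create_tables_sql` wrapper are guesses; only the table and column names come from the description:

```shell
#!/bin/sh
# Hypothetical DDL for the checkscripts database; column names follow
# the description above, column types are assumptions.
create_tables_sql() {
    cat <<'SQL'
CREATE TABLE ran (
    num       INT AUTO_INCREMENT PRIMARY KEY, -- auto-incrementing key
    scriptid  INT NOT NULL,                   -- matches scripts.id
    datestamp DATETIME NOT NULL,              -- when the script ran
    errors    TEXT
);
CREATE TABLE scripts (
    id         INT PRIMARY KEY,   -- corresponds to ran.scriptid
    scriptname VARCHAR(255),
    server     VARCHAR(255),
    directory  VARCHAR(255),      -- where the script resides
    cronjob    VARCHAR(255),      -- the cron job that calls it
    runswhen   TEXT,              -- description of when it runs
    doeswhat   TEXT,              -- description of what it does
    preceeds   VARCHAR(255),      -- scripts it must run before
    succeeds   VARCHAR(255)       -- scripts it must run after
);
SQL
}
# create_tables_sql | mysql checkscripts
```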

If any scheduled scripts did NOT run, or any of them logged errors, this script sends emails to notify us of the problems. Of course, this script also logs its own run in the checkscripts database, recording that it ran and when.
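The core comparison reduces to a set difference: which scheduled scripts have no "ran" entry this week? A simplified sketch, assuming the two id lists have already been pulled out of the database into files (the real script reads them from the checkscripts database directly):

```shell
#!/bin/sh
# Sketch of the weekly comparison in checkscripts: report script ids
# that were scheduled but never logged a run.
missing_scripts() {
    # $1 = file of script ids scheduled this week
    # $2 = file of script ids that logged a run
    sched=$(mktemp); ran=$(mktemp)
    sort "$1" > "$sched"
    sort "$2" > "$ran"
    comm -23 "$sched" "$ran"    # lines only in the scheduled list
    rm -f "$sched" "$ran"
}
# missing_scripts scheduled.txt ran.txt | mail -s "scripts did not run" us
```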

Then, since any server can go down, and multitudes of error possibilities exist, a third script on a third server ("checkscriptcheck" in /home/jlderidder/cya) runs to verify that the checkscripts script ran as it should. Again, this one sends us any errors, and it logs its run in the checkscripts database.

Database Backups

Since the checksums are backed up in a MySQL database, and a second MySQL database manages the scripts, there's a cron script (called "backups") on both servers which backs up selected databases monthly. On one (in /home/jlderidder/scripts/cya), backups go into /contentdbs/backups/databases/; on the other (in /srv/scripts/cya/), backups go into /srv/backups/. Both scripts delete backups over a year old. Currently the list of databases backed up on one server is this:

  1. mysql

And the list of databases currently backed up on the other is this:

  1. InfoTrack ( see Tracking_for_the_long_term)
  2. mysql
  3. Acumen
  4. md5sums
  5. metaview (the test database for Acumen)
  6. steve_museum (used for Tagit social tagging of content)
  7. transcribe (with trmediawiki below, used for Transcribe user transcriptions of content)
  8. trmediawiki

These lists are subject to change.
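A monthly dump-and-prune cycle of this sort can be sketched as follows. The database list, paths, and function names are placeholders, and credentials are assumed to come from the usual MySQL option files:

```shell
#!/bin/sh
# Sketch of the monthly "backups" cron job: dump selected databases,
# then prune dumps over a year old. All names here are placeholders.
BACKUPDIR=${BACKUPDIR:-/srv/backups}
DATABASES=${DATABASES:-"mysql md5sums"}

backup_databases() {
    stamp=$(date +%Y%m)
    for db in $DATABASES; do
        mysqldump "$db" | gzip > "$BACKUPDIR/$db-$stamp.sql.gz"
    done
}

prune_old() {
    # Delete backups over a year old.
    find "$BACKUPDIR" -name '*.sql.gz' -mtime +365 -delete
}
```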

Index Backups

And since CONTENTdm seems to be so prone to corruption, we also have a script called "indexes" (in /home/jlderidder/scripts/metadata/backups/) which backs up both the indexes and the binary index monitor files for all content currently in the CONTENTdm database. These are compressed as .tar.gz and stored in /contentdbs/backups/indexes. This script also checks in with the checkscripts database, and deletes backups over a year old.
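That backup step might be sketched as below. The index directory location (INDEXDIR) and function name are assumptions; the real paths on the CONTENTdm server will differ:

```shell
#!/bin/sh
# Sketch of the "indexes" script: tar/gzip the CONTENTdm index files
# and prune archives over a year old. INDEXDIR is an assumption.
INDEXDIR=${INDEXDIR:-/opt/contentdm/indexes}
DEST=${DEST:-/contentdbs/backups/indexes}

backup_indexes() {
    stamp=$(date +%Y%m%d)
    tar -czf "$DEST/indexes-$stamp.tar.gz" -C "$INDEXDIR" .
    # Delete backups over a year old.
    find "$DEST" -name '*.tar.gz' -mtime +365 -delete
}
```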