The diagram below shows the creation of the EAD in Archivists Toolkit (AT) during processing. The archivists consider the copy in Archivists Toolkit to be the copy of record; however, we have found that reloading EADs containing links to component items modifies the links, and reexporting modifies them further. So we have altered our workflow. While the EAD in AT is the copy of record for analog material, the delivery EAD (containing the links to digitized content) will be stored separately. Every time EADs are modified by the archivists, they must go through the item-level linking process again.
1) EADs are exported by archivists from Archivists Toolkit and placed in the "new" or "remediated" folders on the share drive, in the S:\Special Collections\Digital_Program_files\EAD directory.
2) Every Friday night, a script called "getEADs" (File:GetEads.txt -- this script no longer places the EADs live in Acumen) picks up these EADs, creates a datestamped directory under "uploaded" on the share drive (for example, "uploaded_new_20100803"), copies the EADs into that directory (so the archivists will know what was picked up when), and then places them in a "notInDbase" directory on libcontent (under /srv/deposits/EADs/).
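The pickup step above can be sketched as follows. This is an illustrative Python sketch, not the actual getEADs script; the function name and exact file layout are assumptions:

```python
# Hypothetical sketch of the getEADs pickup: copy each EAD into a
# datestamped "uploaded" record, then queue it in notInDbase.
import shutil
from datetime import date
from pathlib import Path

def pick_up_eads(source: Path, uploaded_root: Path, dest: Path) -> list:
    """Copy new EADs to a datestamped 'uploaded' folder, then move them to dest."""
    stamp = date.today().strftime("%Y%m%d")
    # e.g. uploaded_new_20100803 -- shows the archivists what was picked up when
    uploaded = uploaded_root / "uploaded_{}_{}".format(source.name, stamp)
    uploaded.mkdir(parents=True, exist_ok=True)
    dest.mkdir(parents=True, exist_ok=True)
    picked = []
    for ead in sorted(source.glob("*.xml")):
        shutil.copy2(ead, uploaded / ead.name)   # archivists' record copy
        shutil.move(str(ead), str(dest / ead.name))  # queue for processing
        picked.append(ead.name)
    return picked
```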
3) Before digitized content can be linked into the EAD (via script), the EAD is tested to ensure that it's in good condition. For example, we don't want to see series within subseries, or multiple locations for a particular box and folder. "eadModsTester" (in /srv/scripts/eads/) identifies which EADs we can link, outputs several lists of the problems found, and creates the FaList (used by the next script) of what can and cannot be linked. When new types of containers started showing up in the error output, we asked for clarification of what can contain what. Below is a diagram of the information the archivists provided on 1/29/13:
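The two example checks named above (series inside subseries, duplicate box/folder locations) could look something like this. This is a minimal sketch, not the real eadModsTester; it assumes typical EAD 2002 `level` attributes and `container` elements:

```python
# Sketch of two eadModsTester-style sanity checks (illustrative only).
import xml.etree.ElementTree as ET
from collections import Counter

def check_ead(xml_text):
    problems = []
    root = ET.fromstring(xml_text)

    # Rule 1: a series must never appear inside a subseries.
    def walk(elem, inside_subseries=False):
        level = elem.get("level")
        if level == "series" and inside_subseries:
            problems.append("series nested within subseries")
        for child in elem:
            walk(child, inside_subseries or level == "subseries")
    walk(root)

    # Rule 2: a given box/folder pair should occur in only one location.
    pairs = []
    for elem in root.iter():
        box = folder = None
        for cont in elem.findall("container"):
            if cont.get("type") == "box":
                box = (cont.text or "").strip()
            elif cont.get("type") == "folder":
                folder = (cont.text or "").strip()
        if box and folder:
            pairs.append((box, folder))
    for (b, f), n in Counter(pairs).items():
        if n > 1:
            problems.append("box {} folder {} appears {} times".format(b, f, n))
    return problems
```

An EAD that passes returns an empty list; anything returned would go into the problem reports for the archivists.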
4) "linkInContent" uses the FaList created by eadModsTester, and pulls EADs from /srv/deposits/EADs/notInDbase/. This script goes through the Acumen directories, reading box and folder values in the item-level MODS, hunts through the appropriate EAD to match up the location, creates PURL links and enters them into the EAD, puts unlinked version in /srv/deposits/EADs/unlinked, linked version in /srv/deposits/EADs/notInDbase and LINKED folder (in /srv/scripts/eads) after backing up previously linked version (datestamped) into backups (also in /srv/scripts/eads/).
Once a month, Jody collects all online EADs so that newly digitized content can be linked in. After placing them in a single directory, she modifies a line in eadModsTester and linkInContent to point to this directory and runs those two scripts (steps 3 and 4). Step 5 follows this monthly process.
5) THIS STEP IS FOR LINKING NEWLY DIGITIZED CONTENT into EADs previously uploaded. If any linking was done (previous step), linkedEADlive will copy the EAD into Acumen, place a copy in the /srv/deposits/EADs/new directory for uploading to the archive, give the archivists a copy on the share drive, and load a copy into the /srv/deposits/EADs/linked directory. Remaining EADs in notInDbase may be discarded, as they are old copies that did not link.
Note: Currently, the lists of problems found are copied to the archivists' area under S:\Special Collections\Digital_Program_files\EAD\Feedback, into the summaries and byEAD subdirectories. These lists still need to be sorted into: changes that need to be made to the EAD; EADs that must be linked by hand (they already contain unlinked items); collections that require MODS remediation (item-level metadata repair); and script errors.
5 alternative) THIS STEP IS FOR NEW or CHANGED (by archivists) EADS ONLY. "EadsToDbase" pulls from /srv/deposits/EADs/notInDbase/ and updates the database (including replacing a changed title and abstract); the values in the database appear in the online collection list. It also puts the EADs live online in Acumen and moves the copy from notInDbase to /srv/deposits/EADs/new.
6) "waitCheckEADs" checks to see if the EAD has changed from the last version cached. If not, it is deleted from the deposits directory. If so, it checks to see if this collection has been released into LOCKSS (and on what date). If it has, the script asks if you are going to go ahead and archive; if you say yes, the script will copy the existing manifest to one ending in "_LOCKSS_$date" where $date is today's date. We need this because LOCKSS collects each version of manifest, and we need to know how many bytes we have in the preservation architecture, as it impacts our costs. Try NOT to archive to a collection frequently, or within 2-3 weeks of release to LOCKSS.
7) Remove RelocateManifests. Uncomment $test = 1 in relocatingEads and run it.
8) Check moveme & relocateManifests to verify that the manifests will be written correctly, and verify in moveme that the EADs are going to be copied to the correct place.
9) Comment $test = 1; back out and re-run relocatingEads. "relocatingEads" pulls from /srv/deposits/EADs/new, locates where the EADs go in the archive, versions them as necessary, and links them into existing LOCKSS manifests (or creates new ones as needed).
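The $test flag in relocatingEads follows a common dry-run pattern: with the flag on, the script reports what it would do without touching the archive. A minimal Python equivalent of that pattern (names and archive layout are hypothetical, not the actual script's):

```python
# Hypothetical dry-run pattern, as used via $test in relocatingEads:
# report planned copies; only perform them when test is off.
import shutil
from pathlib import Path

def relocate_eads(deposit: Path, archive: Path, test: bool = True) -> list:
    actions = []
    for ead in sorted(deposit.glob("*.xml")):
        dest = archive / ead.stem / ead.name   # one directory per EAD (assumed)
        actions.append("{} -> {}".format(ead.name, dest))
        if not test:   # real run: actually place the EAD in the archive
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(ead, dest)
    return actions
```

Running once with test=True and inspecting the returned actions corresponds to steps 7-8; running again with the flag off corresponds to step 9.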
10) Run Checkem to verify that the EADs have been copied over correctly and to delete them from the deposits directory.
11) Check /srv/deposits/EADs/new to make sure nothing is left. If anything remains in this location, there is a problem and you need to figure out what it is.
Some discussion of the linking out from EADs, with instructions and implementation details, can be found in Linking_out_from_EADS. We are currently in the throes of analyzing and repairing data entry in both the item-level metadata and the EADs to enable even more automated linking to digitized content from the finding aids.
updated 7/1/11 Jlderidder