Electronic Theses and Dissertations

From UA Libraries Digital Services Planning and Documentation
Revision as of 11:06, 27 April 2010 by Jlderidder (Talk | contribs)


[Workflow diagram: ETDs 20100426.png]

Workflow Overview

A. ProQuest uploads deposits of zip files to the content.lib.ua.edu server via ftp into the ftpaccess home directory, and notifies Janet Lee-Smeltzer of the upload. Janet notifies the Metadata Librarian (in this case, Shawn) to process them.

B. Shawn runs the Perl script "moveContent", located in the scripts directory of her home area on content.lib.ua.edu. This script picks up all zip files sitting in the etd_deposits directory (which corresponds to the ftpaccess home area), identifies the date each file was deposited, and relocates the files into a directory in etd_deposits named for the date of deposit (yyyymmdd). (This script and the following one encompass the tasks that Jody was doing, outlined here: Preprocessing ETDs)
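The actual "moveContent" script is Perl; a minimal Python sketch of the same behavior (gather zip files, derive a yyyymmdd deposit date from each file's timestamp, and move it into a matching subdirectory) might look like this. The function name and the use of modification time as the deposit date are assumptions for illustration.

```python
import os
import shutil
import time

def move_deposits(deposit_dir):
    """Move each *.zip in deposit_dir into a yyyymmdd subdirectory."""
    for name in os.listdir(deposit_dir):
        if not name.endswith(".zip"):
            continue
        path = os.path.join(deposit_dir, name)
        # Assume the file's modification time reflects the deposit date.
        stamp = time.strftime("%Y%m%d", time.localtime(os.path.getmtime(path)))
        dated_dir = os.path.join(deposit_dir, stamp)
        os.makedirs(dated_dir, exist_ok=True)
        shutil.move(path, os.path.join(dated_dir, name))
```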

C. Shawn then runs the Perl script "processEtds", located in her scripts directory. This script asks which directory to process, and whether the files are for May, September, or December graduation, so it knows when to start the embargoes. It creates a subdirectory in the "working" directory that matches the name of the selected directory, and within it three subdirectories: OPEN, PRQ, and CONTENT. OPEN is where it unzips the deposits; PRQ is where it puts the renamed and altered XML for processing; CONTENT is where it puts the renamed content files.

The "processEtds" script performs these tasks:

  1. extracts from each metadata file the following information:
    1. title,
    2. author,
    3. year the manuscript was completed
    4. year the degree was awarded
    5. embargo code (if any)
  2. queries the InfoTrack.bornDigital MySQL database table on libcontent1.lib.ua.edu to find the next file number to assign, and the InfoTrack.lookup table to determine the next persistent URL;
  3. records the assigned item number, the assigned PURL, the author, and the title in these tables;
  4. inserts the assigned item number (filename, minus the extension) into a UA_identifier attribute and the assigned PURL into a UA_purl attribute within the DISS_submission field in a copy of the metadata
  5. places this altered copy of the metadata into the PRQ subdirectory; the copy will be named with the assigned filename followed by ".prq.xml" (thus a correctly named file would be: u0015_0000001_0000023.prq.xml) to indicate that this is still ProQuest XML.
  6. copies all the bitstreams and renames them appropriately, placing them in a CONTENT subdirectory. The primary PDF will be named with the assigned filename followed by ".pdf"; subsidiary files will be numbered sequentially, with a 4-digit left-padded number attached to the assigned filename, followed by the extension. So the first subsidiary file for this file (if a jpeg) would properly be named u0015_0000001_0000023_0001.jpg, and the second (if a text file) would be named u0015_0000001_0000023_0002.txt.
  7. creates an entry for each record in a tab-delimited xmlList.xml file which contains the following fields:
    1. assigned filename
    2. original filename
    3. author
    4. title
    5. directory (created out of zip file name)
    6. year the manuscript was completed
    7. year the degree was awarded
    8. an indicator of the existence of subsidiary files (a count)
    9. the embargo code
    10. date item is made available via the web
    11. the assigned PURL
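The naming convention in step 6 can be sketched as a small helper: the primary PDF takes the assigned identifier, and each subsidiary file gets a sequential 4-digit, left-padded number before its extension. This is an illustration of the convention, not the actual script; the function name is hypothetical.

```python
def content_names(identifier, subsidiary_exts):
    """Return the renamed file list: primary PDF first, then numbered
    subsidiary files (u..._0001.jpg, u..._0002.txt, ...)."""
    names = [identifier + ".pdf"]
    for i, ext in enumerate(subsidiary_exts, start=1):
        names.append("%s_%04d.%s" % (identifier, i, ext))
    return names
```

For example, an item with one jpeg and one text file yields the names given in step 6 above.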

D. Shawn works with the deposited content to create valid MODS files meeting our local profile, which include the assigned identifier and PURL, and are named for the assigned identifier with a ".mods.xml" extension.

E. Shawn also creates valid MARC files for upload into our OPAC system; these include the assigned identifier and PURL.

F. She uploads the finished MODS and the associated renamed content into a datestamp-titled MODS directory in her home area on libcontent1.lib.ua.edu.

G. Shawn runs the relocate_all.pl script in her home directory. This places all the content and MODS (which are not under embargo) into the correct directories in Acumen, as well as copying everything to the deposits directory for upload into the storage archive.
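The essential logic of relocate_all.pl, as described above, is that everything is copied to the deposits directory for archiving, but only non-embargoed items go into the live Acumen directories. A hedged Python sketch (function name, flat target directories, and the embargo-lookup callback are assumptions for illustration):

```python
import os
import shutil

def relocate_all(items, acumen_dir, deposits_dir, is_embargoed):
    """Copy every item to the deposits (archive) directory; copy only
    items not under embargo into the live Acumen directory."""
    for path in items:
        shutil.copy(path, os.path.join(deposits_dir, os.path.basename(path)))
        if not is_embargoed(path):
            shutil.copy(path, os.path.join(acumen_dir, os.path.basename(path)))
```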

H. Shawn checks the final display and access via Acumen to verify that no problems exist. If any problems are encountered, she contacts Jody and we work out how to fix them.  :-)

I. Jody runs a script (/srv/scripts/bornDigital/relocatingBd) which will move the files into the correct subdirectories for long-term storage, linking them into the LOCKSS manifests.

J. Janet submits a batch upload of the MARC records into our catalog system.

K. Janet checks the final display and access via the OPAC.

L. Janet will batch upload the MARC to WorldCat.

M. The embargo-checking script "checkEmbargo" will check the database on the 21st of each month for embargoes which are due to lift the next month; this script will email Jody and Shawn with the filename, title, author, and date that the embargo is to lift.
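The date logic in "checkEmbargo" (run on the 21st, report embargoes lifting during the following calendar month) can be sketched as below. The function name and record structure are assumptions; the real script queries the database and sends email.

```python
import datetime

def embargoes_lifting_next_month(records, today=None):
    """Return records whose embargo lifts during the month after `today`."""
    today = today or datetime.date.today()
    # First day of next month: jump past month end, then snap to day 1.
    first_next = (today.replace(day=1) + datetime.timedelta(days=32)).replace(day=1)
    # First day of the month after that, to bound the range.
    first_after = (first_next + datetime.timedelta(days=32)).replace(day=1)
    return [r for r in records if first_next <= r["lift_date"] < first_after]
```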

N. Another script, "liftEmbargo", will run on the first of each month and will copy into the live directories anything whose embargo has lifted.

O. Shawn will then prepare and upload the no-longer-embargoed content into Acumen.

P. Janet will upload the no-longer-embargoed content into the local OPAC and into WorldCat.

Q. Should the metadata require remediation, Shawn will add a recordChangeDate field, and will upload the altered MODS files to libcontent1 and run relocate_all.pl. This will move the files to the live web directory (except for those under embargo) and also to a deposits directory for archival storage.

R. Jody will transfer these into archival storage using the aforementioned script, relocatingBd.



Reference: Find_our_content_online

updated 4/26/10 jlderidder
