For Creating Derivatives

== Getting content from the share drive to the storage server to prepare for archiving ==
 
 
The moveContent script will copy content to the deposits directory on the storage server, where it will be prepared for archiving at a later date.
 
 
Additionally, it will check the collection-level XML file (in the Admin directory) and add that collection's information to the online database that feeds the public collection list.  Thus, this script should NOT be run for new collections until the content is indexed (as described in the previous section); otherwise, the link from the collection listing will be dead.
 
 
After making the copy, the script verifies that each file copied without alteration, and then deletes the original from the share drive.
 
 
If any files remain in the directories on the share drive, they did NOT copy successfully to the server!  Run the script again; the copy may have failed in transit across the network.  If it fails again, the file will need to be moved manually, and the problem the script encountered must be resolved.
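
In outline, the copy-verify-delete pass looks like the sketch below. This is a minimal illustration, not the actual moveContent script: the share-drive and deposits paths are hypothetical, and the collection-database update described above is omitted.

 #!/usr/bin/perl
 # Sketch of the copy-verify-delete pattern used by moveContent.
 # Paths are hypothetical; the collection-database update is omitted.
 use strict;
 use warnings;
 use File::Find;
 use File::Copy;
 use File::Basename;
 use File::Path qw(make_path);
 use Digest::MD5;
 
 my $share    = '/mnt/share/outgoing';   # hypothetical share-drive root
 my $deposits = '/srv/deposits';         # hypothetical deposits directory
 
 sub md5_of {
     my ($path) = @_;
     open my $fh, '<:raw', $path or die "Cannot open $path: $!";
     return Digest::MD5->new->addfile($fh)->hexdigest;
 }
 
 find(sub {
     return unless -f $_;
     my $src = $File::Find::name;
     (my $dst = $src) =~ s/^\Q$share\E/$deposits/;
     make_path(dirname($dst));
     copy($src, $dst) or do { warn "Copy failed for $src: $!"; return };
     # Verify the copy before deleting the share-drive original; a file
     # left behind on the share drive means its copy did not verify.
     if (md5_of($src) eq md5_of($dst)) {
         unlink $src or warn "Could not delete $src: $!";
     } else {
         warn "Checksum mismatch for $src; leaving it on the share drive\n";
         unlink $dst;
     }
 }, $share);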
 
 
 
== Archiving content ==

See lines 9-25 and 31-33 on this page: [[Moving_Content_To_Long-Term_Storage]]



The following section is deprecated; we are replacing it with the preceding workflow, which lets us get content online without first putting it into the storage archive. The benefits: while LOCKSS partners are harvesting our content, we no longer have to wait for them to finish, and by putting the tools into the hands of Digital Services staff, we reduce the demand for programming on the server and avoid bottlenecks.




After content has been moved to the long-term archive, we need online derivatives for web access, in a directory structure that mirrors the archive.

== For images and audio ==

The following script runs through the archive, looking for TIFF files, WAV files, and transcripts.

# Transcripts are simply copied to the web-accessible directory, placed under a Transcripts directory at the level to which they apply.
# ImageMagick (http://www.imagemagick.org/) is used to create two image derivatives:
## a thumbnail, whose longest side is 128 pixels (filename ends in _128.jpg)
## a large image, whose longest side is 2048 pixels (filename ends in _2048.jpg)
# LAME (http://lame.sourceforge.net/) is used to create an MP3 from each WAV file.

The command used with ImageMagick is of this form (shown here for the 2048-pixel size; -strip removes embedded profiles and comments, and -filter must precede -resize to take effect):

 convert [OLDFILE] -strip -density 96 -resample 96x96 -filter Cubic -resize 2048x2048 -quiet [NEWFILE]

The command used with LAME is of this form (-V4 selects variable-bitrate quality level 4, --noreplaygain skips ReplayGain analysis, and -S suppresses progress output):

 lame [OLDFILE] [NEWFILE] -V4 --noreplaygain -S

Here's the Perl script: File:Copychange.txt
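
In outline, the derivative pass works like the sketch below. This is a minimal illustration under stated assumptions, not the actual Copychange.txt: the archive and web roots are hypothetical, and the transcript-copying step is omitted. It mirrors each TIFF and WAV file into the web tree using the convert and lame commands shown above.

 #!/usr/bin/perl
 # Sketch of the derivative pass: JPEGs from TIFFs, MP3s from WAVs.
 # Roots are hypothetical; transcript copying is omitted.
 use strict;
 use warnings;
 use File::Find;
 use File::Basename;
 use File::Path qw(make_path);
 
 my $archive = '/srv/archive';             # hypothetical archive root
 my $web     = '/srv/www/htdocs/content';  # hypothetical web root
 
 find(sub {
     my $path = $File::Find::name;
     return unless -f $path;
     (my $out = $path) =~ s/^\Q$archive\E/$web/;
     make_path(dirname($out));
     if ($path =~ /\.tiff?$/i) {
         (my $base = $out) =~ s/\.tiff?$//i;
         # Two JPEG derivatives per TIFF, per the convert command above.
         for my $size (128, 2048) {
             system('convert', $path, '-strip', '-density', '96',
                    '-resample', '96x96', '-filter', 'Cubic',
                    '-resize', "${size}x${size}", '-quiet',
                    "${base}_${size}.jpg") == 0
                 or warn "convert failed for $path at $size\n";
         }
     } elsif ($path =~ /\.wav$/i) {
         (my $mp3 = $out) =~ s/\.wav$/.mp3/i;
         # One MP3 per WAV, per the lame command above.
         system('lame', $path, $mp3, '-V4', '--noreplaygain', '-S') == 0
             or warn "lame failed for $path\n";
     }
 }, $archive);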


NOTE: During the process of creating derivatives in this manner, we discovered to our dismay that the software that comes with the Captureback overhead scanner embeds two TIFF images inside each TIFF file: one is a thumbnail, and one is the full-size master image. Unfortunately, ImageMagick by default creates a JPEG from both when the above command is run, concatenating a "-0" onto one of the filenames and a "-1" onto the other. Examples of these can be seen here: [[1]]. The files ending in "-1" were created from the thumbnail, so they are blurry. We developed an additional script (called "repair") which hunts through directories, seeking out the files thus named, deleting the ones ending in "-1.jpg" and renaming the ones ending in "-0.jpg" to remove the "-0" addition. Here's the Perl script: File:Repair.txt
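
The repair logic amounts to the sketch below (a minimal illustration, not the actual Repair.txt; the web root is hypothetical):

 #!/usr/bin/perl
 # Sketch of the repair pass: delete blurry "-1.jpg" derivatives and
 # strip the "-0" suffix from the good ones.
 use strict;
 use warnings;
 use File::Find;
 
 my $web = '/srv/www/htdocs/content';   # hypothetical web root
 
 find(sub {
     return unless -f $_;
     if (/-1\.jpg$/) {
         # Derived from the embedded thumbnail: blurry, so delete it.
         unlink $_ or warn "Could not delete $File::Find::name: $!";
     } elsif (/-0\.jpg$/) {
         # Derived from the full-size master: keep it, minus the "-0".
         (my $fixed = $_) =~ s/-0\.jpg$/.jpg/;
         rename $_, $fixed or warn "Could not rename $File::Find::name: $!";
     }
 }, $web);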


== For OCR text ==

Here we're more selective: we don't want to OCR pages that are mostly image, so our guideline is that a page must contain at least 50% textual content before we will consider OCRing it. We use the open-source tesseract-ocr (http://sourceforge.net/projects/tesseract-ocr/) on the command line.

Given a set of collection names, the following Perl script goes through /srv/archive, locates TIFF files, checks whether OCR files already exist online in /srv/www/htdocs/content, and, if not, creates directories for them and uses tesseract-ocr to create OCR derivatives there.

The command used with tesseract-ocr is of this form ([NEWFILE] is an output base name without an extension; tesseract appends ".txt" to it):

 tesseract [OLDFILE] [NEWFILE]

Here's the script: File:OcrIt.txt
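
In outline, that script works like the sketch below (a minimal illustration, not the actual OcrIt.txt; it assumes the /srv/archive and /srv/www/htdocs/content paths named above and takes collection names as command-line arguments):

 #!/usr/bin/perl
 # Sketch of the OCR pass: for each collection, OCR any TIFF that does
 # not already have a text derivative under the web root.
 use strict;
 use warnings;
 use File::Find;
 use File::Basename;
 use File::Path qw(make_path);
 
 my $archive = '/srv/archive';
 my $web     = '/srv/www/htdocs/content';
 
 for my $collection (@ARGV) {
     my $dir = "$archive/$collection";
     unless (-d $dir) { warn "No such collection: $dir\n"; next; }
     find(sub {
         return unless -f $_ && /\.tiff?$/i;
         my $tiff = $File::Find::name;
         # Mirror the archive path under the web root, minus the extension;
         # tesseract appends ".txt" to this base name itself.
         (my $base = $tiff) =~ s/^\Q$archive\E/$web/;
         $base =~ s/\.tiff?$//i;
         return if -e "$base.txt";    # OCR derivative already online
         make_path(dirname($base));
         system('tesseract', $tiff, $base) == 0
             or warn "tesseract failed for $tiff\n";
     }, $dir);
 }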
