Rohlig Audio

From UA Libraries Digital Services Planning and Documentation

Revision as of 11:49, 5 September 2013

Overview diagram:

Audio4.png

An exception to the workflow spelled out in Most Content is the Rohlig audio. We digitize the master and create derivatives for Mary Alice Fields, who listens to them, determines which sections should be omitted, and creates metadata. When her contribution returns, we create an MP3 for each track specified. These MP3s need to be uploaded with the MODS and distributed in Acumen. OCR of transcripts is handled by the makeJpegs script.


Rohlig Workflow

1. Digitize reels into master WAV files on a local computer.


2. Use Sound Forge to optimize master WAV files; upload copy of optimized WAV files to share drive.

         • Remove non-native silence between tracks
         • Leave (or create if needed) 2 seconds of leading silence before each track
         • Level audio volume


3. Make MP3 versions of the WAV files using LAME. Place them in a shared public folder for MAF (Mary Alice Fields), who uses the files to create metadata (including track timecodes) on a shared Excel sheet.
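As a sketch of this step, the batch conversion could be scripted rather than run file by file. This is illustrative only: the page does not specify LAME settings, so the `-V2` VBR quality flag and the folder layout below are assumptions, and the script presumes the `lame` binary is on the PATH.

```python
import subprocess
from pathlib import Path

def lame_command(wav_path, mp3_path, quality="-V2"):
    """Build the LAME command line for one WAV -> MP3 conversion."""
    return ["lame", quality, str(wav_path), str(mp3_path)]

def convert_folder(wav_dir, mp3_dir):
    """Convert every WAV in wav_dir to an MP3 of the same base name in mp3_dir."""
    mp3_dir = Path(mp3_dir)
    mp3_dir.mkdir(exist_ok=True)
    for wav in sorted(Path(wav_dir).glob("*.wav")):
        mp3 = mp3_dir / (wav.stem + ".mp3")
        subprocess.run(lame_command(wav, mp3), check=True)
```

A higher `-V` number trades quality for smaller files; whatever setting the team standardizes on should be pinned in the script so all derivatives match.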


4. Copy the metadata from MAF's shared Excel sheet and paste it into the appropriate cells (changing format as needed) in the main metadata Excel sheet.


5. Provide the metadata to the Metadata Librarian for remediation.


6. Use MAF's metadata to create a TXT file of track timecodes. Place the TXT file in the Cue INPUT folder.


7. Use Cue_GUI to run the CueMaker and CueSplitter scripts for each item. (The scripts will use the track timecodes to create a CUE file, use the CUE file to split the master WAV file into sub-item tracks, convert the tracks from WAV to MP3 format, and save the resulting files to the Cue OUTPUT folder while naming them appropriately.)
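The CUE sheet the scripts build follows the standard CUE format: a `FILE` line naming the master WAV, then one `TRACK`/`INDEX` group per track, with index times written as minutes:seconds:frames at 75 frames per second. As a hedged illustration of what CueMaker produces from the timecodes, here is a minimal sketch that turns (title, start-time) pairs into CUE text; the actual script and the exact fields it writes may differ.

```python
def cue_index(seconds):
    """Format a start offset as the CUE MM:SS:FF index (75 frames per second)."""
    minutes = int(seconds) // 60
    secs = int(seconds) % 60
    frames = int(round((seconds - int(seconds)) * 75))
    return "%02d:%02d:%02d" % (minutes, secs, frames)

def make_cue(wav_name, tracks):
    """Build CUE sheet text from a list of (title, start_seconds) pairs."""
    lines = ['FILE "%s" WAVE' % wav_name]
    for n, (title, start) in enumerate(tracks, 1):
        lines.append("  TRACK %02d AUDIO" % n)
        lines.append('    TITLE "%s"' % title)
        lines.append("    INDEX 01 %s" % cue_index(start))
    return "\n".join(lines)
```

A splitter then reads the INDEX times back out of this file to cut the master WAV at each track boundary.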


8. Create MODS using Archivist Utility (use m02 template).


9. Run the makeAudioJpegs script, which creates JPEGs (if we have transcripts) and QCs the MP3 files.
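The page does not show what the QC pass checks, but a plausible sketch, based on the derivative naming used in this workflow (one MP3 per item, plus a `_128.jpg` thumbnail and `_2048.jpg` display JPEG when transcripts exist), is a filename cross-check like the following. The function and naming here are illustrative assumptions, not the actual makeAudioJpegs logic.

```python
def find_missing_derivatives(item_ids, mp3_names, jpeg_names):
    """Report item IDs lacking an MP3 or the _128/_2048 JPEG pair.

    item_ids:   base identifiers for the items being checked
    mp3_names:  filenames present in the MP3 folder
    jpeg_names: filenames present in the JPEG folder
    Returns a dict mapping each problem item to its missing derivatives.
    """
    problems = {}
    mp3s = set(mp3_names)
    jpegs = set(jpeg_names)
    for item in item_ids:
        missing = []
        if item + ".mp3" not in mp3s:
            missing.append("mp3")
        for suffix in ("_128.jpg", "_2048.jpg"):
            if item + suffix not in jpegs:
                missing.append(suffix)
        if missing:
            problems[item] = missing
    return problems
```

Running a check like this before upload means any gaps are caught while the source files are still at hand, instead of surfacing as dead links in Acumen.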


10. Run the relocate_audio script to upload the MP3s, JPEGs, and MODS into Acumen.


11. Run the moveAudioContent script to archive the master WAV files and MODS.


Future Changes to Workflow

Austin is working on further improving the audio workflow. Future projects include tweaking the CueMaker script to enhance efficiency, automating the creation of the input TXT file, and automating volume leveling.

Volume leveling matters because listening to two tracks from our database back to back with a drastic volume difference between them is distracting. Leveling volume programmatically is challenging, however, and there is no standardized approach. One method clips the highs and lows so all audio fits within a narrow range, but this is destructive: it discards information and degrades the sound. Other methods we are researching average the highs and lows, or apply gain until all tracks sit at the same level. Each has pros and cons, and we are still determining which approach best suits our needs.

A further complication is the definition of "volume" itself. Volume is usually equated with sound pressure, measured in decibels, but a decibel reading does not capture loudness directly. Perceived loudness and decibel level are not the same thing: two audio files at the same measured level can sound radically different to the human ear, depending on factors such as frequency content. All of these considerations must be weighed before a solution can be chosen and scripted.
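To make the distinction concrete, here is a toy sketch (not a method we have adopted) contrasting two gain-based approaches on plain float sample lists: peak normalization scales to the loudest single sample, while RMS matching targets the average energy, which tracks perceived loudness more closely. Two tracks can share the same peak yet differ greatly in RMS.

```python
def peak_normalize(samples, target=1.0):
    """Apply uniform gain so the loudest sample hits the target peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    gain = target / peak
    return [s * gain for s in samples]

def rms(samples):
    """Root-mean-square level, a rough proxy for perceived loudness."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def rms_match(samples, target_rms):
    """Apply uniform gain so the track's RMS matches target_rms."""
    current = rms(samples)
    if current == 0:
        return list(samples)
    return [s * (target_rms / current) for s in samples]
```

Both transforms are non-destructive uniform gains, unlike clipping; the open question in the text is which target (peak, average, or something perceptual) to level against.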
