Rohlig Audio

Overview diagram:

[[Image:Audio4.png]]
An exception to the workflow spelled out in [[Most Content]] is the Rohlig audio. We digitize the master and create derivatives for Mary Alice Fields, who listens to them, determines which sections should be omitted, and creates metadata. When her contribution returns, we create an MP3 for each track specified. These MP3s need to be uploaded with the MODS and distributed in Acumen. OCR of transcripts is handled by the makeJpegs script.

== Rohlig Workflow ==

'''1.''' Digitize reels into master WAV files on a local computer.

'''2.''' Use Sound Forge to optimize the master WAV files; upload a copy of the optimized WAV files to the share drive (see the sketch after this list).
        '''•''' Remove non-native silence between tracks
        '''•''' Leave (or create if needed) 2 seconds of leading silence before each track
        '''•''' Level audio volume
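These checks are done by hand in Sound Forge; the following is a minimal spot-check sketch, not part of the documented workflow, for flagging files that miss the 2-second lead-in or sit at an unusual average level. It assumes the third-party pydub library, a -50 dBFS silence threshold, and a -20 dBFS target level with a 3 dB tolerance, all of which are guesses rather than documented settings.
<pre>
# Hypothetical QC spot-check for optimized WAV files (not part of the documented
# workflow). Assumes the third-party pydub library; thresholds are assumptions.
import sys
from pydub import AudioSegment
from pydub.silence import detect_leading_silence

TARGET_LEAD_MS = 2000      # step 2 asks for 2 seconds of leading silence
TARGET_LEVEL_DBFS = -20.0  # assumed target average level
TOLERANCE_DB = 3.0         # assumed acceptable spread between files

for path in sys.argv[1:]:
    wav = AudioSegment.from_wav(path)
    lead_ms = detect_leading_silence(wav, silence_threshold=-50.0)
    if lead_ms < TARGET_LEAD_MS:
        print(f"{path}: only {lead_ms} ms of leading silence")
    if abs(wav.dBFS - TARGET_LEVEL_DBFS) > TOLERANCE_DB:
        print(f"{path}: average level {wav.dBFS:.1f} dBFS is outside tolerance")
</pre>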
'''3.''' Make MP3 versions of the WAV files using Lame. Place them in the shared public folder for MAF, who then uses the files to create metadata (including track timecodes) on a shared Excel sheet.
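The exact encoder settings are not documented on this page; the sketch below simply shows one way the batch conversion could be driven from Python, assuming the lame command-line encoder is on the PATH and an assumed 128 kbps constant bitrate. The folder names are placeholders.
<pre>
# Minimal batch WAV-to-MP3 sketch using the lame command-line encoder.
# The 128 kbps bitrate and the folder paths are assumptions, not documented settings.
import subprocess
from pathlib import Path

WAV_DIR = Path("optimized_wavs")   # hypothetical input folder
MP3_DIR = Path("mp3_for_maf")      # hypothetical shared public folder
MP3_DIR.mkdir(exist_ok=True)

for wav in sorted(WAV_DIR.glob("*.wav")):
    mp3 = MP3_DIR / (wav.stem + ".mp3")
    # lame <options> input.wav output.mp3
    subprocess.run(["lame", "-b", "128", str(wav), str(mp3)], check=True)
</pre>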
'''4.''' Copy metadata from MAF’s shared Excel sheet and paste it into the appropriate cells (changing the format as needed) in the main metadata Excel sheet.

'''5.''' Provide the metadata to the Metadata Librarian for remediation.

'''6.''' Use MAF’s metadata to create a TXT file of track timecodes. Place the TXT file into the Cue INPUT folder.

'''7.''' Use [[Cue_GUI]] to run the [[CueMaker]] and [[CueSplitter]] scripts for each item. (The scripts use the track timecodes to create a CUE file, use the CUE file to split the master WAV file into sub-item tracks, convert the tracks from WAV to MP3 format, and save the resulting files to the Cue OUTPUT folder with appropriate names.)
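For orientation, here is a rough sketch of what the CueMaker stage does. It is an illustration only, not the actual script: the layout of the timecode TXT file (one track per line, with a title and an MM:SS start time separated by a tab) and the file names are assumptions, since the real file layout is not documented on this page.
<pre>
# Illustrative sketch only; this is not the actual CueMaker/CueSplitter code.
# Assumes a timecode TXT with one track per line: a title, a tab, then an MM:SS
# start time. Writes a standard CUE sheet pointing at the master WAV.
from pathlib import Path

def timecodes_to_cue(txt_path, wav_name):
    lines = [f'FILE "{wav_name}" WAVE']
    for num, raw in enumerate(Path(txt_path).read_text().splitlines(), start=1):
        title, start = raw.split("\t")
        mm, ss = start.split(":")
        lines.append(f"  TRACK {num:02d} AUDIO")
        lines.append(f'    TITLE "{title}"')
        # CUE INDEX times are MM:SS:FF, where FF is frames (75 per second)
        lines.append(f"    INDEX 01 {int(mm):02d}:{int(ss):02d}:00")
    return "\n".join(lines) + "\n"

# Hypothetical file names for illustration
cue_text = timecodes_to_cue("item_timecodes.txt", "item_master.wav")
Path("item_master.cue").write_text(cue_text)
</pre>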
'''8.''' Create MODS using Archivist Utility (use the m02 template).

'''9.''' Run the makeAudioJpegs script, which creates JPEGs (if we have transcripts) and QCs the MP3 files.

'''10.''' Run the relocate_audio script to upload the MP3s, JPEGs, and MODS into Acumen.

'''11.''' Run the moveAudioContent script to archive the master WAV files and MODS.

== Future Changes to Workflow ==

Austin is working on further improving the audio workflow. Future projects include tweaking the CueMaker script to enhance efficiency, automating the creation of the input TXT file, and automating the volume leveling.

Volume leveling is important because a drastic change in volume between two tracks played back to back from our database would be distracting to the listener. Leveling the volume programmatically is challenging, however, and there is no standardized approach. One method cuts off the highs and lows so all audio fits into a narrow range, but this is destructive: information is lost and the sound deteriorates. Other methods we are researching include averaging the highs and lows, or adding gain until all tracks sit at the same level. Each has its pros and cons, and we are still determining which approach best suits our needs. Another issue is the definition of "volume" itself. Volume is usually associated with sound pressure, measured in decibels, but the decibel is a relative rather than an absolute unit. Perceived loudness and decibel value are also not the same thing, so two audio files at the same measured level can sound radically different to the human ear depending on factors such as frequency content. All of these considerations must be weighed before a solution can be chosen and scripted.
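As a concrete illustration of the trade-offs described above, the sketch below contrasts two of the candidate approaches on raw sample data: hard-limiting peaks into a narrow range (destructive) versus applying a uniform gain so every track reaches the same average (RMS) level. The target values are arbitrary assumptions, it relies on NumPy, and it does not address perceived (frequency-weighted) loudness; it is an illustration, not a proposed solution.
<pre>
# Two of the leveling strategies discussed above, sketched with NumPy.
# Sample values are floats in [-1.0, 1.0]; target values below are assumptions.
import numpy as np

def hard_limit(samples, ceiling=0.5):
    """Clip peaks into a narrow range: simple, but destructive (information is lost)."""
    return np.clip(samples, -ceiling, ceiling)

def match_rms(samples, target_rms=0.1):
    """Apply uniform gain so the track's average (RMS) level matches a target.
    Non-destructive in itself, but the gain can push peaks past full scale."""
    rms = np.sqrt(np.mean(samples ** 2))
    return samples * (target_rms / rms)

track = np.random.uniform(-0.8, 0.8, 44100)  # one second of fake audio at 44.1 kHz
print("peak after hard limit:", np.max(np.abs(hard_limit(track))))
print("RMS after gain match: ", np.sqrt(np.mean(match_rms(track) ** 2)))
</pre>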
