Rohlig Audio

From UA Libraries Digital Services Planning and Documentation

Latest revision as of 12:10, 12 May 2014

The Rohlig audio is an exception to the workflow spelled out in Most Content. We digitize the master and create derivatives for Mary Alice Fields (MAF), who listens to them, determines which sections should be omitted, and creates metadata. When her metadata comes back, we create an MP3 for each specified track. These MP3s are then uploaded with the MODS and distributed in Acumen. OCR of transcripts is handled by the makeJpegs script. The steps of our workflow are below, along with links (the text in blue) to more in-depth details. You can also visit the Audio Upload page for more detailed upload instructions.


Rohlig Workflow

1. Digitize reels into master WAV files on local computer.


2. Use Sound Forge to optimize master WAV files; upload copy of optimized WAV files to share drive.

         Remove non-native silence between tracks 
         Leave (or create if needed) 2 seconds of leading silence before each track
         Level Volume

3. Convert WAV files to MP3 using any method you feel comfortable with (perhaps Sound Forge or LAME; it really doesn't matter as long as the files get converted). Give said MP3 files to MAF (by placing them into a public folder on the shared drive and emailing her a link to the folder). MAF should then provide an Excel sheet full of metadata for each track.
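Since any conversion method works for this step, here is one minimal batch-conversion sketch, assuming the LAME command-line encoder (lame) is installed and on the PATH; the folder name is a placeholder, not a project path.

```python
"""Batch-convert WAV masters to MP3 with the LAME CLI.

A sketch only -- per the workflow, any converter is fine.
Assumes `lame` is on the PATH; "masters" is a placeholder folder.
"""
from pathlib import Path
import subprocess

def lame_command(wav_path: Path) -> list[str]:
    """Build the LAME invocation for one file (VBR quality level 2)."""
    mp3_path = wav_path.with_suffix(".mp3")
    return ["lame", "-V2", str(wav_path), str(mp3_path)]

def convert_all(folder: Path) -> None:
    """Convert every .wav in the folder, stopping on the first failure."""
    for wav in sorted(folder.glob("*.wav")):
        subprocess.run(lame_command(wav), check=True)

# Usage: convert_all(Path("path/to/wavs"))
```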


4. Copy metadata from MAF’s Excel sheet and paste into main metadata Excel sheet (which is located in the Rohlig folder on the shared drive).


5. Provide the metadata to the Metadata Librarian for remediation (by placing it into the 'needsRemediation' folder located here: S:\Digital Projects\Administrative\Pipeline\collectionInfo\forMDlib\needsRemediation).

6. Use MAF’s metadata to create a TXT file of filenames and track timecodes (only begin times are used, since each track's begin time is the previous track's end time; the first track's begin time is always 00:00). Place the TXT file into the Cue INPUT folder (located here: S:\Digital Projects\Administrative\scripts\Cue_Maker).

         TXT file should follow this pattern (with a single space between filename and begin time):
              Filename1 Begin Time
              Filename2 Begin Time


7. Use Cue_GUI (located here: S:\Digital Projects\Administrative\scripts\Cue_Maker) to run the CueMaker and CueSplitter scripts for each item. (The scripts will use the track timecodes to create a CUE file, use the CUE file to split the master WAV file into sub-item tracks, convert the tracks from WAV to MP3 format, and save the resulting files to the Cue OUTPUT folder while naming them appropriately.)
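To make the CUE step concrete, here is a rough sketch of what generating a CUE sheet from the parsed timecodes might look like. This is not the actual CueMaker script; it assumes MM:SS begin times and uses the standard CUE index format MM:SS:FF (FF = frames, 75 per second).

```python
"""Generate a minimal CUE sheet for a master WAV and its track begin times.
A hedged sketch of the CueMaker idea, not the production script."""

def make_cue(master_wav: str, tracks: list[tuple[str, str]]) -> str:
    """tracks: (title, 'MM:SS') begin-time pairs, in playback order."""
    lines = [f'FILE "{master_wav}" WAVE']
    for n, (title, begin) in enumerate(tracks, start=1):
        lines.append(f"  TRACK {n:02d} AUDIO")
        lines.append(f'    TITLE "{title}"')
        # CUE indexes are MM:SS:FF; pad the frame field with zeros.
        lines.append(f"    INDEX 01 {begin}:00")
    return "\n".join(lines) + "\n"
```

A splitter (such as CueSplitter) can then read this sheet to cut the master WAV into per-track files.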

For more information on the scripts, see the Code{4}Lib article: http://journal.code4lib.org/articles/9314


8. Create MODS using the Archivist Utility (use the m02 template).


You are now ready to upload. See Audio Upload for more detailed instructions for the steps below.


9. Run the makeAudioJpegs script, which will create JPEGs (if we have transcripts) and QC the MP3 files.
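The makeAudioJpegs QC checks are not documented here, but one basic sanity check a QC pass might include is verifying that each file actually looks like an MP3. This function and its name are hypothetical:

```python
"""Hypothetical MP3 sanity check: an MP3 file normally starts with an
ID3v2 tag ("ID3") or directly with an MPEG frame sync (11 set bits)."""

def looks_like_mp3(data: bytes) -> bool:
    """True if the leading bytes resemble an MP3 file's header."""
    if data.startswith(b"ID3"):
        return True
    # MPEG frame sync: first byte 0xFF, top three bits of second byte set.
    return len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xE0) == 0xE0
```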


10. Run the relocate_audio script to upload MP3s, JPEGs, and MODS into Acumen.


11. Run the moveAudioContent script to archive master WAV files and MODS.

Overview Diagram

(Diagram image: Audio4.png)


Future Changes to Workflow

Future projects include tweaking the CueMaker script to enhance efficiency, automating the creation of the input txt file, and automating the volume leveling.

Volume leveling matters because if you listened to two tracks from our database back to back and there was a drastic change in volume between them, it would be distracting. But leveling volume programmatically is challenging, and there is no standardized approach. One method cuts off the highs and lows so all audio fits into a narrow range; however, this is a destructive process that loses information and degrades the sound. Other methods we are researching average the highs and lows, or add gain until all tracks sit at the same level. Each has its pros and cons, and we are still determining which system best suits our needs.

Another issue is the definition of "volume". Volume is usually attributed to sound pressure, which is measured in decibels, but this is not a precise unit of measurement. Moreover, perceived loudness and decibel value are not the same: two audio files at the same technical volume can sound radically different to the human ear, depending on factors such as audio frequency. All of these considerations must be weighed before a solution can be chosen and scripted.
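As an illustration of the "add gain until all tracks are at the same level" approach, here is a minimal Python sketch that computes the gain needed to bring a track's RMS level to a chosen target; the -20 dBFS default is an arbitrary example, not a project standard, and RMS is only one of the possible loudness measures discussed above.

```python
"""Sketch of RMS-based gain leveling for normalized audio samples.
Illustrative only; real tools measure loudness in more nuanced ways."""
import math

def rms(samples: list[float]) -> float:
    """Root-mean-square level of samples in the range -1.0 .. 1.0."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_target(samples: list[float], target_db: float = -20.0) -> float:
    """Linear gain factor that brings the track's RMS to target_db dBFS."""
    current_db = 20 * math.log10(rms(samples))
    return 10 ** ((target_db - current_db) / 20)
```

Multiplying every sample in a track by its gain factor would bring all tracks to the same RMS level; whether that matches perceived loudness is exactly the open question described above.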
