Metadata Comparison Test

From UA Libraries Digital Services Planning and Documentation

In the Digital Planning Meeting on 10/10/08, we decided on a pilot project to test the feasibility of:

  1. Special Collections staff grouping content and providing general descriptive metadata about the group
  2. Digital Services digitizing the content, adding minimal metadata (identifier, box and folder location, notes of damage, number of pages)
  3. Metadata Unit adding caption, title, description, subject, genre, and language for each item.



As the discussion around this proposed project evolved, we determined that we needed to know to what extent it matters to the user who creates the item-level metadata. If the usability of the content does not differ, or improves when the Metadata Unit creates the descriptions, then it makes sense to move the burden of this work to that unit. If, however, usability suffers when descriptions are provided by those not familiar with the content and context, then we need to find ways to provide more assistance to the archivists, who are not currently staffed sufficiently to provide item-level descriptions for content to be digitized.

Additionally, we hoped to determine whether digitizing the contents after minimal processing, and then updating the finding aid description afterward, would save the archivists time and money in fleshing out the finding aid.

For example, if creating the finding aid while processing down to the item level took 15 hours for 260 photos, but creating the finding aid while processing only down to the folder level took 6 hours for the same 260 photos, plus another 5 hours to remediate the finding aid after the item-level metadata was created, then the folder-level approach saves 4 hours in creating the finding aid.
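
As a quick check of that arithmetic, here is a minimal sketch using the figures from the example above (the variable names are purely illustrative):

  # Worked version of the example above: 260 photos in both workflows.
  item_level_hours = 15      # finding aid created while processing to the item level
  folder_level_hours = 6     # finding aid created while processing only to the folder level
  remediation_hours = 5      # remediating the finding aid after item-level metadata exists

  minimal_processing_total = folder_level_hours + remediation_hours   # 11 hours
  savings = item_level_hours - minimal_processing_total               # 4 hours
  print(f"Finding aid hours saved: {savings}")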

Alternatively, if web delivery prior to finishing the finding aid COSTS the archivist more time, we need to know that too.

Early in 2009, Donnelly Walton and Marina Klaric (Archival Technician) selected two comparable image collections and two comparable text collections from manuscript holdings at Hoole Special Collections Library.


The manuscript collections selected were:

  1. the Cahill Family Papers: 305 handwritten letters (1115 scans)
  2. the Berman Family Papers: 337 mostly handwritten letters (975 scans)

The image collections selected were:

  1. the George Nichols Photos (115 images)
  2. the Kappa Alpha Photos (164 images)


The initial plan was that the archivists would create item-level descriptions for Cahill and Kappa Alpha and the Metadata Unit would create item-level descriptions for the Berman and Nichols collections. Both groups would track the time involved, for comparison.

The archivists proceeded to process the collections. After the archivists finished their work, Digital Services digitized the materials, and then the Metadata Unit took a turn at creating item-level metadata for Berman and Nichols.


In this process, we ran into some issues:

  • We realized that the Digital Services staff were not instructed to capture physical dimensions of images, and the Metadata Unit could not do this after digitization.
  • The archivist-created metadata does *NOT* include subject headings, whereas the Metadata Unit-created metadata does; the former has to be remediated by the Metadata Unit to add these headings. Thus, measurement of time to create metadata should not include subject heading assignments.
  • Digitizing the Berman papers without even a minimal spreadsheet left Digital Services staff confused as to where to begin. Sometimes what they selected as an item was only a portion of an item, and when Mary Alexander (Metadata Librarian) went to describe it, she discovered the errors.
  • Also, the files were not scanned in order, but divided up across three students to speed digitization. This caused her great difficulty in matching objects (with file names) to descriptions, and required much back and forth between the Metadata Unit and Digital Services to sort out the issues and rename the files appropriately: a waste of time and energy!
  • The spreadsheets the archivists used were not of the same form as the spreadsheets used by the metadata unit, requiring rework by the latter. This was not a good thing. We need to communicate better about what fields are needed for online delivery (as opposed to archival tracking) and which spreadsheet formats must be used.
  • During digitization of the Kappa Alpha collection, Amanda Presnell (Digitization Specialist) noted content which was not suitable for online delivery (permissions issues). After investigation, it was determined that this collection should indeed not go online. Again, a waste of effort and time.


According to Donnelly Walton (Archival Access Coordinator), Cahill and Kappa Alpha were arranged to the item level (chronological or alphabetical order) but described to the folder level. Nichols and Berman were arranged and described to the folder level within each series, with no arrangement within the folders. The archivists tried to do just series-level arrangement, but since they had to look at every item, the result was folder-level arrangement.


After some discussion we decided that we were comparing apples and oranges. We could not compare description of items in one collection against description of items in another collection. Therefore, we changed our trajectory: we decided to compare the metadata created by each group for two collections only, Berman and Nichols. Thus, the archivists were asked to describe the Nichols and Berman collections.

Note: Item divisions differed between the two groups. Where one group included a document enclosed with a letter as a separate item, the other group included it as part of the letter that enclosed it. Thus the number of items differs between the two versions of each collection.


The resulting comparison of time spent in processing, digitization, and description is assessed in this document: http://www.lib.ua.edu/wiki/digcoll/images/c/cb/Comparison.xlsx

As can be seen from this document, scanning of textual documents was performed most cheaply by Digital Services, as they have access to overhead capture equipment.

  1. Creating metadata for photos took the archivist staff 41% more time than it took the Metadata Unit;
  2. Creating metadata for handwritten documents took the Metadata Unit 52% longer than it took the archivist staff.

Thus it seems that it would be best if image metadata were created by the Metadata Unit, but (at least handwritten) document descriptions were created by the archival staff.
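
For reference, the percentages above can be read as "extra time relative to the faster group." A minimal sketch of that calculation follows; the hour totals here are hypothetical placeholders, and the actual figures are in the linked Comparison.xlsx spreadsheet:

  # Percent more time the slower group spent, relative to the faster group.
  def percent_longer(slower_hours, faster_hours):
      return (slower_hours - faster_hours) / faster_hours * 100

  # Hypothetical values for illustration only; see Comparison.xlsx for the real totals.
  archivist_photo_hours = 14.1
  metadata_unit_photo_hours = 10.0
  print(round(percent_longer(archivist_photo_hours, metadata_unit_photo_hours)), "% more")  # ~41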


At this point, we determined that we needed to know which set of metadata is more useful to the patron, and whether findability of the content is affected at all.

Donnelly Walton drew up ten questions from the finding aid descriptions, without viewing the item metadata created by either group. We reasoned that, for example, if the series descriptions explained that the content contained letters by Fred written after 1862, then the user should be able to locate letters by Fred written after 1862 in the digitized items.


The original test questions selected were:

  1. How would you find a photograph of a child from the George Nichols collection?
  2. How would you find a photograph of ROTC members from the George Nichols collection?
  3. How would you find a photograph of a parade from the George Nichols collection?
  4. How would you find a photograph taken by George Nichols from the George Nichols collection?
  5. How would you find a photograph related to an election campaign from the George Nichols collection?
  6. How would you find a photograph that shows construction of a parade float from the George Nichols collection?
  7. How would you find a photograph taken in June 1958 from the George Nichols collection?
  8. How would you find a photograph of University of Alabama President Frank Rose from the George Nichols collection?
  9. How would you find a photograph taken at a Theta Chi fraternity costume party from the George Nichols collection?
  10. How would you find a photograph taken at a University of Alabama basketball game from the George Nichols collection?


Digital Services staff (Jeremiah Colonna-Romano (Digitization Manager) and Nitin Arora (Digitization Specialist)) developed a usability test with the Morae software, and Mary Alexander processed both sets of the Berman and Nichols collections for web delivery.

Jody set up two test sets in CONTENTdm, visible only to those logged in as administrator. Test Set A contains one group's version of these two collections, and Test Set B contains the other group's version.

Half the participants are to be presented with Test Set A first, and half with Test Set B first, to avoid skewing results due to familiarity with the first set. The same questions will be repeated in each test set (that is, each user answers the same question twice: once in Test Set A, once in Test Set B).
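
A minimal sketch of that counterbalancing, assuming participants are simply alternated in the order they are scheduled (the participant IDs and the alternation rule are assumptions for illustration):

  # Alternate which test set each participant sees first, so half start with
  # Test Set A and half with Test Set B. Alternating by schedule order is an assumption.
  participants = ["P01", "P02", "P03", "P04", "P05", "P06"]   # hypothetical IDs

  for i, participant in enumerate(participants):
      order = ["Test Set A", "Test Set B"] if i % 2 == 0 else ["Test Set B", "Test Set A"]
      print(participant, "->", " then ".join(order))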


Preliminary testing (see results here: metadata_pilot_betatest_table_of_results) with volunteers and staff revealed some problems:

  1. we had too many questions (the test took too long; it should take 20-30 minutes at most);
  2. some questions had no results in either set of collections (indicating no difference, which is indeed useful information, but not for contrasting the two sets);
  3. indexing of one set of collections was not functioning correctly.



We repaired the indexing problem and reduced the questions to these 6 (to be repeated in each test set):

Final Test Questions

George Nichols Photos:

  • Find a photograph taken by George Nichols
  • Find a photograph related to an election campaign
  • Find a photograph of University of Alabama President Frank Rose


Berman Family Papers

  • Find a letter written by Debby Schwartz
  • Find a letter written in Hebrew by Bill Berman
  • Find a letter written by Marian Berman after 1941



Jody and Jeremiah retested the site after the indexing was repaired to ensure it was functioning properly. They then applied to the IRB for approval of the finalized study. Approval was granted on 12/13/10. Jeremiah advertised via the website and departmental listservs to find participants, and began testing in March 2011.


As well as timing and success/failure, for each question the tester will note the following (a small recording sketch follows the list):

  1. what they click on
  2. what field or fields they type in
  3. what they type in
  4. what they do when they get search results
  5. whether the decision comes quickly or is the result of guessing
  6. whether the user seems frustrated
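
A minimal sketch of the kind of per-question record these observations imply; the actual capture is done with the Morae software, and the field names below are assumptions, not its output format:

  # Illustrative record of what is noted for each question; field names are assumptions.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class QuestionObservation:
      question: str
      test_set: str                                             # "A" or "B"
      seconds_taken: float
      succeeded: bool
      clicks: List[str] = field(default_factory=list)           # what they click on
      fields_typed_in: List[str] = field(default_factory=list)  # which field(s) they type in
      terms_typed: List[str] = field(default_factory=list)      # what they type in
      post_search_action: str = ""                               # what they do with results
      decided_quickly: bool = False                              # quick decision vs. guessing
      seemed_frustrated: bool = False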


Once testing was completed, we analyzed the results and met to determine next steps.

The recap of that meeting, held 9/8/11, is available here.
