Usability Studies and Feedback


For us to be a success, we must reach our audience at the point of need, with the content they need, in the form they need it. This is an ambitious goal, given that we have not yet clearly identified our audiences. We believe that they include researchers primarily in the humanities, undergraduate and graduate students at the University of Alabama, and patrons in the University of Alabama community.

To succeed, we must connect with our users and obtain feedback, and then modify our content and its delivery methods appropriately. This must be an iterative effort, as our user base and our offerings are continually morphing.


Donnelly Lancaster Walton (Archival Access Coordinator) developed a proposal for a user study in late 2008 (http://www.lib.ua.edu/wiki/digcoll/images/f/f4/User_study_20081110.docx). Some of the questions she posed include:

  • Do our researchers want to run their own searches, pull together the letters written by individuals, and arrange them in their own order?
  • Do they want us to present the letters in the order in which they are physically arranged at the Hoole?
  • Do they want us to present the collections folder-by-folder and allow them to virtually open the folder and view the contents?
  • Do they want to look at the archival finding aid and view the digital items within the context of the finding aid?

We attempted to address these points with the Cabaniss_Usability_Study in late 2010.

A second usability study grew out of concern over the lack of staffing support for archivists to create item-level descriptive metadata, even as the need for content to feed the digitization machine continues to grow. This study is currently underway; more information on the effort is available in the Metadata_Comparison_Test.

Our experience with these studies has made clear that we need a better understanding of the metrics for testing the learnability of interfaces, and of how best to select questions and conduct tests so that the results are valuable for analysis. We plan to study
