From UA Libraries Digital Services Planning and Documentation
Revision as of 09:17, 6 August 2013 by Kgmatheny (talk | contribs) (reflecting server switch from libcontent1 to libcontent)

In an effort to further engage our patrons and improve search and retrieval in our delivery system, we set up two new websites in early 2012: one for user tagging and one for user transcription. The intent is to rotate content through these two systems, asking users to tag images and transcribe documents, and then to add those tags and transcriptions into Acumen for indexing (and possibly display at a later date). In our InfoTrack database on libcontent, I'm tracking what content was loaded when into which software, and how successful each effort has been.

Over time, the workflows detailed in the links below will get simpler, and hopefully more automated... that is, unless this becomes part of Acumen functionality first. But for now...


The software behind the tagging site is Steve Museum (an alpha version from 2009, available from the Steve Museum site). We selected it because it doesn't require a more current version of PHP (not yet supported by our SUSE Linux server), nor Tomcat or Solr. However, the older version has drawbacks that may not be present in newer releases: it is no longer supported by its creators, and some of its functionality (such as batch uploads) does not work properly. To work around this, I analyzed the database and built software to load and extract content in batches, based on collection number.
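The batch extraction described above boils down to joining the tagging tables on a collection identifier. A minimal sketch of the idea, using an in-memory SQLite database with hypothetical table, column, and file names (the actual Steve Museum schema differs):

```python
import sqlite3

# Hypothetical stand-in for the Steve tagging database; real table and
# column names in the Steve Museum schema are different.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, filename TEXT, collection TEXT)")
conn.execute("CREATE TABLE tags (image_id INTEGER, tag TEXT)")
conn.executemany(
    "INSERT INTO images (filename, collection) VALUES (?, ?)",
    [("coll01_item001.jpg", "coll01"),   # example filenames, not real identifiers
     ("coll02_item001.jpg", "coll02")])
conn.executemany("INSERT INTO tags VALUES (?, ?)", [(1, "letter"), (1, "handwritten")])

def extract_tags(conn, collection):
    """Pull every user-supplied tag for images in one collection,
    ready to be folded back into Acumen for indexing."""
    rows = conn.execute(
        "SELECT i.filename, t.tag FROM images i "
        "JOIN tags t ON t.image_id = i.id "
        "WHERE i.collection = ?", (collection,))
    return sorted(rows)
```

Loading works the same way in reverse: insert one `images` row per file in the collection, then hand the batch to the tagging interface.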

The workflow, procedures, and tracking for insertion, extraction, and deletion from the tagging software are documented in user_tagging.


The software behind the transcription site is more complex; it combines Scripto, MediaWiki, and Omeka (for display). As such, it uses two separate MySQL databases rather than one, and it stores information in a somewhat convoluted way, including base64 encoding and decoding of combined database keys. One drawback of this setup is that the user interface is designed to extract only a single item's transcript at a time, with all pages for that item combined into a single document. Again, I analyzed the databases (including the multiple dependencies) and wrote software to batch upload, extract, and delete based on collection number (and an optional box number, since we load only one box at a time from large collections).
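The "base64 encoding and decoding of combined database keys" mentioned above amounts to packing two identifiers into one opaque string and unpacking it on the way back out. A minimal sketch; the `"itemId.pageId"` layout and the function names are assumptions for illustration, not the actual Scripto key format:

```python
import base64

def encode_key(item_id: str, page_id: str) -> str:
    """Combine an item id and a page id into one opaque base64 token.

    The dot-delimited "itemId.pageId" layout is an assumption for
    illustration; the real Scripto/MediaWiki scheme may differ.
    """
    return base64.b64encode(f"{item_id}.{page_id}".encode()).decode()

def decode_key(token: str) -> tuple[str, str]:
    """Recover the two database keys from a base64 token."""
    item_id, page_id = base64.b64decode(token).decode().split(".", 1)
    return item_id, page_id
```

Batch extraction then just means decoding every stored key, grouping pages by item, and filtering items by collection (and box) number before writing the transcripts out.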

The workflow, procedures, and tracking for insertion, extraction, and deletion from the transcription software are documented in user_transcription.