Place-based online data management for documenting the built environment
Josh Conrad, UT Austin
The fields of cultural resource management and historic preservation often involve government agencies asking which buildings and other physical structures in a particular area are worth saving. They need to identify what merits preservation before new construction begins.
Josh and his team go out into the field to find built environment features, take pictures, and document them with metadata. This data is used to decide whether a building would be eligible for the National Register of Historic Places. Places can be referenced by polygons on a map, usually known as "districts," but there are also features within places, such as bridges, and these can be connected to other features via lines.
The problem is how to represent all of this in a database. Josh tries to combine controlled vocabularies with free-form notes, because when you are out in the field you often run into something you have not seen before.
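To make the data-modeling problem concrete, here is a minimal sketch of how one such record might look, pairing GeoJSON-style geometry (polygons for districts, points for features, line-based links between them) with a small controlled vocabulary and a free-form notes field. The field names and vocabulary terms here are illustrative, not the project's actual schema.

```python
# Illustrative sketch: GeoJSON-style geometry plus a mix of
# controlled-vocabulary fields and free-form notes. Field names and
# vocabulary terms are hypothetical, not the actual survey schema.

FEATURE_TYPES = {"building", "bridge", "district", "linear_feature"}  # controlled vocabulary

district = {
    "type": "Feature",
    "geometry": {  # a district is a polygon on the map
        "type": "Polygon",
        "coordinates": [[[-97.75, 30.27], [-97.74, 30.27],
                         [-97.74, 30.28], [-97.75, 30.28],
                         [-97.75, 30.27]]],
    },
    "properties": {
        "feature_type": "district",          # from the controlled vocabulary
        "name": "Example Historic District",
        "notes": "",                          # free-form field for the unexpected
    },
}

bridge = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-97.745, 30.275]},
    "properties": {
        "feature_type": "bridge",
        "name": "Example Street Bridge",
        "within": "Example Historic District",   # a feature inside a place
        "connected_to": ["Example Road"],        # line-based links to other features
        "notes": "Unusual truss design; ask neighbors about construction date.",
    },
}
```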
Local residents can provide a wealth of knowledge about unusual structures. Josh has been developing the Austin Historical Survey Wiki, a multi-layered approach to combining data contributed by experts with data contributed by the public. In addition to the wiki, his firm is developing a tablet-based web app that allows everyone from professional historians to motivated neighbors to easily collect and view information, photos, and scanned documents about the historic places in their area.
Using IRs to archive websites
Colleen Lyon and Katherine Miles, UT Austin
Many departments at UT want the library to archive blogs and Twitter feeds in the DSpace IR. Even though DSpace is not designed to archive complex items like websites, the library decided to try it with a few pilot departments. The project raised many interesting questions: what to capture from a website, how to generate metadata, and how to deal with the fact that websites, blogs, and feeds are constantly changing. DSpace and other IR platforms are designed for more static materials.
It was a useful exercise, but it will not scale. They are capturing raw files, uploading them to DSpace, and generating metadata by hand, which is very labor-intensive and loses the look and feel of a website. The tool Archive-It might work, but it is far too expensive to use at this scale. The library will continue to experiment with a small group of departments, but they find the prospect of continuing this daunting.
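For a sense of why this is so labor-intensive, here is a rough sketch of what the manual workflow could look like for a single captured page, packaged in the DSpace Simple Archive Format (a directory per item, with a dublin_core.xml metadata file and a "contents" manifest). The URL, title, and date are placeholders, not the library's actual data.

```python
# Sketch of the manual capture-and-ingest workflow: fetch one page's raw
# HTML and wrap it in a DSpace Simple Archive Format package. Every value
# below is a placeholder.
import pathlib
import urllib.request
from xml.sax.saxutils import escape

url = "https://example.utexas.edu/blog/post-1"   # hypothetical department blog
item_dir = pathlib.Path("archive_item_0001")
item_dir.mkdir(exist_ok=True)

# 1. Capture the raw HTML (this loses stylesheets, images, and scripts,
#    which is part of why the result does not keep the site's look and feel).
html = urllib.request.urlopen(url).read()
(item_dir / "page.html").write_bytes(html)

# 2. Hand-generate Dublin Core metadata for the item.
dc = f"""<dublin_core>
  <dcvalue element="title" qualifier="none">{escape("Department blog, post 1")}</dcvalue>
  <dcvalue element="identifier" qualifier="uri">{escape(url)}</dcvalue>
  <dcvalue element="date" qualifier="issued">2012-05-01</dcvalue>
</dublin_core>
"""
(item_dir / "dublin_core.xml").write_text(dc)

# 3. List the bitstreams in the "contents" manifest.
(item_dir / "contents").write_text("page.html\n")
```

Multiply that by every post on every departmental blog, plus fresh captures as the sites change, and the scaling problem is clear.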
Vireo Users Group
David Reynolds and Stephanie Larrimer
I co-chair the Vireo Users Group along with Stephanie Larrimer of Texas State University. Vireo is the open-source electronic thesis and dissertation (ETD) software that we use at Hopkins. The half-day session focused on the upcoming development cycle and how we would go about gathering a list of new features and prioritizing them. The software is used by both graduate schools and libraries, for different reasons, so it is difficult to find a fair way to get both sides involved. While we had a list of prioritized features that was developed a couple of years ago, we decided to start over: many schools now using the software (JHU included) did not participate in the previous round, and others have had two additional years to figure out what is important. We will develop and prioritize the list this summer, and the first development sprint will start in August. The tech team will use an Agile project management approach in order to create useful updates more quickly and distribute them more often.
The other big discussion was about the connection between Vireo and ProQuest. We have been working with ProQuest to develop a method to export theses from Vireo and upload them directly to the ProQuest Dissertations and Theses database. Only Texas A&M has done this so far, and they found it to be straightforward. JHU will wait until we have ETDs published in our repository this fall before doing the ProQuest upload.
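The session did not go into the mechanics, but as a hedged sketch, a deposit like this might look something like the following, assuming a ZIP package (thesis PDF plus a metadata XML file) delivered to a ProQuest FTP drop box. The file names, package layout, host, and credentials are all placeholders, and the actual Vireo export format and ProQuest ingest mechanism may differ.

```python
# Hedged sketch of a repository-to-ProQuest deposit. Assumes a ZIP package
# (thesis PDF plus metadata XML) delivered over FTP; the real Vireo export
# and ProQuest ingest details may differ. All names are placeholders.
import ftplib
import zipfile

# Package one ETD from files exported by the local system.
with zipfile.ZipFile("etd_upload_0001.zip", "w") as zf:
    zf.write("smith_thesis.pdf")        # the thesis itself
    zf.write("smith_metadata.xml")      # descriptive metadata for the record

# Deliver the package to a vendor-provided FTP drop box.
with ftplib.FTP("ftp.example-dropbox.com") as ftp:
    ftp.login(user="school_account", passwd="********")
    with open("etd_upload_0001.zip", "rb") as fh:
        ftp.storbinary("STOR etd_upload_0001.zip", fh)
```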