Durm
New Book Chapter on Semantic Wikis and Natural Language Processing for Cultural Heritage Data
Springer just published a new book, Language Technology for Cultural Heritage, to which we contributed a chapter: "Integrating Wiki Systems, Natural Language Processing, and Semantic Technologies for Cultural Heritage Data Management". The book collects selected, extended papers from several years of the LaTeCH workshop series, where we presented our work on the Durm project back in 2008.
In this project, which ran from 2004 to 2006, we analysed the historic Encyclopedia of Architecture (Handbuch der Architektur), written in German between 1880 and 1943. It was one of the largest projects aiming to conserve all architectural knowledge available at the time. Today, its vast body of content is mostly lost: few complete sets survive, and its complex structure does not lend itself easily to contemporary use. We were able to track down one of the rare complete sets in the Karlsruhe University library, where it fills several meters of shelves in the archives.

The goal, then, was to apply "modern" (as of 2005) semantic technologies to make these heritage documents accessible again by transforming them into a semantic knowledge base (due to funding limitations, we only worked with one volume in this project, but the system was designed to eventually cover the complete set). Using techniques from Natural Language Processing and Semantic Computing, we automatically populated an ontology that can be used in various application scenarios: building historians can use it to navigate and query the encyclopedia, while architects can integrate it directly into contemporary construction tools. Additionally, we made all content accessible through a user-friendly Wiki interface, which combines the original text with NLP-derived metadata and adds annotation capabilities for collaborative use (note that not all features are enabled in the public demo version).
All data created in the project (scanned book images, generated corpora, etc.) is publicly available under open-content licenses. We also continue to maintain a number of open-source tools originally developed for this project, such as the Durm German Lemmatizer. A new version of our Wiki/NLP integration, which will allow everyone to easily set up a similar system, is currently under development and will be available in early 2012.
Integrating Wiki Systems, Natural Language Processing, and Semantic Technologies for Cultural Heritage Data Management
Submitted by rene on Thu, 2011-07-14 17:58
An Integration Architecture for User-Centric Document Creation, Retrieval, and Analysis
Submitted by rene on Wed, 2010-08-25 07:28
Engineering a Semantic Desktop for Building Historians and Architects
Submitted by rene on Tue, 2010-08-17 10:35
A Semantic Wiki Approach to Cultural Heritage Data Management
Submitted by rene on Fri, 2010-07-30 08:51
Converting a Historical Architecture Encyclopedia into a Semantic Knowledge Base
Submitted by rene on Mon, 2010-07-26 09:36
Durm XML Markup
The formal DTD used within the Durm Corpus is available for download. Here, we briefly describe the meaning of the various elements.
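The concrete element names are defined in the downloadable DTD itself. As a quick way to get an overview of a document's markup without reading the DTD, one can enumerate the element names used in a corpus file. The following is a minimal sketch using only the Python standard library; the inline sample document is purely hypothetical and does not reflect the actual Durm element names.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def element_counts(xml_text):
    """Count how often each element name occurs in an XML document."""
    root = ET.fromstring(xml_text)
    return Counter(el.tag for el in root.iter())

# Hypothetical snippet for illustration only; the real element names
# are defined in the Durm DTD available for download.
sample = """<book>
  <chapter><p>Erster Absatz.</p><p>Zweiter Absatz.</p></chapter>
</book>"""

counts = element_counts(sample)
```

Running the same function over an actual corpus file (e.g. via `ET.parse(path).getroot()`) yields a frequency table of the elements described below.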
Durm TUSTEP Markup
TUSTEP in general is documented at http://www.zdv.uni-tuebingen.de/tustep/tustep_eng.html. Here, we only provide an informal overview for users of the TUSTEP version of our Durm Corpus.
The Durm Corpus
As part of the Durm project, we digitized a single volume from the historical German Handbuch der Architektur (Handbook on Architecture), namely:
E. Marx: Wände und Wandöffnungen (Walls and Wall Openings). In "Handbuch der Architektur", Part III, Volume 2, Number I, Second edition, Stuttgart, Germany, 1900.
The volume contains 506 pages with 956 figures.
The corpus developed in this project is made available under a free document license in several formats: scanned page images, TUSTEP format, and XML format. Additionally, an online version and tools for converting between the various formats are available.
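One common use of the XML format is to derive a plain-text version of the corpus, e.g. as input for NLP tools. The sketch below shows the basic idea using the Python standard library; it is not one of the project's actual conversion tools, and the inline sample (with a hypothetical `<page>`/`<p>` structure) is for illustration only.

```python
import xml.etree.ElementTree as ET

def xml_to_text(xml_text):
    """Concatenate all character data in an XML document, ignoring markup."""
    root = ET.fromstring(xml_text)
    return " ".join(t.strip() for t in root.itertext() if t.strip())

# Hypothetical markup; the real structure is given by the Durm DTD.
sample = "<page><p>Wände und</p> <p>Wandöffnungen</p></page>"
text = xml_to_text(sample)
```

For real corpus files, the same function can be applied to `ET.parse(path).getroot()` instead of an inline string.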
The Durm Project
The Durm project, carried out from 2004 to 2006 at the Institute for Program Structures and Data Organization (IPD) at the University of Karlsruhe, Germany, investigated the use of advanced semantic technologies for cultural heritage data management. The goal was to support end users, in particular users from building history and architecture, with tools that go beyond classical information retrieval techniques. Experiments were carried out on the historical Handbuch der Architektur (Handbook on Architecture).
