Semantic representation of scientific literature: bringing claims, contributions and named entities onto the Linked Open Data cloud
This page provides supplementary material for our submission to the SAVE-SD 2015 workshop on Semantics, Analytics, Visualisation: Enhancing Scholarly Data. We have published the knowledge base populated in the experiments described in the paper. To reproduce the results from the "Application" section, you can execute the queries listed on the full page linked below.
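If you prefer to query the knowledge base programmatically rather than through the browser, the following sketch shows how a SPARQL SELECT query could be submitted from Java using Apache Jena. The endpoint address and the query string below are placeholders, not the actual queries from the paper; substitute the endpoint and the queries published on the full page.

    import org.apache.jena.query.*;

    public class KnowledgeBaseQuery {
        public static void main(String[] args) {
            // Placeholder address; replace with the SPARQL endpoint of the published knowledge base
            String endpoint = "http://example.org/sparql";
            // Generic example query; the actual queries from the "Application" section are linked on this page
            String queryString =
                "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
                "SELECT ?subject ?type WHERE { ?subject rdf:type ?type } LIMIT 25";
            Query query = QueryFactory.create(queryString);
            try (QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, query)) {
                ResultSet results = qexec.execSelect();
                while (results.hasNext()) {
                    QuerySolution solution = results.next();
                    System.out.println(solution.get("subject") + " a " + solution.get("type"));
                }
            }
        }
    }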
Wikis have become powerful knowledge management platforms, offering high customizability while remaining relatively easy to deploy and use. With the majority of their content written in natural language, wikis can greatly benefit from automated text analysis techniques. Natural Language Processing (NLP) is a branch of computer science that employs various Artificial Intelligence (AI) techniques to process content written in natural language. NLP-enhanced wikis can support users in finding, developing and organizing the knowledge contained in the wiki repository. Rather than relying on external NLP applications, we developed an approach that brings NLP as an integrated feature to wiki systems, thereby creating new human/AI collaboration patterns, where human users work together with automated "intelligent assistants" on developing, structuring and improving wiki content. This is achieved with our open source Wiki-NLP integration, a Semantic Assistants add-on that makes it possible to incorporate NLP services into the MediaWiki environment, enabling wiki users to benefit from modern text mining techniques.
This tutorial has two main parts: In the first part, we will present an introduction to NLP and text mining, as well as related frameworks, in particular the General Architecture for Text Engineering (GATE) and the Semantic Assistants framework. Building on the foundations covered in the first part, we will then look into the Wiki-NLP integration and show how you can add arbitrary text processing services to your (Semantic) MediaWiki instance with minimal effort. Throughout the tutorial, we illustrate the application of NLP in wikis with a number of application examples from various domains that we developed in our research over the last decade, such as cultural heritage data management, collaborative software requirements engineering, and biomedical knowledge management. These showcases of the Wiki-NLP integration highlight a number of integration patterns that will help you adopt this technology for your own domain.
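To give a first impression of the GATE foundations covered in the first part, here is a minimal sketch of running GATE's ANNIE application from Java code using GATE Embedded. The location of the ANNIE_with_defaults.gapp file and the example sentence are assumptions and vary between GATE releases; the tutorial covers the details.

    import gate.*;
    import gate.creole.SerialAnalyserController;
    import gate.util.persistence.PersistenceManager;
    import java.io.File;

    public class AnnieExample {
        public static void main(String[] args) throws Exception {
            Gate.init(); // initialise the GATE library (requires a local GATE installation)
            // Load the pre-packaged ANNIE application; the .gapp location differs between GATE releases
            SerialAnalyserController annie = (SerialAnalyserController)
                PersistenceManager.loadObjectFromFile(
                    new File(new File(Gate.getPluginsHome(), "ANNIE"), "ANNIE_with_defaults.gapp"));
            // Create a corpus with a single in-memory document
            Corpus corpus = Factory.newCorpus("tutorial corpus");
            Document doc = Factory.newDocument("Semantic MediaWiki was presented at a conference in Montreal.");
            corpus.add(doc);
            // Run the pipeline and print every annotation ANNIE created (tokens, sentences, named entities, ...)
            annie.setCorpus(corpus);
            annie.execute();
            for (Annotation a : doc.getAnnotations()) {
                System.out.println(a.getType() + ": " + gate.Utils.stringFor(doc, a));
            }
        }
    }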
The complete volume contains abstracts for the two invited talks, two full papers, two short papers, five early career track papers, and four systems papers. Individual papers can be downloaded from the CEUR-WS.org site, where you can also find a BibTeX file with all references.
We just released a new version of the OwlExporter ontology population plugin for GATE. The OwlExporter processing resource (PR) can be added to any NLP pipeline to facilitate the population of an existing OWL ontology with entities detected in the corpus. It supports the population of separate NLP and domain ontologies and offers several advanced features, such as the export of coreference chains.
In this release, we included a pre-compiled binary and a complete example pipeline that transforms GATE's ANNIE information extraction example into an ontology population system. We also completely revamped the documentation and website to make it more accessible to ontology population novices.
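To give a rough idea of how the plugin fits into a pipeline, the sketch below registers the OwlExporter plugin with GATE Embedded and appends its processing resource to the ANNIE application. The plugin path, the resource class name and the (empty) parameter map are assumptions for illustration only; please consult the plugin's creole.xml and the shipped example pipeline for the actual names and required parameters.

    import gate.*;
    import gate.creole.SerialAnalyserController;
    import gate.util.persistence.PersistenceManager;
    import java.io.File;

    public class OwlExporterSetup {
        public static void main(String[] args) throws Exception {
            Gate.init();
            // Register the unpacked OwlExporter plugin directory with GATE (path is an assumption)
            Gate.getCreoleRegister().registerDirectories(
                new File("/opt/gate-plugins/OwlExporter").toURI().toURL());
            // Resource name and runtime parameters are assumptions; check the plugin's creole.xml
            FeatureMap params = Factory.newFeatureMap();
            ProcessingResource owlExporter = (ProcessingResource)
                Factory.createResource("info.semanticsoftware.owlexporter.OwlExporter", params);
            // Append the exporter as the last step of an existing pipeline, here the ANNIE application
            SerialAnalyserController pipeline = loadAnniePipeline();
            pipeline.add(owlExporter);
        }

        // Load the pre-packaged ANNIE application; the .gapp location differs between GATE releases
        private static SerialAnalyserController loadAnniePipeline() throws Exception {
            return (SerialAnalyserController) PersistenceManager.loadObjectFromFile(
                new File(new File(Gate.getPluginsHome(), "ANNIE"), "ANNIE_with_defaults.gapp"));
        }
    }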