Semantic Software Lab
Concordia University
Montréal, Canada

Natural Language Processing

Semantic Publishing Challenge 2015: Supplementary Material

This page provides supplementary material for our submission to the Semantic Publishing Challenge 2015 co-located with the Extended Semantic Web Conference (ESWC 2015).

We present an automatic workflow that performs text segmentation and entity extraction from scientific literature, primarily addressing Task 2 of the Semantic Publishing Challenge 2015. The proposed solution is composed of two subsystems: (i) a text mining pipeline, developed based on the GATE framework, which extracts structural and semantic entities from text, such as authors' information and citations, and produces semantic (typed) annotations; and (ii) a flexible exporting module that translates the document annotations into RDF triples according to a custom mapping file.
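To illustrate the general idea of the second subsystem, here is a minimal sketch of mapping-driven RDF export in plain Python. The annotation records, the mapping dictionary, and all IRIs are hypothetical placeholders, not the actual pipeline's format or vocabulary:

```python
# Illustrative sketch: translating typed document annotations into RDF
# triples via a custom type mapping. All names and IRIs are assumptions
# for demonstration, not the system's actual data model.

# Example annotations, as a text mining pipeline might emit them:
annotations = [
    {"type": "Author", "text": "B. Sateli", "doc": "paper42"},
    {"type": "Citation", "text": "[Witte 2013]", "doc": "paper42"},
]

# Hypothetical mapping from annotation types to RDF classes:
MAPPING = {
    "Author": "http://example.org/vocab#Author",
    "Citation": "http://example.org/vocab#Citation",
}

BASE = "http://example.org/doc/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

def to_ntriples(anns, mapping):
    """Serialize each annotation as two N-Triples lines:
    an rdf:type triple and an rdfs:label triple."""
    lines = []
    for i, a in enumerate(anns):
        subj = f"<{BASE}{a['doc']}/entity/{i}>"
        lines.append(f"{subj} <{RDF_TYPE}> <{mapping[a['type']]}> .")
        escaped = a["text"].replace('"', '\\"')  # escape quotes in literals
        lines.append(f'{subj} <{RDFS_LABEL}> "{escaped}" .')
    return "\n".join(lines)

print(to_ntriples(annotations, MAPPING))
```

Keeping the type-to-class mapping in a separate table (in the real system, an external mapping file) is what makes such an exporter reusable: the same pipeline output can be re-exported against a different ontology without touching the extraction code.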

SAVE-SD 2015 Submission: Supplementary Material

This page provides supplementary material for our submission to the SAVE-SD 2015 workshop on Semantics, Analytics, Visualisation: Enhancing Scholarly Data. We have published the knowledge base populated during the experiments described in the paper. To reproduce the results in the "Application" section, you can execute the queries via the link to the full page below.

Tutorial: Adding Natural Language Processing Support to your (Semantic) MediaWiki

Wikis have become powerful knowledge management platforms, offering high customizability while remaining relatively easy to deploy and use. With a majority of content in natural language, wikis can greatly benefit from automated text analysis techniques. Natural Language Processing (NLP) is a branch of computer science that employs various Artificial Intelligence (AI) techniques to process content written in natural language. NLP-enhanced wikis can support users in finding, developing, and organizing knowledge contained inside the wiki repository. Rather than relying on external NLP applications, we developed an approach that brings NLP as an integrated feature to wiki systems, thereby creating new human/AI collaboration patterns, where human users work together with automated "intelligent assistants" on developing, structuring, and improving wiki content. This is achieved with our open source Wiki-NLP integration, a Semantic Assistants add-on that incorporates NLP services into the MediaWiki environment, thereby enabling wiki users to benefit from modern text mining techniques.

This tutorial has two main parts: In the first part, we will present an introduction to NLP and text mining, as well as related frameworks, in particular the General Architecture for Text Engineering (GATE) and the Semantic Assistants framework. Building on the foundations covered in the first part, we will then look into the Wiki-NLP integration and show how you can add arbitrary text processing services to your (Semantic) MediaWiki instance with minimal effort. Throughout the tutorial, we illustrate the application of NLP in wikis with a number of application examples from various domains that we developed in our research over the last decade, such as cultural heritage data management, collaborative software requirements engineering, and biomedical knowledge management. These showcases of the Wiki-NLP integration highlight a number of integration patterns that will help you adopt this technology for your own domain.

Semantic Assistants for Web Portals


As a new extension to our Semantic Assistants framework, the integration of Semantic Assistants for Liferay allows portals to automatically process textual content using state-of-the-art techniques from the Natural Language Processing (NLP) domain. The SA-Liferay integration aims at bringing NLP power to this popular portal system and its users in a seamless, user-friendly manner, realized as a ready-to-deploy custom portlet. With this new integration, we envision a new generation of web portals that can provide context-sensitive support through semantic analysis services, in particular based on NLP, allowing AI "assistants" to support portal users with their tasks at hand.

Natural Language Processing for Web Portals: First release of the Semantic Assistants-Liferay Integration


A data portal is a web-based software application that provides a central entry point to an enormous amount of heterogeneous data sources. This mostly heterogeneous information is aggregated from various sources and presented to users based on their assigned roles. Ideally, an intelligent portal must be able to offer content to users taking into account contextual information beyond their roles and permissions. Our integration of Semantic Assistants for Liferay allows portals to automatically process textual content using state-of-the-art techniques from the Natural Language Processing (NLP) domain. The SA-Liferay integration aims at bringing NLP power to this popular portal system and its users in a seamless, user-friendly manner, realized as a ready-to-deploy custom portlet.

TagCurate: crowdsourcing the verification of biomedical annotations to mobile users

Sateli, B., S. Luong, and R. Witte, "TagCurate: crowdsourcing the verification of biomedical annotations to mobile users", NETTAB 2013 workshop focused on Semantic, Social, and Mobile Applications for Bioinformatics and Biomedical Laboratories, vol. 19 Suppl. B, Venice Lido, Italy: EMBnet.journal, pp. 24–26, 10/2013.

Smarter Mobile Apps through Integrated Natural Language Processing Services

Sateli, B., G. Cook, and R. Witte, "Smarter Mobile Apps through Integrated Natural Language Processing Services", in F. Daniel, G. A. Papadopoulos, and P. Thiran (Eds.), The 10th International Conference on Mobile Web Information Systems (MobiWIS 2013), vol. 8093, Paphos, Cyprus: Springer Berlin Heidelberg, pp. 187–202, 08/2013.