The Semantic Software Lab was founded in 2008 by René Witte at Concordia University in Montréal, Québec, Canada. Our lab focuses on research and applications of Semantic Computing, Text Mining, Linked Data, Natural Language Processing (NLP), Information Extraction, Intelligent Information Systems, and related technologies. We are committed to providing free, open source software and open research data to the community.
This website provides information about the lab's research activities and our published tools and resources. It also provides information for students interested in course or research work, as well as career opportunities for researchers. The site further aims to serve as a community portal for selected topics and events in the area of semantic systems within (north-east) America in general and Montréal in particular. You can also follow us on Twitter @SemSoft, on LinkedIn, or connect with us on Google+. In addition to the resources published on this site, we have code repositories on SourceForge and GitHub.
The LODtagger is a GATE component that links entities in a document to their corresponding resources on the Linked Open Data (LOD) cloud. LODtagger relies on external tools to perform the actual content tagging and hides the complexity of communicating with these LOD taggers, such as DBpedia Spotlight, from pipeline developers.
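LODtagger handles this communication for you inside a GATE pipeline, but as a rough illustration of the kind of service it talks to, here is a minimal sketch of calling a DBpedia Spotlight REST annotate endpoint directly; the endpoint URL, port, and confidence value are assumptions about a typical local Spotlight deployment, not part of LODtagger itself.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

public class SpotlightClient {
    public static void main(String[] args) throws IOException {
        // Hypothetical local Spotlight endpoint; a real deployment may run elsewhere.
        String endpoint = "http://localhost:2222/rest/annotate";
        String text = "Montreal is a city in Quebec, Canada.";
        String query = "text=" + URLEncoder.encode(text, "UTF-8") + "&confidence=0.5";

        HttpURLConnection conn =
            (HttpURLConnection) new URL(endpoint + "?" + query).openConnection();
        conn.setRequestProperty("Accept", "application/json");

        // The JSON response lists the detected entities with their DBpedia URIs.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }
}
```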
The overabundance of literature available in online repositories is an ongoing challenge for scientists, who have to efficiently manage and analyze content for their information needs. Most existing literature management systems merely provide support for storing bibliographical metadata, tagging, and simple annotation capabilities. We go beyond these approaches by demonstrating how an innovative combination of Semantic Web technologies with natural language processing can mitigate the information overload by helping to curate and organize scientific literature. Zeeva is our research prototype for demonstrating how we can turn existing papers into a queryable knowledge base.
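To give an idea of what "queryable" means here, the following sketch uses Apache Jena to ask a small RDF model which papers mention a given entity. The vocabulary (ex:mentions, dc:title) and the data file are purely hypothetical illustrations, not Zeeva's actual schema.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class PaperQuery {
    public static void main(String[] args) {
        // Load a tiny in-memory model with hypothetical extracted paper facts.
        Model model = ModelFactory.createDefaultModel();
        model.read("papers.ttl");  // placeholder Turtle file

        // Which papers mention the DBpedia resource for text mining? (hypothetical vocabulary)
        String queryString =
            "PREFIX dc: <http://purl.org/dc/elements/1.1/> " +
            "PREFIX ex: <http://example.org/zeeva#> " +
            "SELECT ?title WHERE { " +
            "  ?paper ex:mentions <http://dbpedia.org/resource/Text_mining> ; " +
            "         dc:title ?title . }";

        try (QueryExecution qexec =
                 QueryExecutionFactory.create(QueryFactory.create(queryString), model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                System.out.println(results.nextSolution().getLiteral("title").getString());
            }
        }
    }
}
```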
Wikis have become powerful knowledge management platforms, offering high customizability while remaining relatively easy to deploy and use. With a majority of their content in natural language, wikis can greatly benefit from automated text analysis techniques. Natural Language Processing (NLP) is a branch of computer science that employs various Artificial Intelligence (AI) techniques to process content written in natural language. NLP-enhanced wikis can support users in finding, developing, and organizing the knowledge contained inside the wiki repository. Rather than relying on external NLP applications, we developed an approach that brings NLP as an integrated feature to wiki systems, thereby creating new human/AI collaboration patterns, where human users work together with automated "intelligent assistants" on developing, structuring, and improving wiki content. This is achieved with our open source Wiki-NLP integration, a Semantic Assistants add-on that incorporates NLP services into the MediaWiki environment, thereby enabling wiki users to benefit from modern text mining techniques.
This tutorial has two main parts: In the first part, we will present an introduction to NLP and text mining, as well as related frameworks, in particular the General Architecture for Text Engineering (GATE) and the Semantic Assistants framework (see the sketch below for a taste of GATE). Building on the foundations covered in the first part, we will then look into the Wiki-NLP integration and show how you can add arbitrary text processing services to your (Semantic) MediaWiki instance with minimal effort. Throughout the tutorial, we illustrate the application of NLP in wikis with a number of application examples from various domains that we developed in our research over the last decade, such as cultural heritage data management, collaborative software requirements engineering, and biomedical knowledge management. These showcases of the Wiki-NLP integration highlight a number of integration patterns that will help you adopt this technology for your own domain.
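As a small preview of the GATE material, here is a minimal GATE Embedded sketch that loads a pipeline saved from GATE Developer and runs it over a document; the application file name is a placeholder, and the details are simplified compared to what the tutorial covers.

```java
import java.io.File;
import gate.Corpus;
import gate.CorpusController;
import gate.Document;
import gate.Factory;
import gate.Gate;
import gate.util.persistence.PersistenceManager;

public class RunPipeline {
    public static void main(String[] args) throws Exception {
        Gate.init();  // initialise the GATE Embedded library

        // Load a pipeline previously saved from GATE Developer (placeholder file name).
        CorpusController controller = (CorpusController)
            PersistenceManager.loadObjectFromFile(new File("my-pipeline.xgapp"));

        // Wrap the input text in a GATE document and corpus.
        Document doc = Factory.newDocument("GATE was developed at the University of Sheffield.");
        Corpus corpus = Factory.newCorpus("tutorial corpus");
        corpus.add(doc);

        // Run the pipeline and print the resulting annotations.
        controller.setCorpus(corpus);
        controller.execute();
        System.out.println(doc.getAnnotations());
    }
}
```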
Save the dates! SMWCon Spring 2014 will be held at Concordia University this year in the vibrant and culturally fascinating city of Montréal, from May 21 to 23. We invite you to submit your contributions to assemble the conference program. Registration is now open, with early bird rates applicable until April 30th.
This twice-yearly conference brings together researchers, users, developers, and enthusiasts of Semantic MediaWiki and related projects, such as Wikidata. Semantic MediaWiki is a family of extensions to the open-source wiki software MediaWiki (best known for powering Wikipedia) that allow a wiki to store structured data in addition to textual content, thereby turning a wiki into a flexible, collaborative knowledge repository.
Natural Language Processing for Web Portals: First release of the Semantic Assistants-Liferay Integration
A data portal is a web-based software application that provides a central entry point to a large number of heterogeneous data sources. This mostly heterogeneous information is aggregated from various sources and presented to users based on their assigned roles. Ideally, an intelligent portal must be able to offer content to users, taking into account contextual information beyond their roles and permissions. Our integration of Semantic Assistants for Liferay allows portals to automatically process textual content using state-of-the-art techniques from the Natural Language Processing (NLP) domain. The SA-Liferay integration aims at bringing the power of NLP to this popular portal system and its users in a seamless, user-friendly manner, realized as a ready-to-deploy custom portlet.
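The ready-to-deploy portlet is available from our code repositories. Purely as an illustration of the deployment model, a standard JSR-286 portlet is a small Java class along the following lines; this is a generic skeleton with placeholder output, not the actual SA-Liferay code, which additionally talks to the Semantic Assistants server to run the NLP services.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Generic portlet skeleton: a real NLP portlet would gather the portal content
// shown to the user, send it to an NLP service, and render the results here.
public class NlpAssistantPortlet extends GenericPortlet {
    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>NLP results for the current page would be rendered here.</p>");
    }
}
```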
The complete volume contains abstracts for the two invited talks, two full papers, two short papers, five early career track papers, and four systems papers. Individual papers can be downloaded from the CEUR-WS.org site, where you can also find a BibTeX file with all references.
The Fourth Canadian Semantic Web Symposium will be held at Concordia University, Montreal, Quebec, on July 10, 2013. CSWS 2013 aims to bring together Canadian and international researchers in semantic technologies and knowledge management to discuss issues related to the Semantic Web.
The event is part of the Semantic Trilogy 2013 featuring:
- International Conference on Biomedical Ontologies (ICBO 2013)
- Canadian Semantic Web Symposium (CSWS 2013)
- Data Integration in the Life Sciences (DILS 2013)
For more information, please refer to:
Twitter: https://twitter.com/CSWS2013 (@CSWS2013)
Our research on Natural Language Processing (NLP) for wiki systems has been featured in Concordia University's NOW newsletter. Explaining the technology and its applications to a general audience, the article quickly became one of the most read and shared articles of the week.
Natural Language Processing for MediaWiki: First major release of the Semantic Assistants Wiki-NLP Integration
We are happy to announce the first major release of our Semantic Assistants Wiki-NLP integration. This is the first comprehensive open source solution for bringing Natural Language Processing (NLP) to wiki users, in particular for wikis based on the well-known MediaWiki engine and its Semantic MediaWiki (SMW) extension. It allows you to bring novel text mining assistants to wiki users, e.g., for automatically structuring wiki pages, answering questions in natural language, quality assurance, entity detection, and summarization. These assistants are deployed in the General Architecture for Text Engineering (GATE) and brokered as web services through the Semantic Assistants server.
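The integration takes care of the communication with MediaWiki for you. Just to illustrate the kind of round trip involved, the following sketch fetches a page's wikitext through the standard MediaWiki web API, which is the raw material an NLP assistant would analyze before writing its results back to the wiki; the wiki URL and page title are placeholders, and this is not the integration's own code.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

public class FetchWikiPage {
    public static void main(String[] args) throws IOException {
        // Placeholder wiki and page title; any MediaWiki installation exposes api.php.
        String api = "https://en.wikipedia.org/w/api.php";
        String title = URLEncoder.encode("Natural language processing", "UTF-8");
        String url = api + "?action=query&prop=revisions&rvprop=content"
                   + "&format=json&titles=" + title;

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        // The JSON response contains the page's wikitext, which an NLP pipeline
        // (e.g., a GATE application behind the Semantic Assistants server) can then process.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            StringBuilder json = new StringBuilder();
            while (in.hasNextLine()) {
                json.append(in.nextLine());
            }
            System.out.println(json);
        }
    }
}
```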