
The Semantic Software Lab

ENCS Building

The Semantic Software Lab was founded in 2008 by René Witte at Concordia University in Montréal, Québec, Canada. Our lab focuses on research and applications of Semantic Computing, Text Mining, Linked Data, Natural Language Processing (NLP), Information Extraction, Intelligent Information Systems, and related technologies. We are committed to providing free, open source software and open research data to the community.

This website provides information about the lab's research activities and our published tools and resources. It also provides information for students interested in course or research work, as well as career opportunities for researchers, and it aims to serve as a community portal for selected topics and events in the area of semantic systems in (north-eastern) North America in general and Montréal in particular. You can also follow us on Twitter @SemSoft or LinkedIn, or connect with us on Google+.

Zeeva: A Collaborative Semantic Literature Management System

The overabundance of literature available in online repositories is an ongoing challenge for scientists, who have to efficiently manage and analyze content to meet their information needs. Most existing literature management systems merely provide support for storing bibliographical metadata, tagging, and simple annotation. We go beyond these approaches by demonstrating how an innovative combination of Semantic Web technologies and natural language processing can mitigate this information overload by helping to curate and organize scientific literature. Zeeva is our research prototype for demonstrating how we can turn existing papers into a queryable knowledge base.
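To illustrate what a "queryable knowledge base" of papers can look like, here is a minimal sketch in Python using rdflib; the namespace and property names are purely illustrative and do not reflect Zeeva's actual schema. It stores a few facts extracted from a paper as RDF triples and retrieves them again with a SPARQL query.

  # Minimal sketch (illustrative only, not Zeeva's implementation):
  # extracted paper metadata as RDF triples, queried via SPARQL.
  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import DC, RDF

  EX = Namespace("http://example.org/zeeva/")  # hypothetical namespace

  g = Graph()
  paper = EX["paper/42"]
  g.add((paper, RDF.type, EX.Publication))
  g.add((paper, DC.title, Literal("A Collaborative Semantic Literature Management System")))
  g.add((paper, EX.addressesTask, Literal("literature curation")))  # hypothetical property

  # Which papers address a given task?
  results = g.query("""
      PREFIX ex: <http://example.org/zeeva/>
      PREFIX dc: <http://purl.org/dc/elements/1.1/>
      SELECT ?title WHERE {
          ?p a ex:Publication ;
             ex:addressesTask "literature curation" ;
             dc:title ?title .
      }
  """)
  for row in results:
      print(row.title)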

Tutorial: Adding Natural Language Processing Support to your (Semantic) MediaWiki

Wikis have become powerful knowledge management platforms, offering high customizability while remaining relatively easy to deploy and use. With the majority of their content written in natural language, wikis can greatly benefit from automated text analysis techniques. Natural Language Processing (NLP) is a branch of computer science that employs various Artificial Intelligence (AI) techniques to process content written in natural language. NLP-enhanced wikis can support users in finding, developing, and organizing the knowledge contained in the wiki repository. Rather than relying on external NLP applications, we developed an approach that brings NLP to wiki systems as an integrated feature, thereby creating new human/AI collaboration patterns in which human users work together with automated "intelligent assistants" on developing, structuring, and improving wiki content. This is achieved with our open source Wiki-NLP integration, a Semantic Assistants add-on that allows you to incorporate NLP services into the MediaWiki environment, enabling wiki users to benefit from modern text mining techniques.

This tutorial has two main parts: In the first part, we will present an introduction to NLP and text mining, as well as related frameworks, in particular the General Architecture for Text Engineering (GATE) and the Semantic Assistants framework. Building on these foundations, we will then look into the Wiki-NLP integration and show how you can add arbitrary text processing services to your (Semantic) MediaWiki instance with minimal effort. Throughout the tutorial, we illustrate the application of NLP in wikis with a number of application examples from various domains that we have developed in our research over the last decade, such as cultural heritage data management, collaborative software requirements engineering, and biomedical knowledge management. These showcases of the Wiki-NLP integration highlight a number of integration patterns that will help you adopt this technology for your own domain.
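As a rough sketch of this integration pattern, the following Python snippet fetches a page's wikitext through the standard MediaWiki API and hands it to an NLP web service for annotation. The NLP endpoint, its payload, and the service name are placeholders for illustration only; they are not the actual Semantic Assistants interface.

  # Sketch of the Wiki-NLP pattern; the NLP endpoint and its payload are
  # hypothetical placeholders, not the actual Semantic Assistants API.
  import requests

  WIKI_API = "https://wiki.example.org/w/api.php"   # your MediaWiki instance
  NLP_SERVICE = "https://nlp.example.org/annotate"  # hypothetical NLP service

  def fetch_wikitext(title):
      """Retrieve the raw wikitext of a page via the MediaWiki action API."""
      r = requests.get(WIKI_API, params={
          "action": "parse", "page": title,
          "prop": "wikitext", "format": "json",
      })
      r.raise_for_status()
      return r.json()["parse"]["wikitext"]["*"]

  def annotate(text, service="EntityDetection"):
      """Send page content to the (hypothetical) NLP service; return its annotations."""
      r = requests.post(NLP_SERVICE, json={"service": service, "text": text})
      r.raise_for_status()
      return r.json()

  print(annotate(fetch_wikitext("Main Page")))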

Semantic MediaWiki Conference (SMWCon) Spring 2014: 2nd call for contributions

Save the dates! SMWCon Spring 2014 will be held at Concordia University this year in the vibrant and culturally fascinating city of Montréal, from May 21-23. We are inviting you to submit your contributions to assemble the conference program. Registration is now open, with early bird rates applicable until April 30th.

This twice-yearly conference brings together researchers, users, developers, and enthusiasts of Semantic MediaWiki and related projects, such as Wikidata. Semantic MediaWiki is a family of extensions to the open-source wiki software MediaWiki (best known for powering Wikipedia) that allow a wiki to store structured data in addition to textual content, thereby turning a wiki into a flexible, collaborative knowledge repository.
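For readers new to Semantic MediaWiki, the structured data it stores can also be queried programmatically through its "ask" API module. The sketch below (in Python; the wiki URL, category, and property names are examples only) retrieves a small result set and prints the requested property values.

  # Sketch: querying structured data from a Semantic MediaWiki via its
  # "ask" API. The wiki URL and property names are illustrative; the exact
  # JSON layout can vary slightly between SMW versions.
  import requests

  SMW_API = "https://smw.example.org/w/api.php"  # hypothetical SMW instance

  # An #ask query: all pages in Category:Event, printing their "Has date" property.
  ask_query = "[[Category:Event]]|?Has date|limit=10"

  r = requests.get(SMW_API, params={"action": "ask", "query": ask_query, "format": "json"})
  r.raise_for_status()
  for title, page in r.json()["query"]["results"].items():
      print(title, page["printouts"].get("Has date"))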

Proceedings of the 4th Canadian Semantic Web Symposium (CSWS 2013) now at CEUR

The complete proceedings of the 4th Canadian Semantic Web Symposium (CSWS 2013) are now available on the CEUR-WS.org website as Volume 1054.

The complete volume contains abstracts for the two invited talks, two full papers, two short papers, five early career track papers, and four systems papers. Individual papers can be downloaded from the CEUR-WS.org site, where you can also find a BibTeX file with all references.

4th Canadian Semantic Web Symposium (CSWS 2013), Montréal, Canada

The Fourth Canadian Semantic Web Symposium will be held at Concordia University, Montreal, Quebec, on July 10, 2013. CSWS 2013 aims to bring together Canadian and international researchers in semantic technologies and knowledge management to discuss issues related to the Semantic Web.

The event is part of the Semantic Trilogy 2013 featuring:

  • International Conference on Biomedical Ontologies (ICBO 2013)
  • Canadian Semantic Web Symposium (CSWS 2013)
  • Data Integration in the Life Sciences (DILS 2013)

For more information, please refer to:

Web: http://www.unbsj.ca/sase/csas/data/ws/csws2013/index.html
Twitter: https://twitter.com/CSWS2013 (@CSWS2013)
Google+: https://plus.google.com/events/cdkiqq1fuatjplirn5gcvm2i31c
Registration: http://www.unbsj.ca/sase/csas/data/ws/semantic-trilogy-2013/registration...

Wiki-NLP Integration Research in Concordia NOW Newsletter

Our research on Natural Language Processing (NLP) for wiki systems has been featured in Concordia University's NOW newsletter. The article, which explains the technology and its applications to a general audience, quickly became one of the most read and shared articles of the week.

Natural Language Processing for MediaWiki: First major release of the Semantic Assistants Wiki-NLP Integration

We are happy to announce the first major release of our Semantic Assistants Wiki-NLP integration. This is the first comprehensive open source solution for bringing Natural Language Processing (NLP) to wiki users, in particular for wikis based on the well-known MediaWiki engine and its Semantic MediaWiki (SMW) extension. It allows you to offer novel text mining assistants to wiki users, e.g., for automatically structuring wiki pages, answering questions in natural language, quality assurance, entity detection, and summarization. These assistants are deployed in the General Architecture for Text Engineering (GATE) and brokered as web services through the Semantic Assistants server.
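To complete the picture sketched in the tutorial above, the results such an assistant produces eventually have to flow back into the wiki. A minimal sketch of that step, using only the standard MediaWiki edit API (authentication is omitted, and the page title and section heading are illustrative), could look as follows.

  # Sketch of writing NLP output back into the wiki via MediaWiki's edit API.
  # A logged-in session is assumed; the title and heading are illustrative.
  import requests

  WIKI_API = "https://wiki.example.org/w/api.php"
  session = requests.Session()  # assumed to be authenticated already

  def append_summary(title, summary):
      # 1. Obtain a CSRF token (required for any edit).
      tok = session.get(WIKI_API, params={
          "action": "query", "meta": "tokens", "format": "json",
      }).json()["query"]["tokens"]["csrftoken"]

      # 2. Append the NLP-generated text as a new section at the end of the page.
      r = session.post(WIKI_API, data={
          "action": "edit", "title": title,
          "appendtext": "\n== Automatic summary ==\n" + summary,
          "summary": "Add NLP-generated summary",
          "token": tok, "format": "json",
      })
      r.raise_for_status()

  append_summary("Main Page", "This page describes ...")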

OpenTrace Showcased at the WCRE'12 Conference


Last week, I introduced the very first release of the OpenTrace tool at this year's WCRE conference in the lovely city of Kingston, Ontario. This four-day event was the 19th edition of the Working Conference on Reverse Engineering and hosted talks from research and industry on state-of-the-art techniques for program comprehension of software systems.

Wiki-NLP Integration at the WikiSym'12 Conference


WikiSym is an international symposium on wikis and open collaboration techniques, mainly focused on wiki research and practice. Back in 2007, we coined the term "self-aware" wiki systems in our paper submitted to WikiSym '07, fostering the idea that integrating Natural Language Processing (NLP) techniques within wiki systems allows them to read, understand, transform, and even write their own content, as well as to support their users in information analysis and content development. Now, a few years later, we have realized this idea through an open service-oriented architecture.
