Semantic Software Lab
Concordia University
Montréal, Canada

Blogroll

WeVerify: Algorithm-Supported Verification of Digital Content

Announcing WeVerify: an algorithm-supported method for digital content verification. The WeVerify platform will provide an independent and community-driven environment for the verification of online content, with further verification provided by expert partners. Prof. Kalina Bontcheva will serve as the Scientific Director of the project.


Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Content verification is currently far from trivial, even for experienced journalists, human rights activists or media literacy scholars. Moreover, recent advances in artificial intelligence (deep learning) have enabled the creation of intelligent bots and highly realistic synthetic multimedia content. Consequently, it is extremely challenging for citizens and journalists to assess the credibility of online content and to navigate the highly complex online information landscape.

WeVerify aims to address these complex content verification challenges through a participatory verification approach, open-source algorithms, low-overhead human-in-the-loop machine learning and intuitive visualisations. Social media and web content will be analysed and contextualised within the broader online ecosystem in order to expose fabricated content, using cross-modal content verification, social network analysis, micro-targeted debunking and a blockchain-based public database of known fakes.



A key outcome will be the WeVerify platform for collaborative, decentralised content verification, tracking, and debunking.
The platform will be open source to engage communities and citizen journalists alongside newsroom and freelance journalists. To enable low-overhead integration with in-house content management systems and support more advanced newsroom needs, a premium version of the platform will also be offered. It will furthermore be supplemented by a digital companion to assist with verification tasks.

Results will be validated by professional journalists and debunking specialists from project partners (DW, AFP, DisinfoLab), external participants (e.g. members of the First Draft News network), the community of more than 2,700 users of the InVID verification plugin, and by media literacy, human rights and emergency response organisations.

The WeVerify website can be found at https://weverify.eu/, and WeVerify can be found on Twitter @WeV3rify!
Categories: Blogroll

Coming Up: 12th GATE Summer School 17-21 June 2019

It is approaching that time of the year again! The GATE training course will be held from 17-21 June 2019 at the University of Sheffield, UK.

No previous experience or programming expertise is necessary, so it's suitable for anyone with an interest in text mining and using GATE, including people from humanities, social science and other backgrounds.

This event will follow a similar format to that of the 2018 course, with one track Monday to Thursday, and two parallel tracks on Friday, all delivered by the GATE development team. You can read more about it and register here. Early bird registration is available at a discounted rate until 1 May.

The focus will be on mining text and social media content with GATE. Many of the hands-on exercises will focus on analysing news articles, tweets, and other textual content.

The planned schedule is as follows (NOTE: may still be subject to timetabling changes).
Single track from Monday to Thursday (9am - 5pm):
  • Monday: Module 1: Basic Information Extraction with GATE
    • Intro to GATE + Information Extraction (IE)
    • Corpus Annotation and Evaluation
    • Writing Information Extraction Patterns with JAPE
  • Tuesday: Module 2: Using GATE for social media analysis
    • Challenges for analysing social media, GATE for social media
    • Twitter intro + JSON structure
    • Language identification, tokenisation for Twitter
    • POS tagging and Information Extraction for Twitter
  • Wednesday: Module 3: Crowdsourcing, GATE Cloud/MIMIR, and Machine Learning
    • Crowdsourcing annotated social media content with the GATE crowdsourcing plugin
    • GATE Cloud, deploying your own IE pipeline at scale (how to process 5 million tweets in 30 mins)
    • GATE Mimir - how to index and search semantically annotated social media streams
    • Challenges of opinion mining in social media
    • Training Machine Learning Models for IE in GATE
  • Thursday: Module 4: Advanced IE and Opinion Mining in GATE
    • Advanced Information Extraction
    • Useful GATE components (plugins)
    • Opinion mining components and applications in GATE
On Friday, there is a choice of modules (9am - 5pm):
  • Module 5: GATE for developers
    • Basic GATE Embedded
    • Writing your own plugin
    • GATE in production - multi-threading, web applications, etc.
  • Module 6: GATE Applications
    • Building your own applications
    • Examples of some current GATE applications: social media summarisation, visualisation, Linked Open Data for IE, and more
These two modules are run in parallel, so you can only attend one of them. You will need to have some programming experience and knowledge of Java to follow Module 5 on the Friday. No particular expertise is needed for Module 6.
Hope to see you in Sheffield in June!
Categories: Blogroll

Python: using ANNIE via its web API

GATE Cloud is GATE, the world-leading text-analytics platform, made available on the web with both human user interfaces and programmatic ones.

My name is David Jones and part of my role is to make it easier for you to use GATE. This article is aimed at Python programmers and people who are, rightly, curious to see if Python can help with their text analysis work.

GATE Cloud exposes a web API for many of its services. In this article, I'm going to sketch an example in Python that uses the GATE Cloud API to call ANNIE, the English Named Entity Recognizer.



I'm writing in Python 3 using the really excellent requests library.

The GATE Cloud API documentation describes the general outline of using the API, which is that you make an HTTP request setting particular headers.

The full code that I'm using is available on GitHub and is installable and runnable.

A simple use is to pass text to ANNIE and get annotated results back.
In terms of Python:

    text = "David Jones joined the University of Sheffield this year"
    headers = {'Content-Type': 'text/plain'}
    response = requests.post(url, data=text, headers=headers)
The Content-Type header is required and specifies the MIME type of the text we are sending. In this case it's text/plain but GATE Cloud supports many types including PDF, HTML, XML, and Twitter's JSON format; details are in the GATE Cloud API documentation.
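
For instance, a sketch of sending HTML instead of plain text might look like the following; the 'text/html' MIME type string is an assumption on my part (it is the standard one), so check the GATE Cloud API documentation for the exact types the service accepts:

    # Sending an HTML document instead of plain text; the MIME type string
    # 'text/html' is assumed here, see the API documentation for supported types.
    html = "<html><body><p>David Jones joined the University of Sheffield.</p></body></html>"
    response = requests.post(url, data=html.encode('utf-8'),
                             headers={'Content-Type': 'text/html'})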

The default output is JSON; in this case, once I've used Python's json.dumps(thing, indent=2) to format it nicely, it looks like this:
 {
  "text": "David Jones joined the University of Sheffield this year",
  "entities": {
    "Date": [
      {
        "indices": [
          47,
          56
        ],
        "rule": "ModifierDate",
        "ruleFinal": "DateOnlyFinal",
        "kind": "date"
      }
    ],
    "Organization": [
      {
        "indices": [
          23,
          46
        ],
        "orgType": "university",
        "rule": "GazOrganization",
        "ruleFinal": "OrgFinal"
      }
    ],
    "Person": [
      {
        "indices": [
          0,
          11
        ],
        "firstName": "David",
        "gender": "male",
        "surname": "Jones",
        "kind": "fullName",
        "rule": "PersonFull",
        "ruleFinal": "PersonFinal"
      }
    ]
  }
}

The JSON returned here is designed to have a similar structure to the format used by Twitter: Tweet JSON. The outermost dictionary has a text key and an entities key. The entities object is a dictionary that contains arrays of annotations of different types; each annotation being a dictionary with an indices key and other metadata. I find this kind of thing is impossible to describe and impossible to work with until I have an example and half-working code in front of me.

The full Python example uses this code to unpick the annotations and display their type and text:

    gate_json = response.json()
    response_text = gate_json["text"]
    for annotation_type, annotations in gate_json["entities"].items():
        for annotation in annotations:
            i, j = annotation["indices"]
            print(annotation_type, ":", response_text[i:j])
With the text I gave above, I get this output:
Date : this year
Organization : University of Sheffield
Person : David Jones

We can see that ANNIE has correctly picked out a date, an organisation, and a person from the text. It's worth noting that the JSON output has more detail that I'm not using in this example: "University of Sheffield" is identified as a university; "David Jones" is identified with the gender "male".

Some notes on programming
  • requests is nice.
  • Content-Type header is required.
  • requests has a response.json() method which is a shortcut for parsing the JSON into Python objects.
  • the JSON response has a text field, which is the text that was analysed (in my example they are the same, but for PDF we need the linear text so that we can unambiguously assign index values within it).
  • the JSON response has an entities field, which is where all the annotations are, first separated and keyed by their annotation type.
  • the indices returned in the JSON are 0-based end-exclusive which matches the Python string slicing convention, hence we can use response_text[i:j] to get the correct piece of text.
Quota and API keys
The public service has a fairly limited quota, but if you create an account on GATE Cloud you can create an API key which will allow you to access the service with increased quota and fewer limits.

To use your API key, use HTTP basic authentication, passing in the Key ID as the user-id and the API key password as the password. requests makes this pretty simple, as you can supply auth=(user, pass) as an additional keyword argument to requests.post(). Possibly even simpler, though, is to put those values in your ~/.netrc file (_netrc on Windows):

    machine cloud-api.gate.ac.uk
    login 71rs93h36m0c
    password 9u8ki81lstfc2z8qjlae

The nice thing about this is that requests will find and use these values automatically without you having to write any code.
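
If you prefer to pass the credentials explicitly rather than relying on ~/.netrc, a minimal sketch looks like this (the Key ID and password below are placeholders, not real credentials):

    # Explicit HTTP basic auth with a GATE Cloud API key; the values are
    # placeholders standing in for your own Key ID and password.
    response = requests.post(url, data=text, headers=headers,
                             auth=('your-key-id', 'your-key-password'))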

Go try using the web API now, and let us know how you get on!
Categories: Blogroll

Brexit--The Regional Divide


Referendum result
Although the UK voted by a narrow margin to leave the EU in the 2016 EU membership referendum, that overall outcome fails to capture the diverse feelings held in various regions. It's a curious observation that the UK regions with the most economic dependence on the EU were the regions more likely to vote to leave it. The image below on the right is taken from this article from the Centre for European Reform, and makes the point in a few different ways. This and similar research inspired a current project the GATE team are undertaking with colleagues in the Geography and Journalism departments at Sheffield University, under the leadership of Miguel Kanai and with funding from the British Academy, aiming to understand whether a lack of awareness of people's individual local situations played a role in the referendum outcome.
Our Brexit tweet corpus contains tweets collected during the run-up to the Brexit referendum, and we've annotated almost half a million accounts for Brexit vote intent with high accuracy. You can read about that here. So we thought we'd be well positioned to bring some insights. We also annotated user accounts with location: many Twitter users volunteer that information, though there can be a lot of variation in how people describe their location, so that was harder to do accurately. We also used local and national news media corpora from the time of the referendum, in order to contrast national coverage with the local issues around the country.
Topic representation in different media
"People's resistance to propaganda and media‐promoted ideas derives from their close ties in real communities"
Jean Seaton

Using topic modelling and named entity recognition, we were able to look for similarities and differences in the focus of local and national media and Twitter users. The bar chart on the left gets us started, illustrating that foci differ between media. Twitter users give more air time than news media to trade and immigration, whereas local press takes the lead on employment, local politics and agriculture. National press gives more space to terrorism than either Twitter or local news.

NER diff between national and local press
On the right is just one of many graphs in which we unpack this on a region-by-region basis (you can find more on the project website). In this choropleth, red indicates that the topic was significantly more discussed in national press than in local press in that area, and green indicates that the topic was significantly more discussed in local press there than in national press. Terrorism and immigration have perhaps been subject to a certain degree of media and propaganda inflation--we talk about this in our Social Informatics paper. Where media focus on locally relevant issues, foci are more grounded, for example in practical topics such as agriculture and employment. We found that across the regions, Twitter remainers showed a closer congruence with local press than Twitter leavers.
The graph on the right shows the number of times a newspaper was linked on Twitter, contrasted against the percentage of people who said they read that newspaper in the British Election Study. It shows that the dynamics of popularity on Twitter are very different from those of traditional readership. This highlights a need to understand how the online environment is affecting the news reportage we are exposed to, creating a market for a different kind of material and a potentially more hostile climate for quality journalism, as discussed by project advisor Prof. Jackie Harrison here. Furthermore, the local press is increasingly struggling to survive, so it feels important to highlight its value through this work.
You can see more choropleths on the project website. There's also an extended version here of an article currently under review.
Categories: Blogroll

Code submission should be encouraged but not compulsory

Machine Learning Blog - Tue, 2019-02-26 12:27

ICML, ICLR, and NeurIPS are all considering or experimenting with code and data submission as a part of the reviewer or publication process, with the hypothesis that it aids reproducibility of results. Reproducibility has been a rising concern, with discussions in papers, workshops, and invited talks.

The fundamental driver is of course lack of reproducibility. Lack of reproducibility is an inherently serious and valid concern for any kind of publishing process where people rely on prior work to compare with and do new things. Lack of reproducibility (due to random initialization for example) was one of the things leading to a period of unpopularity for neural networks when I was a graduate student. That has proved nonviable (Surprise! Learning circuits is important!), but the reproducibility issue remains. Furthermore, there is always an opportunity and latent suspicion that authors ‘cheat’ in reporting results which could be allayed using a reproducible approach.

With the above said, I think the reproducibility proponents should understand that reproducibility is a value but not an absolute value. As an example here, I believe it’s quite worthwhile for the community to see AlphaGoZero published even if the results are not necessarily easily reproduced. There is real value for the community in showing what is possible irrespective of whether or not another game with the same master of Go is possible, and there is real value in having an algorithm like this be public even if the code is not. Treating reproducibility as an absolute value could exclude results like this.

An essential understanding here is that machine learning is (at least) 3 different kinds of research.

  • Algorithms: The goal is coming up with a better algorithm for solving some category of learning problems. This is the most typical viewpoint at these conferences.
  • Theory: The goal is generally understanding what is possible or not possible for learning algorithms. Although these papers may have algorithms, they are often not the point and demanding an implementation of them is a waste of time for author, reviewer, and reader.
  • Applications: The goal is solving some particular task. AlphaGoZero is a reasonable example of this—it was about beating the world champion in Go, with algorithmic development in service of that. For this kind of research, perfect programmatic reproducibility may be infeasible because the computation is too extreme, the data is proprietary, etc…

Using a one-size-fits-all approach where you demand that every paper “is” a programmatically reproducible implementation is a mistake that would create a division that reduces our community. Keeping this three-fold focus fundamentally enriches the community both literally and ontologically.

Another view here is provided by considering the argument at a wider scope. Would you prefer that health regulations/treatments be based on all scientific studies, including those where data is not fully released to the public (i.e. almost all of them, for privacy reasons)? Or would you prefer that health regulations/treatments be based only on data fully released to the public? Preferring the latter is equivalent to ignoring most scientific studies in making decisions.

The alternative to a compulsory approach is to take an additive view. The additive approach has a good track record amongst reviewing process changes.

  • When I was a graduate student, papers were not double blind. The community switched to double blind because it adds an opportunity for reviewers to review fairly and it gives authors a chance to have their work reviewed fairly whether they are junior or senior. As a community we also do not restrict posting on arxiv or talks about a paper before publication, because that would subtract from what authors can do. Double blind reviewing could be divisive, but it is not when used in this fashion.
  • When I was a graduate student, there was also a hard limit on the number of pages in submissions. For theory papers this meant that proofs were not included. We changed the review process to allow (but not require) submission of an appendix which could optionally be used by reviewers. This again adds to the options available to authors/reviewers and is generally viewed as positive by everyone involved.

What can we add to the community in terms of reproducibility?

  1. Can reviewers do a better job of reviewing if they have access to the underlying code or data?
  2. Can authors benefit from releasing code?
  3. Can readers of a paper benefit from an accompanying code release?

The answer to each of these questions is a clear ‘yes’ if done right.

For reviewers, it’s important to not overburden them. They may lack the computational resources, platform, or personal time to do a full reproduction of results even if that is possible. Hence, we should view code (and data) submission in the same way as an appendix which reviewers may delve into and use if they so desire.

For authors, code release has two benefits—it provides an additional avenue for convincing reviewers who default to skeptical and it makes followup work significantly more likely. My most cited paper was Isomap which did indeed come with a code release. Of course, this is not possible or beneficial for authors in many cases. Maybe it’s a theory paper where the algorithm isn’t the point? Maybe either data or code can’t be fully released since it’s proprietary? There are a variety of reasons. From this viewpoint we see that releasing code should be supported and encouraged but optional.

For readers, having code (and data) available obviously adds to the depth of value that a paper has. Not every reader will take advantage of that but some will and it enormously reduces the barrier to using a paper in many cases.

Let’s assume we do all of these additive and enabling things, which is about where Kamalika and Russ aimed the ICML policy this year.

Is there a need to go further towards compulsory code submission? I don’t yet see evidence that default skeptical reviewers aren’t capable of weighing the value of reproducibility against other values in considering whether a paper should be published.

Should we do less than the additive and enabling things? I don’t see why—the additive approach provides pure improvements to the author/review/publish process. Not everyone is able to take advantage of this, but that seems like a poor reason to restrict others from taking advantage when they can.

One last thing to note is that this year’s code submission process is an experiment. We should all want program chairs to be able to experiment, because that is how improvements happen. We should do our best to work with such experiments, try to make a real assessment of success/failure, and expect adjustments for next year.

Categories: Blogroll

GATE team wins first prize in the Hyperpartisan News Detection Challenge

SemEval 2019 recently launched the Hyperpartisan News Detection Task in order to evaluate how well tools could automatically classify hyperpartisan news texts. The idea behind this is that "given a news text, the system must decide whether it follows a hyperpartisan argumentation, i.e. whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person".

Below we see an example of (part of) two news stories about Donald Trump from the challenge data. The one on the left is considered to be hyperpartisan, as it shows a biased kind of viewpoint. The one on the right simply reports a story and is not considered hyperpartisan. The distinction is difficult even for humans, because there are no exact rules about what makes a story hyperpartisan.






In total, 322 teams registered to take part, of which 42 actually submitted an entry, including the GATE team consisting of Ye Jiang, Xingyi Song and Johann Petrak, with guidance from Kalina Bontcheva and Diana Maynard.

The main performance measure for the task is accuracy on a balanced set of articles, though precision, recall, and F1-score were additionally measured for the hyperpartisan class. In the final submission, the GATE team's hyperpartisan classification algorithm achieved 0.822 accuracy on the manually annotated evaluation set, and was ranked in first position on the final leaderboard.


Our winning system was based on using sentence representations from averaged word embeddings generated from the pre-trained ELMo model, with a Convolutional Neural Network and Batch Normalization for training on the provided dataset. An averaged ensemble of models was then used to generate the final predictions.

The source code and full system description are available on GitHub.

One of the major challenges of this task is that the model must be able to adapt to a large range of article sizes. Most state-of-the-art neural network approaches for document classification use a token sequence as network input, but such an approach in this case would mean either a massive computational cost or a loss of information, depending on how the maximum sequence length is set. We got around this problem by first pre-calculating sentence-level embeddings as the average of the word embeddings for each sentence, and then representing the document as a sequence of these sentence embeddings. We also found that ignoring some of the provided training data (which was automatically generated based on the document's publishing source) actually improved our results, which leads to important conclusions about the trustworthiness of training data and its implications.
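
As an illustration only, here is a minimal sketch of the sentence-averaging idea (not the team's released code, which is linked above); the word_vectors lookup mapping a token to its embedding is an assumed stand-in for the ELMo-derived vectors:

    import numpy as np

    def document_as_sentence_embeddings(sentences, word_vectors, dim=1024):
        """Represent a document as a sequence of sentence embeddings, each the
        average of the word embeddings of that sentence's tokens."""
        doc = []
        for tokens in sentences:
            vecs = [word_vectors[t] for t in tokens if t in word_vectors]
            doc.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
        return np.stack(doc)  # shape: (number of sentences, embedding dimension)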

Overall, the ability to do well on the hyperpartisan news prediction task is important both for improving knowledge about neural networks for language processing generally, and because a better understanding of the nature of biased news is critical for society and democracy.





Categories: Blogroll

Russian Troll Factory: Sketches of a Propaganda Campaign

When Twitter shared a large archive of propaganda tweets late in 2018, we were excited to get access to over 9 million tweets from almost 4 thousand unique Twitter accounts controlled by Russia's Internet Research Agency. The tweets are posted in 57 different languages, but most are in Russian (53.68%) and English (36.08%). The average account age is around four years, and the oldest accounts are as much as ten years old.
A large amount of activity in both the English and Russian accounts is given over to news provision. Many accounts also seem to engage in hashtag games, which may be a way to establish an account and get some followers. Of particular interest, however, are the political trolls. Left trolls pose as individuals interested in the Black Lives Matter campaign. Right trolls are patriotic, anti-immigration Trump supporters. Among left and right trolls, several have achieved large follower numbers and even a degree of fame. Finally there are fearmonger trolls, which propagate scares, and a small number of commercial trolls. The Russian-language accounts divide along similar lines, perhaps posing as individuals with opinions about Ukraine or western politics. These categories were proposed by Darren Linvill and Patrick Warren, from Clemson University. In the word clouds below you can see the hashtags we found left and right trolls using.

Left Troll Hashtags
Right Troll Hashtags

Mehmet E. Bakir has created some interactive graphs enabling us to explore the data. In the network diagram at the start of the post you can see the network of mention/retweet/reply/quote counts we created from the highly followed accounts in the set. You can click through to an interactive version, where you can zoom in and explore different troll types.
In the graph below, you can see activity in different languages over time (interactive version here, or interact with the embedded version below; you may have to scroll right). It shows that the Russian language operation came first, with English language operations following after. The timing of this part of the activity coincides with Russia's interest in Ukraine.

In the graph below, also available here, you can see how different types of behavioural strategy pay off in terms of achieving higher numbers of retweets. Using Linvill and Warren's manually annotated data, Mehmet built a classifier that enabled us to classify all the accounts in the dataset. It is evident that the political trolls have by far the greatest impact in terms of retweets achieved, with left trolls being the most successful. Russia's interest in the Black Lives Matter campaign perhaps suggests that the first challenge for agents is to win a following, and that exploiting divisions in society is an effective way to do that. How that following is then used to influence minds is a separate question. You can see a pre-print of our paper describing our work so far, in the context of the broader picture of partisanship, propaganda and post-truth politics, here.
Categories: Blogroll

Teaching computers to understand the sentiment of tweets

As part of the EU SoBigData project, the GATE team hosts a number of short research visits, between 2 weeks and 2 months, for all kinds of data scientists (PhD students, researchers, academics, professionals) to come and work with us and to use our tools and/or datasets on a project involving text mining and social media analysis. Kristoffer Stensbo-Smidt visited us in the summer of 2018 from the University of Copenhagen, to work on developing machine learning tools for sentiment analysis of tweets, and was supervised by GATE team member Diana Maynard and by former team member Isabelle Augenstein, who is now at the University of Copenhagen. Kristoffer has a background in Machine Learning but had not worked in NLP before, so this visit helped him understand how to apply his skills to this kind of domain.

After his visit, Kristoffer wrote up an excellent summary of his research. He essentially tested a number of different approaches to processing text, and analysed how much of the sentiment they were able to identify. Given a tweet and an associated topic, the aim is to ascertain automatically whether the sentiment expressed about this topic is positive, negative or neutral. Kristoffer experimented with different word embedding-based models in order to test how much information different word embeddings carry about the sentiment of a tweet. This involved choosing which embedding models to test, and how to transform the topic vectors. The main conclusion he drew from the work was that, in general, word embeddings contain a lot of useful information about sentiment, with newer embeddings containing significantly more. This is not particularly surprising, but shows the importance of advanced models for this task.
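
By way of illustration only, a minimal sketch of one such embedding-based setup might look like the following; this is an assumption about the general shape of the approach, not Kristoffer's actual pipeline, and the embeddings lookup and labelled training data are placeholders:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def average_embedding(tokens, embeddings, dim=300):
        """Average the embeddings of the tokens found in the vocabulary."""
        vecs = [embeddings[t] for t in tokens if t in embeddings]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    def features(tweet_tokens, topic_tokens, embeddings):
        # Concatenate the averaged tweet embedding with the averaged topic embedding.
        return np.concatenate([average_embedding(tweet_tokens, embeddings),
                               average_embedding(topic_tokens, embeddings)])

    # X_train: feature vectors; y_train: 'positive' / 'negative' / 'neutral' labels
    # classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)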



Categories: Blogroll

3rd International Workshop on Rumours and Deception in Social Media (RDSM)

June 11, 2019 in Munich, Germany
Collocated with ICWSM'2019

Abstract

The 3rd edition of the RDSM workshop will particularly focus on online information disorder and its interplay with public opinion formation.

Social media is a valuable resource for mining all kinds of information, from opinions to factual information. However, social media also harbours issues that pose serious threats to society. Online information disorder and its power to shape public opinion lead this category of issues. Among the known aspects are the spread of false rumours, fake news, and even social attacks such as hate speech or other forms of harmful social posts. In this workshop the aim is to bring together researchers and practitioners interested in social media mining and analysis to deal with the emerging issues of information disorder and manipulation of public opinion. The focus of the workshop will be on themes such as the detection of fake news, verification of rumours and the understanding of their impact on public opinion. Furthermore, we aim to put a great emphasis on the usefulness and trust aspects of automated solutions tackling the aforementioned themes.
Workshop Theme and Topics

The aim of this workshop is to bring together researchers and practitioners interested in social media mining and analysis to deal with the emerging issues of veracity assessment, fake news detection and manipulation of public opinion. We invite researchers and practitioners to submit papers reporting results on these issues. Qualitative studies performing user studies on the challenges encountered with the use of social media, such as the veracity of information and fake news detection, as well as papers reporting new data sets are also welcome. Finally, we also welcome studies reporting the usefulness and trust of social media tools tackling the aforementioned problems.


Topics of interest include, but are not limited to:
  • Detection and tracking of rumours.
  • Rumour veracity classification.
  • Fact-checking social media.
  • Detection and analysis of disinformation, hoaxes and fake news.
  • Stance detection in social media.
  • Qualitative user studies assessing the use of social media.
  • Bots detection in social media.
  • Measuring public opinion through social media.
  • Assessing the impact of social media in public opinion.
  • Political analyses of social media.
  • Real-time social media mining.
  • NLP for social media analysis.
  • Network analysis and diffusion of dis/misinformation.
  • Usefulness and trust analysis of social media tools.
  • AI-generated fake content (image/text).

Workshop Program Format

We will have 1-2 experts in the field delivering keynote speeches. We will then have a set of 8-10 presentations of peer-reviewed submissions, organised into 3 sessions by subject (the first two sessions about online information disorder and public opinion, and the third session about the usefulness and trust aspects). After the sessions we also plan to have group work (groups of 4-5 attendees) where each group will sketch a social media tool for tackling e.g. rumour verification, fake news detection, etc. The emphasis of the sketch should be on aspects like usefulness and trust. This should take no longer than 120 minutes (sketching, presentation/discussion time). We will close the workshop with a summary and take-home messages (max. 15 minutes). Attendance will be open to all interested participants.

We welcome both full papers (5-8 pages) to be presented as oral talks and short papers (2-4 pages) to be presented as posters and demos.


Workshop Schedule/Important Dates
  • Submission deadline: April 1st 2019
  • Notification of Acceptance: April 15th 2019
  • Camera-Ready Versions Due: April 26th 2019
  • Workshop date: June 11, 2019  
 Submission Procedure
We invite two kinds of submissions:

-  Long papers/Brief Research Report (max 8 pages + 2 references)
-  Demos and poster (short papers) (max 4 pages + 2 references)
Proceedings of the workshop will be published jointly with other ICWSM workshops in a special issue of Frontiers in Big Data.

Papers must be submitted electronically in PDF format or any format that is supported by the submission site through https://www.frontiersin.org/research-topics/9706 (click on "Submit your manuscript"). Note, submitting authors should choose one of the specific track organizers as their preferred Editor. You can find detailed information on the file submission requirements here: https://www.frontiersin.org/about/author-guidelines#FileRequirements

Submissions will be peer-reviewed by at least three members of the programme committee. The accepted papers will appear in the proceedings published at https://www.frontiersin.org/research-topics/9706


Workshop Organizers
Programme Committee (Tentative)
  • Nikolas Aletras, University of Sheffield, UK
  • Emilio Ferrara, University of Southern California, USA
  • Bahareh Heravi, University College Dublin, Ireland
  • Petya Osenova, Ontotext, Bulgaria
  • Damiano Spina, RMIT University, Australia
  • Peter Tolmie, Universität Siegen, Germany
  • Marcos Zampieri, University of Wolverhampton, UK
  • Milad Mirbabaie, University of Duisburg-Essen, Germany
  • Tobias Hecking, University of Duisburg-Essen, Germany 
  • Kareem Darwish, QCRI, Qatar
  • Hassan Sajjad, QCRI, Qatar
  • Sumithra Velupillai, King's College London, UK
 Invited Speaker(s)
To be announced
Sponsors

This workshop is supported by the European Union under grant agreement No. 654024, SoBigData.


It is also supported by WeVerify, the EU co-funded Horizon 2020 project that deals with algorithm-supported verification of digital content.


Categories: Blogroll

SoBigData funded travel grant for short-term visiting Scholar

As part of SoBigData's Transnational Access (TNA) activities, the Department of Computer Science at Sheffield University is keen to host scholars from non-UK universities who would like to visit Sheffield to undertake a short period of research, as part of a scheme to promote international cooperation and the dissemination of knowledge. Grants are available to cover 1-2 months of research for scholars at non-UK universities/organisations. During the visit, scholars will join one of the following research projects:

  • Social media part-of-speech tagging in multiple languages — Part of speech is one of the most widely used linguistic features for analysing social media content. The project aims to build models to tag social media content with the universal POS tag set.
  • Social media named entity recognition in multiple languages — The presentation of named entities in social media generally differs from that in news articles, so NER systems trained on news articles do not perform well on social media. The aim of this project is to build NER models for social media in different European languages.
  • Sentiment analysis for Twitter posts — Sentiment analysis is one of the basic components used to analyse societal debates. This project aims to build a sentiment analysis model based on short and noisy Twitter posts.

What is covered (up to 4500 euros):
  • Return flight/train tickets to Sheffield
  • Accommodation during the visiting period
  • Daily subsistence
  • GATE Summer School
  • Mentor from GATE members
Deadlines:
  • Application before: 30 March 2019
  • Notification: within 2 months after submission

Eligibility Requirements: Candidates must:
  • have a PhD degree or be enrolled in a doctoral programme offered by an educational institution recognised by that country's authorities
  • not be enrolled as a student at, or employed by, a higher education institution in the United Kingdom
  • resume studies/work in their home country after the end of the grant period

How to apply:
Applicants should apply through SoBigData Transnational Access (http://www.sobigdata.eu/content/open-call-sobigdata-funded-transnational-access).
Submit completed application form (http://www.sobigdata.eu/sites/default/files/SoBigData%20TNA%202018-c%20Application%20form.doc) to ta-admin@sobigdata.eu
For any questions related to the projects, please contact Xingyi Song (x.song@sheffield.ac.uk).
Categories: Blogroll