Doctoral dissertation defence by Isabelle Augenstein


Isabelle Augenstein, Associate Professor

Natural Language Processing Section, Department of Computer Science at UCPH.

Title

Towards Explainable Fact Checking

Abstract

The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to influence politics, to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact-checking, from approaches to detecting check-worthy claims and determining the stance of tweets towards claims, to methods for determining the veracity of claims given evidence documents.

These automatic methods are often content-based, relying on natural language processing techniques that use deep neural networks to learn higher-order features from text in order to make predictions. As deep neural networks are black-box models, their inner workings cannot be easily explained. At the same time, it is desirable to explain how they arrive at certain decisions, especially if they are to be used for decision making. While this has been known for some time, the issue has been exacerbated by models growing in size, by EU legislation requiring models used for decision making to provide explanations, and, very recently, by legislation requiring online platforms operating in the EU to report transparently on their services.

Despite this, current solutions for explainability are still lacking in the area of fact-checking. A further general shortcoming of such deep learning-based methods is that they require large amounts of in-domain training data to produce reliable explanations. As automatic fact-checking is a very recently introduced research area, few sufficiently large datasets exist. Research on how to learn from limited amounts of training data, for instance how to adapt to unseen domains, is therefore needed. This thesis presents my research on automatic fact-checking, including claim check-worthiness detection, stance detection and veracity prediction.

Its contributions go beyond fact-checking: the thesis proposes more general machine learning solutions for natural language processing in the area of learning with limited labelled data. Finally, the thesis presents some first solutions for explainable fact-checking. Even so, the contributions presented here are only a start on the journey towards what is possible and needed. Future research should focus on more holistic explanations that combine instance- and model-based approaches, on developing large datasets for training models to generate explanations, and on collective intelligence and active learning approaches that use explainable fact-checking models to support decision making.

Time and place

Monday, 6 December 2021 at 13:00.

The defence will be hybrid, taking place both online and in person.
A Zoom link will be announced later.
The physical defence will take place at Universitetsparken 1, Auditorium “Store UP1”, 2100 København Ø.


Leader of the defence

Jacob Grue Simonsen, Professor, Head of Department at the Department of Computer Science, UCPH.

Chairperson

Christian Igel, Professor, Machine Learning Section at the Department of Computer Science, UCPH.

Official Opponents

  • Professor Dr. Iryna Gurevych, Technical University of Darmstadt.
  • Professor Lucia Specia, Imperial College London and University of Sheffield.

Opponents ex auditorio must contact the chairperson of the defence, Christian Igel, at igel@di.ku.dk.

For a copy of the assessment committee’s report and co-author statements, please contact the Faculty Secretariat.

An electronic copy of the dissertation is available here.

Please note that the entire defence will be recorded.

https://di.ku.dk/english/event-calendar-2021/doctorate-defence-by-isabelle-augenstein

https://scholar.google.co.uk/citations?user=DjJp0dcAAAAJ&hl=de