Semantics for Online Misinformation Detection, Monitoring, and Prediction
#SEMIFORM2020

Workshop at ISWC 2020

News

The workshop proceedings have been published as part of the Advances in Semantics and Linked Data: Joint Workshop Proceedings from ISWC 2020, CEUR Volume 2722.

Although the submission deadline has passed, and given that this workshop and the main ISWC 2020 conference are now virtual events, we are happy to continue receiving new submissions until September 15th.

Motivation

Digital misinformation has become pervasive across online media, to the extent that the World Economic Forum has listed it as one of the main threats to our society, generating technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of global governance. In particular, there is evidence that misinformation has been affecting our perceptions in critical domains such as health, politics, foreign policy, the economy, and the environment. Indeed, disinformation has been found to pose high risks to society by accelerating extremism, hysteria, and herding behaviour.

In spite of the rising addiction to rapid consumption of online news, people and current technologies have yet to adapt to the age of misinformation. As a recent European Parliament report states, there is an urgent need to develop a new generation of tools for content verification and for misinformation analysis and tracking.

Some technologies have been developed to help validate different types of content (images, videos, bot accounts, news sources, reviews, etc.). Examples include NewsGuard, B.S.Detector, ClaimBuster, TinEye.com, and Rbutr.com. Other efforts have focused on techniques for automatically identifying false news, rumour posts, and disputed arguments, measuring the credibility of posts, validating specific claims, or tracking the spread of disinformation. These technologies often rely on standard machine learning techniques, with little to no use of semantic techniques and representations.

Although disinformation is a common problem across the media, it is exacerbated in digital social media by the speed and ease with which content spreads. The social web enables people to spread information rapidly without confirmation of its truth, and to paraphrase that information to fit their preconceptions, goals, and beliefs. This phenomenon affects society at all levels, including citizens, communities, journalists, and policymakers. Recent studies have shown that false news spreads far more virally than real news, sometimes thousands of times more. As a result, social media companies have come under heavy criticism and pressure to tackle the spread of misinformation, disinformation, and “fake news” on their platforms. However, these platforms have yet to offer adequate socio-technical solutions to this problem, and are unlikely to do so for various economic, technical, and ethical reasons.

On another front, more than 188 independent fact-checking groups and organisations have emerged online in over 60 countries over the past decade, an increase of 26% in the last year alone, with 61 of them based in Europe. These organisations provide the public with independent, professional fact checking of current news and information (e.g. elections, coronavirus), and many of them publish their fact checks using the schema.org ClaimReview semantic standard. However, simply publishing corrective information from fact checkers is often regarded as insufficient for changing misinformed beliefs and opinions.
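
ClaimReview fact checks are typically published as JSON-LD embedded in fact-checking pages. As a rough illustration only (the URLs, claim text, and rating scale below are invented placeholders rather than data from any real fact checker), a minimal piece of such markup could be assembled and serialised in Python as follows:

```python
# Minimal sketch of schema.org ClaimReview markup, built as a Python dict and
# serialised to JSON-LD. All values below are illustrative placeholders.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/123",   # hypothetical review URL
    "datePublished": "2020-05-01",
    "claimReviewed": "Drinking hot water cures the coronavirus.",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {
            "@type": "CreativeWork",
            "url": "https://social.example/post/456",   # where the claim appeared
        },
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",      # 1 = lowest truthfulness on this example scale
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "False",
    },
}

print(json.dumps(claim_review, indent=2))
```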

To this end, semantics could make major contributions to these efforts: detecting misinformation content, monitoring its spread and impact, predicting its evolution, identifying misinforming sources, and locating relevant fact-checking articles, drawing on background knowledge graphs with adequate references, such as Wikidata. Misinformation is a major societal challenge, and it is time for Semantic Web researchers and practitioners to join forces to tackle it.
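
For instance, linking the entities mentioned in a claim to a background knowledge graph is one common building block of such pipelines. The sketch below is purely illustrative, assuming the public Wikidata SPARQL endpoint and a hard-coded label lookup; it is one simple way such a lookup might be done, not a prescribed method:

```python
# Illustrative sketch: look up candidate Wikidata entities for a label via the
# public Wikidata SPARQL endpoint. The query and label are placeholders.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

query = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label "World Health Organization"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 5
"""

response = requests.get(
    WIKIDATA_SPARQL,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "misinfo-example/0.1 (illustration only)"},
)
response.raise_for_status()

# Print the entity URIs and labels returned by the endpoint.
for binding in response.json()["results"]["bindings"]:
    print(binding["item"]["value"], "->", binding["itemLabel"]["value"])
```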

Examples of recent relevant external publications:

  1. Fernandez, Miriam and Alani, Harith (2018). Online Misinformation: Challenges and Future Directions. In: WWW’18 Companion: The 2018 Web Conference Companion, ACM, New York, pp. 595–602.
  2. Mensio, Martino and Alani, Harith (2019). News Source Credibility in the Eyes of Different Assessors. In: Conference for Truth and Trust Online, 4-5 Oct 2019, London, UK.
  3. Mensio, Martino and Alani, Harith (2019). MisinfoMe: Who’s Interacting with Misinformation? In: 18th International Semantic Web Conference (ISWC 2019): Posters & Demonstrations, Industry and Outrageous Ideas Tracks, 26-30 Oct 2019, Auckland, New Zealand, CEUR-WS.
  4. Farrell, Tracie; Piccolo, Lara; Perfumi, Serena Coppolino and Alani, Harith (2019). Understanding the Role of Human Values in the Spread of Misinformation. In: Conference for Truth and Trust Online.
  5. Aker, A.; Sliwa, A.; Dalvi, F. and Bontcheva, K. (2019). Rumour verification through recurring information and an inner-attention mechanism. In: Online Social Networks and Media.
  6. Lukasik, M.; Bontcheva, K.; Cohn, T.; Zubiaga, A. and Liakata, M. (2019). Gaussian processes for rumour stance classification in social media. In: ACM Transactions on Information Systems (TOIS).
  7. Pan, J. Z.; Pavlova, S.; Li, C.; Li, N.; Li, Y. and Liu, J. (2018). Content based fake news detection using knowledge graphs. In: International Semantic Web Conference (ISWC).
  8. Tchechmedjiev, A.; Fafalios, P.; Boland, K.; Gasquet, M.; Zloch, M.; Zapilko, B.; Dietze, S. and Todorov, K. (2019). ClaimsKG: A Knowledge Graph of Fact-Checked Claims. In: International Semantic Web Conference (ISWC).