Modern communication no longer relies solely on classic media such as newspapers or television; it increasingly takes place over social networks, in real time, and with live interactions among users. The surge in the amount of available information, however, has also led to a growth in the volume and sophistication of misleading content, disinformation and propaganda. Consequently, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to prevent citizens’ opinions from being distorted, has become even more crucial and demanding, especially with regard to sensitive topics such as politics, health and religion.
Disinformation campaigns are leveraging, among other means, market-ready AI-based tools for content creation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of “deepfakes”, undermining the perceived credibility of media content. It is therefore all the more important to counter these advances by devising new analysis tools able to detect synthetic and manipulated content; such tools should be accessible to journalists and fact-checkers, robust and trustworthy, and possibly based on AI to achieve greater performance.
Future multimedia disinformation detection research relies on the combination of different modalities and on the adoption of the latest advances in deep learning approaches and architectures. These raise new challenges and questions that need to be addressed in order to reduce the impact of disinformation campaigns. The workshop, in its second edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
When preparing your submission, please adhere strictly to the ACM ICMR 2023 instructions, to ensure the appropriateness of the reviewing process and inclusion in the ACM Digital Library proceedings.
Find out more about the event here and here.