Detecting pro-Kremlin disinformation using large language models
Kramer, Marianne; Golovchenko, Yevgeniy; Hjorth, Frederik
Abstract
A growing body of literature examines manipulative information by detecting political mis- and disinformation in text data. This line of research typically requires costly manual annotation of text, whether for content analysis or for training and validating automated detection approaches. We examine whether large language models (LLMs) can detect pro-Kremlin disinformation about the war in Ukraine, focusing on the case of the downing of civilian flight MH17. We benchmark methods using a large set of tweets labeled by expert annotators. We show that both open and closed LLMs can accurately detect pro-Kremlin disinformation tweets, outperforming both a research assistant and the supervised models used in earlier research, at drastically lower cost than either research assistants or crowd workers. Our findings contribute to the literature on mis- and disinformation by showing how LLMs can substantially lower the cost of detection even when labeling requires complex, context-specific knowledge about a given case.
Date
2025-04
Keywords
large language models, misinformation, social media
Citation
Kramer, M, Golovchenko, Y & Hjorth, F 2025, 'Detecting pro-Kremlin disinformation using large language models', Research & Politics, vol. 12, no. 2. https://doi.org/10.1177/20531680251351910
