Our scientists' solution for verifying the robustness of text classification methods took first place in the international competition "CLEF-2024 CheckThat! Lab"
A team of scientists from the Department of Information Systems at the Poznań University of Economics and Business took part in the “CheckThat! 2024” lab, organized as part of the international CLEF (Conference and Labs of the Evaluation Forum) conference. The goal was to verify the robustness of popular text classification approaches used in credibility assessment tasks.
The competition participants’ task was to create adversarial examples: to modify texts so that classification algorithms flip their decision to the opposite class, but without changing the meaning of the text. The challenge was not only to change the decision of each of three different classifiers (based on the BERT, BiLSTM, and RoBERTa models) on each of over 2,000 text examples, but also to keep the number of character (word) changes to a minimum while preserving the semantic meaning of the texts as much as possible. In addition, the task covered texts from five different problem areas: news bias assessment, propaganda detection, fact-checking, rumor detection, and COVID-19 disinformation.
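To make the task concrete, here is a minimal sketch of a greedy word-substitution attack, the general family of techniques used to generate such adversarial examples. It is only an illustration, not the winning team’s method: the victim classifier, the synonym table, and the example text are toy placeholders invented for this sketch (a real attack would query models such as BERT and propose replacements using embeddings or a masked language model).

```python
# Minimal sketch of a greedy word-substitution attack; NOT the winning method.
# The classifier, synonym table, and example text are toy placeholders.
from typing import Callable, Dict, List

# Hypothetical substitution candidates; a real attack would propose
# semantically close replacements with embeddings or a masked language model.
SYNONYMS: Dict[str, List[str]] = {
    "miracle": ["remarkable", "surprising"],
    "proves": ["suggests", "indicates"],
    "fake": ["fabricated", "untrue"],
}

def toy_classifier(text: str) -> int:
    """Placeholder victim model: label 1 ('not credible') if a trigger word occurs."""
    triggers = {"miracle", "proves", "fake"}
    return int(any(word in triggers for word in text.lower().split()))

def greedy_attack(text: str, victim: Callable[[str], int]) -> str:
    """Substitute words one at a time, keeping each change, until the label flips."""
    original_label = victim(text)
    words = text.split()
    for i, word in enumerate(words):
        for candidate in SYNONYMS.get(word.lower(), []):
            words[i] = candidate  # keep the change; small edits accumulate
            if victim(" ".join(words)) != original_label:
                return " ".join(words)  # success: the decision flipped
    return " ".join(words)  # attack failed within the candidate budget

example = "This miracle cure proves the pandemic was fake"
adversarial = greedy_attack(example, toy_classifier)
print(toy_classifier(example), "->", toy_classifier(adversarial))  # 1 -> 0
print(adversarial)
```

In the actual competition the victim would be a BERT-, BiLSTM-, or RoBERTa-based classifier, and each substitution would also have to respect the semantic-similarity constraint described above.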
Of all the teams that reported results, our scientists’ method achieved the highest score according to the BODEGA metric, which combines measures of how effectively the modified texts change the classifiers’ decisions, semantic similarity, and Levenshtein edit distance. This score secured first place in the ranking, ahead of methods developed by, among others, the University of Zurich and Qatar University.
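As a rough illustration of how such a composite metric can be computed, the snippet below evaluates a BODEGA-style score as the product of three components: an indicator of whether the classifier’s decision flipped, a semantic-similarity term, and a character score derived from the normalized Levenshtein distance. This is a hedged sketch rather than the official benchmark code: the exact formula and the learned semantic model used by BODEGA are not reproduced here, and `semantic_sim` is passed in as a placeholder value.

```python
# Sketch of a BODEGA-style score: product of a confusion indicator, a
# semantic-similarity term, and a Levenshtein-based character score.
# An assumption-laden illustration, not the official BODEGA code; in
# particular, semantic_sim stands in for a learned similarity model.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def bodega_style_score(original: str, modified: str,
                       label_flipped: bool, semantic_sim: float) -> float:
    """Multiply confusion, semantic, and character components (all in [0, 1])."""
    max_len = max(len(original), len(modified), 1)
    char_score = 1.0 - levenshtein(original, modified) / max_len
    return float(label_flipped) * semantic_sim * char_score

# Example: one changed word, assuming a semantic similarity of 0.9.
print(bodega_style_score(
    "The pandemic was fake",
    "The pandemic was fabricated",
    label_flipped=True,
    semantic_sim=0.9,
))
```

Under a multiplicative formulation like this, an attack scores well only when all three components are high: a flipped label obtained through many edits, or through edits that distort the meaning, is penalized.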
Generating adversarial examples and using them to test the robustness of classification algorithms is an important research challenge: it makes it possible to assess how well existing algorithms perform when texts are intentionally modified to confuse the model. This helps identify the weak points of classification algorithms and improve their reliability, which is crucial for assessing information credibility, where precise classification is necessary to detect fake news and counteract disinformation.
The “CLEF CheckThat! Lab” has been organized since 2018, and its goal is to develop automatic methods and technologies that support journalists in verifying information. The international CLEF conference has previously been held in, among other places, Bologna, Bucharest, Lugano, Avignon, and Thessaloniki; the results of this year’s edition will be presented in September at the conference in Grenoble, France. “CLEF CheckThat! Lab” and “FEVER” are the most important global events devoted exclusively to the automatic verification of fake news. It is worth noting that in 2023 the OpenFact project team also took first place in the “CLEF-2023 CheckThat! Lab” competition, with the best method for detecting English sentences that require checking because they may be misleading.
The winning team is currently carrying out the OpenFact research project, within which it is developing tools for the automatic detection of fake news in Polish. In July 2024, the results of the OpenFact project were rated the best in Poland by the National Center for Research and Development for the second year in a row. Victory in the prestigious CheckThat! competition confirms that the achievements of the PUEB team matter on a global scale and that the methods developed by the OpenFact team achieve equally high effectiveness in other languages.
The OpenFact project is financed by the National Center for Research and Development under the INFOSTRATEG I program “Advanced information, telecommunications and mechatronic technologies”.