People are using LLMs, such as ChatGPT, to verify facts, so it is essential to understand how well they perform at this task, which is what we investigate in this article. We also propose a framework that enables LLMs to retrieve contextual data and allows users to verify their reasoning and the sources they use to reach a verdict. We find that GPT-4 performs well, but its accuracy varies with the language and the veracity of the claim. Since LLMs still make mistakes, it is important to integrate mechanisms that allow users to verify their verdicts. In particular, they hold potential as tools to accelerate the work of human fact-checkers.
I am joining my colleagues Prof. Reinhard Furrer, Prof. Delia Coculescu, and Prof. Jan Dirk Wegner.
The new department will become a hub for data analysis and mathematical modeling within the faculty of sciences.
ClarifAI employs Large Language Models to fight propaganda and disinformation in digital news. It identifies propaganda techniques, fact-checks content, and provides explanations, empowering users to think critically and discern media biases, thereby enhancing democratic integrity.
I am looking forward to working on this with our team: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones and Dorian Quelle.
You can find more about it on the DIZH page.
My PhD student, Yas Asgari, will present our work on the effect of the Arab Spring on research power dynamics (a collaboration with Hongyu Zhou, Ozgur Kadir Ozer, Juven Nino Villacastin, Mimi Byun, Mary Ellen Sloane, and Rezvaneh Rezapour).
My collaborator Nicola Pedreschi (University of Oxford) will present our work on flow stability for community detection on directed networks with missing edges (a collaboration with Renaud Lambiotte).
This time, we studied not only the spread of fake news but also the changes in, and polarization of, news media influencers between 2016 and 2020.
On the positive side, we measured a decrease in the number of tweets and users propagating fake and extremely biased news in 2020 compared with 2016, probably due to the measures Twitter put in place to tackle such content. But we also revealed an increase in polarization in 2020, both among the top influencers and among average users: users were less likely to share information from users with opposite political ideologies. This points to strengthening echo chambers, with users exposed to fewer opposing views.
We also observed interesting changes among the top news influencers. Between 2016 and 2020, for influencers with center and right-leaning political ideologies, the number of influencers affiliated with media organizations (journalists and accounts belonging to news outlets) declined by 10%, replaced mostly by politicians. On the other hand, influencers spreading fake news, who in 2016 were largely users not affiliated with political or media organizations, were replaced in large part by new users affiliated with media organizations that emerged between 2016 and 2020. This change in the news media landscape on Twitter indicates a shift in the relative influence of journalists and political organizations, as well as a professionalization of the disinformation industry.
Full list of authors and citation below:
James Flamino, Alessandro Galeazzi, Stuart Feldman, Michael W. Macy, Brendan Cross, Zhenkun Zhou, Matteo Serafino, Alexandre Bovet, Hernán A. Makse & Boleslaw K. Szymanski. Political polarization of news media and influencers on Twitter in the 2016 and 2020 US presidential elections. Nat Hum Behav (2023). https://doi.org/10.1038/s41562-023-01550-8
The Service Award goes to Yamir Moreno @cosnet_bifi and the Initiative of the Winter Workshop on Complex Systems @winter_complex. Congratulations! 🏆🎉
— CCS2022 (@ccs2022) October 21, 2022