In May, The Web Conference awarded its 2022 Best Paper Award to "Rewiring what-to-watch-next Recommendations to Reduce Radicalization Pathways", authored by Francesco Fabbri, Yanhao Wang, Francesco Bonchi, Carlos Castillo and Michael Mathioudakis.
– The two lead authors of the paper also used to work at our university: the first author, Fabbri, as an intern and the second author, Wang, as a postdoc, says Mathioudakis, who works as an associate professor in the Department of Computer Science.
The Web Conference is a top international research conference and one of the few in computer science with a Jufo 2 ranking, which marks prestigious scientific publication channels.
Algorithm to strengthen recommendations of fact-checked content
Web platforms, such as YouTube, typically recommend further watching among videos that are similar to the ones that the user has recently watched.
– If the user receives and follows a series of recommendations with similar content, the user might not encounter content that represents a different point of view. As a hypothetical example, suppose that a user watches a video on an online video platform, and that the video promotes a conspiracy theory about the war in Ukraine, says Mathioudakis.
– If the platform subsequently recommends only similar conspiracy videos about the war, then there is the possibility that the user will not be informed of official or fact-checked reporting but end up watching a series of conspiracy videos.
Conspiracy content and extremism are a rising concern in Western countries and have led, for example, to the January 6th storming of the Capitol building in the United States. Finding ways to deradicalize social media content is an urgent issue for the European Union as well as for the social media platforms themselves.
– In our work, we develop algorithms that video and other Web platforms could use to make minimal changes to the recommendations they provide to their users, so that the users are not “stuck” in watching content of questionable quality (like content that is extremist or contains misinformation), says Mathioudakis.
For the paper, the team used simulations on a public dataset of YouTube recommendations to study the effectiveness of the algorithms they proposed.
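To make the setting concrete, here is a minimal, hypothetical sketch of this kind of simulation in Python. The toy graph, the video names and the "questionable" labels are invented for illustration; the paper's experiments use a real YouTube recommendation dataset, not this code.

```python
import random

# Toy recommendation graph: each video maps to the videos recommended
# after it. Node names and the "questionable" labels are invented for
# illustration; they do not come from the paper's dataset.
recommendations = {
    "q1": ["q2", "q3"],
    "q2": ["q1", "q3"],
    "q3": ["q1", "q2"],
    "b1": ["b2", "q1"],
    "b2": ["b1"],
}
questionable = {"q1", "q2", "q3"}

def simulate_session(graph, start, max_steps=50):
    """Follow recommendations uniformly at random and count how many
    questionable videos are watched before reaching benign content."""
    current, watched = start, 0
    for _ in range(max_steps):
        if current not in questionable:
            break
        watched += 1
        current = random.choice(graph[current])
    return watched

# Average exposure over many simulated sessions that start on a
# questionable video. In this toy graph the user never escapes, because
# every recommendation on q1, q2 and q3 points back into the same cluster.
sessions = [simulate_session(recommendations, "q1") for _ in range(5_000)]
print("average questionable videos per session:", sum(sessions) / len(sessions))
```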
– Essentially, our algorithms aim to choose a small number of existing recommendations and replace them with other appropriately chosen ones, so that a user who uses the recommendations to browse content on the platform would have enough options to get away from questionable content, says Mathioudakis.
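The rewiring step can be sketched in the same toy setting. The greedy search, the expected-exposure objective and the one-edge budget below are simplifications assumed purely for illustration; they are not the authors' algorithms, which are designed to work on real recommendation graphs at scale.

```python
import random

# Same invented toy graph and labels as in the previous sketch.
recommendations = {
    "q1": ["q2", "q3"],
    "q2": ["q1", "q3"],
    "q3": ["q1", "q2"],
    "b1": ["b2", "q1"],
    "b2": ["b1"],
}
questionable = {"q1", "q2", "q3"}
benign = [v for v in recommendations if v not in questionable]

def expected_exposure(graph, start, trials=2_000, max_steps=50):
    """Monte Carlo estimate of how many questionable videos a random-walk
    user watches per session before reaching benign content."""
    total = 0
    for _ in range(trials):
        current, watched = start, 0
        for _ in range(max_steps):
            if current not in questionable:
                break
            watched += 1
            current = random.choice(graph[current])
        total += watched
    return total / trials

def greedy_rewire(graph, budget=1, start="q1"):
    """Replace at most `budget` recommendation edges leaving questionable
    videos with edges to benign videos, each time greedily picking the
    single swap that most reduces expected exposure (illustrative only)."""
    graph = {v: list(out) for v, out in graph.items()}  # work on a copy
    for _ in range(budget):
        best_score, best_swap = expected_exposure(graph, start), None
        for v in questionable:
            for slot in range(len(graph[v])):
                for target in benign:
                    candidate = {u: list(out) for u, out in graph.items()}
                    candidate[v][slot] = target
                    score = expected_exposure(candidate, start)
                    if score < best_score:
                        best_score, best_swap = score, (v, slot, target)
        if best_swap is None:  # no further swap reduces exposure
            break
        v, slot, target = best_swap
        graph[v][slot] = target
    return graph

print("before rewiring:", expected_exposure(recommendations, "q1"))
rewired = greedy_rewire(recommendations, budget=1)
print("after rewiring: ", expected_exposure(rewired, "q1"))
```

In this toy example, a single swapped edge already gives the simulated user a way out of the questionable cluster, which is the intuition behind making only minimal changes to the existing recommendations.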
The algorithms would allow users to better navigate the content on a platform or to move elsewhere. The incentive for platforms would be to offer better-quality videos.
– It's not a magic system that fixes everything, but it's a way to stop users from getting stuck on content of questionable quality.