Networks are everywhere, and their mathematical representations as graphs are investigated in virtually every discipline across the formal sciences, the natural sciences, the social sciences, and the humanities. Consequently, the relevance of network data for reasoning about — and ultimately improving — our complex, interconnected world can hardly be overstated. Yet despite the ubiquity and importance of network data, the intricate process of transforming real-world phenomena into graphs — i.e., data modeling — has received remarkably little attention. Integrating the perspectives of a domain data scientist and a method developer, in this talk, I will explore the connection between data modeling and research validity in network analysis. I will discuss how our current research culture encourages data-modeling practices that endanger research validity, and I will present a research agenda on meta-methods to foster a productive exchange between subject-matter experts seeking to understand relational data and computing experts striving to design novel network methods.
Corinna Coupette (they/she) is an Assistant Professor of Computer Science at Aalto University, where they lead the Telos Lab conducting research in the intersection of law, computer science, and complex systems. They are also a Fellow at the Bucerius Center for Legal Technology and Data Science, a Guest Researcher at the Max Planck Institute for Informatics, and a Research Affiliate at the Max Planck Institute for Tax Law and Public Finance. Before joining Aalto University, they spent ten months as a Digital Futures Postdoctoral Fellow at KTH Royal Institute of Technology and the Stockholm Resilience Center.
Corinna studied law at Bucerius Law School and Stanford Law School, completing their First State Exam in Hamburg in 2015. They obtained a PhD in law (Dr. iur., summa cum laude) from Bucerius Law School and a BSc in computer science from LMU Munich, both in 2018, as well as an MSc in computer science in 2020 and a PhD in computer science (Dr. rer. nat., summa cum laude) in 2023, both from Saarland University. Their legal dissertation was awarded the Bucerius Dissertation Award in 2018 and the Otto Hahn Medal of the Max Planck Society in 2020, and their interdisciplinary research profile was recognized by the Caroline von Humboldt Prize for outstanding female junior scientists in 2022.
If Twitter served as the platform for political communication across Europe during the 2019 European Parliamentary elections, how should one form a dataset for social media political communication in 2024? This is the challenge that the HEPP research team set out to explore: under the leadership of Emilia Palonen, they took up a study of Instagram, TikTok and YouTube in the fragmented media field. Challenges emerged both from the lack of API access to many of these platforms and from the multimodal character of current formats of communication. With the help of expertise from the HSSH, new methodologies for data gathering were planned and tested in early 2024 during the Finnish presidential and Portuguese parliamentary elections. When the time came to gather data on the European Parliamentary campaigns, the project involved researchers across Europe, and suddenly 30 academics from ten countries were gathering data from two platforms while a data steward (Tomi Toivio) at the HSSH was accessing data online.
The presentation outlines principles of data gathering in the post-API era in a multilingual and cross-cultural context. The project aims to generate a dataset for several research consortia and for wider research use thereafter. The videos are categorised in an AI-assisted pipeline. The research notes are transformed into ten country reports that constitute part of the dataset but also contribute, as research, to knowledge of the discourses around the distinct platforms and across three types of political profiles. We also hope to discuss the ways in which large datasets can benefit from the interpretive gaze of Large Language Models (LLMs). Ultimately, what is at stake in this experiment is the heuristic use of AI and of theoretical concepts such as the social contract, grievance politics and populism. It demonstrates how “big data” can be approached from a post-foundational and interpretivist perspective, and at the same time yield comparative results – one day, perhaps, with some explanatory power for contemporary political transformations in Europe.
Emilia Palonen is one of the three Programme Directors in Datafication at the Helsinki Institute for Social Sciences and Humanities. She leads the HEPPsinki research group on Emotions, Populism and Polarisation, with its many projects and researchers. Palonen’s work sits between politics and communication studies. She is on leave from her position as Senior University Lecturer in Political Science at the University of Helsinki; she completed all her academic degrees in the UK, including an MA and a PhD in Ideology and Discourse Analysis at the University of Essex. This is the interpretive approach she has taught for almost twenty years to students of political science at Helsinki. Most recently, Palonen has been working on larger and smaller sets of social media data, including Twitter data for the analysis of the pandemic (Koljonen and Palonen 2021) and of the EP2024 elections. An edited volume with Juha Herkman, Populism, Twitter and the European Public Sphere, came out in spring 2024 with Palgrave. With Salojärvi, Horsmanheimo and Kylli (2023) she published an article in Visual Studies operationalising her formula of populism in a comparative study of the Finnish far right on YouTube. Palonen is in the leadership of the Horizon projects CO3, on the social contract, and PLEDGE, on emotional politics, funded for 2024–2027. She has also been leading Cluster 4 in the ENDURE project exploring resilience in crisis (2022–2025), funded by the Academy of Finland and other Trans-Atlantic Partnership consortium funders. These consortia engaged in a study of the EP 2024 elections from a multimodal perspective. Palonen is an engaged scholar in media and associations: she is an Executive Committee member of the International Political Science Association (IPSA), a vice chair of the Finnish Federation of Learned Societies (2023–2024), and serves in the National Coordination of Open Science in Finland and on the Committee on Human Rights at the Council of Finnish Academies.
She co-chairs the first-ever general track on Populism and Polarisation at the International Political Science Association’s World Congress in Seoul, South Korea, in July 2025.
There is a notable gap in scholarship that provides methodological guidelines for how to examine immersive virtual reality (VR) experiences, particularly non-fiction VR storyworlds. These non-fiction immersive storyworlds encompass a range of genres, including VR documentaries and journalism, educational content, and historical and heritage experiences. VR is a medium of storyliving, not merely storytelling (Kazlauskaitė 2024; Maschio 2017, 2021; Vallance and Towndrow 2022). It is a spatial, embodied, interactive, multisensory, perceptually-rich, affective, and user-oriented medium. These specific characteristics of VR should inform the framework for the analysis of non-fiction VR storyworlds. With its emphasis on storyliving, VR draws strong parallels to the oral tradition, as it mirrors the immersive nature of rituals and performances of myths, where stories are not merely told, but lived.
Participation in rituals and ceremonies can provide identity markers, establish group boundaries, develop and reinforce community bonds, inform participants about their roles in the group, as well as facilitate personal and community transformations. Like ritualized performances of myths, VR storyliving allows individuals to experience complex emotional states as well as participate in and enact a story in an embodied way. VR stories, as immersive digital myths, are not merely recounted but are experienced firsthand, with participants actively engaging in and living out the narratives. Just as users can learn a new skill in VR by physically mimicking the movements, they can also adopt, learn, and rehearse certain modes of feeling and being in VR, by embodying story-prescribed movements and perspectives. What is lived is remembered as something that happened “to me,” the user. A key finding on the impact of VR on memory is that VR experiences become part of users’ autobiographical memory (Kisker et al. 2021; Schöne et al. 2019, 2023).
In this presentation, I introduce a qualitative analytical framework that addresses key elements of VR storyworlds, such as spatial and temporal design, user roles and perspectives, relationality, and multisensory engagement. Drawing on visual narrative studies, haptic media studies, and embodied narrative inquiry, this methodology provides a structured approach to analysing how VR experiences are constructed and experienced. While VR narrative inquiry intersects with approaches in visual narrative studies, it goes further by incorporating an exploration of the embodied and multisensory dimensions of VR narratives, beyond ocularcentrism. In particular, the elements of touch and haptics need to be incorporated into the analysis of VR narratives. In analyzing VR storyworlds, four foundational questions serve as the starting point for understanding the immersive experiences: Where am I? Who am I? Who am I with, and what is our relation? What am I doing and feeling?
Rūta Kazlauskaitė is a postdoctoral researcher at the Faculty of Social Sciences, University of Helsinki. Trained as a political scientist, she is an interdisciplinary scholar, working at the intersections of memory studies, media and communication studies, and political psychology. Through her ongoing and past projects, she examines perceptual as well as emotional engineering and memory politics in immersive digital storyworlds (VR/AR/MR). Currently, she is working in a Horizon Europe project “Politics of Grievance and Democratic Governance” (PLEDGE), where she investigates the role of immersive virtual reality content in shaping anti-/pro-democratic expressions of grievances and explores the opportunities and challenges these immersive experiences pose for democratic societies.
Thought experiments clearly play a central role in much contemporary ethical theorising. In the recent literature on thought experiments, some commentators (e.g. Wilson 2016; Dowding 2019) have criticised the lack of attention paid by moral philosophers to two ideas that are key notions in science: internal and external validity. Wilson argues that if thought experiments are indeed a kind of experiment, then philosophers should begin any plausible search for rigour in the scientific literature on experimental research design. When designing a thought experiment, Wilson suggests we consider the extent to which ethical judgements that are correct or endorsed in the world of the experiment generalise to the world beyond the experiment.
This is an important question to consider. However, I suggest that Wilson’s approach (i) overstates the connection between real-world scientific experiments and thought experiments, and (ii) focuses too readily on the formal structure of thought experiments at the expense of their argumentative context. With respect to the former claim, I suggest that this points towards a more general thesis: it is a mistake to treat the reasoning involved in the use of thought experiments as a subset of scientific reasoning. I shall also consider, towards the end of the talk, a more moderate (and plausible) view of the positive role that the concepts of internal and external validity might play in evaluating and assessing the legitimacy of thought experiments.
Adrian Walsh is Professor of Philosophy and Political Theory at the University of New England in Australia. Walsh works predominantly in political philosophy, the philosophy of economics and applied ethics, although he also has a keen interest in questions of philosophical methodology and in political questions concerning the proper boundaries between scientific disciplines. He has published widely in these areas. In addition to numerous articles in journals such as the Australasian Journal of Philosophy, Philosophy, Ethical Theory & Moral Practice, and the Journal of Political Philosophy, Walsh has published five books, including two edited collections undertaken while working at the University of Helsinki as a Research Fellow between 2012 and 2016.
Understanding the “context” of social media conversations has become essential in today’s rapidly shifting communicative environments to avoid overly simplistic or normative explanations of digital politics globally. This, however, can be challenging for two reasons. Firstly, the increasingly popular digital/computational methods and LLMs used to analyze large-scale social media conversations excel at identifying macro-level patterns in the data, but they also risk overlooking the nuanced political and cultural idiosyncrasies underlying it in different regions globally. Secondly, the popular conceptual frameworks and theories used to make sense of our digitally connected world have not always been developed with the idiosyncrasies of different global digital media environments in mind. At worst, empirical research on social media use, especially in marginal regions globally, is still mostly lacking. At best, as Cheruiyot and Ferrer-Conill (2021) argue, research on digital media outside the dominant Western contexts has often been relegated to the domain of area studies and not seen as a legitimate form of theory building that also advances the “universal” disciplinary canon.
Yet, despite the growing importance of pinning down context – that is, the ability to locate events and phenomena in broader networks of historical antagonisms, political narratives and shifting cultural patterns of media use – this is by no means an easy process. It requires protracted and time-consuming efforts of capturing the nuanced political and historical trajectories that extend beyond a simple use of “methods” – often clashing with the societal urgency to rapidly understand the new crises constantly manifesting on social media (hate speech, mis/disinformation, fake news, deepfake). More fundamentally, however, what we colloquially call “context” can also be seen as something that is expandable and infinitely so. A more honest, and humble, approach is to rather acknowledge that a claim about context is just “an articulation concerning a set of connections and disconnections thought to be relevant to a specific agent that is socially and historically situated, and to a particular purpose” (Dilley 2002: 454). Which specific connections and disconnections to emphasize at any given point in the research (and who decides on their relevance) can thus make a tremendous difference in how the results can be interpreted regardless of the methods used.
This presentation reflects on what it means to “do research on social media in global and comparative contexts.” Through examples from old and new research projects on digital politics in Ethiopia, it reflects on the complex relationship between theory, methods and contextual interpretation that is necessary to understand the significance of emerging forms of digital politics globally, in particular in relation to violent conflicts and digital activism.
Matti Pohjonen currently works as a Senior Researcher at the Helsinki Institute for Social Sciences and Humanities (HSSH), University of Helsinki, where he leads methodological development on the use of internet and social media data. He co-leads the EU Horizon-funded projects ARM (Authoritarian Information Suppression) and InfoLead (Information and Media Leadership Programme for Judges and Policymakers), together with the University of Oxford and the University of Florence.
This talk considers the relationship between practical development work and academic research. I led a national development initiative, Sofi (Science Advice Initiative of Finland), between 2019 and 2022, and, subsequently, a science-for-policy platform at the Finnish Academy of Science and Letters. Our main objective has been to develop new operating models for science advice that are fit for the present age. In the talk, I reflect on how previous research on the topic has been helpful for practical work and on the limitations of its application. I also discuss how we have methodologically approached the development work and how practical realities have influenced our approaches.
Dr. Jaakko Kuosmanen is the Academy Secretary of the Finnish Academy of Science and Letters. He previously served as Chief Coordinator of Sofi, a national science advice development initiative that led to the establishment of a new science-for-policy platform in Finland. Jaakko holds a PhD in Politics from the University of Edinburgh, and he has worked as a research fellow and lecturer at the Martin School and the Blavatnik School of Government at the University of Oxford. He has advised prime ministers’ offices on science-for-policy topics on four different continents, and he is a long-time member of the National Foresight Steering Group at the Prime Minister’s Office in Finland. Jaakko also holds an Adjunct Professor position at the University of Helsinki.
The presentation discusses my side project, in which I estimated the precision of Large Language Models (LLMs) in making social predictions by replicating an article (Halawi et al. 2024) and improving on it. Even though LLMs hallucinate when asked foresight questions (Schoenegger & Park 2023), they contain embedded "world models" (Li et al. 2022) that reflect current social beliefs and causal relationships, offering an alternative means of prediction. By setting up synthetic prediction markets and addressing challenges such as hallucinations, I evaluate whether newer LLMs demonstrate improved precision and better calibration on foresight questions. The findings highlight the significance of LLMs in advancing predictive capabilities. More broadly, attendees will gain insights into the capabilities and limitations of LLMs for foresight.
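To make the calibration question concrete, here is a generic sketch of how probabilistic forecasts are commonly scored. The Brier score and a binned calibration table are standard tools for exactly this purpose, but the sketch below is illustrative only: it is not the project's actual evaluation pipeline, and all function names and data are invented.

```python
# Scoring probabilistic forecasts: Brier score plus a simple calibration
# check. Illustrative sketch only; not the evaluation code of the project
# described above.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; an uninformative 50% forecast always scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, n_bins=10):
    """Group forecasts into probability bins; a well-calibrated forecaster's
    mean forecast per bin should match the observed event frequency."""
    bins = {}
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((p, y))
    table = {}
    for b, pairs in sorted(bins.items()):
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        table[b] = (mean_p, freq, len(pairs))
    return table

# A perfectly sharp, correct forecaster scores 0.0:
print(brier_score([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0
# A hedged 50/50 forecaster scores 0.25 regardless of outcomes:
print(brier_score([0.5, 0.5], [1, 0]))          # 0.25
```

Comparing such scores between older and newer models, question by question, is one simple way to check whether "improved calibration" holds up.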
Johannes Koponen, PhD candidate, Founder. I am a practitioner and researcher with over 12 years of experience in strategic foresight and platform business models. Currently, I am working on my Ph.D. at the University of Helsinki. My research examines constraints to change and the role of information products that improve with use, exploring their potential impact on the public's right to hear. On the side, as the founder and CEO of Konsensus.me, I lead the development of an AI-enhanced information synthesis platform that helps experts make informed decisions for clients across government, private sector, and academia.
I have authored two non-fiction books on platform societies and platform companies (2019, 2022). Previously, I led foresight projects at Demos Helsinki, advised on strategic communication at the Prime Minister’s Office, and taught Futures Studies at Aalto University. I also advise organizations on strategic foresight and serve as a member of the Finnish Council for Mass Media, contributing to media ethics and standards.
Throughout decades of technological change in internet protocols and digital platforms, the production and circulation of online content genres like e-mail chains, viral videos, exploitable images and copypasta has been consistently theorized as a continuation and expansion of vernacular creativity—a digital folklore. Recent advancements in machine learning applications have brought new forms of automation to the forefront of online interactions, exposing users of social media platforms and apps to different and unfamiliar kinds of algorithmic logics, which range from the curatorial biases of recommender systems and content analytics to the expansive possibilities offered by large language models and synthetic media. All these forms of automation are not only shaping how content circulates, but also how it is produced, and this is already evident in new genres of vernacular creativity that emerge in response to algorithmic tools and their logics.
In the first part of this presentation, I formalize a definition of algorithmic folklore - the outcome of vernacular creative practices grounded in new forms of collaboration between human users and automated systems - and sketch a typology of the sort of content that is likely to dominate digital ecosystems to come. In the second part, I discuss the possible methodological approaches to algorithmic folklore, focusing on ethnographic and experimental modes of qualitative inquiry which can enable a more critical and reflexive interaction with these new computational actors.
Gabriele de Seta is, technically, a sociologist. He is a Researcher at the University of Bergen, where he leads the ALGOFOLK project (“Algorithmic folklore: The mutual shaping of vernacular creativity and automation”) funded by a Trond Mohn Foundation Starting Grant (2024-2028). Gabriele holds a PhD from the Hong Kong Polytechnic University and was a Postdoctoral Researcher at the Institute of Ethnology, Academia Sinica and at the University of Bergen, where he was part of the ERC-funded project “Machine Vision in Everyday Life”. His research work, grounded on qualitative and ethnographic methods, focuses on digital media practices, sociotechnical infrastructures and vernacular creativity in the Chinese-speaking world. He is also interested in experimental, creative and collaborative approaches to knowledge-production.
The operationalization of the concept of poverty into a poverty indicator has been regarded as one of the most difficult aspects of empirical poverty research. There is broad consensus on what poverty is; however, there is considerable disagreement about how it is best measured. During the last 120 years of modern poverty research, several different poverty indicators have been developed, but none is universally accepted. Debate about the best ways of measuring poverty would be futile if the indicators produced similar estimates of the prevalence and concentration of poverty. The presentation gives an overview of different poverty indicators and the results that these indicators produce.
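To illustrate why the choice of indicator matters, here is a minimal sketch contrasting two common income-poverty indicators on the same toy data: a relative headcount ratio at 60% of median income (the EU at-risk-of-poverty convention) and an absolute headcount at a fixed threshold. The incomes and the absolute line are invented for illustration, and real indicators involve equivalised household income, not raw individual figures.

```python
# Two common poverty indicators applied to the same incomes can disagree.
# The 60%-of-median line follows the EU "at risk of poverty" convention;
# the absolute line is an invented fixed threshold. Toy data throughout.
from statistics import median

def relative_headcount(incomes, share=0.6):
    """Share of people below 60% of the median income."""
    line = share * median(incomes)
    return sum(1 for x in incomes if x < line) / len(incomes)

def absolute_headcount(incomes, line):
    """Share of people below a fixed income threshold."""
    return sum(1 for x in incomes if x < line) / len(incomes)

incomes = [500, 900, 1100, 1500, 1800, 2200, 2600, 3500]  # toy monthly incomes

# Median is 1650, so the relative line is 990: two of eight fall below it.
print(relative_headcount(incomes))            # 0.25
# Against a fixed line of 800, only one of eight is poor.
print(absolute_headcount(incomes, line=800))  # 0.125
```

Same population, two defensible indicators, two different poverty rates: a small-scale version of the disagreement the presentation surveys.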
Lauri Mäkinen holds a PhD in Social Policy from University of Turku, where he defended his thesis “What is needed at the acceptable minimum? Studies on the operationalisation of the concept of poverty”. Lauri’s research has focused on poverty, child poverty and especially poverty measurement. Before his current position at the Kela Research Unit, Lauri worked as a university teacher at the University of Turku and as a project researcher at Itla Children’s foundation.
Despite continued developments in methodological scholarship in media and communication, content analysis remains one of the “most important research techniques in the social sciences,” according to a pre-eminent pioneer in the field, Klaus Krippendorff. However, many methodological components of designing a strong content analysis remain misunderstood. This seminar reviews the necessary components of a successful content analysis and argues that only by assiduously editing a robust code book and then coding media texts over extended periods of time, largely outside of computer-aided techniques, can scholars perform statistical analyses that confidently interpret, affirm and potentially reinforce multi-methodological assumptions in research. Conducting a powerful content analysis can be laborious work, but the alternative is a devolution into reductive research that does not address the broader context of media content.
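The statistical side of such careful manual coding usually begins with intercoder reliability, and the statistic most associated with Krippendorff is his alpha coefficient. Below is a minimal, textbook-style sketch of alpha for nominal data with no missing values; the category labels and codings are invented for illustration, and this is not code from any particular study.

```python
# Krippendorff's alpha for nominal data, no missing values.
# Textbook implementation via the coincidence matrix; illustrative only.
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """`units` is a list of per-unit code sequences, one code per coder.
    alpha = 1 - Do/De: Do is observed, De expected disagreement.
    1.0 means perfect agreement; 0.0 means agreement at chance level."""
    o = Counter()  # coincidence matrix o[(c, k)]
    for codes in units:
        m = len(codes)
        for i, j in permutations(range(m), 2):
            o[(codes[i], codes[j])] += 1 / (m - 1)
    n_c = Counter()
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

# Two coders, five units, one disagreement (invented sentiment codes):
coder1 = ["pos", "pos", "neg", "neg", "neu"]
coder2 = ["pos", "pos", "neg", "neu", "neu"]
print(round(krippendorff_alpha_nominal(list(zip(coder1, coder2))), 3))  # 0.727
```

Unlike raw percent agreement, alpha corrects for chance agreement, which is why a single disagreement in five units already pulls it well below 1.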
Linda Jean Kenix is Professor and Head of the School of Language, Social and Political Sciences at the University of Canterbury in New Zealand. She has written 47 journal articles, 8 book chapters, and 57 conference papers, as well as 1 book and 1 edited book. In all of her research, she has fundamentally explored the representation of politically marginal groupings in mainstream (and alternative) media and the agenda setting function of that representation in the continual process of social change.
Detecting duplicate textual content online is important for improving the quality of information on digital platforms. Classical text representation methods fail to capture the full complexity of text, resulting in less accurate results. This talk introduces a new framework that combines multiple text representation techniques to better identify duplicate posts. In the context of online Question and Answer platforms, higher duplicate detection accuracy is achieved by leveraging the strengths of different text representation techniques. The benefits of this approach and key findings from the research will be discussed, along with potential applications of these techniques in other areas of text analysis.
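As a rough illustration of the general idea of combining representations (not the specific framework presented in the talk), two cheap text views can be merged into one duplicate score: word-set overlap catches reworded posts that share vocabulary, while character trigrams tolerate typos and morphological variation. The weights and threshold below are invented.

```python
# Combining two text representations into one duplicate-detection score.
# Generic sketch only; the talk's actual framework combines different,
# richer representations. Weights and threshold are invented.

def word_jaccard(a, b):
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def trigram_jaccard(a, b):
    """Jaccard similarity over character trigrams (typo-tolerant)."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

def is_duplicate(a, b, threshold=0.5, weights=(0.5, 0.5)):
    """Flag a pair as duplicate when the weighted combination of the two
    similarity views exceeds the threshold."""
    score = weights[0] * word_jaccard(a, b) + weights[1] * trigram_jaccard(a, b)
    return score >= threshold, score

print(is_duplicate("how do I sort a list in python",
                   "how to sort a list in python"))
```

The point of the combination is that each representation compensates for the other's blind spots; in practice the weights would be tuned on labelled duplicate pairs.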
Erjon Skënderi holds a Doctor of Science in Technology from Tampere University, where he defended his thesis on “Text Representation Methods in Big Social Data”. He also earned a master’s degree in computer science from Queens College, City University of New York. With a strong research background in natural language processing and machine learning, Erjon has focused on developing alternative approaches to text analysis in large-scale social data. He has contributed to various projects as a machine learning specialist, focusing on developing and applying NLP and text representation techniques across different application contexts.
Artificial intelligence (AI) tools play a critical role in democratizing data and improving decision-making processes in the dynamic field of business analytics.
This talk shows how advanced statistical techniques and AI are not only increasing productivity but also enabling decision-makers to access top-tier data science. A major area of emphasis will be a case study in marketing that compares AI technology to conventional econometric and statistical methods for marketing mix modeling.
These technologies can provide business executives with the tools they need to understand complex market dynamics and promote a more strategic and knowledgeable approach to company growth. We will showcase the significant contributions of data democratization through real-world applications that maximize the return on investment in marketing budget allocation.
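To make the comparison concrete, here is a minimal, illustrative sketch of the conventional econometric side of marketing mix modeling: a geometric "adstock" transform for advertising carry-over effects, followed by ordinary least squares. The data, decay rate and coefficients are invented; real models include many channels, saturation curves and control variables.

```python
# Minimal marketing-mix-modeling baseline: geometric adstock + OLS.
# All numbers are invented toy data; this is a sketch of the classical
# econometric approach, not the case study's actual model.

def adstock(spend, decay=0.5):
    """Geometric carry-over: effective spend is today's spend plus a
    decayed share of the previous period's effective spend."""
    out, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        out.append(carry)
    return out

def ols_simple(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

spend = [100, 0, 0, 50, 0]                  # weekly ad spend (toy data)
effective = adstock(spend, decay=0.5)       # [100.0, 50.0, 25.0, 62.5, 31.25]
sales = [10 + 0.2 * e for e in effective]   # synthetic linear sales response

intercept, slope = ols_simple(effective, sales)
print(round(intercept, 2), round(slope, 2))  # recovers 10.0 and 0.2
```

The estimated slope is the channel's marginal sales contribution per unit of effective spend, which is the quantity used when reallocating a marketing budget for return on investment.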
María Teresa Ballestar, an associate professor at Universidad Rey Juan Carlos in Madrid (Spain), has over 25 years of experience in data analysis and leadership across diverse sectors. She has led data science teams in IT consultancy, banking, pharmaceuticals, and Big Tech. Her professional background includes significant roles at companies like Cetelem, ING, Merck Sharp & Dohme, and Google.
Her research effectively bridges academia and practical business applications, focusing on the impact of data science on e-commerce, public policy, and digital transformation. She holds a B.A. in Statistics, an M.Sc. in Marketing & Market Research, an M.A. in Information and Knowledge Society, and a Ph.D. in Applied Economics.
The advent of the Internet, exponential growth in computing power, and rapid developments in artificial intelligence have raised numerous cybersecurity-related ethical problems in various domains. For instance, there is the threat to liberal democracy posed by the tsunami of disinformation and computational propaganda. A key element of the response to this latter threat is, I argue, the identification of collective moral responsibilities to combat it, and the institutional embedding of these collective moral responsibilities in the form, for instance, of various interrelated institutional agencies, roles and processes — e.g., in the news media, universities, social media companies and other ‘epistemic institutions’ — that can function as ‘webs of prevention’ against cyberattacks.
Seumas Miller is a Professor of Philosophy at Charles Sturt University and a Distinguished Research Fellow at the University of Oxford. He is the author or coauthor of 22 books and over 250 academic articles.
Recommended pre-reading: Cybersecurity, Ethics and Collective Responsibility, Chapter 3 Section 3.3 and Chapter 7 Section 7.1 (link to PDF).