Dr Lina Dencik is a Reader at Cardiff University’s School of Journalism, Media and Culture, co-director of the Data Justice Lab, and Principal Investigator of the ERC-funded DATAJUSTICE project. Her most recent research publications have examined media studies in an age of datafication[1], an AI system being used for border control in Europe[2], and the crisis of media systems[3].
INEQ interviewed Dr Dencik on its official launch day, 16 April 2019. The original interview was revised for publication in September 2020.
Q: Recent developments in digitalisation and artificial intelligence have opened up a whole new front of questions related to inequalities and datafication. How did you get involved in thinking about data from the perspective of social justice?
LD: It came out of research that I had done previously, particularly around the Snowden leaks, which in 2013 revealed the nature of the digital surveillance programmes being carried out by a number of Western democracies, particularly the United States and the United Kingdom. Out of that discussion, it became very clear that the ability to collect and analyse data on a large scale now plays a role in state governance as well.
The way that discussion was framed was predominantly around a big concern for issues of privacy. When we started looking into that, we saw that this logic of collecting and analysing data as a way to govern and make decisions about people is actually a much broader development. It has implications for lots of other issues: who gets to participate in society and on what terms, who gets excluded from that, who is targeted, and so on. These types of questions are not just about privacy but actually about social justice and how society is organised.
Q: Were you surprised by the Snowden revelations?
LD: In many ways, no. People who had looked at the evolution of the internet and digital technologies had already highlighted that these kinds of things were probably going on, and that surveillance was a real issue with the development of the internet.
I think what was made clear by the Snowden leaks, was really at one level the scale of it, but also the extent to which the business model that underpins our digital economy is based on the idea of collecting data about people in order to profile and target them. This is significant, not just for advertising but, in this case, for state surveillance programmes also; it was that business model that made those state surveillance programmes possible. In that sense, the leaks really illustrated a shift, I think, in how we should understand digital technologies and their significance.
Q: If you look at the literature pouring out of academic presses and popular publications, there is clearly a wave of concern about a new era or a new logic of capitalism[4]. What would you say is the qualitative difference; what is it that has now happened and opened our eyes?
LD: As many of these technical developments are not necessarily radically different from what we have had before, we should think of this as an evolution rather than a revolution or a paradigm shift. The use of digital technologies for population management, for example, comes out of a history of bureaucratisation and other ways in which populations have been categorised and profiled over many years. We of course have things like census data, and the very notion of citizenship is based on classification and categorisation. In many ways, these things obviously come out of a history.
What is markedly different, particularly in the last two decades, is the extent to which the internet has been captured by corporate interests, which has centralised much of our digital technologies. There is a new incentive to gain revenue through data-driven targeting and analysis based on behavioural data, and I think that this business model has shifted the discussion. But I am not sure that capitalism has fundamentally changed. What drives capitalism is still a concern with profit, and that logic still dictates this current era of digital technologies, but it certainly gives it a character that I think is quite significant and perhaps new.
Q: Besides the market side of the equation, what is new about the state, public, and governance policy side?
LD: The extent to which corporations and states have interlinked around this is probably quite significant, because there is quite a strong congruence of interests. The idea of collecting data about people fits the corporate model of advertising and revenue generation, but it also fits very well with a model of social control. Thus, you have two very powerful actors, the state and capital, aligning quite nicely with this model of digital technologies that we now have. I think that makes it incredibly powerful and therefore difficult to challenge. So the extent to which the state has turned to these data-driven technologies is very significant for understanding contemporary governance.
Q: From the point of view of data as a way of understanding how life goes on in different parts of society for different kinds of people, is there anything positive, productive, or promising?
LD: I think technology is very powerful for identifying issues of need and social concern. The reason that we have concerns about it is not necessarily to do with the technology in itself. Rather, it is a question of how technology is embedded in certain economic and political interests or agendas. I think that needs to be addressed. Technology as an instrument to, for example, advance social development is very powerful. However, this is not the way a lot of these digital technologies are being implemented. They are implemented in ways that raise concerns about inequality, discrimination, and social justice more broadly.
Q: Do you think that people have had a false, or half-true, understanding of what was happening with social media, the internet, and digitalisation? If you look at the academic literature from about 10–12 years ago, many authors saw the developments mainly in a positive light, although they expressed some cautious concerns as well. Now, most of this seems to have turned upside down, and we are in a situation where we are bombarded by dystopias about the loss of humanity and so on.
LD: Nick Couldry[5] describes this issue around social media as the ‘myth of us’: a myth that these technologies arose in an institutional vacuum and that we would just naturally be able to engage in collective activity and express ourselves freely through them, as if media institutions were no longer relevant for understanding how we are represented or how we communicate in the world.
I think that these companies were quite willing to promote that illusion of these technologies being entirely unmediated forms of interaction that truly represented us. However, it has been revealed that this was a myth, and that we are really talking about institutions embedded in a particular political economy that shapes our ability to interact, even in a digitally mediated environment. The other myth was that we could somehow think of digital technologies as abstracted from broader society, and what has been revealed, of course, is that these technologies are also shaped by the forces that exist within society.
Q: If you look at the field of academic research and reflection at the moment, what would you say are the most interesting questions that we do not know enough about in regard to data and its social significance?
LD: One of the things we are struggling with is how to actually understand the impact of this on people’s lives and real lived experiences. In many ways, we are still talking about hypotheticals or abstract understandings of the potential implications of things. What we really struggle with is a systematic overview of how this is changing people’s lived experiences.
For a long time in academia, we have not focused enough on speaking with actual impacted communities, and on trying to situate some of these developments amongst communities who have historically been discriminated against, or who have experienced the impacts of inequality, and who could probably articulate some of these experiences quite well. What we are now actually seeing is these groups and communities themselves, particularly in the United States but also elsewhere, beginning to engage with these issues of surveillance and data-driven technologies, and with how this is an issue for them.
We have had a trend towards siloing off this issue of data and artificial intelligence, and questions of inequality and social justice have been dealt with predominantly as technical issues that could therefore be fixed if we just had better algorithms. There has also been a lot of discussion about ethical guidelines for handling data better. But I think that we are now moving towards a critique of some of these discussions, one that highlights the extent to which we need to situate this in a broader social context. We need to deal with social structures more broadly in order to engage with some of the issues that we are confronted with when it comes to data.
Q: This is where the perspective of data justice comes in. Can you explain that in a nutshell? What does it mean when you raise questions about data justice?
LD: People are now using this term in different ways. On one level, it is about recognising the relationship between data, structural inequality, and social justice. Some people are using it to inform questions of data governance and to establish principles that will deal with some of these questions of inequality when it comes to data.
From my and my colleagues’ perspective at the Data Justice Lab[6], we are trying to use it more as a critique, or a framework, where it is really about highlighting the role of data in the continuation of inequality and injustice. We try to integrate data into a social justice agenda, not understanding it as something that could be dealt with separately, but as part of how we have to understand the manifestations of power relations today. That means thinking about, for example, what data looks like from an anti-discrimination point of view, what the issues around data are from a workers’ rights perspective, or what the issue of data is from a migrants’ rights or migrant solidarity perspective.
Q: You are currently running a European Research Council project on data justice. Can you tell us a little bit about that?
LD: It is a five-year project, and it focuses on trying to understand what is happening with developments in data in the European context. It is in part a response to the fact that a lot of our discussion has been dictated by US-centric developments and concerns, as well as articulations of the issues.
We are looking at three areas in particular where we see the most experimentation happening with data. One of them is border control and migration management, particularly the collection of data on refugees and how that is used to inform decision-making about their ability to live their lives and move around. Then there is law enforcement, where we see a lot of development happening around data-driven policing. Finally, there is low-wage work, and particularly what is happening in terms of standard employment: how standard workplaces are being transformed by the introduction of data technologies, for example, in hiring or in performance assessment.
What we are trying to do is, at one level, to identify the practices happening with this, but then to tie that into experiences. We are actually speaking with people who are impacted in order to get an understanding of what is happening for people on the ground: what is the nature of these systems, what are the technical issues around them, but also what does that mean for policy. We are particularly concerned with how this implicates not just data protection regulation, which is often how these issues are handled, but also, for example, labour law, the protection of vulnerable communities, or anti-discrimination policies. So, we are trying to identify links between these different social and economic rights frameworks.
Q: How would you describe the biggest or the most difficult obstacles for really grasping and showing what is actually going on?
LD: I think there are two main issues. One is access. A lot of these technologies are developed by corporate entities, which makes it very difficult to access the models. Even basic auditing for issues of bias and discrimination is very difficult in these areas because we do not have access to these systems.
Then there is the question of how you document impact. The problem with a lot of this is that decision-making is not automated in a straight line where you can trace the outcome of a decision. Rather, these systems are embedded within social institutions where there are many factors that will influence how decisions are made about people.
So, how do we discern the influence or the relevance of these types of systems within, say, border control or policing or work management? That is not a straightforward process, because we live in complex social situations. This is also due to the fact that people do not necessarily know how they are impacted by something, because they do not know where their data goes. Thus, getting evidence or testimonials from that point of view is very difficult.
Q: Highlighting the complexity of data use in different institutional contexts raises the complex question of the interface between policy and technology. How do you see the chances of reasonable, critical, enlightened policy initiatives in a world where the new logics go beyond borders and often make states and public institutions look fairly weak?
LD: I actually think that there is something interesting happening in this area, particularly in Europe, as the EU has set, at least in rhetoric, a goal of creating an alternative vision of data economies, one not based on the very extractive model that Silicon Valley has predominantly advanced. What is interesting is that the idea of sovereignty has taken off a lot in this context: why should Europe be subject to the control of US-based corporations in all of these areas, considering the power that these big corporations now have over what are essentially considered governance functions?
This idea that we need to reclaim the sovereignty of people, in particular European data subjects, has some real resonance. Of course, in practice this is very difficult to do. However, we also see this happening beyond just the European regional level. There is even more interesting stuff happening in initiatives around cities wanting to claim sovereignty against these big, US-based multinational corporations, for example, in Barcelona.
Barcelona City Council has an interesting policy roadmap to technological sovereignty, which is about reclaiming public ownership of digital infrastructure and making sure that it is much more directly accountable to the people who live within that city or community. Interestingly, similar ideas around sovereignty are also emerging in places like New Zealand, where indigenous communities are saying ‘We want to reclaim data sovereignty; all this data that is collected on us needs to somehow be accountable to the community from which it is being collected.’ This might have some interesting results for thinking about who owns data and who is accountable.
Q: What do you think about the discourse in the field of policy initiatives that claims that we need more critical literacy and to educate people? Or do we just need more transparency?
LD: Both need work, but neither is sufficient in and of itself. To take digital literacy first: it is important that we have more critical awareness of these developments and a better understanding of how our engagement with our digital environments may come back to impact us in different ways. There are things that we are perhaps currently not aware of, in part because of a lack of transparency.
I think that where the literacy part needs to happen more is in terms of governance: in policing, in decisions on asylum processes, in welfare provision, and so on. We actually have a lack of critical literacy in public administration: people who are making decisions about other people’s lives and are now having to engage with these data systems do not necessarily have a critical engagement with how these technologies work and what their limitations might be.
The danger with this is that it shifts the onus of responsibility for some of the issues that we are seeing onto the user, the citizen, or the population. We should make sure that we direct attention towards those who are really in a position of power to engage with and deal with the issues and challenges that we are seeing, rather than suggesting that these can be overcome if people just become more aware of them. Awareness and literacy do not necessarily mean that we can change anything. That is something the Snowden leaks made quite clear: people became aware, but they could not really do much about it. That is very disempowering overall.
The transparency issue is largely a question of ‘Transparency of what?’ For example, with algorithms and digital technologies, should we make the code transparent? That would not have much meaning for anybody. People talk about meaningful transparency, which is more inclined towards describing the logic of the system and how decisions are made. The issue there is that something like artificial intelligence often cannot be explained, because even the designers do not know how certain correlations have been made or how certain outcomes are produced. So, we have a huge democratic issue around the lack of explanation for decisions.
When we are moving towards automated decision-making processes, that is a real problem. And then, again, the issue with transparency is that it can sometimes close down the discussion as an end point: as long as we have a transparent model, we do not need to do anything more about it. That, however, does not really address the challenges of how this is affecting people’s lives.
Q: When we talk about issues of transparency, and that kind of meaningful transparency, embedded in all of it is a big question of trust. If you talk about surveillance and intelligence institutions and organisations, and their relationship to democratic governance, who should you trust, and based on what evidence?
LD: That is a complicated question. When it comes to digital technologies, research around automated decision-making has shown that people trust humans more than they do machines. There is this myth around machines being objective and unbiased, but actually, when it comes to important decision-making, people prefer to deal with other humans. Thus, there is definitely a trust issue here that becomes compounded when we start turning to machines for really important decision-making areas in our lives.
When it comes to the research that I have done with people about their feelings around data collection, this feeling of lacking control is very prevalent. Companies like Facebook are actually very aware of this, because they understand that their business model is based on some level of trust: they need people to share data and use their platforms in order to survive. This is why they are seen to do a lot of things along the lines of ‘We are concerned with your privacy.’ Apple is now trying to sell itself entirely as an alternative to Google and Facebook in this regard. So, I think corporations are very aware that trust is up for grabs at the moment.
Q: Finally, after all this talk about where we are heading, are you an optimist?
LD: Am I an optimist? I am not sure. I think that there are a lot of interesting and exciting things happening, and we are coming to a sort of a turning point where people are saying ‘We have had enough.’ That is not necessarily to do with the technology, but technology becomes part of that.
Generally speaking, there is a move towards trying to address some of the injustices that have permeated society for a long time. I live in the United Kingdom, and my students’ generation, for example, who have grown up entirely in a situation of austerity, are really articulating not just reforms within the existing system but alternative ways of structuring society. These alternative understandings of the economy are going to produce really fruitful and interesting ways forward. Technology will be a part of that, because the way we organise society will dictate how technology is organised as well.
Notes
[1] Dencik, Lina (2020). "Mobilizing Media Studies in an Age of Datafication." Television & New Media, 21(6), pp. 568–573, doi: 10.1177/1527476420918848.
[2] Sánchez-Monedero, Javier, and Lina Dencik (2020). "The Politics of Deceptive Borders: ‘Biomarkers of Deceit’ and the Case of iBorderCtrl." Information, Communication & Society, doi: 10.1080/1369118X.2020.1792530.
[3] Fenton, Natalie, Des Freedman, Justin Schlosberg, and Lina Dencik (2020). The Media Manifesto. Cambridge: Polity Press.
[4] See: Zuboff, Shoshana (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs; and Couldry, Nick, and Ulises A. Mejias (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press.
[5] Couldry, Nick (2015). "The Myth of ‘Us’: Digital Networks, Political Change and the Production of Collectivity." Information, Communication & Society, 18(6), pp. 608–626, doi: 10.1080/1369118X.2014.979216.
[6] Data Justice Lab, https://datajusticelab.org/