Since its early days, media and cultural studies have emphasized the importance of cultural representations in the processes and practices of signification that take place in human communities. Representations were seen as produced and reproduced through established cultural and media production practices and genres, resulting in “technologies” of gender or race. Less attention was paid to the actual technological environment conditioning production processes and meaning-making practices. Representation and signification were seen as “cultural”, i.e. human, practices, made and formed by human communities, who were also considered to have the power to change representations. In our contemporary media environment, however, representations are produced not only by people but also by automated machines. AI is often seen as a more neutral actor than humans, considered capable of producing objective results and unbiased representations. What, then, are the implications when representation and cultural signifying practices are automated, let alone managed by self-organizing machine learning systems?
To begin to understand these questions, we have looked into how commercial machine vision systems perceive the world, with a focus on religion, an area often neglected in media and cultural studies and the social sciences. We are interested in how image recognition technology contributes to the cultural construction of attitudes, values, norms, and beliefs. Our research questions are: How do image recognition services label religious images? Are there differences between the services? What are the ethical implications of the resulting representations for religious groups?
Empirically, we have worked with a custom-built dataset of two thousand religious images representing Christianity, Islam, Hinduism, Buddhism, Shintoism, and Spirituality in different contexts, collected from Google Images. We pushed these images one by one through the application programming interfaces (APIs) of Google Cloud Vision, Amazon Rekognition, and Microsoft Azure Computer Vision, and collected the classification labels and their confidence scores. These were then examined qualitatively and statistically.
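For context, the collection step can be sketched in Python with the public SDKs of the three services (google-cloud-vision, boto3, azure-cognitiveservices-vision-computervision). This is a minimal illustration rather than our actual harness: the dataset path and the AZURE_ENDPOINT / AZURE_KEY environment variables are hypothetical, and credentials for all three services are assumed to be configured.

```python
# Minimal sketch: label a single image with each of the three commercial
# vision APIs and print (service, label, confidence score) triples.
# Assumes credentials are configured: GOOGLE_APPLICATION_CREDENTIALS for
# Google, AWS keys for boto3, and the hypothetical AZURE_ENDPOINT /
# AZURE_KEY environment variables for Azure.
import os

import boto3
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from google.cloud import vision
from msrest.authentication import CognitiveServicesCredentials


def google_labels(image_bytes):
    # Google Cloud Vision label detection; scores are in [0, 1].
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    return [(l.description, l.score) for l in response.label_annotations]


def amazon_labels(image_bytes):
    # Amazon Rekognition returns confidences in [0, 100]; rescale to [0, 1].
    client = boto3.client("rekognition")
    response = client.detect_labels(Image={"Bytes": image_bytes})
    return [(l["Name"], l["Confidence"] / 100) for l in response["Labels"]]


def azure_labels(image_path):
    # Microsoft Azure Computer Vision tagging; confidences are in [0, 1].
    client = ComputerVisionClient(
        os.environ["AZURE_ENDPOINT"],
        CognitiveServicesCredentials(os.environ["AZURE_KEY"]),
    )
    with open(image_path, "rb") as stream:
        result = client.tag_image_in_stream(stream)
    return [(t.name, t.confidence) for t in result.tags]


if __name__ == "__main__":
    path = "dataset/christianity/0001.jpg"  # hypothetical dataset layout
    with open(path, "rb") as f:
        data = f.read()
    for service, labels in [("google", google_labels(data)),
                            ("amazon", amazon_labels(data)),
                            ("azure", azure_labels(path))]:
        for label, score in labels:
            print(service, label, round(score, 3))
```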
Based on our results, we argue that the image recognition systems of companies like Google, Amazon, and Microsoft are in many ways ethically problematic in the context of religion, reproducing a range of cognitive and representational biases. As in the known cases of ethnicity, race, and gender, we argue that these biases are strengthened because the systems fail at transparency, representational diversity, and inclusion. This matters in a world where perceptions of reality are shaped by datafication processes that are often defined by commercial systems. As these systems are already in wide everyday and research use, their power to define representations of different groups and peoples has consequences for human identities and relationships.
Our initial findings suggest that the biases reside in the training data, in the design architecture, and at the systems level. Further research should focus on these elements, which not only repeat the biases of human communities but multiply and amplify them.
In this work we start from an analysis by Pattat and Rocha (2020), in which the authors compared two dedicated content verification pages, named Fact or Fake and Polygraph respectively, hosted on the most accessed news portals of Brazil and Portugal. The authors showed that in the South American country the incidence of Fake News with a political-ideological bias, shared even by the President of the Republic, was considerably higher than in Portugal. From this, we decided to focus exclusively on content verified in Brazil, and in particular on content with this political-ideological bias, using qualitative-quantitative content analysis (Bardin, 2007) to identify similarities and differences, especially in relation to language, forms of propagation, and implicit objectives.
We cross-referenced this information with news circulating in the same period, supported by documentary analysis (Moreira, 2010), to establish the context in which the Fake News was disseminated and thus to understand some of the possible motivations for its propagation in Brazil. Even though there is no consensus on an exact and specific definition of the term Fake News (Amaral, Brites and Catarino, 2018), in this paper we start from the understanding of Allcott and Gentzkow (2017), for whom Fake News are "news articles that are intentionally and verifiably false, and could mislead readers" (p. 213).