When AI meets radiologists: skilled vision in breast cancer screening

This blog sheds light on the dynamics between mammaradiologists and AI systems. It highlights the importance of recognising the irreplaceable dimensions of radiological expertise, such as visual memory, embodied knowledge and personal responsibility, as AI capabilities evolve.

In a dimly lit room, a radiologist examines mammograms on a high-resolution monitor. She moves her eyes methodically from an image taken from the side to another taken from above, then back again. Fusing these 2D images in her mind, she constructs a three-dimensional image of the breast tissue. Prior to her reading, an AI system has processed the same images, highlighting potential concerns. This scene, playing out daily in radiology departments in Europe and worldwide, reveals an interplay between human expertise and artificial intelligence, where different ways of “seeing” meet.

Over a period of 5 months, I studied the daily work in a Danish breast cancer screening service that had recently implemented the use of AI in mammogram analysis after a thorough test period. I observed the mammaradiologists’ work in front of their screens while they were getting used to this new “companion”, conversed informally with them during their readings and listened to discussions among them in the screening room. Through this involvement, I gradually came to understand the nature of their work, their visual-sensory skills and their strong sense of responsibility toward the lives and bodies at the other end of the imaging process: the screened women.

Mammaradiology departments increasingly use AI to manage radiologist shortages and growing workloads, as Dorthe Kristensen (2025) also notes in her related blog post. Regular workflows require two radiologists per set of mammograms - now an AI system pre-screens images, with only one senior breast radiologist reviewing “low-risk” cases. This change is neither simple nor straightforward: it introduces uncertainties about radiologist skills, diagnostic accuracy, professional responsibility and the future of radiologist training and expertise. One of my particular interests lies in understanding how professional vision is altered when specialists engage with AI technologies that “see” differently - and often in quite opaque ways.

Learning to see: a sensory anthropology perspective

As a sensory anthropologist, I explore how different professional communities develop specialised modes of sensing and seeing - what Charles Goodwin (1994) calls “professional vision”, the particular ways of seeing that characterise expert practices, and how they intersect with technologies of vision. As Cristina Grasseni (2004; 2022) demonstrates, “skilled vision” emerges through apprenticeship and practice within specific cultural and professional contexts, and is characterised by learned perceptual habits, embodied knowledge, and shared ways of seeing shaped by collective experience. David MacDougall (2006) further shows how visual knowledge becomes embedded both in practitioners and in the technologies they use.

Mammography provides a convincing case study of this intersection between expert and automated ways of seeing. Radiologists undergo years of training to develop skilled vision. This is not simply a matter of having eyes and good eyesight, but of learning to perceive and interpret visual information in culturally and professionally specific ways. So when AI is introduced, how do radiological and algorithmic forms of vision intertwine, influencing and reshaping one another in practice?

As the radiologists’ decision-work becomes entangled with the AI system, their visual attention and interpretive practices shift in subtle and sometimes disconcerting ways. And although AI systems don’t literally “see”, their pattern recognition is largely shaped by human concepts of visuality. A few radiologists say they ignore the AI’s markings, and I observe that they skip the phase where its annotations are most clearly displayed, on the last screen of the workflow. Others, even among the most experienced, express concern that the system distracts their attention. They spend more time assessing the AI’s suggestions, trying to identify specific patterns that it consistently misses or, more often, misunderstands, and measuring and questioning their own reading and visual skills against its outputs. As they say, this requires additional effort and time - when time is exactly what they have least of.

Learning to see in space and time

The capacity for spatial and temporal reasoning and correlation provides a revealing example of the differences between human and algorithmic visual skills. Routinely combining the two two-dimensional images of the same breast appearing on their screens, one taken from the side and one from above, the mammaradiologists construct mental three-dimensional models, perceiving spatial relationships not visible in the individual images. The AI, in contrast, processes the images in isolation, establishing no correlation between them. What might appear to the AI as a cancerous growth in a two-dimensional reading could, by correlating the two images into a three-dimensional view, be recognised by the radiologist as a superposition of smaller, insignificant or non-malignant occurrences in the same plane, such as cysts, calcifications or blood vessels.

Comparing current images with previous mammograms is another fundamental aspect of radiological expert vision, as women in Denmark are offered breast cancer screening every two years between ages 50 and 69. I observed the mammaradiologists scrolling quickly back and forth through successive images of the same breast, seeming to create a mental video sequence despite considerable variations in image quality and angles. This longitudinal approach allows them to see changes that might be imperceptible in the single radiograms but become meaningful when viewed as part of a temporal progression, helping them assess whether growths are rapidly developing (interval) breast cancers or have actually shrunk over time. The AI, in contrast, does not capture these gradual transformations, because earlier images are, at least in the current set-up, not integrated into its analysis.

Visual memory, embodied expertise and responsibility

These practices of mammaradiologists expose a fundamental contrast between human and machine processing of radiograms, revealing not just technical limitations, but profound differences in ways of seeing. These differences become increasingly crucial and require attention to human sensory expertise, rather than being treated as mere technical issues to be resolved by the inevitable progress of technological development. Consider the following case:

Hanne is an experienced mammaradiologist with over 25 years in the field. Though now semi-retired, she continues working part-time to help manage the screening workload. She describes what I consider a remarkable aspect of her practice: when she identifies suspicious growths on a mammogram and marks them for clinical follow-up, she retains a vivid visual memory of those images, which she draws on when she later examines the patient in person.

During clinical breast examinations, often weeks after the initial screening, Hanne performs what could be described as a form of image-guided clinical examination. As she carries out palpations and ultrasound scans, she integrates her tactile sensations with her visual memory of the mammograms seen prior to meeting the patient. This process actually resembles the augmented-reality visualisation or image-fusion techniques used in surgical navigation, which overlay real-time imaging with pre-operative scans. Except that Hanne carries out this “augmented” examination using only her hands and memory.

This integration allows her to, one could say, “see through” the skin, correlating what her fingers sense with what her visual memory recalls from the mammographic images. What enables this vivid retention of specific images among the hundreds of screening mammograms she reviews daily is not simply technical expertise, but what she describes as an ever-present sense of responsibility toward the women’s health and well-being.

The example illustrates how expert radiological vision extends beyond technical competency to encompass embodied knowledge and responsibility. Hanne’s ability to maintain precise visual memory is sustained by her awareness that each image represents a person whose life may be significantly affected by her interpretive decisions. The emotional and ethical dimensions of her work become integral to her technical expertise, creating a form of professional vision that is simultaneously analytical, sensitive and caring.

This merging of technical skill with embodied responsibility raises important questions about what happens as AI systems increasingly mediate the relationship between radiologists and patients. While AI can enhance pattern recognition and diagnostic accuracy, it cannot replicate the very personal sense of responsibility that motivates and sustains human expertise over time.

The future of mammaradiologist expert vision

As AI becomes integrated into mammography, questions arise about how the next generation of radiologists will develop visual expertise. Traditionally, trainees learned by seeing thousands of mammograms, including the routine, normal cases that constitute the vast majority. Now, they only see the screenings the AI has marked as medium- or high-risk - those requiring two readers, the second being an experienced radiologist. During the first years of their specialisation, trainees will therefore act only as first readers of these medium- or high-risk cases - even though the human readers will, in the end, classify the majority of them as normal.

Some senior radiologists worry that trainees might struggle to develop fundamental visual skills if they never encounter the mammograms the AI rates as low-risk. This creates an interesting paradox: while AI systems rely on human expertise to improve, they may also be reshaping how that expertise develops, potentially creating a feedback loop that favours automated diagnostics and algorithmic vision over human judgment.

Attending to expertise

The daily interactions between mammaradiologists and AI that I observed during its early implementation - a transitional phase marked by emerging practices, uncertainties, and professional concerns - were far more complex than optimistic replacement narratives suggest. Some tech experts announce a goal of fully replacing radiologists with AI in the future. But human expertise remains fundamental to the processes we are observing. How we talk about these technologies matters - not only for current practitioners but also for the recruitment and training of future specialists. These professionals will continue the critical work of detecting potential cancers, and their skills and expertise must be acknowledged and emphasised, rather than suggesting they could be replaced by automated systems. This highlights another potential paradox in the AI implementation process: the very shortage of radiologists that motivated AI adoption might be exacerbated if overly enthusiastic claims in the media about AI capabilities discourage new doctors from becoming radiologists.

There is no doubt that automated processes, pattern recognition algorithms and machine learning systems can provide valuable supplements to human expertise, insofar as the premises for their implementation are well studied and analysed. But precision cannot be the only focus. The integration of visual memory, embodied knowledge, and personal responsibility demonstrated by experienced mammaradiologists reveals dimensions of radiological expertise that extend beyond simple pattern recognition - aspects that remain irreplaceable as AI capabilities advance.

Broader discussions of AI in healthcare and society must therefore address not only potential gains in efficiency, but also how professional expertise is being reconfigured, sometimes devalued, and potentially transformed, as well as the vital questions of learning the necessary visual and sensory skills and maintaining the acute sense of responsibility that comes with human expertise.

References

Goodwin, Charles. 1994. “Professional Vision.” American Anthropologist 96 (3): 606–33. 

Grasseni, Cristina. 2022. “More than Visual: The Apprenticeship of Skilled Visions.” Ethos: 1–19.

Grasseni, Cristina. 2004. “Skilled Vision: An Apprenticeship in Breeding Aesthetics.” Social Anthropology 12 (1): 41–55. 

Kristensen, Dorthe. 2025. “Do we know enough about AI in breast cancer screening?” Blog post.

MacDougall, David. 2006. The Corporeal Image: Film, Ethnography, and the Senses. Princeton, N.J.: Princeton University Press.

***

Perle Møhl is a visual and sensory anthropologist working at the intersection of digital technologies, human expertise, and sensory skills at the University of Southern Denmark (SDU). She is also working in the Reimagining Public Values in Algorithmic Futures project led by Professor Minna Ruckenstein. This blog is based on Møhl’s observations and discussions with mammaradiologists and data scientists in a Danish hospital that has implemented AI in breast cancer screening.