Can artificial intelligence aid the detection of breast cancer and save resources? A recent report from Denmark confidently answers this question with a definitive "Yes." All Danish women aged 50 to 69 are offered biennial screening for breast cancer, and those who have had prior surgery for the disease undergo mammography screenings until they reach 79. Previously, each screening was analyzed by two radiologists, with a third involved in case of disagreement. However, several regions in Denmark have recently altered this workflow by incorporating AI systems to evaluate screening results.
In this blogpost, I touch upon a key question that has occupied Danish radiologists: is the knowledge base for using AI in breast cancer screening sufficient? I was approached by radiologists in the regions where AI will be implemented in Denmark in 2025. We had previously collaborated on multiple projects and seminars on the implementation of technologies—so we knew each other's work. The radiologists expressed a pressing need to share their experiences and worries connected to the changing workflows and the knowledge base for implementing AI. Based on these discussions, I share their concerns here by contextualizing them with my ongoing research.
AI implementations are not simple
The shortage of healthcare staff is a concern both in Denmark and across Europe. According to the WHO, this crisis is primarily attributed to demographic changes threatening the stability of our healthcare systems. In this context, AI technologies are viewed as a potential solution for data-driven decision-making and for optimizing workflows within the welfare and healthcare sectors. There is optimism that AI systems will alleviate the existing workload and improve efficiency.
In a study examining AI use in the breast cancer screening program over a period of three years in the Capital Region of Denmark, the combination of AI and specialized radiologists found significantly more cases of breast cancer while reducing the number of false positives (Lauritzen et al., 2024). After the introduction of AI, breast cancer radiologists identified on average an additional 12 cases per 10,000 screenings. The clear conclusion from the report was that technology could be the answer to the shortage of healthcare professionals and, in this case, radiologists. These results also aligned with the recommendations of the Danish Robustness Commission (Robusthedskommissionen, 2023).
So, what is this blogpost about, if AI does the job?
As part of a larger European research project on AI, I continued to interview radiologists and leaders in hospitals where preparations for implementing AI are currently underway. These interviews suggest a more complicated picture. From the literature on implementing AI systems, we already know that it is rarely a simple plug-and-play process. The challenges of introducing algorithmic systems in workplaces are manifold. AI systems often arrive at decisions in ways that seem unclear—to users, their subjects, and sometimes even their developers (Burrell, 2016).
Unsurprisingly, I learned that things are also not that simple in the Danish case. I felt that I needed to dig deeper, even if it meant complicating, rather than simplifying things.
Gathering evidence
“The more data we get into the algorithms, the better. So, it’s just about getting on board.” In 2022, this was how Professor Birthe Dinesen from Aalborg University expressed the necessity of starting AI implementation. The same year, the Capital Region implemented AI in breast cancer screening, and other regions in Denmark planned to adopt the system in 2025. The system had been tested in the laboratory, and the results were promising, as a representative from the company Screenpoint Medical explained in an interview. Since AI does not get tired, it potentially discovers more cancer cases, detects cancer faster, and reduces the workload. However, they did note that evidence from the laboratory cannot be directly transferred to the clinic. Thus, the results from the first tests run by the company had not yet been replicated in clinical reality.
In initial studies, doubts were raised about whether the AI systems were as effective as had been promised (Freeman et al., 2021).
A study in European Radiology (Kühl et al., 2024) evaluated AI alongside radiologist assessment of screening mammograms across an entire population, adding to this emerging body of evidence.
Are we doing the right thing?
Generally, radiologists expressed great optimism regarding the potential of AI tools to reduce time spent on tasks and to increase the quality of screening. They acknowledge, however, that there are still many unanswered questions and concerns. The most important question for them is how the technology will impact their workflows and, consequently, shape patient care.
The radiologists understand that it is crucial to be able to identify when AI makes mistakes. And AI does make mistakes—radiologists are very aware of this. Through observation, we became aware of the following sources of error: scars from breast surgery, breast implants, calcifications, blood vessels, and lymph nodes. To the AI, all of these can look like suspicious changes or signs of cancer, and they are therefore marked and registered as such. An experienced radiologist, on the other hand, can look back at the patient’s history by examining previous images, thereby evaluating developments since the last mammogram. If there has been no significant change, the finding is often not what is referred to as a significant nodule, that is, potentially fatal cancer.
What this suggests is that the interpretation of screening results largely depends on experience and the radiologist's conscience. Yet when their role shifts from decision-maker to quality controller and supervisor of the AI technology, there are no general guidelines for how quality control should be performed. Radiologists are not trained for this task. If AI-led screening is not performed correctly, it leads to overtreatment, unnecessary recalls for screening, and, in the worst case, unnecessary surgeries. A lot of responsibility is thus placed on the shoulders of supervising radiologists to avoid serious errors.
If we consider the time radiologists spend quality-checking AI to avoid the potential harms of unnecessary call-backs and overdiagnosis of indolent cancers—what will the actual time saving be? How will radiologists be trained to quality-check the AI? And how are younger radiologists prepared for this task if they also perform fewer screenings, meaning they have less experience? We do not know what it means for clinical practice that AI detects different nodules than doctors do, and that these nodules may not be potentially fatal. Several doctors pointed out that we lack mortality figures to assess the real effects of AI screening. In the end, the only clinically relevant endpoint, as noted by a chief physician, is to reduce mortality.
Concerns with responsibility
Thinking back on my encounters with radiologists, many questions remain. How many non-fatal cancer nodules are at risk of being overdiagnosed? And who takes care of the women who are recalled because of indolent cancers that the AI identifies but that are not clinically significant? Who is responsible if both the AI and the human radiologist make a mistake: the radiologist who performed the screening? Is the AI screening tool then absolved of responsibility? AI might place a significant burden on the shoulders of individual radiologists, one which was previously shared between two parties.
The conversations I had with the radiologists underline how problematic it is when AI is presented as an easy plug-and-play solution. Such simplification can lead to a long list of unacknowledged challenges, including new types of work, unclear allocation of responsibility, and mistreatment of patients. It is therefore important to ensure that we have the expertise to make the right decisions and that we engage key people in working with and assessing the technology. In healthcare, AI implementations should take place without external political and commercial pressures, and they should involve the health professionals who use the technologies.
References
Bruun, M. H., & Krause-Jensen, J. (2022). Inside Technology Organisations: Imaginaries of Digitalisation at Work: Organisation. In The Palgrave Handbook of the Anthropology of Technology (pp. 485-505). Singapore: Springer Nature Singapore.
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1).
Freeman, K., Geppert, J., Stinton, C., Todkill, D., Johnson, S., Clarke, A., & Taylor-Phillips, S. (2021). Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ, 374.
Kühl, J., Elhakim, M. T., Stougaard, S. W., Rasmussen, B. S. B., Nielsen, M., Gerke, O., ... & Graumann, O. (2024). Population-wide evaluation of artificial intelligence and radiologist assessment of screening mammograms. European Radiology, 34(6), 3935-3946.
Lång, K., Josefsson, V., Larsson, A. M., Larsson, S., Högberg, C., Sartor, H., ... & Rosso, A. (2023). Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study. The Lancet Oncology, 24(8), 936-944.
Lauritzen, A. D., Lillholm, M., Lynge, E., Nielsen, M., Karssemeijer, N., & Vejborg, I. (2024). Early indicators of the impact of using AI in mammography screening for breast cancer. Radiology, 311(3), e232479.
Møhl, P. (2024). AI, human skills and responsibility in breast radiology and breast cancer screening. Unpublished manuscript.
Robusthedskommissionen (2023). Robusthedskommissionens anbefalinger [The Robustness Commission's recommendations]. Webpage. Accessed 3 February 2025.
Dorthe Brogård Kristensen is a professor in consumption studies at the University of Southern Denmark. Her current interests include digital health, self-tracking technologies, and algorithmic culture. She also works as a researcher in the Reimagine ADM project led by Professor Minna Ruckenstein at the University of Helsinki.