How can the anonymity of AI systems be ensured?

When training artificial intelligence systems, developers need to use privacy-enhancing technologies to ensure that the subjects of the training data are not exposed, a new study suggests.

Artificial Intelligence (AI) systems trained using machine learning retain an imprint of their training data, which can make it possible to identify the data used to train them.

Systems trained using larger datasets are less vulnerable to identification of the training data, but effectively eliminating the vulnerability would require impractically large datasets.
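The vulnerability in question can be illustrated with a simple loss-threshold membership inference attack, in which an attacker guesses that an example was in the training set when the model fits it unusually well. The sketch below is illustrative only and is not the attack studied in the paper; the model, data, and threshold are placeholders.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Illustrative only: the model, data, and threshold are placeholders,
# not the setup used in the paper.
import torch
import torch.nn.functional as F

def membership_score(model, x, y):
    """Per-example loss; a lower loss suggests (x, y) was in the training set."""
    model.eval()
    with torch.no_grad():
        logits = model(x.unsqueeze(0))
        return F.cross_entropy(logits, y.unsqueeze(0)).item()

def is_member(model, x, y, threshold=0.5):
    # Predict "member" when the model fits the example unusually well.
    return membership_score(model, x, y) < threshold

# Toy demo with an untrained classifier; a real attack would target a
# trained model and calibrate the threshold on data with known membership.
model = torch.nn.Linear(10, 2)
x, y = torch.randn(10), torch.tensor(1)
print(is_member(model, x, y))
```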

The finding, from an article by researchers in the DataLit project at the Finnish Center for Artificial Intelligence (FCAI) at the University of Helsinki and at Kyoto University, published at the Conference on Neural Information Processing Systems (NeurIPS), has important implications for developers training AI systems on sensitive or personal data, such as health data.

"Developers need to use privacy-enhancing technologies such as differential privacy to ensure that the subjects of the training data are not exposed. Differential privacy allows mathematically proving that the trained model can never reveal too much information about any individual in the training dataset," says professor Antti Honkela.

Risks in training AI with personal data

The European General Data Protection Regulation (GDPR) defines strict rules for the processing of personal data. According to a recent opinion by the European Data Protection Board, an AI system would itself be considered personal data if its training data subjects can be identified from it.

The new result highlights this risk for AI systems trained using personal data.

"An important application for the result is in AI systems for health. Finnish law on secondary use of health data and new European Health Data Space Act require that AI systems developed using health data must be anonymous. In other words, it must not be possible to identify training data subjects."

Previous work by the same group of researchers shows that it is possible to train provably anonymous AI systems by applying differential privacy during training.
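In practice, such training is typically done with DP-SGD, which clips per-example gradients and adds calibrated noise. Below is a minimal sketch using the open-source Opacus library; it illustrates the general technique, not the authors' code, and the model, data, and privacy parameters are placeholders.

```python
# Sketch of differentially private training (DP-SGD) with Opacus.
# Illustrative only: model, data, and privacy parameters are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

# make_private wraps the optimizer to clip per-example gradients and add
# calibrated Gaussian noise, which is what yields the DP guarantee.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # noise scale (placeholder value)
    max_grad_norm=1.0,      # per-example gradient clipping bound
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

# The privacy budget epsilon spent so far, for a chosen delta.
print(privacy_engine.get_epsilon(delta=1e-5))
```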

The reported result was obtained by studying this vulnerability in the setting where an image classification model pretrained on a large dataset is fine-tuned using a smaller sensitive dataset. According to the results, the fine-tuned model is less vulnerable than a model trained from scratch on the sensitive data alone.
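The fine-tuning setup described above can be sketched as follows. This is an assumption-laden illustration of the general recipe, not the paper's exact configuration: the backbone, number of classes, and hyperparameters are placeholders.

```python
# Sketch of the studied setting: start from a model pretrained on a large
# dataset and fine-tune only a new final layer on a smaller sensitive
# dataset. Illustrative; not the paper's exact configuration.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                          # freeze pretrained backbone
model.fc = torch.nn.Linear(model.fc.in_features, 5)  # new head, e.g. 5 classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ...then train the head on the sensitive dataset as usual, optionally
# with DP-SGD as sketched above to obtain a formal anonymity guarantee.
```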

Article information

Marlon Tobaben, Hibiki Ito, Joonas Jälkö, Yuan He and Antti Honkela. In Advances in Neural Information Processing Systems 39 (NeurIPS 2025).

Contact person