Timo Minssen is Professor of Law, specializing in Health & Life Science Innovation, at the University of Copenhagen (UCPH). He is the Founding Director of UCPH's Center for Advanced Studies in Bioscience Innovation Law (CeBIL). He is also an Inter-CeBIL/PFC affiliate at Harvard Law School and an LML Research Affiliate at the University of Cambridge. His research, supervision, teaching & part-time advisory practice concentrate on Intellectual Property, Competition & Regulatory Law, with a special focus on new technologies, big data & artificial intelligence in the health & life sciences.
Timo holds a German law degree from the University of Göttingen, as well as biotech- & IP-related LL.M., M.I.C.L., LL.Lic. and LL.D. degrees from Lund & Uppsala University. Previously, he has been an Epigenetics Fellow at the Pufendorf Institute at Lund University, a Global Visiting Professor at the Technical University of Munich (TUM), a Visiting Research Fellow at the Universities of Cambridge & Oxford, Waseda Law School (Tokyo) and Harvard Law School, an Assistant Professor at the Chicago-Kent College of Law, and at the Max Planck Institute for Innovation & Competition. Moreover, he has been trained in the German court system, at the European Patent Office, and in law firms & life science start-ups. Timo serves as a member of several international committees and as an advisor to the WHO, WIPO, the EU Commission, companies, national governments, and law firms.
At UCPH he leads several large interdisciplinary research projects, including EU Horizon projects, on legal issues in synthetic biology, precision medicine, antimicrobial resistance & pandemic preparedness, sustainable innovation, medical AI, and responsible quantum technologies. He is the PI and grant holder of a large research program in bioscience innovation law, funded by the Novo Nordisk Foundation, which involves several international core partners, such as Harvard Law School, DTU, Harvard Medical School, and the Universities of Cambridge and Michigan. His research has been featured in international media, such as The Economist, The Financial Times, El Mundo, Politico, Times of India, the WHO Bulletin & Times Higher Education. It comprises 8 books and 230+ publications, including articles in law & science journals such as Science, Harvard Business Review, Harvard Business Manager, JAMA, NEJM AI, NEJM Catalyst, Nature Medicine, Nature Genetics, Nature Biotechnology, Nature Machine Intelligence, Nature Electronics, npj Digital Medicine & Lancet Digital Health.
Blueprints for the Future: Rethinking the Law and Governance of European (Health) Innovation
The evolving legal landscape governing European innovation stands at a crossroads. While this holds true for many application areas of rapidly evolving general-purpose and multi-use technologies, it is particularly evident in the health sector. With Asia's startup scene fast-tracking drug development and digital solutions, rising geopolitical pressures, and shifting regulatory priorities in the U.S., the health and life science sector is undergoing structural change. Against this background, and in light of rapid technological development, this keynote critically reflects on recent developments in the law, regulation, and governance of pharmaceuticals, including ATMPs and antibiotics, medical devices, medical AI, and quantum technologies. It argues that legal frameworks and innovative regulatory tools increasingly shape, rather than merely respond to, technological change and must therefore be assessed for both their enabling potential and their structural risks. Drawing on recent EU reforms such as the Pharmaceutical Package, the AI Act, and the European Health Data Space, the talk explores key tensions, opportunities, and trade-offs between innovation, prevention, conservation, and access. This also involves critical debates about regulatory ambition and feasibility, as well as digital opportunity, sovereignty, and justice. Emphasis will be placed on IP-related and regulatory exclusivities, (re-)calibrations of medical device and AI regulation, and the anticipatory governance of quantum technologies. The talk calls for a more realistic, feasible, and flexible approach to law and regulation that remains inclusive and ethically grounded, one that embraces novel legal and regulatory tools to support innovation and Europe's competitiveness while promoting transparency, human rights, and public health.
Santa Slokenberga, LL.D., is an associate professor in Medical Law and a senior lecturer in administrative law. Her research, teaching, and supervision interests focus on questions concerning the human genome and biobanking, AI, privacy and personal data protection, the quality of pediatric health care, and the governance of rare diseases. She is an EAHL board member and a board member of Nordic Permed Law, and she is engaged in health policy questions nationally and internationally.
Transforming Health Data Governance: The European Health Data Space Regulation
After deferring the adoption of the first data space several times, the EU legislator passed the Regulation on the European Health Data Space (Regulation 2025/327) on February 11, 2025. Framed as establishing “common rules, standards and infrastructures and a governance framework, with a view to facilitating access to electronic health data for the purposes of primary use of electronic health data and secondary use of those data” (Article 1.1), the EHDS Regulation embodies ambitious policy aspirations, difficult political compromises reached over nearly three years on matters of significant sensitivity for Member States, and new mechanisms aimed at transforming health data governance both at the Union level and across the EU. At the center of this transformation is the individual—the data subject—whose personal health data are being digitalized and used, alongside a range of other datasets, for various public interest purposes.
This talk seeks to provide insight into the key transformations introduced by the EHDS Regulation, highlighting both the regulatory innovations and the fundamental tensions between individual rights and public interests that underpin the new data governance paradigm envisioned for the coming decade.
Dr. Tuomas Pöysti is Senior Counsel and Head of Investigations at Geradin Partners, and holds the title of Docent in Administrative Law at the University of Helsinki. He previously served as Chancellor of Justice of the Government of Finland and as Under-Secretary of State for Governance Policy and Digitalization, as well as for the Social and Health Services Reform. Dr. Pöysti is among the leading experts in EU and administrative law, digitalization and law, and technology and information law and regulation. His active research spans critical areas such as artificial intelligence (AI), cyber & information security, and digital service ecosystems. His recent work focuses, inter alia, on the legal frameworks surrounding the use of AI in doctor–patient relationships.
What Health Care Professionals and Patients Can Legally Expect from AI Design and Maintenance: Health Care at the Crossroads of AI Regulation and Human Rights
Health care presents numerous promising, already implemented, and rapidly expanding use cases for artificial intelligence (AI). These range from enhancing the doctor–patient relationship and providing analytical diagnostic support tools for both health care professionals and patients, to offering solutions for knowledge management, research, and administrative functions. While AI holds transformative potential in health care, it also operates within a highly specific and sensitive context. This necessitates robust legal frameworks and daily governance that ensure, through both preventive and ex post measures, the protection of human dignity and the trustworthy, safe deployment of AI technologies and the augmented intelligence resulting from human–machine collaboration.
This presentation explores the legally enforceable expectations that health care professionals and patients may hold regarding the design and maintenance of AI systems. These expectations are grounded not only in market regulation and AI product governance but also in broader legal and ethical principles. They stem from the fiduciary duties of health care professionals’ organizations toward patients, as well as from patients’ rights to reliable, high-quality care based on scientifically validated and empirically supported clinical methods.
In the European context, this translates into a multi-layered and complex legal framework. It begins with the Council of Europe’s Oviedo Convention on Human Rights and Biomedicine and extends through the European Union’s General Data Protection Regulation (GDPR), the Artificial Intelligence Act, and sector-specific legislation such as the European Health Data Space Regulation, the Medical Devices Regulation, and the In Vitro Diagnostic Regulation. At its core, this framework is concerned with the proactive and ongoing protection of human rights and ethical values, grounded in the law of personality.
Generic requirements for trustworthy AI—such as safety, reliability, transparency, explainability, and accountability—must align with fundamental legal and ethical principles, including human dignity, integrity, and the primacy of the human being over societal, scientific, or technological interests. For patients, this means that informed consent, privacy, non-discrimination, equitable access to care, and adherence to high professional standards must be embedded in the design and maintenance of AI systems. This includes the continuous management and training of algorithms. For medical professionals, the core expectation is that AI-enabled augmented intelligence demonstrably supports their ability to meet high professional standards. They must also possess a reasonable understanding of how these systems function across various use cases. AI systems in health care must preserve the integrity of the doctor–patient relationship. To achieve this, legal requirements mandate professional oversight and continuous quality assurance in the design and use of AI technologies.