The AI Act is swelling up into a difficult-to-apply monstrosity

The AI Act currently being drawn up by the EU is an enormous and exceptionally complex legislative project. In spite of its deficiencies, the regulation is necessary and will provide protection for ordinary people.

The EU is working hard to finalise its planned AI Act in the near future. However, as systems that utilise artificial intelligence evolve at a dizzying rate, the legislative package threatens to grow into a regulatory framework that is difficult to apply.

“It’s starting to look like the regulation will become very extensive and contain an enormous number of articles. I consider it very unlikely that it will be a coherent whole.”

This is the opinion of Professor of Public Law Susanna Lindroos-Hovinheimo, who has closely observed the progress of the regulation. She has familiarised herself with the act, among other things, through the Generation AI project, which investigates the regulation of artificial intelligence, particularly from the perspective of children’s rights.

Lindroos-Hovinheimo believes that the regulation leaves far too much room for interpretation.

“It’s only when the Court of Justice of the European Union gives its first rulings that we will know exactly what the regulation says. This will take years. And yet, the regulation should be observed in the meantime. This will be very difficult for all operators, both private and public.”

Those unfamiliar with the matter may think that artificial intelligence is not regulated at all at the moment, as the AI Act is only being drawn up. This is not the case, since several existing laws, starting from the Constitution of Finland, lay down the preconditions for the use of artificial intelligence. However, no single set of legislation encompasses all relevant technology. The need for such legislation is evident, and a strong political will to bring it about appears to prevail in the EU.

“Digitalisation has progressed for a long time without anyone or anything problematising the development. We are decades behind in regulation, and this gap is now being closed both in the EU and at the national level,” says Riikka Koulu, Associate Professor of Social and Legal Implications of Artificial Intelligence and director of the Legal Tech Lab.

GDPR as an appropriate benchmark

The most appropriate benchmark for the AI Act under preparation is the General Data Protection Regulation of the EU (GDPR).

One evident unifying factor is scope. Just like with the GDPR, the AI Act’s scope of application is enormous. In simple terms, it applies to all areas of society. Another common aspect is supervision.

“The regulation will certainly not remain a dead letter. Rather, control mechanisms intended to be fairly effective will be established,” Lindroos-Hovinheimo says.

The EU has on several occasions imposed fines of hundreds of millions of euros on platform companies that have violated the GDPR. The AI regulation will also include the threat of fines, with the sums being of the same magnitude, making them a genuine deterrent that affects business operations.

At the same time, implementation, application and supervision are expensive. For the purposes of supervision, a new authority may be established or, alternatively, new duties assigned to existing authorities. The GDPR is supervised in Finland by the Data Protection Ombudsman, and presumably a similar authority will be established to supervise the AI Act. The passage of new legislation is also an extensive process that requires a great deal of effort and money.

The clear difference between the two sets of laws is that the GDPR was built on an existing template, while the AI Act is being created from scratch. The GDPR was built on top of an old directive, from which a considerable part of the regulation was directly imported; the directive had been in existence for decades and was elaborated on over the years through legal practice.

“Since there is no precursor to the AI regulation, neither is there any legal tradition or advance rulings on which the courts can rely. This is why we can assume that its application will be challenging,” Lindroos-Hovinheimo says.

Defining artificial intelligence is difficult

One of the biggest stumbling blocks with the AI Act has been how artificial intelligence should be defined in the legal sense. This is one of the key questions for which a consensus is yet to be reached in the negotiations. It is illustrative that not even data scientists, engineers and similar professionals working with artificial intelligence have come up with a suitable description of what artificial intelligence is and what it is not.

However, the EU’s aim has been to keep the definition of artificial intelligence broad, and the regulation contains a number of general guidelines that would apply to all aspects of artificial intelligence. At the same time, the regulation goes into very granular detail, touching on, for example, the technical operating mechanisms of individual systems. In this sense, Lindroos-Hovinheimo considers the regulation severely unbalanced, as it reaches across too many levels.

Rapid technological advancement, which makes it necessary to amend draft legislation on the fly, has also created difficulties. Large language models such as ChatGPT, which made a major breakthrough at the turn of the year, are an example. AI applications based on this technology had not been taken into consideration in the European Commission’s original draft regulation, but they were added to the Parliament’s version during the drafting.

“Laws should be drawn up so that they stand the test of time and keep pace with technological advances. In that regard, the GDPR appears to work better, as it is neutral in terms of technology – the same rules apply even when processing data with pen and paper,” Lindroos-Hovinheimo says.

To solve this problem, a model has been planned in which the Commission could, after the regulation comes into force, draw up additional provisions and updates without amending the regulation itself. This would be a practical way of keeping up with development, but problematic from the perspective of parliamentarism.

Tech companies are also happy to point to this fundamental problem between technology and law, of regulation lagging behind technological advances. Companies in Silicon Valley have suggested that it would actually be better not to regulate at all and instead let development progress at its own pace while relying on industry self-regulation.

“In terms of the market for goods and services, we have seen time and again that self-regulation cannot be trusted. External regulation is necessary. And this is what the EU has done in recent years with large platforms such as Google and Facebook,” Riikka Koulu points out.

Regulation protects ordinary people

In spite of the difficulties, the regulation is quite likely to be finalised. Of course, legislative packages have been withdrawn in the middle of drafting, but in the case of the AI Act, the stakes are so high that abandoning it at this point is not a valid option – the issue is too important and the political pressure to act too great.

Should the regulation fail, EU member states would inevitably start enacting their own laws individually.

“This would be a big problem for trade and free movement, that is, an economic problem within the union. Establishing common rules within the EU is a specific capitalistic goal, not just trying to make the world a better place,” Lindroos-Hovinheimo notes.

Even though the legislative package has its problems and there will be uncertainties in its application, Lindroos-Hovinheimo believes it is a necessary and welcome whole.

“From the perspective of ordinary people, the regulation is most likely a positive thing. Having rules for AI systems, imposing certain restrictions on what they can be and, above all, preventing their use for just any purpose helps to provide some kind of protection for people.”

Legal Tech Lab highlights the legal perspective on technology

The Legal Tech Lab is a research community at the Faculty of Law that has, since 2016, focused on broadly investigating the societal connections between law and technology.

The multidisciplinary lab brings together researchers interested in digitalisation who would otherwise conduct their research on their own. Its activities are not particularly limited, nor do they fit into any single existing field of law. Instead, the community includes representatives of very different fields of law and legal research, as well as of a number of other disciplines.

Because of the multidimensional nature, enormous scope and constant evolution of research themes related to digitalisation, research must be conducted in an interdisciplinary way and, therefore, preferably in research groups. In fact, one key way of boosting the lab’s operations has been establishing cooperation with other disciplines, particularly towards research in the fields of science and technology.

“While law is a powerful way of guiding the digitalisation of society, legal research and the legal perspective on technology are usually underrepresented in the public eye,” says Associate Professor Riikka Koulu, director of the Legal Tech Lab.

“One of our key duties is to make the role of legal science visible. Legal science plays a significant role in the further development of legislation and legal interpretations made in court. For social research to be impactful, legal science must be involved. The field brings a concrete aspect to the interdisciplinary research needed to make a difference in society.”