
Ethics and law for artificial intelligence

Giorgio Resta


At RomeCup 2025, Giorgio Resta explains how Europe is trying to regulate AI with legal principles and democratic values

In his speech at the inaugural conference of RomeCup 2025, Giorgio Resta, full professor of comparative private law in the Department of Law at Roma Tre University, addresses one of the central issues of technological innovation: how to reconcile the development of artificial intelligence with fundamental human rights. From the risk pyramid to transparency in chatbots, the new European regulation on AI introduces, for the first time, clear limits, prohibited systems and new rights for citizens. A lucid and passionate reflection on the role of law as a tool for guiding the technological future, without renouncing the values that underpin our democratic civilisation.

We are making this content available to schools so that we can continue to reflect together on the evolution of the relationship between humans and intelligent technologies.

Listening to the debate, I was reminded of a phrase often repeated by Stefano Rodotà, quoting Hegel: when it comes to technological innovation, the law often arrives like Minerva's owl at dusk. That is, when many of the phenomena altering our social conditions have already taken shape and it becomes difficult to change them.

However, it is through the law that we can build new conditions for framing and guiding future developments.

This has always been the case, since the industrial revolution. Every time we have faced major technological innovations, we have imagined that they would spell the death of law or ethics. Fortunately, we are still here.

There is an Italian word, ‘potere’ (‘power’), that is translated into German with two different terms: ‘können’ and ‘dürfen’.

‘Können’ means ‘we are physically able to do’; ‘dürfen’ means ‘we are allowed to do’.

This is precisely the challenge of law: to set red lines, limits and principles, often derived from different ethical systems, to guide innovation.

The recently approved European regulation on artificial intelligence is a unique experiment on a global scale. It is not the first piece of legislation on AI: China, for example, had already adopted three separate regulations on recommendation algorithms, deep synthesis and generative AI.

But it is the first horizontal regulation, i.e. it aims to cover all applications of artificial intelligence, not just specific sectors.

This challenge is consistent with the European approach to regulation, which is based on values and rights built up over centuries, including through tragedies.

It is often said that the European approach is ‘sceptical’ towards innovation. This is partly true. But it is also an approach that draws a clear line towards the future.

We can already see this in the key words of the European regulation. I would also like to remind you that Italy approved a bill on AI in the Senate (on 20 March) that refers to the same broad principles: democracy, the rule of law, health and dignity.

These are values that we must reaffirm with regard to the possible applications of AI.

Uncontrolled use of artificial intelligence can jeopardise these values. We saw this in the recent elections: the excessive use of social media, data collection and algorithmic analysis can distort electoral behaviour.

The Romanian Constitutional Court recently annulled the result of a presidential election for precisely these reasons. But we have also seen this in the United States, and we continue to see it every day.

I remember a case in point: in the United States, an individual was sentenced partly on the basis of predictive software called COMPAS, which was used to estimate the risk of recidivism.

The software was based on questionnaires that considered various factors, including ethnic origin and social conditions.

An investigation by the website ProPublica showed that COMPAS did nothing more than reproduce existing discrimination, multiplying it and turning it into a rule.

Since, for example, the American prison population is largely African American, the system attributed a higher risk of recidivism to African Americans, fuelling a discriminatory spiral.
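To see how such a spiral works mechanically, here is a minimal Python sketch. The figures are invented for illustration and are not taken from COMPAS or the ProPublica investigation: a score fitted only to historically skewed records reproduces the skew and turns it into a decision rule.

```python
# Purely illustrative: invented numbers, not real COMPAS or ProPublica data.
# A score fitted to historically skewed records reproduces the skew,
# and acting on the score generates new records that "confirm" it.

historical_arrest_rate = {"group_a": 0.30, "group_b": 0.10}  # skewed by past policing

def risk_score(group: str) -> float:
    """A naive model: predicted risk is simply the historical rate."""
    return historical_arrest_rate[group]

def flag_as_high_risk(group: str, threshold: float = 0.20) -> bool:
    """Flag anyone whose group score exceeds the threshold."""
    return risk_score(group) > threshold

for group in historical_arrest_rate:
    print(group, risk_score(group), flag_as_high_risk(group))
# group_a is always flagged, group_b never is: the past disparity
# has become a rule, and every new flag feeds the same statistics.
```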

Yet it is precisely the law that should combat these inequalities. This is clearly stated in the second paragraph of Article 3 of the Italian Constitution, which requires not only formal equality but also the removal of de facto obstacles.

How, then, can this be regulated? Of course, the technologies are similar throughout the world. But the social systems in which they are embedded are different. We see very different models:
 

  • The United States attempted a ‘light touch’ approach under the Biden administration, but with Trump's arrival many of those executive orders have been revoked. In general, there are no strong controls on the use of data.
  • China, with a strong public-private push and often close ties between the military and industry, favours innovation, but limits rights, especially in relations between citizens and the state.
  • Europe, on the other hand, has chosen a model based on the risk pyramid: unacceptable-risk systems, high-risk systems, limited-risk systems and low-risk systems.

Here are some examples of systems prohibited by European regulations:

  • Social scoring (as in some Chinese contexts), which evaluates citizens based on their behaviour, awarding rewards or penalties;
  • Emotion inference systems in the workplace and schools;
  • Predictive systems such as COMPAS, which estimate the likelihood of criminal recidivism;
  • Systems that exploit individual vulnerabilities to influence behaviour.
     

These red lines have already applied since 2 February 2025; most of the rest of the regulation will apply from August 2026. The second category comprises high-risk systems, for example in healthcare, the judiciary, immigration and border control. Developers of these systems will have to follow strict procedures: assessment, risk minimisation and registration.
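To fix ideas, here is a hedged Python sketch of the pyramid just described. The four tiers follow the regulation's scheme, but the example use cases and their mapping are simplified assumptions of mine, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (red lines in force since 2 February 2025)"
    HIGH = "strict duties: assessment, risk minimisation, registration"
    LIMITED = "transparency duties, e.g. disclosing that a chatbot is a machine"
    MINIMAL = "no specific obligations"

# Simplified, non-exhaustive mapping of hypothetical use cases to tiers.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "emotion inference at work or school": RiskTier.UNACCEPTABLE,
    "COMPAS-style recidivism prediction": RiskTier.UNACCEPTABLE,
    "triage software in a hospital": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```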

Then there are limited-risk systems, such as chatbots and generative AI. For these, the regulation does not impose bans, but it does impose a principle of transparency: I have the right to know that I am talking to a machine, and not to a doctor or another human being.
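In practice, that transparency duty can be as simple as an explicit disclosure attached to a chatbot's replies. The following Python sketch is purely illustrative: the wording of the notice and the helper functions are hypothetical, not taken from the regulation.

```python
# Hypothetical disclosure wording; the regulation requires transparency,
# not this exact text or interface.
AI_DISCLOSURE = (
    "Notice: you are interacting with an automated AI system, "
    "not with a human being."
)

def generate_answer(user_message: str) -> str:
    """Placeholder for whatever model actually produces the reply."""
    return f"(model output for: {user_message!r})"

def chatbot_reply(user_message: str, first_turn: bool) -> str:
    """Attach the disclosure so the user knows the reply comes from a machine."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(chatbot_reply("Is this rash serious?", first_turn=True))
```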

Finally, every technological innovation brings with it new rights. One of the big problems with AI is that of the black box, of opaque decision-making.

Not knowing the logic behind a decision can be very serious, as in the case of autonomous weapons (an issue which, unfortunately, the regulation does not address because it concerns national security).

Or in the case of automated dismissal, such as that of a delivery rider who refused to deliver a meal 50 km away by bicycle.

How does the law respond?

The European regulation, and before it the GDPR, provide for the right to obtain an explanation of the logic behind an automated decision.

There is also the so-called Kafka provision: no one may be subject to a decision based solely on automated processing. And if this happens, the person has the right to request human intervention.
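As a rough sketch of how these two rights could operate together, consider the following Python example. Every name, figure and threshold here is hypothetical; the only point is that an automated outcome should carry its reasons and an escalation path to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # e.g. "refused"
    reasons: list[str]  # the logic the person is entitled to see
    automated: bool     # solely automated decisions can be contested

def decide_loan(income: float, debt: float) -> Decision:
    """A toy automated decision that records its own logic as it runs."""
    if debt > income * 0.5:
        return Decision("refused",
                        [f"debt {debt} exceeds 50% of income {income}"], True)
    return Decision("approved", ["debt within the 50% threshold"], True)

def request_human_review(decision: Decision, objection: str) -> Decision:
    """Escalation in the spirit of GDPR Article 22: a person re-examines the case."""
    print(f"Escalated to a human reviewer; objection: {objection}")
    print("Automated reasons given:", "; ".join(decision.reasons))
    return Decision(decision.outcome, decision.reasons + ["pending human review"], False)

decision = decide_loan(income=30_000, debt=20_000)
print(decision.outcome, decision.reasons)  # the explanation of the logic involved
decision = request_human_review(decision, "my recorded debt is out of date")
```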

These are examples of good law. Not because we should be afraid of technology, but to govern it for social purposes, for example to help the most vulnerable people.

And in this regard, artificial intelligence has extraordinary potential.
