
Ethics, intelligence and sustainability

Andrea Orlandini at RomeCup 2025

Andrea Orlandini at RomeCup 2025: ‘Only a scientific approach can help us distinguish the real from the plausible’

In his speech at the inaugural conference of RomeCup 2025, National Research Council researcher Andrea Orlandini emphasised scientific and civil responsibility in the use of artificial intelligence. He made a strong call for critical awareness, especially among the younger generations, and for the urgent adoption of a sustainable, human-centric use of technology. He also stressed the weight of our choices: ‘The system applies the objective function we give it. And if that is wrong, it will do so without question’.

We are making this content available to schools so that we can continue to reflect together on the evolution of the relationship between humans and intelligent technologies.

Watch the video of the speech
First of all, let me convey the greetings of our president, Professor Maria Chiara Carrozza, who has delegated me to be here today on her behalf. What a responsibility it is to speak on behalf of President Carrozza... and after so many rich speeches!

Allow me to add a personal note: it is a pleasure to be here, because I studied here, I obtained my doctorate here, and I see many professors with whom I took exams. Returning to these places for events like this is always special.

Many important issues have been raised. As a representative of the National Research Council, I feel I can say that technological developments present us with complex questions and challenges. And perhaps the most correct approach – the one we have always adopted as humanity – remains the scientific one: investing in research, studying phenomena from all angles, not just technological ones.

At the moment, artificial intelligence and robotics are technologies that we often experience passively, overwhelmed by waves of information. Some of these claims – such as the idea of fully robotised hospitals – should be taken with a pinch of salt. It is precisely the scientific approach that allows us to distinguish between what is true, what is plausible and what is pure marketing – and advertising gimmicks do not only come from small companies.

This is why it is essential that the younger generations understand that these technologies, sold as gospel, are actually very delicate. In the 1980s, John Searle, with his ‘Chinese room’, had already dismantled the idea of general artificial intelligence. Luciano Floridi, a well-known philosopher and critic of the term ‘artificial intelligence’ itself, also invites us to question the mechanisms behind what we see.

Today's systems do not have a model of the world: they react intelligently but reactively, without a deductive structure. A system, like a newborn baby, learns by interacting with the world. But our machines do not possess – and by design cannot possess – the physiological and biological complexity of human beings. They will always be simplifications of cognitive processes.

That is why the real question, as Dr Asciutti also pointed out, is: ‘What's next?’. We all need to ask ourselves this question – not just researchers, but every citizen. How can we use these tools in the right way?

When I teach artificial intelligence, I tell my students: the first important thing is to define an objective function. But who defines it? If we give the system the wrong objective, it will maximise it without any ethical criteria. We need awareness, we need the right questions and reasoning mechanisms. And we need to imagine how we want our world to change.
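The point about objective functions can be made concrete with a toy sketch (a hypothetical illustration, not drawn from the speech; the scenario, names and numbers are invented): an optimiser maximises exactly the objective it is given, and any constraint we fail to encode simply does not exist for it.

```python
# Hypothetical illustration: an optimiser applies the objective function
# it is given, without question.

def optimise(objective, candidates):
    """Return the candidate that maximises the given objective function."""
    return max(candidates, key=objective)

# Invented candidate hospital shift plans: (patients_seen, staff_rest_hours)
plans = [(30, 8), (40, 6), (55, 0)]

# A naive objective: count patients seen and ignore staff rest entirely.
naive = lambda plan: plan[0]
print(optimise(naive, plans))  # picks (55, 0): maximal throughput, zero rest

# A corrected objective that also rewards staff rest, up to 8 hours.
balanced = lambda plan: plan[0] + 10 * min(plan[1], 8)
print(optimise(balanced, plans))  # picks (30, 8): throughput traded for rest
```

The machine is not choosing badly in the first case; it is choosing perfectly against a badly chosen objective. The responsibility for what gets maximised sits entirely with whoever defined it.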

Robotics, in particular, enters our lives in a physical way. The anthropomorphic appearance of robots makes them more acceptable, but those of us who work in assistive and rehabilitative robotics know that even the most efficient machine can be rejected if it is ‘ugly’ or difficult to use. And then there is human contact, empathy: no machine today can replicate that. Not even a robot can replace a joke from a porter that lightens the atmosphere in the ward.

The challenge is ours: researchers, lawmakers, public service organisers. We need to understand how to use these tools to distinguish good from evil and build a better world.

Often, we forget about what goes on ‘behind the scenes’. ChatGPT, Copilot and other systems consume enormous amounts of energy. Recently, there has also been discussion about the consumption of drinking water to cool data centres. Interaction with these tools also has an environmental impact.

So yes, we can ‘play’ with these technologies, but we must do so in a conscious and sustainable way.
