At RomeCup 2026, a conference at Sapienza University dedicated to the legacy of Tullio De Mauro
As part of RomeCup 2026, Sapienza University of Rome hosted the conference “From natural language to artificial language”, organised by the Fondazione Mondo Digitale and broadcast live on CorriereTv. A gathering to reflect on how language, now increasingly mediated by intelligent systems, is transforming knowledge, education and citizenship.
The starting point was the scientific and civic legacy of Tullio De Mauro, linguist, former Minister of Education and first president of the Fondazione Mondo Digitale from 2001 to 2011. A legacy which, almost ten years after his passing, continues to offer valuable tools for understanding the present: from the centrality of democratic language education to the relationship between words, knowledge and civic participation.
The discussion, moderated by Alessia Cruciani, a journalist with Corriere della Sera, opened with a tribute to De Mauro as an intellectual capable of bringing together language, education, technology and democracy. “Language as a tool of democracy sums up Tullio De Mauro very powerfully,” recalled Alfonso Molina, scientific director of the Fondazione Mondo Digitale. For Molina, De Mauro had understood, well ahead of his time, “the harmony between mastery of language and personal development”: “There is no culture without language, there is nothing without language, because we cannot even think if we do not have a language”.
In his speech, Molina also traced De Mauro’s contribution to shaping the Foundation’s mission. “Our mission is to work towards a democratic knowledge society,” he explained. “Coming from Britain, I spoke of an inclusive knowledge society, where everyone has the right to access technology without discrimination. Tullio said to me: yes, but in the Italian Constitution, the word ‘inclusive’ translates as ‘democratic’.” From that insight emerged a formulation that still guides the Foundation’s work today.
The conference then addressed the issue of language learning in the age of artificial intelligence. Monica Barni, full professor of Modern Language Teaching at Sapienza University, urged the audience to move beyond the oversimplifications of public debate: “We must move away from the logic of polarisation. We cannot simply say: ‘technology is either useful or useless, artificial intelligence is either useful or useless’.” The issue, she explained, must be brought back to knowledge: understanding “why to use artificial intelligence, when, with whom, and how to use it”. For Barni, studying languages does not lose its meaning with the spread of machine translation tools. On the contrary, it becomes even more important. A language, to quote De Mauro, “is not merely a tool for communication”, but “a way of life”, “a creative tool” and a prerequisite for exercising citizenship. Artificial intelligence, too, she emphasised, must not be ignored: “We must come to terms with it as teachers; we must make it our own; we must understand it and know how to guide it.” But the final choice must remain human: “The final choice must be ours. We may have a text translated or checked, but then that text must be reviewed, double-checked by us and reinterpreted on the basis of what we want to say.”
Filomena Diodato, associate professor of Semantics, Cognitive Semantics and Language Theory and AI at Sapienza University, spoke on the relationship between language, meaning and algorithms. The question of whether a machine truly ‘understands’, she explained, risks being misleading if one does not distinguish between language as a human faculty and historical-natural languages.
“We are dealing with extremely powerful linguistic tools that are, however, not endowed with language,” she stated. Machines are “extraordinary tools for modelling existing languages”, but they do not experience language as a social practice.
This leads to a central point of the discussion: the responsibility of words. “Words do not have a meaning independent of speakers,” Diodato noted. “A system of this kind does not mean: we are the ones who give it meaning when we use it.” For this reason, the use of artificial intelligence requires more, not less, linguistic competence: “Using these tools requires greater linguistic sensitivity, greater linguistic competence, not less linguistic competence.” And above all, it does not allow us to delegate responsibility: “Ultimately, the author of the text is never the machine. We are responsible for our own words”.
The second part of the meeting shifted the focus to the technological foundations of artificial intelligence. Maurizio Lenzerini, full professor of Computer Engineering at Sapienza University, noted that artificial intelligence did not originate with ChatGPT, but as a much broader discipline, dating back to the 1950s. Current systems belong to a different paradigm from the original one: they do not so much seek to build machines capable of reasoning logically, but rather systems that learn from vast amounts of data. Precisely for this reason, according to Lenzerini, the educational aspect is crucial: it is not enough to explain how to use the tools; we must strengthen “critical thinking, reasoning skills and an understanding of the world”.
Alfio Ferrara, full professor of Computer Science at the University of Milan and delegate for AI Literacy, brought the discussion directly into the school setting. Addressing the pupils present, he posed a radical question: “Why do you go to school? Why do you do your homework? Why do you have to write an essay or sit an exam?”. If the issue is merely “performing”, he explained, machines are extremely powerful tools. But if the aim is “to build a personality, knowledge, a truly human dimension”, then artificial intelligence must be used differently: as a support for developing creativity, testing ideas, and engaging in dialogue with an “intellectual partner”. For Ferrara, true literacy does not consist in knowing how to use platforms, but in understanding how they work. “We must not learn to use these machines; we must know how they work. We need knowledge, not skills.” Language models, he explained, generate plausible sequences, but they have no experience of the world: “They are completely confined within language.” For this reason, the risk lies not in their use per se, but in unwitting delegation, especially when systems produce texts that are formally correct but lack verification, context and accountability.
In his concluding remarks, Walter Quattrociocchi, full professor at Sapienza University, chair of the Data Science degree programme and director of the Centre for Data Science and Complexity for Society, strongly emphasised the issue of verification. Generative artificial intelligence, he observed, produces linguistically plausible outputs, but this does not make them true. The risk lies in confusing linguistic coherence with knowledge. “The large language model simulates judgement, it does not create it,” he stated. “It creates the sentence most similar to judgement.” Hence the urgent need for a new form of critical education, capable of integrating the humanities and the scientific method. For Quattrociocchi, the point is not to reject technology, but to prevent linguistic plausibility from becoming the sole criterion of knowledge. “The moment we decide that the output of a large language model is good, based on linguistic plausibility, we are saying that linguistic plausibility is sufficient to enter into the realm of knowledge, completely bypassing verification.”
The meeting concluded with a return to De Mauro’s own voice, through a reflection recalled by Alessia Cruciani: faced with new developments, “Italians are vigilant, ironic and reactive”.
A phrase born in another context—that of television’s impact on language—but surprisingly relevant today. Even in the face of artificial intelligence, the challenge is to remain vigilant and responsive: not to defend the past, but to shape the future with greater knowledge, greater responsibility and greater democracy.