In a new article published in Agenda Digitale, Director-General Mirta Michilli raises a crucial question for the future of education: not how to use artificial intelligence, but how to develop digital discernment to navigate a complex ecosystem. In a public debate that swings between enthusiasm and fear, the risk, Michilli emphasises, is losing sight of the central point: without awareness and responsibility, innovation can turn into a new form of fragility.
Beyond the tools: educating for discernment
Artificial intelligence highlights a transformation already underway: today, education means above all developing the capacity for judgement. It is not enough to know how to use increasingly powerful tools. We need to develop critical, ethical and decision-making skills that enable us to navigate complexity. Digital discernment thus becomes a cross-cutting skill that concerns schools, lifelong learning and the entire educational ecosystem, including the third sector.
Digital participation: between opportunities and risks
The article also cites data from the Terzjus Foundation, which paint a more nuanced picture of online participation. The internet is not just a space for disinformation, but also a place for civic engagement, relationships and community building. However, critical issues are emerging:
- ‘low-threshold’ forms of participation
- exposure to disinformation
- intermittent engagement
Once again, it is not technology that makes the difference, but the quality of the skills with which it is used.
Schools between fragmentation and the need for coordination
One of the most critical issues concerns the Italian school system, where the introduction of AI risks generating uncertainty rather than providing guidance. Michilli highlights:
- the lack of a clear distinction between individual use and educational use
- the fragmentation of responsibilities regarding data and security
- the growing administrative burden on schools
A comparison with international experiences, such as that of Estonia, reveals a different approach: centralisation of risk, agreements with providers and system-wide coordination.
AI and learning: the risk of cognitive delegation
Another key element concerns students’ daily use of AI. The risk is not merely ‘copying’, but delegating cognitive effort, achieving results without building skills.
Evidence cited by the OECD shows that the passive use of generative tools can:
- improve performance in the short term
- yet reduce metacognitive engagement and genuine learning
This gives rise to a new form of inequality, linked to differences in digital capital and educational opportunities.
Towards an educational model for AI
In the absence of a fully defined national framework, the article proposes a clear direction: rethink AI not as an individual tool, but as a shared learning environment. The model, inspired by collaboration, is one in which:
- AI is a collective tool
- interaction is observable and discussed
- the teacher retains pedagogical control
- individual profiles and unnecessary data collection are avoided
This approach reduces risks and brings technology back within an intentional educational framework.
A new technological humanism
The final message is clear: the added value of education lies not in the early adoption of technologies, but in the ability to train people who know how to choose. Digital discernment thus becomes the foundation of a new technological humanism, capable of:
- guiding innovation
- managing risks
- balancing equity, responsibility and development
Because, as Michilli points out, the real challenge is not using artificial intelligence, but not relinquishing human judgement.
Original article: "Digital discernment. The real crux of artificial intelligence in schools? Cultivating judgement"
Artificial intelligence raises an educational question that goes beyond the use of tools: schools, the third sector and training must build digital discernment, manage risks and make innovation compatible with responsibility, equity and human judgement.
by Mirta Michilli
Agenda Digitale, 20 March 2026