Italy's Dual-Track AI Strategy: Vatican Ethics Meets Brussels Compliance
Long read · 10 min read


Italy has carved out one of the most distinctive AI policy positions in Europe, threading Vatican-backed ethical frameworks together with rigorous EU AI Act compliance. Whether the Meloni government can sustain both threads simultaneously, and turn that duality into genuine soft-power export, is the central question for Italian AI in 2025.

Italy is doing something genuinely unusual in European AI governance: it is running a principled two-track policy that ties hard regulatory compliance to a moral-philosophical framework carrying the Pope's endorsement, and, for now, it appears to be working.

That is not a sentence you expect to write about a G7 member navigating the most consequential technology regulation in a generation. Yet the evidence is there. The Rome Call for AI Ethics, launched in February 2020 at the Vatican with signatories including Microsoft, IBM, the Food and Agriculture Organisation, and the Italian government, has grown into a recognisable brand in international AI diplomacy. Meanwhile, the Garante per la protezione dei dati personali, Italy's data protection and privacy authority, has emerged as one of the most assertive regulators in the EU, most visibly through its temporary ban on ChatGPT in March 2023 and its subsequent investigations into AI-driven profiling systems.


Together, these two threads tell a story about a country that has decided it wants to be more than a rule-taker on AI. The question is whether Prime Minister Giorgia Meloni's government has the coherence, the institutional depth, and the political patience to make that ambition stick.

"The Rome Call will only retain credibility as an export product if Italian institutions can demonstrate that they are doing the hard technical and organisational work of making AI ethics operational, not merely aspirational."
AI in Europe editorial analysis

The Rome Call and the Ethics Export Machine

The Rome Call for AI Ethics is, on paper, a soft instrument. It asks signatories to commit to six principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. There is no enforcement mechanism, no compliance audit, and no binding sanction. Critics, and there are serious ones within the European AI research community, are right to point that out.

But soft power rarely works through hard enforcement. What the Rome Call does is provide Italy with a repeatable diplomatic platform. Paolo Ruffini, Prefect of the Vatican's Dicastery for Communication and one of the architects of the Call, has described the initiative as an attempt to ensure that AI development is guided by human dignity rather than market logic alone. That framing resonates in the Global South, in Catholic-majority nations across Latin America and sub-Saharan Africa, and in European states still uneasy about whether the AI Act's risk-based compliance model captures everything that matters morally about algorithmic systems.

Since its first signing, the Rome Call has attracted additional institutional signatories and has been cited in UNESCO discussions around AI ethics. Italy has used it to position itself as a bridge-builder between the technocratic Brussels approach and a values-first conversation that many countries feel more comfortable having. That is not nothing. In fact, in a world where AI governance is rapidly fragmenting into competing normative blocs, having a credible ethical anchor with universal religious legitimacy is a strategic asset few European nations possess.

[Image: the offices of the Garante per la protezione dei dati personali in Rome]

The Garante's Hard Edge

If the Rome Call represents Italy's ethical soft power, the Garante represents its regulatory hard power, and the two are more complementary than they might appear.

The Garante's March 2023 ChatGPT ban was not a headline-grabbing stunt. It was a carefully constructed GDPR enforcement action that forced OpenAI into substantive dialogue about data retention practices, age verification mechanisms, and the legal basis for training data collection from Italian users. OpenAI restored the service in Italy in late April 2023 after committing to a series of transparency and user-rights measures. The Garante also launched a broader investigation into large language model compliance that has informed how other EU data protection authorities, including the Irish DPC and the French CNIL, have framed their own inquiries.

The authority has since investigated AI-based profiling systems used in recruitment and credit scoring, consistent with its pre-AI Act approach of treating algorithmic decision-making as a GDPR issue rather than waiting for the AI Act's full entry into force. This is legally sound, and it keeps the pressure on operators even during the AI Act's staggered implementation timeline.

What is striking is that the Garante has operated largely independently of the Meloni government's political direction. Italian privacy enforcement has maintained its technical rigour regardless of which coalition is in power, a sign of genuine institutional maturity. Whether that independence survives increased political pressure as AI becomes more economically and strategically important is a live concern.

Meloni's Balancing Act

Giorgia Meloni's government has, to its credit, not tried to hollow out either thread. The Prime Minister attended the AI Safety Summit at Bletchley Park in November 2023 and participated in the follow-up Seoul Summit in 2024. Italy has signalled willingness to implement the AI Act in good faith and has moved to designate national competent authorities under the regulation's governance structure.

At the same time, the government has been enthusiastic about AI's economic potential, with Accenture Italy and domestic players including Leonardo S.p.A., the defence and aerospace group, investing significantly in AI-driven systems for both civilian and military applications. Leonardo's work on AI for autonomous systems creates a tension the Rome Call's principles cannot entirely resolve: how do you square a commitment to human dignity and accountability in AI with defence procurement that necessarily involves lethal autonomous systems research?

The government's answer, so far, has been to keep the two conversations in separate rooms. Rome Call diplomacy happens through Vatican-adjacent channels; defence AI happens through NATO and European Defence Agency frameworks. That separation may be pragmatic, but it is not intellectually coherent, and sooner or later, civil society and the academic community will force the question into the open.

[Image: researchers at work in a university AI lab in Milan or Rome]

Research Infrastructure and the Cefriel Factor

Cefriel, the Milan-based research and innovation centre connected to Politecnico di Milano, has been one of the more thoughtful Italian institutions trying to translate ethics frameworks into engineering practice. Cefriel's work on trustworthy AI and its engagement with EU-funded research projects illustrates the gap that still exists between high-level ethical commitments and the technical implementation of fairness, robustness, and explainability in deployed systems.

That gap matters for Italy's dual-track ambition. The Rome Call will only retain credibility as an export product if Italian institutions can demonstrate that they are doing the hard technical and organisational work of making AI ethics operational, not merely aspirational. Research centres like Cefriel, together with universities such as the Politecnico di Milano and Sapienza Università di Roma, are the places where that demonstration has to happen. Funding them adequately, and connecting their output to both regulatory enforcement and Vatican-backed ethical diplomacy, is a coherence challenge the government has not yet fully addressed.

Italy's AI policy landscape is shaped by a set of concrete facts: the scale of the Rome Call's signatory base, the Garante's enforcement record, and Italy's relative position in European AI investment all tell a more nuanced story than either optimists or sceptics usually allow.

Can the Dual Track Survive?

The honest answer is: probably, but not without deliberate effort. Three risks are worth naming directly.

First, the AI Act's implementation will consume enormous institutional bandwidth over 2025 and 2026. Designating national competent authorities, building market surveillance capacity, and processing conformity assessments for high-risk AI systems will stretch Italian regulatory bodies. The temptation will be to treat the Rome Call as a communications exercise rather than a substantive policy commitment. Resisting that temptation requires political will.

Second, Italy's fragmented AI ecosystem, strong in pockets of research excellence but lacking the venture capital density of the UK, France, or Germany, means that the country is more likely to be implementing other people's AI systems than building its own. That matters for sovereignty arguments and for the credibility of ethical positioning: it is harder to lecture about AI accountability when your economy depends on deploying accountability-light systems built elsewhere.

Third, the Vatican connection is a double-edged asset. The Rome Call's moral authority derives partly from its religious legitimacy, which resonates widely but also narrows its appeal in secular or pluralist contexts. Building out the framework's appeal beyond Catholic-majority nations, and engaging more seriously with Islamic, Buddhist, and secular humanist ethical traditions around AI, would strengthen it considerably. The Pontifical Academy for Life, which has been involved in Rome Call development, has shown some openness to interfaith dialogue, but the initiative still reads as predominantly Catholic in its cultural register.

None of these risks are fatal. Italy's dual-track strategy is more coherent than it appears from the outside, and the combination of a serious privacy regulator, a functioning ethics diplomatic platform, and genuine research capacity is a stronger hand than most mid-sized European nations hold. The question is execution.

If the Meloni government, or its successors, can treat AI ethics as a long-term institutional project rather than a branding opportunity, Italy has a real chance of becoming the EU's most distinctive voice on the values dimension of AI governance. If not, the Rome Call will fade into the long history of beautiful Italian declarations that preceded insufficient action.

THE AI IN EUROPE VIEW

Italy's dual-track AI posture is the most intellectually interesting governance experiment in continental Europe right now, and it deserves more serious analysis than it typically receives. The instinct to dismiss the Rome Call as Vatican optics and to treat the Garante as an outlier regulator misses what Italy is actually attempting: a synthesis of hard enforcement with moral-philosophical ambition that the AI Act alone cannot provide.

Brussels has built an excellent risk-based compliance machine. What it has not done is offer a convincing account of why the principles behind the regulation matter beyond economic efficiency and fundamental rights lawyering. Italy, through the Rome Call, is attempting to supply that account, drawing on one of the most globally recognised moral institutions on earth. That is a genuinely useful contribution to European AI governance, not a distraction from it. The Garante's ChatGPT enforcement, meanwhile, demonstrated that member-state regulators can move faster and more decisively than the EU's centralised machinery when they choose to.

Italy should be building on both advantages, not managing them separately as though they belong to different governments. The Meloni administration needs to close the gap between its ethical ambitions and its research investment, fund Cefriel and comparable institutions properly, and stop pretending that defence AI sits in a separate ethical universe from everything the Rome Call stands for. Hard choices, honestly made, are what turn soft power into lasting influence.

Updates

  • published_at reshuffled 2026-04-29 to spread distribution per editorial directive
  • Byline migrated from "Sofia Romano" (sofia-romano) to Intelligence Desk per editorial integrity policy.
AI Terms in This Article
AI-driven

Primarily guided or operated by artificial intelligence.

ecosystem

A network of interconnected products, services, and stakeholders.

AI governance

The policies, standards, and oversight structures for managing AI systems.

AI safety

Research focused on ensuring AI systems behave as intended without causing harm.

explainability

The ability to understand and describe how an AI reached a particular decision.

trustworthy AI

AI that is reliable, transparent, and respects privacy and fairness.

