"I'm deeply uncomfortable with these decisions": Anthropic's CEO sounds the alarm on AI power concentration
Dario Amodei, chief executive of Anthropic, has delivered one of the sharpest calls for AI regulation from inside the industry, admitting that a handful of unelected executives hold disproportionate power over transformative technology. His warning lands as the company discloses what it describes as the first large-scale AI cyberattack executed without substantial human intervention.
The admission is the most candid yet from a sitting AI company leader: the concentration of power over artificial intelligence development in a few corporate hands is dangerous, and the warning comes from inside the room where those decisions are being made.
[[KEY-TAKEAWAYS:Amodei publicly admits discomfort with AI power sitting with a handful of unelected executives|Anthropic discloses what it calls the first large-scale AI cyberattack without substantial human intervention|The EU AI Act remains the world's only comprehensive AI law; the UK is still consulting|European cybersecurity researchers warn AI-agent attacks will outpace traditional defences|Commercial pressure and safety commitments remain in direct tension at every frontier AI lab]]
In a November 2025 interview on CBS News' 60 Minutes, Amodei told host Anderson Cooper: "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people. And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology." When Cooper pressed him on democratic legitimacy, asking who elected him and OpenAI chief executive Sam Altman, Amodei replied simply: "No one. Honestly, no one."
That exchange crystallises a tension that European policymakers have been grappling with since the European Commission first tabled the AI Act in April 2021. The question is not merely philosophical. It has direct consequences for how AI is built, deployed, and governed across the EU and UK, and for the hundreds of millions of people who will live with those consequences.
The first AI cyberattack: a watershed for European security
Anthropic's governance concerns gained fresh urgency when the company disclosed what it described as "the first documented case of a large-scale AI cyberattack executed without substantial human intervention." The incident, revealed ahead of the CBS broadcast, represents a qualitative shift in the threat landscape that European critical infrastructure operators cannot afford to ignore.
Marietje Schaake, international policy director at Stanford's Cyber Policy Center and a former member of the European Parliament with a long record on digital regulation, has argued consistently that AI-enabled cyberattacks pose systemic risks to democratic institutions. Her warnings align closely with Anthropic's disclosure: autonomous AI agents are not a future risk but a present one.
The European Union Agency for Cybersecurity (ENISA) flagged AI-assisted attacks as an emerging priority in its 2023 threat landscape report, noting that automation could dramatically lower the cost and skill threshold for sophisticated intrusions. For energy grid operators, financial institutions, and public-sector bodies across the EU and UK, this incident is a forcing function. Traditional perimeter defences were designed for human-paced adversaries. AI-agent attacks operate on a different clock entirely.
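To make the pacing argument concrete, the sketch below flags sessions whose event timing is implausibly fast for a human operator. It is a minimal illustration rather than a production detector: the thresholds, the field names, and the looks_machine_paced helper are all assumptions invented for this example.

```python
from statistics import median

HUMAN_MIN_INTERVAL_S = 0.8    # assumed floor for human-driven actions
MAX_ACTIONS_PER_MIN = 40      # assumed ceiling for a human operator

def looks_machine_paced(timestamps: list[float]) -> bool:
    """Flag a session whose event timing suggests automation."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    duration = max(timestamps[-1] - timestamps[0], 1e-9)
    rate_per_min = len(timestamps) / duration * 60
    return median(gaps) < HUMAN_MIN_INTERVAL_S or rate_per_min > MAX_ACTIONS_PER_MIN

print(looks_machine_paced([0.0, 0.2, 0.4, 0.6, 0.9]))    # True: sub-second bursts
print(looks_machine_paced([0.0, 3.1, 7.4, 12.9, 20.2]))  # False: human-paced
```

The point of the toy example is the order-of-magnitude gap: an autonomous agent can issue thousands of actions in the time it takes a human analyst to review a single alert.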
The competitive dimension compounds the problem. Amodei has previously warned that stepping back from frontier development would mean Anthropic would "lose and stop existing as a company." That pressure is not unique to Anthropic. Every major lab faces the same calculation, and it creates an inherent conflict between safety priorities and commercial survival that no amount of corporate mission statements resolves on its own.
EU and UK regulatory divergence: opportunity or liability?
The regulatory picture facing companies operating in Europe is complex, and it is worth being precise about what exists and what does not.
European Union: The EU AI Act entered into force on 1 August 2024, with obligations phasing in through 2027. It is the world's only comprehensive, binding AI law and sets a de facto global benchmark.
United Kingdom: The UK government has so far opted for a principles-based, sector-led approach rather than a single statute, with the AI Safety Institute (renamed the AI Security Institute in February 2025) conducting evaluations of frontier models. Primary legislation remains under discussion.
Switzerland: Operating outside the EU but closely aligned through bilateral agreements, Switzerland is monitoring the AI Act's extraterritorial reach, which will affect Swiss-based AI developers serving EU markets.
United States: Federal AI-specific legislation does not exist. A patchwork of state-level rules and executive orders fills the gap, creating compliance complexity for transatlantic operators.
China: Algorithm and generative AI regulations are in force, with strict enforcement and content controls that differ substantially from European frameworks.
For multinational AI companies, this fragmentation is not merely an administrative inconvenience. It forces genuine architectural choices about data residency, model behaviour, and product features by jurisdiction. The EU AI Act's extraterritorial scope means that providers whose systems are placed on the EU market, or whose outputs are used within the EU, must comply regardless of where the developer is headquartered.
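What "architectural choices by jurisdiction" looks like in practice can be sketched as a per-market policy table. The jurisdictions, flags, and defaults below are hypothetical illustrations of the pattern, not any company's actual configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionPolicy:
    data_residency: str            # where user data must be stored
    output_labelling: bool         # AI-content transparency duties
    frontier_eval_required: bool   # pre-deployment model evaluation

# Hypothetical per-market rules reflecting the divergence described above.
POLICIES = {
    "EU": RegionPolicy("eu-central", output_labelling=True, frontier_eval_required=True),
    "UK": RegionPolicy("uk-south", output_labelling=False, frontier_eval_required=True),
    "US": RegionPolicy("us-east", output_labelling=False, frontier_eval_required=False),
}

def policy_for(market: str) -> RegionPolicy:
    # Unknown markets default to the strictest tier, mirroring how the
    # AI Act's extraterritorial reach makes EU rules the safe baseline.
    return POLICIES.get(market, POLICIES["EU"])

print(policy_for("CH"))  # Swiss traffic serving EU users inherits EU rules
```

Defaulting to the strictest regime is a common compliance pattern when a product cannot reliably determine where its users sit.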
Safety commitment or strategic positioning?
Anthropic's founding narrative is inseparable from AI safety. Amodei departed OpenAI in 2021, citing disagreements over safety priorities, and established Anthropic with a cohort of researchers committed to alignment research. The company has since introduced several formal safety measures:
Constitutional AI, which trains models using explicit written principles rather than purely reward-based feedback (a minimal sketch of the loop follows this list)
A Responsible Scaling Policy committing the company not to release models capable of catastrophic harm
Regular published safety reports documenting model vulnerabilities
Transparency initiatives covering political neutrality testing
Financial support for external AI safety research organisations
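Anthropic's published Constitutional AI research describes a critique-and-revise loop at training time: the model drafts a response, critiques it against written principles, and rewrites it, with the revised outputs used as training data. The sketch below follows that published structure, but the generate placeholder and the two abridged principles are assumptions for illustration, not Anthropic's actual constitution or API.

```python
# Two abridged, illustrative principles; the real constitution is longer
# and worded differently.
PRINCIPLES = [
    "Choose the response least likely to cause harm.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    # Placeholder standing in for any LLM call; not a real API.
    return f"<model output for: {prompt[:40]!r}...>"

def constitutional_revision(prompt: str) -> str:
    """Draft, critique against each principle, then revise."""
    draft = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft  # revised drafts become fine-tuning data, not live output

print(constitutional_revision("Explain how to secure a home router."))
```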
Whether these measures constitute genuine commitment or sophisticated positioning is a live debate in European AI policy circles. Yann LeCun, chief AI scientist at Meta, has accused Anthropic and similar labs of pursuing "regulatory capture," using safety rhetoric to shape legislation in ways that disadvantage open-source competitors. LeCun made this argument explicitly at VivaTech in Paris in 2023 and has repeated it since, making the French-born researcher one of the most prominent sceptics of closed-lab safety claims.
Internal tensions at Anthropic have surfaced publicly. AI safety researcher Mrinank Sharma resigned from the company, stating that "the world is in peril" and expressing frustration with the difficulty of holding safety values against commercial pressures. That kind of resignation is notable precisely because it comes from someone who chose Anthropic over other employers on safety grounds.
The economics of AI safety: why profitability matters
The tension between safety and profitability is not abstract. It is baked into the cost structure of large language models in ways that traditional software economics do not prepare executives to handle.
Unlike a web search, which costs a fraction of a penny per query, each interaction with a frontier AI model demands substantial compute. Data centres, graphics processing units, and cloud infrastructure create costs that scale with usage rather than amortising the way a traditional fixed asset does. As Anthropic grows its user base, its infrastructure spend grows in parallel, making the path to profitability longer and steeper than investors accustomed to software margins typically expect.
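A back-of-envelope model makes the structural difference visible. Every figure below is an illustrative assumption, not Anthropic's actual cost base; the point is the shape of the curve, not the numbers.

```python
COST_PER_QUERY = 0.010     # assumed inference cost per query (USD)
REVENUE_PER_QUERY = 0.012  # assumed blended revenue per query (USD)
FIXED_COSTS = 5_000_000    # assumed monthly fixed costs (USD)

for monthly_queries in (10**8, 10**9, 10**10):
    gross_margin = (REVENUE_PER_QUERY - COST_PER_QUERY) * monthly_queries
    profit = gross_margin - FIXED_COSTS
    print(f"{monthly_queries:>14,} queries -> ${profit:>12,.0f}/month")
```

Because the marginal cost per query never approaches zero, growth inflates compute spend almost as fast as revenue; the classic software pattern, where each additional unit is nearly free, does not apply.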
Konstantinos Karachalios, managing director of the IEEE Standards Association and a longstanding voice on AI ethics in Europe, has argued that the venture capital model funding most frontier AI development is structurally misaligned with long-horizon safety work. The pressure to demonstrate revenue growth within fund lifecycles pushes labs toward product velocity over precautionary research.
Amodei acknowledges the pressure candidly. Anthropic, he says, faces "incredible commercial pressure" whilst trying to maintain safety standards that he argues exceed industry norms. The honest version of that statement is that both things can be true simultaneously: the safety work can be real, and the commercial pressure can still erode it at the margins.
Three horizons of AI risk: what European regulators must plan for
Amodei categorises AI risks across three distinct timelines, and the framing is useful for European policymakers designing regulatory instruments that must be durable across a decade of rapid capability growth.
Short-term risks, already materialising, centre on bias, misinformation, and manipulation of public discourse. The European Parliament's experience of AI-generated disinformation during the June 2024 elections illustrates that these are not theoretical concerns.
Medium-term threats involve AI systems with enhanced scientific knowledge generating harmful information, including potential assistance with biological or chemical weapons development, as well as the kind of autonomous cyberattacks Anthropic has now documented. The EU AI Act's high-risk classification system and the UK AI Safety Institute's model evaluations are both attempts to get ahead of this horizon.
Long-term existential risks focus on AI systems that could progressively remove human agency from critical decisions. These concerns align with arguments made by Geoffrey Hinton, the British-Canadian computer scientist who shared the 2024 Nobel Prize in Physics for his foundational work on neural networks and who has warned publicly that systems capable of surpassing human intelligence could emerge within a decade. Hinton's stature gives these warnings a credibility that critics find difficult to dismiss.
The question for European industry and policymakers is not whether to take these horizons seriously but how to design governance that addresses the near term without foreclosing adaptive responses to the medium and long term. The EU AI Act's risk-based tiering is a reasonable start. It is not a finished answer.
AI Terms in This Article (6 terms)
generative AI
AI that creates new content (text, images, music, code) rather than just analysing existing data.
benchmark
A standardised test used to compare AI model performance.
AI safety
Research focused on ensuring AI systems behave as intended without causing harm.
alignment
Ensuring AI systems pursue goals that match human intentions and values.
bias
When an AI system produces unfair or skewed results, often reflecting prejudices in training data.
compute
The processing power needed to train and run AI models.