Anthropic's Cowork Agent Moves AI From Chat to Action, and Europe's Public Sector Should Pay Attention
Anthropic has launched Cowork, an autonomous AI agent for macOS that manages files, executes tasks, and operates as a genuine digital colleague rather than a conversational chatbot. With European public sector bodies already exploring AI-driven workflow automation, this shift from reactive to proactive AI carries real implications for how governments and institutions deploy digital tools.
Launched as a research preview for macOS users in January 2026, Cowork moves decisively beyond conversational interfaces to manage files, execute tasks, and act as a digital colleague. It is the clearest signal yet that the AI industry is leaving the chatbot era behind and entering the era of action.
The origins of Cowork are telling. Developers Boris Cherny and Felix Rieseberg observed on X that users of Claude Code, originally a developer-focused tool, were repurposing it for entirely mundane administrative work: holiday research, presentation creation, email organisation, tax preparation, and file management. Nobody at Anthropic had planned this. Users simply found a way to make it happen.
That organic behaviour pattern is exactly the kind of signal that should resonate with European public sector technology leads. When civil servants and knowledge workers start bending a tool to fit their workflow, the tool is telling you something about unmet demand. Anthropic responded by building Cowork in roughly ten days, using the same Claude Agent SDK that underpins its coding assistant.
What Cowork Actually Does
Cowork operates with considerable autonomy, planning and executing multiple tasks simultaneously rather than waiting for step-by-step instructions. It can sort cluttered download folders, generate expense spreadsheets from photographs of receipts, and compile comprehensive reports from scattered documents. It integrates with existing Claude connectors for Gmail, Notion, and Google Calendar, and extends to browser operations via Claude in Chrome.
The comparison with traditional chatbots is stark. Where a conventional AI assistant reads files and provides instructions, Cowork reads, writes, creates, and organises across system directories. Where a chatbot responds sequentially, Cowork handles simultaneous operations. This is not an incremental improvement; it is a different category of tool entirely.
Anthropic describes this as "augmented" AI interaction, where humans and AI collaborate rather than simply converse. The framing matters because it changes the risk calculus. An AI that advises is one thing. An AI that acts is another.
The Security Question Europe Cannot Ignore
Anthropic is explicit about the dangers. The company warns that "Claude can take potentially destructive actions such as deleting local files if it is instructed to." Prompt injection attacks, where malicious instructions embedded in processed content could redirect the agent's behaviour, remain an active and unsolved challenge.
For European institutions, this is not a theoretical concern. The EU AI Act, whose obligations began phasing in during 2025, imposes strict conformity assessment requirements on high-risk AI systems, and autonomous agents operating within public sector workflows will face scrutiny from national market surveillance authorities. Lucilla Sioli, who led artificial intelligence policy at the European Commission's DG CONNECT before heading the European AI Office, has consistently emphasised that trustworthiness and auditability are non-negotiable baseline requirements for AI deployed in public-facing contexts. Deploying an agent that can silently reorganise or delete files within a government system, without clear human oversight mechanisms, would sit uncomfortably against those expectations.
Equally, the UK's AI Security Institute (formerly the AI Safety Institute), operating under the Department for Science, Innovation and Technology, has flagged autonomous agent behaviour as one of its core evaluation priorities. Its chair, Ian Hogarth, has argued publicly that the reliability gap between impressive demos and dependable production systems remains the central problem the industry must solve before autonomous agents earn widespread institutional trust.
These are not voices calling for a halt. They are voices demanding evidence. European public sector buyers evaluating tools like Cowork should treat that demand as a procurement checklist, not a bureaucratic obstacle.
Pricing, Availability, and the Enterprise Question
Cowork is currently available exclusively to Claude Max subscribers, who pay between £80 and £160 per month. It is macOS-only for now, with Windows support and cross-device synchronisation described as future priorities but without confirmed timelines. That exclusivity limits immediate enterprise applicability, particularly in public sector environments where Windows remains dominant and budget scrutiny is intense.
The subscription tier also places Cowork firmly in the premium productivity bracket. For a central government department or a large municipal authority evaluating AI procurement, the per-seat cost at scale is a meaningful variable. Framework agreements negotiated through Crown Commercial Service in the UK, or equivalent centralised procurement bodies in EU member states, will need to account for this pricing model as agentic AI tools proliferate.
Key capabilities on offer include:
Intelligent file organisation and cleanup across system directories
Automated expense tracking with receipt image processing
Multi-document report compilation and formatting
Cross-platform integration with productivity suites
Proactive task planning with minimal user oversight
Real-time progress updates and error handling
The Funding Context
Cowork's launch coincides with reports that Anthropic is seeking approximately £8 billion in fresh funding, a raise that would push its valuation to around £280 billion, up sharply from the £146 billion figure reported in September 2025. That trajectory matters to European buyers not because it changes the product, but because it signals the pace at which Anthropic intends to develop and scale its agent ecosystem. Organisations building workflows around Cowork are, in effect, betting on a roadmap that depends on continued investment at a significant rate.
The recursive dimension of that development is worth noting. Claude Code contributed to building Cowork itself, illustrating how AI tooling is now accelerating its own evolution. That feedback loop compresses timelines in ways that procurement and governance processes, designed for slower-moving software cycles, have not yet fully adapted to.