Claude Can Now Control Your Computer


Anthropic has given Claude the ability to take over a user's desktop from a smartphone prompt, opening applications, filling spreadsheets, and completing multi-step tasks without further input. It is the most consequential shift in consumer AI since ChatGPT launched, and it arrives just as European regulators and enterprises are wrestling with what autonomous AI agents actually mean in practice.

Anthropic's Claude has stepped out of the chat window and onto the desktop. The company has launched a genuinely new capability for its Claude AI: the ability to take full control of a user's computer and autonomously complete tasks end to end. Send a prompt from your smartphone, and Claude will open applications, navigate browsers, populate spreadsheets, and execute multi-step workflows on your desktop without any further input from you. This is no incremental update. It is a meaningful leap towards AI that acts rather than merely advises, and it lands at a moment when European enterprises, regulators, and AI labs are only beginning to grapple with what agentic AI really means.

Key Takeaways

  • Claude can now operate desktop apps and files from a smartphone prompt, running locally on-device
  • Anthropic's Dispatch feature enables persistent, asynchronous agent workflows across phone and desktop
  • Permission gates require Claude to request access before entering any new application
  • Prompt injection attacks remain an unresolved security risk that Anthropic has not fully detailed
  • European regulators under the AI Act will likely scrutinise autonomous agents as high-risk systems


What Claude Can Actually Do Now

The feature, announced earlier this week, allows Claude to act on a user's computer in response to natural-language instructions sent from any device. In a demonstration, Anthropic showed a user running late for a meeting who asked Claude to export a pitch deck as a PDF and attach it to a calendar invite. Claude completed the task from start to finish without further prompting.

The capability set is broad:

  • Opening desktop applications on demand
  • Navigating a web browser autonomously
  • Populating and managing spreadsheets
  • Handling local files without routing data through a cloud intermediary

That last point is architecturally significant. Claude runs locally on the user's device, giving it direct access to local files and applications rather than operating through a remote server. Anthropic was notably candid about the product's present limitations: the company acknowledged that computer use is "still early compared to Claude's ability to code or interact with text", and warned that "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving." It is a rare piece of corporate honesty, and it deserves to be taken seriously.
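The action loop this implies can be sketched in miniature: a model chooses a sequence of tool calls, and a local session dispatches each one to a handler with direct access to applications and files. Everything below is illustrative; the class, tool names, and step format are hypothetical stand-ins, not Anthropic's actual schema.

```python
# Illustrative sketch of a computer-use dispatch loop. All names
# (AgentSession, open_app, export_pdf, attach_to_invite) are
# hypothetical; the real tool schema is not described in public detail.

from dataclasses import dataclass, field


@dataclass
class AgentSession:
    """Holds local handlers the agent can invoke, plus an audit log."""
    log: list = field(default_factory=list)

    def open_app(self, name: str) -> str:
        self.log.append(f"open:{name}")
        return f"{name} opened"

    def export_pdf(self, doc: str) -> str:
        self.log.append(f"export:{doc}")
        return f"{doc}.pdf"

    def attach_to_invite(self, attachment: str, event: str) -> str:
        self.log.append(f"attach:{attachment}->{event}")
        return "attached"


def run_workflow(session: AgentSession, steps: list) -> list:
    """Execute each (tool_name, kwargs) step in order and collect results."""
    results = []
    for tool, kwargs in steps:
        handler = getattr(session, tool)  # look up the local handler
        results.append(handler(**kwargs))
    return results


# The pitch-deck demo from the article, expressed as a step list:
steps = [
    ("open_app", {"name": "Slides"}),
    ("export_pdf", {"doc": "pitch_deck"}),
    ("attach_to_invite", {"attachment": "pitch_deck.pdf", "event": "Q3 review"}),
]
session = AgentSession()
print(run_workflow(session, steps))
```

The point of the audit log is the architectural one from the paragraph above: because the handlers run locally, every action is visible on-device rather than mediated by a remote server.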

[Image: A software developer at a standing desk in a Berlin office watches applications open and close autonomously on a desktop screen.]

The OpenClaw Effect and a Fast-Moving Race

No account of AI agents in early 2026 is complete without addressing OpenClaw, the third-party application that changed the conversation entirely. OpenClaw connects to AI models from both OpenAI and Anthropic, can be messaged through WhatsApp or Telegram, and carries out tasks directly on the user's device. Its consumer-friendly design and viral growth put agentic AI on the mainstream radar in a way that API announcements never managed.

Jensen Huang, chief executive of Nvidia, called OpenClaw "definitely the next ChatGPT", and the industry responded accordingly. The competitive moves came in rapid succession:

  • OpenAI hired Peter Steinberger, OpenClaw's creator, to lead development of its next generation of personal agents
  • Nvidia announced NemoClaw, an enterprise-grade equivalent designed for business use
  • Anthropic has now brought its own native computer-use capability to market

The race has shifted. It is no longer about language model benchmarks or context windows. It is about who can most reliably act on your behalf in the real world, across real workflows, without breaking things.

Dispatch: The Architecture Behind the Agent

Alongside the computer-use announcement, Anthropic has integrated the capability into a broader productivity platform. Dispatch is a newly released feature inside Claude Cowork that enables users to maintain a continuous conversation with Claude across phone and desktop, assigning tasks that the agent then carries out asynchronously. It is a persistent AI assistant that does not vanish when you close a tab.

This architecture matters more than it might appear. Rather than a one-shot query-and-response model, Dispatch enables an ongoing working relationship between user and agent. You assign a task, move on, and check back when it is complete. That mirrors how people actually work across a day, and it represents a genuine design shift away from the chatbot paradigm that has dominated consumer AI products since 2022. For European businesses that have been cautiously piloting AI tools, this kind of persistent, asynchronous agent could be the form factor that finally makes enterprise adoption feel practical rather than experimental.
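The assign-and-check-back shape of that working relationship can be illustrated with a small asynchronous worker. This is a minimal sketch of the pattern, not Dispatch itself; the class and method names are invented for the example.

```python
# Minimal sketch of the "assign a task, move on, check back later"
# pattern that Dispatch reportedly enables. AsyncAgent and its
# methods are hypothetical illustrations, not Anthropic's API.

import queue
import threading
import time


class AsyncAgent:
    def __init__(self):
        self._tasks = queue.Queue()
        self._results = {}
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def assign(self, task_id: str, fn, *args) -> None:
        """Hand off a task and return immediately (the user moves on)."""
        self._tasks.put((task_id, fn, args))

    def _run(self) -> None:
        # Background worker: tasks complete whether or not anyone is watching.
        while True:
            task_id, fn, args = self._tasks.get()
            self._results[task_id] = fn(*args)
            self._tasks.task_done()

    def check_back(self, task_id: str, timeout: float = 5.0):
        """Poll for a finished result, as a user might later in the day."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if task_id in self._results:
                return self._results[task_id]
            time.sleep(0.01)
        raise TimeoutError(task_id)


agent = AsyncAgent()
agent.assign("t1", lambda a, b: a + b, 2, 3)
# ... the caller does other work, then checks back:
print(agent.check_back("t1"))
```

The design point is persistence: the result outlives the moment of assignment, which is exactly what distinguishes this from the one-shot query-and-response chatbot model.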

[Image: A laptop in a Brussels office shows a file management interface with folders being reorganised by an unseen process, a smartphone lying beside it.]

The European Dimension: Regulation, Adoption, and Risk

European enterprises and regulators will be watching this launch with a mixture of interest and unease. The EU AI Act, which is already in phased implementation, places general-purpose AI systems and high-risk applications under significant scrutiny. Autonomous agents that can operate files, submit forms, and interact with external services on a user's behalf will almost certainly attract regulatory attention. Whether they fall under high-risk classification will depend on deployment context, but enterprises using Claude for anything touching HR, finance, or legal workflows should be consulting their compliance teams now, not later.

Luc Julia, chief scientific officer at Renault and one of France's most prominent voices on practical AI deployment, has argued consistently that the gap between AI capability and AI governance is the defining challenge for European organisations adopting these tools. That tension is sharpened considerably when the AI in question can take actions, not merely generate text.

At the regulatory level, the European AI Office, established under the AI Act to oversee general-purpose AI models, has not yet issued specific guidance on agentic AI systems. Dragoș Tudorache, the Romanian MEP who was one of the principal architects of the AI Act during its parliamentary passage, has previously signalled that autonomous decision-making and action-taking by AI systems would be a priority area for enforcement attention as the Act matures. European enterprises would be wise to treat that signal as a live constraint rather than a distant possibility.

On the adoption side, mobile-first working patterns across European markets make the remote-prompt, desktop-execution model genuinely compelling. A professional in Amsterdam, Milan, or Warsaw who can send a Telegram message during a commute and arrive at the office to find a completed task is not looking at a hypothetical productivity gain. It is a tangible one. The question is whether the reliability and security of the underlying agent are yet sufficient to justify that trust.

Safety, Autonomy, and the Trust Problem

The enthusiasm around computer-use agents carries a real shadow: trust. Allowing an AI model to operate applications, access files, and take actions on your behalf requires a level of confidence in the system's reliability and integrity that current AI cannot fully justify. Anthropic's own candour about evolving threats is welcome, but it raises questions about what those threats look like in practice.

There are several plausible failure modes that any enterprise user should understand before granting broad permissions:

  • Misinterpreted instructions leading to deleted files or incorrect form submissions
  • Draft emails sent prematurely due to ambiguous task framing
  • Prompt injection attacks, where malicious content in a webpage or document tricks the agent into taking unintended actions
  • Cascading errors in multi-step workflows where an early mistake compounds downstream

Prompt injection in particular is a well-documented concern in the AI security research community, and Anthropic has not detailed how its safeguards address this specific threat vector. The permission-gate model, which requires Claude to request approval before accessing new applications, is a sensible starting point. But for power users who grant broad permissions up front, that safety net effectively disappears. This is the core tension in agentic AI: the more autonomous the agent, the more useful it becomes, and the more damage it can do when something goes wrong.
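The tension between the permission gate and broad up-front grants is easy to make concrete. The sketch below assumes a simple model in which each new application requires explicit approval unless blanket access was granted; all names are illustrative, not Anthropic's implementation.

```python
# Hedged sketch of a permission-gate model: the agent must request
# approval before first touching an application, unless the user
# granted blanket access up front. Illustrative only.


class PermissionGate:
    def __init__(self, approve_all: bool = False):
        self.approve_all = approve_all      # "broad permissions up front"
        self.approved: set[str] = set()
        self.requests: list[str] = []       # audit trail of access requests

    def request(self, app: str, user_approves: bool) -> bool:
        """Record an access request; grant it only if the user approves."""
        self.requests.append(app)
        if user_approves:
            self.approved.add(app)
        return user_approves

    def can_access(self, app: str) -> bool:
        return self.approve_all or app in self.approved


gate = PermissionGate()
assert not gate.can_access("Mail")            # gated by default
gate.request("Mail", user_approves=True)      # explicit, per-app approval
assert gate.can_access("Mail")

broad = PermissionGate(approve_all=True)
assert broad.can_access("Banking")            # the safety net disappears
```

The last two lines are the core tension in miniature: with `approve_all` set, no request is ever logged or reviewed, so a prompt-injected instruction would face no gate at all.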

European organisations deploying these tools at scale will need answers to questions that neither Anthropic nor the broader industry has fully resolved. The capability is here. The governance framework, at both the product and regulatory level, is still catching up.

AI Terms in This Article

  • agentic — AI that can independently take actions and make decisions to complete tasks.
  • API — Application Programming Interface, a way for software to talk to other software.
  • at scale — Applied broadly, to a large number of users or use cases.
  • AI governance — The policies, standards, and oversight structures for managing AI systems.

