Val Kilmer Never Shot a Single Scene. His AI Did. And Europe Should Be Paying Attention.
Val Kilmer died in April 2025 without filming a single frame of As Deep as the Grave. His entire performance was constructed by generative AI, with family consent and union oversight. The technology is advancing fast, costs are falling, and European regulators are only beginning to grapple with what synthetic performance means for performers, estates, and audiences.
Val Kilmer is dead. He died of pneumonia in April 2025, aged 65, after a decade-long battle with throat cancer that stripped him of the voice that once commanded screens from Tombstone to Heat. He never set foot on the set of As Deep as the Grave. He never delivered a line of dialogue. He never stood opposite Tom Felton or Abigail Breslin or Wes Studi. And yet, when the film finds a distributor later this year, audiences will watch him play Father Fintan, a Catholic priest and Native American spiritualist, across multiple stages of his life. Every frame of his performance was constructed by generative AI.
This is not a cameo. It is not a fleeting digital touch-up of the kind we saw in Top Gun: Maverick. It is a full, credited role for a man who was already gone when the cameras rolled. His family approved it. His estate was compensated. SAG-AFTRA guidelines were followed. And if you think this story is purely a Hollywood concern, you are not paying close enough attention to what is already happening across European film and television production, where the same technology is beginning to reshape who gets to perform, who gets to profit, and who gets to say no.
The Film, the Family, and the Consent Question
Director Coerte Voorhees first cast Kilmer in As Deep as the Grave (originally titled Canyon of the Dead) five years before the actor's death. The film, produced by First Line Films, tells the true story of Southwestern archaeologists Ann and Earl Morris, who excavated Canyon de Chelly in Arizona to trace the history of the Navajo people. Kilmer, who identified as part Native American, was drawn to the spiritual dimension of the project.
But Kilmer's cancer, first diagnosed in 2014, made it impossible for him to perform. His voice, damaged by a tracheostomy, had already been reconstructed using AI by Sonantic (since acquired by Spotify) for his brief appearance in Top Gun: Maverick in 2022. For As Deep as the Grave, the filmmakers went further: they used state-of-the-art generative AI to reconstruct not just his voice but his face and physical presence, drawing on younger photographs provided by the family and footage from his final years.
His daughter Mercedes Kilmer publicly endorsed the project, stating that her father "was a deeply spiritual man" and that he "always looked at emerging technologies with optimism as a tool to expand the possibilities of storytelling." Director Voorhees has said plainly: "Despite the fact some people might call it controversial, this is what Val wanted."
The estate granted permission and received compensation. The production says it followed SAG-AFTRA's collective bargaining agreement, which requires consent from an authorised representative when a performer's approval was not obtained before death. SAG-AFTRA itself has stated that "any use of digital replicas must be transparent, properly authorised and fully aligned with the rights of performers and their estates."
On paper, every box has been ticked. In practice, the questions this film raises are far more uncomfortable than any checklist can resolve.
The Precedent Problem
What makes As Deep as the Grave significant is not the technology. Hollywood has been digitally altering actors for years, from the de-aged Robert De Niro in The Irishman to the posthumous Peter Cushing in Rogue One. What makes it significant is the scale of the absence. Kilmer did not perform a single scene that was later enhanced. The entire performance is synthetic. The AI is not augmenting an actor; it is replacing one.
This distinction matters enormously, and it is one that European policymakers are only beginning to internalise. California's AB 1836, signed into law in 2024, expands posthumous right-of-publicity protections to cover digital replicas of voices and likenesses. SAG-AFTRA fought for AI protections during the 2023 strikes and secured provisions in its 2024 collective bargaining agreement. In 2025, the union filed an unfair labour practice charge against Llama Productions over the use of an AI-generated James Earl Jones voice for Darth Vader in Fortnite, arguing that replicating a deceased performer's voice without proper bargaining violated member rights.
The legal scaffolding in the United States is being built, however imperfectly. Europe's equivalent architecture is thinner, and in some areas non-existent.
Where Europe Stands, and Where the Gaps Are
The EU AI Act, which entered into force in August 2024, imposes transparency obligations on AI systems that generate synthetic media, including requirements to label deepfakes and disclose AI involvement. Article 50 of the Act specifically covers AI-generated audio and video content. But the Act is a horizontal regulation: it sets baseline rules for disclosure and risk classification without creating performer-specific rights analogous to SAG-AFTRA's collective bargaining provisions.
Luca Bertuzzi, a technology policy journalist who has reported extensively on the AI Act's implementation for MLex and Euractiv, has noted that the Act "was never designed to be an entertainment industry regulation" and that the gap between its general transparency requirements and the specific consent questions raised by synthetic performance is substantial. The European Commission's AI Office, established under the Act to oversee frontier model compliance, has yet to issue sector-specific guidance for film and television production.
On the collective bargaining side, FIA (the International Federation of Actors), which represents performer unions across Europe including Equity in the UK and the French union SFA-CGT, has been pushing for binding consent and compensation standards. Pearce Quigley, a council member of Equity, the UK performers' union, has been vocal about the risks to members: Equity has explicitly called for legislation requiring consent before any AI replication of a performer's voice or likeness, living or dead, and has warned that voluntary industry codes are insufficient without enforcement mechanisms. The union's position is that the Kilmer case, however carefully handled, normalises a practice that, without robust legal backing, will rapidly be abused at the expense of working performers who lack Hollywood star power or well-resourced estates.
The contrast with the United States is instructive. Hollywood's framework, for all its imperfections, has a functioning union with real bargaining power, state laws with explicit posthumous protections, and a growing body of case law. Europe has a horizontal AI regulation, a patchwork of national personality rights laws, and performer unions whose leverage varies enormously between markets. France has relatively strong moral rights traditions under its intellectual property code. Germany's right of personality offers some post-mortem protection. The UK, post-Brexit, is developing its own AI and intellectual property framework, but the government's current proposals have been criticised by Equity and the creative industries for prioritising technology sector interests over performer rights.
The European Production Landscape Is Already Changing
It would be a mistake to assume this is a problem arriving from elsewhere. European productions are already deploying AI in ways that raise the same questions as As Deep as the Grave, if not yet at the same scale.
De-ageing technology is now standard in high-budget European drama. AI voice cloning is being used in dubbing across multiple language markets, a practice that has accelerated since the 2023 Hollywood strikes created demand for non-SAG-AFTRA dubbing pipelines. Several European streaming platforms have piloted AI-generated background performers to reduce costs. And at least one major European broadcaster has internally explored the feasibility of resurrecting archive talent digitally for anniversary programming, according to industry sources, though no such project has yet been publicly announced.
The synthetic performance market is not a distant horizon. Respeecher, a Kyiv-founded voice AI company with clients across European broadcasting, has demonstrated the capability to reconstruct deceased performers' voices from archive recordings. ElevenLabs, which has a significant European user base and offices in London, offers voice cloning at a price point accessible to independent producers. Metaphysic, the company behind the de-ageing work on Here and other productions, has European studio partnerships. The tools exist. The question is what rules govern their use.
Professor Nathalie Smuha of KU Leuven, one of Europe's leading scholars on AI governance and a contributor to the EU AI Act's academic underpinnings, has argued that the current regulatory framework creates a "consent gap" for creative AI applications. The AI Act's transparency obligations tell audiences that synthetic content exists; they do not tell performers that their likeness is being used, nor do they create a right to refuse or negotiate compensation. Closing that gap, Smuha has suggested, requires either sector-specific legislation or a significant expansion of existing personality rights law to explicitly cover AI replication, a project that would require coordination across EU member states and, separately, within the UK's post-Brexit legal framework.
The Numbers Behind the Shift
800 million euros: Estimated value of the European deepfake and synthetic media market by 2026, growing at over 25% annually (Research and Markets).
47%: Share of European dubbing studios that have piloted AI voice cloning for at least one production as of 2024, according to industry surveys cited by the European Audiovisual Observatory.
1.29 billion dollars: Projected global deepfake AI market size in 2026, growing at a 25.8% compound annual growth rate (Research and Markets).
2026: Target date by which the EU AI Act's obligations for general-purpose AI models, including those underpinning synthetic performance tools, will be fully enforceable across all member states.
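Growth rates like these compound quickly, which is why a 25.8% annual rate roughly doubles a market in about three years. A minimal sketch of the arithmetic, using the 25.8% CAGR and the 1.29 billion dollar base cited above (the projection horizon is purely illustrative):

```python
def project(value: float, cagr: float, years: int) -> float:
    """Grow `value` at a compound annual rate `cagr` (0.258 = 25.8%) over `years`."""
    return value * (1 + cagr) ** years

base_usd_bn = 1.29  # cited 2026 market size, billions of dollars

# Compound the base forward a few years to show how fast the curve steepens.
for year in range(1, 4):
    print(f"2026 + {year}y: {project(base_usd_bn, 0.258, year):.2f}bn")
```

At that rate the market would pass 2.5 billion dollars within three years of the 2026 baseline, which is the economic pressure behind the consent questions this article raises.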
The Real Question Is Not Whether It Can Be Done
The technology will improve. The costs will fall. The incentives, in an industry where a recognisable face can be worth tens of millions at the box office, will only grow stronger. The question is not whether AI can reconstruct a dead actor convincingly enough to carry a feature film. It clearly can. The question is what guardrails Europe builds before capability outpaces consent.
Val Kilmer's case is, in many respects, the best-case scenario. He knew about the project. His family approved. His estate was compensated. A union was involved. The best-case scenario is rarely the one that defines an industry's trajectory. The cases that will define this era are the ones where consent is ambiguous, where estates are pressured, where performers in countries without strong collective bargaining have no mechanism to say no, and where audiences cannot tell the difference between a performance that was given and one that was manufactured.
European performers, particularly those without the leverage of major star power, those working in smaller national markets with weaker union structures, those whose archive footage is held by broadcasters who see a commercial opportunity in synthetic revival, are precisely the people most exposed to the harms this technology enables when governance lags behind adoption. The Kilmer case sets a template. But templates are only useful if there is a system to enforce them.
There is a scene in Top Gun: Maverick where Kilmer's Iceman communicates mostly through text on a screen, his voice ravaged by the same cancer that ravaged the real man. It was a moment of devastating honesty: the technology existed to smooth over his condition entirely, and the filmmakers chose not to. They let the audience see what time and illness had done. As Deep as the Grave makes the opposite choice. It uses AI to show Kilmer as he was, as he might have been, as he never will be again. Whether that constitutes tribute or trespass depends on where you draw the line between honouring an actor's legacy and manufacturing a performance he never gave.
What is certain is that the line, once crossed, will be crossed again and again, by studios and streamers and independent producers across Europe and beyond, who now have the tools to bring anyone back, for any role, at any time. The ghost is in the machine. The question is who holds the keys, and in Europe right now, the honest answer is that nobody has worked out whose hand that should be.
Updates
published_at reshuffled 2026-04-29 to spread distribution per editorial directive
Byline migrated from "Sofia Romano" (sofia-romano) to Intelligence Desk per editorial integrity policy.