With deepfakes on the rise, celebrities have found little recourse other than to play whack-a-mole. That strategy has become more fraught as AI technology grows increasingly realistic and accessible, ensnaring everyone from Taylor Swift to Scarlett Johansson to presidential candidates Donald Trump and Kamala Harris.
Now, one of Hollywood’s go-to law firms is tackling the scourge. Venable LLP, whose clients include Swift, Peyton Manning and “La La Land” producer Automatik, is launching Takedown, a program that proactively identifies and removes illicit and unauthorized deepfake videos, images and pirated content online. The program was created by Venable Blue, the firm’s consulting arm that handles cybersecurity and privacy issues.
Available to new and existing Venable clients, Takedown is designed to protect both individuals and companies from the online spread of unauthorized and abusive content, disinformation and false endorsements as well as copyright and trademark violations. (Swift has been a client of the firm for more than a decade and recently enlisted Venable to stop a college student from tracking her private jet.)
“This is absolutely needed, especially for talent and high-profile individuals who are the first targets of threat actors,” says Venable LLP partner Hemu Nigam, who spearheaded the creation of Venable Blue. “With the current status quo, threat actors not only gain visibility but they also exploit the public who may be consuming [artificial] content without realizing they’re looking at an illicit deepfake video or image or a fake endorsement. So, this can be a double-edged sword with both the celebrity and the public becoming victims.”
Nigam, who previously served as chief security officer at Fox and NewsCorp and was VP of worldwide internet enforcement at the Motion Picture Association, says so-called threat actors can range from disgruntled fans to nation states looking to create disinformation campaigns. As such, deepfakes can cause significant financial and reputational harm.
In January, fake, sexually explicit images of Taylor Swift circulated on social media, with one post on X (formerly known as Twitter) attracting nearly 50 million views. (She did not use Takedown for that issue.) Johansson’s digital likeness was used to promote an AI app without her permission. Fake images of Trump falling down while being arrested and of Harris cozying up with Jeffrey Epstein recently spread far and wide.
Venable is well positioned to take on the problem given its footprint in the entertainment industry. The 124-year-old firm, which is based in Washington, D.C., but expanded to Los Angeles in 2006, has represented such film and TV entities as ViacomCBS and Boardwalk Pictures.
As Hollywood grapples with the problem, companies have turned to AI to combat AI-created deepfakes. In April, WME inked a partnership with the Seattle-based software firm Loti, which flags unauthorized content posted on the internet. But Venable Blue’s Takedown also leans on its human staff, who will liaise with clients through every stage of the process, from identifying the threat to removing the content and, if necessary, working with law enforcement.
In addition to takedown requests, Venable Blue provides 24/7 monitoring for new or repeat threats, along with metric and data analysis that quantifies the number of harmful posts removed, tracks where the harmful content gained traction and identifies top repeat offenders.
Though Venable Blue does not disclose the names of its clients, Nigam says it has already used a beta version of the program to respond to attacks against high-profile actors, athletes and entertainers.