AgrippinaX
02-03-23, 07:32 AM
Thoughts?
‘In a century you could have Tom Cruise doing Mission: Impossible 20’
AI is changing the way films are made and soon Hollywood stars will live for ever, whether they want to or not, says Ed Potton
February 3 2023, The Times
Welcome to Hollywood, February 2026. Shooting this week are a thriller in which Grace Kelly is chased across a rooftop by Dwayne Johnson, a comedy that has Richard Pryor trading quips with Rebel Wilson and a romantic epic in which Marilyn Monroe falls desperately in love with Denzel Washington. What’s that? Kelly, Pryor and Monroe are dead? No problem. Their estates have given permission for their computer-generated likenesses to be used. For a fee, naturally. And it’s not as if Johnson, Wilson or Washington will be on set themselves — they’ve sold the producers the rights to digitally graft their faces and voices onto body doubles while they appear in other projects. Ker-ching! Their fans won’t be any the wiser, because it’s now impossible to tell the deepfake Johnson et al from the real ones.
None of that is really happening, of course, but movies along those lines will absolutely be possible in the near future, such is the rapidly increasing sophistication of artificial intelligence software. We have already seen Peter Cushing resurrected in Rogue One and Abba performing as their lithe young selves in London. Soon even that technology will be old hat.
The director Robert Zemeckis has just announced a film, Here, in which Tom Hanks will be de-aged using cutting-edge AI software called Metaphysic Live. That follows Deep Fake Neighbour Wars, a new series on ITV in which “Kim Kardashian’s neighbour Idris Elba restricts her access to the shared garden and Greta Thunberg is upset by Conor McGregor and Ariana Grande’s perennial Christmas decorations”. And just this week audio clips were posted online that appeared to feature the Harry Potter star Emma Watson reading from Mein Kampf and David Attenborough being a racist bigot. They were actually the result of naughty members of the public using a new text-to-audio tool that allows you to type in words and hear them reproduced immediately in a human voice. Last year a deepfake Bruce Willis “appeared” in an advert for the Russian mobile phone network Megafon without setting foot on its set, having agreed to have his face digitally transplanted onto another actor’s.
How worried should we be? “There’s a lot to be concerned about when it comes to deepfakes — disinformation, non-consensual pornography, cybersecurity,” says Henry Ajder, a British specialist in artificial intelligence and synthetic media. Nina Schick, another AI expert, predicted this week that 90 per cent of online content will be generated by AI by 2025. Ajder thinks that’s about right, although that would include all kinds of computer tweaks, such as Instagram filters.
Ajder’s biggest concern is around the “ethical ambiguities” of the technology. “If a performance wins an Oscar but the actor is a body double and the voice and face are cloned from another actor, who gets the award?” Or say you have a young actor who is desperate for work. “There may well be organisations who essentially prey on their desperation by offering to license their face for a film and giving them a contract with an indefinite clause about how they can use their face in future content. A lot of less established actors aren’t going to have the legal resources of people like Tom Hanks.”
Bigger stars, conversely, may let movies use their synthetic likenesses for a reduced fee. “That could have pretty severe implications for fresh talent breaking through,” Ajder says. “We’ve already seen certain studios rehashing the same old franchises over and over again. This could be an extension of that, in the context of actors. It could also continue to happen after the talent has died. In a century you could still have Tom Cruise doing Mission: Impossible 20.”
With dead performers you have the issue of consent. Cushing’s estate was closely involved with his appearance in Rogue One but, Ajder points out, “an actor could have never even been in a position to understand the consent”. When Monroe died in 1962 deepfake technology was about as plausible as teleportation. She may have hated the idea, as did Robin Williams, who stipulated in his will that he didn’t want any kind of electronic resurrection.
The rise of deepfaking is about “realism, efficiency and accessibility”, Ajder says. When Ridley Scott and his team brought Oliver Reed back to life for Gladiator in 2000 it took hundreds of hours and cost $3 million. “Now it’s a much more streamlined process, faster and cheaper, and it can yield much more realistic and nuanced results.”
“These tools are now becoming incredibly accessible,” Ajder says. ChatGPT, an AI chatbot, and DALL-E, a text-to-image program that creates realistic images from word prompts, are available free online, as is the software in the Emma Watson deepfake, which was developed by ElevenLabs, a company founded by former engineers from Google and the big-data analytics firm Palantir.
“We’ll almost certainly see software providers like Adobe or Apple starting to offer tool sets specifically for creatives in this space,” Ajder says. He thinks that text-to-image software will soon be followed by text-to-video. You could type in “bouncing gorilla” and an animation would appear before your eyes.
These toys can be put to nefarious use. Maybe somebody in a basement somewhere is creating a video in which a photorealistic Vladimir Putin declares war on the US. What can we do to stem the tide? “It’s moving very, very quickly, and it’s inevitable that legislation moves slowly,” Ajder says. In 2020 the state of New York passed a law to protect a performer’s likeness from unauthorised commercial exploitation for 40 years after their death. “We saw Disney lobbying quite heavily against that, because they’ve got a trove of footage and they’d love to be able to use AI tools in many different ways.”
“If someone is making a film through legitimate channels, then the legal processes are going to be able to kick in fairly well,” Ajder says. “Random people on the internet, with access to incredibly powerful tools, are much harder to track down. Someone in their bedroom could potentially generate a feature-length film, without permissions.” All they will need are good “datasets” — high-quality video and audio of the stars involved. “Funnily enough, with actors who have been in films which have been produced with high-quality audio and video, there’s a lot of data out there.”
There are ways to prove if a picture or video is genuine. Ajder talks about “provenance-based approaches”, where you cryptographically secure an image or a video from the moment it is captured on a device. That image then has a piece of metadata attached to it that can be tracked throughout its lifetime online.
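To make the idea concrete, here is a minimal sketch in Python of how such a provenance scheme could work, assuming the cryptography library and an Ed25519 device key: the capture device hashes the image bytes, signs the hash, and the signed record travels with the file so anyone can later check whether the pixels still match what the camera saw. It illustrates the principle only; it is not a production standard, and the function names and record layout are assumptions.

# Hedged sketch of a provenance-based approach: sign at capture, verify later.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def sign_capture(image_bytes: bytes, device_key: Ed25519PrivateKey) -> dict:
    # Created on the device at the moment of capture.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {"sha256": digest, "signature": device_key.sign(digest.encode()).hex()}

def verify_capture(image_bytes: bytes, record: dict, device_pub: Ed25519PublicKey) -> bool:
    # Later, anywhere online: does the file still match its signed capture record?
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest != record["sha256"]:
        return False  # pixels have changed since capture
    try:
        device_pub.verify(bytes.fromhex(record["signature"]), digest.encode())
        return True
    except InvalidSignature:
        return False  # record was not signed by this device

# Usage: the camera signs at capture; a platform verifies before labelling the image genuine.
key = Ed25519PrivateKey.generate()
photo = b"raw image bytes from the sensor"
record = sign_capture(photo, key)
print(verify_capture(photo, record, key.public_key()))            # True
print(verify_capture(photo + b"edit", record, key.public_key()))  # False

In practice the signed record would be embedded as metadata in the file itself, which is the tracking-through-its-lifetime part Ajder describes.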
Sounds great, but is it too late? “I wouldn’t want people to come away thinking that this technology is inherently evil and terrifying,” Ajder says. “It is opening up entirely new possibilities about what kind of content we can now create.” So focus on that Grace Kelly-Dwayne Johnson thriller and try to forget about the looming hellscape of scarily convincing deception.
Henry Ajder’s Radio 4 documentary series The Future Will Be Synthesised is on BBC Sounds
https://www.thetimes.co.uk/article/in-a-century-you-could-have-tom-cruise-doing-mission-impossible-20-dzx70x0tn