
‘Hold on to your seats’: how much will AI affect the art of film-making?


Last year, Rachel Antell, an archival producer for documentary films, started noticing AI-generated images mixed in with authentic photos. There are always holes or limitations in an archive; in one case, film-makers got around a shortage of images for a barely photographed 19th-century woman by using AI to generate what looked like old photos. Which brought up the question: should they? And if they did, what sort of transparency is required? The capability and availability of generative AI – the type that can produce text, images and video – have changed so rapidly, and the conversations around it have been so fraught, that film-makers’ ability to use it far outpaces any consensus on how.

“We realized it was kind of the wild west, and film-makers without any mal-intent were getting themselves into situations where they could be misleading to an audience,” said Antell. “And we thought, what’s needed here is some real guidance.”

So Antell and several colleagues formed the Archival Producers Alliance (APA), a volunteer group of about 300 documentary producers and researchers dedicated to, in part, developing best practices for use of generative AI in factual storytelling. “Instead of being, ‘the house is burning, we’ll never have jobs,’ it’s much more based around an affirmation of why we got into this in the first place,” said Stephanie Jenkins, a founding APA member. Experienced documentary film-makers have “really been wrestling with this”, in part because “there is so much out there about AI that is so confusing and so devastating or, alternatively, a lot of snake oil.”

The group, which published an open letter warning against “forever muddying the historical record” through generative AI and released a draft set of guidelines this spring, is one of the more organized efforts in Hollywood to grapple with the ethics of a technology that, for all the bullish or doomsday prophesying, is already here and shaping the industry. Short of regulation or relevant union agreements, it has come down to film-makers – directors, producers, writers, visual effects artists and more – to figure out how to use it, where to draw the line and how to adapt. “It’s a project-by-project basis” for “use cases and the ethical implications of AI”, said Jim Geduldick, a VFX supervisor and cinematographer who has worked on Masters of the Air, Disney’s live-action Pinocchio and the upcoming Robert Zemeckis film Here, which uses AI to de-age its stars Tom Hanks and Robin Wright. “Everybody’s using it. Everybody’s playing with it.”

Some of the industry’s adoption of AI has been quiet – for years, studios and tech companies with entertainment arms have been engaged in a tacit machine-learning arms race. Others have embraced the technology enthusiastically and optimistically; Runway, an AI research company, hosted its second annual AI Film Festival in New York and Los Angeles this spring, with the Television Academy and the Tribeca Festival as presenting partners. The latter featured five short films made with OpenAI’s Sora, the text-to-video model yet to be released to the public, which prompted the film mogul Tyler Perry to halt an $800m expansion of his studios in Atlanta because “jobs are going to be lost”.

Late Night with the Devil. Photograph: Courtesy of IFC Films and Shudder

The industry’s embrace has engendered plenty of pushback. Last month, in response to Tribeca and other nascent AI film festivals, the director of Violet, Justine Bateman, announced a “raw and real”, no-AI-allowed film festival for spring 2025, which “creates a tunnel for human artists through the theft-based, job-replacing AI destruction”. And in the year since the dual actors’ and writers’ strikes secured landmark protections against the use of generative AI to replace jobs or steal likenesses, numerous non-protected instances of AI have drawn attention and scorn online. Concerns about job and quality loss surrounded AI-generated images in A24 promotional posters for the film Civil War, interstitials in the horror film Late Night with the Devil and a fake band poster in True Detective: Night Country. The alleged use of AI-generated archival photos in the Netflix documentary What Jennifer Did reignited discussions about documentary ethics first sparked by similar outcry over three lines of AI-generated narration mimicking Anthony Bourdain in the 2021 film Roadrunner. And that’s not to mention all of the bemoaning of disposable AI filler content – or “slop”, as the parlance goes – clogging up our social media feeds.

Taken together, the burgeoning use of generative AI in media can feel overwhelming – before the ink is dry on any new proclamation about it, the ground has shifted again. On an individual level, film artists are figuring out whether to embrace the technology now, how to use it and where their craft is headed. It has already rendered dubbing and translation work nearly obsolete. Visual effects artists, perennially on the bleeding edge of new technology for Hollywood, are already working with machine learning and some generative AI, particularly for pre-production visualizations and workflows. “From an artist’s perspective, we’re all trying to get ahead of the game and play with open source tools that are available,” said Kathryn Brillhart, a cinematographer and director whose credits include The Mandalorian, Black Adam and Fallout.

Both Geduldick and Brillhart noted numerous limitations on the use of generative AI in film projects at this point – for one, the security of these platforms, especially for big studios worried about leaks or hacks. There’s the legal liability and ethics of the current generative AI models, which to date have trained on scraped data. “Some studios are like, ‘We don’t even feel comfortable using gen AI in storyboards and concept art, because we don’t want a hint of any theft or licensing issues to come through in the final,’” said Brillhart. Studio films that do employ AI have limited uses and a clear data trail – in the case of Zemeckis’s Here, the new de-aging and face replacement tech, designed by the AI firm Metaphysic and the Hollywood agency CAA, uses the faces of Hanks and Wright, famous actors who have signed on to the roles, to play characters over the course of 50 years. “I’ve always been attracted to technology that helps me to tell a story,” Zemeckis said in 2023 of his decision to use Metaphysic. “With Here, the film simply wouldn’t work without our actors seamlessly transforming into younger versions of themselves. Metaphysic’s AI tools do exactly that, in ways that were previously impossible!”

And then there’s the output of generative AI, which often plunges deep into the uncanny valley and leaves much to be desired. (Or, in the words of the AI skeptic David Fincher, “it always looks like sort of a low-rent version of Roger Deakins”). Geduldick, who has integrated AI into his workflow, sees current generative AI models as more “assistive” than truly imitative of human art. “Are they implementing generative models that are going to speed up both the business and the creative side of what we’re doing? Yes,” he said. “But I think that there is no generative model out there today that doesn’t get touched by artistic hands to get it to the next level. That is for the foreseeable future.”

Still, like the digital revolution before it, the one certainty about generative AI is that it will change the field of visual effects – making pre-visualization cheaper and more efficient, streamlining tedious processes, shaping storyboard design. As the work shifts, “I think everybody needs to pivot,” said Geduldick.

“The craft has gone from hand-making models to using a mouse to now using text and using your brain in different ways,” said Brillhart. “What’s going to happen is more of a forced learning curve,” she added. “I think there’s going to be growing pains, for sure.”

On the documentary side, generative AI opens new opportunities for nonfiction storytelling, though it also threatens trust. “All technology has a kind of a dual moral purpose. And it’s up to us to interrogate the technology to find the way to use it for good,” said David France, an investigative journalist and film-maker whose 2020 documentary Welcome to Chechnya is one of a handful in recent years to employ generative AI as an anonymization device. The film, which follows the state-sanctioned persecution of LGBTQ+ people in the Russian republic, used AI to map actors’ faces over real subjects who faced harrowing violence. France and his team tried several methods to protect their subjects from exposure; nothing worked cinematically until they turned to the equivalent of deepfake technology, applied with multi-step consent processes and clear limitations. “We realized that we had an opportunity to really empower the people whose stories we were telling, to tell their stories directly to the audience and be faithful in their kind of emotional presentation,” said France.

The film-makers Reuben Hamlyn and Sophie Compton employed a similar technique for the subjects of their film Another Body, who were the victims of nonconsensual, deepfake pornography. Their main subject, “Taylor”, communicates through a digital veil – an AI-generated face, akin to a deepfake, that renders her real expressions through different features.

Along with demonstrating the convincing, uncanny power of the technology that someone used to target Taylor, the AI translated “every minute facial gesture”, said Hamlyn. “That emotional truth is retained in a way that is impossible even with silhouetting.”

“It’s such an important tool in empowering people to share their story,” he added.

Crucially, both Welcome to Chechnya and Another Body clue their audiences in to the technology through implicit or explicit tells. That’s in line with the best practices put forth by the Archival Producers Alliance, intended to avoid what has landed other films in hot water – namely Roadrunner, whose use of AI was revealed in the New Yorker after the film’s release. The group also encourages documentary film-makers to rely on primary sources whenever possible; to think through algorithmic biases produced by a model’s training data; to be as intentional with generative AI as they would be with re-enactments; and to consider how synthetic material, once released into the world, could cloud the historical record.

Another Body. Photograph: Publicity image

“We never say don’t do it,” said Jenkins, the APA member, but instead “think about what you’re saying when you use this new material and how it will come across to your audience. There is something really special about the human voice and the human face, and you want to engage with [generative AI] in a way that is intentional and doesn’t fall into some sort of manipulation.”

That line between human and machine is perhaps the most fraught one in Hollywood at the moment, in flux and uncertain. Compton, the co-director of Another Body, sees the emotionally loaded debates around AI as a series of smaller, more manageable questions involving pre-existing industry issues. “There are genuinely existential aspects of this discussion, but in terms of film and AI, we’re not really talking about those things,” she said. “We’re not talking about killer robots. What we are talking about is consent, and what is the dataset that’s being used, and whose jobs are on the line if this is adopted massively.”

Geduldick, an optimist on the assistive uses of generative AI, nevertheless sees a gap between its day-to-day applications, tech companies’ lofty rhetoric, and “soulless” AI content produced for content’s sake. Companies such as OpenAI – whose chief technology officer recently said generative AI might eliminate some creative jobs, “but maybe they shouldn’t have been there in the first place” – have “repeatedly shown in their public-facing interviews or marketing that there’s a disconnect [in] understanding what creatives actually do,” he said. “Film-making is a collaborative thing. You are hiring loads of talented artists, technicians, craftspeople to come together and create this vision that the writers, director, showrunners and producers have thought up.”

A still from a film made using OpenAI’s Sora. Photograph: openai.com/sora

For now, according to Geduldick, the “hype outweighs the practical applications” of generative AI, but that does not obviate the need for regulation from the top, or for guidelines for those already using it. “The potential for it to be cinematic is really great,” said France. “I don’t know yet that we’ve seen anybody solve the ethical problem of how to use it.”

In the meantime, film-making, both feature and nonfiction, is at a fluid, amorphous crossroads. Generative AI is here – part potential, part application, part daunting, part exciting and, to many, a tool. There will likely be more AI film festivals, more backlash, more and more AI content creation – for better or for worse. There are already whole AI-generated streaming services, should you choose to generate your own content. How the human element will fare remains an open question – according to a recent Deloitte study, a surprising 22% of Americans thought generative AI could write more interesting TV shows or movies than people.

The only certainty, at this point, is that AI will be used, and the industry will change as a result. “This will be in films that are coming out,” said Jenkins. “So hold on to your seats.”
