OpenAI granted early access to Sora, its new generative-AI video tool, to some 300 visual artists and filmmakers to “gain feedback” on the technology. The tech company got it — but not the kind it was hoping for.
On Tuesday, a group of Sora testers released a version of the tool publicly alongside a manifesto decrying OpenAI’s program as exploitative and “more about PR and advertisement.” According to the artists’ statement, which was posted on AI development site Hugging Face, OpenAI cut off access to Sora three hours after the group had made it available freely online.
“We received access to Sora with the promise to be early testers, red teamers and creative partners. However, we believe instead we are being lured into ‘art washing’ to tell the world that Sora is a useful tool for artists,” said the open letter, which was addressed “Dear Corporate AI Overlords.”
“ARTISTS ARE NOT YOUR UNPAID R&D,” the letter said. “we are not your: free bug testers, PR puppets, training data, validation tokens.”
In the wake of the protest, OpenAI said it had suspended access to Sora. “Hundreds of artists in our alpha [testing program] have shaped Sora’s development, helping prioritize new features and safeguards,” OpenAI spokesperson Niko Felix said in a statement to the Washington Post. “Participation is voluntary, with no obligation to provide feedback or use the tool.”
OpenAI’s statement continued, “We’ve been excited to offer these artists free access and will continue supporting them through grants, events and other programs. We believe AI can be a powerful creative tool and are committed to making Sora both useful and safe.”
The artists’ letter said that through the Sora early-access program, “Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the program for a $150B valued company. While hundreds contribute for free, a select few will be chosen through a competition to have their Sora-created films screened — offering minimal compensation which pales in comparison to the substantial PR and marketing value OpenAI receives.”
In October, OpenAI raised $6.6 billion in new funding from investors including Microsoft and Nvidia, giving it a post-money valuation of $157 billion. The San Francisco-based company has around 1,700 employees after hiring more than 1,000 since the beginning of the year.
OpenAI says the Sora text-to-video model can generate videos up to 60 seconds in length. According to the company, Sora is “able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”
The anti-OpenAI missive posted Tuesday was signed by 19 artists: Jake Elwes, Memo Akten, CROSSLUCID, Maribeth Rauh, Joel Simon, Jake Hartnell, Bea Ramos, Power Dada, aurèce vettier, acfp, Iannis Bardakos, 204 no-content (Cintia Aguiar Pinto and Dimitri De Jonghe), Emmanuelle Collet, XU Cheng, Operator, Katie Peyton Hofstadter, Anika Meier and Solimán López.
“We are not against the use of AI technology as a tool for the arts (if we were, we probably wouldn’t have been invited to this program),” the artists’ letter said. “What we don’t agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts.”
The advent of gen-AI video platforms that can produce realistic-looking imagery has alarmed Hollywood creators, including Tyler Perry — who, citing Sora specifically, earlier this year said he was suspending a planned $800 million expansion to his Atlanta studios. “I have been watching AI very closely,” Perry said. He said his studio expansion “is currently and indefinitely on hold because of Sora and what I’m seeing.”
Separately, Meta this fall unveiled Movie Gen, a new generative-AI tool that can create video clips (with synchronized AI-generated audio) up to 16 seconds in length based on text prompts. It’s not publicly available, but the company plans to start rolling it out on Instagram and Facebook in 2025. As part of a pilot program to garner feedback, Meta is working with Jason Blum’s horror studio Blumhouse and creators including Casey Affleck, Aneesh Chaganty and the Spurlock Sisters to experiment with Movie Gen.
(Pictured above: Image from a Sora-generated video OpenAI says is based on the following prompt: “A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.”)