OpenAI’s ambitious text-to-video AI tool, Sora, is at the centre of controversy after a group of early-access artists leaked access to it in protest. Sora, unveiled earlier this year, demonstrated the ability to generate detailed video scenes and dynamic camera movements from simple text prompts.
While its promise intrigued many, OpenAI’s handling of the tool’s development has sparked dissent among some in the artistic community.
Artists criticise OpenAI’s approach
The conflict began when OpenAI granted a select group of artists free early access to Sora for testing and feedback. However, around 20 members of this group publicly accused the company of using them as “PR puppets” rather than genuine collaborators. In a statement posted on the AI platform Hugging Face, the artists claimed they were misled into unpaid labour under the guise of shaping Sora’s future. They argued that OpenAI, valued at $150 billion, was exploiting their creative input for marketing purposes.
The group also criticised the restrictions imposed on Sora-generated content, which required OpenAI’s approval before publication. They described the process as less about fostering creativity and more about promoting the tool as artist-friendly. In response, they leaked access to Sora, aiming to give the public a chance to experience it and push OpenAI to genuinely support the arts.
OpenAI shuts down Sora access
The protest prompted OpenAI to suspend Sora’s early access just three hours after the leak. A company spokesperson defended OpenAI’s approach, stating that participation in the programme was entirely voluntary and emphasising the valuable contributions hundreds of artists had made in shaping the tool. OpenAI also highlighted safeguards and feature prioritisation influenced by artist feedback.
Not all artists in the programme shared the protesters’ views. Artist Andre Allen Anjos expressed support for OpenAI, noting that the dissenting group’s stance did not represent the majority of participants. OpenAI is now investigating the incident while reassessing its approach to Sora’s development and testing.
Broader concerns about Sora’s development
Even before this incident, Sora faced scrutiny over its training practices. OpenAI CTO Mira Murati previously admitted uncertainty about whether Sora’s training data included content from platforms like YouTube, an admission that sparked additional controversy. In April, YouTube’s CEO warned OpenAI that training AI models on the platform’s videos would violate its terms of service, raising questions about the ethical and legal implications of Sora’s development.
As the debate over AI tools and their relationship with artists continues, the Sora controversy highlights the growing tension between technology companies and creative communities. While OpenAI’s ambitions for Sora remain high, the company may need to rebuild trust with artists and address concerns about fairness, transparency, and support for the arts.