Four Months That Changed What a Film Crew Can Look Like
The last four months have been genuinely strange for anyone paying attention to how films get made.
Not strange in a "disruption narrative" way. Strange in the way where you open Runway or fire up Veo 3 and something that would have cost a full VFX house last year comes out of a text prompt in under a minute. Strange in the way where you start asking real questions about what a film crew actually needs to look like.
This is a quick take on what's moved, what's real, and what's still mostly noise.
The tools got genuinely good
For a long time, AI video generation was impressive in demos and embarrassing in practice. Faces melted. Hands had seven fingers. Motion felt like a fever dream. That era isn't totally over, but it's shrinking fast.
Platforms like Runway, Moonvalley, and Pika, alongside frontier models such as Google's Veo 3 and OpenAI's Sora, are now tools serious filmmakers are actually building workflows around. Not for entire films, not yet, but for specific shots, previs, and sequences where practical production would have been prohibitively expensive.
A shot that would once have required expensive VFX or complex on-set rigging, both out of reach for an indie budget, can now be generated. That's not theoretical. That's happening in projects right now.
Runway's Gen-3 Alpha in particular has found a real user base in advertising and film. Its outputs rival Sora's in realism, with a sharper focus on creative workflows. Google's Veo 3 is the newer entrant and has been making noise: it offers stronger prompt adherence, turning a single prompt into what some are describing as Hollywood-level video content.
These aren't toys anymore. They're production tools with real limitations you have to work around.
Indie filmmakers are the early winners
Indie filmmakers are reporting substantial savings: a short film that once required £40,000–£80,000 for crew, locations, and effects can now be completed for £8,000–£16,000 by using AI across the script-to-screen process, including video generation, editing, sound, and VFX.
That's not a rounding error. That's a different business model.
Secret Level reports that $10 million budgets can yield $30 million production values through AI-enhanced pipelines. Studios like Promise and Pigeon Shrine are building around AI specifically to make culturally diverse IP financially viable — projects that couldn't get greenlit under traditional cost structures.
For big studios, the savings are real but politically complicated. While development executives use ChatGPT to analyze scripts and marketers use it to assist with creative campaigns, many of those same people are worried their companies will use AI to eliminate their jobs. The tools are moving faster than the internal politics.
Hollywood is scared of its own experiments
The big studios are in an uncomfortable position. They want to show investors they're adopting AI. They don't want to tell their talent what they're actually doing with it.
In December, Disney entered a licensing agreement for OpenAI's Sora and invested $1 billion in the company. Until then, studios had been eager to tout the potential benefits of AI to investors, but afraid to divulge their biggest experiments, lest they antagonize talent and alienate labor unions.
That tension has a real history behind it. The 2023 WGA strike was driven in significant part by AI concerns. The WGA pushed for protections against the use of generative AI in screenwriting and for fair pay, negotiating with the Alliance of Motion Picture and Television Producers, which represents over 350 film and television production companies. Those protections exist on paper now. Whether they hold as AI gets better is a different question.
Director Guillermo del Toro said in October he'd rather die than use the technology in his films. That quote is easy to dismiss as dramatic, but it represents a real creative position: that something is lost when the struggle of making a thing gets outsourced.
What's actually broken
Consistency across shots is still the main problem. You can generate a stunning single shot. Generating the same character, in the same location, with the same lighting, in 40 consecutive shots that cut together — that's still a genuine engineering problem. The tools that have made the most progress here (Runway's multi-shot features, LTX Studio's pipeline approach) are getting closer, but "close" still means manual cleanup and a lot of iteration.
The training data question is unresolved and quietly eating at adoption. Many writers and artists object to how their materials are scraped and co-opted as training data for machine learning models without their consent or compensation. Some AI studios like Asteria are building around this directly with what they call "clean model" strategies — training only on licensed content. Enterprise buyers care about this more than individual creators do, and that gap is going to matter as adoption moves up-market.
There's also a craft question that isn't going away. Many film workers have made compelling cases for why AI cannot take over the tasks that truly define filmmaking — fostering authentic human connection on and off screen and telling stories that matter to people. That's not just a defensive argument from people protecting their jobs. It's a real creative concern about what gets lost when the path of least resistance is always "generate it."
Where this lands
The last four months made one thing clear: the capability curve is steep and it's not slowing down. What required a VFX team in 2023 required a skilled prompt in 2024. What required a skilled prompt in early 2025 is now approachable by anyone with a Runway subscription and patience.
The more interesting question isn't whether AI changes filmmaking — it already has. The question is what filmmakers choose to do with the saved time and money. Spend it on more AI output, or spend it on the parts of the process that actually require humans to be in a room together.
The directors making the most interesting work right now seem to be using these tools to get to harder problems faster, not to avoid hard problems entirely. That distinction matters more than any single model release.