AI Tools & Products

OpenAI Opens the Sora 2 API Wide: Characters, 1080p, 20-Second Clips, and Batch Rendering

OpenAI just turned its Sora video API from a demo toy into a genuine production tool. The March 12 update adds four features that collectively address the biggest complaints developers had about the original API: inconsistent characters, low resolution, short clips, and no way to handle volume. Let's break down what's new and why it matters.

Reusable Characters: Consistency Finally Solved

The single biggest headache in AI video generation has been character consistency. You could generate a beautiful shot of a character in one clip, but the next clip would give you a completely different-looking person. It made storytelling essentially impossible — like casting a different actor for every scene.

Sora 2's new character reference system lets you create a character asset once and reuse it across multiple generations. The API stores the character reference and applies it to every video you generate with that asset, maintaining visual consistency across shots. For anyone building narrative content, marketing campaigns, or even simple explainer videos, this is the feature that makes Sora actually usable rather than just impressive.

1080p Output on Sora 2 Pro

The Pro tier now supports full 1080p output in both landscape (1920x1080) and portrait (1080x1920) orientations. Previously, developers were limited to 720p — fine for prototyping, but completely inadequate for any professional use case. Social media platforms compress video aggressively, so starting with 720p source material often meant the final output looked muddy.

At $0.70 per second of generated video, 1080p Pro isn't cheap. A 20-second clip runs $14. But compare that to hiring a video production crew, even a small one, and the economics make sense for certain categories of content — product demos, social ads, concept visualization, and stock footage replacement.
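The per-second pricing makes budgeting straightforward to sanity-check with a small helper:

```python
PRO_1080P_RATE_PER_SECOND = 0.70  # USD, per the announced Sora 2 Pro pricing

def clip_cost(seconds: float, rate: float = PRO_1080P_RATE_PER_SECOND) -> float:
    """Cost in USD for generated video at a flat per-second rate."""
    return round(seconds * rate, 2)

print(clip_cost(20))        # 14.0 -- one maximum-length Pro clip
print(clip_cost(500 * 15))  # 5250.0 -- budgeting 500 fifteen-second ad variations
```

That second number is the one that matters for the batch-scale workflows discussed later: 500 fifteen-second variations come in around the cost of a single day of traditional production.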

20-Second Generations: Longer Beats, Fuller Scenes

Both Sora 2 and Sora 2 Pro now support up to 20-second generations, up from the previous limits. Combined with the new video extension feature — which lets you continue a completed clip — you can now build meaningfully longer sequences by chaining generations together.

Twenty seconds might not sound like much, but in video production terms, it's substantial. Most social media ads run 15-30 seconds. A TikTok or Instagram Reel is typically 15-60 seconds. Being able to generate a solid 20-second base clip and extend it puts AI-generated video firmly within the range of common commercial formats.
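For planning purposes, you can estimate how many API calls a longer sequence needs: one base generation plus some number of extensions. Note the assumption that each extension also adds up to 20 seconds; OpenAI hasn't stated the extension length, so this is a placeholder:

```python
import math

def plan_segments(target_seconds: int, base: int = 20, extension: int = 20) -> int:
    """Number of generation calls (1 base + N extensions) to reach a target
    duration. The 20-second extension length is an assumption, not a
    documented limit."""
    if target_seconds <= base:
        return 1
    return 1 + math.ceil((target_seconds - base) / extension)

print(plan_segments(15))  # 1 -- fits in a single base clip
print(plan_segments(35))  # 2 -- base clip plus one extension
print(plan_segments(60))  # 3 -- base clip plus two extensions
```

Even under these assumptions, a 60-second Reel is only three chained calls, which is what moves AI video from "standalone clip" territory into real editorial formats.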

Batch API: Built for Production Scale

Perhaps the most quietly significant addition is Batch API support. You can now submit large queues of video generation jobs through OpenAI's batch processing system, which is designed for offline workloads where latency doesn't matter but cost and reliability do.

This is the feature that separates hobby use from production deployment. A marketing team generating 500 ad variations, a stock footage company creating themed libraries, or a game studio producing cutscene concepts — these workflows need to submit hundreds of jobs and get results back reliably, not babysit individual API calls.
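OpenAI's Batch API takes a JSONL file where each line describes one request (`custom_id`, `method`, `url`, `body`). A sketch of queueing those 500 ad variations might look like this; the JSONL envelope matches the general Batch API format, but the `/v1/videos` body fields are assumptions for illustration:

```python
import json

def build_batch_jsonl(prompts: list[str], model: str = "sora-2") -> str:
    """One JSONL line per video job, in the Batch API's request envelope.
    The body fields for /v1/videos are illustrative assumptions."""
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"video-job-{i}",   # your key for matching results back
            "method": "POST",
            "url": "/v1/videos",
            "body": {"model": model, "prompt": prompt, "seconds": 12},
        }))
    return "\n".join(lines)

variations = [f"Product hero shot, variation {n}" for n in range(500)]
jsonl = build_batch_jsonl(variations)
print(len(jsonl.splitlines()))  # 500 -- one batch file, zero babysitting
```

You upload the file, create the batch, and poll once for results, rather than managing 500 individual requests with your own retry logic.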

Video Editing Arrives

OpenAI also quietly introduced a video editing endpoint (POST /v1/videos/edits) that lets you modify existing videos with targeted changes. This replaces the older remix endpoint, which will be deprecated in six months. The details are still sparse, but the implication is clear: Sora is evolving from a generation-only tool into a full video manipulation platform.
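Since the details are sparse, any example here is speculative, but a targeted edit request to that endpoint might be shaped something like the following. Every field name in the body is a guess; only the endpoint path comes from the announcement:

```python
# Speculative sketch of a request body for POST /v1/videos/edits.
# The endpoint path is announced; the field names below are guesses.
def build_edit_request(video_id: str, instruction: str) -> dict:
    """Payload asking for a targeted change to an existing video."""
    return {
        "video_id": video_id,                 # the previously generated clip
        "prompt": instruction,                # natural-language edit instruction
    }

req = build_edit_request("video_xyz789", "Change the car's color to red")
print(req["prompt"])
```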

Sora went from 'impressive demo' to 'actual production tool' in a single API update. The batch processing alone changes the economics for any team generating video at scale.

How It Stacks Up

The AI video space is competitive. Google's Veo 3.1 offers strong quality with tight Vertex AI integration. Runway's Gen-3 Alpha remains popular with creative professionals. Pika and Kling continue to iterate rapidly. But OpenAI's API-first approach with Sora gives it a unique advantage: developers can build video generation directly into their products, not just use it through a web interface.

The character consistency feature is particularly notable because it's a genuine differentiator. Most competitors still struggle with multi-shot consistency, which limits AI video to standalone clips rather than coherent narratives.

Key Takeaways

  • Reusable character references enable consistent multi-shot storytelling
  • 1080p output on Sora 2 Pro at $0.70/second for production-quality video
  • 20-second generation limit with video extension support for longer sequences
  • Batch API support enables high-volume production workflows
  • New video editing endpoint signals Sora's evolution toward full video manipulation

Our Take

This update is less about flashy new capabilities and more about making Sora actually work for real businesses. Character consistency, higher resolution, longer clips, and batch processing are all table-stakes features for professional video production — and their absence was what kept Sora in the 'cool demo' category. Now it's a legitimate tool that marketing teams, content creators, and SaaS companies can build on.

The $0.70/second pricing for 1080p Pro is aggressive but not outrageous when you consider what stock footage and basic video production actually cost. The real question is quality consistency: can Sora 2 Pro reliably produce footage that meets professional standards, or will every third generation need to be thrown away? At scale, that acceptance rate matters more than the per-second cost.

The batch API is the under-appreciated gem here — it signals that OpenAI sees Sora as infrastructure, not just a consumer feature.
