AI Video Startup Decart Achieves 4x Faster Real-Time Video Generation at Half GPU Cost Using AWS Trainium3, Challenging NVIDIA's Inference Dominance

December 8, 2025

Amazon Web Services has scored a major win for its custom AWS Trainium accelerators by striking a deal with AI video startup Decart, under which Decart is optimizing its flagship Lucy model on AWS Trainium3 to support real-time video generation (CNBC). Decart reports 4x faster inference for real-time generative video at half the cost of GPUs (OpenAI), demonstrating that custom AI accelerators can challenge NVIDIA's dominance in computationally intensive generative AI applications.

Decart is essentially going all-in on AWS, making its models available through the Amazon Bedrock platform so that developers can integrate real-time video generation capabilities into almost any cloud application without worrying about the underlying infrastructure (CNBC). The company has obtained early access to the newly announced Trainium3 processor, which is capable of outputs of up to 100 fps at lower latency (CNBC).
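
For developers, the Bedrock route means real-time video generation is reachable through the same SDK calls used for any other hosted model. The sketch below shows what such a call could look like with boto3; the model identifier and the request/response fields are illustrative assumptions, not Decart's published interface.

    import json
    import boto3

    # Bedrock runtime client; hosted models are invoked through one API,
    # so no accelerator or GPU details leak into application code.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.invoke_model(
        modelId="decart.lucy-v1",  # hypothetical model ID, for illustration only
        body=json.dumps({
            "prompt": "a drone shot over a neon-lit city at night",
            "fps": 30,
            "duration_seconds": 5,
        }),
    )

    # The response body is a stream; its exact schema depends on the model.
    result = json.loads(response["body"].read())
    print(list(result.keys()))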

Lucy has a time-to-first-frame of 40 ms, meaning it begins generating video almost instantly after receiving a prompt, and by streamlining video processing on Trainium it can match the quality of much slower, more established video models such as OpenAI's Sora 2 and Google's Veo-3 while generating output at up to 30 fps (CNBC). By running Lucy on Trainium3, Decart hopes to push beyond the current 30 fps output and generate live video at up to 100 fps while reducing time-to-first-frame to under 40 milliseconds (Thriveholdings).
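
A quick back-of-the-envelope calculation shows what those frame rates imply for the per-frame generation budget; the figures are the ones quoted above, and the arithmetic is only for orientation.

    # Per-frame time budgets implied by the quoted frame rates.
    time_to_first_frame_ms = 40

    for fps in (30, 100):
        frame_budget_ms = 1000 / fps  # time available to generate each frame
        print(f"{fps:>3} fps -> {frame_budget_ms:.1f} ms per frame, "
              f"first frame after ~{time_to_first_frame_ms} ms")

    # At 30 fps each frame has roughly 33 ms of headroom; at 100 fps only 10 ms,
    # i.e. every subsequent frame must be produced faster than the first one.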

Trainium3 UltraServers deliver up to 4.4x more compute performance, 4x greater energy efficiency, and almost 4x more memory bandwidth than Trainium2 UltraServers (OpenAI). Built on a 3-nanometer process, each UltraServer scales up to 144 Trainium3 chips, delivering up to 362 FP8 PFLOPs and up to 20.7 TB of HBM3e memory, enabling massive models to train in weeks instead of months (Yahoo Finance).
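
Dividing the quoted UltraServer totals by the 144-chip count gives rough per-chip figures; these are derived numbers for orientation, not official per-chip specifications.

    # Rough per-chip figures derived from the quoted UltraServer totals.
    chips = 144
    total_fp8_pflops = 362
    total_hbm3e_tb = 20.7

    print(f"~{total_fp8_pflops / chips:.2f} PFLOPs FP8 per chip")        # ~2.51
    print(f"~{total_hbm3e_tb * 1000 / chips:.0f} GB of HBM3e per chip")  # ~144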

The partnership reflects a broader industry movement toward custom AI accelerators as alternatives to NVIDIA GPUs. AI coding startup Poolside is using AWS Trainium2 to train its models, with plans to use the same infrastructure for inference; Anthropic is hedging its bets by training future Claude models on a cluster of up to one million Google TPUs; and Meta Platforms is reportedly collaborating with Broadcom to develop custom AI processors (CNBC). AWS claims Trainium and Google's TPUs offer 50-70% lower cost per billion tokens compared with high-end NVIDIA H100 clusters (Yahoo Finance).

Dean Leitersdorf, Decart co-founder and CEO, said that Trainium3's next-generation architecture delivers higher throughput, lower latency, and greater memory efficiency, allowing the company to achieve up to 4x faster frame generation at half the cost of GPUs (CNBC).

Leitersdorf emphasized that generative video is one of the most compute-intensive challenges in AI, and that by combining Decart's real-time video models with AWS Trainium3, the partnership is making real-time video generation practical and cost-effective at scale (Thriveholdings).

Anthropic's early adoption carries symbolic weight: Amazon holds an $8 billion stake in the OpenAI rival, and its decision to run production workloads on Trainium signals that Trainium3 isn't experimental but production-ready and competitive with NVIDIA's flagship offerings (Yahoo Finance). Yet NVIDIA's moat remains formidable: CUDA has become the industry standard for AI development, and switching to Trainium requires rewriting code and retraining teams (Yahoo Finance).

By generating high-fidelity AI video in real time, Decart says it can power use cases that simply weren't possible before, including live gaming, where AI-generated video can be incorporated into open-ended games to build environments based on player interactions, and social media applications, where influencers can integrate AI video into live streams (Thriveholdings).

For organizations spending millions monthly on AI infrastructure, Trainium3's economics are transformational, with the chip delivering over 5x more output tokens per megawatt than previous generations and directly cutting data-center power bills (Yahoo Finance). Enterprises evaluating AI infrastructure strategies now face credible alternatives to NVIDIA-exclusive architectures, potentially reducing vendor lock-in risks. Amazon acknowledges this reality by announcing that Trainium4 will support NVIDIA's NVLink Fusion interconnect technology, enabling mixed deployments within the same racks (Yahoo Finance).

The real question isn't whether Amazon can match NVIDIA's raw performance (Trainium3 already does) but whether cost and energy efficiency alone can reshape a $50 billion+ AI chip market, or whether ecosystem lock-in and customer inertia keep NVIDIA entrenched (Yahoo Finance). Decision-makers should monitor whether real-time video generation adoption validates custom-accelerator economics across other computationally intensive AI applications. ASICs aren't going to replace GPUs completely, since the flexibility of GPUs means they remain the only real option for general-purpose models, but specialized workload optimization may fragment AI infrastructure markets (CNBC).

Source & Date

Source: Artificial Intelligence News, AWS, Tech Startups, HPCwire, TechCrunch, Invezz
Date: December 3, 2025 (AWS re:Invent 2025)

