Co-creator of Stable Diffusion releases AI that generates videos from text

On March 20th, in San Francisco, Runway, the startup that helped create the widely used Stable Diffusion AI image generator, unveiled a new AI model called Gen-2. The model can generate three seconds of video footage matching any written description, such as “a lion in a living room.”

Unlike the open-source Stable Diffusion, Gen-2 will not be released for anyone to download and run, a decision Runway attributes to safety and business reasons. Initially, the text-to-video model will be accessible only through a waitlist on the Runway website, with access hosted on Discord.

The idea of using AI to produce videos from written descriptions is not new; Meta Platforms Inc (META.O) and Google (GOOGL.O) both published research papers on text-to-video AI models toward the end of last year. The difference, according to Runway CEO Cristobal Valenzuela, is that Runway’s text-to-video model is now accessible to the general public.

Valenzuela said the company hopes creatives and filmmakers will use the product.

Here’s an explainer video from Runwayml.com:

Sources:
Runwayml.com
Reuters.com

AI-PRO Team

AI-PRO is your go-to source for all things AI. We're a group of tech-savvy professionals passionate about making artificial intelligence accessible to everyone. Visit our website for resources, tools, and learning guides to help you navigate the exciting world of AI.
