On March 20th, in San Francisco, Runway, the startup that helped create the widely used Stable Diffusion AI image generator, unveiled a new AI model. Called Gen-2, it can generate three seconds of video footage from any written description, for example, “a lion in a living room.”
For safety and business reasons, Runway has decided not to release the Gen-2 text-to-video model openly. Unlike the open-source Stable Diffusion, the model will not be made available to everyone: initially, access will be limited to a waitlist on the Runway website, with the model hosted on Discord.
The idea of using AI to produce videos from written descriptions is not novel; Meta Platforms Inc (META.O) and Google (GOOGL.O) both published research papers on text-to-video AI models toward the end of last year. According to Cristobal Valenzuela, Runway’s CEO, the difference is that Runway’s text-to-video model is now accessible to the general public.
Valenzuela said the company hopes creatives and filmmakers will use the product.
Here’s an explainer video from Runwayml.com.