Sora is a new AI model from OpenAI that generates video from text

by alex

OpenAI's latest model turns text prompts into “complex, realistic scenes with multiple characters, specific types of movement, and precise object and background details”—up to a minute long.

The company also notes that Sora can understand how objects “exist in the physical world,” as well as “accurately interpret props and generate compelling characters that express powerful emotions.” The model can also generate video from a still image, fill in missing frames in an existing video, or extend it.

Sora-generated demo videos published on X include, among others, a camera flying along a snowy Tokyo street, although a close look reveals telltale signs of AI generation (for example, treetops disconnected from their trunks).

A few years ago, text-to-image generators like Midjourney drew much of the attention to the AI industry, but companies such as Runway and Pika have since pushed the technology toward video. Google's Lumiere can now be considered OpenAI's main competitor in this area, although that model's clips are limited to 5 seconds.



Sora is currently available only to “red teamers” who are evaluating the model for potential harms and risks. OpenAI is also giving access to select artists, designers, and filmmakers to gather feedback.

Earlier this month, OpenAI announced that it was adding watermarks to its text-to-image tool DALL-E 3, but noted that they can be “easily removed.”

