Adobe has integrated the Firefly Video Model AI video generator into its Premiere Pro editor

Adobe has officially introduced Firefly Video Model, a new generative neural network designed for working with video, and made it part of the Premiere Pro application. With this tool, users will be able to extend existing footage as well as create videos from static images and text prompts.


The Generative Extend feature, built on this neural network, is becoming available to Premiere Pro users in beta. It lets you extend a clip by a few seconds at the beginning, at the end, or within another segment. This can be useful when editing requires fixing minor defects, such as a shift in a subject's gaze or an unwanted movement in the frame.

Generative Extend can add no more than two seconds of video, so it is suitable only for small adjustments. The tool works at 720p or 1080p resolution at 24 frames per second. It can also lengthen audio, with limitations: a sound effect or ambient noise can be extended by up to ten seconds, but spoken dialogue and music tracks cannot.

The web version of Firefly has gained two new video generation tools: Text-to-Video and Image-to-Video, which, as the names suggest, create videos from text prompts and from static images respectively. Both features are currently in limited beta and may not be available to all Firefly web users.

Text-to-Video works much like other AI video generators, such as OpenAI’s Sora: the user enters a text description of the desired result and starts generation. Different visual styles can be simulated, and the generated videos can be further refined with a set of “camera controls” that emulate camera angle, movement, and shooting distance.

Image-to-Video lets you attach a static image to the text prompt so that the generated video matches the user’s intent more closely. Adobe suggests using the tool, among other things, to “reshoot” individual fragments by generating new footage from single frames of an existing video. However, the published examples make clear that, at least at this stage, the tool will not eliminate real reshoots, since it does not accurately reproduce every object in the source image. Adobe’s example pairs an original video with one generated from a frame of that original.

These tools cannot yet produce long videos: Text-to-Video and Image-to-Video create five-second clips at 720p and 24 frames per second. By comparison, OpenAI says its Sora generator can produce videos up to a minute long “while maintaining visual quality and following user prompts,” although that model is still unavailable to the general public several months after its announcement.

Generating a video with Text-to-Video, Image-to-Video, or Generative Extend takes about 90 seconds, but Adobe says it is working on a “turbo mode” to shorten generation times. The company notes that tools built on the Firefly Video Model are “commercially safe” because the neural network is trained on content that Adobe is permitted to use.


Source: 3dnews.ru