“When it came to creating environments, Lumen was crucial,” Aurelien explains. “With it, we were able to work on lighting in real time, without using a lot of hardware resources. Together with Niagara, which we used for particles like rain, we knew we could generate a render that corresponded exactly to what we saw in our UI in just seconds.”

The last step was to run the Unreal Engine output through three separate AI algorithms to create the final Otomo style, depending on the scene required. The first algorithm enabled Sagans' artists to describe the style of a scene in text, then see it appear as a video. The second algorithm took a reference image to produce an animated video, while the final algorithm turned live-action film into a stylized animated result.
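Conceptually, the three algorithms form a routing step keyed on the kind of source material available for a shot. A minimal sketch of that idea is below; the function and pipeline names are illustrative assumptions, not Sagans' actual tooling.

```python
# Hypothetical routing of a shot to one of the three stylization
# algorithms described above, keyed on the kind of input available.
# All names here are illustrative, not Sagans' real API.

def choose_pipeline(source_kind: str) -> str:
    """Pick a stylization algorithm from the kind of source material."""
    routes = {
        "text": "text_to_video",         # describe the scene in words
        "image": "image_to_video",       # animate from a reference image
        "live_action": "video_to_video", # restyle live-action footage
    }
    try:
        return routes[source_kind]
    except KeyError:
        raise ValueError(f"unknown source kind: {source_kind!r}")
```

A shot driven purely by a written description would route to `text_to_video`, which the team reports using for the majority of shots.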

“We used our text-to-video algorithm on the majority of the shots,” Aurelien adds. “For this, we used Disco Diffusion 5 to help generate individual frames, which the algorithm then turned into video. We had to ensure our text was extremely detailed, describing the style and filling any gaps: If you’re not mentioning the sky, the machine could simply forget that there had to be a sky on top of the city. Once we got a good first frame, we would then make sure there was enough description to generate the next frames depending on the camera moves we desired, then check the render one last time on the 20th frame.”
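The workflow Aurelien describes, a highly detailed base prompt so the model doesn't "forget the sky", extended per frame for the camera move, with a final check at frame 20, could be sketched as follows. Every name here is a hypothetical stand-in; the actual Disco Diffusion 5 setup is not shown in the source.

```python
# Hypothetical sketch of the prompting workflow described above:
# a detailed base prompt carried across every frame, plus a per-frame
# camera-move description, and a review flag at the 20th frame.
# Prompt text and function names are illustrative assumptions.

BASE_PROMPT = (
    "dense megacity at night in the style of Katsuhiro Otomo, "
    "neon skyline, heavy rain, overcast sky above the city"
)

def frame_prompt(frame: int, camera_move: str) -> str:
    """Build the text prompt for one frame of the sequence."""
    return f"{BASE_PROMPT}, camera {camera_move}, frame {frame}"

def build_shot(num_frames: int, camera_move: str, check_at: int = 20):
    """Generate per-frame prompts and the index of the frame to review."""
    prompts = [frame_prompt(i, camera_move) for i in range(1, num_frames + 1)]
    review_index = min(check_at, num_frames) - 1  # final render check
    return prompts, review_index
```

The point of the fixed `BASE_PROMPT` is consistency: every frame restates the full scene description, so nothing the model was never told about (like the sky) can silently drop out between frames.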

Source: Unreal Engine Blog
