To summarize, our goal is to move simulation from the end of the character pipeline to the front, and use it to generate character deformations. The simulation then becomes a library of how the character moves.

You can harvest pose data either by machine learning the simulation directly, or by extracting shapes into a pose-space deformation (PSD) corrective system. Character artists can then add a sparse layer of art direction shapes using traditional methods, instead of manually correcting the entire body's deformations. Once that shape data is in the rig, every downstream department benefits from the result.
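To make the PSD side concrete, here is a minimal sketch of pose-space deformation via radial-basis interpolation: corrective shapes sampled at a set of training poses are blended according to how close the current pose is to each sample. This is a generic illustration of the technique, not Ziva's implementation; all array shapes, values, and function names are invented for the example.

```python
import numpy as np

def rbf_weights(sample_poses, query_pose, sigma=1.0):
    """Gaussian RBF weights of a query pose against the sampled training poses."""
    d2 = np.sum((sample_poses - query_pose) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()  # normalize so the corrective shapes blend smoothly

def psd_correction(sample_poses, corrective_shapes, query_pose, sigma=1.0):
    """Blend per-pose corrective shapes (n_poses x n_verts x 3 vertex deltas)
    into a single offset field for the current pose."""
    w = rbf_weights(sample_poses, query_pose, sigma)
    return np.tensordot(w, corrective_shapes, axes=1)  # -> (n_verts, 3)

# Toy data: 3 sampled poses (flattened joint rotations) on a 4-vertex patch.
poses = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
shapes = np.random.default_rng(0).normal(size=(3, 4, 3)) * 0.01  # stand-in deltas
offsets = psd_correction(poses, shapes, query_pose=np.array([0.5, 0.2]))
print(offsets.shape)  # (4, 3): per-vertex corrective offsets for this pose
```

In a production rig the corrective shapes would be the deltas extracted from the simulation at each sampled pose, and the interpolation would run inside the deformer rather than in a standalone script.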

Materials, grooming, and cloth have clean surfaces to work from, so rigging and character artists get to spend much more time on art direction instead of fixing broken deformations. Plus, the animation department gets an asset with high-fidelity deformation and corrections that work well together as a cohesive system.

Using machine learning on these kinds of systems lets you capture deformation effects that cross the interacting zones of the body, effects that would otherwise be difficult to incorporate into a character.
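As one hedged illustration of that idea, the sketch below regresses from a flattened pose vector to a per-vertex offset field with a small multilayer perceptron: because every output vertex sees the full pose vector, the model can pick up couplings between distant body regions. The data, shapes, and model size are placeholders, and this is not the production approach described in the talk.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in training data harvested from simulation: each row of X is a pose
# (flattened joint rotations), each row of Y is the simulated per-vertex
# offset field for that pose, flattened to (n_verts * 3).
n_samples, n_joints, n_verts = 200, 10, 50
X = rng.normal(size=(n_samples, n_joints * 3))
Y = rng.normal(size=(n_samples, n_verts * 3)) * 0.01

# A small MLP learns the pose -> deformation mapping across the whole body.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(X, Y)

# At runtime the rig evaluates the trained network instead of the simulation.
new_pose = rng.normal(size=(1, n_joints * 3))
offsets = model.predict(new_pose).reshape(n_verts, 3)
```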

Watch my full SIGGRAPH 2022 presentation, which goes into more detail on the techniques used in simulation for stylized characters. Or, if you’d like more information about Ziva VFX, contact an expert.

Source: Unity Technologies Blog