OMNIVERSE AUDIO2FACE

To make the metaverse more livable, content creators need to align the behavior of digital twins and virtual entities more closely with real-world behavior.

NVIDIA claims that the Omniverse Audio2Face beta simplifies animating a 3D character to match any voice-over track, whether you are animating characters for a game, a film, a real-time digital assistant, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it's up to you.

Audio2Face is a great example of the importance of AI for the Metaverse.

Audio2Face is preloaded with “Digital Mark”, a 3D character model that can be animated with your audio track, so getting started is simple: just select your audio and upload it. The audio input is fed into a pre-trained deep neural network, and the output drives the 3D vertices of your character mesh to create the facial animation in real time. You can also edit various post-processing parameters to fine-tune your character's performance. The results you see on this page are mostly raw outputs from Audio2Face, with little to no post-processing applied.
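Under the hood, the idea is straightforward: each frame of audio features goes into a trained network, and the network's output is a set of per-vertex offsets applied to a neutral face mesh, frame by frame. The sketch below illustrates that data flow only; the model class, feature dimensions, and names are hypothetical stand-ins, since NVIDIA's actual network and internals are not exposed by Audio2Face.

```python
import numpy as np

# Hypothetical stand-in for the pre-trained network: it maps one frame of
# audio features to 3D offsets for every vertex of a neutral face mesh.
class Audio2VertexModel:
    def __init__(self, n_vertices, feature_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Random weights stand in for learned ones; the real model is a
        # deep neural network trained on paired speech and facial capture.
        self.W = rng.normal(scale=0.01, size=(feature_dim, n_vertices * 3))

    def predict(self, audio_features):
        # (feature_dim,) -> (n_vertices, 3) vertex offsets for one frame
        return (audio_features @ self.W).reshape(-1, 3)

def animate(neutral_mesh, model, feature_frames):
    # One deformed mesh per audio frame: neutral vertices + predicted offsets
    return np.stack([neutral_mesh + model.predict(f) for f in feature_frames])

# Toy usage: a 1000-vertex mesh, 29-dim audio features, 60 frames of audio
neutral = np.zeros((1000, 3))
model = Audio2VertexModel(n_vertices=1000, feature_dim=29)
meshes = animate(neutral, model, np.random.default_rng(1).normal(size=(60, 29)))
print(meshes.shape)  # (60, 1000, 3): one animated mesh per frame
```

In the real app, the post-processing parameters then filter and reshape these raw per-frame outputs before they reach your character.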

NVIDIA Solution for Facial Animation

Here is a funny example of this technology used in chatbots.

It is also worth mentioning that NVIDIA is working to support more and more languages.

Game Engine Compatibility

The latest update to Omniverse Audio2Face adds blendshape conversion and blendweight export options. The app also now supports export and import with Epic Games' Unreal Engine 4, so you can generate motion on MetaHuman characters using the Omniverse Unreal Engine 4 Connector.
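Blendshape conversion in general means refitting a per-vertex animation onto a fixed set of blendshapes, which reduces to a least-squares solve per frame. The snippet below shows that generic technique only; the function and the toy data are illustrative assumptions, not NVIDIA's exporter or the format the Unreal Engine 4 Connector consumes.

```python
import numpy as np

def solve_blendweights(neutral, blendshapes, target_frame):
    """neutral: (V, 3) rest mesh; blendshapes: (B, V, 3) deltas from neutral;
    target_frame: (V, 3) animated vertices. Returns (B,) blendweights."""
    B = blendshapes.shape[0]
    # Each flattened blendshape delta becomes one column of the basis matrix.
    A = blendshapes.reshape(B, -1).T          # (3V, B)
    b = (target_frame - neutral).reshape(-1)  # (3V,) target deformation
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)  # blendweights are typically kept in [0, 1]

# Toy usage: 4 blendshapes on a 500-vertex mesh, one animated frame
rng = np.random.default_rng(1)
neutral = rng.normal(size=(500, 3))
shapes = rng.normal(scale=0.1, size=(4, 500, 3))
true_w = np.array([0.2, 0.0, 0.7, 0.4])
frame = neutral + np.tensordot(true_w, shapes, axes=1)
print(solve_blendweights(neutral, shapes, frame))  # recovers ~[0.2 0. 0.7 0.4]
```

Weights like these, solved frame by frame, are the kind of data a blendweight export writes out for a target rig such as a MetaHuman.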
