Photorealistic faces in video games have been with us for some time, but they are still a long way from what you would call realistic: while the polygons and textures are nearly perfect, the facial animations remain stuck in the depths of the uncanny valley.
This isn’t a knock on animators and modelers – mimicking the nuances of human facial expressions is clearly very hard work, and teams have come a long way, from GoldenEye to Half-Life 2 to Yakuza – but that’s where things stand. Progress is still being made, though, and while we’ve all gotten used to accepting that video game characters can look almost real until they open their mouths or try to look scared, someday we’ll reach the point where the uncanny valley is bridged and they simply do look real.
This is one way to do it. You are looking at the work of VFX studio Ziva Dynamics, whose Unreal Engine-based examples here show off facial animation technology capable of some truly exceptional things. As an example, look at the model on the left, generated from the real-time capture on the right:
This real-time stuff looks good – excellent, even – but not that far beyond what we’re used to right now. Now watch what happens when you feed the same model a few terabytes of reference data and run it through some AI training:
Holy shit. While Ziva works across the entertainment industry, from TV to movies, this technology has the gaming industry especially in mind, as it would allow developers to introduce a whole range of emotions and facial expressions that are difficult, if not impossible, to convey at the moment.
For a slightly more technical example, here’s an industry-focused corporate trailer, which at the end shows one of the more promising side effects of the approach: that it works across ethnicities and even (fictional) species: