Fascinating new technology could help reduce the workforce in the 3D industry

Here are two fascinating examples of a popular trend in technology - training a neural network to perform a task ordinarily done by gainfully employed human agents:

“Photorealistic Facial Texture Inference Using Deep Neural Networks” automatically creates a 3D model of a face from just a single example photograph:

“Phase-Functioned Neural Networks for Character Control” generates novel animation on the fly from a collection of motion-capture data:

Quality is already comparable to acclaimed AAA titles such as Mass Effect™ Andromeda, leading to hopes that more 3D artists and motion-capture actors could be made redundant within the next few years.
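For the curious, the core trick in the second paper can be sketched in a few lines: the network's weights are not fixed, but are themselves a smooth cyclic function of the character's gait phase, produced by Catmull-Rom spline interpolation over a small set of stored control weight matrices. Below is a minimal NumPy sketch of that idea; the shapes, names, and the single stand-in layer are all illustrative, not the paper's actual implementation:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom interpolation between p1 (t=0) and p2 (t=1)."""
    return 0.5 * (
        2 * p1
        + (p2 - p0) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
        + (3 * p1 - p0 - 3 * p2 + p3) * t**3
    )

def phase_weights(control_points, phase):
    """Interpolate a weight matrix for a given phase in [0, 2*pi)."""
    k = len(control_points)
    u = (phase / (2 * np.pi)) * k % k        # position along the cycle
    i = int(u)
    t = u - i
    # pick four neighbouring control weight sets, wrapping around the cycle
    p0, p1, p2, p3 = (control_points[(i + j - 1) % k] for j in range(4))
    return catmull_rom(p0, p1, p2, p3, t)

rng = np.random.default_rng(0)
controls = [rng.standard_normal((8, 4)) for _ in range(4)]  # 4 control weight sets
x = rng.standard_normal(4)              # control input (trajectory, joint state, ...)
W = phase_weights(controls, phase=1.3)  # weights for this instant of the gait cycle
pose = np.maximum(W @ x, 0.0)           # one ReLU layer as a stand-in for the full net
```

Because the weights vary smoothly and cyclically with the phase, the output motion loops cleanly through the walk cycle instead of snapping between pre-recorded clips.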

there has to be some kind of catch

maybe the topology is the worst thing ever so it can’t actually be used for anything practical
or
maybe it doesn’t work very well for people with extreme skin tones
or
the source photo has to be really good

I mean
they can’t really replace us, right?

https://i.ytimg.com/vi/p-msmx9yW5Q/hqdefault.jpg

Very impressive.

Looks great. I like this technology.

Told you so.

I cannot wait until it is implemented into a game engine.

The second one seems to be the more interesting of the two, but I do wonder just how many samples it needs.

The video shows the figure climbing up stairs, balancing on ledges, and running over rocks. How much manual work is needed to gather enough samples (i.e. how much of the motion is synthesized from pre-existing samples and how much is procedural)? Is it more sophisticated than the entirely procedural animation tech known as Euphoria (used in Star Wars: The Force Unleashed)?

At this point, neural networks are only capable of handling very specific tasks. It is impressive that a face can be created like this, but that network can really only do that, and only under the right conditions.
The character controller is impressive too, but still far away from being practically useful. The feet in particular look rather weird at this point. In practice, you would need to be able to define the style and ensure the animations work for any character size and proportion. This will surely improve a lot, but it is still unknown when. The step from research to an actual product can be huge. My impression, however, given how much research is going on in this field, is that practical solutions are not years away.

That procedurally generated animation system looks very promising and interesting; it might finally become a stepping stone toward another level of fidelity in modern gameplay, as I’ve never been too happy with dynamic animation in games. Euphoria in GTA 4 was interesting, but then this kind of technology went nowhere after that. I wonder what the reason behind that is.

Quality is already comparable to acclaimed AAA titles such as Mass Effect™ Andromeda,

Am I the only one who sees the irony in this?

What, the whole synthetics-taking-over-the-universe plot of the games?
I don’t think that’s a thing in the Andromeda game

The first video is about creating an albedo map from a photo. The 3D model was not created from the photo by the computer.

Nice technologies, but if history can tell us anything, this is not going to reduce the workforce; instead, the studios will compete to develop bigger projects…

Yes, of course. The actual advantage of those technologies, in my opinion, is that artists will be able to focus more on the aspects that actually matter. If you want to exaggerate, say, the sadness of a character through the animations in a game, that is a tremendous amount of work today. You need a new set of animations, or you need to procedurally adjust the existing ones. This would take weeks to properly integrate and test. However, with neural networks there is the potential to, for example, have parameters for the style, or to pass an example animation as a style reference that is applied to the overall animation. This can already be done with procedural animation, but the results are kind of mixed in my opinion. The potential of neural networks is far more promising.
It is going to be interesting to see how those solutions evolve, especially whether they will also be able to take physics into account, both for more believable animation overall and to react appropriately to collisions, or even avoid them.

The dynamic animation stuff blows my mind (reminds me of

too, but better). When I think of it, for any film with a bunch of background characters, one could just bind a controller to a set of animation clips - just as in a video game - control the characters that way, and then bake the results. Instead of, huh… spending two months making characters walk around the set by hand. For instance.

The model was computer-generated as well, it’s just not the contribution of that paper.

This is the method they used:

There will always be humans involved in the process, just like there are still humans in factories and humans attending U-scan checkout counters.

Main characters will always get hand-crafted artistic details. But for all the people walking down the street in GTA 6, algorithmically generated characters like this are more than detailed enough.

No, he meant the bad quality of models and animations.

Of course! These are just tools. Human operators will still be required.

Just look at agriculture, where technological progress has reduced the workforce by a mere 98%: