Reproducing Siren from Unreal

Hey guys,

I’m sure you’ve seen Siren, the digital human from Epic Games rendered in Unreal.

Siren’s performance animation was made using state-of-the-art mocap and a lot of preparation.

Now, imagine giving a good animator the task of reproducing Siren’s exact performance from the video demo without using any sort of mocap system.

Would they be able to reproduce Siren’s performance exactly, or is that simply impossible to accomplish, and why? Thank you and have a nice day!

With a face-scanned, morph-based rig and tedious rotoscoping, maybe. The question is why anyone would even bother.

This showcase may look good and somewhat real, but put another person in that mocap suit and it starts to look odd. The only new thing here is that you don’t need the big camera rigs they used for L.A. Noire 8 years ago. I don’t want to depreciate what the technicians have accomplished here, but it still looks like CG, especially around the mouth.

I wonder if Bill Hader’s deepfake videos are real-time as well. Now that’s impressive (and very creepy).


Assuming you are exclusively talking about the animation aspect of things, I’d say of course it would be possible.

Here it’s really just about PBR with high-res textures, a high-poly face mesh, and real-time mocap fed into their engine. And who knows what their rig looks like… but it must have quite a few handles doing different things at once. They could rely purely on skinning or use physics to simulate muscle contraction and whatnot… who knows.

With Blender’s tracking module and by parenting baked tracking points to bones, you could do the same thing (except for it being live). Another way would be drivers and shape keys; a rough sketch of that route is below.
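Something like this in bpy, for instance (just a minimal sketch of the drivers-and-shape-keys idea; the object names “FaceMesh” and “Track_Jaw”, the shape key “jawOpen”, and the scale factor are all placeholders for your own setup, with “Track_Jaw” being an empty created via Link Empty to Track and then baked):

```python
import bpy

# Placeholder names: "FaceMesh" is the head mesh with a "jawOpen" shape key,
# "Track_Jaw" is an empty baked from a solved tracking marker.
face = bpy.data.objects["FaceMesh"]
marker = bpy.data.objects["Track_Jaw"]

# Add a driver on the shape key's value.
fcurve = face.data.shape_keys.key_blocks["jawOpen"].driver_add("value")
drv = fcurve.driver
drv.type = 'SCRIPTED'

# Read the empty's vertical position as the driver input.
var = drv.variables.new()
var.name = "jaw_y"
var.type = 'TRANSFORMS'
var.targets[0].id = marker
var.targets[0].transform_type = 'LOC_Y'
var.targets[0].transform_space = 'WORLD_SPACE'

# Map the marker's travel into the 0..1 shape key range (the scale is a guess).
drv.expression = "max(0.0, min(1.0, jaw_y * 5.0))"
```

For the bone route you’d instead put a Copy Location constraint on the bone targeting the same baked empty and let skinning do the deformation.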

Gollum’s facial expressions in The Lord of the Rings trilogy were all keyframed.

I will just leave it here.

The thing no one talks about with these rigs is that the most important element tying it all together is the quality of the blendshapes. You could slap the best-tracked facial capture onto a joint-based rig with no drivers and it would look horrible.

This is where the marriage of modeling and rigging takes center stage. Research, pushing and pulling, and iteration upon iteration is the only way this stuff gets done. They have it down to a science because they know exactly what poses they need out of an actor and can just have the subject run through them all during scanning. If it’s a character you’re modeling from scratch, however, you’ll need to sculpt all that information yourself.
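If you are sculpting all of that from scratch in Blender, stubbing out the target list first at least keeps the iteration organized. A rough sketch (the mesh name and the FACS-ish pose list are just placeholders):

```python
import bpy

# Placeholder mesh name; the pose list is an example set of FACS-style
# targets you would then sculpt by hand, one key at a time.
face = bpy.data.objects["FaceMesh"]

# Make sure there is a Basis key to sculpt the targets relative to.
if face.data.shape_keys is None:
    face.shape_key_add(name="Basis", from_mix=False)

for pose in ["jawOpen", "browRaise_L", "browRaise_R",
             "lipCornerPull_L", "lipCornerPull_R", "cheekPuff"]:
    if pose not in face.data.shape_keys.key_blocks:
        face.shape_key_add(name=pose, from_mix=False)
```

Each of those keys then gets sculpted against reference and iterated on, which is where the real work goes.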