How did Tangent bake the Q-Bot's face animation in Next Gen?

There’s very little technical behind-the-scenes info out there about the animation in Netflix’s Next Gen, but I’m really curious to figure out how Tangent animated the faces on the Q-Bots.

From what I’ve been able to find, they created an add-on that takes geometry and curves from the model/rig and converts or projects them into a texture map internally in Blender, which is then pixelated in the material network. Jeff Bell’s talk at the last Blender Conference goes the most in-depth, but even he only scratches the surface.
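My best guess at the baking half is something like the sketch below: run a handler every frame, read the face controls’ positions, and write them into the pixels of a float image, one texel per control. Every name and number in it is my own invention, not anything from Tangent’s add-on.

```python
# Pure speculation: bake face-control positions into an image with bpy.
# "face_anim_bake" and the "face_ctrl" prefix are made-up names.
import bpy

IMG_NAME = "face_anim_bake"
WIDTH, HEIGHT = 64, 64  # one texel per control point

def bake_controls_to_image(control_objects):
    """Write each control's world position into one texel of a float image."""
    img = bpy.data.images.get(IMG_NAME)
    if img is None:
        img = bpy.data.images.new(IMG_NAME, WIDTH, HEIGHT, float_buffer=True)

    pixels = list(img.pixels)  # flat RGBA float list
    for i, obj in enumerate(control_objects):
        x, y, z = obj.matrix_world.translation
        # Remap from an assumed -1..1 rig space into the 0..1 texture range.
        texel = i * 4
        pixels[texel:texel + 4] = [(x + 1) / 2, (y + 1) / 2, (z + 1) / 2, 1.0]
    img.pixels[:] = pixels

def on_frame_change(scene, depsgraph):
    controls = [ob for ob in scene.objects if ob.name.startswith("face_ctrl")]
    bake_controls_to_image(controls)

# Re-bake on every frame change so the texture follows the animation.
bpy.app.handlers.frame_change_post.append(on_frame_change)
```

A material could then read that image back with closest (un-interpolated) sampling to get the pixelated LED look, but that’s pure speculation on my part.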

For the life of me though, I can’t figure out how they were able to convert the geometry data into a texture map. Any chance someone with some insider knowledge could share a little more?

We used a custom CurveTexture shader node.
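To give the rough idea (this is just an illustrative sketch, not the actual node): the node takes the rig’s 2D face curves as input and, for each shading point, checks whether the point’s UV coordinate lies on one of the curves, so the curves themselves act as the texture. Something like:

```python
# Toy version of a curve-texture lookup: shade a UV point by its
# distance to a polyline, then rasterize at low resolution to mimic
# the pixelation done in the material network. Illustration only.

def sample_polyline(points, t):
    """Linearly interpolate along a polyline at parameter t in [0, 1]."""
    n = len(points) - 1
    i = min(int(t * n), n - 1)
    local = t * n - i
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    return (x0 + (x1 - x0) * local, y0 + (y1 - y0) * local)

def curve_texture(points, u, v, thickness=0.04, samples=64):
    """Return 1.0 if (u, v) lies within `thickness` of the curve, else 0.0."""
    for s in range(samples + 1):
        px, py = sample_polyline(points, s / samples)
        if (px - u) ** 2 + (py - v) ** 2 <= thickness ** 2:
            return 1.0
    return 0.0

# A crude "mouth" stroke in UV space, rasterized into a 16x16 grid.
mouth = [(0.2, 0.45), (0.35, 0.35), (0.5, 0.45)]
res = 16
for j in range(res):
    print("".join(
        "#" if curve_texture(mouth, (i + 0.5) / res, 1 - (j + 0.5) / res) else "."
        for i in range(res)
    ))
```

Sampling the result at a low, fixed resolution is what matches the “pixelates it in the material network” step described above.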


WOAH. That’s so cool. Thanks!