Bake Wrangler - Node-based baking tool set

It’s possible to recalculate the normals in the shader every frame, but that’s an expensive proposition. Or you can store it as a separate texture.

I don't actually think it's that expensive since it's on the GPU. I've used Unreal Engine's shader graph quite a lot, and in my experience math operations like these are pretty cheap; it's procedural textures and the like that usually hog shader performance.

You’d optimally want to sort the mesh vertices in a particular way so that vertices with similar paths in a given animation are grouped together. This would help with compression even if it has to be lossless.
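The sorting idea above can be sketched in plain Python. This is a hypothetical illustration, not part of any existing tool: it assumes you already have per-vertex motion tracks and uses a crude similarity key (total displacement plus start position) so that vertices that move alike end up adjacent in the stream, which tends to help image compression of the baked rows.

```python
# Sketch: reorder vertices so those with similar motion paths end up adjacent.
# tracks[v] is the list of (x, y, z) positions of vertex v over the frames.

def motion_key(track):
    """Crude similarity key: total displacement, then start position.
    Vertices that barely move sort together; big movers sort together."""
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(track, track[1:]):
        total += abs(bx - ax) + abs(by - ay) + abs(bz - az)
    sx, sy, sz = track[0]
    return (round(total, 6), sx, sy, sz)

def sorted_vertex_order(tracks):
    """Return a permutation of vertex indices grouped by motion similarity.
    The same permutation must also be applied to the mesh's vertex stream."""
    return sorted(range(len(tracks)), key=lambda v: motion_key(tracks[v]))
```

A real implementation would want a better similarity metric (e.g. clustering on the full trajectories), but the key point is that the permutation is applied once at bake time and costs nothing at runtime.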

I like this idea, though it sounds difficult to implement (both on the Python script side and on the game engine shader side).

I just meant that it’s like a UV set in that it defines blocks in the image. It’d just be a bounding box when passed to the shader via four variables, so no extra draw calls there. It’s not draw calls but texture binds I wanted to avoid.

So you mean that you would split up the mesh into sections? Like, for instance, that the legs would be UV unwrapped to use one specific animation and the arms another, that kinda deal?

AFAIK ordinarily you just rotate the normals based on the bones, which is cheap. There are no bones here so you have to find connected vertices and average, which is quite a bit more expensive. Oh and, in the unreal video, the guy did indeed bake out a separate map with the normals.

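To make the cost comparison concrete, here is what "find connected vertices and average" amounts to, as a pure-Python reference rather than shader code (in the shader this is per-vertex work every frame, which is why baking a normal map instead is attractive):

```python
# Reference for recomputing smooth per-vertex normals without bones:
# average the (area-weighted) normals of all faces touching each vertex.

def face_normal(a, b, c):
    """Unnormalized cross product of two triangle edges."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def vertex_normals(positions, triangles):
    """positions: list of (x, y, z); triangles: list of (i, j, k) indices."""
    acc = [[0.0, 0.0, 0.0] for _ in positions]
    for i, j, k in triangles:
        n = face_normal(positions[i], positions[j], positions[k])
        for idx in (i, j, k):
            acc[idx][0] += n[0]; acc[idx][1] += n[1]; acc[idx][2] += n[2]
    out = []
    for nx, ny, nz in acc:
        length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
        out.append((nx / length, ny / length, nz / length))
    return out
```

Note that this needs the full face adjacency of each vertex, which is exactly the data a vertex shader does not have, so in practice you either bake the normals out or accept a more involved GPU approach.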

I think it's still rather cheap since it's all on the GPU; remember that animated meshes usually don't have that dense geometry either.

Games already do this nonetheless; there are a plethora of "tutorials" on it on Google, and they often do it specifically for crowd rendering because it's cheaper than bone animation.

No, sections within the map containing the animation cache. Like on a bird animation, this box has wings flapping, this box has head bobbing, this one has walking forward, and so on.

But then you would need multiple UV sets, though?

I mean, yeah, you can do it. Which technique you should use just depends on which is faster on your target hardware - calculating the normals, or a texture fetch. Answer will be different on desktop vs on something like the Oculus Quest.

UVs have nothing to do with it though? You’re not doing a mapping based on UVs, just based on the vertex order of the mesh stream.
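The "based on the vertex order" mapping can be shown in a few lines. This is a sketch of one common VAT layout (one vertex per column, one frame per row); the function names and layout are assumptions for illustration, not any engine's API:

```python
# Sketch: vertex v at frame f reads texel (v, f) in the position texture.
# No UVs are involved; the lookup comes straight from the vertex index.

def vat_texel(vertex_index, frame, tex_width, tex_height):
    """Return normalized (u, v) coordinates that sample texel centers."""
    u = (vertex_index + 0.5) / tex_width
    v = (frame + 0.5) / tex_height
    return (u, v)
```

In the actual shader this is the same arithmetic, with the vertex index coming from something like `SV_VertexID` and the sample done with point filtering so neighboring texels never blend.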

Also, regarding whether the normals should be a separate texture:

In this quite recent video by Epic Games they do have a texture dedicated to the normals; the same goes for that older tutorial on the forum: Vertex Animation Tool - Timeline Meshes | Unreal Engine Documentation

So maybe that is the best way to go: bake both normals & positions separately.

UVs have nothing to do with it though? You’re not doing a mapping based on UVs, just based on the vertex order of the mesh stream.

I'm not sure I follow, wouldn't that force you to have all animations be the same length?

I just assumed you meant UVs, since having a separate UV set for each animation would let you define the length of the animation as well.

It’s exactly why you need that metadata, so they don’t have to be the same length. I said it was like UV (and I’m sorry for having confused you), but it wouldn’t really be UVs since you’re not actually storing anything per-vertex in the mesh. It’s just a bounding box per animation so you can point the shader where to sample the vertex offsets from.
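That per-animation metadata might look like the following sketch; the clip names, row layout, and function are all illustrative assumptions, but they show how clips of different lengths share one texture with just a row range each:

```python
# Sketch: each clip owns a row range in the shared position texture,
# so clips can differ in length without extra UV sets or draw calls.

ANIMATIONS = {
    # name: (first_row, frame_count)
    "flap": (0, 24),
    "bob":  (24, 12),
    "walk": (36, 30),
}

def sample_row(anim, time_01):
    """Map a normalized time in [0, 1) into the clip's row range."""
    first_row, frame_count = ANIMATIONS[anim]
    frame = int(time_01 * frame_count) % frame_count
    return first_row + frame
```

At runtime you would pass the chosen clip's `(first_row, frame_count)` to the shader as a couple of uniforms, which is the "four variables, no extra draw calls" point from earlier.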

No, sections within the map containing the animation cache

Oh wait, so you mean that the pixels of the texture itself define where each animation starts/ends?


Yup! That’s it! :smiley:

Awesome :slight_smile: Well, yeah, I also think that sounds like a solid solution, if we can implement it all.

Also, as long as the animations can still be baked one by one, so that the artist never has to rebake anything (this is important from a workflow standpoint); that matters more than performance in the game engine.

Implement? Haha. I was just thinking about this more as a thought experiment. I’m not actually that keen on implementing any of this. The technique is kinda cute though! Sassy somehow. Lol. :stuck_out_tongue:

I mean… It’s not raytracing, you’re just copying position data into pixels. It should be quick enough that re-baking everything shouldn’t be an issue. And you wouldn’t want to bake individually anyway because any change in vertex order or number would invalidate all animations.
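The "copying position data into pixels" step really is that small. Here is a pure-Python sketch of the bake; in Blender the per-frame positions would come from evaluating the mesh each frame, but the packing itself is just a remap into [0, 1] so the texture can survive 8/16-bit formats (the function name and return shape are assumptions):

```python
# Sketch: flatten per-frame vertex positions into rows of RGB pixels.

def bake_positions(frames):
    """frames: list of frames, each a list of (x, y, z) per vertex.
    Returns (pixels, bounds): pixels are rows of RGB triples remapped
    to [0, 1]; bounds must be passed to the shader to undo the remap."""
    flat = [c for frame in frames for pos in frame for c in pos]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    pixels = [[tuple((c - lo) / span for c in pos) for pos in frame]
              for frame in frames]
    return pixels, (lo, hi)
```

This also makes the invalidation point concrete: the pixel at column v only means anything as long as vertex v is the same vertex, so any change to vertex order or count invalidates every baked clip at once.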

I just think the “packing” into a less direct format should be a second step :man_shrugging:

I wouldn't consider it any kind of requirement of the addon either, as long as the animations can be extracted.


I'm curious, how would you export mesh animations that don't use 100% bones from Blender to third-party software (and that aren't just blend shapes)?

Alembic. It’s there and well supported and performs well. Not in a crowd simulation, of course. Also has the advantage that you don’t have to have the same number of vertices or the same topology in every frame, so you can do stuff like fluid sims.

There was also that technique where, for Uncharted 4, Naughty Dog would generate animated bones based on vertex deltas so that they could include baked cloth sims in animations cheaply, and that performed even better, but AFAIK the implementation is proprietary to Sony. It'd probably be possible to recreate it from the paper, though. Not that I have a pressing use case for this.

So, a few things… It’s a ‘cool’ technique, but it also has a bunch of drawbacks. I really don’t think you would want to replace your entire animation system with it…

It’s like with any software, you optimise where you need it. Like if you have a few thousand birds flapping their wings, hell yes this is a great idea to use. If you have a couple of characters with complex animations that should be able to blend into each other from any arbitrary point in their cycle, then it’s going to be terrible!

Even if you use it for some simple stuff, there are issues. Let's take a door and a landing gear as examples:

With the door it really depends on the shape, but you need a collider on it right? A collider can follow a bone easily by copying rotation, etc. But making it follow a vertex deformation? Sure, if the shape and movement is simple you could have separate logic that moves the collider, but if it’s that simple you probably had no reason to use the baked vertex animation :stuck_out_tongue:
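For the "separate logic that moves the collider" case, the CPU side would have to re-sample the baked data itself, something like this sketch (the function and data layout are hypothetical; it just tracks one baked vertex and interpolates between frames):

```python
# Sketch: sample a tracked vertex's baked position at time t (seconds)
# on the CPU, e.g. to place a collider at a door handle's vertex.

def tracked_position(baked, vertex_index, t, fps=30.0):
    """baked: list of frames, each a list of (x, y, z) per vertex.
    Linearly interpolates between the two frames surrounding t."""
    f = t * fps
    f0 = int(f) % len(baked)
    f1 = (f0 + 1) % len(baked)
    w = f - int(f)
    a, b = baked[f0][vertex_index], baked[f1][vertex_index]
    return tuple(a[i] * (1.0 - w) + b[i] * w for i in range(3))
```

Which rather proves the point above: you end up duplicating the animation playback on the CPU for anything gameplay-relevant, at which point a bone would have been simpler.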

Then with a landing gear. So there are no blending problems, it just goes up and down. Detailed collision probably isn’t needed so that can be solved. But what if you want it to touch the ground? With a couple of bones you would use an IK solver, easy. But there isn’t really any solution when you have baked vertex positions…?

And while I'm no expert and pretty much have to look it up every time I want to do it, there are ways to bake Blender's various animation systems into each other already. I don't really feel like a completely non-standard vertex animation format makes a good universal animation exporter :stuck_out_tongue: and even if there were a standard, exporting skeletal animation would be way better because you can do a lot more with it, including converting it to vertex animation if you wanted. It's much harder to go the other way…

With all that in mind, I don’t think the simple exporter I talked about earlier is particularly challenging to create and it certainly has uses. I’m unsure that anyone WOULD use it…

There are a couple of implementation questions I'm unsure of: firstly, how easy it is to get vertex positions in Blender from a mixed set of animations, and secondly, whether vertex order is maintained when exporting and importing various model formats.
