I'm really stuck looking for information on how to use the 'speed vector' information (generated when doing fluid simulations) in the compositor. Is this part of the Video Sequence Editor?
There is a further complication, as I'm working in Lightwave and importing the meshes. I'm hoping to duplicate the Lightwave camera position in Blender, assuming the camera position is relevant to this 'speed vector' information?
I have searched everywhere for any hints on this and can’t find anything. Any kind of help or clues would be much appreciated.
Now, the speed vector of an object is usually calculated at render-time by Blender, which can extrapolate from the movement of faces between the previous and next frames. This works because the objects, no matter how they're deformed, are always the same object (same vertex structure, same base shape, etc.).
Blender's fluid simulator, however, uses a voxel approach underneath everything, which means a new mesh has to be created for every single frame Blender calculates. Because the meshes for every single frame are completely different--topologically speaking--Blender has no way of calculating the speed vector of the fluid for use in the motion blur. The 'speed vector' fluid option essentially compensates for this by slipping speed information, baked during the runtime of the simulation, into the gaps left in the vector pass by the fluid simulation mesh.
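To make it concrete what that vector pass actually does downstream, here is a minimal, hypothetical sketch (not Blender's actual implementation) of vector-based motion blur: each pixel carries a per-pixel speed vector (dx, dy) in pixels per frame, and the blur smears the pixel's value along that vector. The function name and data layout are my own for illustration.

```python
def vector_blur(image, speed, samples=8):
    """Smear each pixel along its motion vector.

    image: 2D list of grayscale floats.
    speed: 2D list of (dx, dy) tuples -- the 'speed vector' pass.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    weight = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = speed[y][x]
            # Scatter this pixel's value at several points along
            # its motion path over the frame interval.
            for s in range(samples):
                t = s / max(samples - 1, 1)
                tx = int(round(x + dx * t))
                ty = int(round(y + dy * t))
                if 0 <= tx < w and 0 <= ty < h:
                    out[ty][tx] += image[y][x]
                    weight[ty][tx] += 1.0
    # Normalize by how many samples landed on each pixel.
    return [[out[y][x] / weight[y][x] if weight[y][x] else image[y][x]
             for x in range(w)] for y in range(h)]
```

The point of the fluid option is simply that without baked speed data the `speed` input above would be empty (zero vectors) for the fluid mesh, and no blur could be computed for it.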
The upshot of this is that you probably don't need to worry about it, unless you have a compositor that supports Blender's type of vector-based motion blur.
But I don't think I was too clear. I want to use images rendered in Lightwave3D and substitute these in the compositor of Blender. But if the speed vectors are applied to or through the Blender rendering process, then, I suppose, this isn't going to work.
I had thought the speed vectors were part of a post-processing step, that is, applied to the image after the render.