'speed vectors' in Fluid Simulation to Lightwave3D???

I’m hoping someone can point me toward understanding how to use the ‘speed vector’ information (generated when doing fluid simulations) in the Compositor/Video Sequence Editor.

I’m working and rendering in Lightwave3D and importing the meshes from Blender. I’m wondering whether it is necessary to duplicate the Lightwave camera position in Blender, assuming the camera position is relevant to this ‘speed vector’ information?

I want to use the images rendered in Lightwave3D and substitute them in Blender’s compositor, if this is possible.

I had thought the speed vectors were part of a post-processing step, that is, applied to the image after the render, but maybe this is not true?

Here is a link (the fluid4 movie) with an example of what is being attempted; it would be good if I could double the speed of this (so it plays in real time and not slow-mo). Being able to use the ‘speed vector’ information (for motion blur or whatever) would greatly enhance the realism.

http://www.iol.ie/~powers/

Here is the link at Newtek where I’m trying to get help on the Lightwave3D side:

http://www.newtek.com/forums/showthread.php?t=73375

No news on this then?

Any material on how this is done in the Blender Compositor/Video Sequence Editor?

Sorry, finally got around to this. Meant to do it sooner. Now, this might not work in every single case, since you have to have the speed vectors rendered in Blender, which means the same geometry (roughly) and the same camera angles (again, roughly, but more accuracy helps). Lining everything up will most likely get harder the more complex your scene is, but it should be easy if you’re just using fluids and little else.

You’ll need to use composite nodes for the blur, btw.

Also, I noticed you said you wanted the sim sped up. I don’t know if you knew this, but the default simulation time scale in Blender is set to 0.3; try setting it to 1 (at least) or higher to get a more real-time speed.
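For anyone following along by script rather than in the buttons window, here is a rough bpy sketch of that tweak. It assumes a recent Blender with Mantaflow fluids (the object name “FluidDomain” and the modifier name “Fluid” are placeholders); the 2.4x panels discussed in this thread expose this as the domain’s start/end time instead:

```python
import bpy

# Placeholder name; use whatever your fluid domain object is called.
domain = bpy.data.objects["FluidDomain"]
settings = domain.modifiers["Fluid"].domain_settings

# 1.0 plays the simulation back at real-time speed; higher speeds it up further.
settings.time_scale = 1.0
```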

Now, what you’d have to do is get your fluids into Blender and get a mini-scene set up. I don’t know if you can in fact export stuff from LW and import it properly into Blender, but you’ll need to have basically the same camera angle and set-up, etc. Maybe you could use your original fluid sim .blend scene as an initial reference.

So, first off, make sure you have a reasonable facsimile of your LW scene mocked up in Blender.

Second, render the Lightwave scene as a movie or an image sequence or some such; it honestly doesn’t matter which type, because in the end Blender treats everything external it can interpret as a bundle of frames.

Third, open up your mock-up scene in Blender, get a node window open, set the type to ‘composite nodes’ and press ‘use nodes’. You’ll know it’s right so far because two nodes will appear: an input from your rendered scene, and a composited output. You need to add two other nodes to do this trick: an ‘image’ input [add>input>image] (which should really be called ‘file’, because it loads movies and frame sequences as well) and a vector blur node [add>filter>vector blur]. If you want, you can press the space-bar hotkey to open the add menu instantly under your mouse pointer.
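As a side note, the same setup can be built in Python. This is only a sketch, assuming a recent Blender where the two default nodes are named “Render Layers” and “Composite” (the menu paths above are the node-window equivalents):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True              # same as pressing 'use nodes'
tree = scene.node_tree

# The two nodes that appear by default (assumed default names):
render_layers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]

# The two extra nodes for the trick:
image_node = tree.nodes.new("CompositorNodeImage")   # add > input > image
vec_blur = tree.nodes.new("CompositorNodeVecBlur")   # add > filter > vector blur
```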

Load up your LW-rendered movie in the image node, and be sure to set the type in the node to whatever is appropriate for your media format of choice (generated, sequence, image, or movie). There are a few options you can set, but they’re fairly basic: how many frames you want used, start frame, offset, whether or not it loops, etc.
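Continuing the sketch above, those options look roughly like this in bpy (the file path and frame counts are made-up placeholders):

```python
# Load the Lightwave render; the path is a placeholder.
image_node.image = bpy.data.images.load("/path/to/lw_render.mov")
image_node.image.source = 'MOVIE'   # or 'SEQUENCE' for numbered frames

image_node.frame_duration = 250     # how many frames to use
image_node.frame_start = 1          # where on the timeline the clip starts
image_node.frame_offset = 0         # frames to skip at the head of the clip
image_node.use_cyclic = False       # whether it loops
```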

Connect the image output of your input node (with the LW movie) to the image input of the vector blur node. Then connect the z-buffer output of your render layer node (this will be the z-buffer from your mock-up scene) to the z-buffer input of your vector blur node.

Now, at this point you will probably notice a problem. The vector blur node requires one more input, the speed (vector) information, and the render layer node does not have a matching output. The reason is that Blender does not calculate this render-layer output automatically, because barely anyone has the sense to use it. :stuck_out_tongue:

This state of nodelessness is easily fixable, however. Go to the buttons window, render button-set, ‘render layers’ tab, and turn on the vec(tor) button for render layer 1 (you probably do not need to select the render layer first, because 1 is the default). You should see a new output handle on your ‘render-layer’ node titled ‘speed’. You are close to victory now.
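In a recent Blender this toggle lives on the view layer rather than the old render-layers tab; in the bpy sketch it is a one-liner:

```python
# Enables the speed/vector pass, which adds the extra output socket
# to the render-layers node.
bpy.context.view_layer.use_pass_vector = True
```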

Connect the speed handles of the render-layer and the vector blur nodes.

Connect the image output from the vector blur node to the image input of the final composite output node.
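Collected together, the four connections from the last few steps look like this in the continuing bpy sketch (socket names such as “Depth” and “Vector” are the modern labels; older builds call them ‘Z’ and ‘Speed’ on the render-layer node):

```python
links = tree.links

# LW footage and the mock-up's z-buffer into the vector blur...
links.new(image_node.outputs["Image"], vec_blur.inputs["Image"])
links.new(render_layers.outputs["Depth"], vec_blur.inputs["Z"])
links.new(render_layers.outputs["Vector"], vec_blur.inputs["Speed"])

# ...and the blurred result on to the final composite output.
links.new(vec_blur.outputs["Image"], composite.inputs["Image"])
```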

Make sure the start and end frames of your image input node (with the video) and the start and end frames of your mock-up timeline line up. You don’t want to get frame numbers mixed up.
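In the sketch, that alignment is just a matter of matching the scene range to the loaded clip:

```python
# Keep the scene's timeline in step with the clip in the image node.
scene.frame_start = image_node.frame_start
scene.frame_end = image_node.frame_start + image_node.frame_duration - 1
```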

Btw, before doing this next bit, you’d probably be better off hiding or getting rid of the mock-up of your glass container mesh. Blender’s vector blur, while fine-looking, is purely a post-processing effect, and Blender has nothing like weird multi-level z-buffers or anything (in fact, I don’t think those exist), so the container mesh will only get in the way. You mostly want the fluids in there, because they’ll be doing the most moving.
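If you’d rather hide it than delete it, a one-liner like this would do (the object name “GlassContainer” is invented):

```python
# Keep the container out of the render (and out of the vector pass)
# without deleting it from the scene.
bpy.data.objects["GlassContainer"].hide_render = True
```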

Now, this is important: turn off unnecessary render settings, and TURN ON ‘do composite’ in the Blender render>anim tab. Also, match the render resolution to the resolution of your movie so you don’t get filtering and unnecessary scaling. Now pick a suitably action-packed frame and do a test render. With ‘do composite’ on, it should ignore what it doesn’t need to render and just calculate the vector pass. The nodes should stall for a bit while they work, and then you should see your rendered fluids with their newly applied vector blur, unless I missed something horribly.
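The same settings in the bpy sketch, with the resolution and frame number standing in for whatever your LW footage actually is:

```python
scene.render.use_compositing = True     # the 'do composite' button
scene.render.resolution_x = 640         # assumed: match the LW footage
scene.render.resolution_y = 480
scene.render.resolution_percentage = 100

scene.frame_set(120)                    # a suitably action-packed frame
bpy.ops.render.render()                 # single test render
```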

If I screwed something up, tell me. If it looks good, pick an output type (Blender can write video, or dump an image sequence in the format of your choice to a selected file/folder), press ‘anim’, and it should go through all the frames within the start and end limits.
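And to round off the sketch, the scripted equivalent of picking an output and pressing ‘anim’ (the output path is a placeholder):

```python
scene.render.filepath = "//blurred/"              # placeholder output folder
scene.render.image_settings.file_format = 'PNG'   # or 'FFMPEG' for video
bpy.ops.render.render(animation=True)             # the 'anim' button
```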

Feel free to tell me if I made a mess, I typed this up in a bit of a hurry.

LATE EDIT: Also, feel free to tell me if I misunderstood you almost completely, because I have a tendency to do that to people. :stuck_out_tongue:

BlackBoe,

Thanks ever so much for the help. This IS EXACTLY what I was looking for. It seems to be pretty much as one would imagine it, but getting it to work in practice will, of course, be another matter!

I haven’t ever used Blender in this sort of editing capacity, so I may have one or two more questions about hooking up the nodes and things, but I hope to manage. I think the only real stumbling block will be, as you sort of suggest, matching the cameras in the two programs. Not only the position, but the field of view, etc. I’m a bit frightened of cameras here; they are just so complicated unless you have had an awful lot of experience.

Anyway, THANKS again for writing such a long and detailed answer.

All went very smoothly, and exactly as you described, without a hitch. (The only problem was when using a Targa sequence, something about the alpha perhaps; I was just getting black images. I used a QuickTime Millions+ movie instead, which worked fine.)

Just three questions. Where is Blender getting the motion blur information? Is it stored in the files that the fluid simulation generates?

Also, what is the problem you mention here:

" Btw, before doing this next bit, you’d probably be better off hiding/getting rid of the mock-up of your glass container mesh. Blender’s vector blur–while fine-looking–is a post-pro effect alone, and blender has nothing like weird multi-level z-buffers or anything(in fact, I don’t think these exist) so the container mesh will only get in the way. You mostly want the fluids in there, because they’ll be doing the most moving."

Say there’s a scene with a glass and water being poured into it. In the Blender setup I just keep the water mesh (deleting the glass after the simulation calculation is done) and export only the water mesh. But in Lightwave I will place the glass in the correct position for rendering. When these same rendered images are then brought through this motion blur method in Blender, will the compositing in Blender start to blur the glass where it is in front of the fluids/water it is meant to be blurring?

And then just the last thing: is it possible to put a background image in the Blender interface? Say, behind the camera view, so I could use it to align the Blender camera position and field with the Lightwave one?