Motion data from other renderers

Greetings, fine folks!

I’m curious to know if it’s possible to use motion/vector data output from mental ray, RenderMan or other renderers with the Vector (Motion) Blur filter node in Blender. If so, would it connect to the ‘speed’ socket? What would be the general workflow?


I’ve used it before with mental ray (the mv2DToxik pass). It works mostly OK; the only issue is that mental ray’s pass contains just one set of speed vectors, in the R and G channels, while Blender expects a second pair in the B and A channels as a second set of motion vectors. To fix this, just use the Separate/Combine RGBA nodes to copy the first set into the B/A channels. And don’t forget to connect the depth data.
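Here’s a minimal sketch in plain Python of what that Separate/Combine RGBA rewiring does, assuming the pass is loaded as rows of [R, G, B, A] float pixels (the pixel values here are made up for illustration):

```python
# A tiny stand-in for an mv2DToxik-style speed pass: each pixel is an
# [R, G, B, A] float list, with the X/Y motion vectors in R/G and the
# B/A channels left at zero (as mental ray writes them).
width, height = 3, 2
pass_pixels = [[[1.5, -0.5, 0.0, 0.0] for _ in range(width)]
               for _ in range(height)]

# Equivalent of the Separate RGBA -> Combine RGBA rewiring in the
# compositor: copy the first vector pair (R, G) into the second (B, A)
# so Vector Blur sees motion both before and after the current frame.
for row in pass_pixels:
    for px in row:
        px[2] = px[0]  # B <- R
        px[3] = px[1]  # A <- G

print(pass_pixels[0][0])  # -> [1.5, -0.5, 1.5, -0.5]
```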

Here’s a demo. Sorry about the Render Layers node; I don’t have a mental ray file lying around to demo with at the moment. Just pretend it’s an Image node loading your render file.

I am not sure that node network is correct… the way Blender expects the speed vector is as follows:

R: pixel displacement in X from current frame to previous frame
G: pixel displacement in Y from current frame to previous frame
B: pixel displacement in X from current frame to next frame
A: pixel displacement in Y from current frame to next frame

What your node network is saying is that the pixel is in the same position before and after the current frame.
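The channel layout above can be illustrated with a tiny Python sketch (the function name and the per-frame positions are made up for illustration; this just encodes the convention described in the list, not Blender’s internals):

```python
# Build a per-pixel speed value following the R/G/B/A layout described
# above: R/G hold the displacement toward the previous frame, B/A the
# displacement toward the next frame.
def speed_channels(prev_pos, cur_pos, next_pos):
    """prev_pos / cur_pos / next_pos: (x, y) pixel positions of the same
    surface point in the previous, current, and next frames."""
    r = prev_pos[0] - cur_pos[0]  # R: X displacement, current -> previous
    g = prev_pos[1] - cur_pos[1]  # G: Y displacement, current -> previous
    b = next_pos[0] - cur_pos[0]  # B: X displacement, current -> next
    a = next_pos[1] - cur_pos[1]  # A: Y displacement, current -> next
    return (r, g, b, a)

# A point moving steadily 2 px right and 1 px down per frame:
print(speed_channels((98, 49), (100, 50), (102, 51)))
# -> (-2, -1, 2, 1)
```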

Look at this thread for more info –

If you need any help, just message me!

Thanks to both of you for your help, Double and J. And, DoubleBishop, I shall surely message you if I run into roadblocks.

For Blender’s speed pass it is wrong, but I was just using that as an example (like I said, I didn’t have a mental ray file handy to demonstrate with). Mental ray’s speed pass does not contain the second set at all; it just contains the R and G channels, with a single set of X/Y vectors. The B/A channels are blank (all pixels = 0). Blender’s Vector Blur node will not interpret this correctly; it will believe all motion stops at the current frame. To work around that with mental ray’s pass, you need to copy the first set into the second set. It’s not perfect, but it’s better than nothing.

You are both correct. The thing to note is that the current version of mental ray handles motion blur with the option of keyframing the motion at the start, middle, or end of the interval. You can output two sets of images (one for the start and the other for the end) to complete the set for Blender, and plug them in with a Combine node. The Z remains the same for both images in the set, so you can connect the Z from either one and it’ll work. This is theory at this point; I’ll have to experiment and see if it does the trick.
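A sketch of that two-image idea in Python, assuming you rendered two separate vector passes from mental ray, one keyed at the start of the interval and one keyed at the end (the function name and sample values are hypothetical; this just shows how the two (x, y) pairs would slot into the single RGBA speed value Blender expects):

```python
# Merge two per-pixel motion vectors into one RGBA speed value:
# the start-keyed vector fills R/G (motion toward the previous frame),
# the end-keyed vector fills B/A (motion toward the next frame).
def combine_speed(start_vec, end_vec):
    """start_vec: (x, y) from the start-keyed vector render;
    end_vec: (x, y) from the end-keyed vector render."""
    return (start_vec[0], start_vec[1], end_vec[0], end_vec[1])

# Same steadily moving point as before, seen from both renders:
print(combine_speed((-2.0, -1.0), (2.0, 1.0)))
# -> (-2.0, -1.0, 2.0, 1.0)
```

In the compositor this would be two Image nodes feeding Separate RGBA nodes, with one Combine RGBA node taking R/G from the start image and B/A from the end image.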

If there’s a way to successfully separate both the before-frame and after-frame data from a single pass, then I’ll post it. Or if you guys figure it out and beat me to it, by all means post it.

Turns out I can’t make heads or tails of this Vector Blur node. In the context of Blender’s own renders it makes sense, and the docs are very useful. However, with externally output renders I can’t get even a bad blur to occur. What should I know about this node? Does it not work on single (non-sequence) images? I’ve got Beauty, Z and mv2DToxik passes, all rendered separately. I’ve separated the channels and rerouted them as above, and in other ways. Nothing.

Clearly, my newbieness with the Blender compositor is showing, and I need help.