How do I apply motion blur data from another 3D app?

I'm using XSI to render out the motion vector pass (I found that the ReelSmart-style pass looks much like Blender's Vec pass), a Z-depth pass, and the diffuse pass.

Is it even possible to use the Blender Compositor to apply the motion blur – without rendering from Blender?

Usually motion blur can only be applied to objects that are actually in motion, not objects that merely appear to be in motion, which is the case with movie or image-sequence files. Try it and let us know what happens for you.

Yes, this object is in motion. It was rendered out of XSI along with a motion vector pass. I've been trying to plug both the diffuse pass and the motion vector pass into Blender's compositor, but I'm not having any luck. It looks like it might require Z-depth as well. I thought I would ask whether it's actually possible before I spend more time on this.

I thought it might work since the motion vector pass from XSI looked a lot like the Vec pass from a Blender test I did (visible in the attached screen dump).

I think I may not have been very clear: you have moving pictures, not moving geometry. Pictures only simulate motion, so there is nothing for motion blur to act on. I see that you're trying to apply vector blur, which is not the same as motion blur (motion blur pales in comparison and is much slower to render too!), but this can only be applied to a render layer, not an image layer. The reason is that the pixel data is converted back into 3D geometry in the blur pass, and you have to have the vectors to work with, since 3D motion information is not interpolated from a 2D movie or still sequence. If you're only wanting to use Blender for the blur pass I don't think you're going to be able to do it, but I could be wrong. You can read about vector blur here: http://blender.org/cms/Vector_Blur.748.0.html

You can, however, render only the geometry that has motion in Blender and composite it into your previously rendered scene. Be sure to enable a vector pass in the Render Layers panel. Be aware that if you have objects in your foreground that pass in front of your vector-blurred layer, these will either have to be rotoscoped or re-rendered in Blender (as a separate render layer if they are not supposed to have vector blur applied). Alternatively you could go the OpenEXR/Z-depth matte route, but that's a whole different ball of wax.
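If you'd rather script it, enabling those passes is a couple of lines in Blender's Python API. A minimal sketch against recent builds (property names may differ in older versions):

```python
# Sketch: enable the passes the Vector Blur node needs (recent Blender
# Python API; older builds expose these under different names).
import bpy

view_layer = bpy.context.view_layer
view_layer.use_pass_z = True       # Z-depth pass
view_layer.use_pass_vector = True  # Speed vectors for the Vector Blur node
```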

OK, the node setup is all wrong, but I'll get to that later.

First thing: why don't you simply use Blender for the vector blur data? As you said yourself, the results are very similar. It seems to me that you are looking for a difficult solution to a very easy problem.

I use YafRay and Indigo all the time to create motion-blurred images like this. All you do is add another node with your YafRay/Indigo render (as you've done) and plug it into the Image socket of the Vector Blur node. You then plug the Z and Speed outputs from the Blender scene's Render Layers node into the corresponding sockets of the Vector Blur node, then finally composite.
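In script form, the setup looks roughly like this. A sketch against Blender's recent Python API (the file path is a placeholder, and socket names may vary between versions):

```python
# Sketch of the node setup described above: external colour into the
# Vector Blur node, Z and Speed from the matching Blender scene.
import bpy

view_layer = bpy.context.view_layer
view_layer.use_pass_z = True       # make sure the needed passes exist
view_layer.use_pass_vector = True

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")   # supplies Z and Speed
ext = tree.nodes.new("CompositorNodeImage")    # the external render
ext.image = bpy.data.images.load("//yafray_render_0005.png")  # placeholder

vblur = tree.nodes.new("CompositorNodeVecBlur")
comp = tree.nodes.new("CompositorNodeComposite")

# External colour into the Vector Blur's Image socket...
tree.links.new(ext.outputs["Image"], vblur.inputs["Image"])
# ...Z and Speed from the Blender render layer (render the scene first
# so these passes actually contain data).
tree.links.new(rl.outputs["Depth"], vblur.inputs["Z"])
tree.links.new(rl.outputs["Vector"], vblur.inputs["Speed"])
tree.links.new(vblur.outputs["Image"], comp.inputs["Image"])
```

The point is that only the Z and Speed inputs have to come from the Blender scene; the colour can come from anywhere.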

So yes, you do also have to render with Blender, but if you turn off shadows and raytracing it'll only take a few seconds.

“I think I may not have been very clear: you have moving pictures, not moving geometry.”

Well, as long as his Blender scene matches his XSI scene exactly, you can use the motion data from Blender to composite.

But I have to be honest, I'm not entirely sure what secundar wants to do here. Maybe some elaboration would help.

@RamboBaby: Yes, I did read and re-read a couple of sections there. Took me some time to wrap my brain around.

@M@dcow: We have quite a few passes coming from XSI so I don’t think duplicating the setup in Blender will work for us during this deadline.

I did confuse the terms motion blur and vector blur. I do mean to perform vector blur within Shake using the vector pass rendered from XSI. Our animator is using a plug-in for XSI called LM2DMV_v2, which outputs a rather colorful pass that resembles the Speed output on a Blender RenderLayer node. From both of your comments I gather that there's other, invisible data being passed behind the scenes. Crumb!

It makes perfect sense that Blender's compositor is so closely tied to the rest of the app. However, I do hope the developers will consider how other VFX pipeline apps will be used in conjunction with Blender as they move forward. Allowing motion vector passes from other apps would be a very nice feature.

Blender's vector blur needs a colour pass, a Z-depth pass, and a speed vector pass to work. Since the speed pass has so far only been seen to work coming straight out of Blender's own render layers, the format of that data is a bit of a mystery. I'd imagine it's not too difficult to get it working, but the data may need a bit of massaging to get it into a format that makes sense to Blender, whether that's the channels the various directions are encoded in, or the range of values. That is, I'm confident (without peeking at the source yet) that the vector blur expects values in a wide range, well beyond the normalised 0 to 1 (or 0 to 255) that the lm2dmv shader might produce. The same goes for the Z channel.
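If you want to check what's actually in a file, the OpenEXR Python bindings make it quick to dump the channel names and value ranges. A minimal sketch (the file name is just an example):

```python
# Minimal sketch: list an EXR's channels and their value ranges,
# using the OpenEXR Python bindings.
import array
import Imath
import OpenEXR

exr = OpenEXR.InputFile("vector_pass_0005.exr")  # example file name
header = exr.header()
print("channels:", sorted(header["channels"].keys()))

pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
for name in sorted(header["channels"]):
    values = array.array("f", exr.channel(name, pixel_type))
    print(f"{name}: min={min(values):.4f} max={max(values):.4f}")
```

A normalised pass will report everything inside 0.0 to 1.0; raw pixel displacements will range much wider and can go negative.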

I don’t have any software that can use that lm2dmv shader, so I’d be curious if you could post an example of the files it produces to examine (i.e. speed pass/Z pass).

“I do hope the developers will consider how other VFX pipeline apps will be used in conjunction with Blender as they move forward. Allowing motion vector passes from other apps would be a very nice feature.”

I agree. Blender's vector blur is pretty darn good, and definitely competitive with other post-pro vector blur plugins like ReelSmart or ones for Shake. This (and the compositor in general) is a considerable foot in the door for Blender in more 'high end' workflows, so it would be nice to see that sort of interoperability supported. If you could post some examples of the files that you get out of lm2dmv it would be a good start.
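And if those files do turn out to hold normalised vectors, remapping them into raw pixel displacements before feeding them to Blender might look something like this. Everything about the encoding here (values centred on 0.5, X/Y motion in R/G, the maximum displacement) is a guess until we see the actual files:

```python
# Hypothetical remap: normalised motion vectors (centred on 0.5) to raw
# pixel displacements. The encoding assumptions are guesses.
import array
import Imath
import OpenEXR

MAX_DISPLACE = 32.0  # assumed maximum displacement encoded by the shader
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)

src = OpenEXR.InputFile("lm2dmv_vector_0005.exr")  # hypothetical file name
header = src.header()

channels = {}
for name in header["channels"]:
    values = array.array("f", src.channel(name, pixel_type))
    if name in ("R", "G"):  # assume X/Y motion lives in R and G
        values = array.array("f", ((v - 0.5) * 2.0 * MAX_DISPLACE for v in values))
    channels[name] = values.tobytes()

dw = header["dataWindow"]
out_header = OpenEXR.Header(dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1)
out_header["channels"] = {name: Imath.Channel(pixel_type) for name in channels}
out = OpenEXR.OutputFile("lm2dmv_vector_remapped_0005.exr", out_header)
out.writePixels(channels)
out.close()
```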

Thanks, broken. Here are the test files I was working with. Frame 5 is included, from a series of 9 frames rendered out of XSI using the aforementioned shader…

http://ritchie.dnsalias.com:8080/CMX/MV_TEST.tar.gz

mblur_depth_0005.exr - 16-bit depth pass
mblur_test_default_0005.tga - diffuse pass
mblur_vector_reelsmart_0005.exr - motion vector in ReelSmart flavor
mblur_vector_smoothkit_0005.exr - motion vector in SmoothKit flavor

As I mentioned earlier, I felt the ReelSmart motion vector pass looked much like Blender’s Speed pass when attached to a viewer.

I'm already using Blender, albeit in a limited capacity. It's a valuable tool which I would like to see become more valuable in our VFX workflow. While I'm merely a padawan compositor, I strongly feel the animation/rendering could have been performed just as well in Blender as in XSI.

Hi, I'm getting a "no permission to access that file" error when I try to download it here.

Sorry, it's fixed now.

Got the file, will check it out soon. Cheers.

The reason you can't use the render passes from Softimage is that output sockets have not yet been coded into the image nodes for any auxiliary information other than Alpha and Z-depth. Blender seems to install with export scripts for everything under the sun, including Softimage, but the only commercial-app import scripts that I see in my menu are for Maya, 3DS, and LightWave. If you can pack your file and save to one of those formats then you've got a good shot at success. Alternatively, you may be able to hunt down an XSI import script somewhere; just search the net.

As far as workflows involving other 3D platforms go, Blender is at the dawn of a new and highly professional era. Thanks to recently added support for Industrial Light & Magic's OpenEXR image file format, you will soon be able to access, modify, or remove virtually every parameter of your rendered images (materials, textures, normals, UVs, alphas, scale, etc.) as if you were still working with the 3D models themselves. This will eventually allow Blender to manipulate auxiliary channel information from other 3D platforms which embrace the .exr file format. Have a look:

Excerpted from “Technical Introduction to OpenEXR”, which can be found in its entirety here:
http://www.openexr.com/index.html

“• arbitrary image channels
OpenEXR images can contain an arbitrary number and combination of image channels, for example red, green, blue, and alpha; luminance and sub-sampled chroma channels; depth, surface normal directions, or motion vectors.

• scan-line and tiled images, multiresolution images
Pixels in an OpenEXR file can be stored either as scan lines or as tiles. Tiled image files allow random access to rectangular sub-regions of an image. Multiple versions of a tiled image, each with a different resolution, can be stored in a single multiresolution OpenEXR file. Multiresolution images, often called “mipmaps” or “ripmaps”, are commonly used as texture maps in 3D rendering programs to accelerate filtering during texture lookup, or for operations like stereo image matching. Tiled multiresolution images are also useful for implementing fast zooming and panning in programs that interactively display very large images.

• ability to store additional data
Often it is necessary to annotate images with additional data; for example, color timing information, process tracking data, or camera position and view direction. OpenEXR allows storing of an arbitrary number of extra attributes, of arbitrary type, in an image file. Software that reads OpenEXR files ignores attributes it does not understand.”
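That first point is exactly what this thread needs. Writing such a multi-channel file is already simple with the OpenEXR Python bindings; a toy sketch with made-up channel names and dummy all-zero data:

```python
# Toy sketch: write an EXR carrying colour plus custom motion-vector
# channels. Channel names and pixel data are purely illustrative.
import array
import Imath
import OpenEXR

width, height = 4, 4
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)

header = OpenEXR.Header(width, height)
header["channels"] = {
    name: Imath.Channel(pixel_type)
    for name in ("R", "G", "B", "velocity.x", "velocity.y")
}

zeros = array.array("f", [0.0] * (width * height)).tobytes()
out = OpenEXR.OutputFile("multichannel_example.exr", header)
out.writePixels({name: zeros for name in header["channels"]})
out.close()
```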

Today I went back through the 2.42 release log and found that the devs do indeed intend to support all our wildest dreams.
This excerpt is straight from Ton Roosendaal and can be found here:
http://blender.org/cms/High_Dynamic_Range_Gra.765.0.html

"Multi-layer, Multi-pass, tile-based files
An OpenEXR file can hold unlimited layers and passes, stored hierarchically. This feature now is in use for the “Save Buffers” render option. This option doesn’t allocate the entire final Render Result before render (which can have many layers and passes), but saves for each tile the intermediate result to a single OpenEXR file in the default Blender ‘temp’ directory.
When rendering is finished, after all render data has been freed, this then is read back entirely in memory.

In a next release we will make this format available as a standard render output option too, allowing to re-use it in the Compositor for example, with access to all Layers and Passes like the current RenderLayer Node."
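A sketch of what reading such a file back in the Compositor could look like through the Python API (node and property names here are taken from recent builds and are assumptions for anything older; the file path and layer name are hypothetical):

```python
# Sketch: load a multi-layer EXR into a compositor Image node, which
# exposes the stored layers and passes as output sockets.
import bpy

scene = bpy.context.scene
scene.use_nodes = True

img = bpy.data.images.load("//saved_buffers_0005.exr")  # hypothetical path
node = scene.node_tree.nodes.new("CompositorNodeImage")
node.image = img
node.layer = "RenderLayer"  # pick which stored layer's passes to expose
```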

Cross-platform data sharing may take a bit longer if some type of translator node needs coding before Blender can actually read auxiliary channel info written by other apps (sort of like the Java virtual machine), but this will surely come to pass too.

God bless the Developers!

So say we all!

If I somehow find time to explore your suggestion, can you recommend a format that supports animation? We're not using armatures in XSI, just a simple arrangement of constraints. My thinking is that all I should need is the model and its animation (oh yeah, the camera too!) for creating the speed pass. In the end, I'd bring the foreground element passes, i.e. the animated character, as one layer into Blender for motion blurring. Then I can finish the comp in Blender or send the blurred result back to Shake.

The only other 3D apps I've used are Animator and Maya PLE, and both were short-lived. Experiment, or maybe ask in some of the other forums; there's one specifically for questions about other software that's compatible with Blender.

We’re purchasing the RSMB plugin for Shake 4.1 (on Mac).

In the meantime, after becoming more familiar with Blender's Z channel, I put together the following node setup to support a hypothesis. It works, to some degree. Perhaps someone would like to take a look and offer suggestions on how we might get rid of the artifacts… I just realized I need to pre-multiply the Z channel output; that will get rid of some of the artifacting.

(Blender CVS 061014 Linux build)

.blend (794 KB)

Anyone looked at the .blend? Any thoughts?

I would take a look at it, but it's not compatible with Windows. Exactly what type of artifacts are you getting?