Motion blur alternative methods

I have an animation I am rendering that averages around 20 minutes per frame. That is quite a lot, but my animation is only about 2 minutes long and I am patient. However, I noticed that when I apply motion blur the render time goes up to an astonishing 3 hours per frame, which is just unacceptable. I decided to render without motion blur and add it later. What alternative ways are there to add motion blur in post? Render times are consistent without the blur right now.

I am rendering out a PNG sequence and have a newly built computer with an 8 GB graphics card, so my machine is built for this.

I haven’t done this in the compositor before, but I’m pretty sure PNGs can’t store vector data. You need to enable the vector pass and render the animation out as a multilayer EXR sequence. Sorry, but you’ll have to re-render everything.

I tried to work out the exact series of steps you need just now, and I am a bit confused myself. When you add an exr sequence to the compositor, the vector pass isn’t exposed, but you can access it when you add an individual image and set it to the name of the render layer you used. You might have to use another compositor for this.

EDIT: Sorry, I was being dumb. The option is at the bottom of the node for the image sequence. I just had to expand the node more to see it.

Anyway, you set it to the render layer, then use the Vector Blur node:
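If you prefer to wire this up with a script instead of clicking, here’s a rough sketch of the same node setup via Blender’s Python API. The function name is my own, and pass/socket names like “Depth” and “Vector” vary between Blender versions and render-layer setups, so treat it as a starting point only:

```python
# Sketch: wiring a Vector Blur node in Blender's compositor via Python.
# Only runs inside Blender; outside Blender (no bpy) it does nothing.
try:
    import bpy
except ImportError:
    bpy = None  # not inside Blender; treat this as a sketch

def add_vector_blur(exr_path):
    """Load a multilayer EXR and pipe it through a Vector Blur node."""
    if bpy is None:
        return None
    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree
    tree.nodes.clear()

    # Image node pointed at the multilayer EXR; for a multilayer file
    # its outputs expose the saved passes (Image, Depth, Vector, ...).
    img_node = tree.nodes.new(type="CompositorNodeImage")
    img_node.image = bpy.data.images.load(exr_path)

    blur = tree.nodes.new(type="CompositorNodeVecBlur")
    blur.samples = 32   # more samples = smoother streaks, slower comp
    blur.factor = 1.0   # blur strength relative to shutter time

    comp = tree.nodes.new(type="CompositorNodeComposite")

    # Socket names below are assumptions; check your loaded EXR's sockets.
    tree.links.new(img_node.outputs["Image"], blur.inputs["Image"])
    tree.links.new(img_node.outputs["Depth"], blur.inputs["Z"])
    tree.links.new(img_node.outputs["Vector"], blur.inputs["Speed"])
    tree.links.new(blur.outputs["Image"], comp.inputs["Image"])
    return tree
```

Run it from Blender’s text editor or the `--python` command-line flag after pointing it at your EXR sequence.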

Thanks for the explanation. Would the EXR sequence take the same amount of time to render out once the vector pass is added?

My guess is that it would take just a little more time, probably less than a second. It has to save the vector pass along with the combined result, so I figure that would take at least a little more time. I’m not a developer though, maybe it doesn’t add any time.

However, it will definitely take up a lot more space. That test scene with the cube rendered out to files of 1–4 MB each. When it was set to PNG, they were 250 KB or less.
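Some rough math shows why. The resolution and pass list below are just illustrative assumptions, and real EXRs come out smaller because the format compresses losslessly (ZIP, PIZ, etc.), but the order of magnitude holds:

```python
# Back-of-the-envelope size of one uncompressed multilayer EXR frame.
# Resolution and pass list are illustrative assumptions, not measurements.
width, height = 1920, 1080

passes = {
    "Combined": 4,  # RGBA
    "Depth":    1,  # Z
    "Vector":   4,  # speed vectors (x/y toward previous and next frame)
}
channels = sum(passes.values())

full_float = width * height * channels * 4   # 32-bit floats
half_float = width * height * channels * 2   # 16-bit floats

print(f"32-bit: {full_float / 2**20:.1f} MiB per frame")
print(f"16-bit: {half_float / 2**20:.1f} MiB per frame")
```

Compare that to a few hundred kilobytes for an 8-bit PNG of the same frame, and the size jump is no surprise.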

IIRC you can render the scene in the Blender Internal renderer and just use the vector pass. Much quicker especially if you turn everything off.


As 3point edit says, you don’t need to re-render everything. You can just do a quick render that includes the vector pass. I did this years ago with an external renderer (yafaray). Finish your PNG sequence, then render out the vector pass for compositing.

There is also a commercial plugin called RealSmartMotionBlur that you can buy and run with Natron.
It’s available on all operating systems. You don’t need a vector pass; it analyzes the video itself to add motion blur.

I don’t like Blender’s vector blur that much. You can try a demo version of RSMB before buying it, so you’ll see if it fits your project.

Motion blur is expensive because it effectively renders the shot more than once. Vector blur is a cheap and usually very effective alternative: it is computed as a post-process from per-pixel velocity (vector) information.
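A toy one-dimensional sketch of the idea (purely illustrative, not Blender’s actual algorithm): instead of rendering several time samples, each pixel’s value is smeared along its stored velocity:

```python
# Toy 1-D illustration of vector blur: smear each pixel's value along its
# per-pixel velocity instead of re-rendering multiple time samples.
def vector_blur_1d(image, velocity, samples=4):
    """image: list of floats; velocity: pixels moved during the shutter."""
    out = [0.0] * len(image)
    for x, value in enumerate(image):
        share = value / samples
        for s in range(samples):
            # distribute the pixel's energy along its motion path
            t = s / (samples - 1) if samples > 1 else 0.0
            xi = round(x + velocity[x] * t)
            if 0 <= xi < len(out):
                out[xi] += share
    return out

# A single bright pixel moving 3 pixels to the right gets streaked.
image    = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
velocity = [0.0, 0.0, 3.0, 0.0, 0.0, 0.0]
print(vector_blur_1d(image, velocity))
```

The cost is a handful of additions per pixel rather than whole extra render passes, which is why vector blur in the compositor takes seconds while true motion blur multiplied the render time here.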

Also, sometimes simple blur works just fine, too! “If it doesn’t draw my eye, go with it.”

IMHO you should do all renders to MultiLayer OpenEXR, except the “distribution prints,” because this is a loss-less digital format that is specifically designed to capture all of the numeric outputs of the render engine. (The format was created by Industrial Light & Magic, and the “multi-layer” enhancement was introduced by … Blender.) The file is “a file of floating-point numbers.”

When you have your final frames (as possibly not-multilayer OpenEXR), now you busy yourself producing movie-files, JPGs, PNGs and so on from them. This is the first and only point where you introduce “lossy compression,” “gamma,” and encoding, to suit the needs of each distribution-file that you need to make, all drawing from a single master OpenEXR source that has none. Render to a higher resolution than you need for any final output, so that compression and resampling always scale down.

Sozap, where do I find this plugin you mentioned?

Thanks everyone…some very good information here, and now I know the reasons behind using EXR. So sundialsvc4, are you saying that you should render out to multilayer EXR, then import the sequence, do whatever you need in compositing, and export again as a PNG sequence to make the final version? I mention a PNG sequence since this is usually recommended for making the final version (not Blender’s movie export, since it is not that efficient).

I am now exporting multilayer EXR and that has cut the render time down by half. Also, what settings are important when rendering multilayer EXR?

RealSmartMotionBlur :
Natron :

Maybe you should first try what Blender’s vector blur can do, and if that doesn’t work well, have a look into this. It’s worth the $90 license and is a great asset in a professional toolkit.

About OpenEXR multilayer: it’s pretty straightforward. You can activate the half-float option, which saves images as 16-bit floats instead of 32-bit floats. They are faster to read and take less space on disk, at the cost of slightly less accuracy that you won’t notice at all unless you’re doing really heavy compositing.
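You can see the trade-off with plain Python, which supports IEEE half-precision floats in the `struct` module (the 0.2 sample value is arbitrary, just a typical linear-light pixel value):

```python
# Comparing 16-bit ("half") and 32-bit float storage, as used by OpenEXR.
# Python's struct module supports IEEE half floats via the 'e' format.
import struct

value = 0.2                              # an arbitrary pixel value

half_bytes = struct.pack("<e", value)    # 2 bytes per channel
full_bytes = struct.pack("<f", value)    # 4 bytes per channel

half_roundtrip = struct.unpack("<e", half_bytes)[0]
full_roundtrip = struct.unpack("<f", full_bytes)[0]

print(len(half_bytes), len(full_bytes))  # storage per channel value
print(abs(half_roundtrip - value))       # half-float rounding error
print(abs(full_roundtrip - value))       # 32-bit rounding error
```

Half float halves the file size, and the rounding error stays around the fourth decimal place, far below anything visible unless you push the values hard in compositing.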

As much as I hate After Effects, it does have a pretty good motion blur tool that you can just apply to your final non-motion-blurred render from Blender.

I guess it all depends on how far one wants to take the process, balanced against the amount of time you have to render. Seems like there are many ways to do this… the EXR method seems to be the ultimate one, depending on the needs of the project. The plugin looks really nice, too. I wonder if FCP has an equivalent; I haven’t checked yet.

Ahhh, if only Blender had Optical Flow technology…

Optical flow is by definition never better than vector-based blur. Vectors are precise and calculated by the render engine; optical flow tries to deduce those exact same vectors from the footage, and in quite a lot of cases it fails miserably.

MultiLayer OpenEXR is basically a bunch of individual numeric data-sets (“layers”), all bundled together into one file. This is the digital data … all of it … that was produced by the rendering process. The file format is “loss-less,” with only moderate efforts at file-size reduction. (And, who really cares anymore how big the files are?) It isn’t intended to be “an image.” It’s intended to be an intermediate file in a digital pipeline.

It was a “great improvement” that the Blender(!) team cooked up, and it has since been widely adopted.

OpenEXR is the original, non-layered format first conceived by Industrial Light & Magic, which contains only one layer of information but which once again is intended to be an intermediate data set.

In the case at bar, “vector data” can be one of the layers that you capture. Once you’ve got it, you’ve got it, and you only pay the price once. I frankly recommend that you capture all of the data that you might possibly need. Basically, I capture “everything.” (P.S. When I “re-render” something, I keep the old outputs too. Data is a terrible thing to waste.)

Thanks sundialsvc4 for explaining EXR …makes a lot of sense especially when explaining it as a DI. I will use this in my animations from here on out.

The problem with all “image” files is not only that they don’t capture a lot of the data at all, but even the data which they do capture has been processed to be “easily displayed on cheap hardware.” :slight_smile: For instance, it can be lossy-compressed, interlaced, numerically re-mapped, gamma corrected and more. It doesn’t know how to be whiter than white or blacker than black. This isn’t what you want … yet. At every point in your production pipeline except the very last, “this is not ‘an image’ … this is ‘expensive, hard-won data.’” (“I spent a week to get this, and heated my entire house with the residual heat from the poor chips inside my machine… Don’t make me spend two.”)
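A tiny illustration of that clipping problem, using a simplified plain-2.2 gamma rather than the true sRGB transfer curve: any linear value above 1.0 is destroyed on the way into 8 bits, while a float EXR keeps it.

```python
# Why 8-bit "image" formats lose data: values are gamma-encoded and then
# clamped to [0, 255], so anything brighter than 1.0 ("whiter than white")
# is gone forever. Simplified 2.2 gamma stands in for the sRGB curve.
def to_8bit(linear):
    encoded = max(0.0, linear) ** (1 / 2.2)    # gamma encode
    return min(255, round(encoded * 255))      # quantize and clamp

def from_8bit(code):
    return (code / 255) ** 2.2                 # decode back to linear

for linear in (0.18, 1.0, 4.0):  # mid grey, white, a bright highlight
    recovered = from_8bit(to_8bit(linear))
    print(f"{linear:>5} -> stored {to_8bit(linear):>3} -> recovered {recovered:.3f}")
```

The highlight at 4.0 comes back as exactly 1.0: once clamped, no amount of downstream grading can recover it, which is precisely why the intermediate files should stay in float.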

Only when you are printing “distribution files,” in whatever format the customer needs in order to “watch it on his gear,” do you produce movie-files, image-files and so forth, making individual adjustments to suit each target. But all of these are produced from the same (OpenEXR) master data-sets which were “your final-cut movie.” (Never from each other.) These are the only points where correction, compression, and so on ever occur, and artifacts have no chance to accumulate since the (same-) source data is pristine.

It would be so nice if this were explained in more Blender courses, both in books and online. So many go straight from modeling to animation and rendering without even mentioning OpenEXR.

A million times this. How I wish I could convince my working partner to switch to EXR. Unfortunately we work mostly in After Effects, and AE is not a friend of EXRs.

And this applies to all 3D/compositing work. EXRs all the way, with JPG proxies to speed up the workflow.

Of course it is somewhat understandable that After Effects would not be prepared to handle “another application’s data files,” even if those files were in a file-format that it could “read.” Because, even if it could “read” it, it could not presume to “understand.”

So, if your down-stream workflow will involve an image-processing program such as AE, you will of course need to prepare for it something that it can actually use.

But, from a Blender workflow perspective, I suggest that this should be “an explicit, final, data-conversion step.” (And, therefore, one that you can of course “tweak” without re-rendering anything.)

The input to this process is the final OpenEXR files which contain “Blender data.” Meanwhile, the process(es) which now undertake to prepare output “deliverable file(s)” from these data are cleanly separate, and involve no significant computation.

  1. “The render outputs, themselves” are the things that you paid umpteen hours for. Therefore: “first, exactly-capture those!” [MultiLayer] OpenEXR can reliably do just that. (Whew!!) They might well have some weirdnesses :roll_eyes: that are entirely unique to your project, but “each of your step-2 processes know how to take care of this.”

  2. With this accomplished and “safely in the can” (and backed-up fifty thousand times) … hey, “the rest of it is easy, and free.” Simply a new blend-file that converts the OpenEXR data into “whatever deliverable files AE wants.” (What? You need more than one deliverable? Sure thing!)

You can endlessly repeat “Step 2” because “Step 1” has been precisely captured.

(And, for what it’s worth, I usually inject a “step 1(a),” where the MultiLayer final outputs are formally committed to a non-MultiLayer form. Which gives me yet another place to tweak.)