[Updated July 19 2019] Eevee real object based Motion Blur v0.4.3

:warning: Attention: a recent change in the Python API broke previous versions of the addon. It's already fixed; download v0.4.3.

The API change broke the automatic Compositor setup, so if you had problems with that, try again.
You should start with no Viewer node at all, or with the Viewer node attached to what you intend to render, same as before. The addon takes care of the rest.
Sorry for the inconvenience.

[latest version: 0.4, see below for changes]

[ warning for 0.31.3:

I changed the gamma from inverse to direct, so if you used the former default of 2.2 you should change that to 0.454545 (i.e. 1/2.2); if you used another gamma, change it to 1/old_gamma. Sorry for the inconvenience; we are trying to solve a slight difference in color between F12 renders and MB ]
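In other words (just an illustration of the relationship, not the addon's code): the setting used to be applied as the reciprocal exponent; now it is used directly, so the equivalent value is its reciprocal:

```python
old_gamma = 2.2            # former default, applied internally as 1/gamma
new_gamma = 1 / old_gamma  # enter this directly now: 0.454545...
```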


(scroll down for updates)
I made an addon to implement real Motion Blur in Eevee.
At this time the built-in motion blur option in Eevee works only for camera movements, and only if the camera object itself moves; it ignores the movement if the camera is parented or rigged. That renders Eevee pretty useless for professional work.
But that's a pity, because Eevee is fast and yields very good-looking results, so it's tempting to use it for real animated renders. So I came across a sort of workaround that works pretty well, and wrapped it into an addon that handles the whole process seamlessly and in one click.
Just install the addon and look for the panel under render properties called Forced Eevee motion blur.
You have one button to render stills and another to render image sequences. It outputs EXRs only at the moment.
The effect is achieved using a very brute-force approach: rendering one subframe for each motion blur sample. So it supports everything that Eevee supports (objects, particles, shadows, lights, volumetrics, etc.), as long as the animation is at least linearly interpolated and not stepped.
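For the curious, the core idea can be sketched in a few lines of Python. This is a simplified illustration, not the addon's actual code; it assumes a Viewer node is connected so the composited result can be read back from `bpy.data.images['Viewer Node']`:

```python
import numpy as np
import bpy

def render_blurred_frame(frame, samples, shutter):
    """Render one subframe per sample across the shutter interval and average them."""
    scene = bpy.context.scene
    accum = None
    for i in range(samples):
        # spread the samples over the fraction of the frame the shutter is open
        scene.frame_set(frame, subframe=shutter * i / max(samples - 1, 1))
        bpy.ops.render.render()  # the compositor fills the 'Viewer Node' image
        pixels = np.array(bpy.data.images['Viewer Node'].pixels[:])
        accum = pixels if accum is None else accum + pixels
    return accum / samples  # averaged RGBA buffer, ready to save as an EXR
```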

This is in beta stage, just released, so be patient with some bugs/annoyances/unfinished features you may encounter.
You can download it from Github.
Enjoy.

The issues you WILL encounter, today, are:

  • You may be able to cancel by hitting CTRL+C in the console a few times.
  • The effect relies on a Viewer node connected to the same output as the Composite node. The addon adds the node all by itself, but first it checks whether a Viewer node already exists, just to avoid disrupting some fine node setup; in that case it will render whatever is connected to the existing Viewer (see the sketch below).
  • This may be a bug or a feature… :wink: this way you can motion blur any step of the compositor tree.
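As a rough illustration of that Viewer-node check (hypothetical code, not the addon's exact implementation):

```python
import bpy

def ensure_viewer_node(tree):
    # Reuse an existing Viewer node; otherwise create one wired to the
    # same socket that feeds the Composite node.
    viewer = next((n for n in tree.nodes if n.type == 'VIEWER'), None)
    if viewer is None:
        viewer = tree.nodes.new('CompositorNodeViewer')
        composite = next(n for n in tree.nodes if n.type == 'COMPOSITE')
        source = composite.inputs['Image'].links[0].from_socket
        tree.links.new(source, viewer.inputs['Image'])
    return viewer
```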

For the next releases I plan to add:

  • Adaptive subframe sampling: roughly estimate the amount of movement in each frame (measuring the maximum speed of the corners of the bounding boxes of each object, relative to the camera) and tune the amount of temporal subsampling, so frames with faster movements get more samples, and frames with little or no movement get little to no subsampling :wink: DONE!
  • Better integration with the UI (keyboard shortcut, CTRL+SHIFT+F12?, ability to show the render in the Image viewer automatically). See the keymap sketch after this list.
  • Ability to choose the output format
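For reference, registering such a shortcut would look roughly like this; the operator idname `render.eevee_mb_animation` is a placeholder, only the keymap mechanics are real:

```python
import bpy

addon_keymaps = []

def register_shortcut():
    kc = bpy.context.window_manager.keyconfigs.addon
    if kc:
        km = kc.keymaps.new(name='Screen', space_type='EMPTY')
        # 'render.eevee_mb_animation' is a hypothetical operator idname
        kmi = km.keymap_items.new('render.eevee_mb_animation',
                                  'F12', 'PRESS', ctrl=True, shift=True)
        addon_keymaps.append((km, kmi))
```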

[Update July 1 2019] 0.21 is out! Now supports alpha channel export.


[Update July 5 2019] New feature: Adaptive samples!

Calculates the number of samples for each frame based on the actual movement in that frame. Frames with lots of movement get more samples, and the ones with little to none may get no subframe samples at all, so the render resources are allocated more efficiently :+1:
There are minimum and maximum samples overrides, to avoid render-time-crushing peaks (I got over 500 samples per frame in some tests; especially when an object gets too close to the camera, the pixel speed can go up like crazy).
The minimum samples option is there to compensate for some special cases (like particles) whose speed isn't calculated (yet). Just in case.
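A rough sketch of the estimation idea (a hypothetical helper, not the addon's actual code; it assumes one sample per pixel of travel before clamping to the overrides):

```python
import math
import bpy
from mathutils import Vector
from bpy_extras.object_utils import world_to_camera_view

def estimate_samples(scene, frame, min_samples, max_samples):
    """Turn the max screen-space speed of bounding-box corners into a sample count."""
    cam = scene.camera
    res_x, res_y = scene.render.resolution_x, scene.render.resolution_y
    per_frame = []
    for f in (frame, frame + 1):
        scene.frame_set(f)
        corners = []
        for ob in scene.objects:
            if ob.type == 'MESH':
                for c in ob.bound_box:
                    # project each bounding-box corner to normalized camera space
                    corners.append(world_to_camera_view(
                        scene, cam, ob.matrix_world @ Vector(c)))
        per_frame.append(corners)
    max_px = 0.0
    for a, b in zip(*per_frame):
        max_px = max(max_px, math.hypot((a.x - b.x) * res_x,
                                        (a.y - b.y) * res_y))
    return max(min_samples, min(max_samples, int(math.ceil(max_px))))
```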

[Update July 15 2019] New –barely noticeable– update: 0.4.

  • Most changes occurred under the hood: code cleanup and some function optimizations; it should perform faster and overall the code is neater.
  • The main change is in the technique used for the subframe rendering itself, which now uses real subframes instead of the time-rescaling hack I used before. I wasn't able to make it work that way originally, hence the need for the hack; now I've realized I was simply making a silly mistake. I fixed it and rewrote the function :wink:
  • The only visible improvement is that if you manage to cancel the render, your timeline isn't screwed up anymore.
  • And the script gives a neater console output too.
  • The properties are now stored inside an object instead of directly under the scene. It's neater, less error-prone, and it's the official recommendation too (see the sketch below). It doesn't change anything on the user side, except that if you open a project created with an older version of the EMB, the settings (adaptive samples, etc.) may be reset to their default values.
    In time I'll fix everything I can.
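For anyone curious, the recommended pattern looks something like this (the property names here are hypothetical, not necessarily the addon's):

```python
import bpy

class EeveeMBSettings(bpy.types.PropertyGroup):
    samples: bpy.props.IntProperty(name="Samples", default=8, min=1)
    adaptive: bpy.props.BoolProperty(name="Adaptive sampling", default=True)

def register():
    bpy.utils.register_class(EeveeMBSettings)
    # a single pointer on the scene instead of many loose scene properties
    bpy.types.Scene.eevee_mb = bpy.props.PointerProperty(type=EeveeMBSettings)

def unregister():
    del bpy.types.Scene.eevee_mb
    bpy.utils.unregister_class(EeveeMBSettings)
```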

Tell me what you think/if it works ok for you.


Issues/caveats:

  • Particles are NOT taken into account yet. I'll try to solve that; in the meantime, if there are particles in your render, set the minimum samples to, say, 4 or 8 and you should be covered.
  • Texture animations are NOT taken into account either. If you have fast-moving textures –or anything animated that's not an object– set the minimum samples accordingly or turn off adaptive sampling.
  • The movement is calculated using just a couple of vertices of the bounding box, for speed, so the estimate is very approximate. Speed readings may not always be accurate, and may be ridiculously high when objects cross the camera plane (see the note below). That's what the maximum samples are for.
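For intuition on that last point: under perspective projection, a point at depth $z$ in camera space lands on screen at roughly

$$u = f\,\frac{x}{z}, \qquad \dot{u} = f\,\frac{\dot{x}\,z - x\,\dot{z}}{z^{2}},$$

so as an object approaches the camera plane ($z \to 0$) the projected speed grows without bound, even for modest world-space motion. That's why the estimate needs a hard ceiling.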


Is it? No way :scream:
It seems crazy to me; it might be a bug they are working on, don't you think?

See you :slight_smile: ++
Tricotou

It's been that way for some time, with no plans to fix it yet. It will get fixed at some point, I guess.
It’s not a bug, it’s a non-present-feature.
The official motion blur in Eevee is in fact a linear blur applied in post, based on camera movement, so it can't isolate objects or anything like that. It gives some funky results when BOTH the camera and objects are moving fast: the blurs follow the camera movement, but not the objects, so they end up at weird angles.
See this video –not by me–

Oh, I just noticed that in Eevee you cannot compute the vector pass :thinking:
With the vector pass, combined with the Z pass, feeding the Vector Blur node, I had very good results in Cycles without using any motion blur at render time. But indeed, if it's not available in Eevee, it's a big problem.
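For reference, that Cycles setup can be built in a few lines (a sketch assuming Blender 2.8 defaults; the Vector pass toggle simply isn't available in Eevee):

```python
import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True
scene.view_layers[0].use_pass_vector = True  # Cycles only, missing in Eevee
scene.use_nodes = True
tree = scene.node_tree

layers = tree.nodes.new('CompositorNodeRLayers')
blur = tree.nodes.new('CompositorNodeVecBlur')
composite = tree.nodes.new('CompositorNodeComposite')
tree.links.new(layers.outputs['Image'], blur.inputs['Image'])
tree.links.new(layers.outputs['Depth'], blur.inputs['Z'])
tree.links.new(layers.outputs['Vector'], blur.inputs['Speed'])
tree.links.new(blur.outputs['Image'], composite.inputs['Image'])
```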


Quite strange, by the way :thinking:. For me, computing the vector pass is not ray-tracing related; it could totally be done in an OpenGL-based renderer like Eevee… I hope it will be available soon! :stuck_out_tongue:

See you :slight_smile: ++
Tricotou

Exactly. It has been discussed for a while. No real MB and no vector pass either.
Until very recently even the compositor itself was unavailable for Eevee.
But now we are in feature lock until 2.81, so it will be a while before we get those.
That was what motivated me to look for this workaround :wink:
See you!

Hi Pablo,
great project.

Unfortunately nothing happens for me, and when I connect the Composite output to the Viewer node, Blender crashes.
The saved EXR is black.
Blender from today - bbb3500c9716-win64

Eevee-Blur-Test.blend (1.5 MB)

Hi, thanks for your interest.
Maybe a bug in your build? Are you using the latest build?
You had “use alpha” checked, which unfortunately is not supported in this first beta; maybe that's it. I'll run some tests. Thanks for reporting!
Try deleting the Viewer node so the script creates its own; that worked for me:

I just can’t get it to work.
I’ve tried your suggestions… no change.
Load Factory settings… no change

I'll try a new build tomorrow.

Are you able to simply render (F12) that file? Maybe it's an issue with your build/system or even drivers… I'll run a few more tests on your file.

Send me the output from the console when it crashes, if you can, so I can see what exactly crashed.

Hi, Pablo,
it works (even with Ease-in/Ease-out) with a fresh build without any settings or plugins from me.
Great work of yours!
Now I only have to find out which setting or which plugin interferes with my configuration.

I found a little bug.
If the shutter is >= 9 an error message appears and the animation jumps back to frame 7.
(In relation to this scene [updated july 19 2019] Eevee real object based Motion Blur v0.4.3 - #6 by NewVisitor)

File "E:\Blender-28\2.80\scripts\addons\eevee_motion_blur.py", line 246, in execute renderMBx1fr(frame, shutter_mult, samples)
File "E:\Blender-28\2.80\scripts\addons\eevee_motion_blur.py", line 206, in renderMBx1fr bpy.context.scene.render.fps /= fr_multiplier ZeroDivisionError: division by zero

Thank you for your report!
You mean 9 or 0.9?
Technically it should never be above 1, because it represents the fraction of the frame during which the shutter remains open. Values above that may be used to achieve trippy effects, but they are physically impossible, so they're very likely to break the equations. I'll check out how to solve it.
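To make the semantics concrete: the exposure time is the shutter fraction divided by the frame rate, e.g. at 24 fps

$$t_{\text{exposure}} = \frac{\text{shutter}}{\text{fps}} = \frac{0.5}{24} \approx 20.8\ \text{ms},$$

the classic 180° shutter. A shutter of 9 would mean the shutter stays open for nine full frames, which a per-frame subframe sweep can't represent directly.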

Hi Pablo,
I mean 9 or above, and I know that it's physically incorrect, but with it I can make nice airflows or streams.


I understand. I don't think it'll be possible/practical with this technique (remember that it's a workaround for a missing feature). The algorithm assumed a maximum shutter of 1, and at values like 10 it causes a division by zero. For now I've limited the shutter to 1 (values above get clamped), to avoid errors and crashes.
I will study the special case to see if it's feasible to support it; it's true that the effect may be useful in some scenarios.
Thank you!

It's great that you programmed this workaround for a long-missing feature in Eevee.
I wish you continued success with this project and look forward to the next version.

New version 0.31 is up! I've edited the main post.

Hi Pablo,
works great.

A small request.
If you check Adaptive sampling, please ghost (gray out) the Samples field.
And vice versa: if you uncheck Adaptive sampling, ghost the min/max samples.

Any chance to convert the OpenEXR images to PNG before saving?
This would save a lot of hard disk space, especially for long animations.


You're right, those are usability things that need to be addressed.
As for the PNG output, it's true. What I want to do is have the addon take whatever output format is set in the render settings, but so far I haven't found a way to output video from Python (with the processed images, not the regular output). If I can't find a way, I can make it respect the image formats and default to EXR for the video formats.
Thanks for your feedback!
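In the meantime, the saved EXRs can be batch-converted to PNG from Blender's Python console with something like this (a sketch; the output folder is hypothetical):

```python
import glob
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
folder = bpy.path.abspath('//renders/')  # hypothetical output folder
for path in glob.glob(folder + '*.exr'):
    img = bpy.data.images.load(path)
    img.save_render(path.replace('.exr', '.png'), scene=scene)
    bpy.data.images.remove(img)  # free the image between frames
```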

Done! 0.31.1 is ready. Just this small fix :wink:

I think this could be done the same way Eevee calculates soft shadows. From what I understand, multiple shadows are calculated from slightly different angles and then merged and blurred together. Maybe the same could be done for object motion blur?