Why does motion blur do this?

Oh ok, I thought you were comparing both methods.

I don’t understand how the same problem would happen in reverse, though.

(Using the negative shutter offset) If the ball is at point A on frame 1, at point B on frame 2, and still at point B on frame 3, this is what would happen, right?

Frame 1: Shutter opens before frame 1, closes at 1 – no blur
Frame 2: Shutter opens before 2, closes at 2 – blurs to point B
Frame 3: Shutter opens before 3, closes at 3 – no blur

There’s no reverse problem. Right? And just so I know I’m clear on what Blender is doing, this is what is happening now:

Frame 1: Shutter opens at frame 1, closes before 2 – blurs between point A and B
Frame 2: Shutter opens at 2, closes before 3 – no blur
Frame 3: Shutter opens at 3, closes before 4 – no blur
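
To make the two conventions concrete, here’s a minimal sketch in plain Python (not the Blender API; the Bf of 1.0 and the frame numbers are just illustrative) that prints the shutter open/close window each convention would use:

def shutter_interval(frame, bf, negative_offset=False):
    """Return (open, close) in frame units for one rendered frame."""
    if negative_offset:
        # proposed behaviour: the exposure ends exactly at the frame
        return (frame - bf, frame)
    # current Blender behaviour: the exposure starts exactly at the frame
    return (frame, frame + bf)

for f in (1, 2, 3):
    print "frame %d  positive: %s  negative: %s" % (
        f, shutter_interval(f, 1.0), shutter_interval(f, 1.0, True))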

There is a problem if you have a situation like you have now, where things are moving in real life at sub-frame intervals.

Check the attached image - the top IPO curve shows what’s happening in the real world in this hypothetical situation, while the bottom curve shows what happens if the keyframes are only on integer frames, as would probably happen after importing tracked data. In the real world, the table is moving and comes to rest between frames 1 and 2.

The green area represents a positive blur offset, and the red area represents a negative blur offset. Notice how in this situation, with a negative (red) blur offset, frame 2 gets blurred even though the object in real life should be stationary. The positive (green) offset performs as it should, not blurring frame 2.

Yes this is a somewhat contrived situation, but it shows that the offset is kinda arbitrary if you don’t have accurate information about where things are at what time, and that offsetting forward isn’t necessarily incorrect or correct, it just follows convention.
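
For what it’s worth, here’s a minimal plain-Python sketch of that hypothetical (the keyframe values are invented, and this isn’t the Blender API). It linearly interpolates integer-frame keyframes, the way imported tracked data would sit, and measures how far the keyframed curve travels during each convention’s exposure for frame 2:

keys = {1: 0.0, 2: 10.0, 3: 10.0}   # made-up positions keyed on integer frames

def lerp_position(t):
    # linear interpolation between the integer keyframes
    f0 = int(t)
    f1 = min(f0 + 1, 3)
    return keys[f0] + (keys[f1] - keys[f0]) * (t - f0)

def travel_during_exposure(frame, bf, negative_offset):
    # distance the keyframed curve moves while the "shutter" is open
    t_open = frame - bf if negative_offset else frame
    t_close = frame if negative_offset else frame + bf
    return abs(lerp_position(t_close) - lerp_position(t_open))

print "frame 2, negative offset:", travel_during_exposure(2, 1.0, True)    # 10.0 -> blurred
print "frame 2, positive offset:", travel_during_exposure(2, 1.0, False)   # 0.0  -> sharp

With only integer keyframes, the interpolated curve keeps moving across the whole 1-2 interval, so the negative-offset exposure for frame 2 picks up motion that the real table no longer has.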

Attachment: IPO curve image comparing the positive (green) and negative (red) blur offsets.


Hmm. Interesting :slight_smile:

I think I’ve got it now. So what’s happening is, on the video, there’s a very slight blur on frame 2 (because the shutter opened, then the camera stopped moving, then the shutter closed) but Blender would create a “thicker” blur since it thinks it moved through the entire shutter period.

Then you’re right, it would come down to personal preference. Neither way seems perfect. But, just based on theory and not experience, I would much rather use the negative offset method so that even if the blur is slightly more prominent than it should be at times, I’d at least know that the object is sitting where it should.

Can you tell me if there’s an advantage to using the positive blur offset over the negative one? It seems to me that the negative offset would create less prominent problems, but again, I’m not speaking from experience.

If you really want to get technical about it then motion blur should only render frames from the current point in time forward. That’s not how it happens in your eye but it is how it happens in a camera. After Effects does this but the result still looks pretty much the same. The only real problem that I ever encounter with this is when position keyframes start or stop motion at the beginning or end frames of an animation (that’s easy enough to fix though). Here’s an AE video, 1 second long, 3 clockwise rotations, 180 degree shutter angle, 134 KB download, XviD codec. AE is a dedicated compositing app with an advanced 3D camera (no real 3D capabilities other than that though) but there’s still not much difference in the final outcome:

http://uploader.polorix.net//files/89/Comp%201.avi

I guess the real problem lies in Blender’s camera which is probably the weakest point in Blender. It has some really cool features but it lacks some real basics too. There was some discussion several months back about giving it an upgrade but I have no idea if anyone actually started working on it.

This year’s Blender conference: “Case studies and presentations give the audience a chance to engage with applications of Blender in a variety of fields. For this year’s conference, we again specifically look for case studies that address the theme of “Professional uses of Blender”, and specifically examples of animation and the integration of Blender in the movie or game studio pipeline. How is Blender being used and developed in different business areas?”

http://www.blender.org/community/blender-conference/call-for-participation/

I’m pretty sure the camera will be brought up but it will still need someone to work on it if it isn’t already in the works.

I see what you’re saying. It just makes more sense to me that, since this is the digital world, the shutter would close at the current frame, not open. What would be ideal for me is to be able to slide the Bf value down into the negatives. Maybe I could make a feature request for that? Vector blur is a lot faster though so I’d probably want to use that instead… but it does blurring in both directions (past and future), which if I recall is how 3dsmax does it. I don’t know if 3dsmax lets you customize the blur more though.

That’s interesting about the conference. It’s amazing how much dedication there is for Blender’s development :slight_smile:

That’s what AE’s shutter angle is for. It will hold open for a max of 720 degrees which is insanely blurred. I hope we get something better soon but I have no problem living with what Blender already has. Vector blur has some serious limitations compared to motion blur, especially where curved paths and z-space are involved (the vectors will always follow straight lines and completely disrespect z-space if the object crosses its own path) but you can get a lot more samples a whole lot quicker. More samples is much prettier in my opinion.
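
Just for reference, a shutter angle converts to a frame fraction (roughly what Blender’s Bf expresses for its own motion blur) and to a real exposure time very simply; a small plain-Python sketch, with 24 fps picked arbitrarily as the example rate:

def shutter_angle_to_frame_fraction(angle_degrees):
    # 360 degrees of shutter angle = one full frame of exposure
    return angle_degrees / 360.0

def exposure_seconds(angle_degrees, fps):
    return shutter_angle_to_frame_fraction(angle_degrees) / fps

print shutter_angle_to_frame_fraction(180), exposure_seconds(180, 24.0)   # 0.5 frames, 1/48 s
print shutter_angle_to_frame_fraction(720), exposure_seconds(720, 24.0)   # 2.0 frames, 1/12 s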

You might be interested in this too; who knows if or when it will happen though:

http://wiki.blender.org/index.php/BlenderDev/ReconstructionAndMotionTracking

I haven’t read everything (which is not wise of course), but motion blur does behave like that. Read “Digital: Lighting and Rendering”. A motion blurred object doesn’t reveal direction.

----> OK off to read the whole thread now.

toontje: I think you missed my point. I never said motion blur should reveal a direction. You might want to read whole threads :wink: What I was saying was that if you knew the direction already, and looked at a motion blurred image of a ball moving from left to right, you’d know that the ball’s last position while the shutter was open was the farthest right position of the blur (even though that isn’t the clearest point in the blur). What I don’t like about Blender’s motion blur is that it opens the shutter at the beginning of the frame, therefore it physically moves the object forward in time on that frame. It creates a disaster when you’re compositing a CG element onto a real video, as the object is no longer where it should be on a specific frame. The object slides around before the real video does.

RamboBaby: That would be freakin sweet if motion tracking was built into Blender. That’s exciting :smiley:

Both the problem I’m describing and the one Broken was describing would still exist, though. Well, the one Broken was talking about would be fixed if the tracker could track the motion blurs on the video to find the camera’s movement in between frames. But that seems insane… I don’t even know if that’s possible.

The blur leading the object’s location would only be fixed though if you could use negative blur offset.

Edit: Broken, this made me realize that the problem you’re describing is something entirely different from what I was talking about. The problem you’re describing does happen in both positive and negative offsets in different situations, but the problem I’m describing will only happen in positive offset.

At the risk of jumping in after only having read this thread in its entirety twice, may I say that there is no concept of shutter speed in Blender. If your frame rate is 30 fps, then Blender calculates an image blur based on distances traveled for 1/30 of a second. The shutter “opens” at the beginning of the frame and closes after 1/30th of a second. There is no option to set the shutter speed at, say, 1/60th or 1/120th, as you have to with real cameras because of film speed and lighting. With a real camera with fast film in bright light moving real fast, you would get the jumping that Broken describes. In Blender, you will get a schmear.

So, if object is at position X at the beginning of frame 1, the exposure slide/pic/image for frame 1 will be of the object starting at position X and everywhere else it was during that exposure; for the next 1/30th of a second, until the shutter closes. So, in that sense, Blender predicts where the object will be, in order to calculate the schmear which would be exposed during that shutter open time. If you render frame 1 by itself with no blur, Blender will render the image of the object at the start of the frame.
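
A minimal plain-Python sketch of that timing (the 30 fps figure comes from the example above, Bf defaults to a full frame, and frame 1 is assumed to sit at t = 0):

def exposure_window(frame, fps=30.0, bf=1.0):
    # forward sampling: the exposure starts AT the frame and runs bf/fps seconds
    open_time = (frame - 1) / fps    # assumes frame 1 starts at t = 0 seconds
    close_time = open_time + bf / fps
    return (open_time, close_time)

print exposure_window(1)   # (0.0, 0.0333...): frame 1 exposes the next 1/30th of a second
print exposure_window(2)   # (0.0333..., 0.0666...)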

Hope this helps.

I know there’s no real shutter function in Blender, we’re really just using that as an easy way to describe the function of motion blur.

“So, if object is at position X at the beginning of frame 1, the exposure slide/pic/image for frame 1 will be of the object starting at position X and everywhere else it was during that exposure; for the next 1/30th of a second, until the shutter closes”

This is where my beef is. In my opinion, it makes more sense to have the “shutter” open before the ball reaches position X, and close exactly when it reaches X. That way the ball is still on position X on that frame, instead of somewhere between position X and its next position. In other words, I think the 1/30th of a second exposure should come before the current frame, not after.

Edit: On a side note, I thought you could actually change the shutter speed. Isn’t that what the Bf value does? A value of .5 renders half a frame ahead, a value of 1 renders a full frame ahead, etc…

How is that so? It’s still just a matter of deciding if you want to have the object’s starting or ending position ‘accurate’.

What I was saying was that if you knew the direction already, and looked at a motion blurred image of a ball moving from left to right, you’d know that the ball’s last position while the shutter was open was the farthest right position of the blur

But what about the ball’s initial position? It seems that you’re concentrating so much on knowing where this object is at the end of the frame interval, but completely neglecting where it is at the start of the frame interval, which is just as important.

the object is no longer where it should be on a specific frame.

Again this is confusing the issue - you mean to say “the object is no longer where it should be at the end of a specific frame’s exposure.” Where it is at the start of that exposure is important too, and that’s what Blender happens to use.

Edit: On a side note, I thought you could actually change the shutter speed. Isn’t that what the Bf value does? A value of .5 renders half a frame ahead, a value of 1 renders a full frame ahead, etc…

That’s correct. Papasmurf I think forgot about this one - it’s only for the camera-motion blur, not the vector blur node.

BTW: I personally wouldn’t be against the idea of having this offset allowed to be negative, as an option. I have a feeling it wouldn’t be easy for someone to code though.

In any case, your question of ‘why does Blender act like this’ has been answered, and hopefully you understand it now. I don’t think arguing that your personal preference of offsetting backwards is superior in this particular situation is really going to change anything much…

The object’s position in the beginning of the shutter period would not matter. In the case of individual frames, the only difference the beginning of the shutter period makes is that it will extend/shorten the motion blur, which would be easily customizable through the “Bf” value. The end of the shutter period does matter because as it changes, the object’s actual position in the world changes.

Again, you’re mixing things up. There is no actual position in the world, there’s a range of actual positions in the world.

The position of an object at the start of its blur-streak can be just as important as its position at the end of its blur streak. In your hypothetical situation, if you’re talking about extending the shutter period back in time, it changes the object’s location at the start of the blur just as much as it changes the location at the end of the blur currently. It’s only on a case by case basis that this may be preferred or not.

The only importance that the beginning of the shutter serves is the blur’s length. Is there a specific scenario where the negative offset would create a problem?

hey sorry to jump into this discussion so late in the game, but i found it interesting. i did a brief amount of poking around in blender, and i’m pretty sure a simple python script can be written that would give you the flexibility to set where blender’s instantaneous “frame” moment corresponds to the integrated period of time that a real world shutter would be open.

blender’s time can be remapped, so all you need to do is expand the time of each frame so that you can navigate to the precise point you want to be your “shutter opening” moment. with python i’d imagine it’d be easy to automate this for animations.

the only limitation i can think of would be for things like fluids, where discrete geometry is created for each frame, so there wouldn’t be any intra-frame movement.

i think this thread is officially dead, but should some hapless blenderhead ever happen upon it, and share JasonBob45’s desire for a negative offset motion blur, i present my hacked together solution:


# A NON-DESTRUCTIVE METHOD FOR RENDERING MOTION BLUR
# WITH NEGATIVE SHUTTER OFFSET
# By Steve Miller
#
# KNOWN LIMITATIONS:
# - animation of fluids
# - blur factor (shutter speed) is taken from render
#   settings, not vector blur node
# - blur factor should be a number that divides evenly
#   into 1 (e.g. 0.5, 0.25, 0.2, 0.1)

import Blender
from Blender import Scene

scn = Scene.GetCurrent()
context = scn.getRenderingContext()

# store the initial render settings so they can be restored afterwards
resetFrame = context.currentFrame()
Bf = context.motionBlurLevel()

# change values for negative shutter offset: expand time so each original
# frame spans 'remap' remapped frames, then use a full-frame blur factor
# (one remapped frame of blur equals Bf of original time)
remap = int(1/Bf)
context.oldMapValue(1)
context.newMapValue(remap)

context.motionBlurLevel(1)

# render frames: start each exposure one remapped frame early so the
# blur ends exactly at the original frame (the negative shutter offset)
for frame in range(context.startFrame(), context.endFrame()+1):
    context.currentFrame(frame*remap - 1)
    context.render()

    # name the output by the original frame number
    outputFile = "%04d.jpg" % frame
    context.saveRenderedImage(outputFile)

# reset values
context.motionBlurLevel(Bf)
context.currentFrame(resetFrame)
context.oldMapValue(1)
context.newMapValue(1)

EXAMPLE:

A) original 2 frame animation:
http://img.photobucket.com/albums/v422/shteeve/a.jpg
B) default rendering with motion blur (forward sampling):
http://img.photobucket.com/albums/v422/shteeve/b.jpg
C) simply run this script instead of pressing the ANIM button for reverse sampling:
http://img.photobucket.com/albums/v422/shteeve/c.jpg