Well, if you’re using vector blur, it calculates vectors from the previous and next frames. It needs this to make the blur realistic… it needs to know where the object is going so it can blur it in that direction ;).
If you’re using the mblur button, it just renders a whole bunch of frames before and ahead of the current frame and mixes them (which takes a lot more time, as you might imagine).
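Roughly speaking, that kind of accumulation blur boils down to something like this (just an illustration, not Blender’s actual code; render_at is a made-up stand-in for rendering the scene at an arbitrary sub-frame time):

```python
import numpy as np

def accumulation_blur(render_at, frame, samples=8, shutter=1.0):
    # render_at(t) is a hypothetical stand-in that returns the image
    # (as a float array) rendered at sub-frame time t.
    times = [frame + shutter * i / samples for i in range(samples)]
    # Average the sub-frame renders: a box filter over [frame, frame + shutter).
    return sum(render_at(t) for t in times) / samples

# Dummy "render": a single bright pixel at x = 10 * t on a 32-pixel strip.
def render_at(t):
    img = np.zeros(32)
    img[int(round(10 * t)) % 32] = 1.0
    return img

blurred = accumulation_blur(render_at, frame=2)  # smears forward from x = 20
```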
Yes that’s what I’m doing, but motion blur doesn’t work that way. If you snap a picture of something that’s moving from the left to the right, there will be a blur, but the LATEST instance of that object in the blur will be to the right. That photo does not tell the future, it doesn’t show what happened after that picture was taken.
Blender shouldn’t have to render frames from the future in order to make the blur look right. Here:
I made a square in photoshop, duplicated the layer 8 times, and nudged each one farther over to the right than the one before it, and got this as a result:
Then I used Blender’s method, where I started in the middle, duplicated it 4 times to the left, and then 4 times to the right, and got this as a result:
Notice how they’re exactly the same result, it’s just that Blender’s picture predicts the future while the other doesn’t. So unless there’s some technical handicap going on behind the scenes that I don’t know about… this just seems illogical to me.
O rly?
“Negative effects of motion blur
In televised sports, where conventional cameras expose pictures 25 or 30 times per second, motion blur can be inconvenient because it obscures the exact position of a projectile or athlete in slow motion.”
Off Wikipedia…
That’s a given, and it doesn’t change anything. Look at my picture examples above. The object farthest to the right won’t be CLEAR, but it will in fact be the latest position of the object. The middle of the blur will always be the part that you can see the best, but that’s because it’s where most of the instances of the object overlap, not because that’s its current position.
UPDATE: After some further testing with Blender’s motion blur, I’ve found that it’s even worse than I thought.
Here, I created an animation with a blue ball moving from the left to the right. I then rendered frame 5 with and without blur, then combined them in photoshop. Here’s the result:
The ball without motion blur is on the left of the blur. That means Blender ONLY renders frames from the future in order to get motion blur. It’s showing where the ball is going to go, not where it’s already been.
Sorry but this is a common misconception. Motion blur is an average of the total light received by the film/sensor during the time the shutter of the camera is opened. What Blender is doing is correct, motion blur does not leave cartoony ‘trails’, it is a constant smear.
No, I understand that fully well, but you’re not understanding what I’m saying.
Look at my last image, with the ball, and read that post again, carefully. I don’t want the blur to LOOK like that picture. I combined the images with and without blur – the blurred render of the same frame has physically moved my ball forward, farther than I want it, in that frame.
The problem with Blender’s blur can be described with a real camera’s shutter timing. What it’s currently doing is opening the camera’s shutter at the specific frame, and keeping it open for the specified number of increments until the next frame begins. What it SHOULD do, since this is the digital world and I want that ball to be EXACTLY where I placed it on that frame at that exact moment in time, is open the shutter before the current frame and close it EXACTLY at the end of that frame. THAT is how motion blur should be handled.
And think about this: try compositing a CG ball onto a real table in a shaky video taken on a handheld video camera. You use motion tracking software to track the video camera’s movement, then import that data into Blender. Place the ball on the table, then do a render. It looks great, except there’s no motion blur. So you turn it on, do another render, and all of a sudden, the CG ball appears to move BEFORE the video does. This is why Blender’s method is a problem – the ball is no longer at that exact physical place at the end of the shutter as it should be. It is instead somewhere farther ahead, at a position that belongs in between the current frame and the next one.
Do you understand what I’m saying now? All I’m saying is Blender should not render extra frames from ahead of the current frame, it should render extra frames from BEFORE the current frame.
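To make the two conventions concrete, here’s a tiny sketch of the sample times each approach would use (nothing Blender-specific, just illustrative numbers):

```python
# The two shutter conventions under discussion, expressed as the sub-frame
# times that would be sampled and averaged. 'shutter' is the fraction of a
# frame the shutter stays open; numbers are purely illustrative.
def forward_samples(frame, shutter=0.5, n=5):
    # What Blender appears to do: the shutter opens AT the frame and closes after it.
    return [frame + shutter * i / (n - 1) for i in range(n)]

def trailing_samples(frame, shutter=0.5, n=5):
    # What I'm asking for: the shutter opens BEFORE the frame and closes exactly on it.
    return [frame - shutter + shutter * i / (n - 1) for i in range(n)]

print(forward_samples(2))   # [2.0, 2.125, 2.25, 2.375, 2.5]
print(trailing_samples(2))  # [1.5, 1.625, 1.75, 1.875, 2.0]
```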
Ok, I’ve created some more example images. Read the words at the top of each image, then look at the picture.
Well, I understand what you are saying, and it appears to be the case, so try the vector blur? It gives great results much more quickly… but I don’t know whether it uses both past and future frames or only future ones.
What you seem to think is incorrect is actually the commonplace way of doing motion blur in most renderers. It takes the current frame and an offset, and integrates an average between them. You seem to want a negative offset. That’s fine if that’s what you’re into, but it’s nothing more than a matter of personal preference – of where you consider the infinitesimal ‘frame’ point in time to be during that average. Talking about where things should exactly be in space at an exact point in time, when referring to a motion blurred render, is meaningless – there are no single exact points, but a range.
Consider your example about a ball on a table in reverse, the camera is moving, and comes to rest pointed at the table. Doing the motion blur with a negative offset, blending from previous frames, would give just the same issue as you mention - i.e. “why is the ball blurred when it should be at rest on this frame!” - the argument applies either way.
I think your problem has much more to do with the interpretation of keyframes coming out of your tracker, how the tracker calculates keyframes, and whether it supports keys which are on sub-frame intervals.
What I’m trying to say can’t be summed up with “why is the ball blurred when it should be at rest on this frame!”. It’s more like “Why is the ball over here when I wanted it here?”
Ok, I’m going to try another example, using the CG/Video composite scenario. You take a video, and the video is only 2 frames long. In that video, the camera is being rotated to the left. In the first frame, there’s a table on the left, and in the second frame, the table is on the right. The first frame has no motion blur. On the second frame, we will use an imaginary set of 3 “instances” of motion blur: one instance of the table in the middle of the frame, another slightly farther to the right, and a final one at the far right of the frame – its most recent position, where it was when the shutter closed.
Now add a CG ball onto the table and turn on Blender’s motion blur. Frame 1 ends up having motion blur, taking the ball off the table and creating several instances of the ball in between where the table is on frame 1 and frame 2. Then on frame 2, there is no motion blur at all. The ball has physically jumped away from the location it needed to be in on frame 1 in order to make the motion tracking work.
Now do the same thing using the other method. Frame 1 shows no motion blur on the ball. Frame 2, however, has motion blur that matches up exactly with the composite video, because it’s taking only the object’s previous motion into account.
About this: “Talking about where things should exactly be in space at an exact point in time, when referring to a motion blurred render is meaningless - there are no single exact points, but a range.” There is an exact place where the object is – the final place it was when the shutter closed. The blur just shows the range of where the object came from. There is definitely an exact spot that the object is, however unclear it is in the image.
As for this being personal preference? I don’t understand how. It won’t change the appearance of the blur, it will only change how it works behind the scenes so that a CG composite can match up with the video. It’s not a problem with the tracking software. It’s tracking just fine, and the video looks great if I don’t add Blender’s motion blur.
I may have misunderstood your problem completely, and forgive me if this is a horrifyingly naive thing to say, but would offsetting the video you are compositing onto forward by one frame work at all?
I did try this once and it didn’t come out right, but I was in a hurry and I think I may have done something wrong.
It seems like if I move my frames forward by the same number of frames that the motion blur’s Bf setting is at, it might actually line up. I’ll have to test it out.
Sorry if I caused us to butt heads at blender.org, I really didn’t mean to come off in the way you described.
I know it can be confusing to look at in stills sometimes, but if you throw an image into Photoshop and add a motion blur filter to it, you’re going to get the same result, just with less control than you get in Blender. When you play the animation, everything looks just fine anyway. Who cares whether or not it’s correct as long as it looks good? Movies don’t look like reality anyway. If you want technical perfection you’re never going to find it, because it’s more a question of “technically perfect according to whose eyes?”
As far as trying to argue with Broken goes, check this out: 1. you’re getting into it with one of Blender’s developers, who is directly responsible for implementing the most realistic ray tracing effects available in Blender, and 2. he is also a professional artist, animator, and compositor. In other words he is the real deal, the likes of which most of us are never going to be.
If you consider something in Blender to be a limitation, or you just don’t like the implementation, why not try focusing on developing your own workaround? The point is that claiming something isn’t right isn’t going to change it. That’s what feature requests are for. This one isn’t likely to change any time soon, though I wish it were a bit more configurable. Since I can’t code what I want, I’m left having to mask my way around it (yep, I’m not too fond of the advanced frame calculations for all situations either, so I mask them out if I’m trying to achieve a particular effect… I’ve done this many times, but it’s not always possible without rotoscoping, especially with complex geometry).
Yes, the motion blur looks fine when your entire scene is done in pure CG. But as I’ve said many times already, the problem shows when you are trying to do a CG/video composite. It is a plainly visible problem in the animation, not just a technical one behind the scenes. Maybe there’s a trick around it, but I don’t know of one.
And I didn’t know Broken was a developer until after I wrote my last response to him.
I feel like you guys are shooting me down simply because I’m a nobody in this community. I didn’t come here to demand that Blender’s developers change it, I came here to ask why it’s done like this, and say why I don’t think it should be that way. Was I wrong to do that? If I was wrong to do that here then I’ll apologize and completely forget about it.
Guys, thanks for the support, but everyone here has the right to discuss things as long as they’re being civil. I can actually be wrong sometimes!
Your example still works in reverse. If motion blur were done the way you seem to want it done, you’d still have problems with objects being blurred when they should be at rest, since the blur would be calculated from previous frames (in which that object could be in a different spot).
But anyway I think we’re getting closer.
In this example, if S = the camera’s shutter speed, you have:
Frame 1 –> Frame 1+S - table doesn’t move
Frame 2 –> Frame 2+S - table moves
This means that the table has moved between the time the frame 1 shutter closed, and the time the frame 2 shutter closed, so the table is in a different position between those two frames, but you don’t know exactly when the movement happened. That movement could have happened only in the small amount of time while the shutter was open to capture frame 2, not necessarily along the whole 1/24s between frame 1 and 2’s shutters closing.
If your tracker only supports keyframes on integer keys, you’ll get:
Frame 1 –> Ball in position A
Frame 2 –> Ball in position B
With only this information to go by, Blender will linearly interpolate from point A to point B over that interval. It doesn’t know that the ball should wait until right before Frame 2 is captured before it starts moving.
Meaning that by Frame 1+S the ball will have already started moving, which won’t match up with the video, since the table hypothetically only moves within the interval Frame 2 –> Frame 2+S.
I would guess if somehow the tracker had sub-frame accuracy, this might be avoided, but I don’t know if that even exists.
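To put rough numbers on it (purely illustrative positions, just to show where the interpolation puts the ball when frame 1’s shutter closes):

```python
# The ball is keyed at position A on frame 1 and position B on frame 2,
# and Blender linearly interpolates in between those keys.
def lerp(a, b, t):
    return a + (b - a) * t

pos_a = 0.0     # keyed position on frame 1
pos_b = 10.0    # keyed position on frame 2
shutter = 0.5   # S: how long the shutter stays open, in frames

# Where the interpolation puts the ball when frame 1's shutter closes (Frame 1+S):
print(lerp(pos_a, pos_b, shutter))  # 5.0 -- already halfway to B
```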
Anyway, I don’t think the way Blender calculates motion blur is incorrect, or is going to change any time soon. For a practical solution for this situation, I’d try just offsetting the frames, but perhaps not by an integer amount. You can use TimeOffset, or just move the keys around in the ipo editor so they’re not exactly on whole frames, and see how that goes.
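Something like this, for example (a plain-Python sketch of what I mean by nudging the keys – not actual Blender API code, and the exact offset and direction that lines up will depend on your shutter setting):

```python
# Illustrative only: shift every tracked key later by the shutter duration, so
# that sampling forward from each frame covers the interval that ENDS on the
# original key instead of starting there.
shutter = 0.5
tracked_keys = [(1, 0.0), (2, 10.0), (3, 10.0)]   # (frame, position) pairs
shifted_keys = [(frame + shutter, pos) for frame, pos in tracked_keys]
print(shifted_keys)   # [(1.5, 0.0), (2.5, 10.0), (3.5, 10.0)]
```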
Ok, I follow your first example, but I don’t understand this one. If the shutter were going to open before the current frame and close exactly at the integer frame, wouldn’t the first frame have no blur at all? Because there’s no movement happening before frame 1. Then there would be a blur on frame 2, as the shutter would open just after frame 1 and close exactly at 2.
Oh, and I will play around with delaying the frames to see if that will fix it.
That’s right, but that’s not what I’m talking about there - I’m explaining why this is happening in Blender right now (mostly due to keyframe information from the tracker).
Having a negative shutter offset would exhibit the same problems, just in reverse, i.e. if you’ve got a moving camera that stops moving, since Blender interpolates in between frames if there’s no information there saying what should be happening.