Time Based Animation

Right now, Blender’s animation timeline is frame based, meaning that an object’s animation is built on and anchored to the frame rate of the global animation. This is flawed for several reasons. First, animations distort when you change the frame rate (PAL, NTSC, etc.), which is obviously a hindrance when you work on several projects in different formats and wish to transfer files between them; it limits the transferability of .blend files, a paramount feature of Blender. Second, in the real world the speed of an object does not change depending on what sort of camera is filming it. As it stands in Blender, it does. Third, the current frame based setup limits what an animator can do: they cannot create animations that cycle faster than the frame rate, such as a fly’s or hummingbird’s wings or the rotors on a helicopter or fan, and are forced to use awkward techniques to imitate those effects. And fourth, a frame based animation system actually increases the amount of work needed to create a physically correct animation.

My proposed system would employ many features already in Blender that, in my opinion, are not currently used to their full potential. The animation timeline already has a time meter. Instead of hunting for a frame that approximates your desired time, you should be able to tell Blender that object x should move from point a to point b in 2.5 seconds. The computer should calculate how many frames fall in between, not the animator.
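
For illustration only, here is a minimal sketch of what a “keyframe by time” helper could look like on top of Blender’s current Python API. The bpy property names below are from today’s API; the helper itself and the example coordinates are hypothetical.

```python
import bpy

def insert_key_at_time(obj, data_path, t_seconds):
    """Insert a keyframe at an absolute time in seconds.

    The frame number is derived from the scene's frame rate, so the
    animator thinks in seconds and the conversion is left to the computer.
    """
    scene = bpy.context.scene
    fps = scene.render.fps / scene.render.fps_base  # true frames per second
    frame = t_seconds * fps                         # may well be fractional
    obj.keyframe_insert(data_path=data_path, frame=frame)

# Move the active object from point a to point b over 2.5 seconds.
obj = bpy.context.object
obj.location = (0.0, 0.0, 0.0)
insert_key_at_time(obj, "location", 0.0)
obj.location = (4.0, 0.0, 0.0)
insert_key_at_time(obj, "location", 2.5)
```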

A possible design for the new timeline would be to represent frames as vertical blue lines, placed at the times at which they sample the scene. That way, those who still wish to animate frame by frame can continue to rely on frames. Motion blur could be depicted graphically as a lighter blue box indicating the time period over which the blur is sampled. “Keyframes” would simply be timestamps, and quite possibly might never land exactly on a frame; each frame would use the interpolation data to determine the state of the animation at that instant.

First off, the immediate advantage of this new timeline is that since keyframes do not depend on frames at all, you can change the frame rate on a whim and never worry about stretching or squishing the IPOs to fit the new speed. The speed of an object also no longer depends on what is filming it. This opens up new possibilities, such as faux high-speed cameras and Matrix-style effects. Instead of adjusting the time IPO of every animation path, all you would have to do is create a “high-speed” camera that sweeps around the scene in 0.0001 seconds while recording x frames in the process and outputting them at 29.97 frames per second (this would require new code). This is simply not possible with frame based animation: the smallest unit of time an animation can address at 25 fps is one frame, 1/25 of a second (40 ms).
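
To put numbers on the granularity argument (the sweep length and frame count are the hypothetical figures from above):

```python
# The finest time step a frame based timeline can address at 25 fps.
fps = 25.0
frame_step = 1.0 / fps                 # 0.04 s

# Hypothetical bullet-time sweep: 0.0001 s of world time, 100 frames captured.
sweep_world_time = 0.0001              # seconds of "world" time
frames_captured = 100
world_step_needed = sweep_world_time / frames_captured  # 1e-6 s per frame

print(frame_step / world_step_needed)  # 40000: the required step is 40,000x
                                       # finer than one 25 fps frame
```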

On a similar note, if one wanted to show the flapping of a fly’s wings or the spinning rotors of a helicopter, one would be able to zoom into the timeline, create only a single cycle of the animation, extrapolate it, and voilà, it is done. I have read many tutorials on how to fake this effect. The animator should not have to fake this effect! The “reverse-spinning” effect on helicopter rotors is simply the product of motion blur and aliasing from imperfect camera equipment, and if you watch closely, the effect changes as the rotors slow down. That is difficult to fake properly.

Or suppose you were making a video of an explosion and at one point wished to slow down (with respect to time) and show some detail. To make it appear physically correct, you could animate the explosion in “real time” and then “zoom in” to tweak things during the slow-motion part. You could create a second camera that runs at, say, 100 frames per second of animation time but outputs the video at 25 frames per second for the slow motion, or simply have the first camera switch to those settings for the desired part.
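
As a quick back-of-the-envelope check (the 0.2 second burst length is an invented figure; the 100 and 25 fps rates are the ones above):

```python
capture_fps = 100.0   # frames per second of animation ("world") time
playback_fps = 25.0   # rate the finished video plays back at
burst = 0.2           # seconds of world time covered by the slow motion

slowdown = capture_fps / playback_fps  # 4x slower on screen
frames = burst * capture_fps           # 20 frames to render for the burst
screen_time = frames / playback_fps    # 0.8 s of finished footage

print(slowdown, frames, screen_time)
```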

Now, for those who are still unconvinced of the merits of time based animation for “normal” tasks, imagine that an animator wished to animate a bouncing ball. Physics dictates that d = v*t + (1/2)*a*t^2. For those less physics inclined, this means that the distance the ball (if starting from rest) has traveled from the starting point at time t is the distance it traveled in the first second times the square of t. So you calculate the times and distances you need. With time based animation, you simply find the time (even if it is fractional), move the ball to the proper distance, and insert a keyframe. With frame based animation, you have to figure out which frame is closest to that time, and if it isn’t marked off in the timeline, you have to try to calculate it (try NTSC’s 29.97 fps), only to have it all ruined if you want to reuse the animation on a project with a different frame rate.
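
To make the bookkeeping concrete, here is the sort of calculation the animator is currently forced to do by hand (a plain Python sketch; the drop height is a placeholder value):

```python
import math

# Free fall from rest: d = (1/2) * g * t^2, so the impact time is
# t = sqrt(2 * d / g).
g = 9.81            # m/s^2
drop_height = 2.0   # metres (placeholder)

t_impact = math.sqrt(2 * drop_height / g)   # ~0.639 s

# Time based workflow: key the ball at t = 0 and t = t_impact. Done.

# Frame based workflow: hunt for the nearest frame at each target rate,
# and accept the rounding error (or retime everything when the rate changes).
for fps in (24, 25, 29.97):
    exact_frame = t_impact * fps
    nearest = round(exact_frame)
    error_ms = abs(nearest - exact_frame) / fps * 1000
    print(fps, round(exact_frame, 3), nearest, round(error_ms, 2), "ms off")
```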

I have never seen anything like this before, so I believe that if Blender implemented it, it would be revolutionary. Blender already pioneered a one-of-a-kind user interface, and it’s not over yet!

For those interested, I have posted this on the wiki here.

I’m not an animator, but from the look of your proposal it seems like a very intuitive idea, and very well thought out. I have always had trouble with animation because I can never judge how long the keyframes are.

Good luck with your plan, doubleRing, and good luck finding someone to implement it…

This has already been discussed here: http://blenderartists.org/forum/showthread.php?t=79304

I think the fundamental difference between that proposal and my own is that I’m separating the frame rate from the world. If you wish, you could still use frames as a reference; they would be clearly demarcated on the timeline. There are also some fundamental differences between traditional animation and 3d. First off, 3d is a lot closer to live action film. You don’t go, “Ok, so we want the stunt driver to start driving at frame 15 and at frame 73 we want him to skid.” You go, “So start after half a second and after x seconds go into a skid.” Also, this setup allows a high-speed camera to be placed in the scene with very little hassle.

The biggest limitation of the frame based method is the inability to keyframe events at intervals shorter than a frame, such as any rapidly rotating object. This has been brought up before. There is also no easy way to adjust the apparent speed of the animation; the only solution I have found is to adjust the time IPO of EVERY animation in the scene. The concept of frames comes from the camera, so put it back where it belongs: inside the camera. Nature doesn’t run on frames.

I, again, have to respectfully disagree. The problem with the majority of 3D animation, IMO, is that people try to convince themselves that it’s more like live action than traditional animation. The very best 3D animation that I’ve seen has all employed more traditional animation techniques than live action techniques. As I argued before, the punch comes from being able to keyframe on an individually targeted frame.

As for reframing for multiple framerates… I don’t see a huge advantage here. Distribution (film and video) is beginning to homogenize on 24p. And if you still want to do it, my preferred solution would be a speed IPO in the sequence editor that you can apply to scene strips.

Animation is not nature. It’s better than that. :wink:

Ok… this seems to be a hot topic, and I may be repeating something someone else has already mentioned…

But how about the mapping function in the Scene/Anim playback buttons? If you do have to re-render for a different frame rate, you can always do it there. But I agree with fweeb… it’ll never look quite as good… though I doubt the difference between a Blender internal conversion and a proper video re-render will be noticeable when done properly.
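
For reference, that mapping is also reachable from Python; a minimal sketch using the current bpy names (the 25-to-30 fps retarget is just an example, and this remaps time at render time only, without touching the keyframes themselves):

```python
import bpy

scene = bpy.context.scene

# Retarget a 25 fps scene to 30 fps using time remapping: every "old" frame
# span is stretched across the corresponding "new" span at render time.
old_fps, new_fps = 25, 30
scene.render.fps = new_fps
scene.render.frame_map_old = old_fps
scene.render.frame_map_new = new_fps

# Extend the frame range by the same ratio so the whole animation is covered.
scene.frame_end = int(round(scene.frame_end * new_fps / old_fps))
```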

I like the idea, for one reason in particular: independence from (or at least easy conversion between) frame rates.

The time remapping panel makes it possible to compensate for different frame rates, but when working with libraries and such (e.g., importing animations from other projects produced for different media), the process becomes more cumbersome. But what if, say, when you changed your frame rate in the render buttons, Blender gave you the option to automatically adjust all of the IPO curves? (I know, it’s not exactly a trivial thing to do.)
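
A rough sketch of what that option might do under the hood, written against the current bpy API (the target rate is a placeholder, and a real implementation would also have to deal with markers, the sequencer, and simulation caches):

```python
import bpy

def rescale_keyframes(old_fps, new_fps):
    """Scale every keyframe so the animation keeps its duration in seconds."""
    factor = new_fps / old_fps
    for action in bpy.data.actions:
        for fcurve in action.fcurves:
            for kp in fcurve.keyframe_points:
                kp.co.x *= factor
                kp.handle_left.x *= factor
                kp.handle_right.x *= factor
            fcurve.update()  # re-sort keys and fix handles

old_fps = bpy.context.scene.render.fps
bpy.context.scene.render.fps = 30   # hypothetical new target rate
rescale_keyframes(old_fps, 30)
```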

There’s at least one usability improvement in DoubleRing’s idea, so it has its merits.

Yeah, easy conversion between frame rates without loss of info would be fantastic. Currently you can remap everything, so that much is available, but it doesn’t affect the sequencer (which is often tied into the animation itself).

Alltaken

Personally, I love DoubleRing’s idea. I have thought about doing a short slow-motion sequence in the middle of an animation, but the animation involves particles, so changing the IPO curves won’t change the particles’ speed. Admittedly, this feature would take lots of work to implement, and it would only be really necessary for slow-motion scenes. However, if the coders are just sitting around twiddling their thumbs, just itching for something to do, this seems like a good feature to me.

I partially agree with this; it would be easier than working out how long something has to be and then trying to divide and multiply by some evil, demented number like 24 to translate that into animation. However, there would be a lot of programming involved (I’m pretty sure the programmers aren’t sitting around twiddling their thumbs at the moment ;)) and it wouldn’t be easy to export to different formats (3D formats, that is), as they generally use the tried-and-tested frame system.

Not only would it be good for slow-motion sequences, but for fast-motion sequences as well. There are plenty of times when you need to create something that cycles faster than 24 Hz. The best examples are hummingbirds and helicopters, but they are definitely not the only ones. With traditional animation, you’d just create a giant blur and that would be that… but this is not traditional animation.

The reason reverse rotation appears in films is that the rotors spin faster than the frame rate, so at certain points the motion blur they create overlaps itself, meaning those areas look more coherent. I still don’t see any way that frame based animation will ever address this problem in a natural and painless way. Also, a fly flapping its wings looks awfully dull as just a big blob when we should see brief, sharper images of the wings at the points where they change direction at the top and bottom of the stroke. To force the animator to do this by hand is ridiculous.
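
The aliasing is easy to put numbers on; a quick sketch (the rotor speeds are invented):

```python
# Apparent rotation of a sampled spinning object (the wagon-wheel effect).
# A two-blade rotor looks identical every half revolution, so the camera
# only ever sees the rotation modulo 0.5 rev between frames.
fps = 24.0
blades = 2
symmetry = 1.0 / blades                # rotor repeats every 1/2 revolution

for rotor_hz in (11.8, 12.0, 12.2):    # revolutions per second (invented)
    per_frame = rotor_hz / fps                         # revs per frame
    aliased = (per_frame + symmetry / 2) % symmetry - symmetry / 2
    apparent_hz = aliased * fps                        # what the footage implies
    print(rotor_hz, round(apparent_hz, 2))
# Prints roughly: 11.8 -> -0.2 (slow reverse), 12.0 -> 0.0 (frozen),
# 12.2 -> +0.2 (slow forward), even though the rotor never slows down.
```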

Why insist on binding actions to frame rates? Sure, you could still put your keyframes at the frame markers, as you always did, but with time based animation you gain the ability to change capture frame rates dynamically. (Capture rates, of course; it wouldn’t do to have the finished video constantly change frame rates. This is what I mean by high-speed and low-speed cameras.)

At its very core, time based animation is NOT removing the concept of frames from animation; it is associating it with the camera and freeing the “world” to operate independently. Nature doesn’t operate on frames… cameras do.

Perhaps once we separate these two, we can play with the concept of “apparent time.” This is exactly what I mean by high- and low-speed cameras: “real time” is not moving at that rate, but the camera is. As of now the two are bound together, and in my opinion they are screaming to be freed.

I think the simplest thing that could achieve what you’re asking for is a frame-rate and speed changing script that could, for example, change a 25 fps animation to a 30 fps animation by scaling the speed of the animation. Keyframes CAN actually be placed between frames in Blender, so this won’t be a problem.

If you’re animating something faster than the frame rate, say a bee’s wings or some helicopter blades, you could raise the frame rate to 100 fps, animate the bee’s wings, then slow it back down to 25 fps.

This script could also be used to speed up and slow down animation, and to, for example, make an animation that plays between frames 1 and 100 play between frames 51 and 150.

All this CAN be done by scaling the IPO curves, but it is a long, fiddly process.

This is how 3DS max handles the problem.
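
For completeness, the “play frames 1 to 100 between frames 51 and 150” part of such a script is just a linear remap of the keyframe times; a sketch against the current bpy API (the action name is hypothetical):

```python
import bpy

def remap_action(action, old_start, old_end, new_start, new_end):
    """Linearly remap keyframe times, e.g. frames 1-100 onto 51-150."""
    scale = (new_end - new_start) / (old_end - old_start)
    for fcurve in action.fcurves:
        for kp in fcurve.keyframe_points:
            for point in (kp.co, kp.handle_left, kp.handle_right):
                point.x = new_start + (point.x - old_start) * scale
        fcurve.update()

# Hypothetical action name: shift and retime it in one go.
remap_action(bpy.data.actions["WalkCycle"], 1, 100, 51, 150)
```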

Such a script would not be that difficult as long as the sequencer is ignored. There is the Time IPO as well; that’s how I’ve done slow-mos in the past. But you need to really understand the IPO copy/paste and parent effects well, and the scene needs to be set up for this from the start.

Personally I like both systems, i.e. time based and frame based. It would be nice to have some parts that are frame rate independent for my own work, that’s for sure.

Blender represents keyframes internally as floating point values. Although you can’t insert a keyframe on a fractional frame number through the interface (well, you can move keys to non-integer values…), it should be easy enough to do, e.g. when simulating things. But it would be much more efficient to know what you’re creating in advance (i.e. a slow-mo effect or whatever) and set up the simulation accordingly. Generating all kinds of fractional data that is never going to be used would be very inefficient and waste a lot of storage.

On a similar note, if one wanted to show the flapping of a fly’s wings or the spinning rotors of a helicopter, one would be able to zoom into the timeline, create only a single cycle of the animation, extrapolate it, and voilà, it is done.

You can do this already by just scaling the keyframes down.
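
Roughly, in script form, that looks like the following (a sketch against the current bpy API; the 200 Hz flap rate and the use of a Cycles F-Curve modifier for the extrapolation are my own choices, and whether motion blur samples such sub-frame keys usefully is exactly the open question in this thread):

```python
import bpy

obj = bpy.context.object          # e.g. an empty driving a wing
scene = bpy.context.scene
fps = scene.render.fps / scene.render.fps_base

flap_hz = 200.0                   # invented fly wingbeat frequency
frames_per_flap = fps / flap_hz   # well under one frame at 24-30 fps

# Key a single up/down cycle on X rotation, spanning less than one frame.
obj.rotation_euler.x = 0.6
obj.keyframe_insert("rotation_euler", index=0, frame=1.0)
obj.rotation_euler.x = -0.6
obj.keyframe_insert("rotation_euler", index=0, frame=1.0 + frames_per_flap / 2)
obj.rotation_euler.x = 0.6
obj.keyframe_insert("rotation_euler", index=0, frame=1.0 + frames_per_flap)

# Repeat the cycle indefinitely with an F-Curve modifier.
fcu = obj.animation_data.action.fcurves.find("rotation_euler", index=0)
fcu.modifiers.new(type='CYCLES')
```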

There are also some fundamental differences between traditional animation and 3d. First off, 3d is a lot closer to live action film.

You mention there are fundamental differences, but you don’t mention what they are or justify your idea that 3D is closer to live action film with rationale - I’d be curious to see why you think that way.

You don’t go, “Ok, so we want the stunt driver to start driving at frame 15 and at frame 73 we want him to skid.” You go, “So start after half a second and after x seconds go into a skid.”

As I mentioned in the other thread, that’s not animation, that’s directing, and although some people like to do both, the two are quite different. Even on an animated production, you’ll have a director who says “I want the car to drive for half a second and then skid”, but it’s up to the animator to take that direction and then worry about how that car drives and skids, creating movement in a convincing and/or expressive manner.

For this, it’s important for the animator to know exactly how the image looks on screen on each of the frames - i.e. what its silhouette looks like, how the lines of motion compare with other moving objects on the screen, what shapes or poses are visible at the extreme positions, how the camera might be moving to give the right impression, etc. This is on a completely different level from saying “start and skid after half a second,” a level on which it’s important to know how the animation will look as viewers will see it, frame by frame.

Of course, having said all this, I think that having better tools to map and convert between frame rates, and to change the speed of objects globally or selectively, would be very welcome in Blender. It’s a separate issue from whether time is represented in seconds or frames while animating, though.

Most time re-scaling operations can be done easily in the NLA Editor, but there are some keyframes that aren’t shown there.

Here is where this method is flawed. The world does not slow down for a slow-motion effect; the camera speeds up. Here, you have to work these time dilation effects directly into the animation, because the “apparent time” of the camera directly controls the “real time” of the animation world. If you wanted to adjust the slow-mo effect, you’d have to adjust each individual animation. Ridiculous! Time based animation would let you simply adjust where the camera speeds up and slows back down. Ideally, it would also allow smooth changes in the dilation (e.g., the new Allstate commercial, for those in the USA: as the cars pass, time slows down gradually, in what looks like an inverse quadratic). That commercial is also the inspiration for another of my ideas, multiple timelines (the Allstate guy would be on his own timeline, whose rate you could tie to the camera, but that’s for another day :wink:).
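
To make “apparent time” concrete, here is a toy mapping from camera time to world time; everything in it is invented for illustration, and a smooth Gaussian dip stands in for the inverse-quadratic-looking ease described above:

```python
import math

def world_rate(camera_t, center=2.0, width=0.5, slow=0.1):
    """Rate of world time relative to camera time: 1x normally, dipping
    smoothly to 0.1x around camera_t = 2.0 s (all numbers invented)."""
    return 1.0 - (1.0 - slow) * math.exp(-((camera_t - center) / width) ** 2)

def world_time(camera_t, dt=0.001):
    """Integrate the rate to get world time as a function of camera time."""
    t, w = 0.0, 0.0
    while t < camera_t:
        w += world_rate(t) * dt
        t += dt
    return w

# The camera samples evenly in its own ("apparent") time at 25 fps, while the
# world clock crawls through the dip and then settles back to a 1x rate.
for frame in (0, 25, 50, 75, 100):
    camera_t = frame / 25.0
    print(frame, round(world_time(camera_t), 3))
```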

Having to scale the keyframes down, rather than simply placing those keyframes point-and-click, looks like a limitation in my eyes.

I thought that with all the examples I’d given, it would be obvious. All of the features Blender has been implementing and plans to implement are there because the developers wish to make it imitate the real world. SSS, bloom, sculpt modeling, the physics engine: they’re all (I’ll steal the drama term here) the “imitation of life.” The most indicative sign of 3d animation’s similarity to live action film is the existence of the camera as an entity inside the “world.” Sure, you can imitate it in 2d, like the scene in The Lion King where Simba is running after Rafiki in the jungle, but Donkey’s chase scene in Shrek is a lot more impressive because it employs a method much closer to reality, or live action film. Sure, we can make 3d imitate 2d instead…but why? Are we creating stop motion animation here? Well, some of us may be, but really, past cel shading, I don’t see anywhere that 2d animation should carry over into 3d.

It’s not so much animating based on time (really, it would be easy to continue animating based on frames; the framework would be there, pun definitely intended) as anchoring the animations to time, like they should be: taking the concept of frames out of the world and putting it back into the camera, and associating “apparent time” with the camera and “proper time”/“real time” with the animation world. With proper motion blur, some really cool things could be done with this.

The optimal solution, as I’ve pointed out, is to take the scene (un-rendered) into the sequence editor and have the ability to apply a speed curve to the scene strip.

I hate asking this, but how much animating have you done? Compare the two scenes you gave to the chase scene on the island in The Incredibles. That’s 3D animation very successfully employing 2D techniques and, IMO, outshining the look in both of your examples.

Sure, we can make 3d imitate 2d instead…but why? Are we creating stop motion animation here? Well, some of us may be, but really, past cel shading, I don’t see anywhere that 2d animation should carry over into 3d.

Traditional animation applies to 3d animation more than live action ever will. The principle is still unchanged: to re-create movement rather than record it. From whom do you think the best 3d animators learned?

/me is humbled by that. Perhaps we should treat 3d as a completely different beast, a hybrid perhaps. I see what you mean about re-creating movement, but in the recording-movement part 3d is closer to live action. There are people smarter than me (hopefully, or the world is doomed!) who could probably create such a hybrid version of my concept. But that’s all it is: a concept. Despite my wildest dreams, I don’t rule the world! (cue megalomaniacal laughter)

Don’t be afraid to ask. The bulk of my animation experience is actually in 2d, not 3d, but I’m no maestro at it. I am comfortable with the frame based system, but at times grow frustrated with it. It’s a hobby for me, so I guess you could say I’m just a random guy who knows way more about relativity than anyone should ever need to know.