Right now, Blender’s animation timeline is frame based: an object’s animation is built on and anchored to the frame rate of the global animation. This is flawed for several reasons. First, animations distort when the frame rate is changed (PAL, NTSC, etc.), which is a hindrance when one works on several projects using different formats and wishes to transfer files between them; it limits the transferability of .blend files, a paramount feature of Blender. Second, in the real world, the speed of an object does not change depending on what sort of camera is filming it; as it stands in Blender, it does. Third, the current frame-based setup limits animators, who cannot create animations that cycle faster than the framerate (such as a fly’s or hummingbird’s wings, or the rotors of a helicopter or fan) and are forced to use awkward techniques to imitate those effects. And fourth, a frame-based animation system actually increases the amount of work needed to create a physically correct animation.
My proposed system would employ many features already in Blender that, in my opinion, are not currently used to their full potential. The animation timeline already has a time meter. Instead of having to find a frame that approximates your desired time, you should be able to tell Blender that object X should move from point A to point B in 2.5 seconds. The computer, not the animator, should calculate how many frames fall in between.
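To make the idea concrete, here is a minimal sketch of the conversion the computer would do for the animator. The function names are hypothetical, not part of Blender’s actual API:

```python
# Hypothetical sketch: the animator specifies times in seconds,
# and the software derives the frame numbers from the output format.

def frames_between(start_s: float, end_s: float, fps: float) -> int:
    """Number of whole frames rendered between two timestamps."""
    return int(end_s * fps) - int(start_s * fps)

def keyframe_to_frame(time_s: float, fps: float) -> float:
    """Exact (possibly fractional) frame position of a timestamp."""
    return time_s * fps

# Moving an object from A to B over 2.5 seconds:
print(frames_between(0.0, 2.5, 25.0))    # 62 frames at PAL's 25 fps
print(frames_between(0.0, 2.5, 29.97))   # 74 frames at NTSC's 29.97 fps
```

The same 2.5-second motion yields a different frame count per format, which is exactly the bookkeeping the animator should not have to do by hand.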
A possible design for the new timeline would represent frames as vertical blue lines, placed at the times at which they render the scene. This way, those still wishing to use frame-based animation could continue to depend on frames. Motion blur could be depicted graphically with a lighter blue box indicating the time period over which the blur is sampled. “Keyframes” would simply be timestamps, and might well never land on a frame; the frames would use interpolation data to determine what state the animation is in at each instant.
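A sketch of how a renderer could evaluate timestamp-based keyframes at the instants frames are rendered. This uses linear interpolation for simplicity, whereas Blender’s real IPO curves are Béziers; all names here are illustrative:

```python
# Evaluate a channel keyed by (time_in_seconds, value) pairs at an
# arbitrary time t -- e.g. the render time of a frame. Linear
# interpolation only; real IPO curves would interpolate differently.

from bisect import bisect_right

def evaluate(keys, t):
    """keys: sorted list of (time_seconds, value); returns value at time t."""
    times = [k[0] for k in keys]
    i = bisect_right(times, t)
    if i == 0:
        return keys[0][1]       # before the first keyframe: hold its value
    if i == len(keys):
        return keys[-1][1]      # after the last keyframe: hold its value
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Keyframes at 0.0 s and 2.5 s; sample at the render time of frame 31 (25 fps):
keys = [(0.0, 0.0), (2.5, 10.0)]
print(evaluate(keys, 31 / 25))  # ≈ 4.96 -- frame 31 never lands on a keyframe
```

The point is that frames merely sample the animation; the keyframes themselves owe nothing to the frame grid.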
First off, the immediate advantage of this new timeline is that since keyframes do not depend on frames at all, one can change the framerate on a whim without ever needing to stretch or squish the IPOs to fit the new speed. Also, the speed of an object no longer depends on what is filming it. This opens up new possibilities, such as faux high-speed cameras and Matrix-style effects. Instead of adjusting the time IPO of every animation path, all one would have to do is create a “high-speed” camera that rotates around the scene in 0.0001 seconds while recording x frames in the process and outputting them at 29.97 frames per second (this would require new code). This is simply not possible with frame-based animation: the smallest unit of time for an animation at 25 fps is 1/25 of a second (25 Hz).
On a similar note, if one wanted to show the flapping of a fly’s wings or the spinning rotors of a helicopter, one could zoom into the timeline, create a single cycle of the animation, extrapolate it, and voilà, it is done. I have read many tutorials on how to fake this effect. The animator should not have to fake it! The “reverse-spinning” effect on helicopters is simply the product of motion blur and aliasing in imperfect camera equipment, and notice that as the rotors slow down, the effect changes. This is difficult to fake properly.
Also, suppose you were making a video of an explosion and at one point wished to slow down and show some detail in slow motion. To make it appear physically correct, you could animate the explosion in “real time” and then “zoom in” to tweak things during the slow-motion part. You could create a second camera that runs at, say, 100 frames per second in animation time but outputs the video at 25 frames per second for the slow motion, or have the first camera simply use those settings for the desired part.
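The arithmetic behind that slow-motion camera can be sketched as follows; the helper name is hypothetical, but the numbers match the 100 fps / 25 fps example above:

```python
# A camera samples animation time at 100 fps, but the footage plays
# back at 25 fps, giving a 4x slow-motion factor.

def slowmo_sample_times(start_s, duration_s, capture_fps):
    """Animation-time instants at which the slow-motion camera samples."""
    n = int(duration_s * capture_fps)
    return [start_s + i / capture_fps for i in range(n)]

capture_fps, playback_fps = 100.0, 25.0
times = slowmo_sample_times(2.0, 0.5, capture_fps)  # 0.5 s of the explosion
print(len(times))                  # 50 frames captured
print(len(times) / playback_fps)   # plays back over 2.0 s: 4x slower
```

Because keyframes are timestamps, the explosion’s animation itself needs no changes; only the camera’s sampling rate does.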
Now, for those still unconvinced of the merits of time-based animation for “normal” tasks, imagine an animator wishing to animate a bouncing ball. Physics dictates that d = vt + (1/2)at^2. For the less physics-inclined, this means that the distance the ball (if starting from rest) has traveled from the starting point at time t is the distance traveled in the first second times the square of t. So you calculate the times and distances needed for the motion to be correct. With time-based animation, you simply find the time (even if it is fractional), move the ball to the proper distance, and insert a keyframe. With frame-based animation, you have to figure out which frame is closest to that time, and if it isn’t marked off in the timeline you have to calculate it (try NTSC’s 29.97 fps), only to have it all ruined if you want to reuse the animation on a project with a different framerate.
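A worked version of that argument, assuming an object dropped from rest (so d = (1/2)at^2, with a = 9.8 m/s^2 chosen for illustration):

```python
# Time-based: keyframe the exact time. Frame-based: round to the nearest
# frame, with a different frame (and a different timing error) per format.

g = 9.8  # m/s^2, standard gravity (assumed for this example)

def distance_fallen(t):
    """d = (1/2) * g * t^2 for an object dropped from rest."""
    return 0.5 * g * t * t

def nearest_frame(t, fps):
    """Closest whole frame to time t -- the frame-based compromise."""
    return round(t * fps)

t = 0.73  # the exact instant we want a keyframe at, in seconds
print(distance_fallen(t))        # ~2.611 m: keyframe this at t = 0.73 s

# Frame-based animation forces the keyframe onto the frame grid:
for fps in (25.0, 29.97):
    f = nearest_frame(t, fps)
    print(fps, f, f / fps - t)   # frame number and its timing error
```

At 25 fps the keyframe lands on frame 18 (10 ms early); at 29.97 fps it lands on frame 22 (about 4 ms late). Neither frame number nor error transfers between formats, which is the animator’s extra work that time-based keyframes eliminate.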
I have never seen anything like this before, so I believe that if Blender implemented it, it would be revolutionary. Blender pioneered a one-of-a-kind user interface, and it’s not over yet!
For those interested, I have posted this on the wiki here.