difference between RENDER (F12) & ANIMATE results with IPO drivers

I’m starting motion and rigging tests with my Kata character, and have noticed what may be a bug, but I’d like to confirm before reporting it, or make sure it’s not a known issue.

I’m using a pydrivers.py script (IPO drivers) to do corrective deformation of the model. Single-frame renders (F12) showed that this works very well. However, in an animated sequence, the IPO drivers seem to lag the action, and the deformations aren’t applied correctly, as can be seen in these example frames (note knee deforms indicated by red arrows):

http://img394.imageshack.us/img394/5553/ipodriverlagks7.jpg

I was under the impression that RENDER (F12) and ANIMATE produce exactly the same results, with ANIMATE simply applying the same process to a sequence of frames. That does not seem to be the case when IPO drivers are used, though I think it should be. If I step through the animation in the UI and use F12 on each frame, the renders are perfect. But a sequence rendered using ANIMATE shows different results, as if the IPO drivers aren’t being applied before each frame is rendered.

I’m using NLA strips to fine-tune the action and combine the keyed armature motions with the pydrivers.py driver scripting. This works great for single-frame renders (I had previously tested this), and so shouldn’t be an issue with ANIMATE, either.

Has anyone else noticed this? In many cases the difference may be so slight as to be missed, but in frames with large action (e.g., 53 & 56 above) there’s quite a big difference in the renders.

Do you get any such ‘lag’ when manipulating the rig in the 3D view? What about if you transform and then cancel (instead of confirming)?

Usually, these sorts of issues are caused by ‘cyclic’ dependencies. It sounds like there might be a case of this here, unless it is yet another little bug with shapekeys.

Aligorith

No. If the pydrivers.py channels are included in the current Action, the script-driven bones in the rig update in real time as I manipulate it. However, the “lag” does appear during Timeline playback, or when stepping through the frames using the arrow keys.

Here’s a typical “blow-by-blow”:

  1. advance frame by one using the right arrow key
  2. mesh shows “lagged” deformation in the UI
  3. press RENDER, hit F12, or ANIMATE a single-frame range (e.g., 56 to 56)
  4. rendering commences & proceeds. Rendered image shows the correct deformation for that frame (was updated somehow prior to rendering)
  5. render finishes or is canceled with Esc
  6. in the UI, the mesh “updates” to the correct deformation with no further action on my part; I don’t even need to give the workspace focus, it happens immediately

As for transforming and then canceling: in that case, the mesh “reverts” to its “lagged” state, with the driver value from the previous frame in effect, even if it had been updated and current prior to the Translate + cancel.

I’m not using shape keys, though. My rig has additional bones which transform in various ways to effect the deformation corrections. The transform values for these bones are driven by algorithms in the pydrivers.py script. For example, the hips/gluteus region verts are scaled in proportion to rotation of the thigh bone to correct for the “shrinkage” that can occur in the butt when the leg is flexed. This is more flexible than shape keys (imo) as I can “tailor” the scaling value for each axis on a particular sequence if necessary, with a simple change to the Python expression used in the IPO driver.
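
To give an idea of the setup, one of these driver functions in pydrivers.py looks something like the following (the function name and numbers are just illustrative, not my actual values):

    # pydrivers.py -- text block inside the .blend (illustrative sketch only)

    def glute_scale(rot_deg):
        """Map thigh flexion (degrees) to a corrective scale factor for the
        gluteus helper bone. The 0.15 gain is just a placeholder value."""
        rot_deg = max(0.0, rot_deg)           # only correct when the leg flexes forward
        return 1.0 + 0.15 * (rot_deg / 90.0)  # full correction at 90 degrees of flexion

The IPO driver on the corrective bone’s scale channel then just calls the function, something like p.glute_scale(…) with the thigh’s rotation as the argument (the pydrivers.py text is available to driver expressions as p).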

As far as cyclic dependencies, I don’t see how this could cause a difference between what’s rendered using the RENDER button (or F12) and what’s rendered using the ANIMATE button for a sequence. These should be exactly the same for each frame, even if cyclic dependencies caused some form of problem with how the mesh deforms.

edit_____

Another observation about the “lag” seen in the UI when stepping frame-to-frame: If the frame is advanced, and the deformation shows a “lag,” the transforms of the driven bones are updated when clicking on various items in the UI. For example, enabling or disabling a layer will do it. Clicking on the Action that has the driven IPO channels in the NLA editor will do it. There are a couple of other instances where similar seemingly unconnected clicks in the UI will update the IPO driver values. Can’t even begin to say why, though.

These problems are usually (and most likely are here too) caused by the depsgraph either not being able to resolve some relationship or not knowing about it. In this case, it’s probably related to the fact that using IPO drivers on bones which control other bones in the same armature doesn’t work too well (it’s not really supposed to, currently). This is because drivers are evaluated along with the animation data for the bone they reside on, so the bones they read from may not have been correctly updated yet.

Incidentally, cyclic dependencies are the most common case where the depsgraph’s inability to resolve a situation results in ‘lag’ and stuff not updating correctly. Still-frame rendering is also different from Animation rendering, as a still frame does two calculation/flush passes.

In your case, it sounds like it’s a combination of the following factors:

  • depsgraph doesn’t really know about some of these relationships
  • due to the way pydrivers and bones are evaluated, things aren’t updated in the right order
  • you may have some cyclic dependency hidden somewhere (when in PoseMode, do TAB-TAB and look in the console for any warnings about a ‘Cycle from a to b’ or so)

There are a few things you can do about these problems. Since you’re already using an SVN build, you could try the following:

  • Use Action constraints to do this instead. Unless you’re dependent on multiple factors to control the effect, this should work; if multiple factors need to be taken into account, a lot of fine-tuning may be required
  • Rewrite the drivers as PyConstraints. Constraints are evaluated after all animation data has been evaluated, so this should be quite safe. Check out the Script-Constraint script template (open a Text Editor window in Blender and look under the ‘File’ menu). It’s important to remember to read/write values as matrices (this will become clear when looking at the template script). A bare-bones skeleton follows this list.
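
Just to show the shape of the thing, a skeleton would look roughly like this (written from memory, so check the template for the exact function names and signatures; the matrix maths is only a placeholder, not a working corrective setup):

    #BPYCONSTRAINT
    # Bare-bones PyConstraint skeleton -- placeholder maths only.
    from Blender import Mathutils

    NUM_TARGETS = 1   # one target, e.g. the thigh bone

    def doConstraint(obmatrix, targetmatrices, idprop):
        # obmatrix: 4x4 matrix of the constrained (corrective) bone
        # targetmatrices: list of 4x4 matrices, one per target
        rot_x = max(0.0, targetmatrices[0].toEuler().x)   # degrees in the 2.4x API
        scale = 1.0 + 0.15 * (rot_x / 90.0)               # placeholder gain
        scalemat = Mathutils.Matrix([scale, 0.0, 0.0, 0.0],
                                    [0.0, 1.0, 0.0, 0.0],
                                    [0.0, 0.0, 1.0, 0.0],
                                    [0.0, 0.0, 0.0, 1.0])
        return scalemat * obmatrix

    def getSettings(idprop):
        pass   # no custom buttons for this sketch

The important bit is that everything comes in and goes out as matrices, which is what the template is there to demonstrate.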

Hopefully that helps,
Aligorith

I’m not deeply familiar with the mechanics of Blender, just an artist trying to get some work done, so forgive my ignorance. What is the “depsgraph”?

The TAB-TAB test produced no messages, so can I assume there are no cyclic dependencies? And why a difference between a still frame and an animated frame? That sounds completely screwy to me, as it means you cannot accurately pre-test an animation by checking a series of key stills. The only way to reliably gauge actual results is to render every frame as a sequence, which is a huge time-waster imo. Also, there is no problem when ANIMATE is used on a single-frame range like frame 56 to 56. How is that different from a multi-frame sequence using ANIMATE? Not trying to be argumentative, just trying to understand the rationale behind the way things seem to be working.

What relationships? I’m not clear on your meaning here.

Is this saying the evaluation process is buggy, or just that bones driving bones wasn’t anticipated and thus presents the process with difficulties?

I’ve looked into the Script Constraint, but not the Action Constraint as yet, though both of these suggestions sound like overly complicated workarounds to me. The pydrivers.py/IPO driver system is very straightforward (though the scripting behind the drivers may not be :eek: ), and it seems to me a better way to “fix” my problem is to find a way to render a sequence of frames automatically using the F12 process rather than the ANIMATE button, where it seems the problem actually lies. That also sounds more amenable to scripting at the level I can accomplish.

Thanks a lot for your input, some of it’s over my head but that’s no prob, lots of Blender stuff is :spin:. But I’m willing to learn.:yes:

The “depsgraph” is the dependency graph. Its main purpose is to make sure that everything is evaluated in an order such that everything that needs to be updated is, and the things that aren’t affected aren’t. It’s currently only really implemented for objects and bones, and even there it has quite a few limitations.

A ‘relationship’ is basically any kind of link between two objects/bones. For example, parenting an object to another creates a relationship between the two objects. Adding a constraint between two bones, or two objects, or an object and a bone is also an example of creating a new relationship.
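
As a toy illustration of what the depsgraph has to do (this has nothing to do with Blender’s actual code): given a set of relationships, it has to find an order in which to evaluate things, and if a relationship is missing or the relationships form a cycle, there is no correct order to find:

    # Toy dependency graph -- purely illustrative, not Blender's implementation.
    deps = {
        "thigh_bone":      [],                        # keyed directly, depends on nothing
        "corrective_bone": ["thigh_bone"],            # driver reads the thigh's rotation
        "mesh":            ["corrective_bone", "thigh_bone"],
    }

    def eval_order(deps):
        """Return a valid evaluation order, or complain about a cycle."""
        order, done, in_progress = [], set(), set()
        def visit(node):
            if node in done:
                return
            if node in in_progress:
                raise ValueError("cycle involving %r -- no valid order exists" % node)
            in_progress.add(node)
            for dep in deps[node]:
                visit(dep)
            in_progress.discard(node)
            done.add(node)
            order.append(node)
        for node in deps:
            visit(node)
        return order

    print(eval_order(deps))   # thigh_bone, corrective_bone, mesh

If the graph never learns about a link (as is currently the case for pydriver links between bones in one armature), it can’t order those updates correctly, and you get exactly the kind of one-frame lag you’re seeing.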

Regarding the evaluation process:
It was never really designed for this sort of stuff. In fact, PyDrivers between bones in the same armature were a bit of an oversight.

Have fun exploring the ways of animating a sequence of frames. You may be interested in checking out the PyAPI docs, and/or some of the render-farm related work ideasman42 has been doing for Peach.

Aligorith

This suggests a possible workaround: make the “driven” bones part of a separate armature. Would that resolve the issues, do you think?

On that trail right now. “Fun”? Well not really, but “of interest,” always.

Thanks for your info, 'tis appreciated.

Well, it only took a couple of hours of scripting to develop a workaround. I’ve written a short script that basically automates the process of “F12, F3, add frame index numbers to filename, save image file.” I’ve tested it with the same .blend used to produce the above images, and it renders the frames with all the IPO driver deformations current and correct. And it adds virtually no additional render time per frame compared to using ANIMATE for a sequence. Nice.
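
The core of it is nothing fancy; roughly like this (from memory, with placeholder output path and frame range, so treat it as a sketch rather than the exact script):

    # Render each frame the "still" way (F12-style) and save it with a frame index.
    # Sketch only: the output path and frame range below are placeholders.
    import Blender
    from Blender import Scene, Window

    OUT_DIR = "/tmp/kata_frames/"   # placeholder output directory
    START, END = 1, 100             # placeholder frame range

    scn = Scene.GetCurrent()
    ctx = scn.getRenderingContext()

    for frame in range(START, END + 1):
        Blender.Set('curframe', frame)   # advance to the frame...
        Window.RedrawAll()               # ...and let everything update
        ctx.render()                     # same path as a single-frame RENDER / F12
        ctx.saveRenderedImage(OUT_DIR + "frame_%04d" % frame)   # the "F3" save step

Each frame goes through the same render call a manual F12 does, which is presumably why the driver values come out right.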

Wish I could fix the hair lighting with a script…

Sorry to resurrect this thread, or whatever the term is.

Could you share the script you wrote, Chipmasque? I have some major cyclic dependencies going on, but besides the refresh problems, everything is working. I’d like something that would fix this problem.