Pydrivers = extreme system loading?

Well, I’ve got the Eulers mystery under control and am proceeding well with my bone-driver code, but I’ve experienced a number of very annoying system slowdowns while using the file I’m writing. They happen at random intervals, not associated with any particular action, although they’ve always involved moving the bones (manual, keyboard, or timeline control): Blender will freeze, my hard drive will thrash like crazy, the system page file usage bloats to near maximum (from a norm of around 400Mb to nearly 2Gb), and my comp is essentially useless. Sometimes Blender recovers after a long time (10-15 minutes), but just as often I have to call up Task Manager and shut it down, and even that takes forever because of the HD activity.

This only started happening when I started writing/using the file. Nothing in it seems unusual, just the usual number crunching, and except for these intermittent hangups, it works exactly as expected and looks real good in real time when the Timeline’s running.

So, is this a “known issue”? I have to de-bug the code and tweak the parameters by using it in real time, but these hangups are really killing productivity.

I’d be happy to post the current code if it would help, but this doesn’t seem to be related to its functions, since it usually works fine, and the hangups can occur even when it’s not directly active.

A few tests seem to show this as a nasty bug. The same file with no pydrivers causes no probs. With pydrivers, every call to the code causes the WinXP system Commit Charge/Page File Usage to go up, and it never comes back down. Running the Timeline (lots of calls to the pydrivers) runs the tab up really fast, and the hangups are caused by bumping up against the 2.05Gb Commit Charge limit.

I’ve run scripts from both the Scripts window and the Text Editor with nothing like this occurring, but this is my first experience with pydrivers. My code is unlikely to be a factor, as I’ve run most of it outside the pydriver context with no issues at all.

Anyone else using pydrivers noticed anything like this happening?

I’m using release version 2.44, btw.

Could you paste the code you’re using?


Still a WIP – some of the statements are still being tweaked for proper math & values, but the code runs OK:

import Blender
from Blender import *
import math

# cut and paste as needed for individual bone functions
#    ROT_Wspc =     ROTSlist[0]
#    ROT_Pspc =     ROTSlist[1]
#    ROT_Aspc =     ROTSlist[2]
#    ROT_X_Wspc = ROT_Wspc[0]
#    ROT_Y_Wspc = ROT_Wspc[1]
#    ROT_Z_Wspc = ROT_Wspc[2]
#    ROT_X_Pspc = ROT_Pspc[0]
#    ROT_Y_Pspc = ROT_Pspc[1]
#    ROT_Z_Pspc = ROT_Pspc[2]
#    ROT_X_Aspc = ROT_Aspc[0]
#    ROT_Y_Aspc = ROT_Aspc[1]
#    ROT_Z_Aspc = ROT_Aspc[2]

def HipScalers(DrvrBone):
    # get list of rotation matrices    
    ROTSlist = getBoneData(DrvrBone)
    # decompose matrices to components
    ROT_Pspc = ROTSlist[1]
    ROT_Aspc = ROTSlist[2]
    ROT_X_Pspc = ROT_Pspc[0]
    ROT_X_Aspc = ROT_Aspc[0]
    # DrvrBone is 'R.Femur'
    # set driving ratios, rotation to scale
    # 150 is max absolute bone rotation in X
    Rat_Xsize = 1.6/150
    Rat_Ysize = 1.2/150
    Rat_Zsize = 0.4/150
    #set driving ratios, rotation to rotation
    Rat_Xrot = -15.0/150
    Rat_Yrot = -8.0/150
    Rat_Zrot = -30.0/150
    # calc absolute bone rotation
    Sign = 1
    AbsRot_X = math.modf((360 - ROT_X_Pspc + ROT_X_Aspc)/360)[0] * 360
    # account for backward rotation
    if (AbsRot_X > 360 - abs(ROT_X_Aspc)):
        AbsRot_X -= 360
        AbsRot_X *= 3
        Sign = -1
    #calc current driven values
    Targ_Xsize = 1.0 + AbsRot_X * Rat_Xsize * math.sin(AbsRot_X/2 * math.pi/180)
    Targ_Ysize = 1.0 + AbsRot_X * Rat_Ysize * math.sin(AbsRot_X/2 * math.pi/180)
    Targ_Zsize = 1.0 + AbsRot_X * Rat_Zsize * math.sin(AbsRot_X/2 * math.pi/180)
    Targ_Xrot = (AbsRot_X * Rat_Xrot * math.sin(AbsRot_X/2 * math.pi/180)) * Sign
    Targ_Yrot = (AbsRot_X * Rat_Yrot * math.sin(AbsRot_X/2 * math.pi/180)) * Sign
    Targ_Zrot = (AbsRot_X * Rat_Zrot * math.sin(AbsRot_X/2 * math.pi/180)) * Sign
    # adjust for left side bone
    if DrvrBone == 'L.Femur':
        Targ_Xrot *= -1
        Targ_Yrot *= -1
    # convert Eulers to quats
    Targ_RotEul = Mathutils.Euler(Targ_Xrot, Targ_Yrot, Targ_Zrot)
    Targ_RotQuat = Targ_RotEul.toQuat()
    # construct return list
    TargList = [Targ_Xsize, Targ_Ysize, Targ_Zsize, Targ_RotQuat]
    return TargList

def ThighCrease(DrvrBone):
    # get list of rotation matrices    
    ROTSlist = getBoneData(DrvrBone)
    # decompose matrices to components
    ROT_Pspc = ROTSlist[1]
    ROT_Aspc = ROTSlist[2]
    ROT_X_Pspc = ROT_Pspc[0]
    ROT_X_Aspc = ROT_Aspc[0]
    # DrvrBone is 'R.Femur'
    # set driving ratios, rotation to scale
    # 150 is max absolute bone rotation in X
    Rat_Xloc = -23.0/150
    Rat_Yloc = 7.50/150
    Rat_Zloc = 1.0/150
    #set driving ratios, rotation to rotation
#    Rat_Xrot = -15.0/150
#    Rat_Yrot = -8.0/150
#    Rat_Zrot = -30.0/150
    # calc absolute bone rotation
    Sign = 1
    AbsRot_X = math.modf((360 - ROT_X_Pspc + ROT_X_Aspc)/360)[0] * 360
    # account for backward rotation
    if (AbsRot_X > 360 - abs(ROT_X_Aspc)):
        Rat_Xloc = 0
        Rat_Yloc = 0
        Rat_Zloc = 0
    Targ_Xloc = AbsRot_X * Rat_Xloc
    Targ_Yloc = AbsRot_X * Rat_Yloc
    Targ_Zloc = AbsRot_X * Rat_Zloc
    # left side adjustment
    if DrvrBone == 'L.Femur':
        Targ_Zloc *= -1
    # construct return list
    TargList = [Targ_Xloc, Targ_Yloc, Targ_Zloc]  #, Targ_RotQuat]
    return TargList

def KneeScalers(DrvrBone):
    # get parent bone of DrvrBone
    ParentBone = getParentBone(DrvrBone)
    # get list of driver's rotation matrices    
    ROTSlist = getBoneData(DrvrBone)
    # get list of parent's rotation matrices    
    ROTSlist_PAR = getBoneData(ParentBone)
    # decompose matrices to components
    ROT_Pspc = ROTSlist[1]
    ROT_Aspc = ROTSlist[2]
    ROT_X_Pspc = ROT_Pspc[0]
    ROT_X_Aspc = ROT_Aspc[0]
    ROT_Pspc_PAR = ROTSlist_PAR[1]
    ROT_Aspc_PAR = ROTSlist_PAR[2]
    ROT_X_Pspc_PAR = ROT_Pspc_PAR[0]
    ROT_X_Aspc_PAR = ROT_Aspc_PAR[0]
    #calc driver rotation relative to parent bone
    AbsRot_X = math.modf((360 - ROT_X_Pspc + ROT_X_Aspc)/360)[0] * 360
    if AbsRot_X >= 180:
        AbsRot_X = 360 - AbsRot_X
    print AbsRot_X    # debug output
    AbsRot_X_PAR = math.modf((360 - ROT_X_Pspc_PAR + ROT_X_Aspc_PAR)/360)[0] * 360
    print AbsRot_X_PAR    # debug output
    AbsRot_X = AbsRot_X - AbsRot_X_PAR
    print AbsRot_X    # debug output
    Rat_Xsize_KSA = 0.30/150
    Rat_Zsize_KSA = 0.70/150
    Rat_Xsize_KSB = 0.30/150
    Rat_Zsize_KSB = 0.50/150
    Xsize_KSA = 1.0 + AbsRot_X * Rat_Xsize_KSA
    Zsize_KSA = 1.0 + AbsRot_X * Rat_Zsize_KSA
    Xsize_KSB = 1.0 + AbsRot_X * Rat_Xsize_KSB
    Zsize_KSB = 1.0 + AbsRot_X * Rat_Zsize_KSB
    TargList = [Xsize_KSA, Zsize_KSA, Xsize_KSB, Zsize_KSB]
    return TargList

def getParentBone(ChildBone):
    # specify Armature object
    Armtr = Object.Get('FemBodArmtr')
    # specify Armature
    ArmtrAR = Armature.Get('FemBodAR')
    # get the pose data from Armature object
    pose = Armtr.getPose()
    # get bone
    C_Bone = ArmtrAR.bones[ChildBone]
    # get parent bone's name (getBoneData expects a bone name)
    ParentBone = C_Bone.parent.name
    return ParentBone

def getBoneData(DrvrBone):
    # specify Armature object
    Armtr = Object.Get('FemBodArmtr')
    # specify Armature
    ArmtrAR = Armature.Get('FemBodAR')
    # get the pose data from Armature object
    pose = Armtr.getPose()
    # get bone
    D_Bone = ArmtrAR.bones[DrvrBone]
    # get posebone
    D_PBone = pose.bones[DrvrBone]
    # calc Worldspace matrix
    WSMatrix = D_PBone.poseMatrix * Armtr.matrixWorld
    # get ROT transforms from bone matrices
    # and convert to Euler (degrees)
    ROT_Wspc = WSMatrix.rotationPart().toEuler()
    ROT_Pspc = D_PBone.poseMatrix.rotationPart().toEuler()
    ROT_Aspc = D_Bone.matrix['ARMATURESPACE'].rotationPart().toEuler()
#    ROT_Bspc = D_Bone.matrix['BONESPACE'].rotationPart().toEuler()
    # construct list of data
    ROTSlist = [ROT_Wspc, ROT_Pspc, ROT_Aspc]
    return ROTSlist
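
In case it helps anyone follow the math: the modf trick those functions share just wraps the rotation difference into the 0-360 degree range. Here it is in isolation (plain Python, no Blender needed; the function and argument names are my own):

```python
import math

def abs_rot_x(rot_x_pspc, rot_x_aspc):
    # Wrap (360 - pose-space X + armature-space X) into [0, 360),
    # same formula as in HipScalers/ThighCrease/KneeScalers above.
    # math.modf returns (fractional, integer) parts, so [0] keeps
    # only the fraction of a full turn.
    frac = math.modf((360 - rot_x_pspc + rot_x_aspc) / 360.0)[0]
    return frac * 360

# e.g. a 90-degree pose rotation with a zero rest rotation wraps to 270,
# and a -30-degree pose rotation wraps to 30
```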

I think it started with 2.43. My computer would suddenly become terribly unresponsive because Blender had used up all my memory and swap memory was being used.

This happened randomly and slowly while weight painting a mesh with pydrivers, but much more quickly while playing an animation of an armature with alt+a. A memory usage bar showed Blender taking my remaining physical memory (900MB) in less than a minute.

My pydrivers are here.

Thanks for the feedback, tolobán, I’d say that’s a confirmation. I have even less RAM than you, so would be even more susceptible. Good to know my code isn’t likely at fault. One avenue of testing would be to see if this is confined to only pydrivers used with armatures/bones, since that’s a common aspect of our mutual experience.

There’s a deeper issue here, I think. Half a year ago I was working on a MakeHuman port to Python in Blender. It was working okay, but I was getting huge memory leaks. Each time I stopped the script and restarted it (hence reloading about 200MB of data from the disk), the commit charge would jump dramatically and never go back down (even after the script was killed). I had to restart Blender once in a while to get around it.

I talked to Ton and the guys but they didn’t listen (mostly because I didn’t have quite enough evidence to give them). So maybe this can grease the wheels a bit more.

I’m doing some testing now. At first glance, there does seem to be a consistent drop in my free RAM as a pydriver is used, but I’m testing to determine as narrowly as possible the actual circumstances: whether it’s all pydriver usage, or just that associated with an Armature, or with Bones, or with Bones driving Bones. I’m using a very simple scene setup with a very simple pydriver to try and cut down on the possible variables. If it looks like a consistent pattern is emerging, I’ll file a bug report.

I could get the memory leak to happen before with something as simple as this:

c = []
for x in range(100):
    c.append(range(100000))

Run this a few times and then look at your memory usage. I’m at work, so I can’t try it.

At the moment I’m focusing on pydriver usage. Do you mean that the leak occurs with non-pydriver scripts as well (such as your example)?

Man, that’s some leakage! Running that loop code just once, the mem usage reported by Task Manager under processes jumped from 27.6 Mb (immediately after opening Blender) to 147.2 Mb. Doing File-> New made no change in mem usage.

Time to see if it’s related to creating that huge list (the append operation).

EDIT 2 -----------------
Your loop does not, I think, indicate a true leak, just a whopping big chunk of memory used by the 10-million-element list that’s created. However, some further tests show that once allocated, memory usage doesn’t increase when the loop’s run repeatedly unless the size of c (the list) gets bigger. This may be a Python rather than a Blender issue, in that the mem allocation for the list c doesn’t seem to be dynamic except in the “up” direction: it’ll expand as needed, but not shrink when the list gets smaller.

Knock the range values back to more reasonable levels and run the loop a number of times – you’ll see the mem stay static after the first run, because the variable doesn’t need any more memory.

File->New won’t change the mem allocation because c is a Python variable, not tied to the scene file in any fashion. Run the loop, check the mem usage, do File->New, run the loop again with same or lower range values, and mem usage stays the same. c is still in memory across any file changes, and can only be cleared by quitting the current Python interpreter instance.

The difference with pydrivers is that the mem usage seems to be cumulative for the same script (and thus the same variables and variable size, at least as far as lists are concerned).

The testing of the loop’s been helpful though, because it shows how mem usage can be consistent across file changes.
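
A quick way to see the size-driven part of this without Blender, in plain Python (this only illustrates that a list’s allocation scales with its element count – it says nothing about Blender’s own allocator; the helper name is mine):

```python
import sys

def list_footprint(n):
    # Size of the list object itself (its internal pointer array),
    # not counting the integer elements it references.
    return sys.getsizeof(list(range(n)))

same_a = list_footprint(1000)
same_b = list_footprint(1000)   # rebuilt at the same size: same footprint
bigger = list_footprint(2000)   # a bigger list needs a bigger allocation
```

Rebuilding the list at the same size costs nothing extra, which matches what I saw when re-running the loop with unchanged range values.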

After a couple of hours of testing, I can say that there’s definitely a memory usage anomaly connected with running pydriver scripts. In all cases I tested, Blender’s memory usage rose as calls were made to the pydriver functions. This was the case when both objects and bones were driven by pydrivers, though it seems the increase in mem usage was slightly greater when bones were being driven.

In trivial cases, such as one IPO channel being driven on a single object, the cumulative mem usage increase even over an extensive run time was small and probably wouldn’t be noticeable or troublesome. However, the mem usage increases significantly as the number of IPO channels being driven increases. The difference between a small number of channels in each of a number of objects, and an equal number of channels on one object, wasn’t that great. The biggest factor seems to be the total number of channels being driven, and the type of object being driven – driving bones seems to make the problem worse.

As best I can determine without more formal testing capability, there’s a small increase in memory usage every time a call is made to a function. It can be the same function called repeatedly, the same function called by a number of objects, or numerous functions being called at the same time – all of these cases contribute to a rise in mem usage over time.

I call this an anomaly because 1) the behavior isn’t shown when using ordinary scripts (at least not in my experience), 2) I cannot say how the memory allocation is being done, and 3) I cannot say whether any memory is tied up by faulty referencing (i.e., a true “memory leak”).

I’ll report it as a bug, with my test files, though given the length of time it seems to take to get bugs even looked at, I don’t expect anything to happen in my lifetime.

I’ve stumbled across a workaround for the massive memory use that can result from using pydriver scripts: simply minimize Blender.

Example from my project:
Open file, Blender mem usage = ~ 35Mb
Minimize Blender, mem usage drops to 5.5Mb
Restore Blender, mem usage rises to only 6.7 Mb
Run 180 frames of animation with my pydrivers.py, mem use soars to 208Mb !!!
Minimize, back to 5.5Mb
Restore, back up to only 6.7Mb again

This was very consistently successful. A bit of a kludge, but it might keep your comp from dying from mem starvation, and reduce the need to quit & restart.

I’m on WinXP, btw, not sure if it’ll work with other OSs.

Thanks for taking the time to do all those tests and confirm this bug, please don’t assume it will not be fixed.

I have had my frustrations reporting bugs, but Blender developers are great people and some bugs were fixed, even when they had been closed, when I finally could find the elusive cause.

I’m not really assuming the bugs won’t ever be fixed, just that it will likely take a very long time, based on the status of reports on the Bug Tracker. But the important thing is that the situation has been identified, and it’s good to have a workaround in the meantime. At least I can continue on my project, without interruptions by either system overload or forced restarts.

ARGH!!! It seems my workaround isn’t doing what I thought. Minimizing Blender does reduce Mem Usage as reported in Task Manager’s Processes tab. However, checking the Performance tab of TM, it seems that the memory is not really freed up – the available RAM value stays at the low value caused by the previously high Mem Usage figure. And minimizing doesn’t change the Page File usage either, so the workaround is basically ineffective – the bug’s sneaky and disguised itself.

So, Blender is no longer using a huge chunk o’ RAM, but that RAM is not immediately returned to the system. That, mis amigos, is the classic definition of a memory leak.

Ironically, it seems that the RAM does “leak” back into the “available” category in small dribs and drabs, but it’d take the better part of an hour for 300+Mb to drip back in at the rate I observed.

Mmm, perhaps it’s a WinXP problem; I have the same memory problems with other apps…

Maybe you could try this program.
I know it’s quite old, but maybe it’ll help you…

No, a classic memory leak is when it’s NEVER released. As Willington points out, if it’s (eventually) returned… it’s a WinXP thing and it’s SOP (standard operating procedure). XP keeps it loaded in case it’s needed again. It’s released when the system needs it.

edit: my bad, assumed too much!

I have to admit I’m not an expert in these matters, so I’ll stand corrected in the terminology.

However, the accumulative mem use with pydriver scripts is a Blender issue, and from what I can discern, the chunk of memory no longer needed after minimizing does not immediately become available again when a demand is made.

Run 180 frames of my project file (extensive pydriver use)
MemUsage > 270Mb (from a start of 8Mb); Free Mem < 10Mb (from a max of 300+Mb)
Minimize & restore: Mem Usage =~ 10 Mb; Free Mem starts to rise slowly
Run Timeline again (memory demand)
Free Mem starts to drop back down without first recovering the apparent deficit
Bottom line, about 200Mb of unavailable memory

WinXP’s sloppy mem management may well be contributing to the problem, but the accumulative use of RAM isn’t part of that. Blaming an OS is a common practice (and Windows lends itself to blame, does it not :rolleyes: ?), but apps are written to perform under an OS, so it’s a matter of practical programming to take its idiosyncrasies into account whenever possible.

Although Free RAM does recover slowly, it seems that the Page File Usage and Commit Charge do not, and these both have limits that, once reached, can severely interrupt normal system operation. Again, I’m not an expert, but it seems that these resources should also be returned when the app’s memory needs drop by a factor of 50 or so.

'nother edit --------------------------
I’ve found the likely reason for driving Bones being more expensive in terms of mem usage increase than Objects: each Object makes one call per frame to a pydriver function per IPO channel being driven, while a Bone will make two calls to the same function per frame, per channel. This may or may not be normal operation, but in any case, it sheds some light on the difference between driving Objects and driving Bones.
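
That call-count difference compounds quickly over a Timeline run. A rough back-of-the-envelope tally (my own illustration, assuming the one-call-per-channel vs. two-calls-per-channel pattern I observed holds):

```python
def pydriver_calls(frames, channels, calls_per_channel):
    # Total pydriver function calls over one Timeline run:
    # one evaluation per driven IPO channel per frame,
    # multiplied by how many calls each evaluation makes.
    return frames * channels * calls_per_channel

# A 180-frame run driving 6 IPO channels:
object_calls = pydriver_calls(180, 6, 1)  # Objects: one call per channel per frame
bone_calls = pydriver_calls(180, 6, 2)    # Bones: two calls per channel per frame
```

So if each call leaks even a small fixed amount, driving Bones would accumulate memory roughly twice as fast as driving Objects, which fits what I measured.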