.blend file analyser?

Hello

I'm looking for some utility that could help me optimise .blend file size. Is there anything that could help me find out how many kB each part of a .blend file takes?
(for example: mesh: 5 kB,
scripts: 2 kB)

I will be thankful for any help.

I am curious: for what reason would you want to optimize the .blend file size or know the individual memory usage?
You might have a requirement to know that, but in general: it uses what it uses.
You can compress the file on save, and that's pretty much it.

Well, the reason is that the .blend file with my game character has grown to more than 4 MB and I am interested in where that memory went. I suspect the large .blend size is causing long loading times when I launch my file in the game engine.

It would be interesting if there were such a tool. I have a .blend file that frequently fluctuates between 50 and 130 MB, and one of the main offenders seems to be the Multires modifier: if I save it at a lower resolution (or in Edit Mode, for example) it saves at 50 MB; if I have it on full and save, it comes to 130 MB. It would be interesting to know if that's really the cause.

Maybe it could be done in C, right in Blender's source code… but it might take a long time until one applied it to all components of a .blend file.

I haven't found any external utility for this on the internet, so I'm putting this on my "When I learn C well enough" todo list.

Blender can display all entities in a scene; it's in your Properties panel under Scene.
If it isn't there, you can enable it in the preferences. Some snaketamer would have to look into the code to see whether it is possible to display the sizes there as well. But it's really not that easy to display the memory usage of data on the stack and heap; it requires a lot of effort. It's rather easy to display the total memory usage, but not the individual one.
And yeah, for the BGE it makes sense to cut down the size. For everything else it doesn't.

Or you can probably take the Outliner and switch it to view the datablocks; there is a bit of that information in there (it depends on what specifically you are looking for, so maybe not). As for a utility, it would probably be easier to code a Python addon to do it than C code (assuming the correct information is published via the Python API).

I think it would be very hard via Python unless the API already has the information. In C++ it's already hard to track the memory usage of objects and other variables; you often have to use OS libraries or third-party libraries, or else you have to code a reference counter for your objects, calculate the memory they use, and keep track of it all yourself.
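The standard library hints at both the appeal and the limits of the Python route: `sys.getsizeof` reports only the shallow size of an object, not anything it references, so any deep accounting still has to be coded by hand. A small, non-Blender illustration:

```python
import sys

# sys.getsizeof reports only the outer object's own size, not the
# objects it references, so deep accounting needs manual traversal.
data = [list(range(100)) for _ in range(10)]

shallow = sys.getsizeof(data)
deep = shallow + sum(sys.getsizeof(inner) for inner in data)

print(shallow < deep)  # True: the nested lists are not counted
```

The same caveat would apply to any addon that tried to sum up datablock sizes from Python: it only sees what the API chooses to expose.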
In C, which makes up most of Blender, it's even harder: you don't really have objects, you can only work with pointers. On the other hand it's "easier" to track, as you have to handle memory allocation yourself. So when you malloc, for instance:

int *ptr = (int *) malloc(10 * sizeof(int));  /* room for 10 ints */

You could keep track of the memory allocated, but it can get cumbersome, especially once you start to work with arrays and have to realloc() memory.
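To make that bookkeeping concrete, here is a toy ledger (in Python, purely illustrative) of what a C-side tracker would have to maintain around every malloc/realloc/free:

```python
# Toy allocation ledger: mimics the bookkeeping a C malloc/realloc/free
# tracker would have to do by hand. Handles stand in for pointers.
class AllocTracker:
    def __init__(self):
        self.live = {}       # handle -> size in bytes
        self.total = 0       # bytes currently "allocated"
        self.next_id = 0

    def malloc(self, size):
        handle = self.next_id
        self.next_id += 1
        self.live[handle] = size
        self.total += size
        return handle

    def realloc(self, handle, new_size):
        self.total += new_size - self.live[handle]
        self.live[handle] = new_size
        return handle

    def free(self, handle):
        self.total -= self.live.pop(handle)

tracker = AllocTracker()
buf = tracker.malloc(10 * 4)        # like malloc(10 * sizeof(int))
buf = tracker.realloc(buf, 20 * 4)  # grow the array
print(tracker.total)                # 80 bytes live
tracker.free(buf)
print(tracker.total)                # 0
```

In real C code every allocation site in the codebase would have to go through wrappers like these, which is exactly why retrofitting it onto an existing project is so much work.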
On top of that, you have to keep track of the page size, which varies with the OS and system architecture. It's the smallest memory unit the OS can allocate.
Usually, or rather traditionally, it is 4096 bytes. It especially matters if you use virtual memory or paging: you virtually map a contiguous block of memory onto your system memory. If the pages are smaller, the page table gets bigger; if the pages are bigger, you waste space with small variables, but the table is smaller… I better stop here :smiley:
I could go on for hours; suffice to say, keeping accurate track of memory usage is a non-trivial matter and very close to the hardware.
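The page size itself is at least easy to query at runtime; a quick Python check (4096 bytes is the traditional value, not a guarantee):

```python
import mmap

# The OS hands out memory in whole pages; this is the page size the
# current system actually uses (traditionally 4096 bytes on x86).
print(mmap.PAGESIZE)

# Anything smaller than a page still occupies a full page of address
# space, so allocations effectively round up to whole pages:
def pages_needed(nbytes, page=mmap.PAGESIZE):
    return -(-nbytes // page)  # ceiling division

print(pages_needed(1))                # 1: even one byte costs a page
print(pages_needed(4097, page=4096))  # 2: one byte over spills into a new page
```

Which is one more reason "how much memory does this use" has more than one correct answer: bytes requested and pages consumed are different numbers.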

I was just saying that information may be partially available via the datablocks (I don't know for sure). … Just for info, I am a professional embedded software engineer; I write code and hardware drivers for memory-restricted devices, so I'm intimately aware of the difficulties of tracking memory usage at runtime (i.e. not using the map file).

Now that you mention it, I actually remember… we had this somewhere already… is it possible it was somewhere talking about ARM chips? Doesn't matter - you still have my sympathies :stuck_out_tongue: As interesting as it is, often enough it's plainly a PITA :smiley:

At least the peeps in this thread who are not into coding now have an idea that you don't just wrap some tags around your code and get the memory usage. :wink:

Yes, and they are painful to learn; I miss the "old days". A few processors I've done setups on: 68HC11, M68331, M68332, ATM128, ATM168, ATM1287, MCF5253, STM32, i.MX25… and of course a bunch of embedded OSes (which make memory tracking even more painful if they don't manage their heaps correctly, or are just plainly coded for poo to start with). At that level, dynamic allocation is just a bad idea all around anyways…

… and yes, I'm just confirming Arexma's point: unless Blender has an abstraction layer for dynamic memory allocation per OS that is consistently used throughout the code base, this will take real effort.

My point with the datablocks is that you might be able to make a generalized approximation of the memory being used from the RNA data in the datablocks. I'm not saying it's so, or even saying it's a good approach; I'm doing nothing more than brainstorming one possible way to avoid the above-mentioned effort.

Personally, I would think the easiest approach is to export the models one by one as OBJ and load them into something like MeshLab for details. I'm not sure whether it shows the memory usage or not, but it has so much stuff built into it, I wouldn't be surprised to find that it does. If nothing else, the OBJ files on disk would provide a comparative check (model to model) and help identify the "biggest" models (along with their material files, of course)… granted, this isn't automated.
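Since OBJ is a plain-text format, even the comparative check can be partly automated; a minimal sketch that counts vertices and faces, the two things that dominate mesh size:

```python
# OBJ is plain text: "v x y z" lines are vertices, "f ..." lines are
# faces, so a rough complexity check is just line counting. (Prefix
# must be "v " so "vn"/"vt" normal and UV records are not miscounted.)
def obj_stats(path):
    verts = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                verts += 1
            elif line.startswith("f "):
                faces += 1
    return verts, faces

# Usage: obj_stats("character.obj") -> (vertex_count, face_count)
```

Running that over a folder of exports gives a quick model-to-model comparison without opening MeshLab at all.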

… I forget now why I even posted in this thread in the first place …

Using Quandtum’s post as inspiration, I can’t think of a way to get the individual memory usages for datablocks displayed in a list, but…

You could open a completely empty file, "append" your components to it one by one, and save the file after each appended component. Then look at each saved file's size.

This would be a pretty laborious process with a large file, but a file containing a single character might be doable.

  • Append the object… Is the object the source of the problem or is there some unnecessary junk in there?
  • Append just the textures…
  • Append just the mesh…
  • Append just the armature…
  • Append individual actions…
  • etc.
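Tallying the results of that append-and-save loop is simple enough to script; a sketch (the file names are made up, and save compression makes the attribution approximate):

```python
import os

# One saved .blend per append step, in order; the growth from one
# file to the next is roughly the size of the component appended
# in that step.
def size_deltas(saved_files):
    deltas = {}
    prev = 0
    for path in saved_files:
        size = os.path.getsize(path)
        deltas[os.path.basename(path)] = size - prev
        prev = size
    return deltas

# Usage sketch (hypothetical file names):
# size_deltas(["01_empty.blend", "02_mesh.blend", "03_actions.blend"])
# -> {"01_empty.blend": ..., "02_mesh.blend": ..., ...}
```

The biggest deltas point at the components worth investigating first.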

Note that actions that have very many keyframes can use up a lot of memory. Especially if the action is “baked” to have a keyframe on every frame for every bone. (I found this out the hard way ;))

Also note that if datablocks are not assigned to an object, then for them to be saved to a *.blend file and not discarded you will need to assign a "Fake User" to them.

Edit: Is there a way for Blender to save the “undo history” in the 2.5x series? I believe that the way Blender usually saves the undo history is to save a snapshot of the memory at every step. The undo history can grow very large if you let it but is configurable in user preferences.

I think bigbob was referring to the actual file on disk as opposed to what happens when it's loaded into memory; I would have thought it would be quite easy to parse the file and show grouped datablocks and their sizes.

However, the following is quite interesting to me:

My favorite graphics program of old was Corel's Photo-Paint, as it was a doddle to script: its internal history log used the actual scripting language, so initial scripts could be generated by performing the tasks manually, exporting the log, and then adjusting it to suit the (usually batch) requirements.

Blender 2.5 no longer seems to have a history log viewer, whereas I believe 2.49 did?

I'm guessing, correct me if I'm wrong, that since the guts of Blender are implemented in C with Python handling the interactions (menus etc.), and the idea is to, at some point, allow languages other than Python access to its core, it's not really sensible to "log" the Python commands, so to speak; also, some interactions are core (in C), such as selecting a vert, moving it, etc.?

That said, it would still be nice if it had a viewable history of some sort.