32 BPP Overkill?

I ran into a little problem while trying to work with large images in Blender (7680 × 4320, or 4× HD). It was just a little 8-bit-per-channel RGBA .png at a whopping 864KB. The problem is that upon conversion to Blender's internal 32-bit float format, this image becomes MASSIVE. Any operation performed on the image results in Blender crashing immediately, because I only have a paltry 1GB of RAM. You can see the results below.

Compiled with Python version 2.5.
Checking for installed Python… got it!
write exr tmp file, 1920x1080, C:\tmp\UV.blend_Scene.exr
Calloc returns nill: len=530841600 in compbuf RGBA rect, total 874400256
Calloc returns nill: len=530841600 in compbuf RGBA rect, total 874400320
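
For what it's worth, the failing allocation in that log lines up exactly with a full-float RGBA buffer for the 7680 × 4320 image. A quick back-of-the-envelope check (plain Python, nothing Blender-specific):

    # 7680 x 4320 RGBA at 32-bit float: 4 channels x 4 bytes each
    width, height = 7680, 4320
    channels, bytes_per_channel = 4, 4
    print(width * height * channels * bytes_per_channel)
    # -> 530841600, the exact "len=" in the Calloc failures above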

My question is really for any dev who may be browsing. Will it be possible in the future to limit internal conversion to 16 bits per channel? 32-bit float is overkill for most visual effects, and almost none of us are working with film (though I hope that last part changes soon). A half-float working format would greatly speed things up in the compositor and let Blender do a lot more before memory limits are reached. I know this may sound like whining, and I have no idea how difficult it may be to implement this kind of switch. I'll be spending $130.00 U.S. next week on a 2GB upgrade, but I still think this makes sense.
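
To put rough numbers on what a half-float working format would save, here's a minimal sketch using NumPy (the image dimensions are just this thread's 8K example):

    import numpy as np

    width, height, channels = 7680, 4320, 4
    samples = width * height * channels

    full_float = samples * np.dtype(np.float32).itemsize  # 4 bytes/sample
    half_float = samples * np.dtype(np.float16).itemsize  # 2 bytes/sample

    print(full_float)  # 530841600 bytes, ~506MiB
    print(half_float)  # 265420800 bytes, ~253MiB

Halving the per-sample storage halves every working buffer, which is the same reason OpenEXR's half type is popular as a storage format.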

I very much doubt it. There are a lot of advantages to having a consistent, simple bit depth, one that is easily represented in C with the 32-bit ‘float’ data type. Having extra conversions, with nodes/filters/render code all handling multiple bit depths internally, adds layer upon layer of complexity, and that complexity keeps scaling up, making it much harder to build new things and maintain old ones.

For the renderer, 32-bit float is needed for precision purposes (handling the interactions of light needs a lot more than just Photoshop-style work). For comp, working in full float gives a lot more freedom, and although some apps with huge teams (Shake/Fusion at least) can handle different types internally, I wouldn't be surprised if they mostly get upscaled to float for many purposes inside. I know Nuke only works in 32-bit float.
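
A toy illustration of that precision point (plain Python; the value is invented for the example): a dim linear-light value that survives fine in float is rounded away the moment it is stored at 8 bits per channel.

    # A dim linear-light value, e.g. shadow detail from a render pass.
    v = 0.0005

    # Quantize to 8 bits: round(0.0005 * 255) == 0, so the detail is
    # destroyed before any compositing even happens.
    q8 = round(v * 255) / 255.0

    # Push exposure up 100x, as a grade node might:
    print(v * 100)   # 0.05 -- detail still recoverable in float
    print(q8 * 100)  # 0.0  -- gone for good in 8-bit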

The fact is that RAM gets cheaper every day, and 64-bit will be widespread much earlier than any of the Blender coders will have the time and inclination to add ‘backwards compatibility’ for lower bit depths alongside all the necessary functionality, sprawled throughout Blender. The decision to be completely 32-bit float was made thinking forward, and it has already paid off by far, via the extra functionality provided and the code simplicity that lets things be added to e.g. the compositor so quickly with so few resources. And in a few years' time, when working in float is commonplace, it'll be great not to have to worry about all the extra cruft left in there that you'd need to hack around in order to push things further (it took After Effects a long time to get HDR/float capabilities, I presume for that reason).

By the way, that 864KB figure is misleading - I presume you mean the PNG file itself, compressed for storage on disk. Any app has to decompress that PNG into memory in order to work on it, at which point, even at a paltry 8 bits per channel, it takes up 7680 × 4320 × 4 channels × 1 byte ≈ 126MB - and once Blender converts it to 32-bit float, four times that, about 506MB (exactly the allocation failing in your log).
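
If you want to see the disk-versus-memory difference directly, here's a sketch using the Pillow imaging library (the file path is a placeholder, not from this thread):

    import os
    from PIL import Image

    path = "big_image.png"  # placeholder: any large RGBA PNG

    on_disk = os.path.getsize(path)         # compressed size on disk
    img = Image.open(path).convert("RGBA")  # decode into memory
    in_memory = len(img.tobytes())          # raw 8-bit pixel bytes

    print(on_disk, in_memory)  # e.g. ~864KB on disk vs ~126MB decoded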

Thanks. That's exactly what I wanted to know. I'm really kinda clueless when it comes to coding and the intricacies involved - I can barely write even the simplest of scripts. I had no idea about the precision that light calculations need.