Error rendering big images

I’m trying to render a 9646 x 5708 image, and Blender gives an error every time.
At a lower size it renders fine. I’m using a Pentium 4 3.0 with 2GB of memory.

Does anybody have any info about this?

Thank you!

Could it be possible that you are running out of memory?

Which operating system are you using?

Could you post the error please?

I’m using Windows XP SP2 with 2GB of RAM.

Calloc returns nill: len=800000000 in Combined rgba, total 489516836

I’m also trying with FarmerJoe (a great script for LAN rendering), and the size of the image makes Blender give the error on the slaves too (I don’t think it’s memory, because all the computers here have 2GB of RAM and I’m rendering in 64 parts, so I don’t think a single part can fill 2GB of RAM).

I found http://www.blender3d.com/forum/viewtopic.php?t=6344, which describes a similar issue.

It seems that it is trying to allocate too big a contiguous block of memory, and because it cannot allocate it, it returns nill.

Thank you for helping me with this. I’ll try a command-line render.
What’s strange to me is that FarmerJoe’s render instructions are all command-line already, and the slaves still give the error. Thank you all the same.

P.S. I’m trying the same render with Blender 2.41; if all goes OK I’ll report it here.

I’m trying now with the command line. One thing I notice is that the

Calloc returns nill

error appears right at the beginning of the render. That saves me hours, because I don’t have to wait for Blender to get near the end of the render before it gives me this error.
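For reference, the kind of invocation I mean is just a background render from the shell, something like this (scene.blend is only a placeholder name):

    blender -b scene.blend -o //render_ -f 1

i.e. open the file in background mode, write the output next to the .blend, and render frame 1.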

I am no expert on Blender’s internal functionality, but I think Blender needs to allocate an RGBA float buffer for the final image in any case, which in this case is about 800MB. You only get 2GB of virtual address space per process in 32-bit Windows, so after a few larger memory allocations and frees you possibly simply have no contiguous free block of that size anymore (this has nothing to do with how much RAM the machine has).
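A quick back-of-the-envelope check of that figure, assuming 4 float channels at 4 bytes each per pixel (the exact layout is my assumption, I haven’t checked the source):

    width, height = 9646, 5708
    bytes_per_pixel = 4 * 4  # RGBA, 32-bit float per channel (assumed)
    print(width * height * bytes_per_pixel / 1024.0 ** 2)  # roughly 840 MB

which is in the same ballpark as the len= value in the error, and already close to half of the 2GB address space in one contiguous chunk.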

I don’t know if there’s a way around this, but if the coders were smart enough, it should take less memory if you enable “Border” AND “Crop” and then render the image in a few chunks manually, so you never need the full buffer for the 8k x 6k image. (Uhm, is there anywhere you can edit the crop region numerically instead of guessing the size with Shift+B?)
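As a rough sketch of what I mean by chunks, here is just the math for the border rectangles (the 4x4 split is made up; as far as I know the border values are 0..1 fractions of the full frame):

    width, height = 9646, 5708
    tiles_x, tiles_y = 4, 4  # 16 chunks instead of one huge buffer
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            xmin, xmax = tx / float(tiles_x), (tx + 1) / float(tiles_x)
            ymin, ymax = ty / float(tiles_y), (ty + 1) / float(tiles_y)
            print("tile %d,%d: x %.3f-%.3f, y %.3f-%.3f" % (tx, ty, xmin, xmax, ymin, ymax))
    print("each chunk is only about %dx%d pixels" % (width // tiles_x, height // tiles_y))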

Another way, if it’s just not quite enough, would be setting the /3GB switch in Windows’ boot.ini and making the Blender binary “LargeAddressAware”. The latter can be done either with editbin from Visual Studio or by adding /LARGEADDRESSAWARE to the MSVC linker options before compiling, AFAIR… not very convenient, I admit :slight_smile:
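Concretely that means appending /3GB to the existing OS line in boot.ini, roughly like this (the disk/partition part will look different on your machine):

    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB

and then flagging the binary from a Visual Studio command prompt:

    editbin /LARGEADDRESSAWARE blender.exe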

I’ve seen many RAM-induced Blender crashes (big-resolution renders, huge polycounts, and of course fluid simulation) when the memory usage goes above about 1.3GB. This seems to be an inherent limitation of 32-bit Windows.

The same Blender file rendered at the same size with Blender 2.41 and everything went OK (it took 8:30h to render).

There’s really some problem with the latest version and memory management.

I can help you there from SolidWorks experience.
Theoretically XP will give 2GB to an application, but in reality it will only use up to about 1.6GB of memory without the /3GB switch and about 2.7GB with it. That is out of 4GB of total addressable memory.
If you want to use the switch and you have SP1, it definitely requires an MS patch first. Be warned: do your research first…
Even if you only have 2GB of RAM, you can still benefit from the switch.

There was an issue with memory use after the Orange recoding that Ton did a workaround for on Windows, I think; it required a different treatment than on Linux anyway, as I recall.
I remember seeing the entry in the committers archive because I did a test at the time, and about the size of render you are doing is the limit I could get out of it. For me it was somewhat less than the full 1.6GB, and it required even more memory at the end to actually save the image.
Something like 1.4GB, I think… can’t recall exactly, sorry.
I am not sure what the outcome of Ton’s work was… maybe the limit is still there and undocumented, or his fix was unsuccessful, but I would say even if it is OK you may have hit the limit without the switch.
I think I am right in saying that if you use YafRay rather than the internal renderer, its memory is a separate allocation and the total usage roughly doubles. HTH

This is only the internal renderer with AO, reflections, etc…

I hope this gets fixed in the next version.

Maybe Ton is the best person to ask about it.

I’ve had the same problem, and “unclezeiv” made a nice script to render “x” small images instead of one big one.

It’s in a really alpha state, but it works.
I’m using it on XP SP2 to render a 10000x6000 image divided into 60 1000x1000 pieces.

The script has a GUI and it is really simple to use.
You can find it here:

http://www.kino3d.com/forum/download.php?id=3675
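Putting the pieces back together afterwards can be done with PIL; a minimal sketch, assuming one PNG per tile and a made-up naming scheme (adjust it to whatever the script actually writes out):

    from PIL import Image

    tiles_x, tiles_y = 10, 6
    tile_w, tile_h = 1000, 1000
    full = Image.new("RGB", (tiles_x * tile_w, tiles_y * tile_h))
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            # hypothetical file naming: tile_<column>_<row>.png
            piece = Image.open("tile_%d_%d.png" % (tx, ty))
            full.paste(piece, (tx * tile_w, ty * tile_h))
    full.save("full_render.png")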

and here you can find one of the HD “blocks” of my render:


ciao

So the border+crop does indeed work? That’s good to know :slight_smile:

2.41 was the last Blender that had the “Fbuf” option to enable a float buffer for the final image, and if I understood Ton correctly, Blender now always uses a float buffer. This should explain why it works in 2.41 but not in 2.42 and later, unless you had Fbuf enabled.
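For this resolution the difference is roughly the following (my assumption: 1 byte vs 4 bytes per channel):

    pixels = 9646 * 5708
    print(pixels * 4 / 1024.0 ** 2)      # 8-bit RGBA: about 210 MB
    print(pixels * 4 * 4 / 1024.0 ** 2)  # float RGBA: about 840 MB

which would explain why the byte buffer still fits in the address space and the float buffer often doesn’t.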

But there was still something about mmap not being available in the Windows version yet, which got implemented for Project Orange to handle large renderings…