Virtual Memory

2.8 GHz P4 with 1 MB L2 cache, 512 MB RAM, 1406 MB virtual memory.
Yet, I still receive error messages saying I don’t have enough memory.
I used a few metaballs with low polygon counts, kept shading low, and rendered with Yafray from within Blender.
It goes smoothly when I render at preview size, but as soon as I try something like 1280x1024 it goes boom.
I checked my performance stats, and as far as I can see my CPU only uses 70% of its capacity at most.
Does anyone have any suggestions? Does this happen on Linux too? I’m about to make the big switch.

I know this question may have been posted before, but somebody should make subforums or something. There are over a zillion posts in here.
Peter

Put bluntly, an application will crash if it allocates more than a specific amount of memory, regardless of where that memory resides.

On my system Blender crashes at 1 GB of memory [which happens to be the maximum RAM my motherboard supports, and what I have installed], but I’d imagine the hard limit is more like 4 GB on a 32-bit system
[I’m not sure if it varies from one Windows version to another].
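For what it’s worth, the 4 GB figure falls straight out of the pointer width. A quick back-of-envelope in Python (my own arithmetic, nothing Blender-specific):

```python
# A 32-bit pointer can address 2**32 distinct bytes.
address_space = 2 ** 32
print(address_space / 2 ** 30)   # 4.0 -> 4 GiB of total address space

# 32-bit Windows reserves roughly half of that for the kernel, so an
# ordinary process usually gets only 2 GiB of usable address space
# (3 GiB with the /3GB boot option).
```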

It is fairly easy, when chasing a high-quality render, to increase the render size or subdivision settings far enough that Blender tries to allocate too much memory.
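To get a feel for how fast a big render eats memory, here is a rough estimate in Python. The buffer layout and the per-sample OSA assumption are mine, for illustration only, not Blender’s actual internals:

```python
# Rough render-buffer estimate (assumed layout: float RGBA, worst
# case where the renderer keeps one buffer per OSA sample).
width, height = 1280, 1024
bytes_per_pixel = 4 * 4                  # 4 channels x 4-byte floats
osa_samples = 8

one_buffer_mb = width * height * bytes_per_pixel / 2 ** 20
print(f"single buffer: {one_buffer_mb:.0f} MB")                # 20 MB
print(f"with {osa_samples}x OSA: {one_buffer_mb * osa_samples:.0f} MB")
```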

But anyway, enough off-topic rambling… Windows manages virtual memory size fine now [since Windows 2000]; set it to automatic and try again.

For a minute there I thought I found it.
I made Windows manage my virtual memory and started rendering again.
Yet, the problem persisted.
But I checked my task manager again and it seems I have a bottleneck.
My CPU only works at an average of 10%, which is surprisingly low.
Considering that I have an 800 MHz front-side bus, the bottleneck must be either the 512 MB RAM or the ATI Radeon 9200 128 MB video card.
There seem to be a few issues with the ATI and Blender, but it was an expensive card and I’m not really thinking about upgrading that.
So I’m willing to buy another 512 MB RAM.
But in the meantime I’d probably have to downsize my blends I suppose.
Do you think maybe the metaballs are too heavy?
Tomorrow I’ll try a few tests; now it’s off to bed.
Peter

Put the ATI up on eBay. If you plan to use Blender, I don’t care what you paid for the ATI: you have the wrong card for Blender. Unfortunately the only game in town for now is nVidia. You can put even a cheapo nVidia card in there and I bet Blender will run just fine.

I also had an expensive ATI in my system. Not any more.

Blender reports in the Info header how much memory it is using
[you can also see it in Task Manager].
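If you would rather watch it from outside Blender, here is a minimal sketch that does what Task Manager does, using psutil (a modern third-party module I am assuming here, not something from this thread):

```python
import psutil  # third-party: pip install psutil

# Print the resident memory of every running Blender process --
# roughly what Task Manager's 'Mem Usage' column shows.
for proc in psutil.process_iter(["name", "memory_info"]):
    name = proc.info["name"] or ""
    if "blender" in name.lower():
        rss_mb = proc.info["memory_info"].rss / 2 ** 20
        print(f"pid {proc.pid}: {rss_mb:.0f} MB resident")
```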

If you’re using too much memory and having to swap a lot, then yes, more RAM would be good.

Metaballs use a fair amount of memory [particularly if you have a bunch of them], but nothing like a high subdivision level [5 or higher is not necessary] or a high-poly mesh [in the millions of polygons].
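To put numbers on the subdivision warning: each subdivision step roughly quadruples the face count, so level 5 is already a thousandfold increase. A simplified model of my own, ignoring mesh edge cases:

```python
# Each subdivision level splits every quad into four, so the face
# count grows by 4x per level (simplified model).
base_faces = 1_000
for level in range(6):
    print(f"level {level}: {base_faces * 4 ** level:>9,} faces")
# Level 5 turns a modest 1,000-face mesh into ~1,024,000 faces --
# right at the million-polygon territory mentioned above.
```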

It is peculiar that Blender would not use all of your CPU… though it only does so during operations like recalculating metaballs or rendering.

512 MB of RAM should be enough for some moderately complex scenes.
A Radeon 9200 should work pretty well with Blender [yes, ATI has some quirks, but the old problems are gone on my system with current drivers and the current version of Blender].

Metaball calculation, even with just 32 of them, will take up all your CPU power… can you describe your scene in more detail, or mention any other problems?

If you’re running a render and seeing 10% CPU utilization, I can tell you right now that you must be swapping the hell out of your poor computer and disk drive. (I’ll bet that the disk-drive light never turns off, does it?) Even 70% would be inexplicably low: the CPU should have work to do 100% of the time, all the time.
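One way to confirm that diagnosis while a render runs, sketched with the modern psutil module (my suggestion, not a tool anyone in this thread had):

```python
import time
import psutil  # third-party: pip install psutil

# Sample RAM and swap pressure every few seconds during a render.
# Climbing swap usage alongside ~10% CPU means the disk, not the
# CPU, is doing the work.
for _ in range(10):
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(f"RAM used: {vm.percent:.0f}%   swap used: {sw.percent:.0f}%")
    time.sleep(5)
```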

As far as I know, Blender does not use the video card for rendering so that really can’t be the problem.

I believe that the correct solution is to simplify the problem that you are presenting to the computer. Any way that you can.

ATI graphics card + metaballs = Blender crash.
Last week there was a post from ROUBAL about that:
https://blenderartists.org/forum/viewtopic.php?t=23964&start=30

[yes, ATI has some quirks, but the old problems are gone on my system with current drivers and the current version of Blender]

Are you using a different version from the standard 2.36?

BTW, it seems I may have to move my questions to the Yafray forum,
because Blender itself does a fine job at rendering.
Unfortunately, the results are quite different as well.

I believe that the correct solution is to simplify the problem that you are presenting to the computer. Any way that you can.

I just reduced the polygon count to a level I can’t live with.
I turned off OSA,
but naturally that’s not the answer to my prayers.
I’ll go and see the Yafray forum,
tnx for all your help.
Peter

Same scene, same lighting conditions.
1600x1200, full OSA, DOF, you name it.
Deleted the metaballs and put a few mesh cubes in their place.
The whole thing rendered in under 10 minutes.
CPU at an average of 50%, only 300 MB of VM used.
I now understand there’s a reason why most people don’t use metaballs at all.

The downside of all this?
It’s just not fun modelling while constantly having to think, “Hmm, is my PC going to crash when I do this?”

If you could stuff your machine with several gigabytes of RAM and actually be able to do it (and I’m not sure that /Windows/ can do that…) you might be able to do the job. But when a machine starts “thrashing,” all hope is lost. The performance-degradation curve of thrashing is not a linear slope; it becomes exponential, like the crook of your knee. You “hit the wall,” and die. The only way to solve the problem is to throttle things back … some kind of governor or load-control … so that the machine does not /attempt/ to “go there.”
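You can see that “wall” in the classic effective-access-time formula; the timings below are illustrative numbers of mine, not measurements:

```python
# Effective access time = (1 - p) * t_ram + p * t_disk, where p is
# the page-fault rate. Illustrative timings, not measurements.
t_ram_ns = 100                # ~100 ns for a RAM access
t_disk_ns = 10_000_000        # ~10 ms to service a fault from disk

for p in (0.0, 0.0001, 0.001, 0.01):
    eff = (1 - p) * t_ram_ns + p * t_disk_ns
    print(f"fault rate {p:7.4%}: {eff:>11,.0f} ns per access")
# A mere 1% fault rate makes the average access ~1000x slower than
# pure RAM -- that is the exponential 'knee'.
```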

Please note that no single process under Windows can use more than 2 GB of RAM. This is one of its current limitations.
The workaround is to spread a task across multiple processes and use inter-process communication.
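A minimal sketch of that workaround with Python’s multiprocessing module; render_tile is a hypothetical stand-in for real per-tile work:

```python
from multiprocessing import Pool

def render_tile(tile_index):
    # Hypothetical per-tile work. Each worker is a separate process
    # with its own 2 GB address space on 32-bit Windows, so no single
    # process has to hold the whole job.
    return f"tile {tile_index} done"

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        for result in pool.imap_unordered(render_tile, range(16)):
            print(result)
```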

Memory swap is the most obvious bottleneck, of course; hard drives are very slow stand-ins for RAM. Unless you are generating animated clouds or water, one idea to reduce CPU time might be modelling the metaball stuff and then converting it to a mesh.
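For reference, the conversion step in today’s Blender Python API (bpy did not exist in this form back in 2.36; in that era the equivalent was Alt+C, “Convert Object Type”):

```python
import bpy  # only available inside Blender

obj = bpy.context.active_object
if obj is not None and obj.type == "META":
    # Replace the metaball with a fixed mesh; the isosurface is no
    # longer recalculated on every edit or at render time.
    bpy.ops.object.convert(target="MESH")
```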

The performance-degradation curve of thrashing is not a linear slope; it becomes exponential, like the crook of your knee. You “hit the wall,” and die. The only way to solve the problem is to throttle things back … some kind of governor or load-control … so that the machine does not /attempt/ to “go there.”

:o

one idea to reduce CPU time might be modelling the metaball stuff and then converting it to a mesh.

Ah good, I didn’t know about this.
I’ll find out and keep you posted.
Peter

I gotta be one of the few with a working ATI card; did this one in a couple of minutes.

http://www.blenderman.org/modules/coppermine/albums/userpics/10162/mETAS.JPG

And yes, the image worked at 1024x768 as well.

Edit: Don’t know if it counts as a lot of metaballs, but the number is 384 balls.

I’ll be damned…
BTW, nice metallic texture.

The only question is: was it rendered with Yafray or Blender’s internal renderer?
Peter