I have posted a similar question here before. I find that when I have a lot of polys, Blender loves my RAM, but it will never go above 50%. I have 10 GB, soon 16. It will use about 100% CPU during startup and when pylux is starting up, but during use it will never go above 30%. And when I'm doing water, cloth, and particle simulations, it's the same story of 30% CPU and 50% RAM, except it lags like nobody's business. Is there a way I can give it permission to use more power if necessary? Because obviously it doesn't feel it can.
Are you using 64-bit Blender? And a 64-bit operating system, obviously.
I only have 2 GB of RAM at the moment, so…
I can't test… :o
I don’t believe these simulation processes are multithreaded so if you have a multi-core processor it won’t use all the cores.
More RAM won’t speed anything up if Blender is not maxing out your current set-up.
Upgrading the CPU would give you more of a performance gain, although you would need a chip with high single-core performance, as many of Blender's simulations are only single-threaded at the moment.
Maybe try downloading an OpenMP build, though, and see if that helps with your processor utilization.
What are your machine's specs, by the way?
No, but simulations have to cache what they are doing. It shouldn't be so slow: with 10 GB of RAM, if it's only utilizing 30–50%, it could double that, cache twice as much, and be done in a little more than half the time! Something is seriously wrong with this picture!
I am running 64-bit; 16 GB of RAM won't do me much good on a 32-bit system. If you are onto something, let me know.
Moved from “General Forums > Blender & CG Discussions” to “Support > Technical Support”
All I can say is, I see people with worse specs than mine pull * so quickly it's unreal, but when I do it, it takes me minutes to hours and sometimes up to 10 crashes! I had been advised to get an optimized build off GraphicAll; that's what I have now. I will say that it didn't help much.
Just in case anyone wants to look, here are my specs:
AMD Phenom II X4 3.4 GHz BE (upgrading to an 1100T)
10 GB DDR3-1333 RAM (upgrading to 16 GB of G.Skill 2850)
AMD 6970 with 2 GB of GDDR5 RAM (going to CrossFire with a 6950, maybe another 6970)
That's all you should need to know in the way of my specs.
If anyone has any more ideas, please let me know.
Which operating system?
- Some people say it works better on Ubuntu, but I haven't had such luck; it's just as bad if not worse.
I’ve been doing plenty of fluid simulation lately on Ubuntu 64 with Fish builds from Graphicall (OpenMP enabled) and I can tell you they are processed as multithreaded tasks with all logical cores participating in the process simultaneously (not swapping the process from one core to another as usually happens when the process is single-threaded). My CPU is an i7-2600K (4 physical cores with hyperthreading enabled, which makes 8 logical cores).
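As an aside, the single-threaded versus multi-core distinction is easy to sketch in Python (Blender's scripting language). This is a generic illustration, not actual Blender simulation code; `simulate_frame` is a made-up stand-in for per-frame work:

```python
import multiprocessing as mp
import os

def simulate_frame(frame):
    # Hypothetical stand-in for per-frame simulation work (purely illustrative).
    return sum(i * i for i in range(10_000)) + frame

if __name__ == "__main__":
    frames = list(range(32))

    # Single-threaded: one core does everything, the rest sit idle.
    serial = [simulate_frame(f) for f in frames]

    # Multi-process: one worker per logical core, all participating at once.
    with mp.Pool(processes=os.cpu_count()) as pool:
        parallel = pool.map(simulate_frame, frames)

    assert serial == parallel  # same results, just spread over the cores
```

The results are identical either way; only the wall-clock time and the per-core utilization graph differ.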
That still doesn’t really answer my question. Which operating system are you normally running? Windows? Which version? Linux? Which distribution (and version)? MacOS? Which version?
Furthermore, the amount of RAM you have and the amount of RAM an application uses doesn’t have a huge correlation with speed. Perhaps your scenes simply aren’t complex enough to soak your entire available amount of RAM (not a great idea, BTW). In that case, it wouldn’t matter if Blender allocated all of your RAM; it would only process as fast as your CPU(s) allow.
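To make that point concrete: a CPU-bound loop finishes in the same time no matter how much extra RAM is sitting allocated next to it. A minimal sketch (the 50 MB figure is arbitrary, chosen only for illustration):

```python
import time

def cpu_bound_work(n):
    # Pure computation: speed depends on the CPU, not on how much RAM is free.
    total = 0
    for i in range(n):
        total += i * i
    return total

# A large allocation just sits in RAM; it does not accelerate the loop above.
padding = bytearray(50 * 1024 * 1024)  # 50 MB of idle memory

start = time.perf_counter()
result = cpu_bound_work(1_000_000)
elapsed = time.perf_counter() - start
print(f"result={result}, took {elapsed:.3f}s with {len(padding)} bytes allocated")
```

Doubling `padding` changes nothing about `elapsed`; only a faster CPU (or more cores, for threaded work) would.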
I'm sorry for the lack of clarification; when I said "7" I meant Windows 7.
I'm not sure if you understood what I was saying, and I grant you most likely know more about Blender than I do. However, I also assume you have a decent understanding of how RAM can affect "temp" files. Let me put this in terms of buckets of water.
I have 10 gallons of water in my bucket (the information to process).
I have a friend helping me pour the water into the pool (the pool being the finished result).
My friend, the one carrying the water, can lift a 5-gallon bucket (the amount my RAM and transfer speed will allow).
But the biggest bucket we have to carry the water from point A to point B holds 2.5 gallons (the max amount of information Blender is dumping).
This means that with my friend being the middleman, it will take 4 trips to get the water to the pool (10/2.5 = 4, just to show my work).
If my friend CAN lift 5 gallons of water, it would be more efficient to have him carry a 5-gallon bucket, reducing the trips from 4 to 2 (10/5 = 2, just to be clear where I'm at).
Do you see where I'm going with this? If Blender LET me have more room to process information, I could process more of its information.
It's not that I'm processing it any faster or slower; it's that I'm doing MORE in the same amount of time.
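The bucket arithmetic above can be written out directly. The gallon figures are the analogy's own hypothetical numbers, not actual Blender buffer sizes:

```python
import math

def trips_needed(total_gallons, bucket_gallons):
    # Number of trips = total data divided by chunk size, rounded up.
    return math.ceil(total_gallons / bucket_gallons)

print(trips_needed(10, 2.5))  # 4 trips with the 2.5-gallon bucket
print(trips_needed(10, 5))    # 2 trips if the full 5-gallon capacity is used
```

Halving the number of trips only helps overall, of course, if carrying the water (moving data) is the bottleneck rather than filling the bucket (computing it).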
My suspicions are still that you’re looking in the wrong place for your bottleneck (i.e. it’s your CPU that’s the issue, not the RAM). The various cache systems in Blender (fluid, particle, smoke, etc.) are simply storing the per-frame processing result… so when you’re done building the simulation, it can play back at a speed that’s closer to real-time. While Blender processes an individual frame, though, Blender will use as much RAM as is required.
My assumption when you talk about RAM and temp files is that you’re actually talking about the swap file (that is, using the hard drive as a fallback when a process uses too much RAM). In nearly all circumstances, you never want this to happen; it makes your computer run at a snail’s pace. Fortunately, most sane OS’s only push to swap when RAM is at or near capacity… also Blender has no influence on this at all.
It sounds to me like Blender is not using all of your CPU cores when you think that it ought to. Not every process in Blender is multi-threaded and therefore not every process can take advantage of multiple CPUs. Rendering can use all of your CPU power, but unless you’re running a build of Blender with OpenMP enabled, most simulation processes will only use a single thread (part of this is because a lot of simulation tasks are deterministic and difficult to make work well in a parallel processing setup)… so one CPU core will be maxed out while the others sit idle. In a scenario like that, many simple system monitors (like the default Windows one, IIRC), will show a 4-core processor as working at 25% capacity. While this might be mathematically accurate, it doesn’t really tell the whole story.
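That "25% on a quad-core" figure is just busy cores averaged over total cores, which is why a simple system monitor under-sells a saturated single-threaded process. A quick sketch of the arithmetic:

```python
def reported_utilization(busy_cores, total_cores):
    # Overall CPU % as shown by a monitor that averages across all cores.
    return 100.0 * busy_cores / total_cores

# One maxed-out core on a quad-core reads as 25% overall,
# even though that single core is completely saturated.
print(reported_utilization(1, 4))   # 25.0
print(reported_utilization(4, 4))   # 100.0
```

So a single-threaded simulation pegging one core of a Phenom II X4 would show up as roughly 25% total CPU, which matches the 30%-ish numbers reported in the first post.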
Then again, perhaps we need a file that will saturate your available RAM as a test case.
I'll start off by saying: if it truly is one core per thread and it's not keeping up, that makes sense. All I can say is, there is a problem somewhere. If you look at the percentages I posted in my first post, then it's kind of obvious that it COULD be using more. I'd like to add that the project that brought me to start this thread has over 4.8 million polys inside the viewport, and that's with all the subsurfs at 1 in the viewport. And believe me or not, this is still a fairly low-poly scene.
To add to the temp file debate: yes, Blender creates temp files in C: storage, but almost all programs produce files to keep in temporary memory while they are being computed. It's like typing into a calculator instead of Excel. Excel (the C: drive) will store data that can be potentially permanent if you were to name the file, whereas with a calculator, unless you go out of your way to make it SEMI-permanent, it will only last as long as you need it. Same concept with Blender. I don't know to what extent Blender does this, but all programs have temporary information that must be stored, like short-term memory.
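The calculator-versus-spreadsheet analogy maps onto how programs use scratch files generally. A generic Python sketch with the standard `tempfile` module (this is an illustration of the concept, not Blender's actual cache code):

```python
import os
import tempfile

# "Calculator" behaviour: a scratch file that vanishes when we're done with it.
with tempfile.NamedTemporaryFile(mode="w", suffix=".tmp", delete=True) as scratch:
    scratch.write("intermediate results live here only while computing")
    scratch.flush()
    exists_during = os.path.exists(scratch.name)  # True while in use

# "Spreadsheet" persistence would require explicitly saving under a chosen name;
# with delete=True the file is removed as soon as the block exits.
exists_after = os.path.exists(scratch.name)
print(exists_during, exists_after)  # True False
```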
OK, so I just did some testing. The scene I'm working on uses 3 cores at 4% while panning the view, and 4 cores at 90% while moving an object. I think you've found the culprit there with the CPU. So the big question: do you think an 1100T will be a big step up? I asked this on the Seven Forums too, and they said that would be my best bet. I personally think Intel's products aren't worth what they charge, but maybe the 2600K will go down in price in the next month or so. Ultimately I need at least 6 cores and would like approximately 3.3 GHz+. I have been informed the AMD FX series SUCKS, so I won't touch that. I know everyone with the i7 loves it; I just don't know if it's worth the money if I have to sacrifice $70 over what it's actually worth. Anyway, opinions are helpful. If someone can convince me that an i7 is worth the money and trouble, then I'll go from there. Ultimately it's my preference: I'm an AMD fan, but the facts just show the 2600K is better, so I need some testimonials lol
There are plenty of places to look for benchmarks between AMD and Intel, but out of personal experience, whenever I build a new system and ask myself AMD or Intel, I always spend the extra money. I find the performance to be worth the extra cash, especially for intensive applications like Blender and other 3D software, as well as multithreaded applications. Intel just does it better, in my opinion. That's not saying you can't have a capable system by going with AMD, and I truly hope AMD can become more competitive in the CPU arena at the same affordable cost, but right now they are not. That's why you never see Intel lower their prices to match: they don't have to, since they have the superior product.
In the end you can listen to input from a bunch of people and read all the benchmark tests and reviews, but it really comes down to what you need and what fits within your budget. I do know, though, that there are many i7 quad-cores out there that outperform AMD six-core processors. It also comes down to how efficiently the software utilizes those cores, and with 3D software that mostly means rendering and, in rarer cases, dynamics and deformation calculations.
That’s my take.
Thank you. I think that is the first post on any forum I've posted on that remotely covers what I need to be convinced. I'm going to redo my budget and see if I can work in the extra expense somehow, seeing as my mobo will cost about $40 less.