Cycles takes longer to render half the samples?

I expected my first rendering of Ann Darrow & King Kong, with particle hair filling the screen, to take a while, and it did – 7.25 hours @ 1024 samples:


In this image Ann’s hair is just acceptable in terms of render noise.

Looking for ways to optimize, I tried many different approaches and finally settled on altering the hair parameters for both Ann & Kong to reduce the number of samples needed from 1024. I removed all SSS nodes from the hair shaders for Ann’s hair and Kong’s longer hair (the silvery fur, most of which is hidden, needed the SSS node to work well at all), and I reduced the number of child hair strands in Ann’s wig from 125 to 100. Tests showed both hair systems would then render acceptably at 484 samples, Ann’s even better than @ 1024.

The only other changes made to the scene were some weight-painting tweaks on Kong and edits to the details of two masking images used for Kong’s skin. No mesh edits were made, no Subsurf level changes (max is 3, on the Kong & Ann body meshes, all else @ 2 or none), and no new materials or image textures were incorporated.
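For what it's worth, the sample counts above can be put in perspective with a rough rule of thumb (an assumption about path tracing in general, not a Cycles-specific guarantee): Monte Carlo noise falls off as one over the square root of the sample count.

```python
import math

# Rule-of-thumb only: path-tracing noise scales as 1/sqrt(samples).
def relative_noise(samples_before, samples_after):
    """How much noisier the lower-sample render is, all else being equal."""
    return math.sqrt(samples_before / samples_after)

# Dropping from 1024 to 484 samples raises noise by only ~45%,
# which is why shader simplifications (like removing the SSS nodes)
# can make the cheaper render look as good or better.
print(round(relative_noise(1024, 484), 2))  # 1.45
```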

I’m now rendering the scene @ 484 samples, and look at the render times:


My Render>Samples parameters are also shown in this UI capture.

So why in the name of the 3D gawds is Cycles taking an hour longer to render the same scene @ less than 1/2 the samples, even after optimizing? I’m hoping someone with more Cycles savvy than myself can clue me in to this totally illogical anomaly, so I can find a way to render my planned animation sometime before the original King Kong’s 90th anniversary!

PS. The file is much too large to provide for analysis – around 240 MB before packing in dozens of image textures, many of which are 2K & 4K – and a stripped-down file would defeat the purpose.

This is crazy. I dug up an auto-backup version of my file, made sure all the “optimizing” edits had been made, went through and set all my level-3 Subsurf to level 2, and used the Outliner to hide every object not visible in the rendered scene, both in the UI and for rendering (eyeball & camera icons off). For all my trouble, Blender/Cycles has ADDED yet another half hour to the rendering time. On top of that, when I use the Rendered Preview option, the UI reports it will take 5 hours to finish the image; when I render it for real (F12), it takes 8.5 hours. Yet every setting that has both a Preview & Render value (such as Subsurf) has exactly the same value for both, and there are no objects hidden in the UI (eye icon off) that are visible when rendered (camera icon on).

WTF?

EDIT – 2.78rc2 results: Same file as used in 2.76 (no changes) – adds 1 hour to render time. Using Adaptive Subsurf on only 3 items (regardless of settings) adds at least 3 hours. So much for progress.


Good news & bad news (isn’t that always the case?): I gave up trying to “optimize” the file, since all it did was increase the rendering times, and moved on to setting up other frames as I build a kind of storyboard. The last images were frame 400; this one is frame 535. Same exact file, same exact scene – the only differences are the poses and camera angle. But look at the render time – 3 hours less! That’s the good news. The bad news is I still can’t understand what the heck is going on, or why this should be, which means I have no sound basis for planning how long the animation rendering might take, because there is apparently no consistency in rendering times.

Any insights would be very welcome.

Are you using the GPU to render this?

If you are, it might explain your issues because that mode of rendering can sometimes see large swings or increases in render times on complex scenes. You might want to make sure your drivers are up to date for starters (in case they fix anything).

Thanks for the suggestion, Ace, but I’m using the CPU exclusively. My 2 GB Nvidia GTX 950 card just cannot handle the memory requirements for the scene – another reason I’ve tried optimizing it. But I think the memory issue is more related to the dozens of texture maps I use, many of them 2K & 4K – the BG image alone is a 7.2K-wide panorama – that’s a LOT of texture data to load up! In the run-up to rendering a frame, memory usage is around 350 MB until all the image maps load, then it jumps to around 1.4 GB or more, so texture data accounts for at least 2/3 of it.
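Some back-of-envelope math supports that 2/3 figure (this assumes images are held uncompressed in RAM at 4 bytes per pixel for 8-bit RGBA; float/EXR maps would cost 4x as much):

```python
# Rough uncompressed-texture memory estimate, assuming 8-bit RGBA in RAM.
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

print(texture_mb(2048, 2048))         # a 2K map: 16.0 MB
print(texture_mb(4096, 4096))         # a 4K map: 64.0 MB
print(round(texture_mb(7200, 3600)))  # a 7.2K x 3.6K panorama: ~99 MB
```

A few dozen maps in the 2K-4K range easily adds up to a gigabyte, which matches the observed jump from ~350 MB to ~1.4 GB once the images load.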

My main question is why a scene rendered at 484 samples takes as much time as, or more than, a scene (which is more complex in many ways) set for 1024 samples. It just defies logic, as far as I can tell.

Have you tried rendering in different layers and compositing them later?
Independent of any optimization you do on the scenes, layers may help you get faster renders with lower memory requirements.

That will be my last-ditch fall-back position, julperado, mainly because so much of every scene element interacts with everything else. Plus, I already realize I will have to render-farm this puppy (which I cannot currently finance, damn it!), and for that reason I want to keep as much of the entire scene “in camera” as possible.

There’s also the problem that Blender does not yet have well-developed traveling-matte capability (at least none I know of), and that would be essential for later compositing. I’ve also done some compositing with hair in the past, and it is troublesome to say the least. Toss in motion blur (essential for the heavy-duty action) and it becomes a bit of a nightmare. Not that I will avoid such nightmares if they become absolutely necessary, but I’d like to try all other solutions first.

Hi. Sorry, I’m not sure I understood you. Are you making estimations with the “Remaining” time Blender shows? That “Remaining” time is not accurate at all.

Edit:
Wait, don’t trust what I said above. Testing with progressive refine, the “Remaining” time is quite accurate from 50% of the render onward (at least with the BMW27.blend scene).

Yes and no. Yes, in that I can now use it to predict the render time with small error; and no, in that for the first 20-30 samples it is very inaccurate, so I discount the early estimates. It settles into a reasonably good prediction at around 30+ samples. I learned this by observing the change in predicted times and letting the files render to completion. In every case so far, initial estimates have been wildly inaccurate (22-33 hours for the files above), but after Cycles has rendered around 30-40 samples, the predicted times stabilize and are actually quite accurate – not to the minute, but definitely within a few tens of minutes at worst. It’s a very predictable pattern that I have observed repeatedly.

Another interesting pattern I’ve observed (but cannot explain) is that in some cases, the UI Preview Render prediction starts low and increases to a stable point, whereas in all cases, the full render (F12) starts out high (usually very high) and trims itself down.
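That settling-down behavior is consistent with a simple linear extrapolation, which is how progress estimates are commonly computed (an assumption on my part – I haven't checked Cycles' actual code). Any fixed startup cost (BVH build, texture loading) gets spread over the few samples finished so far, so early estimates overshoot badly and then settle as that cost is amortized. All numbers below are made up for illustration:

```python
# Hypothetical linear-extrapolation estimate: assume remaining time is
# (average seconds per sample so far) x (samples left). Numbers are made up.
def remaining_hours(elapsed_s, samples_done, samples_total):
    return elapsed_s / samples_done * (samples_total - samples_done) / 3600

setup_s = 1800.0     # hypothetical half hour of fixed startup cost
per_sample_s = 55.0  # hypothetical steady cost per sample

for done in (5, 30, 200):
    elapsed = setup_s + per_sample_s * done
    print(done, "samples ->", round(remaining_hours(elapsed, done, 484), 1), "h")
```

With these made-up numbers, the 5-sample estimate is ~55 hours, while by 200 samples it has settled near the true remaining time – the same wildly-high-then-stable behavior described above for F12 renders.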

If you are skeptical I would be happy to fully document my next rendering. I have two baking at the moment so it will be a while, but perhaps it would be instructive?

Sorry, I had edited my post above. I had assumed that you are using Progressive Refine, because of what your second screenshot shows. Is that so? I guess you do not plan to use progressive refine on the CPU when you render the animation, right?
Can I ask what your CPU model is?

Anyway, I don’t think I can help you with your problem – I have only questions, as you can see. Have you tried tests with Motion Blur disabled, to see if you get more logical render times without the weird behavior?

Sorry, I missed your edit while writing my post – internet time warps, lol. But yes, I’m using Progressive Refine at this point to help determine the best sample rate for the subject. I don’t think there is an estimate for the tiled option. I will disable P.R. once I get things a little more finalized and try some tiled renders for render-time comparison.

Motion blur of course increases render time, but it is also an essential part of this animation project, as much of the action will be fast enough that severe strobing would occur without it. So if I’m trying to get a handle on actual render times, I need it in the mix, even for scenes with little motion. I do set the shutter interval to a much smaller value for less vigorous scenes, in case that helps (I haven’t tested that particular factor), since the blur is minimal there anyway.
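The payoff of a shorter shutter can be sketched with simple arithmetic (the speeds and shutter values below are hypothetical, not taken from the scene): the length of a blur streak on screen scales linearly with the shutter interval, measured in frames.

```python
# Hypothetical numbers: blur streak length = screen-space speed x shutter.
def streak_px(speed_px_per_frame, shutter_frames):
    return speed_px_per_frame * shutter_frames

print(streak_px(40.0, 0.50))  # fast action, half-frame shutter: 20 px streak
print(streak_px(40.0, 0.10))  # same action, short shutter: 4 px streak
print(streak_px(4.0, 0.10))   # calm scene, short shutter: barely visible
```

So for a calm scene the short shutter produces sub-pixel blur, which is why reducing the interval there costs little visually.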

Processor: Intel(R) Core™ i5-4690K CPU @ 3.50GHz, 3501 Mhz, 4 Core(s), 4 Logical Processor(s)

I haven’t had good experiences with Motion Blur regarding render times (and other issues); I just use Vector Blur. I know that Vector Blur has some limitations, but have you ruled out Vector Blur as an alternative?

I used Vector Blur in making Kata, because it was really the only option then, and I had to live with a lot of problems and less-than-ideal results. Motion Blur is now so much better than it was in terms of the render-time increase, and so much better qualitatively than Vector Blur, that I did not really consider using Vector Blur.

Your question has provoked much thought (that’s good, btw, heh-heh) and I am beginning to envision more compositing, especially since I now have to introduce 6 to 8 Hellcat biplanes into an already rather crowded scenario. I really do not want to duplicate the biplane model, rig and crew for this purpose, so I will likely use compositing to bring them all on board, working from one master model/rig/crew and doing aircraft texture & minor crew costuming variations.

Hi. Someone here reported another limitation of Motion Blur in Cycles which can cause extreme slowness:
https://developer.blender.org/T49566
I mention it just in case you run into this problem during your project.

With the .blend file shared in the report, and using the GPU, rendering frame 96 completely hangs the graphical part of my system (I have to kill Blender from a tty on Linux).

By the way, if I remember correctly, in the Cosmos Laundromat project they did not use Motion Blur because of its slowness with hair.

Thanks for that info, YAFU. I’m starting to realize I’m maybe kind of lucky I can render these frames at all. I’ve managed not to crash hard yet, only had one or two hangs after a couple of hours-long editing sessions (feels like a memory leak, but what do I know), and Win10 has been pretty good about recovering from most of them.

I know I have a number of CPU strains going at once, but all are absolutely essential to the project, so I’m slogging ahead as best I can. BTW, I’ve tried GPU off & on, but I only have 2 GB on my CUDA card, so it never quite makes the grade – understandable given the hair, MB, and the large number of large image maps I use.

One ray of hope – once I move the camera back and the hair systems don’t occupy significant screen space, render times drop to well under an hour, even as low as 18 minutes, so maybe I’ll only have to farm out the heavy-duty stuff.

I’ve even solved the problem of my hair systems going flooey and becoming uneditable. Appending the entire scene to a new file did not work, but appending the entire file contents as individual items did. That factoid is for anyone else who runs into that particular bug.

Thanks for the thread…

In my case, I have been experiencing similar issues with hair in a simple scene (latest 2.78 official). On occasion, hair rendering in the viewport just slows down by roughly 300% (7 s to 21 s). After restarting Cycles, times go back to normal until I edit a material or particle system, then it slows down again. I wonder, and have no answer… something strange is going on.

Is this a typo? How can Cycles be “restarted”? Do you mean toggling the viewport mode from Solid to Rendered?