Subsequent render suddenly slows down - significantly

Hello.

I hope I’m not creating a duplicate topic, but I didn’t find an existing one here.
It’s about this peculiar behavior:

If I open Blender and immediately run the first render, it takes approximately 3 minutes.
However, if I make a small change in the scene (just move an object) and start the render again, the time suddenly jumps to, say, 9 minutes… and by the third or fourth render it’s already 30 minutes (that’s the maximum so far :slight_smile:).
All this without any fundamental changes to the scene.

Then I save, close and reopen Blender, start the render right away, and it renders in 3 minutes again. I do not understand it. :innocent:

Can you advise me on what I might have set up wrong, or why it behaves like this?
It’s quite annoying and limiting. Hopefully someone will have a good tip.

Thanks for the advice and help. :+1:

Have a beautiful day.

J&J

Which version of Blender and which GPU do you use?
Have you tried reinstalling the GPU driver?
Is the temperature of the hardware normal?
Is the render device perhaps set to CPU+GPU?
If you are using OptiX, change it to CUDA and test it.
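
If it is easier, the backend can also be switched from the Python console; this is only a rough sketch (property names from recent Cycles versions), the Preferences UI does the same thing:

```python
import bpy

# Switch the Cycles compute backend to CUDA for a comparison render
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # use "OPTIX" to switch back
prefs.get_devices()                  # refresh the device list

for dev in prefs.devices:
    dev.use = (dev.type == "CUDA")   # enable only the CUDA device(s), leave CPU off

bpy.context.scene.cycles.device = "GPU"  # keep rendering on the GPU
```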

There is a bug report of a similar issue in the link below.
Please check whether it matches your problem. :thinking:

Hi. Thanks for your reply. :+1:

Here are the answers:

Which version of Blender and which GPU do you use?

  • Blender 4.0.1, RTX 3080 10GB

Have you tried reinstalling the GPU driver?

  • No, not yet. I will try updating the driver first. :slight_smile:

Is the temperature of the hardware normal?

  • Yes, I haven’t detected any problem with elevated temperatures or overheating of the hardware. The maximum GPU temperature I’ve seen today is about 70°C.

Is the render device perhaps set to CPU+GPU?

  • No, GPU rendering only.

If you are using OptiX, change it to CUDA and test it.

  • I’m using OptiX; CUDA seemed slower when I tested it in the past (a long time ago). But I’ll try it and let you know.

There is a bug report of a similar issue in the link below.
Please check whether it matches your problem. :thinking:

It looks similar, but I don’t think it’s exactly the same. Thanks for the tip though, I’ll look into it.

I’m going to update the driver, and maybe try CUDA instead of OptiX.
I will come back and let you know. :sunglasses:


I’m back with a “quick” test.

First, I updated the driver from 4633 to 5599.

Then I tried rendering with OptiX. Here’s the scene, to give you an idea:

1st render after opening Blender - 4:08 (loading the kernel took longer than usual, but only in this case)

  1. I made a small change (moved one face on the shelf) and started rendering - 5:53

  2. I just closed the result and ran the render again - 11:00 (approx.)

  3. I closed Blender, reopened it and started the render again - 3:05

  4. Same render, no changes - 2:34

  5. Same render, no changes - 3:42
    The memory peak was 7936M the whole time.

Then I tried CUDA:

1st render after opening - 4:20
2nd - no changes - 7:53
3rd - no changes - 3:50
4th - no changes - 3:50
5th - a change like in the OptiX case - 8:08
6th - no changes - 3:50
7th - no changes - 3:50
The memory peak was 8364M the whole time :slight_smile:

Weird. :face_with_diagonal_mouth:
CUDA showed more stable results, but it also didn’t avoid the time increase after a small change.

So much for the results of this small test; hopefully it helps a little. :slight_smile:

Only Blender and a system diagnostic program were running the entire time.

I think some variation in rendering time is possible. :thinking:
Try rendering an animation of about 5 to 10 frames.
If each frame is processed in roughly the same amount of time, I don’t think there is a problem.
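
A rough way to log each frame’s time to the system console, to spot a gradual slowdown (only a sketch; it uses the standard bpy render handlers, and the 10-frame range is arbitrary):

```python
import bpy
import time

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 10            # short 10-frame test

_start = {"t": 0.0}

def _frame_start(scene, *args):
    _start["t"] = time.time()

def _frame_done(scene, *args):
    print(f"Frame {scene.frame_current}: {time.time() - _start['t']:.1f} s")

# render_pre / render_post fire once per frame during an animation render
bpy.app.handlers.render_pre.append(_frame_start)
bpy.app.handlers.render_post.append(_frame_done)

bpy.ops.render.render(animation=True)
```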

The same happens for me; a difference in rendering time occurs (a simple scene-composition test).
The time difference between the first and second render is large, and there is little difference between the second and third. :slightly_smiling_face:

Are you using “persistent data” in your render settings? As far as I’m aware, it’s the only setting that is actually meant to affect the next render’s time. Maybe saving the scene’s data could overwhelm your system’s memory if the scene is heavy?
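
For reference, that setting lives here in the Python API (just a quick sketch for checking or toggling it):

```python
import bpy

render = bpy.context.scene.render
print("Persistent data enabled:", render.use_persistent_data)

# Turning it off forces Cycles to rebuild the scene data for every render,
# at the cost of longer pre-processing each time.
render.use_persistent_data = False
```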


Good tip, and I have to say it gave me hope.
However, the test showed the opposite. :face_with_spiral_eyes:

I re-rendered the same scene, same frame.

1st - right after opening Blender - 3:15
2nd - ran the render again after completion - 6:50
3rd - started again after completion - estimated time 20:00 (I stopped it after 8 minutes)

The test was done without any changes to the scene, only with “persistent data” turned off.

Thank you for your tip, time and effort.
But it looks like this isn’t the cause. :frowning:
It is true that this “problem” occurs mainly in larger scenes with more detail. But the solution remains hidden. :disguised_face:

It does sound like a bug, then.

Maybe it would be possible to isolate what exactly causes it? If you turn off different features of your scene, you might find if something in particular is at fault.

  • If you render with CPU, is the render time still inconsistent? (No need to do the full render; turn down the samples to test this, see the quick sketch after this list.)

  • Is there any simulation or physics? If yes, try turning them off.

  • Any heavy modifier anywhere in the scene? I am especially thinking about subdivision modifiers set to a very high quality level or to adaptive subdivision.

  • Is motion blur active in the render settings? I have seen motion blur have inconsistent glitches in the past. If it is active, try without to see if there is a difference.

  • Maybe it’s an object in particular. Make a copy of the file for testing, then try deleting half the objects in the scene (and make sure to do a recursive cleanup so the data is really gone from memory). If the problem is gone, try doing the same, but deleting the other half of the scene. If the problem is gone in both cases, it’s related to the size of the scene. If the problem persists in at least one of the cases, it’s probably a specific feature or object of the scene.
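
For the CPU point above, here is a quick way to set up the low-sample test (just a sketch; the sample count and resolution are arbitrary values chosen for timing, not quality):

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = "CPU"              # force CPU for the consistency test
scene.cycles.samples = 100               # low sample count, enough to compare timings
scene.render.resolution_percentage = 50  # half resolution keeps the test short

bpy.ops.render.render()                  # render the current frame
```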

Yes, it does look like either a bug or a setup error… We will see…

But the first point brought positive news.
I tried the CPU (OptiX), resolution at 50% of HD, 100 samples.

Five renders, always with the same time of 1:20, ± a few seconds.
I tried changing the scene as before, moving objects, and still no problems.

So the problem could be somewhere in the GPU, the GPU settings, or something like that…

There are no animations or physics in the scene.

Some objects have subdivisions… mostly at levels 2-3… a few cups were at 5. I don’t think that should be a big problem.

Motion blur was not active.

As for the “half the scene” test: after splitting the scene, the GPU renders without a problem in both cases.
It is true that smaller scenes (a kitchen, a smaller room) don’t tend to be a problem; it rather appears in larger ones (a living room with kitchen and corridor, …).

So, based on these findings, do you think there is a way to eliminate this behavior through settings? Or do I have to accept it as a fact of life for bigger scenes, and restart Blender more often when I want to render faster?

Thank you for your opinion.

If it happens only on the GPU and only in large scenes, then that’s the behavior I would usually expect from the GPU’s memory being overloaded.

The reported memory consumption fits within your GPU memory, but I am wondering if the real usage might actually be larger and you are triggering “out of core rendering” without realizing it.

This is something that Cycles can do when the GPU’s memory is full. It sends some of the scene’s data to the system memory instead. The result is slower than a scene that fits fully on the GPU (but faster than CPU, and at least, it will render instead of crashing).

That said, I have no idea why it would render fast the first time and then be overloaded afterward.




So, according to your previous comments, this is an interior scene? I am wondering what could possibly be that heavy to render in an interior.

  • Do you have objects with very dense geometry, like remeshed sculpts or photoscans? Is the scene’s polygon count in the multiple millions (when all the subdivision is turned to its render levels)?

  • Do you have lots of high resolution textures?

  • Are you rendering at a very high resolution? Blender’s rendering window stores the image in a heavy, high quality format, so that does take some memory too.


I agree with your insights, and I seriously don’t know why it behaves like this… :face_with_diagonal_mouth:

I think an interior scene can also be demanding, but that’s definitely not the case here yet. We’ve had more complex scenes where Blender behaved the same way… In this one, the memory peak is around 7900M, which is certainly not the maximum :blush:
It contains 14.5 million triangles (before rendering).

If it would help, I can pack the resources and send you the .blend file, if you want to look at it.

Another question about the GPU: are there any settings for tuning how Blender uses memory, or how it frees it? Or is that entirely up to the system? Maybe I have something set wrong here. :sunglasses:

We try not to use large objects, photoscans, and other things that could cause a problem, or at least exclude them.

By default we render at 1080p and 1000 samples, and that was the case during these tests as well.

We will see … :slightly_smiling_face:

If you are willing to send it, I would be curious to look at it. If it’s the image you posted, I am almost sure I can optimize a scene like that one to fit a much smaller size with little to no visual difference.

As far as I know, there isn’t really a way to change the memory use of a given process. The rendering process needs all data in its raw, uncompressed form to render with efficiency. This means that polygons and textures have a minimum cost that can’t be prevented and the only way really to reduce that cost is to reduce the amount of polygons and textures.
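
As a rough way to see where that raw cost comes from, something like this can be run in the Python console (only an estimate; it assumes 4 bytes per texture channel and ignores modifiers, so the numbers Cycles actually sees can differ):

```python
import bpy

# Upper-bound estimate of uncompressed texture memory (4 bytes per channel assumed;
# 8-bit images are usually stored more compactly, so treat this as a ceiling).
tex_bytes = sum(img.size[0] * img.size[1] * max(img.channels, 1) * 4
                for img in bpy.data.images if img.size[0] > 0)
print(f"Textures, uncompressed estimate: {tex_bytes / 1024**2:.0f} MiB")

# Triangle count of the base meshes only; render-level subdivision and other
# modifiers are NOT included, so the real render-time count can be much higher.
tris = sum(sum(len(p.vertices) - 2 for p in obj.data.polygons)
           for obj in bpy.data.objects if obj.type == 'MESH')
print(f"Base-mesh triangles: {tris:,}")
```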

It will be interesting to learn about the optimization possibilities and to do some things better (or differently), so why not. The scene contains basic modeled furniture (including a kitchen that is not in the shot) and objects mostly from BlenderKit. :slightly_smiling_face:
However, it is still a work-in-progress version without refinements and fine details.
I’ll send it in a message.

And given that memory usage, it’s clear to me that everything needs to be unpacked and processed in its raw form.

We’ll see if you come up with something in the scene.

Thanks. :+1:

There may always be elements that interfere with rendering.
But finding them is a pretty tedious job. :smiling_face_with_tear: :sweat_smile:

I did not receive any message. Have you tried sending it yet?

Sorry for the hold-up; first I tried to upload it directly, which didn’t work, and then I was away from my PC.

You should have it via the Drive link. :sunglasses: :blush:


First thing, I see you are using instancing where possible. That’s good.

You are using a lot of textures, but their resolution isn’t excessive, at least for the ones I checked. There isn’t much optimization to be done here. It might be possible to do better if you plan for it in advance, reusing textures in multiple materials (for example, if you have two wood materials, you can reuse the same wood texture but modify its color in the material editor).

I think the main problem is with the polygon count. Remember that when you render, the subdivision modifier will subdivide the object to its “render” levels. This means that if you have set the render levels higher than the viewport levels, you will have a much higher polygon count at render than what you see in the viewport. In this scene’s case, if I set every object’s viewport subdivisions to the same number as their render subdivisions, the scene goes past 36 million triangles.
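
To spot these quickly, a small script like this (just a sketch) lists every Subdivision Surface modifier whose render level is higher than its viewport level:

```python
import bpy

for obj in bpy.data.objects:
    for mod in obj.modifiers:
        if mod.type == 'SUBSURF' and mod.render_levels > mod.levels:
            # each extra level multiplies the face count by roughly 4
            factor = 4 ** (mod.render_levels - mod.levels)
            print(f"{obj.name}: viewport {mod.levels} -> render {mod.render_levels} "
                  f"(~{factor}x more faces at render time)")
```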

There are a few objects that are especially at fault:

This fabric adds 5 million triangles when set at its render subdivision level.

This group of objects alone adds 16 million triangles when each is set to their render subdivision levels. Don’t forget that subdivision is exponential: each level is 4x the previous one.

If I go around the scene and disable all the subdivision modifiers I can find, the memory peak falls below 6 GB and the scene can render on my older GPU.




Now, let’s talk about fixing this.

I could see 3 ways to go about it.

  • Deactivate or lower the subdivisions on objects that are far from the camera or out of view. Bring them back only if it’s needed for the specific render you are doing.

  • Apply the subdivision at its highest needed level, then decimate the object until it just barely starts losing visual quality. Apply a weighted normals modifier to help fix the surface afterwards if needed. This method is destructive, so use only if you are confident you know what you are doing. I was able to bring your cup to under 40 000 triangles and it looks good at any distance that matters (and it no longer has any modifier, so it could be instanced easily).

  • Set every subdivision modifier that can be to “adaptive”. Adaptive subdivision isn’t just for micro-displacement. It’s a feature that changes the amount of subdivision each face has based on the camera’s position, so you could use it to automatically have subdivision only where it’s needed.

How to activate adaptive subdivision correctly:

1- Make sure Cycles is in experimental mode.

2- Put the subdivision modifier last in the modifier stack. Adaptive subd. can only be used on objects where this is possible (that’s still a lot of objects in your scene).

3- Set the usual subdivisions to 0 before switching modes, then click the “adaptive” checkbox. This is important, as there is an issue/bug where both the usual and adaptive levels will be used on top of each other, so you want to deactivate the usual subd.

4- Adjust the “dicing scale” on the modifier. The quality of the adaptive mode works very differently than the traditional way. If the dicing scale is set to 1, Blender will subdivide the object until each polygon is 1 pixel wide when viewed from the camera. A smaller number means a higher quality. If you are using adaptive only to subdivide the object (no displacement), a dicing rate of 2 or 3 should be enough. There is also a global dicing scale in the main render settings, which acts as a multiplier for all the modifiers. I should also mention that changing the render’s resolution will affect the amount of subdivision, as it’s pixel based.
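
For completeness, here is how those steps could look in Python (only a sketch; it assumes the per-object Cycles properties use_adaptive_subdivision and dicing_rate that recent Blender versions expose for the modifier’s checkbox, and it acts on the active object):

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'   # step 1: experimental feature set
scene.cycles.dicing_rate = 1.0              # global dicing scale, multiplies the per-object value

obj = bpy.context.object                    # assumes the relevant object is active
for mod in obj.modifiers:
    if mod.type == 'SUBSURF':
        mod.levels = 0                      # step 3: zero the usual levels so they
        mod.render_levels = 0               # don't stack on top of the adaptive ones
        # step 2 (modifier last in the stack) still has to be checked manually

obj.cycles.use_adaptive_subdivision = True  # the "Adaptive Subdivision" checkbox
obj.cycles.dicing_rate = 2.0                # step 4: 2-3 is usually enough without displacement
```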




I have also found some cases of overlapping geometry on these objects:


[screenshot: candle]

Your sampling settings could be improved. This is a rather noisy scene, so I doubt having a noise threshold of 0.01 will do much (no part of the scene will fall below that threshold before the render completes). Also, a complex interior scene will benefit from setting a “min samples” value manually; this will avoid potential denoising problems.

I would try these settings:
[screenshot: suggested sampling settings]
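
Roughly, in Python terms (illustrative values only; the exact numbers in the screenshot above may differ):

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.use_adaptive_sampling = True
cycles.adaptive_threshold = 0.05    # looser than 0.01, so it can actually stop clean pixels early
cycles.adaptive_min_samples = 64    # manual minimum to avoid under-sampled, hard-to-denoise areas
cycles.samples = 2048               # maximum samples; most pixels stop well before this
```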

If you don’t know why I am doing these changes, you should probably learn what exactly the sampling settings do.

In this thread, I explain them in detail.




Another problem I found is that the white wall material has a color value of more than 100% white. This breaks the laws of physics, makes your wall glow, and is bad for noise and performance.
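
A quick way to find such materials is to scan the Principled BSDF base colors (a sketch; it only checks the Base Color input, not textures):

```python
import bpy

# Flag Principled BSDF base colors with any RGB channel above 1.0.
# Even plain white paint reflects only about 80% of light, so values at or
# above 1.0 are physically implausible and hurt noise and performance.
for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    for node in mat.node_tree.nodes:
        if node.type == 'BSDF_PRINCIPLED':
            r, g, b, _ = node.inputs['Base Color'].default_value
            if max(r, g, b) > 1.0:
                print(f"{mat.name}: base color ({r:.2f}, {g:.2f}, {b:.2f})")
```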




You might want to check your objects’ normals; a lot of them are inverted. This shouldn’t affect performance, but it could break some material features.
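
The fix is Mesh > Normals > Recalculate Outside (Shift+N) in Edit Mode; to do it for several selected objects at once, a small sketch:

```python
import bpy

# Recalculate normals to point outward for every selected mesh object.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.normals_make_consistent(inside=False)
    bpy.ops.object.mode_set(mode='OBJECT')
```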




Here is a file where I did those changes. I went with the adaptive subd. method. Does this render better on your end?

Edit: Just after posting, I realized you would surely prefer that I not spread the scene around. I have moved the link to a private message.


Hi.

Thank you so much for your work. :sunglasses: :+1:
I only got a little time for testing today.
However, what I can confirm is the difference in memory usage: the original file peaks at about 8200M, the optimized file at 6600M. :wink:

So something must have changed.
And the result of the “time test” was also a pleasant surprise.
Even though I kept 2048 samples (optimized file), the time was a constant 3 min 20 s. Even after more tests and small changes than before, it was still the same time. Very interesting. So it looks like the reason for the slowdown is hidden somewhere here. :+1:
(As a reminder, the original file with 1024 samples took 2:50 and more.)

I will certainly study your report again and focus on the details. I will do more tests…

The render looks very nice, by the way. Once again I didn’t pay attention to the orientation of the normals on the walls. Thank you for the correction. :blush:

Only the floor has somehow “changed”. It could be because you were working in 4.1 and I’m on 4.0.1? But I don’t want to deal with that now; it’s not the topic of this thread.

I will definitely run tests on another scene where I noticed the same behavior… and I’ll try to apply your advice.

I will write back.
Once again, many thanks for your effort. :pray:


I am indeed using Blender 4.1; I didn’t realize there was anything version-specific in the scene.

The reason why 2048 samples isn’t really ruining the render time is that I set the noise threshold to a lower quality than before. The way it was adjusted before, the noise threshold couldn’t really do its job, which is to speed up the render. The sampling adjustments I made mean that Cycles will put less effort than before into the cleaner areas of the image and more effort into the noisier areas. This will give you a render with more even noise and fewer chances of artifacts hiding in some corners or on reflective objects.

Hello.

It looks like we found the problem and the solution.
The subdivisions were probably the problem. It’s still a question why they loaded up the memory and then “accumulated”, or at least slowed down the rendering, but after optimizing them and using adaptive subdivision, the problem disappeared.

It probably only “goes crazy” once you reach a certain level. That’s why it doesn’t show up in smaller scenes, or when splitting this test scene in half.

I continued working with this scene today; I prepared the final render earlier in the day and everything is still OK. No need to restart Blender.
I’m still working in 4.0.1.
I also tried the new 4.2, and there the render still took more than an hour. So, another mystery? :grinning:
If you want, you can do a test and run the file you sent me in 4.2, and maybe you won’t understand it either :grinning:
I didn’t have much time or energy to devote to it :slight_smile: Maybe one day.

So I marked your notes from reviewing the .blend file as the solution. :+1:

Thank you all for your cooperation and help.
So, in the future, setting subdivision to adaptive, especially for large scenes and more complex models, may be the right approach.

Why the slowdown got bigger and bigger with each new render is still a bit of a mystery, but the important thing for me is that I know how to work with it now.

Once again, many thanks for the many new insights and great help. You have moved me forward.

Thanks.

Maybe I’ll post the final pictures here when it’s done :slightly_smiling_face:

Have a beautiful day.
