First off: I see you are using instancing where possible. That's good.
You are using a lot of textures, but their resolution isn't excessive, at least for the ones I checked, so there isn't much optimization to be done here. It might be possible to do better if you plan for it in advance by reusing textures across multiple materials (for example, you have 2 wood materials: reuse the same wood texture in both and just adjust its color in the shader editor).
I think the main problem is the polygon count. Remember that when you render, the Subdivision Surface modifier subdivides the object to its "render" level. This means that if you have set the render level higher than the viewport level, the polygon count at render time will be much higher than what you see in the viewport. In this scene's case, if I set every object's viewport subdivisions to match its render subdivisions, the scene goes past 36 million triangles.
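If you want to check this on your own machine, here is a minimal sketch using Blender's Python API. The properties are standard bpy, but treat the script as a quick diagnostic, not a polished tool, and the face estimate assumes quads:

```python
import bpy

# List every object whose Subdivision Surface modifier renders at a higher
# level than it shows in the viewport, and roughly estimate the face count
# it will produce at render time (each level multiplies faces by ~4).
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    for mod in obj.modifiers:
        if mod.type == 'SUBSURF' and mod.render_levels > mod.levels:
            est = len(obj.data.polygons) * 4 ** mod.render_levels
            print(f"{obj.name}: viewport {mod.levels}, render {mod.render_levels}, "
                  f"~{est:,} faces at render time")
```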
There are a few objects that are especially at fault:
This fabric adds 5 million triangles when set to its render subdivision level.
This group of objects alone adds 16 million triangles when each is set to its render subdivision level. Don't forget that subdivision grows exponentially: each level has 4x the faces of the previous one, so a 1,000-face mesh at level 4 is already around 256,000 faces.
If I go around the scene and disable every subdivision modifier I can find, the memory peak falls below 6 GB and the scene can render on my older GPU.
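If you want to reproduce that test quickly, here is a sketch that turns off every Subdivision Surface modifier without deleting it, so you can compare peak memory and then switch everything back on:

```python
import bpy

# Disable all Subdivision Surface modifiers for both viewport and render,
# keeping them in the stack so they can be re-enabled afterwards.
for obj in bpy.data.objects:
    for mod in obj.modifiers:
        if mod.type == 'SUBSURF':
            mod.show_viewport = False
            mod.show_render = False
```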
Now, let’s talk about fixing this.
I could see 3 ways to go about it.
- Deactivate or lower the subdivisions on objects that are far from the camera or out of view, and bring them back only when a specific render needs them (a rough script sketch for the distance-based part is after this list).
- Apply the subdivision at its highest needed level, then decimate the object until it just barely starts losing visual quality, and add a Weighted Normal modifier afterwards if the surface needs fixing. This method is destructive, so use it only if you are confident you know what you are doing. I was able to bring your cup under 40,000 triangles and it looks good at any distance that matters (and since it no longer has any modifier, it could easily be instanced).
- Switch every subdivision modifier that supports it to "adaptive". Adaptive subdivision isn't just for micro-displacement: it changes how much each face is subdivided based on the camera's position, so you can use it to automatically put subdivision only where it's needed.
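For the first option, here is a crude sketch of what that could look like as a script. It only looks at distance from the camera (it won't catch objects that are close but out of frame), and the 10-unit cutoff is just a placeholder to tune for your scene:

```python
import bpy

scene = bpy.context.scene
cam_location = scene.camera.matrix_world.translation
max_dist = 10.0  # placeholder cutoff in scene units, adjust for your scene

# Lower the render subdivision level on meshes far from the camera.
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    dist = (obj.matrix_world.translation - cam_location).length
    if dist > max_dist:
        for mod in obj.modifiers:
            if mod.type == 'SUBSURF':
                mod.render_levels = min(mod.render_levels, 1)
```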
How to activate adaptive subdivision correctly:
1- Make sure Cycles is in experimental mode.
2- Put the subdivision modifier last in the modifier stack. Adaptive subd. can only be used on objects where this is possible (that’s still a lot of objects in your scene).
3- Set the usual subdivisions to 0 before switching modes, then click the "adaptive" checkbox. This is important: there is a bug where the usual and adaptive levels get applied on top of each other, so you want the usual subd. zeroed out first.
4- Adjust the "dicing scale" on the modifier. Quality works very differently in adaptive mode than with traditional levels: at a dicing scale of 1, Blender subdivides the object until each polygon is about 1 pixel wide as seen from the camera, so a smaller number means higher quality. If you are using adaptive only to subdivide the object (no displacement), a dicing scale of 2 or 3 should be enough. There is also a global dicing scale in the main render settings, which acts as a multiplier on all the modifiers. Keep in mind that changing the render resolution also changes the amount of subdivision, since it's pixel based. A script version of these four steps is sketched below.
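Here is that script sketch, applied to the active object. I'm assuming the per-object Cycles properties (`use_adaptive_subdivision`, `dicing_rate`) that recent Blender builds expose; the UI for this has moved between versions, so double-check the names in yours:

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'        # step 1: experimental feature set
scene.cycles.dicing_rate = 1.0                   # global dicing scale (multiplier)

obj = bpy.context.object
mod = obj.modifiers[-1] if obj.modifiers else None
if mod is not None and mod.type == 'SUBSURF':    # step 2: subsurf must be last in the stack
    mod.levels = 0                               # step 3: zero the regular levels first
    mod.render_levels = 0
    obj.cycles.use_adaptive_subdivision = True   # assumed per-object Cycles property
    obj.cycles.dicing_rate = 2.0                 # step 4: per-object dicing scale
```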
I have also found some cases of overlapping geometry on these objects.
Your sampling settings could also be improved. This is a rather noisy scene, so I doubt a noise threshold of 0.01 will do much (no part of the scene will fall below that threshold before the render completes). Also, a complex interior scene benefits from setting a "min samples" value manually; this avoids potential denoising problems.
I would try these settings:
If you don't know why I am making these changes, you should probably learn exactly what the sampling settings do.
In this thread, I explain them in detail.
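For reference, the settings I'm talking about map to these Cycles properties. The numbers below are placeholders to show where everything lives, not values to copy blindly:

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.use_adaptive_sampling = True
cycles.adaptive_threshold = 0.05     # noise threshold: looser than 0.01 for a noisy interior
cycles.adaptive_min_samples = 64     # manual minimum so no region is denoised from too few samples
cycles.samples = 1024                # max samples cap
```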
Another problem I found is that the white wall material has a color value above 100% white. This breaks the laws of physics, makes your wall glow, and is bad for both noise and performance.
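If you want to find other materials with the same issue, here is a rough scan. It only checks plain Principled BSDF base colors, not colors driven by textures or other nodes:

```python
import bpy

# Report materials whose Principled BSDF base color has a channel above 1.0.
for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    for node in mat.node_tree.nodes:
        if node.type == 'BSDF_PRINCIPLED':
            r, g, b, a = node.inputs['Base Color'].default_value
            if max(r, g, b) > 1.0:
                print(f"{mat.name}: base color ({r:.2f}, {g:.2f}, {b:.2f}) exceeds 1.0")
```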
You might also want to check your objects' normals; a lot of them are inverted. This shouldn't affect performance, but it can break some material features.
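A quick, somewhat blunt way to fix those in bulk (a sketch; check the result with the Face Orientation overlay afterwards, since "outside" can be ambiguous on open or non-manifold meshes):

```python
import bpy

# Recalculate normals to point outward on every selected mesh object.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.normals_make_consistent(inside=False)
    bpy.ops.object.mode_set(mode='OBJECT')
```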
Here is a file where I made those changes. I went with the adaptive subd. method. Does it render better on your end?
Edit: Just after posting, I realized you would probably prefer that I not share the scene around. I have moved the link to a private message.