Blender Cycles X and Optimizations

So I am getting into Blender Cycles X, and from time to time I have questions about how to optimize for better render times, so I am creating one thread for them. :slightly_smiling_face:
I will appreciate any replies/feedback from Blender Cycles veterans.

So I am wondering: if you use one texture multiple times in a shader, and also in other shaders, will Blender Cycles have to load that texture multiple times or only once? Will that crank up render times compared to using different textures?

Having multiple different textures is more of a concern for memory than it is for render speed. Of course, it will have a small impact (largely caused by longer build times at the beginning of the render), but the problem is mostly that if you have a lot of very large textures, you could run out of memory.

But image textures just by themselves have a small impact on render times. Procedural textures have a much heavier impact. But above all, the stuff you need to worry about is shaders and their settings. Having lots of glass materials, semi-glossy metals, translucency, SSS or displacements in a scene will have a much bigger performance impact than textures, especially if those effects take a large part of the screen. Also, complex materials can add more noise, so each sample is still fast, but you need a lot more of them to clear the noise.

The things that Cycles really fears:

-Many layers of transparency. If you have trees with alpha textures for the leaves, that will slow Cycles to a crawl. In fact, it’s better to fully model the shape of leaves with polygons and let particle systems instance them on branches than it is to use alpha transparency. Even though you have much more complex geometry, Cycles struggles so much with layered transparency that it will still render much faster.

-Emissive materials, especially if there are lots of them. If you try rendering a Christmas tree with hundreds of lights made from emissive materials, the noise could take thousands of samples to clear (I learned this one the hard way. In the end, I managed to render that scene by compositing a hybrid of Cycles and Eevee together). Light objects are less noisy than emissive materials, so use them when possible.

-Sky light coming through a glass window. Cycles struggles with light going through glass materials. You can improve the situation by disabling shadows on the glass material. In addition, sky light going through a narrow window is a problem, which can be helped using light portals (see the sketch after this list).

-Long, narrow polygons placed on a diagonal. Those mess with Cycles’ acceleration structure. Path tracers like Cycles accelerate renders by using the bounding boxes of polygons to decide which polygons each ray might hit. Try creating a bunch of long, narrow cylinders, then tilt them 45 degrees: the render time can go up a hundredfold, because the bounding boxes become huge and overlap each other. This is fixed by adding edge loops to divide the cylinders into smaller sections.
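As a minimal scripting sketch of the glass-window tip above (the object and light names are made up for illustration; `visible_shadow` is the Blender 3.0+ ray visibility flag on objects, and `is_portal` is the Cycles setting on area lights):

```python
import bpy

# Hypothetical names: adjust to match your scene.
glass = bpy.data.objects.get("WindowGlass")
if glass is not None:
    # Stop the glass from blocking shadow rays, so sky light passes through.
    glass.visible_shadow = False

portal = bpy.data.lights.get("WindowPortal")
if portal is not None and portal.type == 'AREA':
    # An area light placed in the window opening can act as a light portal,
    # guiding sky rays through the narrow opening and reducing noise.
    portal.cycles.is_portal = True
```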

As for using the same texture on multiple objects, my understanding is that it has more to do with how much of the screen is occupied by the texture than with the number of objects it is assigned to (if someone knows better about the inner workings of Cycles, feel free to correct me. These things are hard to test, as image textures don’t really have that heavy of an impact).

Thanks a lot for all the tips. :slightly_smiling_face:

So it’s better to use smaller textures than larger ones? And if you use UDIMs, which combine smaller textures into a larger one, is that preferable?

Thanks for clearing this up. I was actually thinking about this.
So if you have multiple linked duplicates in your scene, does that speed Cycles up because it renders them as one? A modular workflow is definitely an advantage then.

That’s a great tip. I did a test of the Blender barbershop scene and it seemed to render quite fast and looked OK with denoising. I am hoping the denoising tech improves even more so render times can be a thing of the past.

I am wondering what plans the Blender devs have up their sleeves to make Cycles X even faster.

It’s more about the total area of texture your entire project uses (no matter if it’s split into many small textures or a few large ones). Also, textures with high dynamic range, like HDRIs and 32-bit displacement maps, are much heavier. But really, textures are not a problem for performance until your scene gets past the amount of VRAM on your GPU (look at the memory stat in the top bar when rendering). Only then would you need to start reducing and optimizing textures.

Linked duplicates, also called instances, are really useful for creating heavy scenes (I have managed scenes in the billions of triangles with barely any render time hit). If you subdivide the default cube until it’s over a million triangles and then duplicate it with Shift+D, that will kill your performance really quickly. But if you duplicate it with Alt+D, each new instance will have a small impact and you will be able to have possibly hundreds on screen. That’s because the mesh is stored in memory only once. I must add that this memory-saving effect only works if the instances have no modifiers on them, as modifiers force each instance to be different from the others, and they will then be stored individually.
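A minimal Python sketch of the same idea (a hypothetical loop, assuming the heavy object is the active one): creating new objects that point at the same mesh datablock is the scripting equivalent of Alt+D.

```python
import bpy

base = bpy.context.active_object  # the heavy object

# Create 100 instances that all share base.data, like duplicating with Alt+D.
for i in range(100):
    inst = bpy.data.objects.new(f"{base.name}_inst.{i:03d}", base.data)
    inst.location = (3.0 * (i % 10), 3.0 * (i // 10), 0.0)
    bpy.context.collection.objects.link(inst)

# The mesh datablock now has many users but exists in memory only once.
print(base.data.users)
```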

Another tip: when instancing lots of really heavy objects, set their viewport display to bounding box, so the objects don’t have to be fully drawn in the viewport; this allows better viewport performance.
Cycles will have no problem rendering everything, because ray tracers handle polygons really well. The thing they fear is complex materials and lighting, especially if they take up a large part of the screen.
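If you want to set that for many objects at once, a quick sketch over the current selection:

```python
import bpy

# Show selected heavy objects as bounding boxes in the viewport only;
# the full mesh is still used at render time.
for obj in bpy.context.selected_objects:
    obj.display_type = 'BOUNDS'
```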

Particle systems also use instances and have the advantage that you can turn them off in the viewport to save on performance, so they are perfect for large grass fields or tree leaves.
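Turning a particle system off in the viewport (while keeping it in renders) can also be scripted; particle systems show up as modifiers on the object:

```python
import bpy

obj = bpy.context.active_object
for mod in obj.modifiers:
    if mod.type == 'PARTICLE_SYSTEM':
        mod.show_viewport = False  # hidden in the viewport, still rendered
```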

I have a tip that helps with denoising. Well, maybe not so much for the render time, but the quality level will be better.

1-Render your scene at 200% resolution. You can reduce your samples so it takes the same amount of time it did previously.

2-Denoise and save the image, still at double resolution. This will allow the denoiser to better capture the fine details and textures. Also, I recommend using Open Image Denoise for a final render. Set its prefiltering to “none” if that works in your scene, but if there is still noise, set it to “accurate”.

3-Open the saved render in an image or video editing software and reduce it back to its intended resolution. The image will look crisper and more detailed than it normally would. I don’t recommend doing that step with Blender, as it has pretty bad filtering when rescaling an image.
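Steps 1 and 2 can be set up from Python; a minimal sketch (the samples//4 line is a rough assumption: doubling the resolution quadruples the pixel count, so a quarter of the samples keeps roughly the same render time):

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_percentage = 200                   # step 1: render at 200%
scene.cycles.samples = max(1, scene.cycles.samples // 4)   # same rough time budget
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'                 # step 2: OIDN for final renders
scene.cycles.denoising_prefilter = 'NONE'                  # switch to 'ACCURATE' if noise remains
```

For step 3, any editor with decent filtering works; for example, Pillow’s `Image.resize(..., Image.LANCZOS)` in a small batch script downscales much better than Blender’s own rescale.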

In Blender 3.2, a new feature is being added that allows individually chosen transparent objects to cast caustics. This will allow for more realistic and less noisy underwater scenes.

In the future, the devs have mentioned eventually adding path guiding. This will allow light rays to be sent in directions that are more useful and clear noise faster (if I understand correctly).

Seriously, thanks for the tips so far. This is really very important knowledge you are sharing here.

I am wondering if this can be fixed for Blender Cycles in the future.

What about geometry nodes and instances? And using modifiers in geometry nodes like displace and subdivide? Does Cycles X see these instances with modifiers as one mesh?
Particle Hair is going to be replaced with geometry nodes soon. Any tips on approaching strand hair rendering using Cycles X? :slightly_smiling_face:

Thanks for sharing. I didn’t know that. The Blender devs should hopefully look into fixing this in their compositor and video editor. Any chance you might know a good open-source alternative video editor that does this correctly? :slightly_smiling_face:

Also found this technique using Blender’s Compositor to eliminate noise. The results look ok:

Yeah, I saw that. But currently it only works well with the CPU, not the GPU, in the 3.2 beta. Hopefully we get GPU support when 3.2 is released. It also seems to be limited to shadow areas of transparent objects. Currently there are no caustics from chrome materials, though. Wondering if we will get that in the future.

That’s very good news. That will be a splendid feature to have in Cycles.

From some quick tests, they do count as proper instances (the polygon count shows only the effect of a single object). I tried putting a geometry nodes subdivision surface after the instancing, and it increased the polygon count as if I had subdivided only the base object.

For strand hair, I think that’s a completely different system from instances, just like the current hair particles. My best guess would be that the complexity of the hair material is going to affect the performance, but beyond that, it is largely in the hands of the developers.

Sorry, I don’t know that area very well. I have access to Photoshop and I have been using the software’s actions feature to batch resize my images.

I have seen that one. I tried it, but it’s very limited in which scenes respond well to it. As soon as there are moving or deforming objects (including camera translation), you start getting artefacts. Also, it requires a vector pass, which means you can’t render with Cycles’ motion blur and you will be limited to the compositor’s vector blur. It could look decent in a static scene where the camera mostly looks around and pans and doesn’t translate too fast.

One thing I often do, however, is re-rendering a single object. Often, with Cycles, there will be just one difficult object that needs way more samples than the rest. So I start by doing a normal render with just enough quality for most of the image. Then, I do a second render where only the few difficult objects are included. I set the background transparent (render settings → film tab, plus set the output file format to something that supports transparency) and put every object except the one I want to re-render in a collection set to holdout. Setting objects to holdout makes them display as holes in the picture, but they will still be used for lighting and reflections as normal. Then, I can re-render any difficult object with more quality. Finally, I composite it back on top of the original render using Blender’s video sequencer.
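A minimal sketch of that setup in Python (the collection name "Everything_Else" is hypothetical; it’s assumed to hold every object except the difficult one):

```python
import bpy

scene = bpy.context.scene
scene.render.film_transparent = True                    # transparent background
scene.render.image_settings.file_format = 'OPEN_EXR'   # a format with alpha

# Mark the hypothetical "Everything_Else" collection as holdout for this
# view layer: its objects render as holes but still affect light/reflections.
lc = bpy.context.view_layer.layer_collection.children.get("Everything_Else")
if lc is not None:
    lc.holdout = True
```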

So geometry nodes work well as instances with modifiers. That’s good news. Thanks for the tests.
On a side note, comparing a geometry nodes mesh to a normal mesh: say a bridge was made with geometry nodes. Due to its procedural nature, would it take longer to render, especially during the initial stages, since the mesh retains its history, compared to a mesh modeled organically with modular parts? I personally wouldn’t mind, as the ability to easily change and modify things saves you time immensely, but I am curious what the answer might be.

Ok. Thanks for the info.

Yeah, compositing always saves the day. Thanks a ton for the tip.

It’s not open source, but it is free, and that’s DaVinci Resolve (there’s a paid-for Studio version, but frankly you can do a hell of a lot with the free one).

It may take a little getting used to at first (the PDF manual for it is 3,605 pages, haha), but it’s what I plan on using for editing/compositing/colour grading and outputting all my Blender animations. So far I’ve mostly used it for my YouTube videos.

Kdenlive is the best free and open-source video editor I’ve found so far; I’d recommend it.

When rendering a mesh created procedurally by geometry nodes or modifiers, it could take a bit of time to process those modifiers before the render starts, but then the mesh gets converted into ordinary polygons, so the render speed will be as if you had applied the modifiers.
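You can actually see this conversion from Python: the dependency graph hands back the evaluated object, which holds the same modifier-applied polygon data Cycles receives. A small sketch:

```python
import bpy

depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.context.active_object
eval_obj = obj.evaluated_get(depsgraph)  # object with modifiers applied
mesh = eval_obj.to_mesh()                # ordinary polygons, as Cycles sees them
print(f"{obj.name}: {len(mesh.polygons)} polygons after modifiers")
eval_obj.to_mesh_clear()                 # free the temporary mesh
```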

@thetony20 @joseph Thanks, will check them out :slightly_smiling_face:

OK, thanks. So once the modifiers are processed before the render starts, and you are rendering an animation, are the modifiers computed once before Cycles starts rendering the frames, or do they need to be recomputed before rendering each frame?

I ask because I am thinking of the adaptive subdivision modifier especially, if the camera is moving, or of using LODs with geometry nodes. :slightly_smiling_face:

If you have any data that doesn’t change between frames, you can have it stored in memory by checking the “Persistent Data” option in the render settings. Persistent data is supposed to store only what didn’t change and recalculate what did. I haven’t tried it recently, because it used to have bugs with animated objects, but it might be worth trying if you have a scene with long build times. I don’t know what it does with modifiers, as they are recalculated every time you change frames in the viewport.
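The same option from Python, for completeness:

```python
import bpy

# Keep unchanged scene data in memory between animation frames.
bpy.context.scene.render.use_persistent_data = True
```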

Adaptive subdivision should normally be recalculated every frame, because it’s the very point of using it.
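For reference, adaptive subdivision is enabled like this (a sketch: it needs the experimental feature set, and the Subdivision Surface modifier should be the last one in the stack):

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'   # adaptive subdivision is experimental

obj = bpy.context.active_object
mod = obj.modifiers.new("Subdivision", 'SUBSURF')
obj.cycles.use_adaptive_subdivision = True  # dicing is recomputed per frame
```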

Ok. Thanks a ton :+1:

@etn249 Hi mate. Hope you are doing well.

If you are using a lot of decals, using this method for example: https://www.youtube.com/shorts/Zo1ABY07b10
or this addon:
https://amanbairwal.gumroad.com/l/ImportAsDecal
or DECALmachine, and there are 300-plus decals in a scene all using the same material, then since they use transparency and modifiers, that would definitely crank up the render times, right?

In your opinion, would using decals on separate planes with alpha be expensive for Cycles compared to cutting out the part and mapping it using a second UV channel?

I feel the second UV channel method gives better performance, since it needs no transparency or modifiers?

If you cut the decal manually instead of using transparency, it will indeed be a little bit faster, though in this case the difference will likely not be dramatic. The modifiers would probably add a few seconds of build time, and the noise would be slightly slower to clear in the transparent parts of the decals. If you already have a full scene with all the decals, it might not be worth changing them, unless they are clearly causing problems.

Transparent planes aren’t too bad, as long as they are all placed against an opaque surface. The real concern is when many layers of transparency are visible through each other, ex. if you try to render a large forest with alpha textured branches. Cycles hates that and actually prefers millions of opaque triangles.

I would like to suggest a third option for making decals. Make the decal a separate, shrinkwrapped object like in the video, but instead of using transparency, use the knife tool to cut out the borders of the decal. That way, you get the ease of using a separate object, you avoid using a second UV channel and avoid working with transparency.

Good idea. Thanks for the suggestion. I am sincerely hoping the Cycles devs tackle this transparency issue in the future. It may not be completely eliminated, but maybe there is a way to tweak and improve Cycles X’s performance with multiple layers of transparency.

I’m not sure if it would be possible to improve much. It’s a problem that inherently comes with raytracing triangles. It could be solved by creating a renderer that works on a completely different principle and somehow doesn’t trace rays against triangles (ex. Eevee doesn’t have a problem with lots of transparency, because it’s a rasterizer). In the meantime, it’s a good idea to be aware of this limitation and plan how you make scenes so you don’t do stuff that’s super hard for Cycles.

Noted. Thanks a ton :+1: :slightly_smiling_face: Will be keeping an eye on layering transparency.
