How many tris can you have in a blender scene before it crashes?

The most tris I’ve ever had in a scene is around 5.8 million.

It depends on your GPU, CPU, RAM, VRAM, what other processes you have running, your OS, etc.

3 Likes

Ryzen 5800X (8 core CPU)
32GB 3200MHz RAM
Nvidia RTX 3080

I’m working on a scene that’s pushing at least 20,000,000 tris on an old Haswell i5 with 16GB RAM, and a Geforce 970, and it’s running surprisingly well.

1 Like

Yes, it's definitely dependent on system-specific parameters, but even more it depends on which concrete data structures have been filled within Blender, whether objects make use of instancing, how many mesh attributes are maintained, and so on. So yeah, there's definitely no fixed number that describes how many triangles Blender can handle.

2 Likes

How far do you think I can push this scene?

It’s about 70% done.

https://1drv.ms/u/s!Ap6t_YOv62EQgTLOtrhDQWbaEhy9

You could probably triple the amount of polygons in your scene. It will eventually start lagging considerably in the rendered viewport, but as long as solid view, and to a lesser extent the material preview, are still usable, you're good to go.

Just try to be as efficient as possible. Make instances of anything that can be instanced.

2 Likes

As already said, there is no general hard limit, so it's as @Renzatic says: think in terms of tweaking the scene to be more efficient, if you already need to. There should be plenty of YouTube videos on instancing / linked duplicates / collection instances etc. if you haven't used these so far. Merging your railing poles, for example, is a mixed beast: while merging objects can help minimize the overhead of low-level API context switches, I think instancing should definitely outperform it here.

I would start getting a bit nervous at around 30M polys with no instancing. With instancing you can easily get the equivalent of hundreds of millions of polys without breaking a sweat.
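To put rough numbers on that, here's a back-of-envelope sketch. The per-item byte costs below are pure assumptions for illustration, not Blender's actual internal layout; the point is only that unique geometry scales with triangle count while instances scale with a small per-copy transform cost:

```python
# Rough memory estimate (illustrative numbers only, NOT Blender's real internals).
BYTES_PER_TRI = 100        # assumed cost of one unique triangle (verts, edges, indices)
BYTES_PER_INSTANCE = 64    # assumed cost of one instance (roughly a 4x4 float transform)

def unique_mesh_cost(tris):
    """Memory if every triangle is real, unshared geometry."""
    return tris * BYTES_PER_TRI

def instanced_cost(tris_in_source, copies):
    """Memory if one source mesh is instanced many times."""
    return tris_in_source * BYTES_PER_TRI + copies * BYTES_PER_INSTANCE

gb = 1024 ** 3
print(f"30M unique tris ~ {unique_mesh_cost(30_000_000) / gb:.1f} GB")
# 10k-tri tree instanced 10k times = 100M tris on screen, for a tiny fraction of that:
print(f"100M instanced tris ~ {instanced_cost(10_000, 10_000) / gb:.4f} GB")
```

Even if the real per-triangle cost is a few times higher or lower, the gap between the two cases stays enormous, which is why instancing lets you blow past the "nervous" threshold.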

Watch this:

1 Like

There’s only one exception to this I’ve seen, and it’s happened to me just recently.

When I’m playing with geometry nodes, I usually try not to realize my instances unless I have to, because realizing incurs a slight performance hit. But then I started working on my recent scene, where I have one 6-tri blade of grass distributed a few million times.

You’d think that realizing this setup would set my computer on fire, and that it’d work better as a bunch of purely distributed instances. In reality, it’s the opposite. Unrealized, it drags performance down to unusable levels, where even dragging a node in the shader editor can take 30 seconds to compute. Throw in a Realize Instances node at the end of the grass distribution tree, and suddenly everything’s running fine.

Why this is, I have no idea. It doesn’t seem to apply to anything else, just the grass. It may be down to the sheer brute numbers I’m throwing at it, but it’s the one wild, lone exception to the rule I’ve found.

Interesting. I wonder if the geo nodes are realizing the instances in the background anyway, but without the user actually choosing to realize them, the process happens more slowly?

Perhaps something to bring up to the devs?

Maybe, though it doesn’t seem like a bug exactly. I might head over to right-click select, and ask there.

What is instancing?

Think of an instance as a shadow copy of an object. You can fill a scene with thousands of instanced copies of, say, a tree, and have them consume almost no resources beyond what the original object they’re copied from already uses. They can be moved, scaled, and rotated individually, but can’t be edited at the vertex level without the changes being reflected in every instance.

To make one, simply select the object you want to instance and hit Alt-D (a linked duplicate, as opposed to Shift-D, which makes a full copy).

1 Like

I guess my biggest asset is 17 million triangles. It works without any problems, and the tri count could really be bigger without any issue.
Just having a big amount of polys in the viewport is not, by itself, something that makes Blender suffer from poor performance.

How is Blender able to duplicate objects without adding more geometry?

It’s the way in which Blender manages its data. If you look at any object in Blender, it has its object name and associated data, like its overall position/rotation/scale.

But it also has a geometry name (those 2 names don’t need to be the same; a lot of people will change the name of the object, so if it started as a box and they turned it into a tree, they name the object Tree, and that’s what you see at the top level in the outliner. However, if you then go into edit mode and look at the name of the geometry, it will still be called box).

That geometry is the vertices/edges/faces and their relative positions to each other. So if you make a copy of an object, you make a copy of both its general data, like overall position/rotation/scale, and a copy of all its geometry data. If you do that, the name of each part will have .001 added, so you’d end up with Tree.001, and its geometry data would be box.001.

However, if you make an instanced copy, Blender only copies the object data, so you still end up with Tree.001, but it keeps a reference to the same geometry data, which would still be box. You could keep doing that and have Tree.002, Tree.003, etc., and be able to position/scale/rotate each Tree as you please, but they are all based on the same geometry data, i.e. box.

If you then go into edit mode (on any one of those Trees) and move some vertices around, or delete some faces, then you are changing the ‘box’ data, and as such every one of those duplicated or instanced Trees will also change.
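The object/data split described above can be sketched in a few lines of plain Python. This is a toy model, not bpy (in real Blender scripting the equivalent would be copying an object while reusing its `.data` mesh), but it shows how two objects can point at one shared geometry block:

```python
# Toy model of Blender's object/data split (plain Python, not bpy).
class Geometry:
    def __init__(self, name):
        self.name = name
        self.verts = []            # stands in for the vertices/edges/faces

class Object:
    def __init__(self, name, data):
        self.name = name
        self.data = data           # reference to a Geometry block
        self.location = (0, 0, 0)

box = Geometry("box")
tree = Object("Tree", box)

# Full copy (Shift-D-style): new object AND a new geometry block.
full = Object("Tree.001", Geometry("box.001"))

# Instanced copy (Alt-D-style): new object, SAME geometry block.
inst = Object("Tree.002", tree.data)

inst.location = (5, 0, 0)          # transforms are per-object...
tree.data.verts.append((1, 2, 3))  # ...but an edit to the shared geometry
print(inst.data.verts)             # shows up in every instance: [(1, 2, 3)]
```

The full copy costs a whole second Geometry; the instance costs only the small Object wrapper, which is where the memory savings come from.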

2 Likes

How many tris can you have in a blender scene before it crashes?

It doesn’t crash on me. In out-of-RAM situations, the operating system starts swapping and Blender doesn’t respond for a very long time.

The most complex mesh I have in viewport use is over 16 million polygons in a single object; I bake it to a normal map and reduce the polycount to something like 262k, or I split a large 4096x4096 heightfield into 1024x1024 or 512x512 pieces and simplify the distant pieces.

When I put meshes in a scene, I limit the total polygons in individual meshes to 8M per layer. That’s about 1 GB of RAM for geometry. Sure, I could instance some meshes, but I limit the geometry anyway. The exception is fluid simulation, where I can put in a lot of geometry, up to a 512x512x512 domain, as much as there is RAM left for the layer.
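That budget is easy to sanity-check with a little arithmetic. The ~134 bytes per polygon below is just what falls out of the poster's own "8M polys per ~1 GB" rule of thumb, not a measured Blender figure:

```python
# Sanity check on the "8M polys per ~1 GB" budget quoted above.
polys = 8_000_000
budget_bytes = 1024 ** 3                 # 1 GiB

bytes_per_poly = budget_bytes / polys
print(f"{bytes_per_poly:.0f} bytes per polygon")   # ~134 bytes each

# Inverting it: how many polys fit in a given amount of RAM at that rate?
def polys_for_ram(gib):
    return round(gib * 1024 ** 3 / bytes_per_poly)

print(f"{polys_for_ram(4) / 1e6:.0f}M polys in 4 GiB")
```

So at that rate a 32 GB machine could in principle hold a few hundred million polygons, which matches the earlier point that swapping, not a hard crash, is what you hit first.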

So it really depends on your RAM budget and on keeping the viewport workable.

All of them!

2 million polygons / 4 million tris is a good value. You can go up to 4-5 million polygons, but it gets risky and slow. 30 million polygons is possible if you have enough RAM, but slow.