Some Very Basic Questions: Mesh Complexity vs. Performance/Render Times

Trying to wrap my head around all this stuff and have somewhat confused myself. I have the following questions to try and clear it all up:

  1. I was under the impression that fewer vertices = better performance/quicker render times. But if we take a default cube with 8 vertices, or subdivide that cube with 10 cuts to get 728 verts, don’t we still ultimately end up with the same shape? From a rendering point of view, doesn’t it still just calculate each pixel, regardless of vertex/face count?

  2. Does using a modifier to subdivide/array/mirror have any benefit (aside from ease of modelling) versus just having the modifiers applied? E.g. if I have 10 separate cubes versus 1 cube arrayed 10 times with a modifier, is there any performance or render-time benefit from Blender’s point of view, or is the scene just as complex either way?

  3. It seems like using a height map/displacement is more taxing on the system/render times… Would it not be better to just apply the displacement to the mesh so that, come render time, it doesn’t have to be computed?

  4. Quite often the 3D models sold online have incredibly high vert counts (e.g. realistic sofas, tap fittings, plants). As best practice, should I be trying to simplify these as much as possible, e.g. decimating the mesh, converting tris to quads, etc.? All I want is to whack the models in and render, but I’m worried that if I take a relatively simple interior scene and then populate it with third-party furniture and decor, my vert count is going to go through the roof and blow out my render times.

Thanks very much :slight_smile:

  1. No. Pixels are not geometry. The cube would indeed occupy the same number of pixels on screen, but there is more data to construct those pixels from. A simple way to illustrate: take two cubes, one default and one subdivided 10 times, and shade both smooth. You’ll immediately see how they differ, because the surface data at each pixel is constructed by interpolating data between vertices. (There’s a rough bpy sketch of this comparison at the end of this post.)

  2. No real benefit at render time. Modifiers are modeling and animation aids; when rendering, all the geometry still has to be realized, so 1 cube arrayed 10 times ends up just as heavy as 10 separate cubes. (See the Array modifier sketch at the end of this post.)

  3. You mean using the Displacement socket? You wouldn’t really want such fine detail baked into your actual geometry: in production scenes you’d end up with gigantic files, huge memory consumption, and so on. That said, when working on individual assets that’s more or less what sculptors do, they actually model in all that displacement, but you wouldn’t want to do that for a full scene. There are always caveats, of course. For film, for example, you do want highly detailed models for foreground objects (and may still use render-time displacement on top for even more detail). (The subdivision sketch at the end of this post shows how quickly the vertex count grows.)

  4. Well, you kind of answered this one yourself back in question 3: that’s exactly why you don’t want to bake crazy detail into your geometry :slight_smile: “Best practice” depends on your use case. For a game, you could rebuild or simplify those models and then transfer the detail from the high-poly versions using normal maps and height maps. For still shots, you may use them as is. For complex scenes, you may do both: keep the highly detailed objects for the foreground and simplified versions for the background. (The Decimate sketch at the end of this post shows one quick way to thin out a heavy model.)
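
To make answer 1 concrete, here’s a rough sketch using Blender’s Python API (bpy, assuming 2.8+), run from the Text Editor in an empty scene: two cubes with the same shape but very different vertex counts, both shaded smooth.

```python
import bpy

# Default cube: 8 vertices.
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 0))
plain = bpy.context.active_object

# Second cube, subdivided with 10 cuts: 728 vertices, same overall shape.
bpy.ops.mesh.primitive_cube_add(location=(3, 0, 0))
dense = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(number_cuts=10)
bpy.ops.object.mode_set(mode='OBJECT')

# Shade both smooth. The shading differs visibly: the 8-vertex cube goes
# blobby because each pixel's normal is interpolated from just the corner
# vertices, while the dense cube shades almost flat across its faces.
plain.select_set(True)
dense.select_set(True)
bpy.ops.object.shade_smooth()

print(len(plain.data.vertices), len(dense.data.vertices))  # 8 vs 728
```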
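
For answer 2, a minimal sketch (again assuming bpy in 2.8+) comparing the stored mesh of an arrayed cube with what the renderer actually evaluates: the Array modifier is only a recipe on the object, but the evaluated mesh is as heavy as 10 separate cubes.

```python
import bpy

bpy.ops.mesh.primitive_cube_add()
cube = bpy.context.active_object

# Non-destructive Array modifier: the stored mesh stays at 8 vertices.
arr = cube.modifiers.new(name="Array", type='ARRAY')
arr.count = 10

print("base mesh verts:", len(cube.data.vertices))  # 8

# Ask the dependency graph for the object as the renderer sees it,
# i.e. with all modifiers evaluated.
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_mesh = cube.evaluated_get(depsgraph).data
print("evaluated verts:", len(eval_mesh.vertices))  # 80, same as 10 cubes
```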
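
For answer 3, this sketch (bpy assumed) shows why baking displacement-level detail into real geometry gets expensive fast: each subdivision level roughly quadruples the vertex count, and that kind of density is what you’d need before applying fine displacement to the mesh itself.

```python
import bpy

bpy.ops.mesh.primitive_cube_add()
cube = bpy.context.active_object
subsurf = cube.modifiers.new(name="Subdivision", type='SUBSURF')

for level in range(1, 7):
    subsurf.levels = level
    subsurf.render_levels = level
    # Re-evaluate so we see the mesh at this subdivision level.
    depsgraph = bpy.context.evaluated_depsgraph_get()
    eval_mesh = cube.evaluated_get(depsgraph).data
    print(f"level {level}: {len(eval_mesh.vertices)} vertices")

# Prints: 26, 98, 386, 1538, 6146, 24578 (each level ~4x the previous),
# and real displacement detail often needs far more than that.
```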
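
And for answer 4, one quick, non-destructive way to thin out a purchased high-poly asset is a Decimate modifier (bpy sketch below; “Sofa” is just a placeholder name for whatever you imported). You’d still want to eyeball the result, since collapse decimation can degrade UVs and shading on some models.

```python
import bpy

# "Sofa" is a hypothetical object name; swap in your imported asset.
obj = bpy.data.objects["Sofa"]

dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = 0.1  # keep roughly 10% of the faces (collapse mode, the default)

# Compare the stored mesh with what will actually be rendered.
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_mesh = obj.evaluated_get(depsgraph).data
print("original verts:", len(obj.data.vertices))
print("decimated verts:", len(eval_mesh.vertices))
```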