Polygons vs quality vs performance

Hello guys,

I have two questions:

  1. In general, how many polygons are necessary for a rigged character to be seen up close before the loss of mesh resolution starts to become noticeable?

I got this character that has about 44k polygons (12k for the head and 32k for the body) and I’m thinking of increasing that to about 200k in order to improve mesh resolution. Do you think it’s gonna get too heavy for animation?

  2. The funny thing is that even though my character has 44k polys, the teeth alone have 81k, way above what would be necessary, I guess.

So I was thinking that since most of these polys will be “hidden” inside the mouth, they won’t be processed by the GPU and so won’t affect the FPS. Or am I wrong? Thank you guys. Have a nice day =)


A lot is going to depend on your rendering engine. And it’s all continuous anyway-- yes, 80k is going to be slower than 40k, 2 million is going to be slower than 1 million, 30 is going to be slower than 15. There’s no particular number without defining what your performance goals are.

What’s noticeable? What’s close? You can make a 2 million vert model and notice the silhouette. There’s no particular number that’s enough (maybe discounting like 4 verts per sample, which would be ridiculous.) In addition to these indefinables, what’s noticeable also depends on the output resolution. (Don’t need a lot of verts to draw a 16x16 pixel icon…)

Hidden polys still cause slowdown. How does your computer know that they’re hidden? It has to calculate that. (There are ways in which they cause less slowdown than visible faces though.)
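To illustrate why “hidden” still costs something: one common rasterizer trick is backface culling, and even that simple test only happens after the vertices have been transformed. A toy version of the test (plain Python for illustration, not any engine’s actual code) might look like:

```python
def is_front_facing(normal, view_dir):
    """Toy backface test: a triangle faces the camera when its normal
    points back toward the viewer (negative dot product with the view
    direction). A GPU can only run a check like this AFTER transforming
    the triangle's vertices, which is why culled faces still cost work."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot < 0  # view_dir points from the camera into the scene

# A face whose normal points back at the camera is kept...
print(is_front_facing((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # True
# ...and one facing away is culled, but only after the test itself ran.
print(is_front_facing((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # False
```

Occlusion culling (faces hidden behind other faces, like teeth inside a closed mouth) is even more expensive to determine, which is why hidden geometry is cheaper than visible geometry but never free.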

I’m not sure how you can say a character has fewer polys than the teeth-- aren’t the teeth part of the character?-- but it’s pretty common for amateurs to not pay enough attention to relative vertex density. (So many models you can find where the gross forms are relatively low poly and all the screws are meticulously detailed from arrays/instances…) Yes, your teeth will cause slowdown; no, it doesn’t make any sense to provide 81k teeth sitting in a 12k head.
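To put the numbers from the post in perspective, here’s a quick back-of-the-envelope share calculation (plain Python, using only the counts quoted above):

```python
# Quick sanity check on relative polygon density between character regions.
# The figures are the ones quoted in the thread; the "share" arithmetic is
# purely illustrative, not a tool from any particular engine.

def density_report(regions):
    """Return each region's share of the total polygon count."""
    total = sum(regions.values())
    return {name: count / total for name, count in regions.items()}

regions = {"head": 12_000, "body": 32_000, "teeth": 81_000}
shares = density_report(regions)
# The teeth alone are roughly 65% of the whole character's polygons,
# even though they are almost never seen.
print({name: f"{share:.0%}" for name, share in shares.items()})
```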


Thank you for answering.

Actually, I’ve been intrigued by photorealism in CGI and so I’ve been reading a lot of things related to that.

And now, using Blender, I wanna make the most realistic, believable character I can, to be seen in VR.

The animation part is the most difficult one, cause I know that achieving realistic character animation requires some high-quality mocap system. So I’m pretty limited in this area.

Because of that, I’d be happy if I just managed to make my character look as if he’s “alive” by adding some eye movement/blinking, subtle movements of the head, arms, legs etc. Like a person who’s just standing there.

Btw, the character I’m working on originally had 3000k polys, so I decimated it to 44k (retopologized quads). But the teeth are a bit more challenging, cause it takes a lot of polys to make each individual tooth look realistic, so I’ve only managed to decrease the polycount from 81k to about 20k so far. But I can still do some things to reduce it further. For example, I could just remove some back teeth.

If your goal is VR, then you’re starting to home in on some performance goals: maybe 80*2=160 frames per second.
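Those refresh rates translate directly into a per-frame time budget, which is often the more useful way to frame it. A minimal sketch (the specific refresh rates are just examples of common headset values):

```python
def frame_budget_ms(refresh_hz):
    """Milliseconds available to render one frame at a given refresh rate.
    Miss this budget in VR and the runtime drops (or reprojects) frames."""
    return 1000.0 / refresh_hz

for hz in (80, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
# An 80 Hz headset leaves 12.5 ms per frame -- and if the engine renders
# each eye separately, the character is effectively drawn twice in that time.
```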

I don’t have direct experience with VR, but from what I’ve read, 80k verts is probably larger than your budget for the entire viewport, let alone your teeth. What I’ve read may be out of date, I don’t know.

What you ought to do is load up your engine and load (simply shaded) objects into it until you no longer like the performance, then add up the verts. That’s your budget.
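That measurement loop can be sketched in plain Python. Here `add_test_object` and `measure_fps` are hypothetical stand-ins for whatever your engine actually exposes; the toy simulation at the bottom exists only to make the sketch runnable:

```python
def find_vert_budget(add_test_object, measure_fps, target_fps):
    """Keep adding identical (simply shaded) test objects until the frame
    rate drops below target; the vert total just before that point is a
    rough budget. Both callbacks are engine-specific and hypothetical."""
    total = 0
    while True:
        added = add_test_object()
        if measure_fps() < target_fps:
            return total  # the last object pushed us over budget
        total += added

# Toy stand-in for a real engine, purely so the sketch runs: each test
# sphere has 10k verts, and fps falls linearly from 300 as verts pile up.
state = {"verts": 0}

def add_sphere():
    state["verts"] += 10_000
    return 10_000

def fake_fps():
    return 300 - (state["verts"] / 10_000) * 5

budget = find_vert_budget(add_sphere, fake_fps, target_fps=160)
print(budget)  # 280000 in this simulated scene
```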

Yeah, you’re right. I’m pretty disappointed. My character now has about 70k polys; it looks fine and runs smoothly in VR while it’s static. But a simple arm animation causes the FPS to drop to less than 15 frames per second on a GTX 1080.

I don’t get it. I thought these modern GPUs were able to handle billions of polys per second, but in practice it’s another story. Pretty disappointing. No wonder characters in VR games look so bad. I’d better forget about photorealism in VR; it seems to still be decades away.

In your desired engine (e.g. Unity) or in Blender? Performance in the two has very little to do with each other.

My problem with Unity is that I didn’t manage to get the same quality results that I can achieve with Blender shaders (in Eevee). But honestly, I haven’t tried hard. Some people say that if you don’t like Unity’s shaders, you can write and use your own. But to do that, one must be a hardcore coder, I guess.

Now I’ve been playing with baking materials in Cycles. I’m gonna use the baked materials in Unity as well as in Eevee and compare the visuals and performance in VR. That’s gonna be interesting. Let’s see.

In order to make a good model, you have to be a hardcore modeller. In order to make a good texture, you have to be a hardcore texture artist. In order to make everything, you have to be an entire team of hardcore artists. Absolutely none of it is going to come naturally the first time you do it. There are years and years worth of experience for any of those disciplines, never mind trying to master all of them at once.

Even for non-VR game models, the reality is that the textures and materials (and the shaders that interpret them!) are much more important to the quality of the finished product than the actual meshes are. For a VR model, that’s going to be even more true, as your vert count is even more limited. (That said, frequently your textures are made from a mesh.)

Well said, I was actually going to say something similar. One thing I keep thinking is that this is still just a game engine. You’re going to be incredibly constrained by the real-time nature of it. You have a budget; why spend it all on polygons? Keep your mesh light where there is less deformation and use textures to trick the viewer into thinking there is more detail there. That’s what normal/parallax maps are for. If your engine supports real-time displacement, go for that. Spend more time making high quality roughness, albedo, spec, etc.

But that’s just the “look” of the character. What about the movement? Nothing kills photorealism more than sh*tty animation. If the character doesn’t move right, the viewer will spot that immediately. Whereas the viewer might not notice that the details in your model are just texture maps.

EDIT: And don’t forget about lighting and atmosphere! :wink:


Good points, guys. Textures, materials, shaders and animation are much more important than the mesh, even more so when it comes to photorealism.

Learning CGI is complicated, isn’t it? I mean, in the sense that there are so many skills to learn and each of them requires years of practice to be good at (as bandages pointed out). Which is not a bad thing at all, for the world of CGI is fascinating. The only problem is that time is short and we only get one life to live lol.

Anyway, I have a question for you guys: We know that, for obvious reasons, offline rendering models look much better than real time ones. So as I’m striving for photorealism, I’m trying to bring the “look” of an offline rendered character into a real time engine (like Eevee, Unity etc).

And the only way to do that is, of course, by means of baking textures/materials.

So what are your thoughts about that? I know baking materials comes with a lot of limitations. But I’m trying to see how far I can go using this technique in order to achieve a more photorealistic character in a real time engine.

Thoughts on what? Should baking textures be part of your workflow? Yes it should.

Well I mean, what can be done in terms of baking for achieving more photorealistic characters?

For example: I know that things like reflections (glossy, roughness) cannot be baked, cause they depend on camera/light position. However, I’m not sure about subsurface scattering. Is it worth baking? Shouldn’t SSS also change according to camera/light position?

And what about baking the character’s facial shadows? I know shadows are dynamic and so shouldn’t be baked for non-static objects. But even then, if the shadows are “neutral” and subtle, maybe the lack of shadow dynamism won’t be so noticeable to the viewer. If I’m making sense.

But idk, it’s just an idea. I’ll have to try it and see what I get.

You cannot know what your textures should look like until you know what your rendering engine is (which includes the shader used.) You haven’t said-- it seems like you haven’t even decided.

Sometimes I make models for a really old rendering engine. This engine uses a UV mapped texture plus diffuse, emissive/ambient, specular colors. Diffuse + ambient use the same texture. The rendering engine allows for only a single dynamic sun lamp.

Do I bake lighting into the texture? I bake some lighting into the textures. I carefully tune my lighting for the bake so that view/lighting dependent effects exist, but are subtle. Do I bake specular? I mix in some specular. This gives me some variety in the texture so my ambient isn’t flat, but not so much that it looks out of place because the baked in lighting doesn’t actually represent any scene lights. Without baking in any specular, without any normal maps, I can’t suggest the same (false, fake) detail that I can by baking in a little specular.

Now, that old engine also allows for the use of custom shaders. Some of those custom shaders have different lighting models. When I make textures intended to be used with a different shader than the default, I do things differently. If I’m building for a custom shader that allows for multiple lamps and lamps of different types, allows for normal mapping, allows for world HDRI lighting, I might not bake in any lighting/specular.

SSS isn’t a view dependent effect, but it is a lighting dependent effect. Some rasterizer engines include some SSS emulation (like Eevee, for example.) This is very different than Cycles SSS-- it’s a trick, like most rasterizer things. You might bake some SSS into your diffuse texture to compensate for limited scene lighting. Depending on your shader, you might also bake SSS color, radius to a texture for it to use.

Roughness is not lighting or view dependent (unless you’re doing something really weird.) Any shader that uses per-texel roughness/gloss is going to use a static roughness texture.

Well, I’ll stick with Eevee (because it’s free and has VR support) and use Cycles for the bakes. I wanna see how far I can go with that. Then I’ll try to “adapt” what I get in Blender to Unity and see what I can get. Or maybe I should try Unreal, although I’m not familiar with it, and they say it’s harder than Unity to get into.

Oh yeah, that’s true, SSS isn’t a view dependent effect but a lighting dependent one. And SSS in Eevee doesn’t look good. No surprise; as you pointed out, it’s just an emulation. Yeah, I’ll definitely start to bake the SSS into the diffuse. I’ll try to follow your recommendations.

You know, CGI is actually not that hard to learn (at least I don’t think so lol). The biggest problem for me is finding good material to study. All I know (about CGI) I’ve learned mainly by searching the internet and watching YouTube videos. And it would be perfectly fine if not for the difficulty of finding good, technical, deep, affordable, clear explanations of things. But thank goodness I still have the internet to learn from lol. So, if you know of a good website, tutorial, or book about it, please let me know.