why only quads?

Hi guys, I am pretty new to Blender; I’ve only been using it for about a month or so. I’ve done maybe 4 models so far, kind of just to get my feet wet.

I want to make some low-poly models of characters I have in mind for a game. Here’s my question: in a lot of tutorials I have seen, one of the “rules” seems to be to avoid creating triangles in the mesh as much as possible. But, when modeling low-poly, it would seem like using triangles would make it easier to keep poly count down.

What is the reason for the “only use quads” rule?

Thanks!

  1. 1 quad = 2 triangles.
  2. Quads distort better than triangles during animation.
  3. Most games use triangles anyway. :smiley:

So yeah, when you’re getting it ready for in-game use, start messing with triangles. But before that it’s easier to work with quads (they divide better; try a subdivide on a quad and then on a triangle to see the difference).
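
If you’d rather not just eyeball that difference in the viewport, here’s a minimal bmesh sketch (run from Blender’s Python console or Text Editor) that subdivides a lone quad and a lone triangle and reports what face types come out. It uses `bmesh.ops.subdivide_edges` as a stand-in for the interactive Subdivide tool, so treat it as an approximation; exact results can vary a bit between Blender versions.

```python
# Rough comparison: subdivide a single quad vs. a single triangle once
# and count what kinds of faces you end up with.
import math
import bmesh

def subdivide_once(sides):
    bm = bmesh.new()
    # Build a single n-gon face: 4 sides = quad, 3 sides = triangle.
    verts = [bm.verts.new((math.cos(2 * math.pi * i / sides),
                           math.sin(2 * math.pi * i / sides), 0.0))
             for i in range(sides)]
    bm.faces.new(verts)
    bmesh.ops.subdivide_edges(bm, edges=bm.edges[:], cuts=1, use_grid_fill=True)
    quads = sum(1 for f in bm.faces if len(f.verts) == 4)
    tris = sum(1 for f in bm.faces if len(f.verts) == 3)
    bm.free()
    return quads, tris

print("quad after one subdivide (quads, tris):", subdivide_once(4))
print("tri  after one subdivide (quads, tris):", subdivide_once(3))
```

The quad stays a clean grid of quads, which is exactly what keeps loop cuts and edge flow predictable while you model; the triangle doesn’t.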

OK, so quads are better because other modifiers and tools that you might use while modeling work better with them. But if I’m kind of finalizing a model for a game then using triangles isn’t an issue?

Quads are great, and preferred when creating the mesh, as the tools like 'em better, and you don’t have all those extra edges cluttering up the place, but…

3a) Ultimately, EVERYTHING uses triangles as that is what hardware uses.

Game engines are no exception as they leverage the acceleration afforded by the hardware to achieve real-time frame-rates.

Newer commercial game engines happily accept quads, triangulating them and calculating strip & fan primitives (used by the hardware to accelerate rendering… and which, BTW, require triangles :slight_smile: ) on the fly. And quad meshes typically work fine, not so much because tris are no longer important, but because these newer engines accept meshes with much higher poly counts, so the angles between edges are much smaller… the deformation is spread out across more faces, requiring each individual face to distort less during animation. That makes it easier for each quad to maintain its initial “concavity” or “convex-ness”. (Now I’m makin’ up words :stuck_out_tongue: )
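
To make the strip/fan point a bit more concrete, here’s a toy illustration in plain Python (the vertex indices are made up for illustration) of why those hardware primitives want triangles: a strip reuses each new vertex for the next triangle instead of repeating indices.

```python
# Two rows of vertices forming a band of 4 quads:
#   0--1--2--3--4
#   |  |  |  |  |
#   5--6--7--8--9
# Sent as individual triangles, every triangle repeats indices:
individual_tris = [
    (0, 5, 1), (1, 5, 6),
    (1, 6, 2), (2, 6, 7),
    (2, 7, 3), (3, 7, 8),
    (3, 8, 4), (4, 8, 9),
]  # 8 triangles, 24 indices

# Sent as a triangle strip, each new index completes the next triangle:
triangle_strip = [0, 5, 1, 6, 2, 7, 3, 8, 4, 9]  # same 8 triangles, 10 indices
```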

However, if you are using an older, less advanced engine which requires much lighter meshes (i.e. fewer polys), tight control of the triangulated mesh is necessary. This is due to a quad having that ambiguous bisecting edge that turns it into 2 triangles. The game engine has no way of knowing which direction that edge bisects the quad unless you explicitly tell it.
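
Put another way: the same four corners allow two different triangulations, and nothing stored in the quad itself says which one the exporter or engine will pick. A tiny illustration (indices are just for the example):

```python
# One quad, corners 0-1-2-3 in order. It can be split along either diagonal;
# both are valid triangulations of the same four vertices.
quad = (0, 1, 2, 3)

split_along_02 = [(0, 1, 2), (0, 2, 3)]  # diagonal from corner 0 to corner 2
split_along_13 = [(0, 1, 3), (1, 2, 3)]  # diagonal from corner 1 to corner 3

# If the four corners are not coplanar, the two splits produce visibly
# different surfaces, which is exactly the "tent" example below.
```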

Example:
Create a single quad plane.
Select the 2 vertices on opposite corners and move them straight up (along the Z axis).
Use the knife tool (K) to connect them, forming the “bisecting edge” that turns the quad into 2 tri’s.

In shaded mode, this will form a sort of “tent” shape. Whether that “tent” is right side up or upside down depends entirely on the direction of this new edge. With the new edge selected, from the Ctrl-E edge menu choose “Rotate Edge CW”. Notice how this flips the “tent” to the opposite direction? This is the difference between a “concave” and a “convex” quad… all dependent upon that bisecting edge.
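
If you’d rather script the experiment than click through it, here’s a hedged bpy/bmesh sketch of the same “tent” setup for Blender’s Text Editor. The vertex indices and operator options are assumptions; check them against your Blender version.

```python
# Scripted version of the "tent" example above (a sketch, not gospel).
import bpy
import bmesh

# 1. Start from a single quad plane.
bpy.ops.mesh.primitive_plane_add(size=2)
obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')

bm = bmesh.from_edit_mesh(obj.data)
bm.verts.ensure_lookup_table()

# 2. Raise two opposite corners. On the default plane, verts 0 and 3
#    happen to sit diagonally from each other (verify in your version).
corner_a, corner_b = bm.verts[0], bm.verts[3]
corner_a.co.z += 1.0
corner_b.co.z += 1.0

# 3. Connect the two raised corners, creating the bisecting edge
#    (the scripted equivalent of the knife-tool step).
bmesh.ops.connect_verts(bm, verts=[corner_a, corner_b])
bmesh.update_edit_mesh(obj.data)

# 4. To flip the "tent", select that new diagonal edge in the viewport and
#    use Edge menu > Rotate Edge CW
#    (or bpy.ops.mesh.edge_rotate(use_ccw=False) with it selected).
```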

This is why the previously stated view that quads “distort better than triangles during animation” does not hold water for low-poly game meshes. When you distort a mesh by animating it, whether a given quad becomes concave or convex depends directly on the direction of the bisecting edge. So if you’re going very low poly, you need to triangulate prior to export, then go through your mesh turning the new edges as required, trying to maintain the same kind of logical “edge flow” that modelers aim for with quad-only workflows. Preview your animations to see if any edges around the joints need turning because they are “going concave” instead of convex, or vice versa.
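
As a rough sketch of that “triangulate before export” pass (assuming your model is the active object; the quad_method/ngon_method names follow the current operator and may differ in older Blender versions):

```python
# Hedged sketch: triangulate the whole mesh as a final pass before export.
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Ctrl-T equivalent: convert all quads to tris. 'BEAUTY' lets Blender pick
# what it thinks is the nicer diagonal, but joint areas usually still need
# a manual pass with Rotate Edge CW/CCW while scrubbing the animation.
bpy.ops.mesh.quads_convert_to_tris(quad_method='BEAUTY', ngon_method='BEAUTY')

bpy.ops.object.mode_set(mode='OBJECT')
```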

here’s a visual example

Very good information, and a good visual example. It seems like the best way to work would be to create a model using quads, and then as a second stage convert the mesh into triangles in order to make sure all the edges are in the right direction.

As for the game engine, I am not yet at a stage where I have even thought about choosing one, but it’s good to know about this kind of thing ahead of time.

However, maybe you guys could give me some suggestions… I will try to describe my idea for the game:

The game will be in 3D, but the camera perspective won’t change. The perspective will be from above and side (camera located at side, above action, tilted downwards). The character the user controls will be near the left side of the screen. The game will consist of levels; each level is a section of terrain that scrolls across the screen from right to left. Enemies will come in from the right and the user character must kill them as they appear in order to progress through the level.

For something like that it doesn’t matter what engine you use. I’m betting you could even use Flash.

With older engines, though, lighting won’t matter: no older engine (roughly pre-2004) does per-polygon lighting on entities; entities are either fully lit or not. Heck, even games that came out this year do that too! :smiley:

The lighting in Blender isn’t something you should be super worried about either. No game lights things the way Blender does, except the Blender Game Engine. :slight_smile: You can get an approximate look, but things always end up looking different in-game due to different lighting situations.