Vertex color painting :: Yes… but… WHY? I never truly understood the WHY of it

Many 3D applications have the ability to vertex paint, but I never understood why.

1: You need more vertices to give the colors more “resolution” to interpolate correctly… why not just use texture paint and set a higher texture resolution?

At least you can always downscale the texture map later, and you don’t have to mess with the model’s topology just to be able to paint it. Why would you purposely densify your model just to vertex paint it with the same clarity that normal texture painting can give you with a higher-resolution map anyway?! Why would you make your life difficult like that? A denser model is harder to rig and animate, so why would anyone do that willingly?

2: It’s not like you will have seam/wrapping problems with texture paint. Blender’s texture paint is incredibly robust (or at least good enough, way more than good enough), and you even have stenciling abilities, so why would you ever use vertex paint?

3: Game engines are perfectly happy with traditional texture maps anyway, and the way colors are stored in vertices is so application-specific, so why shoot yourself in the foot?

I never understood WHY anyone would use vertex paint.
Does it affect my day to day? No.
I just don’t understand the mentality behind this concept.

Can someone help me understand this?
Thanks.

For starters, you can define regions on your meshes for various shader effects without creating gigabytes of texture maps (which is what you end up with once you have done a lot of complex scenes). It is also very fast to use, and the data itself takes very little memory.

If you had a model made of multiple parts, and you want to have a different value for one shading effect on one of them, you can either create another material (inefficient) or make a mask with vertex colors.
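
To make that concrete, here is a minimal bpy sketch for building such a mask layer (the layer name “Mask” and the material-index split are only illustrative assumptions, not a fixed convention):

```python
import bpy

obj = bpy.context.active_object   # assumes a mesh object is active
mesh = obj.data

# Create (or reuse) a vertex color layer to act as a shader mask.
layer = mesh.vertex_colors.get("Mask") or mesh.vertex_colors.new(name="Mask")

# Fill the layer: white on one "part" of the mesh, black on the rest.
# Here the part is picked by material slot index, purely as an example.
for poly in mesh.polygons:
    value = 1.0 if poly.material_index == 0 else 0.0
    for loop_index in poly.loop_indices:
        layer.data[loop_index].color = (value, value, value, 1.0)
```

In the material you would then read the “Mask” layer with an Attribute (or Vertex Color) node and plug it into the factor of a Mix node.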

Then there’s its use for splatting (on meshes like terrains).


Currently, the discussion is about vertex painting in Sculpt Mode.
Resolution is not seen as a limit there.
Most people are sculpting really heavy, dense meshes, so it is assumed that if you have the polycount to sculpt details, the resolution is sufficient to paint those details.
The final target is to be able to sculpt and color the same area in one brush stroke.
The interest is to deliver a quick concept before (or without) spending the time to retopologize to a low-poly model, unwrap it and bake textures.

In the ’90s and early 2000s, games were not able to handle the weight of heavy textures, let alone several of them. Especially online games. In that period, the way to color a mesh without using textures or multiple materials was to use vertex color.
It is funny that you associate vertex color with heavy stuff. It has been present in all the main 3D applications for decades, precisely because decades ago it was the lightest way to color a model.
And that is still the case for people who consider that a low-poly model has a polycount of around 100 tris.


The real WHY is why there are still only 8 of them (vertex color layers) in 2021 :(

1/ Because when you spend a week or more sculpting a complex character, it’s VERY useful to be able to work with something other than just clay or a basic grey color.
2/ Because the vertex paint can then (once the sculpt is finished) be baked onto the diffuse of the low-poly retopology.

It’s a workflow most artists using ZBrush do daily: Sculpt + Polypaint (vertex paint) > retopology > Substance (pixel paint) to add roughness, metalness, etc.
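
For the baking step, a rough bpy sketch of one possible route could look like this. It assumes the high poly’s material already feeds its vertex colors into Base Color, the low poly is unwrapped and has a material, and the object names (“LowPoly”, “HighPolySculpt”) are placeholders:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'            # baking runs through Cycles

low = bpy.data.objects["LowPoly"]         # hypothetical object names
high = bpy.data.objects["HighPolySculpt"]

# The low poly's material needs an Image Texture node with an image
# assigned, and that node must be active so the bake has a target.
img = bpy.data.images.new("BakedDiffuse", 2048, 2048)
mat = low.active_material
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = img
mat.node_tree.nodes.active = tex_node

# Select high poly, make low poly active, bake diffuse color only
# (so the vertex paint lands in the image without lighting baked in).
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                    use_selected_to_active=True, margin=4)
```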

The real problem here is that I can paint a mesh of 10 million polygons (or way more) in ZBrush, ultra fluidly, but when I paint a 20K-poly mesh in Blender… it’s slow. That’s the only issue. Blender needs an equivalent to ZBrush’s so-called “2.5D” tech. If they go on with the sculpting part of their app, Blender needs another window with another rendering engine (all CPU, like ZBrush) to compute high-polycount meshes quickly.


For my projects I use vertex color for a lot of things: splitting parts of a mesh, assigning different textures by splitting the R/G/B channels (splatting), or giving particular tints to mesh zones.
It saves me from storing images in my file or in my video card’s memory.
But you are right, it’s a little bit old school.
Anyway, with that kind of technique I’m able to use my old machine to create pictures.

Here is an example of how I split shaders with it:

Of course, with Subsurf the curvature will keep the vertex color info.
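
For anyone curious, a hedged bpy sketch of that kind of splat setup might look like this (the material name “Terrain” and layer name “Splat” are placeholders, and the image texture nodes still need images assigned):

```python
import bpy

mat = bpy.data.materials["Terrain"]        # hypothetical material name
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Read the painted vertex color layer and split it into R/G/B masks.
attr = nodes.new('ShaderNodeAttribute')
attr.attribute_name = "Splat"
sep = nodes.new('ShaderNodeSeparateRGB')
links.new(attr.outputs['Color'], sep.inputs['Image'])

# Three tiling textures to blend between.
tex_a = nodes.new('ShaderNodeTexImage')
tex_b = nodes.new('ShaderNodeTexImage')
tex_c = nodes.new('ShaderNodeTexImage')

# Blend A->B by the R channel, then the result->C by the G channel.
mix1 = nodes.new('ShaderNodeMixRGB')
mix2 = nodes.new('ShaderNodeMixRGB')
links.new(sep.outputs['R'], mix1.inputs['Fac'])
links.new(tex_a.outputs['Color'], mix1.inputs['Color1'])
links.new(tex_b.outputs['Color'], mix1.inputs['Color2'])
links.new(sep.outputs['G'], mix2.inputs['Fac'])
links.new(mix1.outputs['Color'], mix2.inputs['Color1'])
links.new(tex_c.outputs['Color'], mix2.inputs['Color2'])

# Finally feed the blended result into the Principled BSDF's Base Color.
bsdf = nodes['Principled BSDF']
links.new(mix2.outputs['Color'], bsdf.inputs['Base Color'])
```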

Vertex colors are somewhat of an early-’90s technique, from the early OpenGL days. Think, for example, of games such as Crash Bandicoot or Spyro. Vertex colors were very useful back then considering the hardware limitations of the time.

Gradually the technique became more and more obsolete, and at this point in time it is almost forgotten.

However, in the last few years, since a typical sculpted model tends to be 100K+ vertices, people have slowly been finding ways to put the technique back to good use.

Such density gives enough points on the surface to treat it as a “canvas”; it practically does the same job as a texture. When the model is to be optimized for production use, textures can be baked right away and the vertex count reduced. But at the iterating and prototyping stage it is actually efficient enough.

They’re not obsolete or forgotten; they’re still used in a lot of current games and other realtime media: texture blending, defining animated parts of a foliage system, defining cloth stiffness, colour overlays, encoding arbitrary data…


I have never seen a strong use case in various game engines, so far.

Other than that, the most standard and multipurpose way of doing things is vertex groups.

For example, you can create three vertex groups and call them R/G/B, then allow a shader to read that data and interpret it as color values.

Meanwhile, the exact same concept applies to vertex colors. The logic is exactly the same in both cases; the only thing that changes is the data structure in the 3D engine.
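
As a small illustration of that equivalence, here is a bpy sketch that copies three (hypothetical) vertex groups named R/G/B into one vertex color layer:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

# Indices of three vertex groups assumed to be named "R", "G", "B".
group_index = {name: obj.vertex_groups[name].index for name in ("R", "G", "B")}

def weight_of(vertex, gi):
    """Return the vertex's weight in group gi, or 0.0 if unassigned."""
    for g in vertex.groups:
        if g.group == gi:
            return g.weight
    return 0.0

# Write the three weights into an RGB vertex color layer.
layer = mesh.vertex_colors.get("FromGroups") or mesh.vertex_colors.new(name="FromGroups")
for loop in mesh.loops:
    v = mesh.vertices[loop.vertex_index]
    layer.data[loop.index].color = (
        weight_of(v, group_index["R"]),
        weight_of(v, group_index["G"]),
        weight_of(v, group_index["B"]),
        1.0,
    )
```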

Since many programmers do not want to bloat their vertex structure with three extra floats (12 bytes), they prefer to rely on vertex groups instead, so they have the flexibility to load and enable that data only when actually needed.

But I don’t find anything wrong with what you say. Everything goes.

For things like masking (to assist in texturing a character or a terrain), vertex color channels have no major alternative.

The reason is their independence from UV data. A texture needs to be redone in certain areas if you add a new section to a terrain or make changes to a character, for instance (because the UV map has to be edited). As a result, Sculpt Vertex Colors will be a very useful shading tool once Joeedh gets it committed and makes it utilize multires subdivision.

Though in Blender, there are now a couple of alternatives which can take basic masking and texturing tasks away from vertex colors:

  1. Weight maps converted (using Geometry Nodes) into data Cycles can use via the Attribute node.
  2. Using a dedicated vertex color channel to create global coordinates (with Geometry Nodes) that deform along with deformation modifiers. This unlocks the use of procedural textures for animated objects, and is most useful when done in concert with point 1.
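
For point 2, the shader-side hookup is just an Attribute node feeding the Vector input of a procedural texture. A minimal sketch (the material name “AnimatedCharacter” and layer name “OrigCoords” are made up for illustration):

```python
import bpy

mat = bpy.data.materials["AnimatedCharacter"]   # hypothetical material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Read the color layer that stores the "rest pose" coordinates.
attr = nodes.new('ShaderNodeAttribute')
attr.attribute_name = "OrigCoords"

# Use it as the Vector input of a procedural texture, so the pattern
# sticks to the surface even while modifiers deform the mesh.
noise = nodes.new('ShaderNodeTexNoise')
links.new(attr.outputs['Vector'], noise.inputs['Vector'])
```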

Vertex colors in gamedev are commonly used as a kind of replacement for vertex groups in shaders:

  • Textures are memory-consuming; you usually have far more pixels in a texture than vertices in the model (rough numbers after this list).

  • Texture sampling itself is an expensive operation and takes up GPU time.

  • Vertex colors are read in the vertex shader rather than the fragment shader, so they tend to be more performant, because one vertex usually covers more than one pixel.

  • You can have multiple instances of the same object with the same material but different vertex color data. Those objects can be rendered in a single draw call instead of in multiple ones, as they would be if they had multiple materials.

So there are use cases where vertex colors are more performant and more memory-efficient.
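
To put rough numbers on the first bullet (the sizes are illustrative assumptions, and real GPU textures are usually block-compressed, which narrows the gap):

```python
# Back-of-the-envelope comparison: one uncompressed 2K texture
# versus per-vertex RGBA8 colors on a fairly dense game mesh.

texture_px = 2048 * 2048           # a single 2K texture
texture_bytes = texture_px * 4     # RGBA, 1 byte per channel
print(texture_bytes / 1024**2)     # ~16 MB

vertex_count = 50_000              # assumed vertex count
vcol_bytes = vertex_count * 4      # RGBA8 per vertex
print(vcol_bytes / 1024)           # ~195 KB
```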

Example of a use case in Unreal:
