Texture Array Texturing in Blender

Is this possible using Blender shader editor?

I am not sure. Would appreciate any info on setting this up. The issue is how to get the values in the mask to correspond to each texture.

If it is not, maybe a new feature for Blender? :slightly_smiling_face:

That’s absolutely possible :slight_smile: You can use vertex paint to blend between textures just like that:

Each of those “Greater Than/Less Than/Minimum” node groups is an AND operator: if the value is between X and Y, output 1; otherwise, output 0. This lets you get a bunch of discrete mix factors from a bunch of different colors. With that understanding, you can make those into a node group, which gives you a very simple shader tree:
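If it helps to see the logic outside the node editor, here’s a minimal plain-Python sketch of what each of those groups computes (the band boundaries here are illustrative, not the exact values from my screenshot):

```python
def band_mask(value, lower, upper):
    """Mimic the Greater Than / Less Than node pair:
    output 1.0 when lower < value <= upper, else 0.0.
    Taking the Minimum of the two comparisons acts as a logical AND."""
    greater = 1.0 if value > lower else 0.0
    less = 1.0 if value <= upper else 0.0
    return min(greater, less)  # AND of the two tests

# Four discrete mix factors from one painted channel,
# one band per quarter of the 0-1 range:
bands = [(i / 4, (i + 1) / 4) for i in range(4)]
factors = [band_mask(0.6, lo, hi) for lo, hi in bands]
print(factors)  # [0.0, 0.0, 1.0, 0.0] — only the band containing 0.6 fires
```

Each factor then drives one MixRGB node, so exactly one texture shows per painted value.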


Thanks a ton, man. :+1:

Here is the method someone used in UE4:

Would be nice if we could just multiply the mask by a number and paint within those values.

The one on Polycount uses a texture atlas sheet, right? And the mask value calls each texture. I am wondering how the textures are tiled, unless they are mapping each square on the mesh to that area of the atlas.


That is possible, it’s just a little more complex. You could plug that vertex color into a Mapping node and throw all the textures together into one big texture. I’m not 100% sure of all the nitty-gritty details, but I’d guess you can play around with this setup and make it into something awesome :slight_smile:


Honestly, I would guess you can do this without one giant texture atlas; they probably just did that originally for optimization. If you’re not worried about that, you can use image textures instead of the procedural textures in my screenshot. I’m just musing now, though.


Sure thing.

I am just curious how using a texture atlas would work, and whether there is a way to tile from a texture atlas in Blender.
The less shader calculation there is, the faster the render time in Eevee or Cycles, especially for animations.

It would be crazy if you could build a vast environment just using two materials. :slightly_smiling_face:


Very simple, actually. Blender’s material node system is very flexible.
Adjust the column and row values to see it in action.
atlas_material.blend (102.8 KB)


With this, you could use the Red of the vertex paint for Row, the Blue for Column, and the Green for texture mix, giving you a huge array of options with just one material and one vertex paint :thinking:
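I don’t have the .blend open here, so the exact node values may differ, but the Mapping math for picking one atlas cell should look roughly like this plain-Python sketch (a 4×4 atlas is assumed):

```python
def atlas_uv(u, v, column, row, columns=4, rows=4):
    """Map a tiling 0-1 UV into one cell of a columns x rows atlas,
    the way a Mapping node with Scale 1/columns, 1/rows and a matching
    Location offset would. u and v wrap via modulo, so the texture
    still tiles inside its own cell."""
    cell_u = (u % 1.0) / columns + column / columns
    cell_v = (v % 1.0) / rows + row / rows
    return cell_u, cell_v

print(atlas_uv(0.5, 0.5, column=2, row=1))  # (0.625, 0.375): third column, second row
```

The modulo is the key part: without it, UVs past 1.0 would walk out of their cell into the neighboring texture.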


Wow…Thanks so much for sharing :+1: :pray:

This is awesome.

Yes @init_pixel! Well done!
That may be a bit involved if you don’t understand what modulo is doing…

In the meantime, @melvi, you can look at this video, which is filled with cool tricks you may like:

Around 9:55 there is the same method with vertex paint, but by using different channels (with a Separate RGB node) you can mix 3 different textures using the same vertex color.
But it’s also possible to use 3 different vertex groups to achieve the same effect, if that’s simpler for you.
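That channel trick boils down to chained linear mixes; here’s a rough plain-Python sketch of what the three MixRGB nodes compute (the texture values are stand-in numbers, not real samples):

```python
def mix(a, b, fac):
    """Linear blend, like a MixRGB node with the given factor."""
    return a * (1.0 - fac) + b * fac

def blend_three(base, tex_r, tex_g, tex_b, r, g, b):
    """Chain three mixes, one per vertex-color channel from a
    Separate RGB node: each channel fades in its own texture."""
    out = mix(base, tex_r, r)
    out = mix(out, tex_g, g)
    return mix(out, tex_b, b)

# Pure red vertex paint shows only tex_r:
print(blend_three(0.0, 1.0, 0.5, 0.2, r=1.0, g=0.0, b=0.0))  # 1.0
```

Note the chaining order matters: a later channel painted at full strength overrides an earlier one, exactly as with stacked MixRGB nodes.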

I think the heaviest calculation you may want to avoid is mixing shaders: like mixing two Principled BSDFs with a Mix Shader to blend two materials.
Procedural textures also slow things down; it’s better to avoid complex procedural-only materials.
Bump maps tend to slow things down as well, I think, especially when used with procedurals.

That doesn’t mean you should avoid these at all costs, but this is where you should be aware of what you’re doing.

Having multiple vertex colors, UVs, and image textures will probably be slower than one texture, one UV, and one vertex color, but the gain may not be worth the added complexity in the shader.

At least, that’s my personal checklist for optimizations!


That might work… yeah :thinking:


I am aware of that method. Thanks for sharing :slightly_smiling_face:

That’s true, but with init_pixel’s method it isn’t that complex, especially if you can tile from a texture sheet. Many methods can be used to achieve the same goal :slightly_smiling_face:

The advantage of optimizations like this is that you can save rendering time; of course, the normal way of using textures and shaders shouldn’t be avoided either.

One of the things I have learnt is that using game techniques in offline rendering sometimes helps, and I have always been curious about implementing them, especially where texturing and shaders are concerned, if it improves rendering time.


I am guessing you use 1 divided by 16 to get the values that pick each column and row, since there are 16 textures in the atlas?

Cool!

That’s a great experiment to run! It would be interesting to get some numbers to see the performance impact, but that’s some work…
I’ve done it for mixing shaders here: Mixing multiples shaders in Eevee - #4 by sozap
I generally manage to avoid that, but it gives me much more convoluted networks…
I see people mixing shaders a lot in many tutorials, but they probably don’t care about optimizing, or maybe they aren’t aware of the performance cost…

I was a bit wary that you’d end up with overly complex solutions, but you seem to know what you’re doing very well! I’m quite curious about the results you’ll get!
I never thought about making atlases; that’s a neat trick, thanks!


Yeah, I would put the red and green from the Separate RGB through a color ramp each, just to ensure they’re clamped to a 0-1 value, and then toss that value into a Math node and divide or multiply by 16 :slight_smile: You can then use the blue to do what I did with the AND nodes in my earlier reply.
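In plain-Python terms, that clamp-then-scale step amounts to something like this (16 tiles assumed, as above; the function name is just for illustration):

```python
def channel_to_index(value, tiles=16):
    """Clamp a painted channel to 0-1 (like a Color Ramp does) and
    quantize it to one of `tiles` steps, the way multiplying by 16
    does before the atlas lookup."""
    clamped = min(max(value, 0.0), 1.0)
    return min(int(clamped * tiles), tiles - 1)

print(channel_to_index(0.0625))  # 1 — the second tile
```

The final `min(..., tiles - 1)` is just a guard so a painted value of exactly 1.0 doesn’t overshoot past the last tile.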

Technically you have a fourth channel to work with as well: the alpha channel, which you can get from vertex painting. You could use this for something like rotation, to add some randomness to your image textures. Just multiply that value by 360 to get it to degrees and plug it into the Rotation of the Mapping node.
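As a quick sketch of that alpha trick in plain Python (if I remember right, the Mapping node’s rotation socket expects radians when driven by a value, even though the UI shows degrees):

```python
import math

def alpha_to_rotation(alpha):
    """Turn a 0-1 vertex-alpha value into a Z rotation for the
    Mapping node: scale to degrees, then convert to radians
    (or just multiply by 2*pi directly)."""
    degrees = alpha * 360.0
    return math.radians(degrees)

print(alpha_to_rotation(0.25))  # quarter turn, ~1.5708 rad
```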

Just a note: you’re probably already aware of this, but you need lots of vertices to do vertex paint. You could very easily do texture painting instead and use exactly the same methods; just use an image texture instead of vertices. This comes with the pro of not needing a ton of geometry, and the cons of increased memory usage and pixel-based instead of vector-based detail. If your texture is too small, there may be pixelated borders between textures, which you won’t get with vertex painting; then again, it may be a pain to get enough vertices for good, detailed painting. You might try both and see which works better.


Thanks for the tip.

I am using textures for the masks rather than vertex painting. I am keeping mesh polycount relatively low too.

You can use a small tiling mask on the gradient edges of the bigger mask to bump up the mask resolution:

No worries, man. :slightly_smiling_face:


I am getting parts of the underlying texture showing when I paint the mask. The A1 texture still shows when I paint with a value of 0.0625 in the red channel.

This is the setup. I tried using a color ramp before the multiply by 16 but the same issue is still there.


I wonder if the sRGB color space on that image texture is distorting your values. I think you’ll have more precise results with Non-Color instead. Your math is right and your nodes are right, and it’s not a small enough number to be a floating point error, so I think the image color space is the problem.

Whenever I’ve worked with ILM maps (much like this, they use a single image’s channels, but for shading information instead of texture mixing), I have to use Non-Color to get accurate results. I’d imagine that will also work here.


I figured it out. The math was wrong: from the Separate RGB, you have to multiply both channels by 16 and then multiply again by 0.125 before combining them.
You were also right about the color space: changing it to Non-Color, it now displays correctly. Thanks.


@init_pixel Do you think it might be possible to use this with the atlas tiling to remove texture repetition? :pray: