Recently I did some research on adding high-resolution skin details using bump/displacement maps.
But I ran into render-time and GPU memory problems, because these high-frequency details require large image dimensions and file sizes.
I'm not well informed about color formats and bit depths…
I did find this in a texturing.xyz tutorial:
To combine these 3 BW bump maps into one RGB map.
But I am unable to judge whether one 4K RGB image file is more efficient than 3 BW maps.
Also, could someone tell me the best compression for saving BW bump maps and general color (diffuse) maps? (e.g. 4K bump -> 16-bit BW or 8-bit?)
One image where the RGB (and sometimes even the A) channels are filled/packed is usually much more resource-friendly, with fewer texture lookups,
than four separate images where those channels go unused.
With packed images you reduce the number of separate textures the GPU has to load and sample.
16-bit = 65,536 possible values per channel (0-65535)
8-bit = 256 values (0-255)
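To put numbers on it (a quick plain-Python check, nothing Blender-specific): an n-bit channel stores 2^n distinct values, and the smallest height step it can represent on a normalized 0..1 range follows from that.

```python
# Distinct levels per channel at each bit depth.
levels_8 = 2 ** 8    # 256 values (0-255)
levels_16 = 2 ** 16  # 65536 values (0-65535)

# Smallest representable height step on a normalized 0..1 bump range:
step_8 = 1.0 / (levels_8 - 1)    # about 0.0039
step_16 = 1.0 / (levels_16 - 1)  # about 0.000015

print(levels_8, step_8)
print(levels_16, step_16)
```

So 16-bit gives 256 times finer height steps, which is what matters for subtle displacement.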
That blue channel looks like dummy data: totally flat, likely mid grey. Probably just a compatibility measure.
I don't see the need for hundreds, let alone thousands, of potential height divisions for millimetre-depth features. Unless your view goes right into the pores, with the detail occupying more than 256 pixels per axis of screen space in profile, 8-bit should suffice. Paint programs and game engines are less likely to choke on 8-bit, but if you're staying in Blender the whole time it's not a big deal either way.
It only looks 'flat' blue because the micro-scanned detail is so fine.
Visit texturing.xyz and you will be amazed by the detail their maps will provide!
I do not need such realism at the moment, but I was (and still am) considering buying some of their maps or alphas.
I think having 3 maps for different frequencies of bump gives you more flexibility in developing the look of the material, since you can adjust the amount of bump at each frequency while mixing them. That's great, but not very memory efficient, unless you make the high-frequency maps tileable and repeating, and keep the low-frequency map at a lower resolution (since it is blurry anyway); that would make it a lot more efficient. This might not be that useful for a face, but I'm sure there are situations just right for it. I think you could set it up so the maps are stored in one file as channels, then split them up and manipulate them as separate data sources in the shader.
For regular everyday use it makes sense to have a bump or displacement map as a single greyscale image, so the file only contains one channel instead of the same information repeated in all 3 channels. One could, of course, store 3 separate bump maps in one image as the RGB channels. For a single map, if there is no clever way to reuse information at certain frequencies of detail, the frequencies would need to be combined into one map to use less memory. If you can get away with 8 bits, that's great, but in reality it often is not enough: there is often a visible difference between 8, 16 and 32 bits for bump maps, and even more so for displacement maps. I would use lossless compression, for example the compression used in the TIFF file format. This only saves storage space on disk; compression does not affect rendering.
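Packing 3 greyscale maps into one image's RGB channels is easy to sketch outside Blender too. This is a hypothetical NumPy example (tiny 4x4 arrays standing in for real 4K maps), just to show that packing and splitting are lossless:

```python
import numpy as np

# Stand-in resolution; real maps would be e.g. 4096x4096.
h, w = 4, 4
low = np.random.rand(h, w).astype(np.float32)   # low-frequency bump
mid = np.random.rand(h, w).astype(np.float32)   # mid-frequency bump
high = np.random.rand(h, w).astype(np.float32)  # high-frequency (pore) bump

# Pack the three single-channel maps into one RGB array (shape h, w, 3).
packed = np.stack([low, mid, high], axis=-1)

# Later (in Python or via a Separate Color node in the shader),
# split the channels back out unchanged:
r, g, b = packed[..., 0], packed[..., 1], packed[..., 2]
```

The same idea is what a Separate Color node does in a Blender shader: each channel comes back out exactly as it went in.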
I am actually working on something where I tile higher-frequency detail that has a higher pixel density onto the model. But trying to combine the different detail maps into a single image has a problem: it would restrict all the detail to the same pixel resolution. If you have a 2048x2048 high-frequency detail image, do you want to layer that with a 512x512 low-frequency detail image? That defeats the purpose of saving your less detailed textures at lower resolutions to optimize space.
There are many trade-offs regarding different detail levels. Whether you save the images as BW, RGB, or RGBA does not matter in itself; just think of multiple channels as multiple layers you can pack information into. You can see this in how PBR material data is often saved. Having one file to work with is a useful way to package data. But remember that Blender uses a single texture mapping for all the channels in a given image, so only store data in a single image when all of it is mapped onto the model in the same way.
You could save RAM by saving a small square of the high frequency detail to tile over the model using a lower resolution stencil. You would save the small, high detail square as a smaller image. Then, you could save all the other details at a lower resolution as well to save RAM. But, the small tile would need a different texture mapping, so you cannot put it into the same file as the other data. You could replace its layer in the file with the stencil map, though. All of this would still probably save RAM in the end.
However, if you plan to tile a high resolution texture and to mix it with other textures, that takes processing power for each bounce of each sample. The same overhead applies to mixing all those levels of detail. In practice, you would want to bake all your textures into a single layer for each type of data. In the case of the skin with three detail layers, baking them into a single layer would reduce the file size by 67%. And, in the case of baking the tiled texture, you would have a single bump layer that would lose the RAM benefits of tiling. In fact, both results would be very similar because a single image would need to cover the entire model while having a high enough resolution to preserve the high detail.
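Some back-of-envelope arithmetic makes the RAM trade-offs above concrete (uncompressed sizes, as textures sit in GPU memory; the numbers are illustrative, not measured from Blender):

```python
def texture_bytes(width, height, channels, bits):
    """Uncompressed size of a texture in bytes."""
    return width * height * channels * (bits // 8)

# Three separate 4K greyscale 16-bit bump maps vs one packed 4K RGB map:
three_bw = 3 * texture_bytes(4096, 4096, 1, 16)
one_rgb = texture_bytes(4096, 4096, 3, 16)
print(three_bw == one_rgb)  # True: packing saves files, not raw pixel memory

# Baking the three layers into a single greyscale map is what actually
# cuts memory, by roughly two thirds:
one_bw = texture_bytes(4096, 4096, 1, 16)
print(1 - one_bw / three_bw)  # about 0.67
```

This is the sense in which baking to a single layer gives the 67% saving, while merely packing channels keeps the pixel count, and therefore the memory, the same.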
Regarding the bit depth: the bit depth is the same for all the layers in an image. For something like bump, 16-bit is good. Remember, subtle bumps like the high-frequency detail are so faint you can hardly even see them; those details can easily be lost or pixelated in 8-bit images. Similar issues apply to normal and displacement maps. So you could use the RGB channels of a 16-bit linear image for a normal map and the A (alpha) channel for a displacement map, then save the color/albedo maps in the RGB channels of an 8-bit image with the A channel being actual alpha, or whatever else.
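You can simulate how a faint bump signal survives quantization. This sketch (NumPy, synthetic data) shows a very subtle height variation of about one part in a thousand collapsing to almost nothing at 8-bit while surviving at 16-bit:

```python
import numpy as np

# A very subtle bump signal: +/-0.001 around mid grey (0.5).
subtle = 0.5 + 0.001 * np.sin(np.linspace(0, 2 * np.pi, 64))

# Quantize to 8-bit and 16-bit integer levels, then back to float:
q8 = np.round(subtle * 255) / 255
q16 = np.round(subtle * 65535) / 65535

# 8-bit collapses the signal to a couple of flat steps;
# 16-bit keeps many distinct levels, preserving the detail.
print(len(np.unique(q8)))
print(len(np.unique(q16)))
```

The stair-stepping you see at 8-bit is exactly the pixelation that shows up as banding or terracing in displacement.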
It is kind of like a puzzle. You have all kinds of data. And, you need to figure out how to package it and how to minimize the RAM requirements of it. And, you have to balance RAM use with render times.
If the information separated by frequency is not reused or manipulated in any way then, yes, combining a 512x512 low-frequency map with a 2048x2048 high-frequency map will save memory. The 2K map already has pixels everywhere the 0.5K map does; the low-frequency information can be stored within those pixels, so you no longer need the separate low-frequency map. In fact, it was likely one image originally that was separated for easier manipulation. Frequency separation is used in photo retouching exactly the same way: to make the image easier to work with, not to save memory. You can do a lot of amazing stuff with detail separated into different frequencies, but doing the separation alone, without any manipulation, has no benefit.
If you reuse the data, then yes, you are right: you cannot reuse the same high-frequency map on top of two different low-frequency maps unless the data is separated by frequency. This can be very useful. For example, say we have to make a wooden floor texture and we get images of 5 different boards. That's not enough variation to go unnoticed. So we separate the high and low frequencies of the color data and mix them across boards, which gives us 25 different boards; we can also use the low frequencies flipped in x, y or both (the latter being the same as a 180° rotation) for even more variation.
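The board-mixing trick above can be sketched in a few lines. This is a hypothetical NumPy-only example (random arrays standing in for board photos, and a crude box blur standing in for a proper low-pass filter): low frequencies come from a blur, high frequencies are the residual, and swapping bands between two boards produces a new variant.

```python
import numpy as np

def box_blur(img, radius=2):
    # Crude separable box blur using wraparound shifts; enough for a demo.
    acc = np.zeros_like(img)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def separate(img):
    low = box_blur(img)   # low frequencies: the blurred image
    high = img - low      # high frequencies: the fine-grain residual
    return low, high

rng = np.random.default_rng(0)
board_a = rng.random((64, 64))  # stand-in for a photo of board A
board_b = rng.random((64, 64))  # stand-in for a photo of board B

low_a, high_a = separate(board_a)
low_b, high_b = separate(board_b)

# Recombining a board's own bands reconstructs it; crossing bands
# makes a new variant. With 5 boards: 5 lows x 5 highs = 25 boards.
variant = low_a + high_b
```

Since high is defined as the residual (img minus low), addition reconstructs the original exactly, which is why superposition works for recombining.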
Combine the three maps using the ‘Combine RGB’ node
Bake this map
‘Unpack’ this image using the ‘Split RGB’ node
Mix these 3 values with different weights if necessary
Then I’ll try a couple of different render results with different export types for the RGB image.
I feel like the resolution and image depth will differ in any scene anyway depending on how close the camera is and the final render’s resolution.
But I will save RAM by using a single RGB file instead of 3 grayscale ones right?
Not really; that will be about the same. The Combine node would not be appropriate here either. You need to figure out how the frequencies were separated. Google 'frequency separation' and see how it's done in Photoshop to understand the concept better.
Very good point. I am actually dealing with this as well. I have several tiled images that I am mixing together at different scales so that the tiling is not perceived. But, the surface that they are texturing is so large that baking them is not practical. That is not exactly what you are talking about, but it is a different use case that I think illustrates the same end result, baking the layers being impractical or counterproductive.
Frequencies add by simple superposition (addition). If they were separated using simple frequency filters, like high pass and low pass filters, combining them should be simple.
(1) Use one layer as the base layer
(2) For additional layers, subtract 0.5 (to make the neutral points 0.0)
(3) Add all the layers together
Ideally, bump maps should be in linear space, not sRGB. If the image is not color data, be sure to mark it as Non-Color in Blender; otherwise Blender's automatic sRGB-to-linear conversion will shift the values, and 0.5 will no longer be the neutral point.
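The three steps above can be sketched directly (NumPy, values assumed linear and in 0..1, with 0.5 as the neutral point of each detail layer, as described):

```python
import numpy as np

def combine_layers(base, *detail_layers):
    """Recombine frequency-separated layers by superposition:
    keep the base, shift each detail layer's neutral grey (0.5) to 0,
    then add everything together."""
    out = base.copy()
    for layer in detail_layers:
        out += layer - 0.5
    return out

# A neutral (all-0.5) detail layer contributes nothing:
base = np.full((4, 4), 0.5)
neutral = np.full((4, 4), 0.5)
combined = combine_layers(base, neutral)
```

Note the result can drift outside 0..1 where several layers push the same way, so a 16-bit or float output format is safest for the combined map.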