Mipmaps and tiled textures (the combination of both is the magic!) are IMHO the most important features currently missing in Cycles, because with them you no longer have to worry about texture memory or about manually choosing the best resolution (as small as possible, as big as needed).
Stolen from the Arnold documentation (https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_textures_html):
When using Arnold it is best to use a tiled mipmapped texture format such as .exr or .tx that has been created using maketx.
.tx textures are:
Tiled (usually the tiles are 64×64 pixels).
Mip-mapped.
If you already have tiled and mipmapped EXRs that have been created by another renderer, you won't need to convert those files to .tx.
Due to (1), Arnold’s texture system can load one tile at a time, as needed, rather than having to wastefully load the entire texture map in memory. This can result in faster texture load times, as texels that will never be seen in the rendered image will not even be loaded. In addition to the speed improvement, only the most recently used tiles are kept in memory, in a texture cache of default size 512 MB (can be tuned via options.texture_max_memory_MB). Tiles that have not been used for a long time are simply discarded from memory, to make space for new tiles. Arnold will never use more than 512 MB, even if you use hundreds or thousands of 4k and 8k images. But then, if you only use a handful of 1k textures, this will not matter.
Due to (2), the textures are anti-aliased, even at low AA sample settings. Neither of these is possible with JPEG or other untiled/unmipped formats (unless you tell Arnold to auto-tile and auto-mip the textures for you, but this is very inefficient because it has to be done per rendered frame, rather than a one-time pre-process with maketx).
It is worth pointing out that .tx files are basically .exr or .tif files that have been renamed to .tx. This means .tx files can be read by image editors, although you may need to rename the file to .exr or .tif so that the image editor will load it. However, .tx files have a few extra custom attributes set that are not normally included in .exr/.tif files, which can make rendering even faster. For instance, they include a hash, so that if you try to load two different files that contain the same data, Arnold only needs to load this data once.
The first benefit is that you are assured of having mipmaps and tiles. These dramatically improve time to first pixel and overall render time, and allow using a smaller texture cache. This should be considered mandatory, with the possible exception of the few images you are actively modifying, if you don't want to wait for them to be converted to .tx again every time you make a change.
That first level of benefit you could also get from non-.tx files saved with tiles and mipmaps. The second level of benefit comes only with .tx files: maketx adds metadata that lets Arnold make further optimizations, such as detecting duplicate textures and loading only a single copy into memory, or detecting constant-color images (such as an all-black UDIM) and special-casing them instead of storing all those black pixels in memory.
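For anyone curious what that pre-process looks like in code, here is a rough sketch using OIIO's C++ API, which is more or less what the maketx command-line tool wraps. The filename is made up, and the exact "maketx:" hint names should be double-checked against your OIIO version:

```cpp
#include <OpenImageIO/imagebuf.h>
#include <OpenImageIO/imagebufalgo.h>

using namespace OIIO;

int main()
{
    // Source image (example filename).
    ImageBuf src("earth_color_8k.exr");

    // Configuration hints, roughly corresponding to maketx command-line options.
    ImageSpec config;
    config.attribute("maketx:filtername", "lanczos3");     // filter for the mip downsizing
    config.attribute("maketx:constant_color_detect", 1);   // collapse e.g. an all-black UDIM
    config.attribute("maketx:opaque_detect", 1);           // drop a useless all-1.0 alpha channel

    // Writes a tiled, mipmapped texture with the extra metadata discussed above
    // (a pixel-data hash is embedded by default, which enables duplicate detection).
    bool ok = ImageBufAlgo::make_texture(ImageBufAlgo::MakeTxTexture,
                                         src, "earth_color_8k.tx", config);
    return ok ? 0 : 1;
}
```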
I’d need to look a bit deeper into OIIO and how Cycles uses it, but AFAIK the .tx structured data is an integral part of the OIIO library, and it should be used by default, under the hood.
OIIO is used, but sadly not the tiled / mipmapped part of it.
The only way to get a glimpse of it is when using CPU OSL; then tiles and mipmaps work. But of course that is much slower to render than GPU, and even than non-OSL CPU.
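For reference, the tiled/mipmapped part of OIIO is its TextureSystem / ImageCache layer. A minimal sketch of how a renderer queries it (the filename and cache size are just examples, and creation/ownership details differ between OIIO versions):

```cpp
#include <OpenImageIO/texture.h>
#include <OpenImageIO/ustring.h>

using namespace OIIO;

int main()
{
    // Create/obtain the texture system (newer OIIO versions return a shared_ptr).
    auto ts = TextureSystem::create();

    // Bound the tile cache, analogous to Arnold's texture_max_memory_MB;
    // attributes the TextureSystem doesn't know are passed to the underlying ImageCache.
    ts->attribute("max_memory_MB", 512.0f);

    // One filtered lookup: s/t are the texture coordinates and the four
    // derivatives describe the screen-space footprint, which selects the
    // mip level. Only the tiles touched by that footprint get loaded.
    TextureOpt opt;
    float rgb[3];
    ustring filename("earth_color_8k.tx");   // example file
    bool ok = ts->texture(filename, opt,
                          /*s=*/0.25f, /*t=*/0.5f,
                          /*dsdx=*/0.001f, /*dtdx=*/0.0f,
                          /*dsdy=*/0.0f,   /*dtdy=*/0.001f,
                          /*nchannels=*/3, rgb);
    return ok ? 0 : 1;
}
```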
P.S.: A long time ago I posted a comparison here of a simple Earth texture from NASA applied to a simple sphere, rendered “normally” and as a tiled and mipmapped texture in OSL mode.
I can’t find it right now but the difference was shocking.
You’re right, but I think we should also consider that resolving high-frequency pixel detail needs more samples… so if a lower mipmap level can help with that, I think it would be useful… not as important as in realtime rendering, but it would help.
It’s not only V-Ray that disagrees; every renderer using mipmaps does.
The filesize of the texture on disk will of course be larger, but the renderer only loads the mip level(s) it actually needs. And that might be a mip level that’s 16x smaller in resolution than the original full size texture.
With tiles the renderer can even load only the visible parts of a texture and save even more memory.
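As a rough illustration of “only load what you need”, here is a sketch using OIIO’s ImageCache; the filename, mip level, and region are made up, and with a 64×64-tiled file a request like this touches roughly a single tile on disk:

```cpp
#include <OpenImageIO/imagecache.h>
#include <OpenImageIO/ustring.h>

using namespace OIIO;

int main()
{
    auto ic = ImageCache::create();          // shared_ptr in newer OIIO versions
    ic->attribute("max_memory_MB", 256.0f);  // bound on resident tiles

    ustring filename("earth_color_8k.tx");   // example file

    // Request a 64x64 region of mip level 2 (1/4 resolution in each axis),
    // channels 0-2 only. The rest of the texture is never read from disk.
    const int subimage = 0, miplevel = 2;
    float pixels[64 * 64 * 3];
    bool ok = ic->get_pixels(filename, subimage, miplevel,
                             /*xbegin=*/0, /*xend=*/64,
                             /*ybegin=*/0, /*yend=*/64,
                             /*zbegin=*/0, /*zend=*/1,
                             /*chbegin=*/0, /*chend=*/3,
                             TypeDesc::FLOAT, pixels);
    return ok ? 0 : 1;
}
```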
Well… from what I know, V-Ray CPU loads the whole texture set at full resolution by default if you don’t convert to the .tx file format and don’t use the VRayBitmap texture ( https://docs.chaos.com/display/VMAX/VRayBitmap )
Then, if you use V-Ray GPU, you additionally have the mipmap options I posted before.
In any case it seems like something that could help… since Cycles has a GPU version (and I think it’s the most used one), saving memory is always useful for adding more textures and more detailed objects to the scene.
Is this needed for more photorealistic images? No; as others have shown, we can make them without this feature too. But if it were available, it would help everyone who hits the VRAM limit and can’t even press the render button. In that case you have to spend your time fixing texture resolutions rather than tweaking the image for better photorealism.
The last time I used V-Ray there was a tool named img2tiledexr that basically did what the maketx tool from OIIO does. TX files are just renamed TIFF or EXR files anyway and both support tiling and mipmapping.
And I don’t know how V-Ray reads textures nowadays but back then it also only read the needed tiles at the needed mip-level and saved lots of memory (and loaded faster and needed fewer anti-aliasing samples because the mipmaps were prefiltered).
By the way Redshift also converts textures to tiled / mipmapped versions automatically and keeps them in a separate cache, not like .tx files next to their sources.
P.S.: I see that tool still comes bundled with V-Ray.
Just in case someone is as confused as I was:
Mipmapping requires more memory, because all the size variants are usually loaded at once.
However, this is about On-Demand Mipmapping, which is a different technique with different pros and cons. On-Demand Mipmapping does not load all the differently sized textures into VRAM at once, but only the needed ones.
Of course, if you load a mipmapped texture into RAM it will use more RAM (because it contains the original plus all the mip-levels).
But if you use an offline renderer with a proper texture cache you only keep the tiles at the mip-level (= resolution) you actually need in RAM.
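To put rough numbers on that distinction, here is a back-of-the-envelope sketch; the 21600×10800 size and 4 bytes per pixel are just assumptions, picked to resemble the Blue Marble example below:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    // Hypothetical 21600x10800 texture at 4 bytes per pixel.
    double w = 21600, h = 10800, bpp = 4;
    double level0 = w * h * bpp;

    // Memory for the full mip chain (each level halves both dimensions).
    double chain = 0;
    while (true) {
        chain += w * h * bpp;
        if (w <= 1 && h <= 1)
            break;
        w = std::max(1.0, std::floor(w / 2));
        h = std::max(1.0, std::floor(h / 2));
    }

    const double MiB = 1024.0 * 1024.0;
    std::printf("level 0 only   : %7.1f MiB\n", level0 / MiB);                   // ~890 MiB
    std::printf("full mip chain : %7.1f MiB (~4/3 of level 0)\n", chain / MiB);

    // What an on-demand system might actually keep resident, e.g. only
    // mip level 4 (1/16 of the resolution in each axis):
    double level4 = (21600.0 / 16) * (10800.0 / 16) * bpp;
    std::printf("mip level 4    : %7.1f MiB\n", level4 / MiB);                   // ~3.5 MiB
    return 0;
}
```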
I just quickly recreated my simple “Earth with NASA textures” scene because I couldn’t find the old original. It’s two simple spheres with four 21.6k textures from this site https://visibleearth.nasa.gov/collection/1484/blue-marble applied to different properties. The inner sphere is the surface of the Earth, the outer sphere contains some clouds. It’s not about realism but just about the sheer amount of RAM it takes to render it. I know it looks shitty; I just used it as something to apply some large textures to.
I used maketx to convert the textures to .tx files. In a renderer with a proper texture cache this comparison would be much more drastic, but in the latest main of Blender (4.2 Alpha, so to speak) I get the following:
Ignore the render times; OSL is super slow and it’s CPU, while the other is GPU. But apart from that, 2808.65 MB vs. 33.76 MB is kind of a statement. And this is just current Cycles, not a renderer with a proper texture cache system. I don’t even know which features OSL on CPU is currently using.
My point was simply that mipmapping is not the same as on demand mipmapping. The original statement by @Secrop that mipmapping does not save memory is 100% correct.
It was my fault to confuse the two as this is the first time I am aware of this distinction.
I stand corrected, but this doesn’t change the actual problem.
I’ll just casually move the goalpost and say what we actually need is on demand mipmapping.
Oh yes!
In situations like this one (like with the Blue Marble textures), both tiling (on-demand loading) and mipmapping are an advantage:
A large part of the huge texture is not used at all, and in the part that is used, pixels are sampled so sparsely that each fetch operation could probably deal with just one at a time!
This is really fascinating to me. I don’t quite know how the Blender camera does what it does, but I do wonder how large the impact of the Camera object itself is on the final render, and whether other render engines use a different design for the camera itself.
If you like seeing lens simulations, you should watch this video if you haven’t. It’s a simulation of a physical camera in Blender to a crazy extent.
I also wonder how much of an effect this has; I’ve heard a lot about the Octane camera and how it integrates a lot of the post-processing effects in a more grounded way.
A personal pet peeve of mine is overdone chromatic aberration; it is a plague among newer/mid-level artists. Once they start playing with the compositor nodes, they really want it to be known that they were playing with the compositor nodes, so the dispersion gets cranked up to 11 and it looks like trash.
It’s the post-process equivalent of glowing red ears with SSS; subtlety is a lesson learned later in the creative process, I guess.
If there were a professionally designed post-processing suite and camera settings, people’s renders would be much less likely to suffer from the same problems.
I’m wondering if having a reasonable amount of DOF would help soften the details in the distance in a more plausible way. I see a lot of people just disable DOF unless they need it for a close-up macro shot, but real lenses have real depth of field, even at an f/64 aperture.