Gamma correct textures?

My texture library is mostly 8bpc and sRGB.

When I’m using Blender Filmic the textures look dull, so I use a Gamma node to correct the color textures. I usually work with the “Base Contrast” look in Blender.

The results generally look good, but is this a recommended workflow? Does anyone else work this way?

I’ve considered converting all my textures to a linear workspace; however, none of my experiments compare well to the gamma node workflow, so I’m probably not converting them correctly. I could use some advice on this.

I have a feeling I’m doing something wrong; can anyone give some advice?

Any image texture set to “Color” will be automatically converted to linear color space.

Because gamma is a godawful term that relates strictly to displays. It also won’t help you get data into the scene referred domain.

If you have an “sRGB” photograph from the net or your camera, the encoded values cannot be recovered to the scene referred domain. That is, the camera applies a unique bending of the sensor values to make the image look “aesthetically acceptable” when viewed.

The sad truth of the matter is that there is a huge disconnect between the scene that your camera sees and the resulting encoded values. The intensity mapping, or in colour science terms, transfer function, has been lost along the way.

So while most folks slowly learn that they have to linearize their work, and rightfully so, they are only given half of the problem. If you take the inverse of the sRGB OETF (Opto-Electronic Transfer Function) and apply it to a photograph, it will not result in scene referred values. Further, no amount of exponent functions will take a normalized range from 0.0 to 1.0 and expand it to the scene referred 0.0 to infinity range. Try it! Even if exponents could do that, the result would be incorrect because we don’t know the nuances of the parametric curve that shaped the data, nor the scene ranges, in the first place! This is where the millions of log encodings come into play. If you understand the random rubbish I just explained, you will probably have an epiphany moment as to why log encodings are important: they permit us to take relatively evenly distributed bit depth encodings and transform them back to correct scene referred ratios.
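To make that concrete, here is a minimal numeric sketch in Python/NumPy (my own illustration, not Blender’s internals; the log shaper is a made-up stand-in, not any real camera or vendor encoding): the inverse sRGB transfer function can never hand back anything above 1.0, while a log decode recovers a much wider scene range from the same 0.0–1.0 encoded values.

```python
import numpy as np

def srgb_inverse_oetf(v):
    """Decode nonlinear sRGB values to display linear (the piecewise sRGB function)."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def toy_log_decode(v, stops=10.0, middle_grey=0.18):
    """Illustrative log2 shaper decode mapping encoded [0, 1] back to scene linear.
    NOT a real camera log curve -- it only shows how a log encoding can cover a
    wide scene range that a display transfer function cannot."""
    v = np.asarray(v, dtype=np.float64)
    return middle_grey * 2.0 ** (stops * (v - 0.5))

encoded = np.array([0.0, 0.18, 0.5, 1.0])
print(srgb_inverse_oetf(encoded))  # tops out at 1.0 -> display linear only
print(toy_log_decode(encoded))     # encoded 1.0 decodes to 5.76 -> scene linear range
```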

Hence what you are seeing are sRGB display referred values being transformed into display linear values. This is functionally useless for scene referred work sadly.

There are ways to get properly encoded photography into a scene referred workflow, and I can offer a bit of advice here if required. HDRIs, log capable cameras, or raw photos are one path to a potential solution.

Sadly, this doesn’t help you with crap sRGB display referred photography that has been bent and mangled beyond comprehension. This has always been the way Cycles has worked, and it has always been an incorrect workflow, but unfortunately most folks weren’t even aware of the scene referred native model prior to Filmic.

The inverse sRGB OETF will only deliver display linear values that max out at 1.0, which is why your background plates would look “dull”: the intensely illuminated elements that live at sRGB’s encoded value of 1.0 are being mapped to the scene referred value of 1.0, which is not correct. Even if the transfer function went further, the results would still be garbage because we are dealing with the unknown curves baked into the camera and potentially the post processing of a given image.

If you would like, I can step you through generating properly encoded photography for use in raytracing or CGI assuming you have a DSLR available, or access to some raw photography files.

With respect,
TJS

Your explanation is very complicated to me; I don’t fully understand it.

Does this mean I can basically throw away my existing texture library? I do have a DSLR camera, but I’m not much of a photographer and I have never used my camera for textures. Many of my textures I created in Photoshop from scratch; some come from texture CDs I bought or that came with magazines. Others I simply don’t know the origin of.

Is it possible to create a texture in Photoshop from scratch that does have the scene referred range needed?
Does this mean all textures in a PBR workflow need to be 16bpc or 32bpc, even for the color input of a (Principled) shader?

I will start by saying that I am not the one to speak to this authoritatively. I am sure other people can follow up with greater detail as required.

For emission back plates, the images are essentially broken.

As textures, they require more fumbling and jumbling, and the result will never be quite decent.

An easy way to spot this is to try Filmic with various surfaces using the diffuse reflectance (albedo) values. Frequently folks find that their surfaces are reflecting back far too much light, and their textures end up blowing out.

This is because albedo needs to be roughly close to physically plausible values when hit with physically plausible light levels. If you crawl this forum, you will find more than a few discussions on the matter.

When we place a camera in front of an object to photograph it for a texture, we want as close to a purely linear recorded response as we can get, because in a raytracer those albedo values are very sensitive. However, because many photographs aren’t carefully handled, the result is an aesthetic response designed to be looked at, not used as data. That aesthetic response is unavoidable unless we use the raw file the camera can record; aesthetic decisions are baked into the hardware and software of every single camera you use, and those decisions are baked into every JPEG you save from them.

So while you could use any old JPEG or random photograph you find out in the real world, it isn’t going to be able to easily deliver the encoded values you are actually needing, despite it looking like an interesting photograph. Same applies for the countless people over the years that tried to massage in a display referred background emission plate, or any other workflow issue.

There are a number of issues that can creep in when using a low bit depth pipeline, not the least of which is the way alpha is stored within Blender. Your life would be significantly smoother if you stick to EXRs, for example, rather than dealing with the myriad of problems that crop up as a result of bit depth quantisation issues and so on.

Getting back to point, a typical still photo from a DSLR will deliver 12-14 bit depth, and can be encoded reasonably correctly for texture use if you have the raw file. Once encoded, it can be saved into a half float EXR and end up being a great asset to have on hand.
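As a rough sketch of that raw-to-EXR step, assuming the rawpy and imageio Python packages are installed (with an EXR-capable backend such as the FreeImage plugin) and with hypothetical file names:

```python
import numpy as np
import rawpy                 # raw decoding (assumed installed)
import imageio.v3 as iio     # needs an EXR-capable backend, e.g. the FreeImage plugin

# Hypothetical input path for a raw still from a DSLR.
with rawpy.imread("texture_shot.NEF") as raw:
    # Demosaic without the camera's aesthetic curve:
    # gamma=(1, 1) keeps the output linear, no_auto_bright avoids hidden exposure tweaks.
    rgb16 = raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                            use_camera_wb=True, output_bps=16)

# Normalise the 16-bit integers to floating point linear ratios.
linear = rgb16.astype(np.float32) / 65535.0

# Store as EXR so the linear data survives without 8-bit quantisation.
# (A half-float EXR, as suggested above, is half the size; some writers accept
# np.float16 directly, otherwise convert with a dedicated EXR tool.)
iio.imwrite("texture_linear.exr", linear)
```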

Starting with incorrect assets will simply make your job much harder, gobbling up more time, or ultimately ending up as one more element in the “That feels odd” pile.

A good example of how a workflow / process change can benefit you can be had by looking at how a decent view transform impacts your work. How many countless hours have been wasted trying to massage work to fit into the Default view transform? With a stable view transform, “magic” happens with no additional effort; one can drop a CGI model into a scene referred HDRI and have a seamless integration.

Ultimately what you choose to do is up to you; some folks are content ramming away under the Default view transform using low bit depth nonlinear assets, and have fun making content. Good on them! For those that are able to see the issues though, a change in process can be of tremendous benefit, saving time and significantly elevating the work along the way.

I haven’t photographed my own textures for a while, certainly not since the arrival of PBR. I believe that for an albedo texture you would have to somehow remove all the lighting and specular reflections (rough or smooth) and then (regardless of how much light was used when shooting it - sunlight or candle or flash) remove any color cast and scale it down to approximately the albedo reference value found in a PBR cheat sheet (example). If shooting many textures in the same lighting conditions, consider manual exposure; then the same correction could be applied to all images.
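A minimal sketch of that idea, assuming a neutral grey reference patch was shot under the same lighting; the 0.18 target and the function name are my own illustrative choices, not values from any particular cheat sheet:

```python
import numpy as np

def normalise_to_albedo(linear_rgb, grey_patch_mean, target_albedo=0.18):
    """Divide out the colour cast measured from a neutral grey patch in the shot,
    then scale so that patch lands on a chosen reference albedo (0.18 assumed here).

    linear_rgb      -- float array, already linearised (e.g. decoded from a raw file)
    grey_patch_mean -- mean RGB of the neutral reference patch in the same shot
    """
    grey = np.asarray(grey_patch_mean, dtype=np.float64)
    balanced = linear_rgb / grey       # remove the colour cast
    return balanced * target_albedo    # put the grey patch at the reference value
```

If many textures were shot with the same manual exposure and lighting, the same grey_patch_mean can be reused for the whole batch, exactly as suggested above.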

Keeping the 12-14 bit original EXRs around is probably a good idea. But I tend to just use a copy of them as 8-bit for actual texturing work unless I have special reasons to use higher (e.g. gradients in actual bump maps, if I prefer those over normal maps in that instance). For me, texture memory is key and I simply can’t afford to work in higher bit depths.

@OP: As for existing textures, it’s probably best to keep them around. I tend to just get them into the expected reference albedo range and ignore the fact that they contain lighting information. At least the really old ones won’t have any occlusion maps with them anyway. The Principled shader doesn’t have an occlusion map input (you have to do it manually outside the shader - a source of error) or parallax mapping, and even the coating shader seems completely broken for me with respect to roughness - going from 0.0177 to 0.0178 causes a massive change, making coating scratches impossible to texture.

Other than some additional controls, the major difference from principled/PBR to old style material creation is that fresnel is decreased automatically by roughness, and that roughness is squared for more sensible slider control. On your end, you need to decide if you want to decrease roughness toward the edges. At work we have access to “lambertian diffuse surfaces”, but they still exhibit sharp specular reflection at extreme glancing angles (it gets noticeable if you put your eye pretty much on the surface, so it’s still a very good lambertian surface - just not perfect). Roughness/glossiness maps also need to be scaled to the new value if they were designed to work with the default Cycles roughness.
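For that last point about rescaling existing maps, here is a tiny sketch of the remap implied above, taking the poster’s description at face value (if the new shader squares its roughness input while the old map was authored to be used directly, feeding it the square root reproduces the original width):

```python
import numpy as np

def remap_roughness_for_squared_input(old_roughness_map):
    """Sketch of the rescale described above: (sqrt(old))**2 == old, so the
    squared-input shader ends up with the same effective roughness the map was
    authored for.  This mirrors the poster's description, not a verified claim
    about any specific renderer's internals."""
    return np.sqrt(np.clip(old_roughness_map, 0.0, 1.0))
```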

With textures for channels like albedo/basecolor, you don’t strictly NEED scene referred data unless you are trying to perfectly match an existing surface, e.g. a scan. If you just want, say, a cool looking rock, just fudge it. You’ll need to convert the texture to display linear, and you’ll want to pay attention to what values are actually being encoded - you do not want 0 or 1 as an output value, as nothing (not even Vantablack) is 100% or 0% reflective. If you’re faking things, you can fix this with a levels operation. Beyond that, just eyeball it. Not only does it work well enough, it also uses half the memory of a 16-bit EXR, and textures are usually enough of a memory hog as it is. The 256 steps between 0 and 1 that you get from an 8-bit image are fine for albedo maps; you just need to be careful to use them properly.
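As a sketch of that levels operation (the 0.02 and 0.9 bounds are illustrative guesses, not authoritative reference values):

```python
import numpy as np

def levels_to_plausible_albedo(display_linear, lo=0.02, hi=0.9):
    """Simple levels remap that keeps albedo away from 0.0 and 1.0, applied after
    the texture has been decoded to display linear (see the sRGB decode sketched
    earlier in the thread)."""
    display_linear = np.clip(display_linear, 0.0, 1.0)
    return lo + display_linear * (hi - lo)
```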

For emissive maps and backplates (backplate cards, environment maps/HDRIs, etc.), yeah, you pretty much do need direct scene referred imagery as troy_s explained. The reason it’s possible to fudge it for basecolor/albedo maps is that these are also a 0-1 range, so you can just massage the display referred data into the right range, de-gamma it, and it works well enough. As noted though, this will not work if you need to precisely reproduce a surface. For backplates, you’re essentially always trying to “precisely reproduce a surface”. You need to duplicate the light that should have been reflected/emitted by the objects represented by the plate, and once you convert the photo to display referred, you’ve thrown that information away.

NO! The lack of an occlusion input on Cycles’ Principled BSDF shader is no accident, and for heaven’s sake do not try to “fix” this by multiplying your AO and basecolor maps together. “Ambient occlusion” is occlusion for ambient lamps; hence the name. Cycles does not use ambient lamps, thus it does not need occlusion for the ambient lamps it does not have. All the light interactions that ambient lamps and AO are supposed to simulate are calculated “for real” in a path tracer. In summary:

  1. Do not use occlusion maps in Cycles, Redshift, Mantra, Vray, Arnold, or any other renderer like that. At all, ever. They’re for rendering techniques that are really only used in realtime rendering these days.

  2. Even in renderers that need them, never, ever, EVER apply an occlusion map by multiplying it over the color/albedo map. Plug it into the occlusion input on the shader. If the shader does not have one, that’s probably because it does not need one.

And, hey … let’s just be sure we all know what ‘gamma’ actually is(!) :slight_smile:

“Gamma” reflects the fundamental physical fact that, if you sent a “digitally continuous (therefore, of course, perfect™) gradient” to any sort of physical video display device at all … or film … or print … in all cases it would not appear to be “perfectly™ smooth.” Instead, it would be perturbed by some kind of transformation that is peculiar to the technology in question.

  • But, this would be a “well-known” (per-situation …) perturbation – a “gamma curve.”

Data-capture devices, including film and digital cameras, are the same way. (Less so if you use “raw,” but even a digital sensor is still biased in a known way.)

Therefore, when you receive data from any sort of physical device – or, from any sort of “image(!)-format file” – you must remove the effect of gamma (as best you can …) in order to produce digitally normalized data that can now be meaningfully applied to data from any other similarly-normalized data-source … such as “CG rendering.”

… understanding that “data has been lost by the input device,” and so that the gamma-mapping can now only ever be approximate … because the device’s perception of the input is, alas, non-“linear.”

Then, as the very-last per-device(!) step (after transforming and re-transforming your data in “a blissfully linear, blissfully all-digital world”), you must apply an appropriate gamma function to the data that is finally going to be directed to each particular (alas, “physical” again) output device.
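For reference, that final per-device step for an sRGB display is the forward sRGB encoding; a minimal Python/NumPy sketch:

```python
import numpy as np

def srgb_oetf(display_linear):
    """Forward sRGB encoding: the very last, per-display step described above.
    Applied only when finished display linear data is sent to an sRGB display or
    written to an 8-bit file, never to data that will be rendered or composited further."""
    v = np.clip(np.asarray(display_linear, dtype=np.float64), 0.0, 1.0)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1.0 / 2.4) - 0.055)
```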

I’m aware of the pitfalls, hence “source of error”. I haven’t multiplied with diffuse/albedo in a long while, and when I did I was fully aware of it; I just didn’t care, because it improved the result in the available time. Whenever I use AO/cavity maps now it’s usually to dim reflections of bright metals. Some use them to dim down bounced light as a kind of “ambient light”. And surely using AO maps to affect the World AO would be considered legal? An AO input and a dropdown list of uses would be nice, especially if it could access shader results down the line that are unavailable to us in nodes. Warnings about being wrong could be given as a tooltip.

“For real” is just too time consuming. I’m faking it all the time (does anybody use caustics for light transport?), and “optimizing” to get speedy renders and re-renders if required. It’s okay to break the rules as long as you know you’re breaking the rules. Unfortunately, traffic police over here doesn’t agree :stuck_out_tongue:

It would, but Cycles doesn’t support using occlusion maps for world AO; it just recalculates everything on its own each render.

Hell no! :evilgrin: