[SOLVED] Bugged lighting on some faces (NOT "FLIPPED NORMALS" PROBLEM)

I’ve got this weird lighting bug on some faces.
After some troubleshooting I found that disabling my normal map texture fixes it, so I thought the problem was in the texture.

But the bugged faces are no different from the working ones here.

The next obvious thing would be flipped normals, but no, they’re all fine.

Googling this showed me countless tutorials on how to fix flipped normals, so that didn’t help at all. I hope someone here could tell me what’s wrong with these faces :upside_down_face:.

Since I can only have one embedded file, I’ll list all the screenshots here:

In your material, did you set the normal map’s color space to “Non-Color”?

That first image looks more like a clipping issue, where the object is too close and the clip start is set too far.
Did you make sure to set your normal map to Non-Color Data?

Setting the normal map to Non-Color really did solve this. Not sure how this works, but thanks everyone!
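
If you ever need to apply the same fix from a script instead of clicking through the node editor, here’s a minimal sketch using Blender’s Python API (assuming the active object’s material uses nodes and the normal map is an Image Texture node feeding a Normal Map node; adjust for your own setup):

```python
import bpy

# Walk the active object's material and set any image that feeds
# a Normal Map node to the "Non-Color" color space.
mat = bpy.context.object.active_material
for node in mat.node_tree.nodes:
    if node.type == 'TEX_IMAGE' and node.image is not None:
        feeds_normal_map = any(link.to_node.type == 'NORMAL_MAP'
                               for out in node.outputs for link in out.links)
        if feeds_normal_map:
            node.image.colorspace_settings.name = 'Non-Color'
            print(f"Set {node.image.name} to Non-Color")
```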

Short answer:

When you select Color Data, Blender will gamma-correct your image (based on the image encoding). When you select Non-Color Data, Blender will not alter the RGB values of the image, leaving them on a linear scale.
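
To make that concrete, here is the textbook sRGB transfer function as a small sketch (Blender’s actual conversion goes through OpenColorIO, so treat this as an illustration of what “gamma correcting” means, not Blender’s internal code):

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel (0..1) into linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear channel (0..1) back to sRGB for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# With "Color Data", a texture value of 0.5 gets decoded to ~0.214 before
# the render engine sees it; with "Non-Color Data" it stays exactly 0.5.
print(srgb_to_linear(0.5))  # ~0.214
```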

To add to what @joseph said:

Non-Color Data is what you should always use for normal maps.
The reason is tied to how images are displayed on your monitor.

Blender can’t interpret color images (like every photo/texture we see on the internet) as they are.
When these images are used as textures (plugged into a color input), a conversion is made at render time so the render engine can interpret them correctly.

Because a normal map is really information about how the surface should react to light, its colors are meaningless to us, and Blender should interpret them as they are, without any color transformation/conversion.
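
As a small illustration (not Blender’s internal code): a tangent-space normal map packs a direction vector into RGB, so the raw pixel values get unpacked back into vectors rather than being treated as colors:

```python
def decode_tangent_normal(r, g, b):
    """Unpack an RGB pixel (0..1) into a tangent-space normal vector (-1..1)."""
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)

# The typical "flat" normal map color (0.5, 0.5, 1.0) decodes to (0, 0, 1),
# a normal pointing straight out of the surface. If the map is wrongly
# decoded as sRGB first, 0.5 becomes ~0.214 and the normals get skewed,
# which is exactly the kind of shading bug in this thread.
print(decode_tangent_normal(0.5, 0.5, 1.0))
```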

If you really want to know more about it and really understand all this, you should look into color management. That’s a big topic that’s hard to grasp at first, but quite important nonetheless…

Or you can just use Non-Color Data on normal maps, and learn about all that later!

This is an excellent response, OP should refer to this instead of my extremely lazy one I copied from Google haha

:smiley: you’ve made the short answer, I did the “a bit longer” one, and now let the OP discover the real one at the end of the rabbit hole … :smiley:

Thanks, I’m going to do just that!

To give a bit of insight before you start looking into all this, here are a few things to consider:

Historically, the calculations made on images (2D and 3D) in every CG application we used (Photoshop, After Effects, Blender, Mental Ray, V-Ray…) were done wrong until around 2010, when the notion of a “linear workflow” started to appear on the CG scene.
That didn’t prevent people from making good images, because we could still make things look good even without using the tools properly.

If you are old enough, you may remember that at some point in time the same image didn’t look the same on a PC as on a Mac, and that was a side effect of all this mess.

The main issue is that colors in images like JPEGs and PNGs are saved so that they look good on screen, but that is different from how light behaves and from how calculations should be done on these images. Software simply didn’t care about that at the time.

To do the calculation right (anything from a blur filter to a 3D render: nearly everything that involves color and light), the image usually needs to be converted from sRGB to linear, the calculation happens, and the result is converted back to match your display, which is generally sRGB.
Now every piece of software has more or less integrated a linear workflow, so you don’t need to worry too much about it. But it’s important to understand the history behind all this to get a better idea of how it works and why it is the way it is.
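
A tiny sketch of why that round trip matters (a generic illustration, not any particular engine’s code): averaging the stored sRGB values directly gives a different result than averaging the actual light and re-encoding it for display:

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Pre-linear-workflow habit: average the encoded values as if they were light.
naive = (black + white) / 2                     # 0.5

# Linear workflow: decode to light, do the math, re-encode for the display.
proper = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

print(naive, round(proper, 3))                  # 0.5 vs ~0.735
```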

The main issue the OP had is just about that. Color images like textures need to be converted from sRGB to linear to get a proper 3D render. But normal maps, render passes like Z-depth, and other images that store data rather than color should be treated differently. We use colors to store that data, but in the end they don’t describe colors. A Z-depth pass, for instance, stores for each pixel the distance from the camera; it has nothing to do with color or light.

Nowadays the way we handle color in images has become a bit more complex and refined than it was around 2010; the challenge we have now is to manage colors correctly between different sources and displays.
If you want to show an image from your computer on a cinema projector, other problems appear: because the projector can display brighter images, with more color depth, than our regular 8-bit computer screens, a conversion needs to be done between the two.

It gets even more complicated when you film something with a camera, add CG on top of it, and display everything on a cinema screen. Now that the linear workflow is integrated into software, it’s time to integrate “color management”, which deals with converting images from different sources to different displays, so that everything speaks the same language, so to speak.

Even if we don’t work for cinema, how do we make sure that the 8-bit images we make on our regular monitors are displayed correctly on recent UHD monitors that can show brighter images, with a broader range of colors?

Another issue is that nature, and render engines, can handle light intensities far higher than our monitors can display, even UHD ones. So again some conversion is needed so that the 3D calculation stays accurate and the result looks good on screen. And if we need to do more work on these images, we need to reverse the process, similarly to what the linear workflow does. That’s what Filmic does, and it’s called tone mapping.
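
To give a feel for what tone mapping does (a generic sketch using the classic Reinhard operator; Blender’s Filmic is a more elaborate OpenColorIO transform, not this exact formula):

```python
def reinhard(x):
    """Squeeze an unbounded linear light value (0..inf) into the 0..1 display range."""
    return x / (1.0 + x)

for intensity in (0.18, 1.0, 4.0, 16.0):   # scene light can go far above 1.0
    print(intensity, "->", round(reinhard(intensity), 3))
# 0.18 -> 0.153, 1.0 -> 0.5, 4.0 -> 0.8, 16.0 -> 0.941
# Bright values compress smoothly toward 1.0 instead of clipping harshly.
```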

What I want to show is that color management is complicated to understand because a lot of subjects are interconnected; there is a lot of information to unpack here, so take your time and digest it bit by bit. In the end it’s valuable knowledge for CG artists.

I’m not the best person to talk about all this; there are probably oversimplifications here.
Generally, you don’t have to worry too much because the software will try to do the right thing for you. But of course, having a better understanding of how color works, in real life and on a computer, will give you a better idea of what the software is doing and what all those buttons are for, and you’ll end up making fewer mistakes and better images.

Here are some links on the subject:
This article is quite old; it talks about how to use a linear workflow in your render engine and also explains why it’s important. With Blender in 2022 you probably don’t need to do anything, but it can help you start to understand part of the issue:

This one is a bit longer but explains things a bit better:

This one is a really good read that takes time to understand, but the point is to get the big picture that we miss when we only look for a short answer; it’s more about color science and color management:
