I’ve thought a lot about this and searched for answers on the internet, but nobody seems to really know.
Ok, when I started learning texturing, I intuitively presumed that if I could just use a diffuse map that is like a person’s real photograph (without specular reflection), it would be the most photorealistic diffuse map possible. Here is an example for illustration:
This is the diffuse map of a scanned person’s head, and it looks pretty much real in a good material setup.
So how is it that this diffuse map can look so real when applied to a character, even despite its huge lack of detail (by comparison), while the other map, even though it looks perfectly photorealistic (after all, it’s a real photo; it couldn’t get more real than that), looks awful if you use it as your diffuse map?
This is really a mystery to me, maybe because I’m new to CG. But I know many of you have a lot of experience working with CG. So, what do you think? What’s the real technical explanation for something like this? Thank you.
Because the diffuse map you showed has a lot of lighting information baked into it, which is really bad for high-quality models, because you obviously want to set up the lighting in your 3D scene. If you take a photo on a sunny day with harsh shadows, for example, those shadows and highlights will always be visible on your model, no matter whether your 3D scene is a day or night scene.
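To make the “baked lighting” problem concrete, here’s a tiny plain-Python sketch (illustrative numbers only, not Blender code): a renderer multiplies the texture color by the scene light, so a shadow that’s already in the map gets darkened a second time.

```python
def shade(texture_color, scene_light):
    """Very simplified diffuse shading: texture color times incoming light."""
    return texture_color * scene_light

albedo = 0.8             # true surface color, no lighting baked in
baked_shadow = 0.3       # sunny-day shadow that was captured in the photo
bad_diffuse = albedo * baked_shadow  # "photo" texture with lighting baked in

night_light = 0.2        # dim lighting of the 3D scene

clean = shade(albedo, night_light)        # shadow comes only from the scene
double = shade(bad_diffuse, night_light)  # photo shadow is darkened AGAIN

print(clean, double)  # the baked map ends up shadowed twice, far too dark
```

The numbers are made up, but the structure is the real issue: the photo’s shadow is multiplied on top of whatever shadows the renderer computes, so it can never match a different lighting setup.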
The second image, however, is a so-called albedo map, which holds only the color information of the object. When creating textures, the goal is always to include only the information that actually belongs to the object itself. Because light can totally change its appearance, you want to exclude lighting from the textures as much as possible.
The next crucial point is that the color map is not the only texture used in a photorealistic setup. In addition to the color map, you also want a roughness or specular map to define the shininess of the surfaces, as well as textures that hold information about the microstructure, such as skin pores; for that we have bump and normal maps. For larger, geometric displacement you can use displacement maps, also known as height maps.
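As a rough sketch of how those separate maps come together per pixel, here is a simplified Lambert + Blinn-Phong model in plain Python (not Blender’s actual shader; the roughness-to-exponent mapping is just an assumed approximation): the albedo map drives the diffuse color, the roughness map drives the highlight shape, and the normal map supplies the shading normal.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade_pixel(albedo, roughness, normal, light_dir, view_dir, light=1.0):
    n = normalize(normal)      # value sampled from the normal map
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector

    diffuse = albedo * max(0.0, dot(n, l)) * light     # albedo map
    exponent = 2.0 / max(roughness ** 2, 1e-4)         # roughness map (assumed mapping)
    specular = (max(0.0, dot(n, h)) ** exponent) * light
    return diffuse + specular

# Same geometry and light, different roughness values from the roughness map.
# Away from the mirror direction, the glossy lobe has already fallen off
# while the broad rough lobe still contributes:
shiny = shade_pixel(0.5, 0.1, (0, 0, 1), (0, 0.5, 1), (0.3, -0.5, 1))
rough = shade_pixel(0.5, 0.8, (0, 0, 1), (0, 0.5, 1), (0.3, -0.5, 1))
```

The point of the sketch is only that each map feeds a different term of the shading math, which is why lighting must stay out of the color texture.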
So basically you want to give Blender as much information about the actual object as possible, without taking external influences into account. Blender then takes that, adds your 3D lighting information, and calculates a photorealistic result.
The picture is like a final render with all the lighting information baked in: shadows, SSS, reflections, etc. It could be used in 3D and look good if you plugged it into an emission node, but then nothing can change, not the shadows, not the reflections, nothing, for it to keep looking good. If the lighting in the rest of the scene is different, it will look off. If the surface had a non-rough reflection, it would look off as soon as the angle changed. Even rough reflections change as the angle changes.
Another way to think about it: take a clear cup in real life. If you look at it from a single angle that never changes, a picture could be taken of it and it looks great. As soon as you move your head, though, the cup shows a ton of changing colors as light refracts through it, and the result is completely different depending on the angle. In 3D we can model the geometry of the cup and accurately calculate how light bends through it. It’s pretty amazing that this can be done, when you think about it.
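That angle dependence is exactly what Snell’s law describes, and it’s what a renderer evaluates per ray but a flat photo can only freeze at one viewpoint. A minimal sketch (assuming an air-to-glass interface with an IOR of 1.5):

```python
import math

def refract_angle(incident_deg, n1=1.0, n2=1.5):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns the refracted
    angle in degrees for light entering the denser medium (e.g. glass)."""
    theta1 = math.radians(incident_deg)
    return math.degrees(math.asin(n1 / n2 * math.sin(theta1)))

# Every viewing angle bends the light differently, so the cup looks
# different from every position of your head:
print(refract_angle(30.0), refract_angle(60.0))
```

A single photo captures only one of these outcomes; the 3D scene recomputes it for every camera position.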
So we separate the maps to cover all the different physical properties of the surface. Again, it’s amazing that we can describe the look of the physical world in such a small number of pictures. We can say this part reflects, with a reflection map. This part scatters light and re-emits it, with an SSS map. This part is rough, with a roughness map. That is how we get all the different maps needed to make a 3D object look good from every angle, and as it moves and changes with animation.
The flat first picture also has the data of the 3D geometry crunched or missing in places. Let’s say she looks to the right, for example, and there are veins in that part of the eye: your first picture has none of that information. What is the texture under the hairs of the eyelashes, the underside of the nose, the inner part of the lips? I do see things like that in some architectural visualizations: a low-poly person with an emission texture. It always looks a little out of place, but from far away it can be good enough.
Thank you eaNiiX. Your explanation really cleared things up. Now that you’ve pointed it out, I can see that the first image has lighting information in it when I compare it to the last one.
I’ll have to train my eyes to recognize it. To me, information like light, specular and shadows in an image would look something like this (image below), which is why I thought the first image was fine:
Thank you watercycles, very interesting explanation. It gave me better insight into this subject. The concept is very clear; I think I just have to think about it more and train my eyes to recognize those aspects (shadows, light, reflections, SSS, etc.) more accurately. It’s a little confusing to grasp in practice.