How did you install Blender 4.0?
What you are describing sounds rather suspicious and doesn’t make a lot of sense to me, unless Blender was constantly running in the background.
A lot of modern web browsers are memory hogs, and that comes on top of Blender possibly using most of your RAM to display and/or render your scene (likely if the scene makes heavy use of nodes).
When you run low on RAM or run out entirely, you get unstable behavior in applications; it is not Blender’s fault.
Thanks for the feedback, guys. My apologies; slight foot in my mouth, as the browser out-of-memory issue was not being caused by Blender 4.0. It was actually a conflict with NordVPN. I’ve deleted my initial post since it had nothing to do with Blender 4.0, thank goodness. Again, my apologies.
The color input for subsurface scattering simply mixed the base color and the subsurface color, depending on the subsurface weight.
You should be able to recreate this quite easily in 4.x. Mix the subsurface and base colors using the subsurface weight as the factor, and plug the result into the base color. https://youtube.com/clip/UgkxbnVkKMl2UNz8KdzLAP4Z2Ll2QBMXZTFu?si=JTZZ3892nCOTHJzm
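For the curious, the old behaviour (and the 4.x recreation above) boils down to a per-channel linear interpolation. Here is a minimal sketch in plain Python; the function name and the sample colors are purely illustrative, not Blender API:

```python
# Sketch of what the removed Subsurface Color input effectively did:
# the shader mixed Base Color and Subsurface Color, using the
# subsurface weight as the mix factor (a per-channel lerp).

def old_sss_color(base, sss, weight):
    """Linear interpolation: weight 0 -> base color, weight 1 -> sss color."""
    return tuple(b + (s - b) * weight for b, s in zip(base, sss))

# Skin-like base, redder subsurface tint, weight 0.5:
base = (0.8, 0.6, 0.5)
sss = (0.9, 0.2, 0.2)
print(old_sss_color(base, sss, 0.5))  # roughly (0.85, 0.4, 0.35)
```

So in 4.x, feeding a Mix Color node (base color, subsurface color, factor = subsurface weight) into Base Color reproduces the old look, which is why the input could be removed without losing any capability.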
Personally, I think that not having that option makes it clearer what is actually happening. I was always confused as to what this color actually did, until I saw Lukas’ talk (the previously linked clip). In my view, it is a better default, because it is less confusing.
Well, frankly, I never found that separation useful. Say I have a skin shader, or a tomato: what would your albedo and subsurface colors be, how would they differ, and why?
what do you mean “I have a skin shader, or a tomato”?
Do you mean like, what is the difference?
In case it is like that: what would be the difference between holding your hand against a light and NOT holding it against a light? (edit: like light shining through it, if that makes more sense. Right, you can see another texture… veins, perhaps. You see what I mean and why it was so convenient to have it?)
I can understand why one might not like it, but to me it would be similar if the emission color was always the same color as the base, and you were just locked into that result. It’s far more flexible to be able to have a separate color for both, even if that means an extra node or two.
But conversely, I’ve no idea whether 4.0 handles opening a 3.x file gracefully and recreates that node setup for you on import, or just changes the way the shader looks. The former is best; the latter is not nice for the user.
You seem to have a very different definition for workaround compared to me.
In my view, this is a good design, because you can easily figure out that the base color is being used. If you want to do fancy stuff with it, you can still do it. It is less confusing for people who try to get an intuitive understanding of what is going on. That’s a good default in my books.
I’m trying to figure out why you need to have two colors so badly.
I used two different materials as an example, one is a simple tomato material.
The other is a human skin material.
The question is: what would you use as the albedo and subsurface colors in these two different cases?
I’ll give my answer: basically, I would always set the same color.
In the case of the tomato it’s very obvious: it should be some reddish color, and modulating the subsurface value shouldn’t change the color, only the diffusion of the light. Some hue shift can be added with the radius, to give an extra reddish tint, but that’s what the radius is for.
Now in case of the skin we could argue that we are mixing the skin color with some reddish tones for the underneath blood and some yellowish tint for the fat…
So we could try to have a natural color for albedo, and a redder tint for subsurface.
But now, if I need the subsurface effect to be stronger, increasing it would make the skin globally redder, whereas I would probably just want to increase the diffusion.
In that case, color variations should be done in the albedo, and changing the subsurface value should only change the diffusion of the light, not its color…
But I’d like to know your POV and how you would do it, so I might learn something!
Well, I don’t completely agree with you here. It’s true that there is some subjectivity in many things, and UI and art are no exceptions, but on the other hand there are also objective facts and best practices. Blender should enforce best practices, so the UI and functionality guide new users toward the right choices.
In the case of the removal of the subsurface color, to me it’s a step in the right direction; I never managed to use both of these colors in a useful way, but I’d be happy to be proven wrong!
wait wait wait… hang on…
I am going a different way here.
Help me out, please
I thought that the SSS color input takes a map and doesn’t turn it into a base value?
Correct me if I am wrong (edit: I get it, you can also use a flat value composed of 3 channels… or 4, maybe, if alpha is being read).
I use two textures: one is the “skin” texture going into the color, the other is a more detailed organic version of the first with more underlying detail (veins and such, as I mentioned previously), noodling into the SSS color slot…
Have I been doing things wrong?
(edit: If I understood you correctly, you are thinking I should just use a base value instead of a map?)
Now I add a checker texture to the subsurface and set the value to 0.5
We could think that it’s mixing albedo and subsurface color in a clever way, but it’s not…
And I would have to do some complex trickery to get the same overall 50% blue tint on my model.
I think what you’re trying to achieve is called layered SSS where you’d have different colors depending on what’s under the skin and how deep the light goes,
something discussed here:
But that’s not what the old Principled was doing, and you might start to see why it’s best to remove it: 1) it serves no purpose, and 2) it gives a false impression of being useful where it’s not.
I’m not an expert at realistic skin shaders, so it would be interesting to have some hints on how it’s done with Blender, and whether there are cases where modifying the albedo or the radius is good enough…
No, RGB is the colour; Alpha is a separate value used only as a mask (and has a separate output in the Image node).
This approach seems a bit weird to me.
You can plug a colour map into the ss radius to control the colours of the subsurface light that shines through objects. But plugging a colour map into the old subsurface colour input was basically mixing that map with the base colour.
This is the old behaviour (with an SSS colour of (0.8, 0.8, 0.8)) and a colour image for the radius:
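To get an intuition for why radius channels tint the light shining through an object, here is an illustrative sketch using a simple exponential falloff. This is an assumption for demonstration only; Cycles’ actual SSS uses random-walk sampling, not this formula. A channel with a larger radius survives deeper into the object, so it dominates the transmitted light:

```python
import math

# Illustrative only: approximate per-channel transmittance through a
# slab of thickness `depth`, with exponential falloff exp(-depth / radius).
# Not Cycles' real profile, just a way to see the per-channel tinting.

def transmittance(radius_rgb, depth):
    return tuple(math.exp(-depth / max(r, 1e-6)) for r in radius_rgb)

# Skin-like radius: red scatters much farther than green or blue.
radius = (1.0, 0.2, 0.1)
print(transmittance(radius, depth=0.5))
# Red channel survives far better, so light shining through looks red.
```

This is why a colour map in the radius socket reads as a tint on the light passing through thin parts: each channel is a scattering distance, and longer distances let that colour through.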
If I am not mistaken the devs decided to get rid of the colour input to avoid confusion about what it was actually doing.
I am not saying that the old colour input was not useful for certain effects; personally, I think it did no “harm” and could have been left in. But Blender is not “broken”: you can still do the same (with two extra nodes, a Value and a Mix Color node).
I do not think there is a right or wrong way to achieve the desired result, if you are happy with it then it’s OK. Not everyone is going for realism.
However, it is good to know how things actually work to get the most out of all the possibilities: the subsurface colour input was mixing that colour with the base colour, using the weight as the factor.