I’ve just spent some time testing Blender/Octane and am now reverse-engineering some scenes to see where the differences are.
One thing I immediately noticed is that the bump maps in Cycles looked “dirty”, for lack of a better term. I looked closer and played with the lights. What I found is that bump mapping via the Bump node has some really bad artifacts at grazing angles. I know that there’s a checkbox in the material settings, but that one improves the issue only slightly.
After watching a video of Raphael Rau (Silverwing) - as I often do - I was reminded of the option to use the Displacement node for bump mapping. So I swapped the Bump node for a Displacement node and the results are much better. No black artifacts any more - neither on the surface shape overall nor in the little grooves of the bump map.
So now I have to wonder if there’s something I can do to make the Bump node work properly, or if it’s better in general to use the Displacement node. Maybe somebody has some insight on this. Otherwise I will give the devs a nudge to take another look at it, because the average user will always head for the Bump node and might not even be aware that the Displacement node could deliver better results.
Here’s a comparison rendering. You can clearly see the issue with the surface shape as well as the darker grooves in the Bump node version.
Well, AFAIK this is (as also described in the docs):
The Bump node generates a perturbed normal from a height texture…
The Displacement node is used to displace the surface along the surface normal…
…so the first is “just”
manipulating the light reflection at a particular surface point to fake different geometry
while the other is
manipulating a particular surface point as geometry to get a different light reflection
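The contrast between the two approaches can be sketched in a few lines of Python. This is purely illustrative: the function names and the simple height-field setup are mine, not Cycles internals.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def bump_normal(h, x, y, n, strength=1.0, eps=1.0):
    """Bump-style shading: tilt the normal by the height gradient.

    The geometry itself is untouched; only the normal used for
    light calculations is perturbed.
    """
    dhdx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)  # slope in x
    dhdy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)  # slope in y
    return normalize((n[0] - strength * dhdx,
                      n[1] - strength * dhdy,
                      n[2]))

def displace_point(p, n, height, scale=1.0, midlevel=0.5):
    """Displacement-style: actually move the surface point along its normal."""
    offset = (height - midlevel) * scale
    return tuple(pc + nc * offset for pc, nc in zip(p, n))
```

In the first case the renderer shades unchanged geometry with a “lying” normal (which is why grazing angles can reveal the fake); in the second the point itself moves, and the correct normal follows from the new geometry.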
That’s also the reason why the latter was introduced later than the former in every renderer: it is more computationally intensive. Or the other way round: if you don’t have enough performance, you may want to use the less intensive method to fake it. You might also use one on a hero object and the other on background objects.
Edit: ohh, and even when not doing real displacement… of course the node computes a slightly different result… that’s also the reason why there is a dedicated Displacement node for real displacement… so there are “historical” reasons for the naming and the different methods…
By the way: that’s also the reason why we do not have any “make it physically correct or photorealistic” button… → different mathematical models and different implementations, which are not always easy to “recognize”… that’s also the reason for the my-renderer-is-better-than-yours wars…
The comparison with Octane made me aware of the issue. The bump results there had fewer issues and overall looked better.
I’ll stick with the Displacement method for now. It’s just odd that this approach is the more obscure one. I wonder why the original Bump node isn’t improved to match these results.
This isn’t about using actual physical displacement; it’s about using the “Bump Only” option via the Displacement node. So no real change in geometry is happening. It’s just an alternative to the classical Bump node.
Yes… see my edit… it is a different approach and so the node is named differently.
This is a general thing in every specialized field… there are different possibilities, and a user can use them differently with some knowledge about them… maybe sometimes even just trying things out to gain some experience. So I do not quite understand why you think considering…
… as a reason to change something about this because they…
I do not think that it is the job of any program or developer to teach the average user the knowledge and/or experience in the areas the application is made for, beyond a general manual.
For example:
smartphones do not teach how to make a call (…well, that said: maybe for some apps there should even be a simple manual)
image apps do not teach about color management
text editors do not teach how to write good stories
and not every 3D app comes with several PBR materials included
Also, no 3D app tells the user not to use a three-million-polygon asset when it is just in the background and a simple billboard might be enough, or the other way around, to use more polygons to get a better outline (← and maybe not just subdividing, but refining the silhouette only).
It’s the job of the user to learn about the possibilities in general and the ones of an application in particular.
And again: if you use more advanced, more recent, more sophisticated methods, then of course the result might be… better (depending on the context. Just compare with the first Cycles/Eevee implementations, or even with the internal renderer).
I’m not in the mood to diverge into other topics today or to talk about fallacies. Maybe some of the devs will stumble over this and can share some background about these approaches and implementations. That would be interesting.
Your discovery (within the context you use it in) was: the Displacement node is different and maybe even better than the Bump node… so why do you think they both exist in the first place? They are clearly different approaches to get a specific effect. When you think you get better results with one of them, then just use it. It’s like comparing apples with oranges.
And your question is a bit puzzling:
when you already discovered that with the Displacement node (even if only using bump) you got better results… (again: for your use case with PBR and comparing with Octane… other people just use Eevee)
So I just do not understand what you are asking for!?
It’s like asking about the reason for the existence of any other color management when someone gets better results using a specific one… the answer is diversity and possibilities.
I appreciate any feedback. But if the question is not clear, please ask for clarification on the problem. Diving into philosophical rabbit holes only distracts from the actual issue and might result in long-winded threads that are off topic. Look at me here trying to sort this out and spending time that could be used otherwise.
Once again for clarification, here’s the TL;DR…
In the recent comparisons that I’ve made with specific renderings from Octane, the bump map results from the Blender node don’t look great. There are dark areas on the surface, and the smaller grooves show overly dark results in the depressions. The results that the Displacement node delivers look better and are closer to what Octane delivered. This is a small subset of tests, and quite a narrow set of test situations is used.
This raises some questions…
*) Why are the results from the Displacement node method different?
*) Are there any disadvantages in using the Displacement node approach? Or the other way around - are there advantages in using the Bump node?
*) Would it make sense to replace the Bump node algorithm with the one used in the Displacement node?
So these aren’t questions about “diversity and possibilities”, which has nothing to do with this topic. This is about the technical background of these two approaches and to gain a bit more insight into how they work.
Someone explained why this is an issue in the thread above.
and here:
I think a lot of people didn’t notice it on this thread.
I will admit that I do not like how some people here find a reason to attack or belittle other people trying to ask questions or figure out certain issues that Cycles might have in comparisons to other renderers.
Let people investigate and ask questions if they need to. What’s the worst that could happen? If they find an issue, is that not a good thing that benefits other users?
Too bad that you didn’t get any feedback on your thread either. It’s not as if bump mapping is some niche technique. Those are things that should work well. But for now as long as I don’t really know more about the advantages/disadvantages of the two methods it’s hard to make any more judgement.
And yeah. Sometimes people can lose the ability to stay rational and on point. There’s nothing wrong about testing different options and comparing them. There’s a lot of valuable information in all this if people spend their time doing that.
I would say that at least it is an issue (for me with 6 legs).
At least using the displacement method is a workaround for the moment.
I love trying different things and plugging nodes into each other!
I am pleased you tried this out, for the moment I will be using the displacement instead of bump too.
This also reminded me to use those shadow terminating options in object properties which I had forgotten about!
Yeah, it looks like a known issue too. I recall reading somewhere that they were thinking of adding a checkbox option that had to do with bump and normals calculation; it could be related, but I read a lot and often forget where I read things.
As far as I could test, in Eevee there is also this difference, except that the advantage depends on the strength set in the Displacement node: when it is moderately strong, faceting of the normals happens. But to be fair, the Normal Map node does the same.
There’s a checkbox in the Material panel, but it doesn’t really make the issue go away. So there might have been a point where they were working on the issue, but the solution doesn’t really work (any more?).
I wasn’t getting philosophical at all… I was just stating:
There are historical reasons for
“faking geometry” via light-ray modification:
– bump mapping: using a height map (“coded” as grey values) to compute the “slopes” between the values, known as normals (this might use different delta values to get them…)
– normal mapping: using normal maps (“coded” as RGB values) to give these normals “directly” (usually also normalized)
and “real” geometry changes via:
– displacement: using “difference” maps (again only grey) to change the geometry along the normal (retrieved from the geometry, or even custom normals)
– normal displacement: using an encoded (RGB) normal again; now not normalized, but even with a different “length”/difference
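As a rough illustration of the encodings described above (toy functions of my own, not Blender’s actual shader code): a grey height map stores a single value per texel, while normal and vector-displacement maps pack a 3D vector into RGB by remapping [-1, 1] to [0, 1].

```python
def decode_normal(rgb):
    """Normal mapping: RGB in [0, 1] encodes a vector in [-1, 1].

    A flat-surface texel is (0.5, 0.5, 1.0) -> (0, 0, 1),
    which is why tangent-space normal maps look bluish.
    """
    return tuple(2.0 * c - 1.0 for c in rgb)

def vector_displace(p, rgb, scale=1.0):
    """Vector displacement: the decoded vector is NOT normalized;
    its length and direction both matter, so the point can move
    anywhere, not just along the surface normal."""
    d = decode_normal(rgb)
    return tuple(pc + scale * dc for pc, dc in zip(p, d))
```

The only difference between the two map types is how the decoded vector is used: as a replacement normal for shading, or as an actual offset applied to the geometry.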
The nomenclature is not always “good”, because when using bump, people sometimes spoke about normal manipulation…
So because of this, in Blender there is a
Bump node
Normal node
Displacement node
and also a
Normal input slot on the actual shader node used
– for the “faked” geometry… and a
Displacement input slot on the Material Output node
– for the “faked” geometry… (Bump Only)
– for the “real” geometry… (Displacement Only or Bump and Displacement)
Because of all these possibilities… Blender just offers them… (maybe even also using different nomenclature).
Well… you added no info about the possibilities of Octane and its bump, normal, or displacement features…
So:
When looking into the docs of Octane, I can find, for example for the UniversalMaterial… under Figure 5: Universal material parameters → Geometry Properties: Bump, Normal, and Displacement (slots, node connectors, or whatever).
Since you didn’t write anything about your Octane setup, I can only assume you used a bump node/connection in both… and Octane just might use a different gradient-detection algorithm like simple delta, Prewitt, or Sobel, and/or different parameters?
So yes:
You may have to elaborate or research this to get a more suited answer.
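To show what “different gradient-detection algorithms” can mean in practice, here are two standard textbook estimators side by side. Whether Octane or Cycles uses either of these specific kernels is pure speculation on my part; the point is only that different kernels give different bump normals from the same height map.

```python
def forward_delta(h, x, y):
    """Simplest gradient estimate: one-sided difference to the next texel."""
    return (h[y][x + 1] - h[y][x],
            h[y + 1][x] - h[y][x])

def sobel(h, x, y):
    """Sobel operator: weighted differences over a 3x3 neighbourhood,
    which smooths out single-texel noise in the height map."""
    gx = (h[y - 1][x + 1] + 2 * h[y][x + 1] + h[y + 1][x + 1]
          - h[y - 1][x - 1] - 2 * h[y][x - 1] - h[y + 1][x - 1]) / 8.0
    gy = (h[y + 1][x - 1] + 2 * h[y + 1][x] + h[y + 1][x + 1]
          - h[y - 1][x - 1] - 2 * h[y - 1][x] - h[y - 1][x + 1]) / 8.0
    return gx, gy
```

On a clean linear ramp both return the same slope; on noisy height data the Sobel result is smoother, which directly changes how the resulting bump shading looks in fine grooves.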
Oh right, the “bump map correction” one. Unchecking it does make it a lot better (at least the clipping), but displacement still seems to give slightly more detail.
This is with the displacement correction checkbox unchecked;
That’s interesting and not great. In my test scene it makes it better though. Not really great, but it looks even worse with the checkbox OFF. I guess that checkbox takes care of larger curvature corrections and might mess up smaller details.
Maybe I’ll head over to the dev forum to get some answers. They don’t seem to be here that often.