Don’t you hate it when you render an object with normal mapping, but around the edges it looks flat? Don’t you hate the long render times caused by displacement? Why not combine them to get the best of both worlds?
If it were coded, these would be the steps:
1. The object is rendered with a normal map. The image is saved.
2. The edges (silhouette regions) of the displaced object are determined.
3. Only those regions of the displaced object are rendered.
4. The two images are masked together.
This is all easier said than done, of course, but I've done a test to show that in theory it is possible. I didn't save any render time, because I rendered each object in full, but I think it still shows that the approach would work.
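Just for illustration, step 4 (masking the two renders together) could be mocked up in Python roughly like this; the file names, the NumPy/PIL dependencies and the simple linear-blend mask are assumptions for the sake of the example, not the actual test setup:

```python
# Rough sketch of the masking step: blend the normal-mapped render with the
# displaced render using an edge mask (white near silhouettes, black elsewhere).
# File names are placeholders.
import numpy as np
from PIL import Image

normal_pass   = np.asarray(Image.open("render_normalmap.png").convert("RGB"), dtype=np.float32) / 255.0
displace_pass = np.asarray(Image.open("render_displaced.png").convert("RGB"), dtype=np.float32) / 255.0
edge_mask     = np.asarray(Image.open("edge_mask.png").convert("L"), dtype=np.float32) / 255.0

# Use the (slow) displaced render only where the mask is white,
# and the cheap normal-mapped render everywhere else.
mask = edge_mask[..., None]
combined = displace_pass * mask + normal_pass * (1.0 - mask)

Image.fromarray(np.uint8(np.clip(combined, 0.0, 1.0) * 255)).save("combined.png")
```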
That is a good idea. It is like LOD, but instead of being based on distance from the camera it would be based on the face angle to the camera. Since Blender can already do edge detection, something like this seems possible. Let's see what the coders think.
That’s right. I don’t think you can get face angle information through nodes, so in this case I used a depth map – which of course gives exactly the same result on a sphere. But for complex models a depth map would not work.
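To make the face-angle idea concrete: the angle to the camera is essentially the dot product of the surface normal with the view direction, so if a camera-space normal pass were available, an edge mask could be derived roughly as below. The file names, the [0, 1] packing of the normals and the 0.3 threshold are all assumptions for the sketch:

```python
# Rough sketch: build an edge mask from a camera-space normal pass.
# A facing factor near 0 means the surface is edge-on to the camera.
import numpy as np
from PIL import Image

normals = np.asarray(Image.open("normal_pass.png").convert("RGB"), dtype=np.float32) / 255.0
normals = normals * 2.0 - 1.0              # unpack from [0, 1] to [-1, 1]

# In camera space the view direction is (approximately) the Z axis, so the
# facing factor is just the Z component of the normal.
facing = np.abs(normals[..., 2])
edge_mask = np.clip(1.0 - facing / 0.3, 0.0, 1.0)   # 0.3 = arbitrary falloff

Image.fromarray(np.uint8(edge_mask * 255)).save("edge_mask.png")
```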
As for implementing it (getting the angle values is no problem), I wonder how complex it would be to subdivide the mesh differently depending on the angle. Because if you want the technique to be useful, you can't afford to cheat like you did in this video (as I guess you did), so you wouldn't do two renders and mix them. Regarding the subdivision, maybe Broken could tell you more; I saw he has written animated modifiers, e.g. subsurf depending on distance (for LOD).
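As a very rough sketch of that subdivision idea in Blender's Python API (modern bpy/bmesh; the 0.3 threshold and the fixed cut count are made up, and a real implementation would presumably live inside a modifier rather than a script):

```python
# Rough sketch: subdivide only the faces whose angle to the camera is grazing
# (the silhouette region) and leave the rest of the mesh coarse.
import bpy
import bmesh

obj = bpy.context.active_object
cam = bpy.context.scene.camera

bm = bmesh.new()
bm.from_mesh(obj.data)

# Camera position in the object's local space.
cam_local = obj.matrix_world.inverted() @ cam.matrix_world.translation

edges_to_cut = set()
for face in bm.faces:
    view_dir = (cam_local - face.calc_center_median()).normalized()
    # Facing factor: 1 = facing the camera head-on, 0 = edge-on (silhouette).
    if abs(face.normal.dot(view_dir)) < 0.3:
        edges_to_cut.update(face.edges)

bmesh.ops.subdivide_edges(bm, edges=list(edges_to_cut), cuts=2, use_grid_fill=True)

bm.to_mesh(obj.data)
bm.free()
obj.data.update()
```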
And by the way, parallax mapping also does a nice job, though the edges of your model remain flat.
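(The reason the edges stay flat is that parallax mapping only shifts texture coordinates along the view direction and never moves any geometry; as a minimal sketch, with a made-up height scale:)

```python
# Classic parallax offset: shift the UV along the view direction (expressed in
# tangent space) by an amount proportional to the sampled height. The geometry
# itself is untouched, which is why silhouettes stay flat.
def parallax_uv(uv, view_dir_tangent, height, scale=0.05):
    u, v = uv
    vx, vy, vz = view_dir_tangent   # unit view vector in tangent space
    offset = height * scale
    return (u + vx / vz * offset, v + vy / vz * offset)
```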