Screen or Camera Space Normals

The normal pass in Cycles provides the normals in world space, but I need them in screen or camera space. Is there a way to achieve that in the compositor? I could not find any node that exposes the necessary information.

The Vector > Vector Transform node should do what you want. Use the Geometry > Normal output as input, set Type to Vector, Convert From to World, and Convert To to Camera.
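From Python, the same setup would look roughly like this (an untested sketch; "Material" is a placeholder name for whatever node-based material you are editing):

import bpy

# Placeholder material name; any Cycles material with use_nodes enabled works.
mat = bpy.data.materials["Material"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

geometry = nodes.new('ShaderNodeNewGeometry')
transform = nodes.new('ShaderNodeVectorTransform')
transform.vector_type = 'VECTOR'    # Type: Vector
transform.convert_from = 'WORLD'    # Convert From: World
transform.convert_to = 'CAMERA'     # Convert To: Camera

# The transformed vector can now be fed into any shader input.
links.new(geometry.outputs['Normal'], transform.inputs['Vector'])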

Unfortunately, that is not possible in the compositor for the final renders, since Vector Transform is a shader node. It would be necessary to implement a script that applies this change to every shader without accidentally breaking the original shaders.
It seems much easier and more practical to write a script that takes the rendered normals and the camera information and creates the screen space normals from those. I expected this to be doable directly in the compositor.

Thanks anyway for the reply!

Ah, sorry, it slipped my attention that you asked about the compositor. It should be possible to calculate the camera space normal vectors in the compositor if you have the view vectors and world space normals as two rendered elements. How exactly to do it… I can't help with that, I'm bad at vectors.

Although the easiest way would be to simply add another render layer with a material override, and for that material use an emission shader colored by the normals calculation I wrote about before. Then use this new render layer as your camera space normals pass.
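In script form, that could look roughly like this (an untested sketch against the 2.79 API, where render layers live in scene.render.layers; the names are placeholders):

import bpy

scene = bpy.context.scene

# Override material: an emission shader colored by the camera space
# normal, i.e. the Geometry -> Vector Transform chain from above.
override_mat = bpy.data.materials.new("CameraNormalOverride")
override_mat.use_nodes = True
nodes = override_mat.node_tree.nodes
links = override_mat.node_tree.links

geometry = nodes.new('ShaderNodeNewGeometry')
transform = nodes.new('ShaderNodeVectorTransform')
emission = nodes.new('ShaderNodeEmission')
transform.vector_type = 'VECTOR'
transform.convert_from = 'WORLD'
transform.convert_to = 'CAMERA'
links.new(geometry.outputs['Normal'], transform.inputs['Vector'])
links.new(transform.outputs['Vector'], emission.inputs['Color'])
links.new(emission.outputs['Emission'],
          nodes['Material Output'].inputs['Surface'])

# Extra render layer that renders the whole scene with this material.
normal_layer = scene.render.layers.new("CameraNormals")
normal_layer.material_override = override_mat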

I need to automate that, so that it basically works for arbitrary scenes. Getting a solution that works for all types of displacement out of the box sounds too much like a search for all the special cases, and that scares me :slight_smile:.

What do you mean by types of displacement? If you have an RGB value for a pixel that expresses one kind of normal, you can calculate the other kind of normal from it. Maybe I'm missing something in what you want to achieve?

The normals in Cycles are not just the ones from the input texture; like everything else, they can be modified with a node setup. On top of that, there are different kinds of displacement (bump, true, both), each of which affects the normals visible in the rendered image differently. Being sure that everything is correct would require extensive testing. It is possible that I am overthinking it :slight_smile:
On the other hand, I am certain that the normals Cycles calculates are correct and just need to be converted to camera space. This can be done in Python, and I feel much more comfortable doing it that way, because there are no edge cases. Even though the performance is not going to be great, it won't add a significant amount to the render time.

You asked about compositing, which means that all the data you are handling is already rasterized; everything has already been calculated and modified in whichever way it was. What you get as input to the compositor is the final, modified normal. It seems that you are overthinking something, or I don't get what you really want.

My goal is to open any file and run a script which sets everything up, so that I get all the passes from Cycles stored in EXR files (that part is working) and, on top of that, the normals also in screen space.

I don't want to modify every material or shader just to get that. The compositor has the normals I need, but unfortunately no information about the scene camera. I hoped and expected that I had simply missed something in the compositor, as that would have made it very simple. The workaround is now to implement a Python script which reads the normals, converts them to camera space, and saves the result in an EXR file.

You don't need to modify any shaders or materials. What you need is two additional render layers with two materials that override all scene materials: one produces the world space normals, the other the camera view vectors. From these two layers you should be able to produce the camera space normals in the compositor.
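As for the actual vector math: a camera space normal is just the world space normal projected onto the camera's axes, i.e. three dot products. Something along these lines gives the axes (untested sketch):

import bpy
from mathutils import Vector

cam = bpy.context.scene.camera
rot = cam.matrix_world.to_quaternion()

# World space directions of the camera axes (use @ instead of * in 2.8+).
right = rot * Vector((1.0, 0.0, 0.0))  # camera +X
up = rot * Vector((0.0, 1.0, 0.0))     # camera +Y
back = rot * Vector((0.0, 0.0, 1.0))   # camera +Z; cameras look down -Z

# For a world space normal n, the camera space normal is then
# (n.dot(right), n.dot(up), n.dot(back)).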

Thanks again for the explanations. That cleared some misunderstandings I had about render layers and overrides! I am currently testing which approach is more practical or reliable for my use case.

Besides the solution described by @kesonmis, I also tried to use a script. In case someone is interested, here is the general idea (WARNING: I had to adjust the code slightly here, so it is untested!):

import bpy
from mathutils import Vector

# Copy the pixels into a plain list first; direct access to
# Image.pixels is extremely slow.
pixels = list(screen_space_normal_image.pixels)

# Rotation that maps world space into camera space. matrix_world also
# accounts for parenting and constraints, unlike rotation_euler.
camera = bpy.context.scene.camera
camera_rotation = camera.matrix_world.to_quaternion()
camera_rotation.invert()

# Pixels are stored as a flat RGBA float array, hence the stride of 4.
# The alpha channel is left untouched.
for i in range(0, len(pixels), 4):
    normal = Vector((pixels[i], pixels[i + 1], pixels[i + 2]))
    # Use @ instead of * in Blender 2.8 and later.
    screen_space_normal = camera_rotation * normal
    pixels[i] = screen_space_normal.x
    pixels[i + 1] = screen_space_normal.y
    pixels[i + 2] = screen_space_normal.z

# Write everything back in a single assignment.
screen_space_normal_image.pixels[:] = pixels
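
To feed the script, the rendered pass can be loaded from disk and the result saved out again, roughly like this (the file paths are placeholders for your actual outputs):

# Placeholder paths; point these at your actual normal pass files.
screen_space_normal_image = bpy.data.images.load("/tmp/normal_world.exr")

# ... run the conversion loop from above ...

screen_space_normal_image.filepath_raw = "/tmp/normal_camera.exr"
screen_space_normal_image.file_format = 'OPEN_EXR'
screen_space_normal_image.save()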

Hey, I'm running into the same issue. World normals are quite handy, but so are screen normals. I'd like to make a kind of hologram effect in the compositor, and screen space normals would be perfect for that.

Since 2.8 doesn't have material override yet (or maybe I'm missing something), and there is no more Blender Internal, how can we do that?

Let's summon the quite busy but best guy I know of who can answer that -> @brecht !

Just throwing an idea out there (not even sure if it can be fed into the compositor), but I believe the normals matcap is in camera space.

Hey, thanks cgCody! It could be possible to have a matcap in the compositor by feeding it into an STMap / Map UV node and providing tweaked screen space normals as UVs :smiley:
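(The tweak being, presumably, a remap of the normal's X and Y from [-1, 1] to the [0, 1] UV range, i.e. uv = normal * 0.5 + 0.5, which a Multiply and an Add Math node in the compositor should handle.)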
Thanks for the hint anyway !