I need to get the camera position into my OSL shader, and I'm not sure how to do it. I know I can get the vector from the camera to a sample point, but that won't help me; I need the vector from the camera to the origin.
In short, I need to get the camera position as a vector, take a cross product (I think), do a small quaternion rotation around that cross-product axis, then render the second viewpoint and mix it with the first. This isn't supposed to be a stereo view per se; it's for verifying that stereo vision won't interfere with some imagery that is also generated in OSL. So I'm doing all of this in OSL for real-time viewing.
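To make the intended math concrete, here is a minimal pure-Python sketch of the step described above: rotate the camera position a small angle around an axis obtained from a cross product, giving a second "eye" position. The choice of world up `(0, 0, 1)` and the 2-degree angle are assumptions for illustration, not part of the original question.

```python
import math

def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rotate_about_axis(v, axis, angle):
    """Rotate vector v around the given axis by angle (radians),
    using the Rodrigues formula (equivalent to a quaternion rotation)."""
    ax = normalize(axis)
    c, s = math.cos(angle), math.sin(angle)
    d = sum(a * b for a, b in zip(ax, v))   # axis . v
    cr = cross(ax, v)
    return tuple(v[i] * c + cr[i] * s + ax[i] * d * (1 - c)
                 for i in range(3))

# Hypothetical camera at (4, 0, 0) looking at the origin; the rotation
# axis is the cross of the camera-to-origin direction with world up.
cam = (4.0, 0.0, 0.0)
to_origin = normalize(tuple(-c for c in cam))
axis = cross(to_origin, (0.0, 0.0, 1.0))
second_eye = rotate_about_axis(cam, axis, math.radians(2.0))
```

The same few lines of vector math translate almost directly into OSL's built-in `cross()`, `normalize()`, and `rotate()` functions once the camera position is available inside the shader.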
Because this has to be essentially real-time and interactive, I'm using OSL in Cycles so we can just orbit the perspective view. Setting up a stereo camera rig and then rendering won't work; I need it more interactive than that.
Though I started programming in Python before recently venturing into C (specifically for OSL), I'm not having much luck with Blender Python. My thought is that a Blender Python script could read the camera position and create a node expressing that position as a vector, which I would then plug into my shader node. Does this seem like the right approach?
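For reference, here is a rough sketch of the approach I have in mind, assuming it's run inside Blender. The material name, node name, and socket names are placeholders I made up; the position extraction is pulled into a plain function so the bpy part is just wiring.

```python
def camera_world_position(matrix_world):
    """Extract the translation (camera position) from a 4x4 world
    matrix given as nested row-major sequences, as Blender stores it."""
    return (matrix_world[0][3], matrix_world[1][3], matrix_world[2][3])

try:
    import bpy

    cam = bpy.context.scene.camera
    pos = camera_world_position(cam.matrix_world)

    mat = bpy.data.materials["MyOSLMaterial"]      # assumed material name
    nodes = mat.node_tree.nodes
    combine = nodes.get("CameraPos") or nodes.new("ShaderNodeCombineXYZ")
    combine.name = "CameraPos"
    combine.inputs["X"].default_value = pos[0]
    combine.inputs["Y"].default_value = pos[1]
    combine.inputs["Z"].default_value = pos[2]
    # The Combine XYZ output would then be linked to the OSL script
    # node's vector input, e.g.:
    # script = nodes["Script"]                     # assumed node name
    # mat.node_tree.links.new(combine.outputs["Vector"],
    #                         script.inputs["CamPos"])  # assumed socket
except ImportError:
    pass  # not running inside Blender
```

One open question with this approach is keeping the node value in sync as the view changes, which is part of why I'm asking whether it's the right direction at all.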