Camera position into OSL script

I need to get the camera position into my OSL shader, and I’m wondering how to do this. I know I can get the vector from the camera to a sample point, but that won’t help me; I need the vector from the camera to the world origin.

In short, I need to get the camera position as a vector, take a cross product (I think), do a small quaternion rotation around that cross product, then render the second viewpoint and mix it with the first. This isn’t supposed to be a stereo view per se; it’s for verifying that stereo vision won’t interfere with some imagery, which is also generated in OSL. So I’m doing all of this in OSL for real-time viewing.

Because it has to be essentially real-time and interactive, I’m using OSL in Cycles so we can just rotate the perspective view around. Setting up a stereo camera rig and then rendering won’t help me; I need something more interactive than that.

Though I started programming in Python before recently venturing into C (specifically for OSL), I’m not having much luck with the Blender Python API. My thought is that a Blender Python script could read the camera position and create a node expressing that position as a vector, which I would then plug into my shader node. Does this seem like the right approach?

thanks

You can only calculate the camera position when the ray type is ‘camera’: multiply the ray length by I and add P. (Note that in Cycles, I points from the shading point back toward the ray origin, which is why adding works.)

point CamPos;
float L;
if (raytype("camera"))
{
    // distance the camera ray travelled to reach this shading point
    getattribute("path:ray_length", L);
    // walk back along the incoming ray to recover the camera position
    CamPos = L * I + P;
}

None of the other ray types will work.

PS: Alternatively, you can get the camera position with drivers and feed the vector into your shader as an input.
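
If you go the driver route, the OSL side only needs a plain input parameter; a driven Combine XYZ node (for example) would supply the values. A minimal sketch, with illustrative names:

shader CamFromDriver(
    // plug in a node whose X/Y/Z sockets are driven by the camera location
    vector CamPosIn = vector(0, 0, 0),
    output point CamPos = point(0, 0, 0)
)
{
    CamPos = point(CamPosIn);
}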

Thanks Secrop, I really appreciate the help. I’d been having trouble getting my OSL shader to compile, so I switched to using the trace:hitdist attribute. I’m using the osl-languagespec.pdf from GitHub, but maybe I should be using Blender-specific docs for their implementation. Here’s what I have so far.


shader GetCam(
    point surfP = P,   // shading position (defaults to the global P)
    vector Inc = I,    // incident vector (defaults to the global I)
    output point CamPos = point(0, 0, 0)
)
{
    float L = 0.0;  // initialize so non-camera rays yield a defined output
    if (raytype("camera")) {
        getattribute("trace:hitdist", L);
        CamPos = L * Inc + surfP;  // walk back along the ray to the camera
    }
}

In order to set up my right-eye view correctly, I figure I’ll need to create a copy of the camera vector with its Z component set to zero, then take the cross product of the two. That gives me a vector in the XY plane. Taking the cross product of that with the camera position should then give me the “up” vector for the camera. Finally, a small quaternion rotation about that up vector will give me the right-eye position.
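
That construction maps fairly directly onto OSL’s built-in rotate(), which rotates a point about an axis through two points, so an explicit quaternion isn’t required. Here’s a minimal sketch of the idea, assuming the camera orbits the world origin; RightEye and eyeAngle are illustrative names, and the degenerate cases (camera exactly level with, or directly above, the origin) aren’t handled:

point RightEye(point CamPos, float eyeAngle)
{
    vector toCam = vector(CamPos);                     // origin -> camera
    vector flat  = vector(CamPos[0], CamPos[1], 0.0);  // same vector, Z zeroed
    vector side  = cross(toCam, flat);                 // horizontal, in the XY plane
    vector up    = normalize(cross(toCam, side));      // the camera's "up"
    // small rotation about the up axis through the origin;
    // the sign of eyeAngle selects the left or right eye
    return rotate(CamPos, eyeAngle, point(0, 0, 0), point(up));
}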

But I just found out that they really want to view this in stereo, so I’ll need to figure out a whole lot more tricks to get these two views rendered simultaneously in separate Blender windows.

Or, instead of doing all this interactively, I could just create a stereo camera rig and path, and pre-render everything. I was given a sprint deadline of two weeks to get a lot of stuff done for this project. :)

As I said, thanks very much for your help; I’m learning C on the fly as I do this (migrating from Python).

Have you tried the new Stereo 3D / Multi-View functionality (available since 2.75)?
It may be just what you need…

I’ve tried the stereo side-by-side view, and it works excellently in all the viewport shading modes I’ve tried, except for Rendered. I get a nice 3D view in all the other modes.

I still get a side-by-side view when in Cycles Rendered mode, but both views are from the same camera viewpoint. (I should mention that I’m running 2.76 on Windows 7-64).

Have I encountered a bug, or is it possible that I just have a wrong setting somewhere?

Multi-view works on my monitor, but I don’t have any way to test other outputs…
Maybe you could PM @dfelinto and ask him how it really works…

Try it this way:

point CamPos = point("camera", 0, 0, 0);
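
That transforms the origin of camera space into the shader’s common space, which gives you the camera position directly, with no ray-length bookkeeping and for every ray type. A minimal complete shader around it, with illustrative names, might look like:

shader CameraPosition(
    output point CamPos = point(0, 0, 0)
)
{
    // the origin of camera space, expressed in common (world) space,
    // is the camera position itself
    CamPos = point("camera", 0, 0, 0);
}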