[SOLVED] shader boundaries

I was wondering if the bounds of a shader are available.
That is, if I assign only certain faces/vertices to a shader, is this information available, or does the code have to deal with the whole object?
I can accept that you would have to return a full closure, but I only want to work on the parts that will be drawn. The rest can return 0.
The purpose of this is to reduce the computations required.
Thanks

OK, let's see if I have this right.
It seems as if a shader defines a cube, and when a ray hits it, it returns a luminosity value.
How that cube fits into the world is neither here nor there. Its location, scale, etc. are irrelevant.
How the shader is related to the object through the coordinates/UV map defines how that cube twists itself to fit the object.
Basically the same way that a UV unwrap does with a 2D image, though the shader could be 3D if you were showing a volume.

So the ray simply asks the shader, "I'm looking here; what can I see from your perspective?" The render engine then takes that value and applies it in relation to the other nodes, however they are set up.

So effectively my question is irrelevant, because the parts of the shader that are not visible (as defined by the mesh) will not be called.

«(…)a shader defines a cube»!! :eek:
I think you are mixing things up… :slight_smile:

When a shader is called (for every sample, not for each pixel), the renderer gives it a few globals that the shader can use (P, I, N, Ng, dPdu, dPdv, u, v, etc.). These are related to the point being shaded (sampled), and they change significantly from sample to sample… The renderer will then use the result of the shader to decide what to do next. This will depend on what type of shader you create: whether it's a generic shader, a surface shader, a volume shader, or a displacement shader. (You can add more parameters to the shader, like textures, other coordinates, etc.)
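As a minimal sketch of that (the shader name and the Tint parameter are made up for illustration), a surface shader only ever touches those per-sample globals:

```
shader per_sample_globals (
    color Tint = color(1.0),
    output closure color BSDF = 0)
{
    // P, N, u and v are filled in by the renderer for the current
    // sample only; there is no mesh-wide data available here.
    color c = Tint * color(u, v, 0.0);
    BSDF = c * diffuse(N);
}
```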

The shader itself will restart each time you call it, and apart from some variables that may still be in memory (which you cannot control), most of the variables are reinitialized for each sample. This makes it a bit difficult to have a shader give a value based on factors that are far from the point being shaded (either spatially or temporally).
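To illustrate (a trivial, hypothetical example), nothing you declare inside the shader survives to the next call:

```
shader no_state (
    output color Out = 0)
{
    // 'calls' is re-initialized on every execution; OSL gives you
    // no way to accumulate state across samples from inside a shader.
    int calls = 0;
    calls += 1;
    Out = color(calls);   // always 1, never 2, 3, ...
}
```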

When trying to reduce the computations of a shader, it's good practice to avoid calculations that involve big lookup loops (like probing all the vertices of a mesh from each sample). A good example is Worley's variation of the Voronoi algorithm: instead of calculating the Voronoi boundaries for the whole system, we use a jitter algorithm with a fixed seed and perform the calculations just for the 27 neighboring cells (though only the nearest is returned). Since the jitter function is locked to each cell, we can expect the same result whether we are looking right at that cell or from any neighbor, and thus keep the known continuity of the Voronoi diagram. If we were going with the original Voronoi algorithm, we would need to check all cells first, and only then return a result (sort all points in one axis, and then loop until the nearest point to the point being sampled is found).
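A sketch of that 27-cell lookup in OSL could look like this (the shader name is illustrative; cellnoise() plays the role of the fixed-seed jitter):

```
shader worley_sketch (
    point Pos = P,
    output float Dist = 0.0)
{
    point base = point(floor(Pos[0]), floor(Pos[1]), floor(Pos[2]));
    float best = 1e30;
    // Visit the cell containing Pos plus its 26 neighbors (27 total).
    for (int i = -1; i <= 1; i++) {
        for (int j = -1; j <= 1; j++) {
            for (int k = -1; k <= 1; k++) {
                point cell = base + vector(i, j, k);
                // cellnoise() is the fixed-seed jitter: the same cell
                // always yields the same feature point, which is what
                // keeps the pattern continuous across samples.
                point feature = cell + (vector) cellnoise(cell);
                float d = distance(Pos, feature);
                if (d < best)
                    best = d;
            }
        }
    }
    Dist = best;
}
```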

In a more simplistic explanation, let's say you start a render… The engine starts throwing rays from the camera, and each ray goes into the scene. If a ray hits a primitive (a triangle), then the renderer calculates the globals and checks each shader associated with that hit. The shader is executed, the returned values are given to the Closure, and the Closure tells the engine what to do next. The engine analyses the Closure and decides whether to stop the ray there (i.e. it's an emission shader, a holdout, or the bounce limit has been reached), or whether to throw a new ray into the scene (based on the closure specifications). This happens for each sample, and a scene rendered with 500 samples will throw 500 rays for each pixel. If in that pixel there's your OSL shader, then your shader will be called 500 times, each time with a slightly different P, N, etc. (Note that branched path tracing will reduce this amount to the AA samples.)
The shader itself won't know anything about the mesh it is applied to (that's why we have trace()), and it will only be called if some ray hits the shader.
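If you do need to probe the geometry around the shading point, trace() is the tool. A hedged sketch (the shader name and output are made up):

```
shader probe_distance (
    output float HitDist = 0.0)
{
    // Fire a probe ray from the shading point, straight into the surface.
    if (trace(P, -N)) {
        // getmessage("trace", ...) retrieves data about the last trace() hit.
        getmessage("trace", "hitdist", HitDist);
    }
}
```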

For testing, you can always use printf() to output information from your shader into the console… You'll clearly notice the amount of data the shader prints out, every time the shader is called!! :wink:
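Something like this (an illustrative sketch) makes the per-sample behavior very visible:

```
shader debug_globals (
    output closure color BSDF = 0)
{
    // One line per shader execution -- expect a flood of output
    // in the console once the render starts.
    printf("P = %g  N = %g  uv = (%g, %g)\n", P, N, u, v);
    BSDF = diffuse(N);
}
```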

You can also look into the testsuite in the OSL GitHub… it has some interesting information that can help you go deeper into the language.

PS: sorry if there's some repetition in this formulation… but testing things really helps you understand what's going on.

Thank you, Secrop, your answer is pure gold.
I have read the spec, bought and read the book, and played with some stuff. But your answer clarified it beautifully.
Thank you again.