In-depth OSL, where to start?

Hi all, I have been lurking this place for quite some time.

There seems to be considerable insight regarding OSL around here, and I’d like to ask how you people got started.
My google-fu comes up with stuff that’s so simplistic Blender can do the same without OSL, and stuff so advanced it’s tough to make sense of.

I’ve read the spec, but it’s not that specific on what exactly a color closure is.
I’m guessing it’s not quite the same as your run-of-the-mill closure in Lisp, ML and their ilk?

I’ve done toy implementations of all the usual graphics stuff, except maybe a ray tracer; perhaps that’s what I’m missing …

So, how did you guys get into this stuff? Surely there is a less masochistic way than straight-up reading the sources?

Asking Mr Secrop specifically :slight_smile:

:slight_smile:

Most of what I’ve learned was from the OSL specification document. But I confess that it took a little more than that to understand it fully…
Perhaps understanding how the renderer does what it does helps a lot (tracing rays, calling shaders, using the globals, etc.).

If I could break the knowledge into parts, I’d say that three things should be taken into account:

  • Knowing the renderer’s algorithm (ray from camera, hit, call shader, new ray, hit, etc.)
  • Knowing the OSL language (what it can do and what it can’t)
  • Knowing how to create textures (some math included; there’s a small sketch after this list)

And of course, a good knowledge of 3D nomenclature (what a vector is, what a normal vector is, the differences between object space, world space and camera space, etc.).
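To make the “textures from math” point concrete, here’s a tiny sketch that builds a checker pattern from nothing but the u/v globals and a bit of arithmetic (shader and parameter names are made up for illustration):

```osl
// A tiny procedural texture built only from the u/v globals and math.
shader simple_checker(
    float Scale = 4.0,
    color ColorA = color(0.8),
    color ColorB = color(0.2),
    output color Out = color(0.0))
{
    // Scale the surface parameters into cells and alternate the colors:
    float cu = floor(u * Scale);
    float cv = floor(v * Scale);
    Out = (mod(cu + cv, 2.0) < 1.0) ? ColorA : ColorB;
}
```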

In short, whenever an object is hit by a ray (either from the camera or from anywhere else), the shader is called and given a few variables it can use to produce a (not so) final result.
These variables include:

  • the position of the hit (in object, world or camera coordinates);
  • its normal vector, and whether it’s a backside hit;
  • any attributes that are present (vertex colors, UVs, or others);
  • the parametric components of each face (u and v) and their derivatives (dPdu, dPdv);
  • other info such as the direction of the incoming ray, which type of ray it is (camera, reflection, transmission, shadow and so on), the length of that ray, how many bounces have already happened since the ray left the camera, etc.
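As a toy example, here is a shader that pokes at a few of those globals. The “Col” attribute name is an assumption (Blender’s default vertex-color layer); everything else is standard OSL:

```osl
shader inspect_globals(
    output color Out = color(0.0))
{
    // P = hit position, N = shading normal, I = incoming ray direction.
    // Re-express the hit position in object space, just to show the call:
    point Pobj = transform("object", P);

    // Visualize the normal, flagging backside hits in red:
    if (backfacing())
        Out = color(1, 0, 0);
    else
        Out = color(N[0], N[1], N[2]) * 0.5 + 0.5;

    // raytype() tells us which kind of ray called the shader:
    if (raytype("camera")) {
        // Fetch a vertex-color attribute, if the mesh has one:
        color vcol = color(1.0);
        if (getattribute("Col", vcol))
            Out *= vcol;
    }
}
```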

With that info, and with any other inputs (variables, textures, etc.), one can define the execution of closures. This means defining how a closure should behave by setting its color, normal, or any other parameter that specific closure uses. (Some very useful material for beginners can be found in TheBookOfShaders.)
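In code, “defining the execution of closures” looks deceptively simple. A minimal sketch of a diffuse material:

```osl
// The shader computes no lighting itself; it only hands the renderer
// a weighted closure to evaluate later.
shader simple_material(
    color Tint = color(0.8, 0.4, 0.2),
    output closure color BSDF = 0)
{
    // diffuse() is a standard OSL closure; its only parameter is the
    // shading normal. Multiplying by a color sets its tint.
    BSDF = Tint * diffuse(N);
}
```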

Closures are somewhat difficult to explain in depth, but think of them as a function describing how much energy/light can be ‘thrown’ back along the incoming direction from the surroundings of the hit point. Every other direction contributes, with some variance, to the amount of energy that returns along the incoming ray. And since all surfaces absorb some of this energy, we express this absorption by multiplying the distribution function by a color. Other closure parameters, such as ‘roughness’, ‘IOR’ or ‘Normal’, change the distribution function (BSDF) to some extent. After the closure parameters are set, the renderer will (more or less randomly, depending on the case) pick a new direction for a new ray, which is sent into the scene until it hits a light or gets lost in the background. And the process repeats in some other shader.
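A sketch of that paragraph in code: the color multiplication is the absorption, and blending two lobes reshapes the distribution function. microfacet_ggx() is one of the closures Cycles exposes (an assumption here; reflection(N) would be a spec-only substitute):

```osl
shader glossy_mix(
    color Base = color(0.8),
    float Roughness = 0.2,
    float Fac = 0.3,
    output closure color BSDF = 0)
{
    closure color diff = diffuse(N);
    closure color spec = microfacet_ggx(N, Roughness * Roughness);

    // Whatever isn't absorbed gets tinted by Base; Fac blends the lobes.
    BSDF = Base * ((1.0 - Fac) * diff + Fac * spec);
}
```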
Atm, we cannot create new closures, so we are stuck with the ones that come with Cycles. But they are already enough for most of our needs.

Hope this info is useful… for any other questions, feel free to ask.

Btw… for a more in-depth look into OSL, you should have a look at the test suite in the source code. Just remember that some of the stuff there simply doesn’t work in Cycles.
