OSL Goodness

Another quick question which I have not been able to find the answer to: how do you get a texture set up inside your node?
Most nodes I’ve seen so far use a vector into an Image Texture node, which then feeds a color into the script node. Is there any way to do this inside the node itself?
Is there any way to get all the information of a color ramp inside a node, instead of just a single color?

I would just create two color inputs and connect the image and ramp nodes to them. Coding that inside the script seems more complicated than just using what Cycles already has.
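
Roughly what I mean, as a minimal sketch (the socket names and the way the two colors are combined are just examples):

shader two_color_inputs(
    color ImageCol = color(0),   /* connect the Image Texture node's Color output here */
    color RampCol = color(0),    /* connect the ColorRamp node's Color output here */
    output color Col = color(0)
)
{
    /* combine the two incoming colors however the effect needs */
    Col = ImageCol * RampCol;
}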

We are all working on the same things at the same time!

Here is my AO, done for a personal project. In fact, it’s a copy of the Mental Ray one.

http://vadrouillegraphique.blogspot.fr/2012/12/simple-osl-ambient-occlusion.html

Nothing really special. Just the need to have a better AO in Blender !

Inverted AO (edge detection) is nice, and I’ve worked with it to simulate old materials. OSL is opening a bigger door to using it. I’m going to make some tests soon.

Thanks for the quick reply :slight_smile: But that’s not exactly what I’m looking for. What I want to do is evaluate stuff inside the node (or maybe externally, which I’m trying right now, but it is not really producing the results I was hoping for yet). If I understand it correctly, a color is just a single value, while a closure can be all colors depending on the type of closure, the lighting, normals, etc. What I want to do is use the shading information of a simple diffuse() to select a color on a color ramp depending on how white/black something is.

A texture lookup (which is specified in the OSL language PDF) seems to be what I want, but I have no idea how to get it to work. Preferably I’d have all the information of the texture accessible inside the node; if you use a color input, it is only a single value, so you can’t use internal node information in combination with it.
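
For reference, the kind of lookup I mean is roughly this (the filename input is just a placeholder, and the flipped v may or may not be needed):

shader texlookup(
    string Filename = "",        /* placeholder: path to an image file on disk */
    output color Col = color(0)
)
{
    /* u and v are the global surface parameters; texture() reads the image
       file directly instead of going through an Image Texture node */
    Col = texture(Filename, u, 1.0 - v);
}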

I can’t get s and t to work. I want to use s and t the same way they are used in RenderMan. I tried doing

vector uvwmap;
getattribute("geom:uv", uvwmap);
float s = uvwmap[0];
float t = uvwmap[1];

but this does not return values between 0 and 1 like I expected.
I will keep on trying to see what I can figure out, but any advice is appreciated.

Edit: since asking this question I have gotten closer, now that I know about http://www.openshading.com/osl/example-shaders/, but this is still not quite right, because I want UVW coordinates.

Dingto, do you know if these errors are bugs or still todo? The functions compile but do not run.
ERROR: LLVMOSL: Unsupported op aastep
ERROR: LLVMOSL: Unsupported op faceforward
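
In the meantime, a rough hand-rolled stand-in for aastep (using a small fixed transition width instead of a real filter estimate, so only an approximation) could look like this:

/* approximate aastep while the builtin op is unsupported: smoothstep over
   a small fixed width rather than a derivative-based filter width */
float aastep_approx(float edge, float s, float width)
{
    return smoothstep(edge - width, edge + width, s);
}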

vector uvwmap;
getattribute("geom:uv",uvwmap);
float s = uvwmap[0];
float t = uvwmap[1];

The problem with getting UV coordinates (or any other mesh attribute) directly via OSL's getattribute is that Cycles only generates this data when it is explicitly requested by the node. This request system is not part of the OSL standard and currently cannot be controlled from within OSL shader code.

The responsible function can be found in intern/cycles/render/graph.h

class ShaderNode {
...
virtual void attributes(AttributeRequestSet *attributes);
};

This is implemented by all the C++ node subclasses. The script node by default does not request any extra attributes though, so in general the UV data is not available unless there is another node such as Texture Coordinates with a connected UV output. There would need to be a way of checking the OSL code for requested attributes to make this work …
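
In the meantime, a practical workaround is to feed the UVs in through an input socket, e.g. by connecting the UV output of a Texture Coordinate node to it. A minimal sketch (socket and output names are arbitrary):

/* connect the UV output of a Texture Coordinate node to the UV socket;
   that connection is also what makes Cycles generate the UV data */
shader uv_from_socket(
    point UV = point(0, 0, 0),
    output float S = 0,
    output float T = 0
)
{
    S = UV[0];
    T = UV[1];
}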

Vector Math node. I know this already exists but this might be useful for investigating vector maths.



#include "stdosl.h"

vector faceforward (vector N, vector I, vector Nref)
{
return (dot(I,Nref) > 0) ? -N : N;
}

shader vectormath ( 
        vector Ain = vector(1,0,0),
        vector Bin = vector(0,1,0),
        vector Ref = P,
        
        int negA = 0,
        int negB = 0,
        int negC = 0,
        int SwapAB = 0,
        
        output vector Reflect = vector(0.5),
        output vector Cross = vector(0.5),
        output vector Dot = vector(0.5),
        output vector FaceForward = vector(0.5),

        output vector NormalisedA = vector(0.5),
        output float LengthA = 0.5,
        
        output vector NormalisedB = vector(0.5),
        output float LengthB = 0.5
)
{
   
   vector A,B,C;
   
   /* by default A = Ain and B = Bin; setting SwapAB exchanges them */
   if (SwapAB) {
       A = Bin;
       B = Ain;
   }
   else {
       A = Ain;
       B = Bin;
   }
 
   C = Ref;  
    
   A = negA ? -A : A ;
   B = negB ? -B : B ;
   C = negC ? -C : C ;
   
   
   LengthA = length(A);
   NormalisedA = normalize(A);
   
   LengthB = length(B);
   NormalisedB = normalize(B);
   
   Reflect = reflect(NormalisedA,NormalisedB);
   Cross = cross(A,B);
   Dot = dot(A,B);
   FaceForward = faceforward(NormalisedA,B,C);
    
}


Globals node

Very simply outputs the globals. Might be useful.



surface globals(  

    output point Point_P = P,
    output vector Vector_I = I,
    output normal Normal_N = N,
    output normal Normal_Ng = Ng,
    output float Float_u = u ,
    output float Float_v = v ,    
    output vector Vector_dPdu = dPdu,
    output vector Vector_dPdv = dPdv,
    output point Point_Ps = Ps,
    output float Float_time = time , 
    output float Float_dtime = dtime , 
    output vector Vector_dPdtime = dPdtime , 
    output closure color Closure_Ci = Ci  
    
)  
{  
  /* no code */
}

Does anyone have a clue about the license for the various header files used in SL shaders?
See: https://github.com/nfz/RIBMosaic-exp/tree/master/shader_library/surface



/************************************************************************
 * filterwidth.h - Some handy macros for filter size estimation
 *                 for antialiasing of shaders.
 *
 * Author: Larry Gritz ([email protected])
 *
 * Reference:
 *   _Advanced RenderMan: Creating CGI for Motion Picture_, 
 *   by Anthony A. Apodaca and Larry Gritz, Morgan Kaufmann, 1999.
 *
 * $Revision: 1.1 $    $Date: 2008/02/13 01:59:47 $
 *
 ************************************************************************/



Charlie: license? My posts are for educational purposes only. I thought yours were too! :RocknRoll:

Where is the code for the Mental Ray one? Do you have a link?

I added to your shader the other options from mine. I think for AO the best one to use now is this mix of yours and mine. I renamed it to version 2 and called it GAO2, which funnily enough looks a lot like my avatar name here in this forum! :stuck_out_tongue:

The angle is in degrees now.

Note: I am not completely sure about the way you calculate the random vector used to throw the ray. I think you are creating a random vector whose three components x, y, z always have exactly the same value. I will take a look at this tomorrow.


/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 */
 
/* Simple Ambient Occlusion by François GASTALDO
 * contact me at : [email protected]
 * blog (3D, shaders, photos and more) : 
   http://vadrouillegraphique.blogspot.fr/
  
   Small documentation:
   - This shader is a general-purpose Ambient Occlusion.
   - You can use it for color, for closure, or for both at the same time.
   - If you use just the 'ColorIn' or 'ClosureIn' inputs and connect
     nothing to 'ColorHit' or 'ClosureHit', then the occlusion will
     be black.
   - Mode: 0: Concave (AO)
           1: Convex (Wear)
           2: Both
   - InvertEffect: swaps the colors/closures used.
   - AO_Angle is in degrees.
   - Distance limits how far away occluding objects are detected.
  
 * Use it in production at your own risk.
 * Closures are for Blender/Cycles.
 * If you use this shader, please credit me. Thank you.
 *  Enjoy !
 * François Gastaldo
 */
 
shader GAO2(
  color ColorIn = color(1),
  color ColorHit = color(0),
  closure color ClosureIn = color(1) * emission() ,
  closure color ClosureHit = 0,
  int Mode = 0, /* 0: Concave (AO) 1:Convex (Wear) 2:Both */
  int InvertEffect = 0,
  int IndirectIllumination = 1,
  float AO_Angle = 54,
  float Distance = .5,
  vector NormalBump = vector(0,0,0),
  output color ColorOut = ColorIn, 
  output closure color ClosureOut = ClosureIn,
  output float Fac = 0
)
{
  if ( raytype ("diffuse") ) //Disable diffuse emission from this material
  { if(InvertEffect)
    { ColorOut = ColorHit;
      ClosureOut =  ClosureHit;
    }
    else
    { ColorOut = ColorIn;
      ClosureOut =  ClosureIn;
    }

    if (!IndirectIllumination)  //AO without indirect illumination
    { ColorOut = color(0);
      ClosureOut = 0;
  } } 
  else  
  { //Compute vector to trace Ray=random*N    
    vector noisevector = vector (0, 0, 0);
    //perlin gives float between -1.0 and 1.0
    vector aovector = N + NormalBump + ( 0.001 * noise("perlin", P*10000.0) );
    float aoangle = radians(AO_Angle);
    
    //cell gives float between 0 and 1.0
    noisevector = aoangle * ( 1 - 2.0 * noise("cell", aovector*10000.0) );
    aovector +=  noisevector;

    //Trace Ray for AO
    
    int didhit1=0,didhit2 = 0;
    float dist1,dist2,fac1=0,fac2=0;
    if (!Mode)
    { didhit1 = trace (P, normalize(aovector), "maxdist", Distance );
      if(didhit1)
      { getmessage ("trace", "hitdist", dist1 );
        fac1 = clamp( (dist1/Distance), 0, 1 );
      }
    }
    else if (Mode == 1)
    { didhit1 = trace (P, -normalize(aovector), "maxdist", Distance );
      if(didhit1)
      { getmessage ("trace", "hitdist", dist1 );
        fac1 = clamp( (dist1/Distance), 0, 1 );
      }
    }
    else
    { didhit1 = trace (P, normalize(aovector), "maxdist", Distance );
      if(didhit1)
      { getmessage("trace", "hitdist", dist1 );
        fac1 = clamp( (dist1/Distance), 0, 1 );
      }
      didhit2 = trace (P, -normalize(aovector), "maxdist", Distance );
      if(didhit2)
      { getmessage("trace", "hitdist", dist2 );
        fac2 = clamp( (dist2/Distance), 0, 1 );
      } 
    }
    //int didhit = trace (P, normalize(aovector), "maxdist", Distance );

    if ( didhit1 || didhit2 )
    { //gradient between the 2 colors to have smoother AO
      if(didhit1)
      { Fac = fac1;
        if(InvertEffect)
        { ColorOut = mix( ColorIn, ColorHit, Fac );
          ClosureOut = (ClosureHit * Fac) + ClosureIn * (1 - Fac);
        }
        else
        { ColorOut = mix( ColorHit, ColorIn, Fac );
          ClosureOut = (ClosureIn * Fac) + ClosureHit * (1 - Fac);
        } 
      }
      if(didhit2)
      { Fac = fac2;
        if(InvertEffect)
        { ColorOut = mix( ColorIn, ColorHit, Fac );
          ClosureOut = (ClosureHit * Fac) + ClosureIn * (1 - Fac);
        }
        else
        { ColorOut = mix( ColorHit, ColorIn, Fac );
          ClosureOut = (ClosureIn * Fac) + ClosureHit * (1 - Fac);
        } 
      }
    } 
    else  //return black
    { if(InvertEffect)
      { ColorOut = ColorHit;
        ClosureOut = ClosureHit;
      }
      else
      { ColorOut = ColorIn;
        ClosureOut = ClosureIn;
  } } }
}


Hi @Bao2 ! Thanks a lot for all those welcome additions !

It starts to be a very nice shader :slight_smile: !

I like inverted AO. I now have to try the GAO2 seriously and make pictures.

Technically: the OSL spec says that the noise values are independent for the three vector components, so the disturbed N vector should be purely random.
The main problem I found is that, to change the direction, I multiply N by the cell noise. It’s a multiplication, so if any component is 0.0, the result would not be disturbed and would keep its 0.0 value. To avoid this, I add a small amount of non-N randomness: P. So the vectors are never purely zero and can be randomized properly.

As for the Mental Ray code, I don’t have it, but I’ve been using it for such a long time that I’ve started to know it quite well!

Charlie: thank you for your debugging nodes! Perfect to have them ready to use instead of modifying WIP shaders each time I need to check a global parameter :slight_smile: !

So far my attempt at porting the Team Fortress 2 shader has hit a dead end. I don’t know if it is because I don’t understand OSL correctly, or because it is simply impossible to create such a shader the way it is described in the breakdown of the RenderMan code.

Basically my problem is this:
Closures are evaluated at some point during the rendering process, making them basically black boxes. Because of this, it is impossible to take the shading information from, for example, a simple Lambert and remap it through a color ramp to create the dramatic terminator (as I think the hard shading is called). This was, afaik, possible to do with node materials in BI.

As far as I can see there is no way to use the shading information computed by a closure to drive the position on a color ramp. But if someone knows how to do that, I’d be grateful :slight_smile:

Your shader can just run at the color level until you need to influence the closure. In the future there will be more closures to use.
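
For example, something along these lines computes the Lambert term by hand at the color level and remaps it with a hard step before building the closure (the light direction input and the use of the diffuse() closure are just my assumptions for the sketch):

/* sketch: N.L computed manually so it can be remapped freely; a hard
   step() instead of a smooth ramp gives the dramatic terminator */
shader hard_terminator(
    color Lit = color(1.0, 0.9, 0.8),
    color Shadow = color(0.2, 0.1, 0.3),
    vector LightDir = vector(0, 0, 1),   /* assumed: direction towards the light */
    float Threshold = 0.3,
    output color Color = color(0),
    output closure color BSDF = diffuse(N)
)
{
    float nl = clamp(dot(normalize(LightDir), N), 0.0, 1.0);
    Color = mix(Shadow, Lit, step(Threshold, nl));
    BSDF = Color * diffuse(N);
}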

I downloaded a graphicall build but I do not get any results.

Not even this simple tutorial works.
http://www.openshading.com/osl/getting-started/

What do you think is going on? Did you have any problems like this?

Set the render engine to Cycles, and in the Render tab set the device to CPU and enable Open Shading Language; then the scripts should work.

Thanks, now it works.

A mix like the face of the robot in I, Robot (light/shadow mix), based on varkenvarken’s code:
from varkenvarken code:
http://blenderthings.blogspot.co.uk/2012/11/getting-light-vector-in-osl-for-blender.html#comment-form

Breeeeecht, please fix
getattribute("Lamp1", "object:location", loc);
which right now always gives the shader’s own object location instead of the location of "Lamp1"!!!

(I’m not submitting a bug report because I suppose this is just not implemented yet.)


/*
  Mix two colors/closures depending on whether they face the light or not.
  Open the Light_Position combo and manually enter the position of the
  light in the scene (select the light and check its Location in the
  3D View N panel).
*/
shader mix_IRobot(  
  color Color1 = color(0, 1, 0),  
  color Color2 = color(1, 0, 0),
  closure color Closure1 = holdout(),  
  closure color Closure2 = holdout(),
  vector Light_Position = point(0),
  output color Color = color(0),
  output closure color Closure = holdout()
)
{
  P = transform("object","world",P);
  float cosine_t = clamp(dot( normalize(Light_Position - P), N ), 0, 1.0);
  
  Color = mix(Color1, Color2, cosine_t);
  Closure = (1 - cosine_t) * Closure1 + cosine_t * Closure2;
}

This looks nice, but I have no idea how to use it to get the UV map information. Could you please elaborate?

By the way, does anyone know how to write closures for OSL, so we can get things like Phong, Lambert, Minnaert, Cook-Torrance, Seeliger and maybe even some other BSDFs?