Can procedural/OSL materials respond to external geometry?

Just how powerful are procedural and OSL shaders?

I was just mulling over a use case. Say I have a stone wall. The stones are of varying sizes, and their locations may be subject to change.

If the stones’ geometry is set into, say, a large cube (like smaller cubes protruding from a larger cube), is it conceivable to create a special “procedural mortar” material for the large cube the stones are set into? This material would contain not only the surface information for the general appearance of mortar… but could a material be created that is able to detect the intersection between the “mortar wall” and the stones, and displace itself differently around the border of each individual stone, as if the mortar were thicker specifically in those areas directly around the stones?

That way, rather than having to establish the location of each individual stone and sculpt that thickness into the “mortar wall” around the border of each stone, the material could simply be applied to the mortar wall and would respond properly to the stones no matter how they are rearranged.

It seems like there should be some way to accomplish this, but, is there in reality? Or is it just wishful thinking?

Could anyone speak to this? Does anyone know of any tutorials or courses for purchase that teach advanced material creation?

What I think you’re trying to describe could be accomplished with Dynamic Paint. Its brush and canvas options should get you headed in the right direction.

I’d definitely still like to know about OSL or some other purely shader-based means, but yeah, this seems like it could work. I feel dumb for not having thought of it! Thanks!

OSL lets you do a lot of things, but it still has some limitations… Probing outside geometry, for example, can be done by tracing rays and gathering nearby hits, though this is quite an expensive process: rays are traced for every sample, which means jumping back and forth between shader calls and BVH tree calls.
However, this cannot be used directly to create displacement maps… It’s a chicken-and-egg problem: you need the BVH tree to trace rays, but you need to trace rays to build the BVH tree, since the displaced geometry is exactly what the BVH is built from!
So you need to split this method into separate steps: probe the geometry and bake the result into a texture, which another shader can then use to do the displacement.
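
For anyone wondering what that probing step can look like, here’s a minimal sketch, assuming Cycles on CPU with Open Shading Language enabled; the shader name, ray count, and tangent-fan pattern are my own illustration, not anything canonical:

```
// A minimal sketch: probe nearby geometry by tracing a small fan of
// rays from the shading point, keeping the distance to the nearest hit.
// Bake the Fac output to a texture, then use that texture to drive
// displacement in a second material (the two-step split described above).
shader ProbeProximity(
    float MaxDist = 0.5,       // search radius around the shading point
    output float Fac = 0.0)    // 1.0 right next to a stone, 0.0 far away
{
    float nearest = MaxDist;

    // Crude tangent frame; the slightly skewed axis avoids a degenerate
    // cross product when N happens to point straight up.
    vector t = normalize(cross(N, vector(0.001, 0.002, 1.0)));
    vector b = normalize(cross(N, t));

    // Nudge the ray origin off the surface to reduce self-intersections.
    point origin = P + N * 0.001;

    for (int i = 0; i < 8; i++) {
        float a = M_2PI * i / 8.0;
        vector dir = cos(a) * t + sin(a) * b;
        if (trace(origin, dir, "maxdist", MaxDist)) {
            float d = MaxDist;
            getmessage("trace", "hitdist", d);  // distance along the ray
            nearest = min(nearest, d);
        }
    }

    // Strongest right at the stones, fading out with distance.
    Fac = 1.0 - nearest / MaxDist;
}
```

Baking Fac to an image texture (an Emission bake works) gives you the mask that a second material can then read for its displacement.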

OSL has raycasts, so you can “look around” at the surrounding geometry from the shading point. You can also query UVs/vertex colors at the point that was hit by the raycast. That can be slow, but it works.
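
As a rough illustration of that kind of query (treat the attribute name as an assumption; what’s actually retrievable depends on what Cycles exposes for the hit object):

```
// Sketch: read data back from whatever the ray hit.
shader ProbeHitUV(
    vector Dir = vector(0.0, 0.0, -1.0),
    output color HitUV = color(0.0))
{
    if (trace(P, normalize(Dir), "maxdist", 1.0)) {
        point uv = point(0.0);
        // Which attributes exist depends on the hit object;
        // "geom:uv" is an assumption here, not a guarantee.
        if (getmessage("trace", "geom:uv", uv))
            HitUV = color(uv[0], uv[1], 0.0);
    }
}
```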

BUT if you want displacement based on the surroundings, then no, it will not work. Although the “Displacement” and “Surface” outputs sit together on the Material Output node, in reality they are different phases of preparing the geometry for rendering. Blender first calculates all the displacement everywhere, and only then renders the already-displaced triangles with the surface material. And you cannot use raycasts during the displacement phase, because there is no “finished” geometry yet.

Not OSL, but there are other ways to do this.

You can get the distance to other geometry via a Vertex Weight Proximity modifier, and then displace with a Displace modifier.

If you need to do the displacement in nodes (for complicated procedurals or adaptive subdivision), then you can translate the vertex group into a UV map via a UV Warp modifier, then use that UV map to drive your displacement (see the sketch below).

Note that vertex groups hold a vertex-interpolated value (think of the difference between vertex color and texture color: per-vertex vs. per-texel), but if you’re using enough verts to do meaningful displacement, that shouldn’t be a big deal.
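
To make the node half of that concrete, here’s a minimal OSL sketch of the displacement side, assuming the UV Warp trick has already encoded the weights into the U coordinate; the UV map name “WeightUV” is purely hypothetical and must match whatever your mesh actually calls it:

```
// Sketch: displacement driven by weights that a UV Warp modifier has
// encoded into the U coordinate of a (hypothetical) UV map "WeightUV".
shader WeightDisplace(
    float Scale = 0.1,                        // mortar thickness
    output vector Displacement = vector(0.0))
{
    point uv = point(0.0);
    // getattribute() looks up a named attribute; the name must match
    // the mesh's actual UV layer ("WeightUV" is just a placeholder).
    if (getattribute("WeightUV", uv))
        Displacement = N * uv[0] * Scale;     // push along the normal
}
```

Plug the script node’s Displacement output into the Material Output’s Displacement socket (with the material’s displacement method set to actually displace, not just bump). The same idea should also work OSL-free, with a UV Map node feeding Separate XYZ into a Displacement node.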

Thank you. These are all interesting responses, but this seems to be the closest to what I need: creating a mask/map that can be updated along with the objects. Since it seems the pure shader route will be a future endeavor, that puts the choice between Dynamic Paint and the Vertex Weight Proximity modifier. This set me down the right path, and I really appreciate it. Now if only there were a less convoluted path from weight paint to vertex paint…

In any case, for anyone that may stumble upon this thread later, this page is really useful:
https://blender.stackexchange.com/questions/15167/weight-paint-in-cycles-nodes

Yeah, it’d be nice if vertex groups could be read directly in nodes. I have no idea why they can’t… In my mind, data is just data until you actually do something with it. There’s no real difference between vertex colors, vertex groups, or UVs until you use them to do something, and then the difference is in the use, not the data. Sometimes I’m reminded of Brain Candy: “This is a pill to give worms to ex-girlfriends! You just don’t get it here…”
