Proposal for a new Cycles concept: a new 'Modifier' shader node type

What problem does this solve, exactly? Over the years, some Cycles shader node types have grown progressively larger as more options were added (especially the ones related to glossy shading). The issue (one example) is that there are so many glossy-related shading features the devs could still add (such as nk data), but each one erodes the individual building-block approach that has defined Cycles from the start.

So what could the solution be? How about adding a new class of shader node with a special output type that plugs into a new shader input on the individual shader nodes.

*** In a sense, this would be the new workflow for some materials:
shader modifiers > shader node > mix/add shader nodes > output
As opposed to just
shader node > mix/add shader nodes > output

This would allow an expanded set of built-in shading options while keeping every shader node feeling like a building block (rather than a full set). The workflow would change because some shading aspects, like anisotropy, would be handled by the new modifiers rather than by the original node itself. So, in a sense…

  • Anisotropic shading would now be done by way of aniso modifier > glossy shader > output (the anisotropic node would be removed).
  • Additional glossy node modifier types could be nk, glint, aberration, and thin-film
  • Modifier nodes could have a shader input themselves to stack modifiers before going into the base node
  • Modifiers could potentially be mixed via their own mix node for more control over effects
  • It’s not only glossy that could benefit (I could see the translucent node, glass node, velvet node, and maybe the SSS node having modifiers). This would greatly expand options and simplify the creation of some materials without hardcoded ‘super nodes’
  • The new shader output/input type should be a different color (because the connections from the output will only work if one or a string of them terminate in the new input for the shader nodes).
  • 2.8 is coming, which is always a good time to break workflow (if there is no way to convert old trees to this new idea).

*** Onto the next part: just how could this be implemented? I see two possibilities.

  • Assume that during the shader compilation stage, instructions are gathered starting with the nodes farthest back in the tree and moving forwards. The code could theoretically be made to support modifiers by, when each shader node is compiled, first working backwards through the modifier chain (if the new input has a connection) rather than continuing forward, then skipping to whatever the shader node is connected to once all the modifiers have been traversed.
  • The modifier nodes could also be represented in code as already part of the existing shader nodes (where the modifiers essentially enable features and effects), though this would mean turning the individual components into feature-rich super nodes under the hood (even though it won’t look like that from a user standpoint). Whether this would be the way to go depends on how much power the modifiers should have.
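The first possibility (walking backwards through the modifier chain before emitting the shader node) could be sketched roughly like this in Python. Everything here is hypothetical: `Node`, `modifier_input`, and `compile_node` are illustrative stand-ins, not actual Cycles code.

```python
# Sketch of the proposed compile order: when a shader node is reached,
# emit its modifier chain first (innermost modifier first), then the
# node itself, then continue forward in the tree as usual.

class Node:
    def __init__(self, name, modifier_input=None):
        self.name = name
        self.modifier_input = modifier_input  # link to a modifier node, if any

def compile_node(node, program):
    """Recurse into the modifier chain before emitting this node."""
    if node.modifier_input is not None:
        compile_node(node.modifier_input, program)
    program.append(node.name)

# aniso modifier -> thin-film modifier -> glossy shader
aniso = Node("aniso_modifier")
thin_film = Node("thin_film_modifier", modifier_input=aniso)
glossy = Node("glossy_bsdf", modifier_input=thin_film)

program = []
compile_node(glossy, program)
print(program)  # ['aniso_modifier', 'thin_film_modifier', 'glossy_bsdf']
```

The point is just that the modifier instructions land in the program before the base shader's, so the shader can pick up whatever state the modifiers set.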

What do you think, would this be possible?

I like the idea… I’ve thought of something similar, but more to do with texturing. Actually this is the workflow of some of my most complex OSL scripts.
By compacting sets of variables that work together for related effects of the same algorithm, I keep the UI clean, organized, and less intimidating.

For closures, I was thinking it would be better just to have a dropdown to choose among the glossy methods, diffuse methods, transmissive methods, and the combined methods, and have the UI of the node change accordingly, adding and removing inputs.
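That dropdown idea can be mocked up in plain Python (deliberately not bpy; the method names and `SOCKETS_BY_METHOD` table are purely illustrative, not real Cycles socket lists):

```python
# One node whose visible inputs depend on the selected method, instead of
# separate nodes or modifier chains. Purely a UI-behavior sketch.
SOCKETS_BY_METHOD = {
    "GGX":      ["Color", "Roughness", "Anisotropy"],
    "Beckmann": ["Color", "Roughness"],
    "Sharp":    ["Color"],
}

class GlossyNode:
    def __init__(self, method="GGX"):
        self.method = method

    @property
    def inputs(self):
        # the socket list follows the dropdown selection
        return SOCKETS_BY_METHOD[self.method]

node = GlossyNode("Beckmann")
print(node.inputs)   # ['Color', 'Roughness']
node.method = "Sharp"
print(node.inputs)   # ['Color']
```

In Blender this would presumably be an `EnumProperty` with an update callback toggling socket visibility, but the principle is the same.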

But I was clearly not thinking about uber/disney/principled shaders… in which case your proposal makes more sense.

I think you are using the wrong word.
Anisotropy, NK data, glint, aberration, thin-film… these things cannot be called modifiers.
They are a list of attributes.

If some current nodes have several outputs (Color and Factor), it is because those outputs do not correspond to the same piece of information.
It matters to have a clue about what info you are mixing in order to understand what result you will obtain.

At the very least, devs have to know whether we are using booleans, integers, or strings.

I don’t understand why we should conflate attributes and mixed shader results as potential inputs for anything.

If the list of attributes that a shader can accept is expandable, why not just expand it and leave each value at its default when not used?

The current UI allows you to hide unused sockets. Is that unsatisfying?
Would it be better to have the ability to hide selected sockets instead, to reveal only the desired ones? Probably.

IMHO, that would solve your problem.
I have difficulty understanding how Cycles developers could support arbitrary attributes for each shader within a short period.
It is already difficult to maintain the functionality of nodetrees that make sense.
Cycles would become unmaintainable if developers had to respond to requests that are nonsense.

For them, adding an attribute to a shader does not mean just adding a multiplier.
The result has to correspond to the effect the attribute provokes, according to user expectations.
You want chromatic aberration in the SSS shader? OK. Do you have any idea what the result should look like? How the shader should behave?

Maybe it’s about getting used to combining things with the Disney shader.

I fail to see the problem outlined in the premise. Factors like anisotropy are components of particular BSDFs; you can’t just compose them arbitrarily. You’d end up with nodes that only work with some other nodes, which is confusing.

If anything, the current nodes are not hardcoded enough. Nodes are not actually compiled into machine instructions; they become instructions for an interpreter loop, which has significant overhead. Don’t create overly complex node graphs, or performance will suffer.
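To illustrate that interpreter-loop point, here is a toy sketch (assuming nothing about Cycles' actual instruction set): the graph becomes a flat list of instructions, and every node costs a dispatch branch per evaluation.

```python
# Toy instruction interpreter: each node in a graph compiles to one (op, arg)
# pair, and evaluation pays a dispatch branch per instruction per sample.
# This is an illustration of the overhead argument, not Cycles' real SVM.
def run(program, value):
    for op, arg in program:      # the dispatch loop
        if op == "mul":
            value *= arg
        elif op == "add":
            value += arg
    return value

# Equivalent to the hardcoded expression value * 2.0 + 0.5,
# but paid for with two dispatches instead of zero.
print(run([("mul", 2.0), ("add", 0.5)], 1.0))  # 2.5
```

A feature folded into one hardcoded node is one instruction (or none); the same feature built from several small nodes multiplies the dispatch cost.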

That’s how it would have to work (it would be completely nonsensical to implement aberration for SSS, for instance), but it could be made less confusing if compatibility was made clear in the naming convention itself or even the socket color (each modifier node could have a prefix corresponding to what it can work with). Another way would be to use the cyclic-warning color to tell users a modifier can’t be used with the chosen shader node.

If the list of attributes that a shader can accept is expandable, why not just expand it and leave each value at its default when not used?

The current UI allows you to hide unused sockets. Is that unsatisfying?
Would it be better to have the ability to hide selected sockets instead, to reveal only the desired ones? Probably.

By that logic, the devs could just stuff everything from each category into one massive node (one node per category), because it doesn’t matter if you have 20 inputs and 20 outputs as long as you can collapse them (but the devs decided against that).

Seems like a ton of work for an unclear payoff. The BSDFs are separated because they are individual chunks of code; all the necessary options are closely knit within that code. I fail to see the efficiency in mashing six nodes together and then splitting out five nodes that only work with one core node, held together by a system of naming or color-coding to identify what goes with what.

Looks like a solution in search of a problem.

I agree with SterlingRoth that this is a solution looking for a problem. Everything about this seems practically worse than what we already have. What are the actual real-world problems resulting from having more inputs on BSDFs? Poor aesthetics isn’t really one.

A common mistake that beginner programmers (fresh out of school) make is overgeneralizing the most trivial things in search of some sort of elegance, completely losing focus on the actual problem. This is what I see in your proposal as well. You can spend a great deal of time solving problems that you invented for yourself, especially if you have a tendency to obsess over details. Just something to be aware of…

Basically, you’re talking about a struct input/output type. See section 5.9.
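Under that reading, a modifier's output is just a bundle of attributes carried over one typed connection, with defaults used when nothing is plugged in. A minimal Python sketch of the idea (the `GlossyExtras` struct and `glossy_bsdf` signature are hypothetical, not any real API):

```python
from dataclasses import dataclass

@dataclass
class GlossyExtras:
    """Hypothetical struct carried by the proposed modifier socket."""
    anisotropy: float = 0.0
    thin_film_thickness: float = 0.0

def glossy_bsdf(color, roughness, extras=GlossyExtras()):
    # The base shader reads defaults unless a modifier filled the struct,
    # so unconnected "modifiers" cost nothing conceptually.
    return {"color": color, "roughness": roughness,
            "anisotropy": extras.anisotropy,
            "thin_film": extras.thin_film_thickness}

print(glossy_bsdf((0.8, 0.8, 0.8), 0.2, GlossyExtras(anisotropy=0.5)))
```

One struct-typed socket per shader would then replace a whole chain of modifier nodes.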

I just wanted to say one thing: a modifier modifies something. So I think you have the order wrong. It should be shader > modifier > mix node (or whatever).

Another one of my ideas for Cycles is shader alteration through a node type that can be placed anywhere in the tree after the shader node (bending rays, spreading rays, redirecting rays, randomizing ray positions: low-level behaviors that operate on the rays themselves).

However, I don’t think modifiers such as anisotropy and anti-reflection (one for the glass node) would work well if they were placed after the shader node (due to the special input/output socket type that would likely be needed). The other open question is that I don’t have much of an idea how they should act once multiple shaders have already been mixed or added together.