Everything Nodes in 2.8: possible ways to distill it into achievable steps.

As the title says, we all want to see things like modifiers and particles become node-based in 2.8. However, we are faced with the reality of FOSS: limited resources and a limited ability to hold onto experienced developers.

Therefore, we may need to do it as a series of smaller steps if we want to see good results make their way into master (going for full-blown generic nodes in one go may simply be too much). How do we do it, then?

First step: code a series of specific, standalone node-tree types for things like modifiers, constraints, and particles.
Essentially, these would have some similarities to the Cycles node system in that there would be a special input/output socket for the main data type. The standalone nature of each tree type would mean they can be coded and committed one at a time, which makes the work more achievable and manageable. Here is an example for modifiers, where we start with just a straight translation of the stack to nodes:

Input > array > subsurf > simple deform > output
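To give a feel for how little new machinery this first step would need on the Python side, here is a minimal sketch using Blender’s existing custom-node API (the same mechanism Animation Nodes and Sverchok build on). All the bl_idname values, the MeshDataSocket type and the SubsurfNode are hypothetical stand-ins; a real implementation would evaluate derived meshes in C:

```python
import bpy
from bpy.types import Node, NodeSocket, NodeTree

class ModifierNodeTree(NodeTree):
    """A dedicated, standalone tree type, separate from shader trees."""
    bl_idname = 'ModifierNodeTreeType'  # hypothetical identifier
    bl_label = 'Modifier Nodes'
    bl_icon = 'MODIFIER'

class MeshDataSocket(NodeSocket):
    """The special main-data socket, analogous to Cycles' shader socket."""
    bl_idname = 'MeshDataSocketType'  # hypothetical identifier
    bl_label = 'Mesh Data'

    def draw(self, context, layout, node, text):
        layout.label(text=text)

    def draw_color(self, context, node):
        return (0.9, 0.7, 0.2, 1.0)  # a distinct color for the mesh data type

class SubsurfNode(Node):
    """A straight translation of one stack modifier into one node."""
    bl_idname = 'SubsurfNodeType'  # hypothetical identifier
    bl_label = 'Subsurf'

    levels: bpy.props.IntProperty(name="Levels", default=1, min=0, max=6)

    def init(self, context):
        self.inputs.new('MeshDataSocketType', "Mesh")
        self.outputs.new('MeshDataSocketType', "Mesh")

    def draw_buttons(self, context, layout):
        layout.prop(self, "levels")

for cls in (ModifierNodeTree, MeshDataSocket, SubsurfNode):
    bpy.utils.register_class(cls)
```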

However, we could also have merge nodes that combine multiple branches back together (so the derived mesh outputs are added on top of each other). This could mean things like:


Input > solidify (inset by 0.1) > combine > output
      > solidify (inset by 0.2) >

The result would be a mesh with two inner layers, spaced according to the original’s normal data.
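How a combine node could merge two derived-mesh branches is easy to sketch with bmesh, which already supports appending several meshes into one buffer. The function below is a hypothetical illustration of the node’s evaluation; a real node would operate on derived meshes in C:

```python
import bmesh

def combine_meshes(branch_a, branch_b, result):
    """Hypothetical evaluation of the 'combine' node above: the geometry
    of both branches is simply added on top of each other."""
    bm = bmesh.new()
    bm.from_mesh(branch_a)  # first solidify branch (inset 0.1)
    bm.from_mesh(branch_b)  # second solidify branch (inset 0.2), appended
    bm.to_mesh(result)      # write the merged geometry to the output mesh
    bm.free()
```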

Some nodes that can take objects as an input could then also make use of derived mesh results from a node branch if desired, such as:


Input > subsurf       > boolean (difference) > output
      > simple deform > o (boolean object)
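The interesting design question in that example is the “boolean object” input, which should accept either a scene object or an upstream node branch. One hedged way to sketch it (reusing the hypothetical MeshDataSocketType from the earlier sketch) is a node that falls back to an object picker whenever the socket is unlinked:

```python
import bpy
from bpy.types import Node

class BooleanNode(Node):
    """Hypothetical boolean node: the cutter can come from a node branch
    or, if that socket is unlinked, from a directly picked object."""
    bl_idname = 'BooleanNodeType'  # hypothetical identifier
    bl_label = 'Boolean'

    operation: bpy.props.EnumProperty(
        name="Operation",
        items=[('INTERSECT', "Intersect", ""),
               ('UNION', "Union", ""),
               ('DIFFERENCE', "Difference", "")],
        default='DIFFERENCE',
    )
    cut_object: bpy.props.PointerProperty(type=bpy.types.Object, name="Object")

    def init(self, context):
        self.inputs.new('MeshDataSocketType', "Mesh")
        self.inputs.new('MeshDataSocketType', "Cutter")
        self.outputs.new('MeshDataSocketType', "Mesh")

    def draw_buttons(self, context, layout):
        layout.prop(self, "operation")
        if not self.inputs["Cutter"].is_linked:
            # Fall back to a direct object reference, like the existing
            # Boolean modifier's object field.
            layout.prop(self, "cut_object")

bpy.utils.register_class(BooleanNode)
```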

And then you have the usual power from things such as texture inputs (the example below would create an array of objects with random offsets):


input   >             array      > output
texture > math (*2) > o (OffsetX)
                    > o (OffsetY)
                      o (OffsetZ = 1)
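Behind the scenes, the evaluation of that texture branch could look something like the sketch below, which samples an existing Blender texture per array element and doubles the value, as the math (*2) node would. The array_offsets helper is hypothetical; Texture.evaluate() is a real API call:

```python
import bpy

def array_offsets(texture, count):
    """Hypothetical evaluation of the texture > math(*2) > array chain:
    one (x, y, z) offset per array element, with OffsetZ fixed at 1."""
    offsets = []
    for i in range(count):
        # Texture.evaluate() takes a 3D coordinate and returns an RGBA
        # vector; the last component carries the texture's intensity.
        value = texture.evaluate((i / count, 0.0, 0.0))[3]
        offsets.append((value * 2.0, value * 2.0, 1.0))
    return offsets

# e.g. offsets = array_offsets(bpy.data.textures['Clouds'], 10)
```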

We could forget about very low-level operations for the moment (such as ones for matrix math); even basic modifier nodes like the above would already be enough to really increase Blender’s non-destructive power.

Step 2: develop node types that would allow trees to communicate with each other.
Naturally, each new node tree type would be a datablock type as well, so there would be special input nodes that allow the result of one tree to be fed into another; an example would be the result of a texture node tree being fed into a modifier node input.
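Since each tree type would be a datablock, the step-2 input node might be as simple as a node holding a pointer to another node tree. A hedged sketch, with hypothetical identifiers and a plain float output standing in for whatever the source tree produces:

```python
import bpy
from bpy.types import Node

class TreeInputNode(Node):
    """Hypothetical step-2 node: feeds the result of another node tree
    datablock (e.g. a texture tree) into the current tree."""
    bl_idname = 'TreeInputNodeType'  # hypothetical identifier
    bl_label = 'Node Tree Input'

    # NodeTree is an ID datablock, so a PointerProperty can reference it.
    source_tree: bpy.props.PointerProperty(type=bpy.types.NodeTree,
                                           name="Source Tree")

    def init(self, context):
        self.outputs.new('NodeSocketFloat', "Result")

    def draw_buttons(self, context, layout):
        layout.prop(self, "source_tree")

bpy.utils.register_class(TreeInputNode)
```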

Step 3: tie everything together with low-level nodes and eventually merge the tree types.
This would cover things that were worked on in the object nodes branch, such as matrix math nodes and the like. It means that sometime in the far-off future, the BF would finally have the means to pull everything together and allow for unlimited power in modeling and animation in a glorious node soup.

What would your approaches be?

That makes sense, but I think the biggest difficulty in that scenario is actually designing how it will all work together, not the individual nodes separately. However… having plenty of nodes ready to be dropped into a node system might ease the job of designing that system. But I’m not sure that’s how development works.

My idea is that we wouldn’t really have a complete generic node backend until the end of step 3, but steps 1 and 2 would see at least some incremental work towards that goal (getting it all done during step 1 would require the BF getting a huge infusion of cash, because of the full-time development position it would require; even more so if we were to get Lukas Tonne back, as that might require a well-paid multiyear contract).

Sometimes I wonder: did anyone take into account that we don’t use node-based systems for coding precisely because they grow as visually complex as the logic they represent? We already tried this in the field of integrated development environments. It didn’t work, for that very reason. We still use text because of that.
Not that I’m against trying to reinvent the wheel, but this particular wheel is not going to roll.

As long as you don’t try to emulate programming (like in Unreal 4), it’s fine (and the type of node trees I’m talking about would, from the artist’s perspective, feel like anything but coding).

Besides that, having some of the major Blender features moved to nodes would sharply reduce the need to code specialized tools and vastly increase possibilities, to a point that would otherwise demand convoluted legacy UIs.

Softimage ICE Trees. That’s what you are describing here. Let’s get them in Blender and just be done with it. This isn’t a hard concept. IT’S BEEN DONE ALREADY. There’s a nice wheel that Autodesk took off its axle. The wheel needs a new car. We have a car. We just need a good driver.

Actually, node-based systems are ubiquitous in CG. Maya, Houdini, Softimage, but also Modo, 3DSMax and C4D all have adopted them in one or more areas. The popularity of Houdini and Softimage is/was highly dependent on their proceduralism, realized as “artist-friendly” node graphs.

The reason it didn’t take off for general programming is that it’s really hard to represent complex control flow and state transitions in a node graph, which is what general-purpose languages require. In CG, node graphs are usually “pure functional”: data flows in one direction, there are no side effects, and the complexity of the actual evaluation process doesn’t burden the graph itself.
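That purity is exactly what keeps evaluation simple: because a node’s output depends only on its inputs, the whole graph can be evaluated with nothing more than memoized recursion. A toy sketch (all names hypothetical) to make the point concrete:

```python
class GraphNode:
    """A node whose output is a pure function of its inputs."""
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = inputs

def evaluate(node, cache=None):
    """No control flow, no side effects: each node is computed at most
    once, regardless of how many downstream nodes consume its output."""
    if cache is None:
        cache = {}
    if node not in cache:
        upstream = [evaluate(inp, cache) for inp in node.inputs]
        cache[node] = node.func(*upstream)
    return cache[node]

# out = (a + b) * a  --  'a' fans out to two nodes but is evaluated once
a = GraphNode(lambda: 2.0)
b = GraphNode(lambda: 3.0)
added = GraphNode(lambda x, y: x + y, a, b)
out = GraphNode(lambda x, y: x * y, added, a)
print(evaluate(out))  # 10.0
```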

That the “concept” isn’t hard is irrelevant. We have a real codebase here. The developer that just left the project spent years (maybe not full-time) to get a node system off the ground, using the features that are already in Blender. If you start from scratch, it’s “not that hard”, but then you have almost no functionality. If you try to shoehorn existing systems into a node evaluation system, you’ll quickly run into issues. You’ll likely have to do massive refactoring, which is a tough sell on a codebase that is “in production”.

I don’t think lvxejay meant to actually shove the XSI ICE tree code into Blender, which is impossible because it’s copyright-protected by Autodesk. I guess what he meant is: look at the ICE tree from a user perspective and mimic the workflow, which could then influence the logic of how it operates. I am no coder, so I have no idea, but from a user’s perspective ICE and Houdini should/could be high-class inspirations for how the user interfaces with it.

I like to think that there is no room for terms like “hard” or “simple” in computer science. I’ve never read about any obstacle encountered in the act of representing a logic structure with a node-based visual programming tool. In that respect, as far as I know, they worked as well as text did.
Note that while the experience we had was made in the context of programming languages, the results extend to any user interface that uses the same approach. The node-based material system of Cycles, for example, exhibits the same issue: when the complexity of the underlying statement grows (that is, when the number of atomic operations and their interconnections grows), the complexity of the visual representation also grows.
Also note that this is not a matter of reducing the complexity of the underlying structure: text is linear, but what it encodes stays as complex as it is; it just doesn’t add visual complexity to the equation.
It might be the case that in the CG field experienced users are so accustomed to node-based systems that they have grown dedicated patterns for their interpretation, but I could go as far as hypothesizing that a dedicated, text-based interface would increase their productivity; more specifically, given the dynamic nature of the task, a CLI interface.

What I mean is, it’s hard to “get right”, i.e. make it usable and readable. Purely functional graphs are easy to read: The evaluation order doesn’t really matter, data flows in one direction, etc. In terms of visualizing data flow, it is superior to text.

Once you add mutable variables, control flow (i.e. something happens or not), state transitions, you lose the clearly defined “flow”. Something can happen at any point, causing some effect at another point in the graph. The benefit over text is simply lost. You certainly can represent that in various ways, but nothing is really “good”.

> Also note that this is not a matter of reducing the complexity of the underlying structure: text is linear, but what it encodes stays as complex as it is; it just doesn’t add visual complexity to the equation.

Text itself has significant visual complexity; the relationship between inputs and outputs isn’t necessarily as obvious in text as it could be in a diagram.

> It might be the case that in the CG field experienced users are so accustomed to node-based systems that they have grown dedicated patterns for their interpretation, but I could go as far as hypothesizing that a dedicated, text-based interface would increase their productivity; more specifically, given the dynamic nature of the task, a CLI interface.

Historically, 3D packages had (and still have) dedicated scripting environments and CLIs, but you simply can’t get most artists to use that stuff. Maybe if you added tons of visual feedback and removed the possibility for syntax errors, you could get there.

Hard, easy… Sounds like we’re talking about eggs… The “decision” to look at another piece of software that’s done this well already isn’t a hard one. It’s really, really easy. Implementation? Slapping it into a working codebase? Yeah, that’s inherently challenging.

A text-based CLI interface would basically be taking us back to the 90s… Nodes are actually the exact same thing: text, wrapped in a visually appealing object… Considering most people using 3D software are artists with very little programming knowledge, Blender would alienate around 90% of the industry if it went towards a text CLI… I really don’t want to entertain that idea.

It’s not like sticking with legacy UIs would be any better in that department (unless you’re willing to live with a more rigid and limited system). Look at some of the Cinema4D videos, for instance: it’s being developed in a way that insists on keeping almost everything in a legacy format, and the complexity of that system can reach ridiculous levels (as in piles of buttons, menus, and widgets that you have to sort through).

At least with nodes you don’t have every option in front of you at once; how visually complex the tree gets really depends on the level of sophistication you’re pushing for in your shaders (fully realistic shaders will naturally be more complex than simple ones).

In the case of modifier nodes, particle nodes, rigging nodes, etc., it would mostly be the same.

In an effort to get back to the original conversation though…

“Everything Nodes” needs 3 things…

  1. Context (Object, Action, Simulation, Materials and Textures) - These are like the buttons in the node editor
  2. Sockets for each context - to carry data and settings within each context
  3. Interoperability… (which means you can reference data from one context into another context)

All of this will be covered in my proposal.

Essentially you have to start at the highest abstraction level possible. We don’t need a “rigging” or “modifiers” context. In those cases we need an object context with sockets that carry modifier data.

You get into some interesting usability and design aspects when looking at it this way. You could use “modifier nodes” in an object context and give someone all the parameters and sliders on the node that they want. Or, on the flip side, you could let them create the modifiers on the object like they normally do in its Properties window, and then use “modifier operators” with “modifier sockets” to change a modifier’s behavior, switch its place in the stack, etc., and pass that data to another object.
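For a sense of what a “modifier socket” would actually transport, today’s Python API already hints at it: modifier settings are ordinary RNA properties, so passing a modifier to another object essentially means copying what this hedged sketch copies by hand (copy_modifier is a hypothetical helper):

```python
import bpy

def copy_modifier(source_mod, target_obj):
    """Roughly the payload of a hypothetical 'modifier socket': the
    modifier's type plus the values of all its editable settings."""
    new_mod = target_obj.modifiers.new(source_mod.name, source_mod.type)
    for prop in source_mod.bl_rna.properties:
        if not prop.is_readonly:
            setattr(new_mod, prop.identifier,
                    getattr(source_mod, prop.identifier))
    return new_mod

# e.g. copy the active object's first modifier onto another object:
# copy_modifier(bpy.context.object.modifiers[0], bpy.data.objects['Cube'])
```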

Not to sound rude, but unless you have a conception of how this can be implemented (i.e. you could actually program it), don’t bother. There are implementation details that will carry over to the user-facing design; you cannot just start from the UI. There’s a reason why Houdini has different node systems for modeling and shading, for instance. You cannot just arbitrarily pass data between them the way you’d like.

For example, I often try to explain why you can have a “blur” or “erode” operator in the compositor but not in the shader (a shader is evaluated one point at a time, while blurring needs access to neighboring samples). The users still want these features, and if you tasked them with designing material nodes, they would put those features in, resulting in a design that isn’t actionable.

MaxCreationGraph and ICE have shown it takes many years to populate such systems with enough nodes to be truly production flexible. I have repeatedly hit the wall in various nodal systems and been forced to resort to scripting.

A much more straightforward solution would be to allow Python-based modifiers/nodes, as C4D, Maya, and Houdini do. Blender’s strength is Python: expand the scope where it can be used and allow the community to create rich, shareable functionality, so that development time is used where it is needed most.

PS. I am not dismissing the nodal concept, simply saying that an efficient first step could be scripted modifiers and/or nodes, which would empower the community to help themselves while things evolve.
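As a taste of what such a scripted modifier could look like for the user, the body might be nothing more than a bmesh function; the hook that would run it automatically on every update is the part Blender lacks (the function below is a hypothetical example, though the bmesh calls are real):

```python
import math
import bmesh

def scripted_modifier(mesh):
    """Hypothetical user-written modifier body: a simple sine deform.
    A real scripted-modifier feature would call this whenever the
    object's base mesh changes."""
    bm = bmesh.new()
    bm.from_mesh(mesh)
    for v in bm.verts:
        v.co.z += 0.2 * math.sin(v.co.x * 4.0)
    bm.to_mesh(mesh)
    bm.free()
```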

Why not use Sverchok and Animation Nodes as a starting point?
I think the only features not yet controllable by nodes in these systems are armatures and simulation; other than that, they’re pretty good.

… How are you going to tell me what I can or cannot code or do with my free time? You actually can arbitrarily pass data between them. If you’re seriously commenting on this thread and haven’t used Animation Nodes at all then your opinion is pretty moot.

There is literally a Script Node IN ANIMATION NODES that you can put your own script into and use within the node tree. This is not rocket science… it has been done before. There are various models to choose from. However, we already have a great Python-based nodal system in Animation Nodes, and we have a high-level abstraction model to look at from Softimage. At this point it’s just about implementation. ICE is still arguably the most scalable and flexible node-based creation system available. It was YEARS before its time. All those production houses that were using Softimage need new software… If you put something similar to ICE Trees into Blender, you have effectively killed three birds with one stone.

  1. Blender gets more exposure to industry standard production pipelines
  2. Blender gets a feature suite that keeps it in line with current trends
  3. Nodal creation opens up exponentially more ways to create.

I’m not telling you to do anything. Just be aware that no developer will pick up on a concept that isn’t sound from an implementation standpoint.

> You actually can arbitrarily pass data between them.

You can’t arbitrarily pass data between Houdini node graphs. The evaluation contexts are very different, for good reasons.

There are of course opportunities to pass data, but those depend on implementation details. That’s why I say, unless you know how the implementation is going to work, your ideas about the “3 things” that “Everything Nodes” needs are probably misguided.

In my opinion, arbitrary data links during the earlier stages could vastly complicate things, in a way that would only ensure that things like modifier nodes remain a pipe dream (due to all of the extra work needed before anything could get into master).

If you can concretely propose how special data types would be interpreted when plugged into the sockets of other types (in a way that’s reliable and predictable), then one could see it as a possible future project once the basic systems (the different node tree types mentioned above) are complete and available to users (i.e. committed).

One step at a time; the very nature of FOSS practically demands this in the majority of cases.