Please provide a scripting language that can replace shader and geometry nodes

For geometry nodes, I am not sure how a language would work. As in, writing a Python geometry node? In the sense that you would be able to pack dozens of lines of Python code into a single node, thus hitting a good balance between low-level code and high-level editing capabilities.
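Nothing like a Python geometry node exists today; the closest existing hook is that node trees can already be built from Blender's Python API. A minimal sketch of that (the socket declaration shown uses the Blender 4.x interface API; older versions used tree.inputs.new instead):

```python
import bpy

# Build a small geometry node group entirely from Python:
# a group that offsets every point of the incoming geometry along Z.
tree = bpy.data.node_groups.new("PyDisplace", "GeometryNodeTree")
tree.interface.new_socket("Geometry", in_out='INPUT', socket_type='NodeSocketGeometry')
tree.interface.new_socket("Geometry", in_out='OUTPUT', socket_type='NodeSocketGeometry')

group_in = tree.nodes.new("NodeGroupInput")
set_pos = tree.nodes.new("GeometryNodeSetPosition")
group_out = tree.nodes.new("NodeGroupOutput")

set_pos.inputs["Offset"].default_value = (0.0, 0.0, 0.5)
tree.links.new(group_in.outputs["Geometry"], set_pos.inputs["Geometry"])
tree.links.new(set_pos.outputs["Geometry"], group_out.inputs["Geometry"])

# The group can then be assigned to an object's Geometry Nodes modifier:
# bpy.context.object.modifiers.new("GeometryNodes", 'NODES').node_group = tree
```

That is still "scripting the graph" rather than a true script node, but it shows roughly where the low-level/high-level boundary could sit.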

For shaders, this is somewhat of a no-go. As I have mentioned to Blender developers a few times, there is no clear strategy for how this would be done in a correct and proper way (✗ GLSL = deprecated / ✓ Vulkan shaders / ? Metal macOS shaders).

From what I have figured out, you have to bypass Blender's features and slide in your own code to handle the rendering, i.e. you write your own “shader” using Python. Say you plot the pixels into the image data directly (which is slow), or you use a Python-OpenGL (Python-SDL) backend to perform the rendering operations and then copy the pixels of the render buffer back into Blender's texture data.
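A minimal sketch of the “plot the pixels yourself” approach: fill a flat RGBA buffer in Python and assign it to a Blender image datablock, which can then be used as a texture.

```python
import bpy
import math

# Generate an RGBA buffer in Python and push it into a Blender image datablock.
# Slow compared to a real shader, but it runs entirely from Python.
width, height = 256, 256
img = bpy.data.images.new("py_procedural", width=width, height=height)

pixels = [0.0] * (width * height * 4)  # flat RGBA float buffer
for y in range(height):
    for x in range(width):
        i = (y * width + x) * 4
        v = 0.5 + 0.5 * math.sin(x * 0.1) * math.cos(y * 0.1)  # toy pattern
        pixels[i:i + 4] = (v, v * 0.5, 1.0 - v, 1.0)

img.pixels[:] = pixels  # one bulk assignment; per-pixel writes are far slower
```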

SPIR-V is compatible with both OpenGL 4.x and Vulkan. Metal support is handled by Apple anyway, from what I understand.

Say for example Blender developers implement a ShaderProgramNode that allows you to load your own shader from a file.

The question is whether GLSL is a safe choice. I assume that, using the shader compiler, you would be able to handle both flavours of GLSL (OpenGL/Vulkan).

The other case is that, if your shader is written in GLSL, it would have to be transpiled behind the scenes into a Metal Shading Language shader:

for example using a tool like this: https://github.com/septag/glslcc
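If a converter like glslcc were wired into such a pipeline, it would presumably run as an offline build step. A rough sketch of what that could look like from Python; the flag names are an assumption based on my reading of the glslcc README, so treat them as placeholders and check the tool's actual usage.

```python
import subprocess

# Hypothetical build step: cross-compile a GLSL fragment shader to Metal
# Shading Language with glslcc. The flags (--frag, --lang, --output) are
# assumptions taken from the project's README and may differ.
def cross_compile_to_msl(frag_path: str, out_path: str) -> None:
    subprocess.run(
        ["glslcc", f"--frag={frag_path}", "--lang=msl", f"--output={out_path}"],
        check=True,  # raise if the converter reports an error
    )

cross_compile_to_msl("my_shader.frag", "my_shader")
```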

So there is another aspect: how robust and bullet-proof these kinds of utilities/converters are, and whether they can guarantee a foundation solid enough for the entire effort to become feasible.

(No need to answer these questions directly, I just want to pin down the methodology and reasoning. If the entire idea is wrong or has problems in it, then it would not be worth the effort of going with this plan. The point is to find a good technique that won't be too much of a bother for the Blender development team, so that they are more likely to adopt and implement it.)

1 Like

I’m not asking for GLSL or similar specifically. Sure, that would be great, but even a language that just ran on the CPU would be a big improvement.

The problem is that node-based visual programming systems make it very difficult to express complex ideas. Once you have more than a dozen nodes in your graph, the thing becomes a pile of spaghetti. And the whole Capture Attribute system is pretty unintuitive too. Node-based programming is a reversion to the days of the ENIAC, when computers were programmed by stringing cables together. Programming languages made things easier with innovations like named variables and function calls.

Node programming makes some sense in shaders where you’re mostly just managing a coloring pipeline, but really starts to become hard to use when you start needing to do university level math or generate non-trivial geometry. So while something like GLSL might be useful from a systems point of view, I’m more interested in just having something more expressive.

Node systems start to work better when they do not attempt to be straight-up programming with boxes. For instance, it is a lot more difficult for artists to express in code what they can do with color/float curves, mix nodes, and color ramps. In addition, unless the BF revamped the text editor to actually work well as a coding environment, you would dramatically decrease the ease of use, as you would then need to download and install a potentially heavy IDE such as MSVC.

Fields tend to work because they do not attempt to be a programming language rendered as graphics. If you want an example of what felt like a programming language without the intuitiveness of a good code editor, check the original design for nodes predating fields.

I can see things like expression nodes coming at some point, but a full coding language is unlikely, even if the intent is to have it as an alternative rather than a full replacement.

1 Like

What render engine are you talking about?

Because you mentioned GLSL and CPU-only rendering, and that doesn't add up.

Unpopular opinion probably, but if your complex node setups look like spaghetti, then you're just not good at the UX side of noding.

Every time I download somebody else's file with a node tree, even the popular ones like Erindale's, it's horrendous. Every single input coming from a single Group Input node. Why? Don't people know they can have multiple ones? Why not group them logically? No one uses frames, no one uses Text data-blocks to leave descriptions, no one uses color tags for nodes, no one uses reroutes. Blender has amazing UI/UX for nodes; you can see in recent updates that every software is borrowing ideas from it, and people are demanding them.

Devs have been begging people to use Node Groups as assets, but people keep rebuilding the same trees in every blend file. I guess the idea of a modular workflow is very new to Blender users.

Once I downloaded an asset file that had a MASSIVE algorithm implemented in shader nodes, something like (let's say, for simplification):

pow(a + b * pow(a, b) / c, d)

The node tree was absolutely massive, because a, b, c and d all had very intense calculations themselves. I saw the text version and it was about one page long.

It took me one day to clean it up, but I turned the calculations for a, b, c and d into node groups. Inside those node groups I grouped other variables as well.

So I was left with fewer than 10 high-level nodes, with the ability to go inside any variable and change it, or change the variables of those variables. And in the end, because it became so simple, I could experiment easily and get other results without needing to rebuild the node tree from scratch.
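For comparison, the same clean-up expressed in code is just a matter of extracting functions. A rough Python analogy of the node-group refactor, with placeholder bodies standing in for the “very intense calculations”:

```python
import math

# Each node group becomes a function; nested groups become further helpers.
# The bodies are placeholders for the actual heavy calculations.
def calc_a(x): return x * x + 1.0          # stand-in for the 'a' node group
def calc_b(x): return math.sin(x) * 2.0    # stand-in for the 'b' node group
def calc_c(x): return x * x + 1.0          # stand-in for the 'c' node group
def calc_d(x): return x * 0.5              # stand-in for the 'd' node group

def shade(x):
    a, b, c, d = calc_a(x), calc_b(x), calc_c(x), calc_d(x)
    return math.pow(a + b * math.pow(a, b) / c, d)

print(shade(0.7))
```

The high-level expression stays readable, and each “variable” can be opened and tweaked in isolation, which is exactly what the node groups achieved.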

Node responsibly, people. And please, use multiple Group Inputs.

9 Likes

I was using it as an illustration to show how much simpler some ideas are expressed in code. While being able to write your own GLSL shaders would also be nice, it was not the main point of my argument.

I learned that only by accidentally making a copy of the Input and finding out that it’s okay :person_shrugging: This information is not obvious/accessible enough, I guess.

Also, you can add the node color preset to Favorites, to enable and set colors quickly.

Now I color all my Group Inputs in bright orange to keep track of them.

3 Likes

I do use groups and frames. Sometimes they do help encapsulate code. But not always.

Groups can be really useful when you need to reuse things, but that tends to be a bit of a specialty case because there aren’t a lot of node networks that you need to reuse - mostly just fairly simple equations.

The other thing they’re useful for is to divide your code into chunks, but that’s presuming your algorithm can be easily divided into chunks. It works well for pipelines where you can neatly separate it into different stages, but is far less helpful when you need to connect in nodes from distant parts of the graph.

And then there’s the fun part of node programming where you need to select the output of one node and scroll waaaaaay over to the other end of your graph to hook it up and hope you don’t mess up where you drop the connection.

As to most people not using groups and frames, I’d suggest that it’s not them being lazy or ignorant, but rather the system itself being badly designed. If it really was that useful, more people would use it.

1 Like

Can I ask what you are shading that uses such complex algorithms, and so often, that it's not logically divisible into chunks and also not reusable? You left me genuinely curious.

In the above code, I had already created a shader that cut a surface into square patches and applied a random translation and rotation within each patch. I then tried to extend it to interpolate between adjacent patches so they blended together smoothly. My GLSL shader came out to about 20 lines of code. My attempt to do it in Blender was abandoned because attempting to express even fairly simple ideas with node networks is very frustrating.
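To show why this reads so compactly as code, here is a rough Python sketch of the idea described above (quantize UV space into square patches and give each patch its own pseudo-random translation and rotation); this is a reconstruction of the concept, not the actual GLSL mentioned:

```python
import math
import random

# Split UV space into square patches; each patch gets a deterministic
# pseudo-random rotation and translation, seeded by its patch index.
def patch_transform(u, v, patch_size=0.25):
    px, py = math.floor(u / patch_size), math.floor(v / patch_size)

    rng = random.Random(px * 73856093 ^ py * 19349663)  # per-patch seed
    angle = rng.uniform(0.0, 2.0 * math.pi)
    dx, dy = rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)

    # Rotate the local coordinates about the patch centre, then translate.
    cx, cy = (px + 0.5) * patch_size, (py + 0.5) * patch_size
    lu, lv = u - cx, v - cy
    ru = lu * math.cos(angle) - lv * math.sin(angle)
    rv = lu * math.sin(angle) + lv * math.cos(angle)
    return ru + cx + dx, rv + cy + dy

print(patch_transform(0.37, 0.81))
```

Blending between adjacent patches would add a few more lines on top of this sketch.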

The fact that samplers cannot be independent of the vector that samples them (i.e., the Image Texture node only outputs the color sampled at a particular UV location; you can't just pass around the image itself and sample it later) also made this pretty much impossible.

This is also much worse when you get into geometry nodes. The stuff you want to do with geometry nodes is a lot more like typical programming and node networks just don’t cut it for anything but very simple systems.

I recently implemented a binary tree (not a binary search tree, just a plain binary tree where the tree topology is used to guide a space partitioning, much like in a k-d tree) in geometry nodes, and it is a bit painful.
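For context, here is a plain-Python sketch of that kind of structure (tree topology splitting space along alternating axes, k-d-tree style); it is only an illustration of the concept, not the geometry-nodes implementation itself:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Plain binary tree whose topology partitions a 2D region, k-d-tree style:
# each level alternates the split axis; the leaves are the final cells.
@dataclass
class Node:
    bounds: Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(bounds, depth):
    node = Node(bounds)
    if depth == 0:
        return node
    xmin, ymin, xmax, ymax = bounds
    if depth % 2 == 0:  # split along x on even depths, along y on odd ones
        mid = (xmin + xmax) / 2
        node.left = build((xmin, ymin, mid, ymax), depth - 1)
        node.right = build((mid, ymin, xmax, ymax), depth - 1)
    else:
        mid = (ymin + ymax) / 2
        node.left = build((xmin, ymin, xmax, mid), depth - 1)
        node.right = build((xmin, mid, xmax, ymax), depth - 1)
    return node

def leaves(node):
    if node.left is None:  # leaf: no children
        yield node.bounds
    else:
        yield from leaves(node.left)
        yield from leaves(node.right)

tree = build((0.0, 0.0, 1.0, 1.0), depth=4)
print(len(list(leaves(tree))))  # 16 cells
```

In code this is a couple of recursive functions; in geometry nodes, with no recursion and no scoped variables, the same structure has to be unrolled by hand.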

And such things cannot necessarily be divided logically all that much, especially in the absence of such a concept as (scoped) variables.
Conceptually

  • build the binary tree
  • fill in data from an outside source, and
  • use the tree to extract the data again + additional inferred meaning

are three consecutive steps here, and the evaluation goes through them in this order, yet encapsulating in node groups isn’t all that practical for several reasons:

  • some operations will e.g. only work correctly if certain settings are consistent between the above-mentioned stages, and some settings (e.g. Domain) cannot (yet) be exposed
  • things like attribute names can be exposed, but it's finicky (e.g. String is not really a proper datatype yet; it lacks spreadsheet support, for example)
  • groups hide and abstract things away, but that's not always desirable. Debugging is more convoluted when you need to toggle into a group first, and treating a group as a high-level black box may make it harder to understand what inputs it requires and what outputs to expect.
  • you have very little in the way of packing data for easy passing around between functions. In a regular programming language, if I need to pass a number of variables (of potentially different types) from one function to another, I wrap them into some compound datatype like a struct in C/C++ or a tuple in Python, pass that as a single variable, and unpack its contents inside the 'receiving' function (see the sketch after this list). Blender's node UI, on the other hand, doesn't even allow passing an OSL struct as such from one Script node to another. So to pass twenty different variables from one node group to another, you might have to wire 20 output sockets into 20 input sockets.
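A minimal Python sketch of that last point: bundle related values into one compound object and pass a single thing between functions (the names here, such as PatchSettings, are purely illustrative, not anything Blender exposes):

```python
from dataclasses import dataclass

# Bundle related parameters into one object so only a single "socket"
# has to be passed between functions. All names are illustrative.
@dataclass
class PatchSettings:
    size: float
    seed: int
    attribute_name: str
    blend: float

def make_settings() -> PatchSettings:
    return PatchSettings(size=0.25, seed=42, attribute_name="patch_id", blend=0.1)

def apply(settings: PatchSettings) -> None:
    print(settings.attribute_name, settings.size)

apply(make_settings())  # one value passed; in nodes this would be N separate wires
```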

With all that said, I’m not complaining at all, and for me personally, a scripting language as discussed here, or even a simple expression-node are pretty far down on my wishlist.
I do use Frame nodes a lot and give them descriptive names (like they were code-comments) though. And reroutes. So many reroutes.

Anyway, I'm not sure whether the BF plans to have some explicit 'language' (something like a VEX equivalent) eventually, and if they do, it's a long way off.
I understand the frustrations though. As I see it, the nodes have their strong point in how effortless it is to use them to prototype stuff and get immediate visual feedback etc., but there might be a point where an actual programming language serves you better.

greetings, Kologe

1 Like

While I agree that a script equivalent of geometry/shader nodes would be much more concise for certain cases, and I wish for it to happen, working with nodes is largely a matter of getting used to it.

Commenting your node network the same way you'd comment your code makes it easier too. Networks made of hundreds or thousands of nodes don't pose a problem if they're commented, named, collapsed into groups or contained in frames that can be named and colored too. They can be very easy to parse.

At some point, Jacques mentioned the prospect of a scripting language specific to geometry nodes (at least a year or two ago). But that was meant as a distant consideration, and I don't think it'll happen soon, if ever.

edit: I commented a bit fast, I see all this has been covered already

1 Like

Absolutely. Realistically at best they would probably only support some subset of the shader language. And implementing a cross compiler would be no small feat.

Any code logic flow, be it a visual graph or text-based, risks becoming 'spaghetti' if the operator/user doesn't know how to structure things well and is creating a mess. With respect to visual graph-based programming systems, good utilities such as sub-graphs and good interface tools are a must (IMO), including of course good debugging tools and means of flow analysis.

But I do think a well-implemented graph system can be as good as a text-based one.

However, different users will have different preferences, and someone already well versed in text-based programming will likely always prefer that over learning a whole new interfacing system.

And every visual logic-flow system will always be bespoke and something completely new to learn anyway. Just take game engines with custom scripting languages.

There's always someone who will argue for ditching it in favor of a more widely known language like C# or what have you, because they already know that language and don't really want to learn a new one.

17 posts were split to a new topic: On horizontal-only node networks

This is news to me- how do you do this?

1 Like

(screenshot posted as the answer)

8 Likes

Best-case scenario, in simple terms: if you are interested in Cycles only, your most logical approach would be to use OSL.
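For reference, hooking an external OSL file into a Cycles material via the Script node can itself be driven from Python; a minimal sketch, where the .osl path is a placeholder:

```python
import bpy

# Enable Open Shading Language in Cycles and load an external .osl file
# through a Script node. "//my_shader.osl" is a placeholder path.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.shading_system = True  # enable OSL

mat = bpy.data.materials.new("OSL_Material")
mat.use_nodes = True

script = mat.node_tree.nodes.new("ShaderNodeScript")
script.mode = 'EXTERNAL'
script.filepath = "//my_shader.osl"  # sockets are generated from the shader's parameters
```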

At some point I was very interested in node-graph/scripting hybrid approaches as well. Here I did a very simple experiment, as you can see, and it worked really well! I think that style of programming has unlimited potential to combine the best parts of node graphs and scripting at the same time.

Cycles Boolean Node (Not Shader)

The real bummer in this case was that, at that point in time, Cycles rendering was not efficient, so I abandoned the idea rather than invest more time in it (I was only interested in rasterization back then).

Now I think that Cycles is starting to become top-notch again, since it has had lots of improvements and innovations compared to 10 years ago. You certainly get better hardware with more raw power. Then you have denoising, which is a huge deal as well. Then there are the great innovations of the Cycles-X project, which is far faster and more efficient due to a complete refactor of the architecture. All of these factors are enough to make Cycles my primary rendering choice once again.


While I still like EEVEE and am still interested in it, I can't deny that the lack of programmability is a huge bummer. There can be lots of techniques to get the job done through third-party means, but it essentially amounts to duct-tape engineering, which is not exactly the best way.

1 Like