I need a way to sort a set of points based on their distance to the camera, in shader nodes. Imagine a node group that has 5 vector inputs and 5 vector outputs, where the output socket order matches how far each point is from the camera. How should I go about that?
My advice is to sort the vectors outside of the shader/render environment…
Sorting in shaders gets expensive very fast, and by doing it prior to render you'll be using your machine in a more efficient way…
Anyway, if you still want to do it in a shader, here’s the setup for just 6 vectors:
(note that a full sort is more complex than just picking the closest/furthest vector)
Odd-Even Sort Algorithm
And you could also try other GPU-compatible algorithms, but they will all be expensive, as they will be recalculated for every sample.
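If it helps to see what the node network is doing, here's a minimal sketch of odd-even (transposition) sort in plain Python; the camera position and the six points are made-up placeholders, and in the node group each compare-and-swap would be built from a pair of Minimum/Maximum (or Greater Than + Mix) nodes keyed on the distance:

```python
# Minimal sketch of the odd-even (transposition) sort the node setup emulates.
# The camera position and the six points are made-up placeholders.
from math import dist

camera = (0.0, -5.0, 2.0)
points = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (3.0, 1.0, 1.0),
          (-2.0, 4.0, 0.0), (0.5, -1.0, 2.0), (2.0, 2.0, 2.0)]

keyed = [(dist(p, camera), p) for p in points]    # distance to camera as the sort key
n = len(keyed)

for _ in range(n):                     # n alternating passes are enough for n items
    for start in (0, 1):               # even pairs, then odd pairs
        for i in range(start, n - 1, 2):
            if keyed[i][0] > keyed[i + 1][0]:     # one compare-and-swap "stage"
                keyed[i], keyed[i + 1] = keyed[i + 1], keyed[i]

sorted_points = [p for _, p in keyed]             # closest to farthest
print(sorted_points)
```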
Thanks, I wouldn't have figured it out on my own. Two questions though: will it be noticeably slow with just 10 instances, or will it take many more to have an impact? And will it be more performant to run it from a geometry nodes modifier?
It grows quadratically… I never tried this exact setup, but I use something similar in some of my shaders (though I just pick the top one or the two highest values, out of a maximum of 27), and it's relatively slow… The NVidia link says it's O(n^2), and you can look here to see what that means when you change the amount of items.
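To put rough numbers on that quadratic growth, here's a quick back-of-the-envelope count of compare-and-swap stages (my own illustration, not taken from the NVidia page):

```python
# Rough count of compare-and-swaps for odd-even transposition sort:
# n passes, each touching about n/2 pairs, i.e. on the order of n^2 / 2.
for n in (5, 10, 27, 100):
    compares = n * (n - 1) // 2
    print(f"{n:>4} items -> ~{compares} compare-and-swaps")
```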
But if you have the option to do the calculation a priori (if the vectors are the same everywhere), then it is stupid to do this in a shader… It would be the same calculation done N times (your render sample setting) for every pixel the object occupies on the screen, plus all indirect samples hitting that same object from everywhere else (unless you do some Light Path trickery).
Doing it beforehand means the calculation is executed once! The result is stored until some change in the scene invalidates it, and only then is the sorting done again. In the rendering part, the result is just some constant value shared by every sample execution… making it an O(1) operation!!!
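For instance, here's a minimal sketch of doing the sort once in Python and pushing the result into the material. The material name "Slime", the node group name "BillboardPoints" and the socket names "P0"…"P4" are hypothetical, you'd adapt them to your own setup:

```python
# Sketch: sort the points by camera distance once, outside the shader,
# and write the result into the node group's vector inputs.
# "Slime", "BillboardPoints" and "P0".."P4" are placeholder names.
import bpy
from mathutils import Vector

points = [Vector((0, 0, 0)), Vector((1, 2, 0)), Vector((-1, 3, 1)),
          Vector((2, -1, 0)), Vector((0, 4, 2))]           # your 5 points

cam = bpy.context.scene.camera.matrix_world.translation    # camera location
ordered = sorted(points, key=lambda p: (p - cam).length)   # closest first

group_node = bpy.data.materials["Slime"].node_tree.nodes["BillboardPoints"]
for i, p in enumerate(ordered):
    group_node.inputs[f"P{i}"].default_value = p           # feed the sorted vectors
```

If the camera moves during an animation, the same script could be re-run from a frame-change handler so the sockets stay up to date.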
What if I update the geonodes version every frame, for example by using an empty that follows the camera? (The effect needs to persist in animation. Basically, I need a bunch of points so the shader will draw billboard sprites on them. The ordering is needed so I can make those sprites occlude each other using their alpha and Mix nodes.) Is it as slow as executing in the shader? Better? Worse?
Sorting 10,000 elements just once takes more or less some hundreds of microseconds, or a few tenths of a millisecond. It happens very often, for example when sorting items in a list (i.e. the Outliner).
But if you do this for every sample in 1/10th of a 4K image (let's say your sprites occupy 1/10th of the screen; 3840 × 2160 = 8,294,400 pixels) using 1024 samples per pixel, you multiply those tenths of a millisecond by 849,346,560!!! (This is no longer a simple delay, it's an eternity!)
Edited: Of course, a GPU can run things in parallel, but even dividing that value by the 7680 cores of an RTX 4500, it's still too much.
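Just to make that arithmetic explicit, a back-of-the-envelope sketch with an assumed per-sort timing (illustration only, not a benchmark):

```python
# Back-of-the-envelope cost of redoing the sort for every sample.
# The 0.1 ms per-sort figure is an assumption for illustration only.
pixels_covered = 3840 * 2160 // 10        # sprites cover ~1/10 of a 4K frame
samples = 1024
sort_time_s = 0.1e-3                      # ~0.1 ms per sort (assumed)

total_sorts = pixels_covered * samples    # 849,346,560
serial_hours = total_sorts * sort_time_s / 3600
parallel_seconds = total_sorts * sort_time_s / 7680   # spread over 7680 GPU cores

print(f"{total_sorts:,} sorts; ~{serial_hours:.0f} h serial, "
      f"still ~{parallel_seconds:.0f} s per frame split across 7680 cores")
```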
That's wild o_0 By the way, do you know any reliable method to render object A on top of object B, even when their actual positions shouldn't allow it? The reason I thought about this billboard shader setup is for a slime-based character who'll have some features (such as bubbles) inside them, and I don't want them to have real transparency because it's buggy in Eevee.
Yup, View-Layers > Compositor!
It never misses.
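If it helps, here's a minimal scripted sketch of that idea: put the features on their own view layer and stack it over the main render with an Alpha Over node in the compositor. The view-layer name "Bubbles" is a placeholder, and the same thing can of course be built by hand in the Compositor editor:

```python
# Sketch: composite a second view layer on top of the main render,
# so the "bubbles" always draw over the slime body regardless of depth.
# The view-layer name "Bubbles" is a placeholder; create it and assign
# the bubble objects/collection to it first.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

base = tree.nodes.new("CompositorNodeRLayers")
base.layer = "ViewLayer"                      # the normal render

overlay = tree.nodes.new("CompositorNodeRLayers")
overlay.layer = "Bubbles"                     # view layer containing only the bubbles

mix = tree.nodes.new("CompositorNodeAlphaOver")
out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(base.outputs["Image"], mix.inputs[1])     # background
tree.links.new(overlay.outputs["Image"], mix.inputs[2])  # foreground, drawn on top
tree.links.new(mix.outputs["Image"], out.inputs["Image"])
```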