Thank you for your insight. I have an ignorant question for you regarding your second point.
Does the currently implemented Worley algorithm know which points are mapped to a surface? I know that it performs a check within a 3x3 kernel, but is the calculation performed for the whole domain, or only for the parts that are projected onto the mesh?
Currently, the calculation is based on hashing the coordinates of each grid cell.
Here’s a short step-by-step:
Let’s say we want to sample the point (4.15, 5.65, 7.89) (with the coordinates already scaled by the ‘scale’ factor). That point belongs to the grid cell (4, 5, 7), i.e. (floor(x), floor(y), floor(z)). If we hash those coordinates we can produce a pseudo-random xyz coordinate with each component in the range [0, 1[, and if we add this new coordinate to the grid cell’s origin, we get the feature point of that cell. Then we just calculate the distance from our sampling point to that cell point and store it. We repeat the process for all neighbouring cells and keep the lowest distance.
Since hashing produces the same output for a given input, every time a sample needs the cell point for (4, 5, 7) it gets the same random coordinate. So there’s no need to store anything, and we avoid memory lookups (which are quite expensive on GPUs).
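A minimal Python sketch of that hashed-cell scheme (the hash constants here are made up for illustration; they are not the hash Blender actually uses):

```python
import math

def hash3(ix, iy, iz):
    """Hypothetical hash: integer cell coords -> pseudo-random offset
    with each component in [0, 1[ (a stand-in, not Blender's hash)."""
    h = ((ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)) & 0xFFFFFFFF
    return (((h * 2654435761) & 0xFFFFFFFF) / 2**32,
            ((h * 2246822519) & 0xFFFFFFFF) / 2**32,
            ((h * 3266489917) & 0xFFFFFFFF) / 2**32)

def worley(x, y, z):
    """Distance to the nearest feature point, checking the 3x3x3 cell kernel."""
    cx, cy, cz = math.floor(x), math.floor(y), math.floor(z)
    best = float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                ix, iy, iz = cx + dx, cy + dy, cz + dz
                ox, oy, oz = hash3(ix, iy, iz)
                # Feature point = cell origin + hashed offset; recomputed
                # on the fly each time, so nothing needs to be stored.
                d = math.dist((x, y, z), (ix + ox, iy + oy, iz + oz))
                best = min(best, d)
    return best
```

Because `hash3` is deterministic, calling `worley` twice at the same point always returns the same distance, with no lookup tables involved.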
Boy, that’s really efficient design. Now I see why it is so hard to beat it in terms of performance.
And just for the sake of brainstorming, since I’m a theoretician right now:
Then, how about not touching the point data at all? Leave Worley’s random distribution of the final points intact, and the same goes for the proximity check at the end.
But dynamically change the kernel size and placement instead. The kernel data would need to be stored and calculated, but that could take much less time than calculating all the points. And the caches for it could be relatively small.
The simplest idea is to divide the kernel space not only into a grid, but into recursive levels:
The hardest part, I assume, would be implementing cross-checking between kernels of various sizes.
The kernel space data could be calculated on the basis of vertex weight or vertex color. Since verts have xyz coordinates, it could be translated to the kernel grid easily, and the weight/color value could determine the kernel level (size).
You should try it with OSL to see if it works… But I suspect that dealing with samples that are at Level 0 but have neighbours at higher levels might be a bit tricky (not to mention that it breaks parallelism a bit).
The node tree I posted above is a bit similar to this, with 4 levels of detail…
Basically the algorithm stays the same but is executed 4 times with different scales.
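As a rough sketch of that idea (toy Python; `worley` here is a simplified single-scale stand-in for the texture, and a plain average is used only to show the structure, how the levels actually get combined is a separate choice):

```python
import math

def hash01(ix, iy, iz, axis):
    # Made-up hash: integer cell coords -> pseudo-random value in [0, 1[.
    h = ((ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)
         ^ (axis * 2654435761)) & 0xFFFFFFFF
    return ((h * 2246822519) & 0xFFFFFFFF) / 2**32

def worley(x, y, z):
    # Single-scale Worley distance (3x3x3 neighbour check, as described above).
    cx, cy, cz = math.floor(x), math.floor(y), math.floor(z)
    best = float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                ix, iy, iz = cx + dx, cy + dy, cz + dz
                p = (ix + hash01(ix, iy, iz, 1),
                     iy + hash01(ix, iy, iz, 2),
                     iz + hash01(ix, iy, iz, 3))
                best = min(best, math.dist((x, y, z), p))
    return best

def worley_4_levels(x, y, z, scales=(1.0, 2.0, 4.0, 8.0)):
    # Same algorithm, executed once per scale (the scale values are arbitrary).
    return sum(worley(x * s, y * s, z * s) for s in scales) / len(scales)
```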
Just out of curiosity - what map ranges did you use on the node setup?
I’m playing with it right now, and the most interesting effect is when you multiply the voronoi distance by negative values.
Below is roughly the effect I’m after. Think natural moss growth patterns. But right now it’s unusable - this pattern only works when directly connected as the base color of the Principled BSDF. In any other node it gives pure black due to the negative values.
Both texture scaling and detail work like a charm. Using it as displacement gives trash though.
Each one converts a fraction of the input to the [0,1] interval.
Something like -> from[min:0.6, max:0.7] to [min:0.0, max:1.0].
In that case I used the following intervals ([0.6-0.7], [0.5-0.6], [0.4-0.5] and [0.3-0.4])…
Though these values might vary depending on the ‘Control Texture’ ranges (the Noise texture has its values around 0.5, so the fractions are near that).
Also, my ‘Add’ nodes are clamping the values (we only need the distances that are in [0, 1]). So I wouldn’t expect any results when multiplying by negative values.
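For reference, a clamped Map Range behaves roughly like this little Python sketch (the function name is mine, not Blender’s API; the bands are the intervals listed above):

```python
def map_range(v, from_min, from_max, to_min=0.0, to_max=1.0, clamp=True):
    """Linearly remap v from [from_min, from_max] to [to_min, to_max]."""
    t = (v - from_min) / (from_max - from_min)
    if clamp:
        # Anything outside the source band clamps to 0 or 1.
        t = min(max(t, 0.0), 1.0)
    return to_min + t * (to_max - to_min)

# The four bands from the setup above, applied to one control-texture sample:
bands = [(0.6, 0.7), (0.5, 0.6), (0.4, 0.5), (0.3, 0.4)]
masks = [map_range(0.55, lo, hi) for lo, hi in bands]
```

Each band converts its fraction of the input to [0, 1], so for an input near 0.5 only one or two of the four masks are in transition at any time.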
In your case, I can’t figure out why you get that result (no Blender atm)… I might need to check the source code of the MapRange node to see what it does when clamping a negative output (theoretically, it should be clamped to 0).
Well, it might not be true Voronoi, but this approach is dangerously close to a naturalistic effect. I’m changing the solution for this topic since your node setup is the fastest and most flexible approach. My apologies @Tarby
Transitioning between different Voronoi sizes is one step. The next is to do it between completely different noises/patterns!
If it works, that means building biome and natural-environment shaders could be automated very easily.
From my understanding the key is to properly interpolate between masks of different sizes and shapes.
I know this is an old thread, but after facing a similar problem and searching far and wide for a solution, I thought I’d post what I came up with, in case somebody else ends up here.
I completely abandoned the Voronoi texture and used the proximity of the point distribution in Blender 3.0’s geometry nodes to create (I’d say) the exact effect in the reference image. The downside is that your mesh has to be pretty high resolution for it to look smooth. I addressed this with a Multires modifier.
Since I don’t know what exactly you’re trying to achieve, this might not be the way for you, but for baking the texture it should work. Anyway, I hope this helps somehow!