Yes! You can load images in geometry nodes (just like you would use cloud textures inside GN), and the conversion to a weight map happens pretty much naturally.
Basically, instead of being rendered by a shader/material, in GN every texture feeds values to points, edges, or faces.
The more vertices you have in your mesh, the more precise a representation you'll get of your image.
You can think of vertices as if they were pixels: if you have a 10x10 grid, you'll only have 100 "pixels"/points to represent your texture. Even if the image is 4096x4096, it's as if your image were 10x10 pixels inside GN.
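To make that concrete, here's a small standalone Python sketch (outside Blender, just to illustrate the idea): a 4096x4096 image sampled at a 10x10 vertex grid, where each vertex only sees its single nearest pixel. The gradient "image" is a made-up stand-in.

```python
# Stand-in image: a function returning a grayscale value (0..1) per pixel,
# so we don't have to materialize 4096x4096 values.
W = H = 4096
def pixel(x, y):
    # hypothetical gradient image: dark on the left, bright on the right
    return x / (W - 1)

# A 10x10 vertex grid with UVs spread over [0, 1]; each vertex samples
# the single nearest pixel (nearest-neighbour lookup).
n = 10
samples = []
for j in range(n):
    for i in range(n):
        u, v = i / (n - 1), j / (n - 1)
        px = round(u * (W - 1))
        py = round(v * (H - 1))
        samples.append(pixel(px, py))

# Out of ~16.7 million pixels, only 100 values reach the geometry:
print(len(samples))  # 100
```

However fine the source image is, the mesh can only "see" as many values as it has vertices, which is why subdividing the mesh sharpens the result.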
The density is based on the grayscale values of your image.
It goes from 0 (black) to 1 (white); you can multiply that by 50 if, for instance, that's a better density for your case, or by 0.1 if that's more appropriate.
What you're doing here is multiplying it by a vertex group, which is fine if, say, you want to paint some areas.
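In spirit, the density the distribution ends up seeing is just a per-point product of those three values. A hedged sketch (the function name is mine, not an actual node name):

```python
def point_density(gray, multiplier, vg_weight):
    """Density fed to point distribution: image value (0..1, black..white),
    scaled by a user multiplier, masked by a painted vertex-group weight."""
    return gray * multiplier * vg_weight

# White pixel, multiplier of 50, fully painted vertex -> full density:
print(point_density(1.0, 50.0, 1.0))  # 50.0
# Same pixel where the vertex group is painted to 0 -> nothing spawns:
print(point_density(1.0, 50.0, 0.0))  # 0.0
# Mid-gray with a 0.1 multiplier for sparse scattering:
print(point_density(0.5, 0.1, 1.0))  # 0.05
```

So the vertex group acts as a mask on top of the image: anywhere it's painted to zero, the image value is killed no matter how bright it is.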
And lastly, pay attention to how the image is mapped; it might not be the same as in the shader editor.
Try plugging a UV map into the Vector input of the image texture node!
The reference you want to achieve gives me strong Tsutomu Nihei vibes.
If you don’t know him, look him up - he has a great unique style and his architecture is mind-blowing.
Unfortunately my brain can't come up with in-depth mathematical solutions to your problem, but I've been thinking there may be a different approach.
You're looking at it from the perspective of addition; I'm seeing it from the perspective of subtraction.
Rather than building the shape, maybe you can build the negative space and then use booleans to carve it out.
The other thing that springs to mind is a layered texture approach, you could create black and white textures with procedural means, then use that one (grease pencil?) filter that turns images into splines/curves which you then extrude and make 3D. A couple of these layered together should do the trick.
A strictly texture-driven method should work too: you can create high-resolution displacement maps (or displace the mesh directly), then voxel-remesh and decimate the mesh until it is manageable.
If I were in your shoes, I would probably spend some time trying them all and then mix and match (you never know if one of these techniques gives an interesting happy accident).
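The displace step itself boils down to pushing each vertex along its normal by the map value. A minimal sketch of just that step (the remesh/decimate passes would happen in Blender afterwards; names and the strength parameter are mine):

```python
def displace(vertex, normal, height, strength=1.0):
    """Move a vertex along its (unit) normal by a height-map value (0..1),
    scaled by a strength factor."""
    x, y, z = vertex
    nx, ny, nz = normal
    h = height * strength
    return (x + nx * h, y + ny * h, z + nz * h)

# A vertex on a flat, upward-facing surface with a mid-gray height sample:
print(displace((1.0, 2.0, 0.0), (0.0, 0.0, 1.0), 0.5))  # (1.0, 2.0, 0.5)
```

This is why the displaced mesh needs a lot of resolution first: just like the weight-map case, you only get one offset per vertex, so detail finer than the vertex spacing is lost.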
Not necessarily. Shader graph/compositing works too.
I have to admit I am a little out of the loop when it comes to GN.
You could also use GN as an intermediate step (distribute shapes quickly), then render them with a white emission material on a black background, turning them into a B/W image.
You could just change the inputs in your GN node system and render out some variations, then add/multiply the results together.
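Combining those rendered B/W variations is then plain per-pixel math: add to merge the shapes, multiply to keep only their overlap, clamping to stay in the 0..1 range. A small sketch on flat pixel lists (assuming grayscale values in 0..1; the helper is mine):

```python
def combine(mask_a, mask_b, mode="add"):
    """Per-pixel add (union-ish) or multiply (intersection-ish) of two
    grayscale masks, clamped to the 0..1 range."""
    if mode == "add":
        return [min(1.0, a + b) for a, b in zip(mask_a, mask_b)]
    return [a * b for a, b in zip(mask_a, mask_b)]

a = [0.0, 0.5, 1.0, 1.0]
b = [0.0, 0.8, 0.2, 1.0]
print(combine(a, b, "add"))       # [0.0, 1.0, 1.0, 1.0]
print(combine(a, b, "multiply"))  # [0.0, 0.4, 0.2, 1.0]
```

The same math is what Mix nodes in Add/Multiply mode do in the compositor or shader editor, so you can do this combining there rather than in code.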
The filter I was speaking of is in the convert menu and is called "Trace Image to Grease Pencil" (you would have to convert the result further to curves/mesh).
The downside to all these methods is that they are kinda destructive and not purely procedural.