# Geometry nodes depth map to distribute objects

After watching the tutorial Distribute Objects using Weight Paint - Geometry Nodes, I wanted to know if there is a way to turn a grayscale map into a weight map to distribute objects, like we can do with the brush system, but more precisely.

I’m trying to recreate something like this

Neil Blevins starship hull


Yes! You can load images in geometry nodes (just like you would use cloud textures inside GN), and the conversion to a weight map happens kind of naturally.
Basically, instead of a texture being rendered into a shader / material, in GN it feeds values to points, edges, or faces.

The more vertices you have in your mesh, the more precise a representation you'll get from your image.

You can think of vertices as if they were pixels: say you have a 10x10 grid, then you'll only have 100 "pixels" / points to represent your texture. Even if the image is 4096x4096, it's as if your image were 10x10 pixels inside GN.
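The vertex-as-pixel idea above can be sketched in plain Python (this is just an illustration of the sampling concept, not the actual GN evaluation; the function name and data are mine): each vertex reads a single value from the image, so a 10x10 grid captures only 100 samples no matter how detailed the source image is.

```python
def sample_image_at_vertices(image, grid_res):
    """Nearest-neighbour sample of a 2D grayscale image at grid vertices."""
    h = len(image)
    w = len(image[0])
    samples = []
    for j in range(grid_res):
        for i in range(grid_res):
            u = i / (grid_res - 1)  # vertex UV in [0, 1]
            v = j / (grid_res - 1)
            px = min(int(u * (w - 1) + 0.5), w - 1)
            py = min(int(v * (h - 1) + 0.5), h - 1)
            samples.append(image[py][px])
    return samples

# A 100x100 "image" with a fine checker pattern
image = [[(x // 2 + y // 2) % 2 for x in range(100)] for y in range(100)]
values = sample_image_at_vertices(image, 10)
print(len(values))  # 100 -- the mesh's effective "resolution"
```

However fine the checker pattern is, the 10x10 grid only ever sees 100 of its values, which is exactly the aliasing you get with a low-poly mesh in GN.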

Hope that helps clarify things a bit !

Good luck !

Ok thanks, understood. But how can I drive the density from a texture?

The density is based on the grayscale values of your image.
It goes from 0 (black) to 1 (white). You can multiply that if, for instance, 50 is a better density for your case, or multiply by 0.1 if that's more appropriate.

What you're doing here is multiplying it by a vertex group, which is fine if, say, you want to paint some areas.

And lastly, pay attention to how the image is mapped; it might not be the same as in the shader editor.

Try plugging a UV map into the Vector input of the image!
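The density logic described above is a simple per-point product; here is a minimal plain-Python sketch of it (the function name and default scale are mine, not part of the node setup): grayscale in [0, 1], times a scale factor, optionally masked by a painted vertex group weight.

```python
def point_density(gray_value, scale=50.0, vertex_weight=1.0):
    """Density value fed to a distribution node: grayscale sample in [0, 1],
    scaled up, and optionally masked by a vertex group weight."""
    return gray_value * scale * vertex_weight

print(point_density(0.0))                     # black  -> 0.0 (no points)
print(point_density(1.0))                     # white  -> 50.0 (full density)
print(point_density(0.5, vertex_weight=0.0))  # masked out by paint -> 0.0
```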


Ok thanks, I've set up a simpler image to see the effect. I'm starting to pick up the tips.

Ok, I'm starting to get a decent result here. I'll add a second layer of greebles.


The reference you want to achieve gives me strong Tsutomu Nihei vibes.
If you don’t know him, look him up - he has a great unique style and his architecture is mind-blowing.

Unfortunately my brain can't come up with in-depth mathematical solutions to your problem, but I've been thinking there may be a different approach.
You look at it from the perspective of addition; I see it from the perspective of subtraction.
Rather than building the shape, maybe you can build the negative space and then use booleans to carve it out.
The other thing that springs to mind is a layered texture approach: you could create black-and-white textures by procedural means, then use that one (Grease Pencil?) filter that turns images into splines/curves, which you then extrude and make 3D. A couple of these layered together should do the trick.

A strictly texture driven method should work too, you can create high resolution displacement maps (or displace the mesh directly) then voxel remesh and decimate the mesh until it is manageable.

If I were in your shoes, I would probably spend some time trying them all and then mix and match (you never know if one of these techniques gives an interesting happy accident).


Hi, you're right @Romanji, I've read all of Tsutomu Nihei; Blame! is a blast.

That’s an interesting approach

Within GN you mean?

Not necessarily. Shader graph / compositing works too.
I have to admit I'm a little out of the loop when it comes to GN.
You could also use GN as an intermediate (distribute shapes quickly), then render them with a white emission material against a black background, turning them into a B/W image.
You could just change the input in your GN node system and render out some variations, then add/multiply the results together.
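The add/multiply layering of those B/W renders is straightforward to sketch in plain Python (a toy illustration with tiny 2x2 "images", not the Blender compositor API; function name and data are mine):

```python
def combine(a, b, mode="add"):
    """Combine two grayscale layers pixel by pixel, clamped to [0, 1]."""
    ops = {"add": lambda x, y: min(x + y, 1.0),
           "multiply": lambda x, y: x * y}
    op = ops[mode]
    return [[op(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

layer1 = [[0.0, 1.0], [0.5, 0.5]]
layer2 = [[1.0, 1.0], [0.5, 0.0]]
print(combine(layer1, layer2, "add"))       # [[1.0, 1.0], [1.0, 0.5]]
print(combine(layer1, layer2, "multiply"))  # [[0.0, 1.0], [0.25, 0.0]]
```

Add grows the white (filled) regions of the combined map, while multiply keeps only areas that are bright in both layers, which is the usual way to mask one greeble pattern by another.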

The filter I was speaking of is in the Convert menu and is called "Trace Image to Grease Pencil" (you would have to convert the result further to curves/mesh).
The downside to all these methods is that they are kinda destructive and not purely procedural.

If you want to do that sort of thing within GN,

Here's a way to achieve it…with bitmaps.


Amazing! Looks like a good avenue to explore, mixing techniques like @Romanji said.