How to make the textures in Geometry Nodes look less ugly?

The better question would be:
How to make the textures in Geometry Nodes look as high quality as textures in Shader Nodes?


The textures in Geometry Nodes look like they are displayed on vertices instead of faces, as in Shader Nodes (even if you choose to display the textures on faces, they still look just as bad):


With the same set of nodes, it looks way cleaner in Shader Nodes.


If you had to use too many nodes to create a texture in Geometry Nodes, you wouldn't want to copy the same set of nodes and paste them into Shader Nodes; that's when you store the attributes and reuse them in Shader Nodes.
But then you would think that once the textures get into Shader Nodes, they would look better. BUT NO, reusing the attributes makes the textures in Shader Nodes look the same as the ones from Geometry Nodes!
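Here is roughly the workflow I mean, as a minimal Python sketch (it assumes the active object already has a Geometry Nodes modifier named "GeometryNodes" and a node-based material; the attribute name "gn_height" is just a placeholder):

```python
import bpy

obj = bpy.context.object

# Geometry Nodes side: store the procedural result as a named attribute.
gn_tree = obj.modifiers["GeometryNodes"].node_group
store = gn_tree.nodes.new("GeometryNodeStoreNamedAttribute")
store.data_type = 'FLOAT'
store.domain = 'POINT'                            # one value per vertex
store.inputs["Name"].default_value = "gn_height"
# Wire the geometry through this node and plug the texture chain into "Value".

# Shader side: read the stored attribute back with an Attribute node.
mat = obj.active_material
attr = mat.node_tree.nodes.new("ShaderNodeAttribute")
attr.attribute_type = 'GEOMETRY'
attr.attribute_name = "gn_height"
# attr.outputs["Fac"] now carries the per-vertex value, interpolated across
# each face - which is exactly why it still looks as blocky as it did in GN.
```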


Is there any way to deal with this problem, besides having to subdivide the mesh 10 times?
Thank you!

GN Textures don't 'get' into Shader Nodes!
They are used to store attributes in specific domains (verts, faces, etc.), which can be used in Shaders as a source of data. Face data will have one value per face, while vertex data will be interpolated over each triangle.

The Shader function is called for every sample/pixel being rendered, which means that a single triangle may be sampled a thousand times. In GN, you cannot store that much information in a single triangle without subdividing it until the resulting triangles occupy less than a pixel.
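To put rough, made-up numbers on it:

```python
# Purely illustrative figures: how often a triangle gets shaded vs how much
# data its vertices can hold.
pixels_covered_by_triangle = 50 * 50     # the triangle fills a 50x50 pixel patch
samples_per_pixel = 128                  # render samples

shader_evaluations = pixels_covered_by_triangle * samples_per_pixel
stored_values = 3                        # one value per vertex of the triangle

print(shader_evaluations, "shader evaluations vs", stored_values, "stored values")
# 320000 shader evaluations vs 3 stored values
```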

What you can do is create a Shader function that uses the data from GN to produce a texture… OSL would be the best choice for this, as you can access vertex-corner data directly instead of the interpolated values from SVM; but still, it's up to the Shader to figure out how to produce a result for every sample.
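As a rough sketch of that idea (assuming Cycles with Open Shading Language enabled; the shader itself and the attribute name "gn_height" are only illustrative):

```python
import bpy

# OSL source: read a named attribute stored by GN and turn it into a color.
osl_src = """
shader gn_attribute_texture(output color Col = color(0.0))
{
    float h = 0.0;
    // getattribute() looks up geometry attributes at the shading point
    if (getattribute("gn_height", h))
        Col = color(h, h, h);
}
"""

text = bpy.data.texts.new("gn_attribute_texture.osl")
text.write(osl_src)

mat = bpy.context.object.active_material
nodes = mat.node_tree.nodes
script = nodes.new("ShaderNodeScript")
script.mode = 'INTERNAL'
script.script = text   # compiles once OSL shading is enabled in Cycles

out_socket = script.outputs.get("Col")   # the socket appears after compilation
if out_socket:
    mat.node_tree.links.new(out_socket,
                            nodes["Principled BSDF"].inputs["Base Color"])
```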


Secrop is right - or, to say it more simply: Geometry Nodes is for geometry (vertices/edges/faces), Shader Nodes are for textures. If you want the same resolution in GN as you have in Shader Nodes - yes, you would have to subdivide it like hell, and your computer will thank you for that with burning fire.

So if you wanna do shading - keep it in shader nodes.

Or maybe you wanna tell us what your "end goal" is, instead of asking about this technical detail? Maybe someone will come up with an even better solution than you can think of now.


Thank you!

I have no end goal; I'm just looking for a way to reuse Geometry Nodes' textures in Shader Nodes, but somehow make them higher resolution.

Are you suggesting that, if I wanted the same textures from Geometry Nodes to appear in Shader Nodes, I should recreate the node chain, or copy-paste it?

That would be inconvenient to do, in my opinion.
Assume that I used a thousand nodes in Geometry Nodes to make my own custom height map to procedurally model a bird, and then I want that same height map in Shader Nodes to paint some textures/materials on the bird.
That means I'd have to copy-paste those thousand nodes from Geometry Nodes into Shader Nodes. And if I wanted to adjust something in the model, I'd have to copy-paste again, and rewire the nodes, again.


Or, how about doing it the other way around?
Can we store the attribute in Shader Nodes (using something similar to the Store Named Attribute node - do we even have those kinds of nodes in Shader Nodes?), and then use that attribute data to displace?

I'm considering this approach because textures always look high resolution in Shader Nodes, so the reused textures in Geometry Nodes would also look high resolution - instead of having low-resolution textures in Geometry Nodes and then bringing that low resolution into Shader Nodes.

Is it possible?

No, but you can use the Displacement node in the Shader Editor to displace geometry.
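Something like this, as a minimal sketch (assuming Cycles and a node-based material on the active object; the Noise Texture is just a stand-in for whatever height signal you build in the shader):

```python
import bpy

mat = bpy.context.object.active_material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Build a height signal in the shader and feed it into a Displacement node.
noise = nodes.new("ShaderNodeTexNoise")
disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.1

links.new(noise.outputs["Fac"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"],
          nodes["Material Output"].inputs["Displacement"])

# For true, geometry-moving displacement (not just bump shading), the material's
# displacement method must also be set to 'DISPLACEMENT' or 'BOTH'
# (mat.cycles.displacement_method in older builds, mat.displacement_method in newer ones).
```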


I think you're overthinking it.

In general, your GN should be able to define geometry and its data, while your shader is supposed to add texture details to the geometry's surface.

Let's think about your 'bird' example:
The GN would create an instance of a feather and populate the bird's body with feathers. It would also store some data in the feathers' geometry, like 'length' and 'width' for each instance, and also a 'tangent' vector and a 'uv' coordinate for the barbs' orientation and location, in the vertex-corner domain.

The Shader would then take those values to produce a texture that fits each feather accordingly. There you can, for example, use the 'length' to change between different sets of colors, the 'tangent' for some anisotropic effects, and the 'uv' for some more advanced mapping.

Note that both systems (GN and SN) use the same algorithms for the Textures… which means that using the same coordinate in both systems will result in the same output. This makes things a bit easier, as you can store the coordinate system from GN and use it in SN. But that's not passing the texture itself; just a couple of its parameters.
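A small sketch of that last point - store the coordinate in GN and feed it to the same texture node in the shader (the attribute name "gn_coord", the modifier name and the noise settings are all placeholders):

```python
import bpy

obj = bpy.context.object

# GN side: store the vector you feed into the GN Noise Texture as an attribute.
gn_tree = obj.modifiers["GeometryNodes"].node_group
store = gn_tree.nodes.new("GeometryNodeStoreNamedAttribute")
store.data_type = 'FLOAT_VECTOR'
store.domain = 'POINT'
store.inputs["Name"].default_value = "gn_coord"
# Plug your custom mapping into "Value" and pass the geometry through the node.

# Shader side: read the coordinate and evaluate the same Noise Texture with it.
mat = obj.active_material
snodes = mat.node_tree.nodes
attr = snodes.new("ShaderNodeAttribute")
attr.attribute_type = 'GEOMETRY'
attr.attribute_name = "gn_coord"

noise = snodes.new("ShaderNodeTexNoise")
noise.inputs["Scale"].default_value = 5.0   # keep Scale/Detail identical to the GN node
mat.node_tree.links.new(attr.outputs["Vector"], noise.inputs["Vector"])
# noise.outputs["Fac"] now matches the GN result, but evaluated per shading sample.
```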


Thank you.
I actually did Procedural Modelling in Shader Nodes for about 1-2 years before learning about Geometry Nodes.

I want to step away from Shader Nodes and explore Geometry Nodes for a while.

Yeah, I actually don't think I'm overthinking it. I just simply want the textures generated in Geometry Nodes to have higher resolution, without subdividing the mesh too many times.

I'm sorry for giving such a specific example with the bird; I should have generalized it by saying "any shape".

I've just done some quick random Procedural Modelling, and the textures look pretty high resolution in Geometry Nodes, which is the result I've been pursuing, and the textures still look good when reused in Shader Nodes.
The mesh was subdivided 5 times, which is low in my opinion:

So I think that, as long as the textures are gradients, it is easier to hide the low-resolution appearance.

The problem here is that you need to make the discretization of the surface fine enough to keep the detail you want (a.k.a. subdivision).

All this is not magic, and it's how 3D graphics have been doing things since forever.

For example, many people who work in geology are used to working specifically in the vertex domain to store all types of attributes, mainly because it's practical for most of the calculations a geologist needs to perform on the data. Their meshes are normally gigabytes in size, and they rarely use texturing of any kind.
For graphics, and especially animation, deforming millions of vertices with bones can become very expensive, so the alternative was to use a coarser discretization and use textures to hide the lack of geometric detail.

You have to choose which of these areas you want to rely on. Both have pros and cons, and ways to bridge them together… But not at all as you are imagining it.
