The better question would be:
How to make the textures in Geometry Nodes look as high quality as textures in Shader Nodes?
The textures in Geometry Nodes look like they are displayed on vertices instead of on faces like in Shader Nodes (even if you chose to display the textures on faces, they would still look just as bad):
If you had to use a lot of nodes to create a texture in Geometry Nodes, you wouldn't want to copy the same set of nodes and paste them into Shader Nodes; that's when you store the attributes and reuse them in Shader Nodes.
But then you would think that once the textures get into Shader Nodes, they would look better, BUT NO, reusing the attributes makes the textures in the Shader Nodes look the same as the ones from Geometry Nodes!
GN textures don't "get" into Shader Nodes!
They are used to store attributes in specific domains (vertices, faces, etc.), which can be used in Shaders as a source of data. Face data will have a single value per face, while vertex data will be interpolated over each triangle.
The Shader function is supposed to be called for every sample/pixel being rendered, which means that a single triangle may be sampled a thousand times. In GN, you cannot store that much information in a single triangle without subdividing it until the resulting triangles each occupy less than a pixel.
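To make the interpolation point concrete, here's a minimal sketch in plain Python (no Blender API; the function and values are illustrative, not Blender's actual code). A renderer evaluates the shader at many samples inside one triangle, but a vertex-stored attribute can only be barycentrically interpolated between the three corner values, so it varies linearly across the whole triangle no matter how many samples are taken:

```python
# Why per-vertex data looks "low resolution": thousands of shading samples
# inside one triangle only ever see a linear blend of three stored values.

def interpolate(corner_values, u, v):
    """Barycentric interpolation of one attribute over a triangle.
    (u, v) are barycentric coordinates of the sample; w = 1 - u - v."""
    a, b, c = corner_values
    w = 1.0 - u - v
    return w * a + u * b + v * c

# One stored value per corner of the triangle (hypothetical height data):
height = (0.0, 1.0, 0.5)

# Hundreds of shading samples inside the triangle still only ever see
# a linear blend of those three numbers:
samples = [interpolate(height, u / 40, v / 40)
           for u in range(40) for v in range(40 - u)]

print(min(samples), max(samples))  # every sample stays between the corner values
```

No amount of sampling adds detail here; the only way to get more variation out of vertex data is more vertices, which is exactly the subdivision problem described above.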
What you can do is create a Shader function that uses the data from GN to produce a texture… OSL would be the best choice for this, as you can access vertex-corner data directly instead of the interpolated values from SVM; but still, it's up to the Shader to figure out how to produce a result for every sample.
Secrop is right - or to say it more simply: Geometry Nodes are for geometry (vertices/edges/faces); Shader Nodes are for textures. If you want to have the same resolution in GN as you have in Shader Nodes - yes, you would have to subdivide like hell, and your computer will thank you for that with burning fire.
So if you wanna do shading - keep it in shader nodes.
Or maybe you wanna tell us what your "end goal" is instead of asking about this technical detail? Maybe someone comes up with an even better solution than you can think of now.
I have no end goal, I'm simply looking for a way to reuse Geometry Nodes' textures in Shader Nodes, but somehow at a higher resolution.
Are you suggesting that, if I wanted the same textures from Geometry Nodes to appear in Shader Nodes, I should recreate the node chain, or copy-paste it?
That would be inconvenient to do, in my opinion.
Assume that I used a thousand nodes in Geometry Nodes to make my own custom height map to procedurally model a bird, and then I want to have that same height map in Shader Nodes to paint some textures/materials on the bird.
Then that means I'll have to copy-paste those thousand nodes from Geometry Nodes into Shader Nodes. And if I wanted to adjust something in the model, I'd have to copy-paste and rewire the nodes all over again.
Or, how about, can we do it inversely?
Can we store the attribute in Shader Nodes (using something similar to the Store Named Attribute node; do we even have those kinds of nodes in Shader Nodes?), and then use that attribute data to displace?
I'm considering this approach because the textures always look high resolution in Shader Nodes, so the textures reused in Geometry Nodes would also look high resolution. Instead of having low-resolution textures in Geometry Nodes and then bringing that low resolution into Shader Nodes.
In general, your GN should define the geometry and its data, while your shader is supposed to add texture details to the geometry's surface.
Let's think about your "bird" example:
The GN would create an instance of a feather and populate the bird's body with feathers. It would also store some data in the feathers' geometry, like a "length" and "width" for each instance, and also a "tangent" vector and a "uv" coordinate for the barbs' orientation and location, in the vertex-corner domain.
The Shader would then take those values to produce a texture that fits each feather accordingly. There you can, as an example, use the "length" to change between different sets of colors, the "tangent" for some anisotropic effects, and the "uv" for some more advanced mapping.
Note that both systems (GN and SN) use the same algorithms for the textures… which means that using the same coordinate in both systems will result in the same output. This makes things a bit easier, as you can store the coordinate system from GN and use it in SN. But that's not passing the texture itself; just a couple of its parameters.
Yeah, I actually don't think that I'm overthinking it. I simply want the textures generated in Geometry Nodes to have higher resolution, without subdividing the mesh too many times.
I'm sorry for making such a specific example with a bird; I should have generalized it by saying "any shape".
I've just done some quick random procedural modelling, and the textures look pretty high resolution in Geometry Nodes, which is the result I've been pursuing, and the textures still look good when reused in Shader Nodes.
The mesh was subdivided 5 times, which is low in my opinion:
So I think that, as long as the textures are gradients, it is easier to hide the low-resolution appearance.
The problem here is that you need to minimize the discretization of the surface (i.e. subdivide it) in order to keep the detail you want.
All this is not magic, and it's how 3D graphics have been doing things since forever.
For example, many people who work in geology are used to working specifically in the vertex domain to store all types of attributes, mainly because it's practical for most of the calculations a geologist needs to perform on the data. Their meshes are normally gigabytes in size, and they rarely use texturing of any kind.
For graphics, and especially for animation, deforming millions of vertices with bones can become very expensive, so the alternative was to use a coarser discretization and use textures to hide the lack of geometric detail.
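Some rough back-of-the-envelope arithmetic (assumed round numbers, not benchmarks) shows why storing detail in geometry gets expensive so quickly: each Catmull-Clark subdivision level roughly quadruples the face count, while a texture holds its detail at a fixed resolution.

```python
# Face count explodes by ~4x per subdivision level, starting from a cube.
base_faces = 6  # a default cube
for level in (2, 4, 6, 8, 10):
    faces = base_faces * 4 ** level
    print(f"subdivision level {level:2d}: ~{faces:,} faces")

# By comparison, a 4K texture stores its detail at a fixed cost:
texels = 4096 * 4096
print(f"4K texture: {texels:,} texels, regardless of the mesh density")
```

At level 10 the cube already has over six million faces, which is the "subdivide like hell" scenario mentioned earlier in the thread; a texture sidesteps that growth entirely.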
You have to choose which of these areas you want to rely on. Both have pros and cons, and there are ways to bridge them together… but not at all in the way you are imagining it.