Context:
I’m working on creating a high-resolution animated contact map for a low-poly mesh in Geometry Nodes.
To achieve this, I:
1- Generated a high-poly version of my low-poly mesh.
2- Used a Store Named Attribute node within a simulation zone to calculate the contact map on the high-poly geometry.
Now, I’m looking for a way to transfer the high-resolution attribute data from the high-poly mesh to the low-poly one, while preserving the high resolution of the attribute. My ultimate goal is to use this data as an animated texture in the Shader Editor.
The attribute is tied to the points of the geometry. Simply transferring the attribute from the high-poly mesh to the low-poly one doesn’t work, because the low-poly geometry doesn’t have enough vertices to capture the details.
Possible Solution:
One idea is to bake and export the attribute as a sequence of 2D texture images, then re-import them into the Shader Editor. However, this doesn’t allow real-time preview, and any change to the animation/simulation would require re-baking, so the workflow would no longer be non-destructive.
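To make the idea concrete, here is a minimal stand-in for that baking step, written in plain NumPy rather than bpy: it splats per-vertex attribute values into a UV-space image, one frame of which could then be saved to disk. The function name, the splatting-by-nearest-pixel approach, and the data are all illustrative assumptions, not the actual bake pipeline.

```python
import numpy as np

def bake_attribute_to_image(uvs, values, size=64):
    """Splat per-vertex attribute values into a UV-space image (a sketch).

    uvs:    (N, 2) array of UV coordinates in [0, 1)
    values: (N,) array of per-vertex attribute values
    size:   resolution of the square output image
    """
    img = np.zeros((size, size), dtype=np.float64)
    weight = np.zeros((size, size), dtype=np.float64)
    # Map each UV coordinate to a pixel index and accumulate the value there.
    px = np.clip((uvs * size).astype(int), 0, size - 1)
    for (u, v), val in zip(px, values):
        img[v, u] += val
        weight[v, u] += 1.0
    # Average where several vertices land on the same pixel.
    mask = weight > 0
    img[mask] /= weight[mask]
    return img

# Hypothetical example: four corner vertices of a quad,
# with a contact value of 1.0 at one corner.
uvs = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9]])
vals = np.array([1.0, 0.0, 0.0, 0.0])
image = bake_attribute_to_image(uvs, vals, size=8)
```

A real bake would also interpolate across the faces between vertices (which is what makes the image resolution independent of the mesh density), but even this crude version shows why the result becomes an external asset that has to be regenerated after every change.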
Question: Do you have any suggestions for this workflow, or alternative methods (e.g., transferring the data to an independent domain such as UV coordinates)?
Thank you for your suggestion regarding Dynamic Paint. Besides scripting, is this the only viable solution?
I’m wondering if anyone has ideas for more procedural and “non-destructive” methods that can be managed entirely within Geometry Nodes (e.g., transferring the data to an independent domain such as UV coordinates, where it can be used and edited in real time).
Achieving high-quality results with Dynamic Paint requires baking image sequences. These are external assets, and managing them adds a bit of complexity to my non-destructive workflow.
It would indeed be interesting to analyze how Dynamic Paint works, particularly how it writes this data into images, and attempt to replicate its functionality within Geometry Nodes, even if this might require scripting or extensive node setups, which could impact performance.
Apologies if this seems overly specific.
I’d love to know if anyone knows how to emulate this functionality procedurally, or has alternative methods to keep the attribute data flexible and editable directly within Geometry Nodes.
Attributes are stored on mesh/curve elements, so their resolution can only be as high as your mesh is dense. You stated the constraints of the problem yourself: it’s either a high-density mesh, or textures, whose size is independent of the mesh resolution.
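To see why the mesh density is a hard ceiling, here is a small numeric illustration (a 1D stand-in for per-vertex contact values, not anything Blender-specific): the same sharp contact feature sampled on a dense mesh survives, while on a sparse mesh it falls between the vertices and disappears.

```python
import numpy as np

def sample_signal(n_points):
    # Hypothetical 1D stand-in for per-vertex contact values:
    # a narrow Gaussian spike around x = 0.5, width ~0.01.
    x = np.linspace(0.0, 1.0, n_points)
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

dense = sample_signal(1000)   # "high-poly": the spike is captured
sparse = sample_signal(10)    # "low-poly": the spike is missed entirely
```

Here `dense.max()` is essentially 1.0, while `sparse.max()` is practically zero; no amount of attribute transfer can recover detail the sparse sampling never recorded. A texture sidesteps this because its pixel count is chosen independently of the vertex count.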
Perhaps we could think of something else if we expand the bounds of the problem a little bit. What are you trying to do, and why the low-poly constraint?
Thank you for your explanation! I understand that attributes are tied to the mesh resolution, and that textures can bypass this limitation.
However, one of the limitations for me is that these textures are created through baking, which makes it impossible to preview the high-resolution contact map before baking. This makes the workflow less flexible and harder to control in real time.
I’ve been considering looking into the source code of Dynamic Paint to understand how the images are generated. Perhaps this could inspire a way to replicate the process within Geometry Nodes, if it’s even feasible.
I apologize if my explanations seem unclear or if my request feels impractical. If no better solution exists, I’ll mark the first suggestion as the answer. But if anyone has other ideas, even experimental ones, I’d love to explore them!
I didn’t mention it earlier to avoid overcomplicating the post with unnecessary details, but the contact map is part of a slightly more complex rain system I’m creating in Geometry Nodes. I need to generate wetmaps in real time, allowing me to visualize the high-resolution contact map as it evolves, without the need for constant baking. This would improve both workflow and control.
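For reference, the per-frame logic of the wetmap itself is simple; here is a sketch of what one step of the simulation zone does, in plain NumPy rather than nodes. The `step_wetmap` name and the decay rate are illustrative assumptions; the actual node setup uses a Store Named Attribute node inside the simulation zone.

```python
import numpy as np

def step_wetmap(wetness, contact, decay=0.95):
    """One simulation step of a per-vertex wetmap (a sketch).

    Keeps the previous frame's wetness, dries it out a little,
    and refreshes it wherever there is new contact.
    The decay rate is an illustrative assumption.
    """
    return np.maximum(contact, wetness * decay)

# Hypothetical example: 5 vertices; vertex 2 is hit by rain on frame 0 only,
# then dries over the following 9 frames.
wetness = np.zeros(5)
contact_frames = [np.array([0.0, 0.0, 1.0, 0.0, 0.0])] + [np.zeros(5)] * 9
for contact in contact_frames:
    wetness = step_wetmap(wetness, contact)
```

After ten frames, vertex 2 has dried to `0.95 ** 9` of its initial wetness while the others stay dry. The hard part is not this update rule but storing its result at a resolution higher than the low-poly mesh provides, which is exactly the original question.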