Adaptive Cloth Simulation

How awesome would it be if a cloth simulation could run in a way similar to Unlimited Clay, where detail is subdivided on demand instead of being fixed at full resolution from the start, which ultimately wastes memory?

Collisions could theoretically be calculated faster too, couldn't they, since a coarser mesh means fewer points to test?

Interesting.

The question is: if this could work, why aren't they using it in production simulations? I guess if it really were faster, someone would have already built software that does adaptive cloth.

Not trying to be negative, but I'm always looking at what WETA and ILM are doing.

Well, you don't know where to subdivide when you don't yet have the details on the cloth.

To know when and where the detail is going to appear, you'd already need to have that detail in the cloth geometry.

It's possible, I think, if you think about it in a voxel way.

The problem would be: how will the cloth know when to subdivide and when not to?
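For what it's worth, a criterion like that doesn't seem impossible. Here's a minimal sketch (not anything Blender or any production solver actually does) of a dihedral-angle test: wherever two neighbouring triangles bend sharply relative to each other, the cloth is starting to fold there, and the edge between them becomes a refinement candidate. The mesh layout, helper names, and the 30-degree threshold are all made up for illustration.

```python
import numpy as np

def face_normal(verts, face):
    """Unit normal of a triangle given as three vertex indices."""
    a, b, c = verts[face[0]], verts[face[1]], verts[face[2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def edges_to_refine(verts, faces, max_angle_deg=30.0):
    """Return edges whose dihedral angle exceeds max_angle_deg."""
    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for fi, face in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((face[i], face[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(fi)

    cos_limit = np.cos(np.radians(max_angle_deg))
    flagged = []
    for edge, fs in edge_faces.items():
        if len(fs) != 2:
            continue  # boundary edge, no dihedral angle to measure
        n0 = face_normal(verts, faces[fs[0]])
        n1 = face_normal(verts, faces[fs[1]])
        if np.dot(n0, n1) < cos_limit:  # sharp fold between the two faces
            flagged.append(edge)
    return flagged
```

The catch, as pointed out above, is that a coarse mesh may never bend sharply enough to trip the threshold in the first place, so a test like this only catches folds that have already begun to form.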

Subdividing on collision probably wouldn't be that much of a problem, but if a dress needs better-quality folds in a turn or similar, it would be very hard to detect where and when to subdivide.

Another problem that would naturally occur is cloth simulations changing on replay because of the dynamic subdivisions: once more subdivisions are added, the cloth can move in ways that weren't possible before, which could change the result from run to run. To fix that you could simply bake the animation as it simulates, but if artifacts show up you'd have to scrap everything and redo it all with the mesh you originally started with, or you'd be back to the problem of changing simulations.
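To illustrate why the collision half really is the easier part: a toy version could just pad the collider with a margin and flag any face whose vertices enter that padded shell, so those faces get split shortly before contact. The sphere collider and the margin value below are stand-ins for illustration, not any real solver's API.

```python
import numpy as np

def faces_near_sphere(verts, faces, center, radius, margin=0.05):
    """Flag faces with any vertex inside the collider's padded shell."""
    d = np.linalg.norm(verts - center, axis=1)   # per-vertex distance to center
    close = d < (radius + margin)                # True inside the padded sphere
    return [fi for fi, face in enumerate(faces)
            if close[np.asarray(face)].any()]
```

Refining just before contact, rather than at the moment of contact, would also avoid changing the mesh topology while the solver is already resolving a collision.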

I think it would be way too complex to handle right now, but maybe in the future some smart person will come up with a way to do it without too much trouble.