Is anyone working on AI and Retopology?

I have googled around and I can’t find much at all about using AI for Retopology.

I am NOT a programmer, but I do enjoy watching Python/TensorFlow tutorials. I just watched the State of AI talk with Andrew Ng; he mentions that AI is currently best suited for tasks a human can do with about a second of thought (such as driving).

Retopology seems perfect for this, so why is everyone working on image processing (denoising) with AI while no one seems to be working on retopology?

Just curious if anyone has any more info on this?

For denoising, the problem is fairly nicely constrained. There is a ‘ground truth’ - a correct output that the AI is aiming for - and you already know things like the output will be an image the same size as the input image. It’s relatively easy to create training data for an AI like this: all you need is a bunch of noisy renders, and then noise-free versions of the same renders (i.e. many more samples).
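To make that concrete, here is a minimal sketch of how such a training pair could be built. The function name and the Gaussian noise model are my own assumptions; in a real dataset the noisy input would be an actual low-sample render and the target a high-sample render of the same scene.

```python
import random

def make_denoising_pair(clean, noise_sigma=0.1, seed=0):
    """Simulate a (noisy, clean) training pair for a denoiser.

    Gaussian noise is only a stand-in here for real render noise;
    the clipped result stays in the valid [0, 1] pixel range.
    """
    rng = random.Random(seed)
    noisy = [min(1.0, max(0.0, p + rng.gauss(0.0, noise_sigma)))
             for p in clean]
    return noisy, clean

clean = [0.5] * 16                  # a flat grey 4x4 "render", flattened
noisy, target = make_denoising_pair(clean)
assert len(noisy) == len(target)    # output has the same size as the input
```

The key property the post describes holds by construction: input and output have identical shape, and the target is unambiguous.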

Retopology is a much less constrained problem. Some elements of the output will be subjective - which direction the edge flow should take, or what the density of the mesh should be at any given point, for example. It’s not immediately clear what would constitute a good set of training data.

That’s not to say that it wouldn’t be a worthwhile project, but denoising is more of a low-hanging fruit for machine learning.

3 Likes

I’m also eagerly looking forward to the first AI-driven retopology algorithm. If you look at the auto-retopology algorithms of ZBrush and 3D-Coat, an AI / Machine Learning approach should be able to take that a step further.

1 Like

I am a machine learning engineer and could build such an AI tool. I have some experience with Blender and retopo; in the past I’ve tried Instant Meshes and other auto-retopo tools, which aren’t fit to create animation-ready characters. But I am not familiar with the auto-retopo tools in ZBrush and 3D-Coat mentioned above. Before I (or someone else) starts building an AI-based retopo system, we would need to see whether the current best auto-retopo tools have any gaps that only AI algorithms could close. Is there any issue with the above tools that you know of, something that makes people avoid them and still do the retopo themselves for better quality? Or are the existing tools adequate enough already?

5 Likes

The weak points of both ZBrush ZRemesher and 3D-Coat Autopo are:

• They’re not very good at hard-surface retopology, especially when there are infinitely sharp edges present in the model, like after Boolean operations.

• Small details are often lost, mangled or cause topology deviations.

• Thin parts often result in retopo issues such as holes.

• 3D-Coat Autopo sometimes generates singularities where more than five edges converge, resulting in surface tension and subdivision artifacts.

• The topology flow is not always ideal. There’s often a polygon flow shift along concave and convex edges / ridges, causing slightly diagonal polygon flow where edge loops and poly loops should accurately follow creases.

An area that would be ideal for machine learning is recognition of shapes, such as faces, and applying an optimized retopology to that.

Have a look at this thread as well.

And here’s a great site about topology.

If you’d need a tester for a new tool, I’m interested. 🙂

1 Like

This is the closest anyone has gotten to AI retopology, and it was back in 2015. I imagine someone has made progress since then over at IGL.

1 Like

Honestly, the best solution I’ve found is to manually create good base topologies for your common work, then use R3DS to apply it to your sculpt.

1 Like

Up to now, manual retopo is the most efficient solution, yes, but my laziness screams for a fully automated tool. ZRemesher comes pretty close. It does the job sufficiently most of the time.

Maya has hidden auto retopo nodes

Not so impressive - it created wobbly surfaces in the demo objects. Nevertheless, it’s a good addition to Maya.

Which approach do you have in mind to tackle this problem?

I have been approached by a few people regarding this topic. When I thought about it, I could not come up with a plan on how to tackle the problem.

The two main difficulties I saw were these:

  • Getting ground truth data would be quite difficult and it does not have a unique representation. An option might be to have a loss function which could replace the ground truth data, but I am not sure how it would look.
  • Another difficulty would be the representation of the mesh. LSTMs could certainly be used, but they wouldn’t make the topic any simpler. It might be possible to somehow get another representation for the mesh to avoid the complexity of dealing with recurrent neural networks.
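On the first point, one hedged sketch of what a ground-truth-free loss term might look like: a symmetric chamfer distance that penalises the retopologised surface for drifting away from the original. The function below is my own illustration, not a proposal from the thread; it only measures shape fidelity and says nothing about edge flow or quad structure, which would need extra terms.

```python
import math

def chamfer_distance(a, b):
    """Symmetric chamfer distance between two 3D point sets.

    `a` could be points sampled on the retopologised mesh and `b`
    points sampled on the original; zero means the shapes coincide
    at the sampled points.
    """
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return (sum(nearest(p, b) for p in a) / len(a)
            + sum(nearest(q, a) for q in b) / len(b))

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
assert chamfer_distance(pts, pts) == 0.0   # identical sets: zero loss
```

Because it is differentiable almost everywhere, a term like this could slot into a neural training loop where no unique ground-truth mesh exists.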

Are there publications I have missed which tackle parts of those problems?

Yes, I have a plan, although I need a large database of humanoid models with good animation-ready retopo, because that’s what I’m aiming for. Rigid / non-character objects may be a simpler playground to start with, but I think humanoid / bio-organic characters are more relevant and the hardest to auto-retopo at the moment, due to their irregular surfaces and moving parts.

I have two options in mind for acquiring the database: one requires a partnership, the other requires funds. Both sound feasible, though. With a good database, I plan to tackle this in a fairly straightforward way (unlike the approaches mentioned above). If you have experience with ML coding or 3D graphics programming (e.g. C++, OpenGL, Python) and want to work with me on such a tool, message me in private and we can chat about ways to go!

1 Like

Well since this is MY thread, I expect to be an early beta tester and get the final version at a heavily discounted price. :wink:

PS: I have no clue how to code or do anything in ML except watch tutorials and walk away understanding only 10% of what was going on.

1 Like

This is a bit different: it looks more like an ‘AI-driven brush tool’ to me, because it can only optimize local neighbourhoods in the mesh rather than retopologizing an entire mesh, which would require learning both local and global patterns.

It’s a fascinating topic and I’m not aware of any more current research (though it has been a while since I went searching during my machine learning craze a year or so ago). Here are some rambling thoughts…

For ground truth to train on, I suspect you would need a lot of meshes that represent the kind of topology that you consider good and want the machine to learn to emulate. You can then probably make input and output pairs of before and after retopologizing by taking the good mesh and randomly triangulating it, or moving the vertices around, or whatever, to produce the “before” mesh to go with the desired output.
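A minimal sketch of that pairing idea, under my own assumptions about the data layout (vertices as coordinate tuples, quads as index 4-tuples): split each quad of the “good” mesh into triangles and jitter the vertices, producing the “before” mesh to go with the desired output.

```python
import random

def degrade_mesh(verts, quads, jitter=0.02, seed=0):
    """Turn a 'good' quad mesh into a plausible 'before' training input:
    split every quad into two triangles and jitter the vertex positions.
    The (degraded, original) pair then serves as one training example.
    """
    rng = random.Random(seed)
    tris = []
    for a, b, c, d in quads:
        tris += [(a, b, c), (a, c, d)]   # naive fan triangulation
    noisy = [tuple(x + rng.gauss(0.0, jitter) for x in v) for v in verts]
    return noisy, tris

# A single unit quad as the 'good' mesh.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
noisy, tris = degrade_mesh(verts, [(0, 1, 2, 3)])
assert len(tris) == 2 and len(noisy) == 4
```

Random re-triangulation (instead of this fixed split) and larger perturbations would give more varied “before” examples per good mesh.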

I guess it depends on whether you want to take the approach of improving an existing mesh in place, or trying to cover the surface with a new mesh without specific regard to the vertices that exist in the original.

Getting a lot of good meshes to use is of course problematic, and I think you might need a LOT of examples. Maybe you could generate them procedurally, but if you understand the problem that well, then maybe you can come up with a retopology solver without using machine learning.

For representing the mesh and feeding the problem into your network, it would be fun to think you could just feed in the whole mesh (somehow) and say “just learn how to do THIS to it”, but I’m not sure how you would actually do that as you say.

For improving existing topology I was thinking maybe you could pull out little sub-parts of the desired mesh, even as small as one polygon and everything connected to it by a few edges say, then take the points near that spatial location in the “before” mesh as the corresponding input. You can generate a lot more examples from each mesh that way, but a lot of topology decisions like where to put the edge loops kind of need to look at the big picture.
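The “one polygon and everything connected to it by a few edges” idea is essentially a k-ring neighbourhood. Here is a hedged sketch of extracting one, assuming the mesh connectivity is given as an adjacency dict (my own representation, not anything from the thread):

```python
from collections import deque

def k_ring(adjacency, start, k):
    """Collect all vertices within k edge hops of `start` via BFS.

    `adjacency` maps a vertex to its neighbours. A patch like this,
    paired with the nearby region of the 'before' mesh, could serve
    as one small local training example.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == k:
            continue                      # don't expand past k hops
        for n in adjacency[v]:
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return seen

# A simple path 0-1-2-3-4: one hop from vertex 2 reaches 1 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert k_ring(adj, 2, 1) == {1, 2, 3}
```

As the post notes, such patches capture local structure cheaply, but decisions like edge-loop placement need a global view that small patches cannot provide.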

Maybe you want to look at the overall shape and plan the polygon flow, and then fill in the actual polygons as a separate step (I imagine most automatic retopo tools do something like this).

I wonder if there’s a simpler 2D analog that you could start with (optimizing UV maps or something) that would be easier to get started on than the full 3D version.

1 Like

Those ideas sound like a good starting point for the data augmentation. An obvious addition would be to add some sort of subdivision. A little bit of scaling, stretching and bending might also help to make it more robust.

Finding such a representation would be amazing indeed. It would need to be paired with a loss function that is somehow capable of judging how good the result is. The overall shape needs to be matched as closely as possible, but there also need to be edge loops. Intuitively, figuring out a suitable loss function for edge loops appears easier with a 2D representation. On the other hand, there is the difficulty of dealing with seams.
I like your idea!

Thinking about objectively good topology, the following points should be useful:

  • minimal deviation from the original mesh (judged by boolean difference or transferred normal maps)
  • minimal distortion when posed (compare with a rigged original mesh)
  • only quad faces
  • few vertices with valence > 5
  • more coplanar edge loops
  • preserved hard edges
  • mesh resolution driven by a “mesh resolution map”
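Some of these criteria are cheap to measure automatically. A minimal sketch (function name and face-list representation are my own assumptions) computing two of them, the quad-face share and the share of high-valence vertices:

```python
from collections import Counter

def topology_stats(faces):
    """Two cheap proxies for the checklist above: the share of quad
    faces, and the share of vertices where more than five faces meet
    (on a closed quad mesh this matches edge valence).
    """
    valence = Counter(v for f in faces for v in f)
    quad_ratio = sum(len(f) == 4 for f in faces) / len(faces)
    high_valence = sum(c > 5 for c in valence.values()) / len(valence)
    return quad_ratio, high_valence

# A cube: all quads, every vertex touched by exactly three faces.
cube = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
        (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
assert topology_stats(cube) == (1.0, 0.0)
```

Measures like these could feed either an evaluation benchmark for auto-retopo tools or a penalty term in a training loss.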

Any other ideas?

1 Like

Have you checked the upcoming Blender auto-retopologizer? It’s not AI / ML yet, but it does a very decent job:

1 Like

Didn’t know that, thanks for the info. Great news!

1 Like

A speculative prediction:
Future GPUs will have AI auto-retopology for all meshes built into their cores, which will boost geometry efficiency and performance. Together with AI denoisers and AI systems that convert and optimize all code into multi-threaded GPU & CPU code, we will have modeling and rendering all together, in real time.

1 Like