The big MagicaCSG and SDF modeling thread

:+1: Thank you for buying me a coffee, appreciated. Here’s to you: :coffee::blush:

I’ll make a new video for that after posting this message.

Sounds good, I’ll check it out. :+1:

Yes, Modeler has had an ‘AI’ feature like this for a while. It links directly to the Substance Source asset library, matches shapes in real time against the assets in the library, and then converts them to voxels. I don’t use the feature at all though. It’s more of a proof-of-concept beta thing at the moment.

Apparently, CSG is technically not correct either. :sweat_smile:

But yeah, it makes sense to stick with SDF, as it’s catchy and it’s become the common term, correct or otherwise. :smile:

1 Like

As far as I know, CSG is the umbrella name for 3D vector-based, industrial solids-type geometry, which is usually NURBS. And the… let’s keep calling it SDF :sweat_smile: approach is also based on volumetric math shapes, without the restrictions of polygonal geometry and featuring blends, fillets and chamfers, and is therefore comparable to NURBS-based CSG methods.
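To make the ‘volumetric math shapes with blends’ idea a bit more concrete, here’s a minimal Python sketch (purely illustrative, not taken from any particular app; all names are mine): two SDF primitives joined with a smooth-minimum union, which is what produces those fillet-like blends.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    return np.linalg.norm(p - center, axis=-1) - radius

def sdf_box(p, center, half_size):
    # Signed distance from point p to an axis-aligned box.
    q = np.abs(p - center) - half_size
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

def smooth_union(d1, d2, k):
    # Polynomial smooth minimum: blends two shapes with a fillet of roughly radius k.
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

# Query the blended field at one point; the surface is wherever the value crosses zero.
p = np.array([0.4, 0.0, 0.0])
d = smooth_union(
    sdf_sphere(p, np.array([0.0, 0.0, 0.0]), 0.5),
    sdf_box(p, np.array([0.8, 0.0, 0.0]), np.array([0.3, 0.3, 0.3])),
    k=0.2,
)
print(d)
```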

1 Like

Here you go. If you’ve got questions, feel free to ask…

The element names are in Dutch, because I find that a little easier when I’m creating a 3D scene. :slightly_smiling_face:

1 Like

I got a really detailed mathematical explanation for why Frep is the most technically accurate term, but I think it’s best if we just stick to SDF. :sweat_smile:

The mentor I mentioned above actually did his PhD on this stuff back in the USSR, so it’s a very old argument.

2 Likes

Frep does sound friendly.

“How’s it hangin’, ol’ Frep?!” :laughing:

1 Like

@Musashidan I’ve tried to reformulate the first info sentence of my initial post. Did I succeed? :slightly_smiling_face:

1 Like

You know a lot more about this stuff than me. I’m just parroting what others tell me. :laughing: But yes, you’ve done a fine job, my friend. :smile:

From what I can gather, one of the most important factors that defines it is implicit vs. explicit.
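To illustrate that distinction with a toy Python comparison (purely illustrative, not any app’s actual data structures): an explicit representation stores the surface geometry directly, while an implicit one only stores a function you can query.

```python
import numpy as np

# Explicit: the surface is stored directly as geometry (vertices + a triangle face).
explicit_vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
explicit_faces = [(0, 1, 2)]

# Implicit: the surface is defined by a function; you recover it by asking
# "how far is this point from the shape?" and looking for the zero crossing.
def implicit_sphere(p, radius=1.0):
    return np.linalg.norm(p, axis=-1) - radius

print(implicit_sphere(np.array([2.0, 0.0, 0.0])))  # 1.0 -> one unit outside the surface
```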

1 Like

I think this is a good explanation as to why the programmers themselves don’t use the term SDF.
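Rough summary of the argument as I understand it (simplified, so take it with a grain of salt): a true signed distance function has to return the exact Euclidean distance to the surface, which forces its gradient to have unit length everywhere, but the smooth blends these tools rely on keep the correct surface while breaking that distance property, so the resulting field is really just an implicit function / F-Rep rather than an SDF in the strict sense.

```latex
% A field f is a true signed distance function only if it returns exact distance,
% which forces the eikonal property:
f(\mathbf{p}) = \pm\,\operatorname{dist}(\mathbf{p},\, \partial S)
\quad \Longrightarrow \quad
\lVert \nabla f(\mathbf{p}) \rVert = 1 \ \text{almost everywhere}

% A smooth union of two exact SDFs keeps the correct surface \{ f = 0 \},
% but in the blend region the field no longer equals the true distance:
f(\mathbf{p}) = \operatorname{smin}_k\!\big( f_1(\mathbf{p}),\, f_2(\mathbf{p}) \big)
\quad \Longrightarrow \quad
\lVert \nabla f \rVert \neq 1 \ \text{near the blend}
```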

2 Likes

Haha, I love such delightfully geeky discussions. :blush: Good explanation by the way, gives more context to the techniques.

1 Like

Interesting. I had assumed I was missing some feature or interface element, but it seems I’ve just been approaching things from the wrong angle.
I will simply have to face down my true nemesis: focus and practice :laughing:

Anyway, thanks again, very helpful.

1 Like

Maxon added SDF to C4D in version 20, back in 2018.

The voxels are a bit like what 3DCoat and ZBrush had with sculpting. I haven’t played with Magica or any of the newer apps using this tech.

I jumped off the C4D ship four years ago, so I don’t know if Maxon has refined their SDF implementation much since. It had a role for some designs… but it never rose above good old quad-based modeling as the primary approach.

Maybe SDF will ultimately win out overall. I’ve been more interested lately in Plasticity CAD style modeling, but I suppose this reflects my lack of interest/skill in character dev.

I think I see what the disconnect is here. Yes, you are absolutely correct that there’s no need for AI for mesh-to-voxel conversion. Modeler doesn’t use it, 3DCoat doesn’t use it, and I don’t think these people used it either; there are plenty of algorithms that do that job.

You’ve mentioned Modeler. This is not what base Modeler does with the asset library, converting .OBJ files into voxels, or what Modeler does when you import any mesh. What this is trying to do is convert any mesh into a set of blended primitives, like the non-destructive primitives beta feature in Modeler, or like the examples you have seen in Magica: base primitive shapes that are persistent and merge or blend with parameters that can be changed at any time. Not a fixed voxel volume that you can move with sculpt tools, but a voxel volume generated from the live, real-time interaction of two primitives:

Whether or not they convert the mesh to voxels for their initial calculations, I don’t know, but let’s assume they do. From these voxels they get a closed, defined volume; this is the range of acceptable answers that the end solution of their model needs to fit within, to within an acceptable margin of error. “Acceptable” is an arbitrary parameter they decide is a good “close enough” result: the difference in size/shape between the final result and the original mesh dimensions.
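Just to illustrate that “close enough” idea, here’s a hypothetical Python sketch (my own illustration, not how their service actually works): sample points on the original mesh, evaluate a candidate primitive’s distance field at those points, and call the fit acceptable when the deviation stays under some tolerance. An optimizer, AI-based or not, then just pushes the primitive parameters until that error is small enough.

```python
import numpy as np
from scipy.optimize import minimize

def sdf_sphere(points, center, radius):
    # Signed distance of sample points (N x 3) to a sphere primitive.
    return np.linalg.norm(points - center, axis=-1) - radius

def fit_error(params, surface_points):
    # params = [cx, cy, cz, r]; on a perfect fit every surface sample lies at distance 0.
    center, radius = params[:3], params[3]
    return np.mean(sdf_sphere(surface_points, center, radius) ** 2)

# Stand-in for points sampled from the original mesh (here: a noisy sphere of radius 0.8).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
surface_points = dirs * 0.8 + rng.normal(scale=0.01, size=(500, 3))

# Fit one primitive; a real tool would fit and blend many of them.
result = minimize(fit_error, x0=[0.0, 0.0, 0.0, 1.0], args=(surface_points,))
worst = np.max(np.abs(sdf_sphere(surface_points, result.x[:3], result.x[3])))
print("fitted parameters:", result.x, "worst deviation:", worst)
# "Acceptable" would then simply mean: worst < whatever tolerance the developers chose.
```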

The AI part comes in from how they get to that solution. Needed or not, they decided to go with AI.

What I imagine happened, if they trained their model with AI, is that they created 3D models and then manually built an equivalent of each model out of primitives, just like you would do in Magica. Make a ton of these and manually label them. Then train the AI so it can do the same: load a mesh, try to fit different primitives, and change the blend settings to match the original mesh.

This isn’t particularly easy to do, but there must be a reason why they decided to go with AI. Maybe they were working on other solutions and thought this could be faster for their team to implement, or maybe it can be reused for future projects on their roadmap.

Another possibility worth considering (and to be clear, I’m not accusing these people of doing it, but it’s something that can be done, so it must be taken into account) is that any organization with a setup like this could use, not the AI solution itself, but the mesh acquisition through the cloud service to obtain a large number of high-quality 3D meshes to train their own AI generation models.

Training AI to do 3D mesh generation has been a bit slower than 2D for many reasons, but one of them has been the small number of quality 3D models freely available out there, in comparison to drawings. We are talking about billions of images scraped off the internet versus thousands of high-quality meshes. So this might be important to consider, even if no one can really be sure.

If people start uploading meshes to an online service, they had better understand that those meshes can be used internally to train a model, no matter what the EULA says. We can all choose to ignore that possibility to get some peace of mind, of course.

I’m not making any moral judgement on the situation or the use of AI, nor am I asserting that this is in fact what’s happening here. But it’s always an open possibility with cloud-based services, compared to features that could easily run locally in an app, even if that’s a bit slower or requires the user to own a beefier machine.

2 Likes

Have mathematicians even ever agreed on any term? Let alone notation?

2 Likes

I could have sworn this was the wireframe mode… oh, wait! :sweat_smile:

2 Likes

:slightly_smiling_face: Yeah, MagicaCSG mesh export can be quite dense if you’ve upped the grid resolution. But I’m glad that’s the case, as Unbound just doesn’t export enough polygons, and if you’ve used vertex colors, you can’t remesh while keeping the vertex colors untouched. Even when remapping vertex colors, the color distribution and boundaries won’t be the same as on the source mesh.

Were you able to test the previous alpha of Conjure? There will be a public alpha in a week or so according to the dev.

You seem to spend a lot more time in Magica, so if you do eventually check it out, I’d like to know your thoughts.

1 Like

I haven’t tested ConjureSDF so far. I’m too much in love with MagicaCSG to request alpha access. :slightly_smiling_face: But I’m very much looking forward to trying ConjureSDF once the alpha has gone public. Once SDF curves are added, it might become a full-fledged MagicaCSG alternative, with the added great value of every other mode, tool and renderer Blender offers.

No macOS version though, but oh well, I guess my plan to return to Apple is slowly falling apart anyway. :slightly_smiling_face:

1 Like

@Musashidan Do you remember Clay Studio Pro for 3ds Max, in the late 1990s? It was quite cool at the time: metaballs of various shapes, including clay splines. Very FReppish avant la lettre. :slightly_smiling_face:

Back then I had just started working in 3D Studio Max, and made this snail race with Clay Studio Pro… :grin:

[Animated GIF: snail race characters made with Clay Studio Pro]

6 Likes

Wow! A blast from the past. :sweat_smile: I do remember it, but I never used it. Wasn’t it made by Orbaz, the same company that made Particle Flow?

It’s so cool that you have all of your old work still. I lost most of mine on a failed hard drive years back.

1 Like