More Advanced SubSurf?

Well, I was talking to my brother yesterday (a Maya user) and somehow the topic ended up being subdivision surfaces. I found it interesting that in Maya you can apply subdivision only in the places you want, at whatever level of detail you like, edit the subdivided polygons, then go back to a lower level of subdivision and edit the more general shape, etc. Perhaps I’m not making any sense, so I’ll quote Wikipedia.

So I was wondering: is this feature already in Blender and I just missed it? Or how hard would it be to add? I ask because this subject has never been brought up on the forums before, and it seems like a feature that would be exceedingly useful when modelling organic objects.

If you still don’t understand what I’m talking about there is a short Maya tutorial that uses this here



Well, I took a look at it and it definitely looks cool. Blender doesn’t have such a thing. It looks like you need n-gons for it; otherwise it places a lot of tris in your topology. It is very useful, but it should be used with a good understanding of edge loops. You need to understand where in the process of modelling you should use this feature, otherwise it could mess up the main loops that form the basis of a model.

The best use for this, I think, is when you need to ‘break’ the grid, i.e. place diagonal topology. Pixar’s Geri’s head is a fine example of how to do it wrong and how to do it right (they use Alias|Wavefront’s Maya). Geri’s wireframe model shows a lot of strange edge-loop topology in the face, but the sternocleidomastoid is fine by me, since it is a diagonal feature that is difficult to model.

What Blender needs is a way to have both sharp edges and very smooth ones at once on a mesh, without using those crappy creases that always result in ugliness.

What about Doo-Sabin?

Looks like that would be a very cool thing to have indeed.
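For reference, here is a minimal sketch of one common statement of the Doo-Sabin refinement rule for a single face (a hypothetical standalone helper, not Blender code): each corner of the face spawns a new point that averages the corner itself, the midpoints of its two incident edges, and the face centroid, so each face shrinks toward its centre while new faces appear at the old edges and vertices.

```python
# One Doo-Sabin step for a single face (illustration only, not Blender API).
# For each corner v: new point = average of v, the two incident edge
# midpoints, and the face centroid. For a quad this gives the classic
# Doo-Sabin weights 9/16, 3/16, 3/16, 1/16.

def doo_sabin_face(verts):
    """verts: list of (x, y) or (x, y, z) tuples, in order around the face."""
    n = len(verts)
    dim = len(verts[0])
    centroid = tuple(sum(v[k] for v in verts) / n for k in range(dim))
    new_pts = []
    for i in range(n):
        v = verts[i]
        prev_mid = tuple((v[k] + verts[i - 1][k]) / 2 for k in range(dim))
        next_mid = tuple((v[k] + verts[(i + 1) % n][k]) / 2 for k in range(dim))
        new_pts.append(tuple((v[k] + prev_mid[k] + next_mid[k] + centroid[k]) / 4
                             for k in range(dim)))
    return new_pts

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(doo_sabin_face(square))  # the unit square shrinks toward its centre
```

Running it on a unit square yields the shrunken quad (0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75), which matches the standard quad weights above.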

Having multiple levels in different areas would be powerful.

You can do this with NURBS patches in programs like Rhino and Amapi… each NURBS patch can have its own subdivision level. Unfortunately Blender doesn’t have true NURBS yet.

It would be great if subsurf (as well the other modifiers that don’t have it) had a vertex group option so we would have this kind of flexibility/functionality.

It’s not about having a vertex group option on the subsurf; it’s about having real n-gon support. Look at the images in the tutorial, especially the Step 4 ones: just below the nails, some polygons have 5 vertices.

Yep, real n-gons.

Also, Maya’s subdivs are real surfaces: you don’t need any render-time subdivision, they are always smooth. However, Maya’s subdivs have issues with UV textures and other problems, and there is also no very good edge-creasing tool.

In that context Blender’s subdiv is quite a bit better. LightWave has something similar to Blender, and they have different subdiv levels, among some other goodies.

I know someone who is reading the code of Blender’s CCGSubSurf, and he is also devoting himself to polygon arithmetic. A more sophisticated SubSurf implementation may not be far off.

Just wait :smiley:

Blender’s current version isn’t that bad.

Well, with n-gons we would be able to model finer resolution into the mesh to form better shapes, without subdividing the complete mesh.

What I would love to see is render-time subdiv that depends on how far the object is from the camera. That would be something truly amazing.

Surely a Python script to do this would be fairly easy? Hmm, I’ll read the API tomorrow; maybe it isn’t.
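Sketching just the logic (this is not the actual Blender API; the function name and thresholds below are made up for illustration), the core of such a script is a mapping from camera distance to a render-time subdiv level, for example dropping one level each time the distance doubles past some near threshold:

```python
# Illustration of distance-based subdiv levels, not real Blender API calls.
import math

def subdiv_level_for_distance(dist, near=5.0, far=50.0, max_level=4, min_level=1):
    """Drop one subdivision level each time the camera distance doubles
    past `near`; clamp to min_level beyond `far`. All values hypothetical."""
    if dist <= near:
        return max_level
    if dist >= far:
        return min_level
    drop = int(math.log(dist / near, 2))  # doublings of distance past `near`
    return max(min_level, max_level - drop)

for d in (2.0, 8.0, 25.0, 100.0):
    print(d, subdiv_level_for_distance(d))
```

A real script would then write the result into each object’s subsurf render level before rendering; the hard part, as noted below, is avoiding visible popping when the level changes between frames.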


Your statement about n-gons is somewhat self-contradictory. Finer-detail modelling can only happen with the smallest element, which is a triangle. ‘Forming better shapes’ is also not completely true. For non-organic modelling, like architectural modelling and modelling machine parts, it is cool to have 12-sided polygons and such. But for organic modelling you don’t really need to go beyond a five-sided poly, and even then it is only used where loops meet and split.

The last point is LOD management, which is certainly no small feat. An 80,000-vert model that is so far away it covers only a few pixels does indeed take its time to render. The industry is aware of this, and complex LOD is just very difficult to implement. Pixar has a paper on LOD based on statistical visibility…
Until then, what is mostly done is simply modelling the same object at several resolutions. So a battlefield has some high-detail characters in the foreground and low-detail characters in the background.

I don’t know, to me it just sounds like some crazy form of mip-mapping that actually changes the geometry’s vertex count. Not too far out, but what do I know…

Not necessarily that easy. It can be done to some extent, but you can’t optimise too aggressively, since otherwise you’ll see the model ‘pop’ as it changes from e.g. subsurf level 1 to subsurf level 2 and the entire shape of the model changes. It could help when detailed models are far in the distance, or up very close, but it won’t be anything like how Renderman does it. It would be much easier to do for curves, though: it wouldn’t be difficult to code, and the change between levels of curve resolution is fine enough that you wouldn’t see too much popping.
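A back-of-the-envelope way to see why curve resolution steps pop so little (a rough illustration, not Blender’s actual curve code): the maximum deviation of an n-segment polyline approximating a circle of radius r is r·(1 − cos(π/n)), which shrinks roughly quadratically with n, so adjacent resolution levels differ by ever smaller amounts.

```python
import math

def chord_error(radius, n_segments):
    """Max deviation of a regular n-segment polyline from the circle it
    approximates: r * (1 - cos(pi / n))."""
    return radius * (1.0 - math.cos(math.pi / n_segments))

for n in (8, 16, 32, 64):
    print(n, round(chord_error(1.0, n), 5))
# each doubling of the segment count cuts the error to roughly a quarter
```

A subsurf level change, by contrast, moves every vertex of the cage at once, which is why the whole silhouette can visibly jump.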

There are quite a few more schemes than Catmull-Clark. It would be nice to see some more.


toontje, did you ever work with Maya’s subdivs or LightWave’s subdivs at a fairly professional level? You have a rather technical point of view.

Good point, I was only thinking of the very simple form.

Cekhunen, well, I worked with Wings3D, so that is my experience with n-gons.

I don’t think you’d use LOD for render-time subdiv. Renderman uses micropolygons for that. It also has an LOD mechanism, but it is invoked by the user because, as mentioned, there is the possibility of popping when changing mesh resolution. Micropolygons ensure smooth surfaces no matter the camera position, with no possibility of popping.
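To illustrate the micropolygon idea (a rough Reyes-style sketch, not Renderman source; the function name and shading-rate numbers are made up): each patch is diced so that every micropolygon covers roughly one shading-rate pixel on screen, so tessellation density follows camera distance continuously and there are no discrete levels to pop between.

```python
# Rough sketch of Reyes-style dicing, for illustration only.
import math

def dice_rate(edge_len_pixels, shading_rate=1.0):
    """Micropolygons along a patch edge whose projected screen length is
    edge_len_pixels, targeting roughly `shading_rate` pixels each."""
    return max(1, math.ceil(edge_len_pixels / math.sqrt(shading_rate)))

# the same patch: ~40 px across up close, ~2 px when far away
print(dice_rate(40.0), dice_rate(2.0))  # finer dicing near, coarser far
```

Because the dice rate is recomputed per patch per frame from screen size, a receding object gets smoothly fewer micropolygons instead of jumping between fixed subdivision levels.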

This is one major reason I don’t like to use Blender’s renderer. I’ve seen models where very fine creases, like around ears or shoulders, produce horrible jagged artifacts, and even going as high as level 3 or 4 subdivision doesn’t sort it out, but Renderman doesn’t have a problem at all. The artifacts are easiest to see in Blender if you try displacement mapping.

I know that Renderman’s implementation didn’t use to work well for raytracing, but for the recent Cars movie, I was reading a presentation where they said they developed a method to allow raytracing in extremely complex scenes using ray differentials and caches:

It sounds like a good way forwards.

This is of course different from multi-resolution subdivision surfaces, which would also be useful in Blender.