Pablo Dobarro's master plan for sculpting and his official sculpting branch

About volume objects, I think they are long overdue. I worked on one back in the day, and Brecht also did some fundamental work in this area.

Although level sets could be sculpted directly (and Blender could have a volume sculpt mode if it had a volume object), generally speaking, volumes are inherently procedural, and it is best to have a node system to build or process them. The modifier stack is far too limiting here, and currently it seems people are trying to cram as many OpenVDB/volume functionalities as possible into a single modifier, which I am not fond of. Perhaps scattering the functionalities across modifiers would be better, and more acceptable for a merge into master (like putting the boolean operations in the Boolean modifier).

IMHO, nodes are the go-to for such a volume-based workflow, but unfortunately the current “Everything Nodes” are not suited for the task, as their abstraction will not allow volume-to-mesh operations to occur right in the middle of the graph. They could allow modifying voxels or vertices in parallel, but not heavy topological changes to the object being processed (this was the same issue with Lukas Toenne’s system, for those who remember it). So let’s not put too many “random” functionalities in one modifier, since a more capable node system will not be here for a long time.

As for C4D, it seems to me that it is a node system in disguise: it uses a hierarchy of objects to express the final result, which is basically what nodes are (think of node connections as parent/child relationships). It seems like a halfway point between a modifier stack and full-blown nodes.
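To make the analogy concrete, here is a tiny, purely illustrative Python sketch (all names invented, nothing to do with C4D's actual API) of an object hierarchy evaluated like a node graph: each parent consumes the evaluated results of its children, exactly as a node consumes its input connections.

```python
# Evaluate a parent/child hierarchy bottom-up, the way a node graph
# evaluates its inputs before the node itself. Names are illustrative.
def evaluate(node):
    # Children play the role of incoming node connections.
    child_results = [evaluate(c) for c in node.get("children", [])]
    return node["op"](child_results)

# A "Boole" parent object with two primitive children, C4D-style.
tree = {
    "op": lambda inputs: "boole(" + ", ".join(inputs) + ")",
    "children": [
        {"op": lambda _: "cube"},
        {"op": lambda _: "sphere"},
    ],
}

print(evaluate(tree))  # boole(cube, sphere)
```

Read this way, the hierarchy is just a node graph restricted to a tree shape, which is why it sits halfway between a stack and full nodes.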


I fully agree with your point of view, KWD. I am fearful, though, that users will be denied this functionality for another five years, if not more. That is why elegant solutions are being sought here, which I hope developers like yourself can steer in the right direction, increasing the chances of acceptance into master.

Volumes are also sorely needed and are one of the last big missing pieces for solving complex VFX production scenarios without compromise. The last time I worked on a sea animation (imported from Houdini), I had to use emissive-shaded spheres to emulate whitewater and foam, and could not even import mist. This is very embarrassing to admit and not right. What about clouds, destruction animations (RBD debris + volumes), or instancing fires/smoke across some destroyed area (+ time and shading offsets)? These are some of the issues I have run into and compromised on each time. In 2019, I think Blender is the only main DCC application that still cannot straightforwardly import these assets. I’d be happy to simulate all day and provide examples from any software required to make this watertight, in case any dev picks this up.

Regarding nodes, I do agree that they offer maximum flexibility. The problem with every nodal system (such as ICE, MCG, Blueprints, even AN) is that not all functionality is exposed, and it takes many years for the system to mature. I had to extend Blueprints with a C++ class almost immediately when I used it in production, and I had to rewrite an AN graph with a Python callback when I hit a roadblock. MCG introduces new nodes yearly, yet more often than not it’s easier to just do it in MAXScript. In Houdini, which has the most comprehensive solution, people prefer to use Wrangle (script) nodes and write everything in VEX, compared to VOPs, which take 10x the time and can get very big and confusing once you introduce a few loops.

Even if we look at Cycles now and some of the most amazing procedural materials done with it (bricks, snake scales, eyeballs, etc.), what percentage of your userbase is capable of doing that? Honestly, I would struggle, and I would hate the messy graph. That is the problem with nodes versus a stack or a few checkboxes and sliders, as is the case with C4D, RealFlow, Modo, even AE (vs Nuke) and SP (vs SD). You might compromise on a tiny bit of functionality, but the flip side is that every user will be able to understand and apply it.

In conclusion, I am just kindly asking you to find a way to integrate this very powerful functionality into 2.81, rather than putting it on the back burner until some unknown, distant point in the future with Everything Nodes. When that time comes, take it a notch higher.


Well, even if this work is not accepted into master for an extended period of time, as has been the case for the Fracture Modifier, it is still quite valuable for many users to have it as a branch. Our team’s work on the Fracture Modifier shows that a branch can be a valid and very successful alternative.

Heck, Blender keeps promoting itself using FM stills and videos. Take Albin’s airplane crash video: that’s not possible with vanilla Blender; it is only possible with the FM branch toolset. But the Blender ecosystem benefits from the promotion of the popular branch, even if Blender is not upfront about it only being possible with a branch, which some think is questionable transparency.

In the big picture, early development happening in a branch is a benefit, not a drawback. You have more freedom to develop independently of the critical views of the core devs on feature development. You also have the option to break compatibility more frequently as the project matures.

Scorp’s work with the rest of our team has really matured the Fracture Modifier while it has been a branch. This maturing would have gone much slower if we were in official Blender. Of course, there are some drawbacks to this.

But my opinion, as one of the FM team members, is that there’s more benefit than drawback for the time being. So I and others don’t always value as much, or completely understand, the “But it has to be in master!” mentality.

So, for those who really want this work: please keep supporting the main dev and scorp as they work on this, and I’m sure everything will work out in the long run.

FM and sculpting are much more closely related under the hood than it seems on the surface, so some of us on the FM team will continue to encourage this project and hope for its longevity.

Fracture ON!





These guys are doing a marvelous job.

I just wish the fracture modifier was also part of this sculpt branch. :slight_smile:

Just keep practicing with the Normal Radius setting for the Scrape brush in a hard-surface workflow :slight_smile:
So sweeeet for fast concepting in subtractive mode


Pretty cool!


Hmm, I would suggest adding something like volume modifiers to volume objects, to

  1. keep the operations inside the volumes
  2. split the crammed remesh modifier into several modifiers, sticking to 1)

Then you would have a stack similar to mesh-based modifiers.
Putting the volume CSG ops inside the Boolean modifier would require mesh->volume->mesh conversions across the stack. I made this an option of the remesh modifier in order to stay in volume space, but this is maybe better done with a volume object instead, plus modifiers (which basically could be a set of OpenVDB functions).

But there should be some way to dynamically extract a mesh from that volume, in addition to statically/destructively “converting” it. Like a volume-to-mesh modifier or something, which could output a mesh (internally), like modifiers on curves or texts do.

And vice versa, there could be some combination modifier, similar to the voxel remesh modifier with its CSG slots, which could populate the volume objects from mesh objects.

In order to render volumes, IMHO they could be meshed to extract a surface where applicable, or volume shaders could be used (somehow, lol… I am not so familiar with them).

I recall you made a couple of demo videos showing viewport renders of volume objects. I guess similar functionality would be useful, if not already present in Brecht’s volume-object branch base.

Edit: lol, I knew it sounded familiar… it seems I posted a similar post already. Anyway…


One great thing is that it resolves all dependencies for official Blender automatically on Linux.
I cannot build LibQEx and I’m totally lost…

Hmm, either I did something wrong, or the example implementation of the QEx preprocessing step, the mixed-integer quadrangulation, is… well… suboptimal… lol. It doesn’t really look as “advertised” here

(pick the “9boomes” models and “Mixed Integer Quadrangulation” there)

I already get a very bad-looking result with a simple cylinder. The quads are not aligned to the curvature at all:

So IMHO, the fact that it crashes a lot, becomes very, very slow with higher poly counts, and produces underwhelming results (especially the last point, which doesn’t justify the high processing times) makes me a bit disappointed with that library/implementation.
But the crash problems may also be somewhere in my code calling the library.

I hope my attempt at getting it running (not to speak of “working” here) was not a waste of time…


Hmm, the only things I included for the dependencies are CMake scripts to build the deps with “make deps”. Unfortunately, this sometimes has an odd quirk with tclsh and sqlite, where it might error out due to a permission error on the “sqlite3” directory under /usr/lib/tcl8.6 (I chown-ed that dir to my username, and then it seemed to work. Weird).

Sorry for not having specified the problem before. I can compile sculpt-mode-features without problems. Inside the Remesh modifier, if I choose “Quad” I get the message “Built without QEx support”. So I downloaded LibQEx from GitHub and saw that it requires OpenMesh as a dependency. I have been able to compile and install OpenMesh, but when I try to build LibQEx I get an error, apparently when it tries to build the demos (I’m not sure).
I will continue researching “make deps”. As you say, I have had problems with it and some dependencies on other occasions. I will continue investigating. Thank you.

Yup, make deps from sculpt-mode-features will automatically download and build OpenMesh, igl, LAPACK/BLAS and QEx (all the necessary dependencies) and, after finishing, add them to the lib/linux_x86_64 folder (next to the Blender source folder, where all the other Blender dependencies are).
Note there are no libs for Windows or Mac in SVN, but as a last resort (not really recommended) you could also try to build the deps under Windows… (Mac, I think, has make deps too, not sure); there are cmd scripts for this purpose. I didn’t test that, so there could be dep build problems under Windows that require changes to the CMake build scripts for the deps.

@Thornydre Hi, it seems like the blender.exe file is missing in your latest build (17/04/2019).


Yeah… and it’s been almost 4 years since I worked on the rejected particle mesher…

If there has to be a volume object, then yes, there will be volume modifiers. Actually, the very first volume modifier should be the smoke modifier, as it creates a volume based on a bounding box (it really has no place in a mesh modifier stack).

As for node systems sometimes lacking functionality, this is mostly about how the graph processes the data, and which data the graph processes. I may do a write-up in the Everything Nodes topic about the various observations I have made on this subject. Furthermore, I believe that the programming languages (C, C++, etc.) may be somewhat limiting as well.

Nodes are powerful, but sometimes a hassle indeed. As a matter of fact, I spent the last 4-5 weeks writing a language for my software (similar to Houdini’s VEX), as some things are easier to express in a language than with nodes:

An idea I just had is to have a dependency graph of data requests: the stack of modifiers would be parsed into a graph of what data processing is needed (normals generation, texture lookups, object merging, etc.) and in what order, and then that graph would be executed like a node system. This would add some flexibility to the rigid modifier stack.
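That idea could be sketched as a plain topological execution of such a request graph. Here is a minimal Python illustration (all task names and the function are invented for this example, not actual Blender code):

```python
# Execute a set of data-processing tasks in dependency order, the way a
# node system would: a task runs only once everything it depends on has run.
from collections import defaultdict, deque

def topo_execute(tasks, deps):
    """tasks: {name: callable}; deps: {name: [names it depends on]}."""
    indegree = {t: 0 for t in tasks}
    dependents = defaultdict(list)
    for task, needed in deps.items():
        for d in needed:
            dependents[d].append(task)
            indegree[task] += 1
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        tasks[t]()  # run the data request once its inputs are satisfied
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle in dependency graph")
    return order

log = []
tasks = {
    "merge_objects":    lambda: log.append("merge_objects"),
    "generate_normals": lambda: log.append("generate_normals"),
    "texture_lookup":   lambda: log.append("texture_lookup"),
}
deps = {
    "generate_normals": ["merge_objects"],
    "texture_lookup":   ["generate_normals"],
}
print(topo_execute(tasks, deps))
# ['merge_objects', 'generate_normals', 'texture_lookup']
```

A linear modifier stack is just the special case where the graph is a chain, which is why parsing the stack into a graph adds flexibility without breaking the existing model.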

Such a dependency graph could also be used when sculpting, so developers won’t have to explicitly write in which order things have to be updated when applying a stroke, remeshing, etc. I have unfinished code to do the same in my UI system: the 3D view depends, for example, on the timeline and the properties editor, so there’s already a dependency graph to be created and executed when updating the UI. I have a lot of bugs in this area which could be solved easily with a graph.

Also, scorpion81, I stole part of your fracture code for my software :stuck_out_tongue: (I still need a boolean system to constrain the shards to the original object’s shape; I think OpenVDB would be overkill, especially with thousands of shards):

In the video, the shards are generated using the point positions of the small object with the noise graph applied to it.
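For readers unfamiliar with this kind of fracture: the usual Voronoi-style approach is that each seed point (here, the noisy point positions of the small object) claims one shard, and every piece of the fractured object belongs to its nearest seed. A minimal sketch of that assignment step (my own illustration, not the actual Fracture Modifier code):

```python
# Voronoi-style shard assignment: a sample point belongs to the shard
# of whichever seed point is closest (compared by squared distance).
def nearest_seed(p, seeds):
    px, py, pz = p
    best, best_d2 = None, float("inf")
    for i, (sx, sy, sz) in enumerate(seeds):
        d2 = (px - sx) ** 2 + (py - sy) ** 2 + (pz - sz) ** 2
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best

# Three seed points -> three shards.
seeds = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# A sample near the second seed is assigned to shard 1.
print(nearest_seed((0.9, 0.1, 0.0), seeds))  # 1
```

Applying noise to the seed positions before this step is what gives the shards their irregular, organic look; the boolean step mentioned above would then clip each shard cell against the original object's surface.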


I do recall the reason: the Blender developers want to see large libraries like OpenVDB used in multiple areas before adding 30–50 megabytes to the size of the Blender binary.

That is actually one of the nicer things about some FOSS projects: they try to prevent the application from ballooning in size (unlike some applications by commercial companies, which can require more than 20 gigabytes of space). The flip side, though, is that they won’t include a library if it forms just a minor part of the overall functionality rather than a core piece, as we saw when OpenSubdiv was integrated.

Hey @scorpion81, libQEx and QuadriFlow are two different solutions…


No, the reason was that the mesh modifier stack is supposed to do only mesh-to-mesh conversions or processing, whereas the particle mesher was doing particle-to-mesh conversion, and Campbell was opposed to adding such a modifier (even though some modifiers were already using particles as input). He wanted to wait for nodes for that…

Remember that OpenVDB is already used for caching, which is not a lot of areas, so there is no problem adding such a library for one or two features. The problem was the design of the modifier stack.

Yup, I know :slight_smile: On the QuadriFlow page there is a comparison between both methods, though. I am trying libQEx first, because at first sight it looked like it produced better quad meshes (if I get it to work properly, lol).


Oh sorry, my bad. I thought you were comparing the two. Anyway, are you sure that the “9boomes” models with “Mixed Integer Quadrangulation” are using QEx under the hood?