New stand-alone fluid simulation engine: how to integrate its workflow with Blender and other 3D apps?

I’m developing a new fluid simulation engine for very large-scale scenarios (based on sparse, adaptive multiresolution techniques, for grid resolutions of up to 4000×4000×4000 on standard workstations).

However, as all the development effort goes into the complex simulation algorithms themselves, there are no time resources left for programming graphical interfaces or separate plugins for each 3D app, such as Blender and others.

So the simulation engine will be a console-only application, used for the sole purpose of performing the physical fluid simulation itself.

So my question now is how to solve the problem of integration/interoperability of the engine with host 3D apps. (I’m thinking of Blender, but it should also work with the main commercial apps such as Maya.)

There are three main “interfaces” i.e. input-output situations to handle:

(1) Specification of parameters that control the behaviour of the fluid simulation itself, independently of a concrete scene setup, such as “grid resolution”, “time step”, “CFL number”, “fluid viscosity”, and so on.

(2) Specification of the fluid sources and interacting (possibly animated) solid objects, custom force fields etc.

(3) Output of the result for rendering.

The current approach is to use text/XML/JSON-based configuration files for (1), and sequences of OpenVDB volumes read from and written to (a fast) disk for (2) and (3). OpenVDB would be a “natural fit” for the simulator, as the internal representation of all data is based on sparse level set functions (fluid, solid, force fields, and even all output relevant for rendering, so there would be no need for meshing or mesh-based input/output). However, due to the very high resolution, the amount of storage for sequences of large OpenVDB volumes (even sparse and compressed) may be impractical for a 3D workflow. (It should be feasible, however, with the now available fast and large SSDs, to handle animations of several hundred frames in the form of OpenVDB volumes, where each volume may be several hundred MB.)
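
For interface (1), such a configuration file could be as simple as a flat JSON document. A minimal sketch, assuming nothing about the engine’s actual schema — the key names below are purely illustrative:

```python
import json

# Hypothetical parameter file for interface (1); key names are
# illustrative only, not the engine's actual schema.
params = {
    "grid_resolution": [2048, 2048, 1024],
    "time_step": 0.004,          # simulated seconds per step
    "cfl_number": 3.0,           # adaptive time-step limit
    "fluid_viscosity": 1.0e-6,   # kinematic viscosity
    "output_pattern": "result_%03d.vdb",
}

# Write the scene-independent solver settings to disk for the console app.
with open("sim_config.json", "w") as f:
    json.dump(params, f, indent=2)
```

Any host app (or a hand-written script) can emit such a file, which keeps interface (1) entirely plugin-free.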

I myself have only limited experience with 3D apps and the CGI production workflow (my main focus is on programming and physical simulation algorithms), so I am asking this question to the CGI community here.

Is the approach outlined above (i.e. parameter config files plus disk-resident OpenVDB volumes) feasible with respect to usability in a typical 3D production workflow?

What about the alternatives, such as Python/C++ bindings, or simple and general plugin interfaces? I assume it would then be necessary to write separate plugins for each 3D app?

Some other open-source engines (e.g. Mantaflow) offer extensive C++ or Python interfaces, such that animations can be scripted directly in the programming language. However, I wonder whether directly writing C++ code or Python scripts to drive animations is really the preferred way of interacting with a simulation engine for most 3D artists/users.

Thanks for any proposals or comments.

Hi! What you are doing sounds fantastic. Not sure if you know, but there is another forum that’s even more geared towards developing for/with Blender, where it may be good to post your questions: https://devtalk.blender.org/

Hi Dimitar

thanks for your answer. I posted the question here as well because it is not only about technical/coding details. The project is still at the stage of deciding on the final way of integrating the engine with the 3D app workflow, so I’m interested in feedback from active 3D CGI users/practitioners on how best to integrate a stand-alone engine into the workflow. In particular, whether the proposed approach of using OpenVDB as a kind of universal import/export format would be feasible/practical even without custom app-specific plugin code.

Hi zx-81,

This sounds like an awesome project and would be very cool to see this integrated within Blender as well as other 3d software!

I develop a FLIP-based liquid fluid simulation addon for Blender called FLIP Fluids (not a very creative name!), so I can comment a bit on how we integrate the fluid engine into Blender to make it useful in an artist’s workflow. I was in a similar position a while ago, where the fluid engine was only a script-based application and had no graphical interface. I ended up writing a plugin/addon for Blender so that it was easy for users to set up, run, and render a simulation.

Your process of exporting parameters and fluid sources/solids/force fields so that the simulation can be run from a console application sounds similar to what we do, so it should work. The only problem I see specifically within Blender is handling the OpenVDB data. As far as I know, there is not much functionality in Blender for loading and rendering OpenVDB volumes. There is a separate OpenVDB branch of Blender, but I do not know much about how functional it is.

As for developing an interface in a variety of 3d software, each software will have its own API with differences in how parameter and object data is accessed, exported, and read. So you will need to write a separate interface for each software based on the API that it provides.

I don’t have any experience developing plugins for software outside of Blender, so I’ll list a few details of how we integrate our addon within Blender:

  • Our fluid engine is written mainly in C++. Blender’s API uses Python, so we created bindings to operate the engine from a Python script. Our engine is compiled as a .dll/.so/.dylib library which is loaded using the Python ctypes module.
  • One of our main workflow goals was to make sure that Blender can still be used while the fluid engine is running a simulation. This meant that we needed to run the simulator from a separate thread so that the long-running fluid calculations would not stall or stutter the Blender interface. A limitation of the Blender Python API is that it is not thread safe, meaning that we could not use the API to retrieve parameters or source/solid data from the simulation thread. We worked around this limitation by exporting all parameters and object data to files before launching the simulation thread.
  • Exporting data: our simulation parameters such as resolution, CFL, viscosity, and many others are exported to a JSON file format. We export the fluid source and solid object data as triangle meshes stored in a binary format. In the case that the parameters or meshes are animated, we export these as a list of values/data for each frame. Internally within the simulator, volumes are represented as levelsets so we convert the triangle mesh data to volumes within the engine.
  • Running the simulator: the simulator starts running after we launch the simulation thread from Blender. We start by initializing the engine with all of the parameters and mesh data using the Python bindings, and then run the simulation frame by frame.
  • Simulation output: After each frame, the simulation thread retrieves the output data from the engine and writes it to disk. Our simulation thread writes the free-surface liquid as a triangle mesh in a binary format. We convert the liquid levelset to a triangle mesh within the engine so that it can be easily loaded into Blender. The simulation thread also communicates with the main Blender thread, passing info on any errors or on whether the user has requested the simulation engine to pause/stop.
  • Playback and Rendering: The plugin loads the triangle mesh data into the 3d viewport using the Python API. The Blender API provides ‘frame change handlers’ that can run code whenever the frame changes, and we use this to load new mesh data written by the simulation thread into Blender for playback and rendering.
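
The export-then-thread pattern from the list above can be sketched in plain Python. This is a minimal illustration, not FLIP Fluids code: `simulate_frame()` stands in for the real ctypes-bound engine call, and all names are hypothetical.

```python
import json
import queue
import threading

# Sketch of the pattern described above: because the host app's API is
# not thread safe, all parameters are exported to a file *before* the
# worker thread starts; the worker then reports per-frame results back
# over a queue instead of touching the API.

def simulate_frame(frame, params):
    # Placeholder for the C++ engine invocation via ctypes bindings.
    return {"frame": frame, "status": "ok"}

def run_simulation(config_path, messages):
    with open(config_path) as f:
        params = json.load(f)
    for frame in range(params["frame_count"]):
        messages.put(simulate_frame(frame, params))
    messages.put(None)  # sentinel: simulation finished

# Export everything the worker needs before launching it.
with open("export.json", "w") as f:
    json.dump({"frame_count": 3}, f)

messages = queue.Queue()
worker = threading.Thread(target=run_simulation, args=("export.json", messages))
worker.start()

# The main (UI) thread stays responsive and just drains progress messages.
results = []
while (msg := messages.get()) is not None:
    results.append(msg)
worker.join()
```

The queue is the only shared state, which keeps the long-running solver completely decoupled from the UI thread.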

Creating the addon interface to the engine was a very large task. We wanted the engine to feel like it was tightly integrated with Blender, so we have a lot of workflow features to help the user create, run, and render simulations. I would estimate that the addon interface code is about 30% of the total project in terms of lines-of-code.

I hope this helps give some insight into how to complete your task! Let me know if you have any questions and I’d be glad to answer.

- Ryan


If the final data is saved as triangle meshes, any rendering engine could load and render them in a predesigned scenario and display the final render. I mention this because I was myself working on a similar prototype to load the files generated by the simulator (yes, FLIP Fluids!!) directly into the render engine. (Sorry for the low quality, they are only test renders…)

Cheers…

Well, probably on workstations that will be standard in a few years.
Whether as volumes or as meshes, that requires hardware which, even if it cannot compute such data, is at least able to display it.
Current workstations are probably not able to display those dozens of GB per frame.

Blender 2.8, with the new EEVEE render engine, tries to be more WYSIWYG.
Even though that kind of data would be reserved for a renderfarm using Cycles, you need to create a simplified preview that produces a shape in the viewport not too far from what will be rendered.

Artists will not risk committing days of rendering without some assurance that the result will not be a disappointing surprise.

Very nice of you to give directions and suggestions to something that could grow into a potential competitor.

Here are a few example renderings of the fluid simulator.

The grid resolution is between 1024 and 2048 cells along the longest dimension, CFL number between 2 and 4.

Simulation time per animation frame was on average five to ten minutes on an 8-core Intel processor with 64 GB RAM.

Each animation has about 500 animation frames and occupies between 50 and 100 GB of disk space (using a special format similar to OpenVDB, but with a lossy floating-point compression scheme).

Asteroid impact with developing rim wave:

Breaking wave on inclined sea floor with obstacle:

High resolution dust plume:

Martian dust storm (using digital terrain model from mars orbiter)


RLGUY, thank you very much for your detailed first-hand reply from a plugin developer. It is very helpful because it confirms my impression that developing plugin/add-on interfaces is unfortunately not a trivial task, especially as there is no common standard across 3D apps.

My hope was that one could simply use OpenVDB as a universal import/export interface for all data (fluid sources, solids, force fields, and the final level-set volume for rendering). The new simulator is entirely “mesh-free” anyway, i.e. internally based solely on volumetric scalar/vector fields and a level set representation for surfaces. I also developed a lossy floating-point sparse-volume compression scheme (think JPEG in 3D) for caching simulations on disk, which can greatly reduce the actual amount of storage (e.g. from 100 GB to 200 MB for a 2000^3 = 8 Gigavoxel volume).
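
To illustrate why lossy coding shrinks volumes so dramatically, here is a toy sketch of the simplest possible lossy float codec: per-block 8-bit quantization. This is purely illustrative and much cruder than the DCT-based “JPEG in 3D” scheme described above.

```python
import struct

# Toy lossy compression: each block of float32 values is stored as its
# min/max range (8 header bytes) plus one byte per value. A real
# JPEG-in-3D scheme would add a transform and entropy coding on top.

def compress_block(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) or 1.0
    codes = bytes(round(255 * (v - lo) / scale) for v in values)
    return struct.pack("ff", lo, hi) + codes

def decompress_block(blob, n):
    lo, hi = struct.unpack("ff", blob[:8])
    return [lo + (hi - lo) * b / 255 for b in blob[8:8 + n]]

block = [0.0, 0.5, 1.0, 0.25]
packed = compress_block(block)        # 16 B of float32 -> 12 B
restored = decompress_block(packed, len(block))
```

Even this naive scheme gives 4:1 over raw float32 for large blocks; transform coding plus sparsity is what pushes ratios toward the 100 GB → 200 MB range mentioned above.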

So my (perhaps naive) idea of an OpenVDB-only interface would be to provide a sequence of disk-resident volumes, e.g. “fluid_source_000.vdb”, “fluid_source_001.vdb” (and similarly for interacting solids and force fields), and let the stand-alone simulator generate the output sequence “result_000.vdb”, “result_001.vdb”, …, which could then be fed into any rendering app.
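
Such a sequence-based exchange could be driven by a few lines of script. A sketch under stated assumptions: the `fluidsim` executable name and its flags are hypothetical, only the frame-numbered naming convention comes from the idea above.

```python
# Frame-numbered OpenVDB sequences on disk, consumed and produced by the
# console simulator. "fluidsim" and its flags are hypothetical names.

def vdb_sequence(prefix, frame_count):
    return [f"{prefix}_{i:03d}.vdb" for i in range(frame_count)]

sources = vdb_sequence("fluid_source", 3)
solids = vdb_sequence("solid", 3)

# The host app (or a shell script) would then invoke something like:
cmd = ["fluidsim", "--config", "sim_config.json",
       "--sources"] + sources + ["--solids"] + solids

# ...and afterwards pick up the result sequence for rendering:
expected_output = vdb_sequence("result", 3)
```

The appeal of this design is that the host app only ever touches files, never the engine’s API.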

This is essentially the ad-hoc workflow I used for the example animations posted above (using a self-written render engine based on sparse-volume raymarching).

However, I suppose this could become impractical for more complex, scripted animations that need to interact with several solid objects that “live” inside the main process of Blender or other 3D apps.


Wow, those images are quite impressive, especially the smoke sim ones. I am just a user, so I cannot help you much with integrating your simulator into other applications, but I’d definitely like to see it in Blender! Good luck!

Are these animations somewhere on youtube?

Hi zx-81, I think that’s a really cool direction.
I’ve recently become interested in learning OpenVDB and EEVEE, and I’m trying to determine whether it’s possible to directly render OpenVDB data sets this way. Something like this for EEVEE: https://developer.nvidia.com/gvdb

My end goal is to visualize and analyze large amounts of volumetric scientific data, e.g. uint16 at ~1000 × 1000 × 500 voxels, times 2, 3, or 4 color channels, times ~1000 time points. The current visualization and analysis software uses a lot of GPU RAM and quickly fills up even a 24 GB card.
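
For scale, a quick back-of-the-envelope on the data set described above shows why a 24 GB card fills up immediately:

```python
# Back-of-the-envelope size of the data set described above:
# uint16 voxels, ~1000 x 1000 x 500 grid, up to 4 channels, ~1000 frames.
voxels = 1000 * 1000 * 500
bytes_per_voxel = 2                      # uint16
per_volume = voxels * bytes_per_voxel    # 1 GB per channel per time point
total = per_volume * 4 * 1000            # 4 channels x 1000 time points = 4 TB
```

So even a single 4-channel time point is ~4 GB uncompressed, and the full series runs into terabytes.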

Could you tell me more about your JPEG-in-3D-style compression? Is there a published article you can point me to?

Thanks
Neil

Those stills are fantastic! The water shots are unreal, and the smoke looks ridiculously good. I shall be watching with bated breath.
And if you ever need a beta-tester…