Alembic I/O.

@pitiwazou
From your experience what are the most important things missing from the blender > alembic > guerilla render workflow?
I am considering that pipeline for a project and I need to find out what I will not be able to have…

Thanks : )

Hi,

I tried the technique described by J_the_Ninja to bring in a point cloud, combining it with the “Point Density” node. The problem is that color is absent. I hope I am doing something wrong, as this would be a very powerful feature to have.

Here's the alembic file to test: [www.cgstrive.com/alembic/flame.abc](http://www.cgstrive.com/alembic/flame.abc)

Thanks

This is a bit embarrassing… Actually, Blender requires polygons (edge loops) to store vertex color data (so you can interpolate data along the edge or across the face and such). I forgot about that before adding support to read custom data for point clouds… So, the data is not read because there are no faces. Not sure if we can make this work somehow.

Hey guys, I wanted to bring up something from the patch review that is relevant to production artists and that, from my perspective, seems very important. The patch is at D2060; look specifically at Cambo’s and mont29’s comments regarding line 2709 in readfile.c and the perspective they take on it.

At issue is whether the cache should be loaded at initial .blend load time, or only when the cache is referenced by the user in the UI somehow. Currently KD has it loading with the .blend, but Cambo and mont29 suggest loading it when referenced.

If you are a casual user, an enthusiast, or a freelancer, it shouldn’t really matter to you. This question is for pros working for companies in a demanding workflow. If you have experience with these types of things in a professional production environment, what is your preference?

I work in a demanding field and my preference is to load the cache at .blend load time. I want to be ready to go, because if my file even has an Alembic cache in it, then that usually is the focus of my work with that file. In my experience, I really don’t want to “get started” and then have to “get started” all over again.

I am also part of the Fracture Modifier team, and this is an issue we understand. The biggest reason FM is not in master at this time is that the fractured object cache is stored in the modifier, and thus saved and loaded that way. One reason for this is so it is loaded up front and does not need to be loaded as a second step after the .blend file loads. Once the physics sim is started, another cache is then made by Blender.

So as you can see, the cache load point is relevant to many things in a file and in the software, and it affects the artist’s workflow, especially in a demanding environment. How does any other software you use load it, up front or when accessed? Maya, Houdini anyone?

To recap here are some pointed questions:
As a professional user, and from experience, do you prefer taking the time to load the cache during .blend file load, or only when accessing the cache somehow later, after the file is loaded?

In your typical workflow, will you usually be accessing the cache immediately, at a later time, or sometimes not at all?

How does the other software you use handle this, and how does it affect your workflow and preference? Or do you think it will not affect you?

Thanks and good luck to us all in our adventures.

PS: I am hoping this post will result in brief and focused responses from peeps who deal with this regularly, and that it will not cause a big, long, drawn-out discussion. Keep hope alive! ; )

Hi KWD!

There is a problem with exported cameras that have an animated focal length, where the first frame’s value is not saved:


Speaking of animated focal length, will it be possible to have it work from imported cameras in Blender? Camera sharing is very important in many productions. Thx!

@JTA
Can’t say I’ve worked with Alembic files (yet), but most likely I would prefer the cache to be loaded with the file, primarily working on a shot-by-shot basis. I’m not sure how it should work otherwise - by manually clicking a “Load Cache” button in the modifier/constraint? That doesn’t seem super practical.

Will check on that a bit later, for now the focus is on merging the branch in master.

Speaking of animated focal length, will it be possible to have it work from imported cameras in Blender? Camera sharing is very important in many productions. Thx!

The only real way to support that would be to use an fcurve modifier (currently in development in a separate branch):


Then this can be used for other arbitrary properties, though some input on the UI/functionality would be appreciated, as usual :slight_smile:

@JTA
Can’t say I’ve worked with Alembic files (yet), but most likely I would prefer the cache to be loaded with the file, primarily working on a shot-by-shot basis. I’m not sure how it should work otherwise - by manually clicking a “Load Cache” button in the modifier/constraint? That doesn’t seem super practical.

The change is to open the Alembic archive the first time it is accessed to look up data, thus delaying the loading to avoid issues when reading a .blend file or when copying a cache file data-block (mostly lagging, e.g. when trying to open dozens of Alembic archives on some external drive). The change is in the code, not in the UI. If you only reference a handful of files in your Blender project, then you won’t really notice a difference; maybe the first frame will be a bit laggy due to lazy loading, but that’s it.
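To make the shape of that change concrete, here is a minimal Python sketch of deferred archive opening. It is purely illustrative: the class, field, and method names are invented here, and the real implementation lives in Blender’s C/C++ code.

```python
class CacheFileHandle:
    """Illustrative only: the path is recorded when the .blend file is
    read (cheap), but the potentially slow archive open is postponed
    until the first data lookup."""

    def __init__(self, path):
        self.path = path       # stored at .blend load time
        self._archive = None   # archive not opened yet

    def _open(self):
        # Stand-in for the expensive Alembic archive open.
        return {"frames": set(range(1, 25))}

    def lookup(self, frame):
        if self._archive is None:   # first access pays the open cost
            self._archive = self._open()
        return frame in self._archive["frames"]


handle = CacheFileHandle("/shots/sh010/anim.abc")
assert handle._archive is None      # reading the .blend stayed fast
handle.lookup(1)                    # first lookup opens the archive
assert handle._archive is not None
```

The point of the pattern is exactly what is described above: the cost moves from file-open time to the first frame that actually touches the data.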

@JTA, I think you’re misunderstanding those comments. The caches would still be loaded automatically if they are or become visible in the 3D view or render.

With deferred loading we can potentially avoid loading caches that are on hidden layers, only load low res for faster previews, use multithreaded loading through the depsgraph, etc. I’m not familiar with the Alembic integration so not sure how feasible it all is, but I don’t think anyone is proposing to add extra manual steps.

Manual steps are not implied. It’s just that, in the workflow of a fast-paced workplace, in my experience you don’t want lag to come seemingly out of nowhere just by moving to another part of the UI or timeline. As mentioned, if it’s only brief then that’s no problem. It’s when you are trying to get your file back to another person in the pipeline close to a deadline, and then you have to wait in the middle of your workflow when you already waited at the beginning, that is hard on some people.

Along the same train of thought: a pipeline production artist typically does not want to work for an hour in full focus, then navigate to another part of the project or timeline and find out that a large supporting file or cache will not link and load. That is best found out when the main working file is loaded.

Of course this is IMHO and from my experience, and I work in a particularly fast-paced environment. And the patch change request seemed to imply that this was a possibility with the later “loading” method: that load errors or excessive load times would be deferred to later in the workflow. Hence asking others to put eyes on this.

8 )

Deferred loading is quite standard, and it is the only way to work somewhat interactively with huge scenes. Look at e.g. Katana which is designed to do exactly that, where you can work with scenes bigger than you could fit in memory.

Loading everything is a poor strategy for detecting errors, and it only solves part of the problem anyway. Loading all full resolution textures, volume data, all frames, etc, would be incredibly slow. To solve the timeline example you give, do you think loading all frames when opening the .blend file is really the correct solution?

If you have complex pipelines then surely want to have mechanism for detecting errors, finding problems before sending files to the render farm, etc. But doing a full check each time you open a .blend file is not the way to do that.

For the waiting-twice issue, the correct solution is to make loading non-blocking, so your caches get loaded while you can already start working. Even without that though, I would expect loading everything to have much worse failure cases.

As long as the topic gets considered in light of users’ needs, and not as an uninformed or poor design choice that gets implemented and layered upon so that it has to stay, then it’s all good. I’ve been looking at Redshift’s render engine because of its out-of-core technology for handling big scenes and resources, and I’m familiar with Katana’s and other approaches to handling them as well, which illustrates that this is an important topic to discuss.

That’s why I posted this request here: so it could be considered in light of users’ workflow experience, and not just design decisions by devs without empirical feedback, since it seems at least some people here are using ABC files in other packages.

If it turns out not to be an issue then woot woot! ; )

Cool! :smiley:

The only real way to support that would be to use an fcurve modifier (currently in development in a separate branch):

Then this can be used for other arbitrary properties, though some input on the UI/functionality would be appreciated, as usual :slight_smile:

Sounds good. About functionality: if the FCurve modifier could be set up automagically, at least for camera focal length, that would be best. For other arbitrary properties it’s understandable that those can be set up manually.

About loading of cache, thanks for clarifying KWD & brecht. I trust your judgement!

Committed to master! Added some release notes too! Thanks everyone involved so far, but this is not the end :wink: This commit did not include Cycles motion blur from Alembic archive, or NURBS import since they are broken. I will continue developing this, and add new features (like the FCurve modifier shown a few posts ago), so we can keep this thread alive I guess. Though, for bugs in features already present in master (minus motion blur, and NURBS for the time being), it’d be best to report them in the tracker.

For multi-threading: since the caches are now opened by the first cache modifier or constraint that needs them, we could end up with a race condition here. The CacheFile data-block has a special node in the new dependency graph, so perhaps multi-threading could be done based on the branches that stem from this node (I am not sure how it is currently done).
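To illustrate the race being discussed, here is a minimal Python sketch of a once-only, thread-safe archive open using double-checked locking. All names are hypothetical; this is not Blender’s actual implementation, which would need the equivalent guard in C.

```python
import threading


class SharedCacheFile:
    """Sketch of guarding the first archive open against concurrent
    modifier/constraint evaluation (double-checked locking).
    All names are illustrative."""

    def __init__(self, path):
        self.path = path
        self._archive = None
        self._lock = threading.Lock()
        self.open_count = 0   # demo only: counts real opens

    def _open_archive(self):
        self.open_count += 1
        return object()       # stand-in for the real archive handle

    def get_archive(self):
        if self._archive is None:          # fast path, no locking
            with self._lock:
                if self._archive is None:  # re-check under the lock
                    self._archive = self._open_archive()
        return self._archive


cache = SharedCacheFile("/caches/fx.abc")
threads = [threading.Thread(target=cache.get_archive) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cache.open_count == 1   # the archive was opened exactly once
```

Without the lock and re-check, two evaluation threads hitting the same CacheFile on the first frame could each open the archive, which is the race condition mentioned above.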

I think we could have some operators that just go over bmain.cache_files and check that the handles to Alembic are all valid. Then we could also check that the modifiers and constraints which reference those cache files are still valid too, but I think that might be redundant; valid cache_files should be enough, I guess.
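A sketch of what such a validation pass could look like, assuming a hypothetical list of cache-file entries (the dict layout here is invented for illustration, not Blender’s real bmain structure):

```python
def report_invalid_cache_files(cache_files):
    """Walk a (hypothetical) list of cache-file entries and collect the
    paths of those whose Alembic handle could not be opened."""
    return [cf["path"] for cf in cache_files if cf["handle"] is None]


cache_files = [
    {"path": "/caches/char.abc", "handle": object()},  # opened fine
    {"path": "/caches/missing.abc", "handle": None},   # failed to open
]
assert report_invalid_cache_files(cache_files) == ["/caches/missing.abc"]
```

An operator built on this idea could report broken caches in one pass, without having to touch every modifier or constraint individually.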

For the waiting-twice issue, the correct solution is to make loading non-blocking, so your caches get loaded while you can already start working. Even without that though, I would expect loading everything to have much worse failure cases.

Since the caches are opened when modifiers/constraints are evaluated, wouldn’t most of the caches be opened at startup anyway (provided that the scene is updated after being loaded; I’m not familiar with that part of the code either)?

Being able to read vertex colors from imported point clouds and visualizing them using Point Density is also a requirement by various researchers I talked to. I hope this will be supported soon.

Congrats KWD! and great work!

That’s… silly. That should definitely be addressed in core Blender, there are plenty of use cases for point-only data import where color information is vital.

There’s a lot of things you could validate, textures, volume data, drivers, linked libraries, physics caches, etc. File > External Data > Report Missing checks part of that and could be extended.

Since the caches are opened when modifiers/constraints are evaluated, wouldn’t most of the caches be opened at startup anyway (provided that the scene is updated after being loaded; I’m not familiar with that part of the code either)?

Indeed, usually things get loaded at startup. Only when you switch to e.g. a different layer or screen would you need to wait twice, if you have a really heavy Alembic cache that takes a while to load.

What I’m referring to is something we don’t have the code infrastructure for yet and no immediate plans, but what I think would be the correct solution. A mechanism where dependency graph evaluation becomes non-blocking if it takes more than some time threshold, and the UI becomes responsive again while the 3D view does not redraw or draws bounding boxes. That could be due to loading Alembic caches, evaluating slow modifiers, loading heavy image textures, etc.

Such a system wouldn’t be Alembic specific and would address the problem in a more general way.
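The thresholded, non-blocking evaluation described above could be sketched like this in Python (illustrative only; the function names and the threshold value are assumptions, and a real depsgraph integration would look quite different):

```python
import threading
import time


def evaluate_with_threshold(evaluate, threshold=0.1):
    """Run a possibly slow evaluation in a background thread. If it
    finishes within `threshold` seconds, behave like a normal blocking
    call; otherwise return control (the UI stays responsive and could
    e.g. draw bounding boxes until the result arrives)."""
    done = threading.Event()
    result = {}

    def worker():
        result["value"] = evaluate()
        done.set()

    threading.Thread(target=worker, daemon=True).start()
    if done.wait(timeout=threshold):
        return result["value"], True   # finished in time (blocking case)
    return None, False                 # still loading in the background


def slow_cache_load():
    time.sleep(0.3)  # stand-in for a heavy Alembic archive read
    return "loaded"


value, finished = evaluate_with_threshold(slow_cache_load, threshold=0.05)
assert not finished and value is None  # UI would go non-blocking here
```

Fast evaluations behave exactly as today, while slow ones hand control back to the event loop, which matches the “responsive UI, deferred redraw” behaviour described above.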

Congratulations on master! Thank You for your hard work as well as to individuals of BF for making this happen! This single feature unshackles Blender for some of the most challenging production scenarios.

From a VFX point of view, the only major data transfer element missing right now is volumes. While ideally .vdb reading would be possible (not related to Alembic), a point cloud with vertex color can also work very well in basic scenarios (clouds, smoke, whitewater, dust, fire, etc.). For that, it would be nice to see point color supported in Blender.

Thanks again for such a thorough implementation!

Is something up with Alembic on the buildbot? Tonight’s win64 build is from rev 566dd4e8a509, but does not include Alembic support.

The platform maintainers have to add the libraries first and then enable compilation with Alembic on the buildbot. Until that’s done you’ll have to either compile on your own or wait :).