Check it out here!
I would love to see this integrated in cycles…
ok, what is it?
Seems very interesting. But could it be done for the compositor or does it need cycles?
Quite a release! Since it is compatible with Nuke, the question would be whether it is also compatible with Natron.
I’m Andy, one of the creators of Cryptomatte. Thanks for the positive feedback!
We would love to see an implementation in both Cycles and the Blender compositor. And Natron. Cryptomatte basically takes an old technique of ID/Coverage pairs, and expands it into something that we’ve found extremely useful in production. The actual idea of it is very simple, which is to record multiple IDs and Coverages. With about 4 ID/Coverage pairs (we use 6 to be safe), you get great mattes for any object in your scene, even with heavy motion blur, DOF, transparency.
The compositor implementation should be pretty trivial, as you just check each ID for a numerical match with a selected ID, and use the corresponding weight, if a match is found. That works as a fully numerical solution, but if you want to recover ID names, you would need some python. The Nuke plugin we released is just a gizmo with some python. So, likely very doable in Natron and Blender without editing any application code at all.
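To make the "fully numerical solution" above concrete, here is a minimal sketch of the per-pixel logic a compositor node (or expression) would perform. The struct and function names are illustrative, not from any actual Natron or Blender node:

```cpp
#include <cassert>
#include <vector>

struct IdCoverage {
    float id;        // hashed object name, reinterpreted as a float
    float coverage;  // fraction of this pixel covered by that id
};

// Sum the coverage of every stored rank whose id matches the selection.
// The same id can appear in more than one rank, so we accumulate.
float extract_matte(const std::vector<IdCoverage> &pairs, float selected_id)
{
    float matte = 0.0f;
    for (const IdCoverage &p : pairs)
        if (p.id == selected_id)  // exact bit-wise float comparison
            matte += p.coverage;
    return matte;
}
```

This is the whole matte-extraction step; everything else (name lookup, UI) is convenience on top.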
Generating the Cryptomattes in the first place is the more interesting part. I realize this may not be the right place to post about implementation, but it seems important to have a sense of whether or not it’s possible without too much trouble. (Tonight is actually the first time I’ve looked at the Cycles source code, so forgive me if I’m making incorrect assumptions.)
The big thing I noticed that’s a bit different from the Arnold/alShaders implementation is that Cycles currently seems to just use sample 0 to set up the material and object IDs. (I’m looking at kernel_passes.h) I assume this means that the Object ID and Material ID in Cycles just represent information from a single sample. So, if you have a lot of AA samples, there’s a chance that the Object ID doesn’t actually represent the most common object in the pixel. Presumably, this was done to avoid having to maintain a data structure of all the IDs encountered, and taking the statistical “mode” (most frequently-occurring ID) at the end of rendering. Makes sense, given the nature of how Cycles works, and quite honestly, maintaining that sort of buffer is a roadblock in general for progressive rendering, since you will leave a tile and come back to it to add more samples.
If it’s possible to add something like a std::map<float, float> that always stores an integrated Coverage per ID, you could simply do this until you’re finished iterating, then sort the map by value, and use the most significant weights to choose which IDs and Coverages go into your Cryptomatte AOVs/Passes. Incidentally, I think doing this would also give you better Object IDs and Material IDs as well, as well as Coverage passes for each of those.
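A hedged sketch of that per-pixel accumulation idea (not actual Cycles code): integrate coverage per ID in a `std::map`, then rank by total coverage at the end of the tile and keep the top N:

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

using IdCoverage = std::pair<float, float>;  // (id, integrated coverage)

// Called once per sample hit: add this sample's weight to the id's total.
void accumulate(std::map<float, float> &per_pixel, float id, float weight)
{
    per_pixel[id] += weight;  // operator[] default-initializes to 0.0f
}

// At the end of iteration: return the n ids with the largest integrated
// coverage, highest first. These become the Cryptomatte AOV ranks.
std::vector<IdCoverage> top_n(const std::map<float, float> &per_pixel,
                              std::size_t n)
{
    std::vector<IdCoverage> ranked(per_pixel.begin(), per_pixel.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const IdCoverage &a, const IdCoverage &b) {
                  return a.second > b.second;
              });
    if (ranked.size() > n)
        ranked.resize(n);
    return ranked;
}
```

Note that the rank-0 entry here is exactly the "statistical mode" mentioned above, so the same structure yields a better Object/Material ID pass for free.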
For progressive refinements, this is still slightly problematic, as you may lose information if/when you dump the std::map you built. However, as long as you have a decent number of AA samples per refinement, you should get usable mattes. (Note that you’re only losing information if an object had a low contribution in the earlier refinement. The number of samples needed to get proper rank with high probability should be much less than the number of samples needed for image convergence, unless you’re doing lots of refinements).
If the std::map solution doesn’t work due to architecture, or is a performance hit, you could do something fairly similar with a fixed set of components that simply start storing Coverage and IDs for the first N IDs encountered. This is not ideal, as you may encounter an ID that is a low contributor very early on, and then not have room to store more important IDs. However, with a decent-size fixed array, you would be able to handle the case where N or fewer objects share a pixel (which is actually 99% of the time in most renders, for any reasonable N). In our default implementation, we use N=6, and the last few IDs are always empty, or almost completely empty in our production scenes.
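The fixed-slot fallback could look something like this (again only a sketch under the assumptions above): the first N distinct IDs claim a slot each, and anything seen after the slots fill is dropped:

```cpp
#include <cstddef>

template <std::size_t N>
struct FixedSlots {
    float ids[N] = {};
    float coverage[N] = {};
    std::size_t used = 0;

    // Returns false when the id had to be dropped (all slots taken),
    // which is the failure mode described above: an early low
    // contributor can crowd out a later, more important id.
    bool add(float id, float weight)
    {
        for (std::size_t i = 0; i < used; i++) {
            if (ids[i] == id) {
                coverage[i] += weight;
                return true;
            }
        }
        if (used < N) {
            ids[used] = id;
            coverage[used] = weight;
            used++;
            return true;
        }
        return false;
    }
};
```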
To generate the IDs, you can actually do this more efficiently than we did in Arnold, since you can simply compute an ID attribute according to the hash scheme before rendering starts. In our case, we fetch the object string and hash it at runtime, per sample. This isn’t particularly slow, as the hash function is high performance, but it’s certainly an advantage to only compute it once, and this will avoid fussing with strings at all in the renderer. You could also presumably precompute the list of hashes (the manifest) to store in the exr metadata as part of the main Blender thread. Ideally you would filter that metadata based on which IDs actually make it into the image by looking at the rendered ID values.
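For illustration, here is roughly what precomputing a float ID from an object name looks like. The real Cryptomatte spec hashes with MurmurHash3; FNV-1a is used here only as a short stand-in. The exponent fix-up, which keeps the bit pattern out of the inf/NaN range when the hash is reinterpreted as a float, follows the scheme described in the Cryptomatte documentation:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Stand-in 32-bit hash (FNV-1a); the actual spec uses MurmurHash3.
std::uint32_t fnv1a_32(const std::string &name)
{
    std::uint32_t h = 2166136261u;
    for (unsigned char c : name) {
        h ^= c;
        h *= 16777619u;
    }
    return h;
}

float name_to_id(const std::string &name)
{
    std::uint32_t bits = fnv1a_32(name);
    // Flip one exponent bit if the pattern would be a denormal/inf/NaN.
    std::uint32_t exponent = (bits >> 23) & 0xFF;
    if (exponent == 0 || exponent == 255)
        bits ^= 1u << 23;
    float id;
    std::memcpy(&id, &bits, sizeof(id));  // type-pun via memcpy, not a cast
    return id;
}
```

Since object names are known before rendering starts, this can run once per object on the Blender side, so the render kernel never touches a string.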
Hope this is helpful to someone, and please reach out if there are any questions!
I forwarded this thread to the cycles mailinglist. I really hope someone picks up on this.
Wow, thanks Andy! Great job on Cryptomatte, it seems like it should become the standard across all render engines and compositing tools!
Oh please make this happen cycles devs!
It’s ridiculous how much time this would have saved me in comp on my last couple of projects.
Thanks a lot, Andy, for making this available to the masses and reaching out!
+1 having this integrated in cycles would be great!
Amazing stuff, it’s pretty straightforward but produces great results!
As for integration in Cycles:
- Yep, currently the Material and Object ID passes are only generated for the first sample. The reason is that averaging them like you’d do with colors doesn’t make any sense - especially if the IDs are hashes, the intermediate values would resolve to completely different objects.
- Progressive rendering can be safely ignored afaics - it’s mainly used for viewport rendering, where matte passes aren’t generated anyway. For full renders it can be enabled, but users generally shouldn’t do so, since it’s a lot slower.
- For CPU rendering, the memory required to fully store the IDs is pretty insignificant since CPU tiles tend to be small (32x32 at most) and as soon as the tile is finished, the N most common IDs can be identified and the intermediate storage can be discarded.
For example, a possible implementation would be to allocate an array with, say, 16 int/float pairs per pixel, which should be enough to handle N up to 8 without significant amounts of missed IDs. That’d require 32*32*16*(4+4) bytes = 128KB per tile, which is no problem at all even with large amounts of threads.
- For GPU rendering, the memory situation is a bit different due to large tiles - still, 512x512 tiles would require 32MB of extra storage, which isn’t too bad.
- As far as I understand the poster, weights are only used for the pixel filter and transparency? The pixel filter can even be ignored in Cycles since it uses Filter Importance sampling, so the weight and PDF cancel each other out.
I’ll hack together a quick test for generating these passes in Cycles. The main problem will most likely be getting the passes out of Cycles, since the RenderPass system is in dire need of a refactor.
Oh, and a final question: Are there plans for an open-source viewer to verify that the data is correct? I don’t happen to have a Nuke license…
According to NatronNation, the official Facebook group of Natron, the gizmo needs to be converted to a PyPlug, and the graph inside the gizmo should be reproduced in Natron to make it work…
So, the transparency handling took a bit of time, but here’s a working patch for generating these passes in Cycles: https://developer.blender.org/D2106
The output isn’t 100% compatible by default (missing metadata, only one ID/weight pair per pass, passes replacing others etc…), but for testing it should be good enough.
Wow, that was impressively fast! And thanks for the positive feedback, everyone! We’re really happy to have opened up Cryptomatte – the world simply has better things to do with its time than making ID mattes.
Regarding progressive rendering, the kind of thing that’s been a concern is iterative or indefinite rendering, where buckets get rendered multiple times and averaged using relative total samples. It’s become more relevant lately with the lower cost of pre-emptable cloud instances. It’s also nice for snapshotting in general, and to make sure you can get an image into production as soon as possible, without waiting for the full render to complete. So, kind of an edge case, but important enough to think about. I agree that it’s probably not a problem. At least not any more so for Cryptomatte than for regular ID AOVs.
You’re right about importance sampling of the filters. If sample weight is already factored in, you should be able to just take opacity into account. DOF and MB should also work automatically, based on the sampling distribution. So, opacity is really the only thing that needs special handling. We usually treat refractive surfaces like glass as opaque objects, since the light bends anyway, so the main thing is dealing with textured cut-out alphas, or things like hairs that might have some kind of special transparency mode to account for sub-pixel thinness.
Getting a bit OT, but do the pixels in cycles share samples? That’s the one thing that seems like it could be a disadvantage of filter importance sampling, as it kind of sounds like the samples just affect one pixel. But I guess you could also do some sort of re-filtering to make use of all the samples in all the pixels they could affect. Or you could do importance sampling based on the combined filtered contributions to all the pixels, and then weight by (this pixel’s sample contribution) / (total contribution for all pixels). Not really relevant for Cryptomatte – I’m just curious about how filter importance sampling works in general in Cycles.
I don’t have any concrete info about plans for Natron or other compositing software, but I would expect porting our gizmo and python library to Natron would be pretty easy. At the end of the day, it’s just comparisons, multiplication and addition, so you can get most of what you need with expression nodes. The remainder is just the name lookups and the UI gravy so that the user doesn’t have to write the expressions themselves.
Wow… Lukas you are incredibly fast at testing these new things out…!
Bravo Lukas! Very exciting
wow Lukas! This was quick! This could be a game changer for production!
There is a free non-commercial license for Nuke on The Foundry’s page.
One round of applause for Lukas! Danke!
Could someone explain in simple words what this is and if possible give examples where this would be useful and use cases?
super promising stuff!