Noisy "Object Index" & "Material Index" with DOF

Hi everybody,

I come from Max and have started learning this impressive software that Blender is! But there is something I don’t understand about Object ID and Material ID: when I put DOF on my camera and use Cycles, the “Object Index” and “Material Index” render passes are very noisy. It doesn’t matter how many render samples I use: they stay just as noisy, while the other passes like Normal, AO, etc. are correctly “DOFed” without any noise. How can I get correct Material Index and Object Index passes with DOF on them?

I know that I could render without DOF and use Z-depth with the Defocus node, but I want real DOF, so is there a way to do this?

Sorry for my bad English, I hope somebody understands me :o

Thanks in advance

I find it most expedient to render the material, then apply blurring to it in a downstream compositing step, e.g. as described in https://docs.blender.org/manual/en/dev/compositing/types/filter/bokeh_blur.html. Vary the amount of blur according to distance, and use a Curves node for fine adjustment.
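
For anyone who wants to script that setup, here is a minimal sketch using Blender’s Python API. It assumes the render layer has the Z pass enabled, and the socket names may differ between Blender versions (‘Z’ here, ‘Depth’ in newer builds):

```
import bpy

# Minimal sketch of the distance-driven blur setup described above.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')
norm = tree.nodes.new('CompositorNodeNormalize')    # squash Z into 0..1
curves = tree.nodes.new('CompositorNodeCurveRGB')   # fine-adjust the falloff
blur = tree.nodes.new('CompositorNodeBokehBlur')
blur.use_variable_size = True                       # per-pixel blur size
comp = tree.nodes.new('CompositorNodeComposite')

links = tree.links
links.new(rl.outputs['Image'], blur.inputs['Image'])
links.new(rl.outputs['Z'], norm.inputs[0])
links.new(norm.outputs[0], curves.inputs['Image'])
links.new(curves.outputs['Image'], blur.inputs['Size'])
links.new(blur.outputs['Image'], comp.inputs['Image'])
```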

Also – sometimes it is plenty enough if there is a “backdrop in the distance” that is uniformly blurred: the fact that there is any blur at all is often enough to sell the shot.

This is something that is bothering me greatly as well, as I use ID masks a lot. It seems material and object ID masks are currently stored in a 1-bit(!) format, so it’s either white or black in there :confused: I am hoping for it to improve in the future. There was some talk about Cryptomatte integration, but I haven’t heard of any plans to do anything about it in 2.80, so it might be a while. :frowning:

Cryptomatte-like logic is the solution, but the reason it does not currently work is that each pixel can hold only one ID value. And when a pixel can hold only one ID, you get random “ID noise” in areas where objects partially overlap (semitransparency). You can’t blend IDs like color within a pixel, because an ID is a unique label, not a continuum: a blend of 1 and 3 is not a bit of 1 and some of 3 but the object with ID 2, which is totally unrelated to 1 and 3.

Cryptomatte is based on a logic where object, group or layer names are hashed to a value, and ID mattes are stored as hash-coverage pairs. Off the top of my head I don’t remember the exact layout, but I believe Cryptomatte stores each ID as a 32-bit float hash paired with a 32-bit float coverage value, keeping the six biggest coverages for each pixel; with two pairs per RGBA layer, those six IDs need three RGBA float layers.
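
To make that layout concrete, here is a hypothetical Python sketch of the rank-packing idea (just the shape of it, not the exact Cryptomatte spec):

```
# Per pixel: sort the (hash, coverage) pairs by coverage, keep the biggest
# N, and pack them two pairs per RGBA float layer.
def pack_ranks(pairs, num_ranks=6):
    """pairs: list of (hash_as_float, coverage) for one pixel."""
    ranked = sorted(pairs, key=lambda p: p[1], reverse=True)[:num_ranks]
    ranked += [(0.0, 0.0)] * (num_ranks - len(ranked))  # pad empty ranks
    layers = []
    for i in range(0, num_ranks, 2):                    # two pairs per RGBA
        (h0, c0), (h1, c1) = ranked[i], ranked[i + 1]
        layers.append((h0, c0, h1, c1))                 # R, G, B, A
    return layers
```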

I wonder how one could get around the issue right now. I suppose if you rendered the objects with white emission shaders against a black background, the DOF would show up in the resulting mask, and one could use the RGB channels to fit three objects into one output at a time. I think it would be worth exploring and searching for an easy workflow to get proper ID masks in Blender without changing the Cycles source. When (if) I have a few minutes I might give it a go; this could be useful. Maybe someone else has good tips/setups/Python scripts/addons regarding the issue of 1-bit ID masks?
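
As a starting point for that experiment, here is a rough bpy sketch of the emission-mask idea; the object names are hypothetical and you would still need to set the world background to black:

```
import bpy

# Rough sketch: give each object of interest a pure R, G or B emission
# material and everything else black, then render; the result holds up
# to three anti-aliased, properly DOF-blurred masks in one image.
def make_mask_material(name, color):
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.clear()
    emit = nodes.new('ShaderNodeEmission')
    emit.inputs['Color'].default_value = (*color, 1.0)
    out = nodes.new('ShaderNodeOutputMaterial')
    mat.node_tree.links.new(emit.outputs['Emission'], out.inputs['Surface'])
    return mat

# Hypothetical object names; one color channel per mask.
channels = {'Cube': (1, 0, 0), 'Suzanne': (0, 1, 0), 'Sphere': (0, 0, 1)}
black = make_mask_material('mask_black', (0, 0, 0))
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        color = channels.get(obj.name)
        mat = make_mask_material('mask_' + obj.name, color) if color else black
        obj.data.materials.clear()
        obj.data.materials.append(mat)
```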

Masks are not 1-bit, they just have one ID value per pixel. With a 1-bit mask you would need a separate image channel for each ID, or a way to describe the bit mask’s contents as a list of IDs. Cryptomatte is more flexible here, as it enables both multiple ID values per pixel and antialiasing.

What you can do is separate the objects onto different render layers and use each layer’s material override to create the masks. But it is kind of tedious.
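
For reference, in the 2.7x Python API the override itself is just a couple of lines (the layer and material names here are assumptions):

```
import bpy

# Sketch: one render layer whose materials are all overridden with a
# white emission material (assumed to exist as 'mask_white').
scene = bpy.context.scene
layer = scene.render.layers.new('mask_layer')               # hypothetical name
layer.material_override = bpy.data.materials['mask_white']  # assumed material
```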

I guess that I just have a common-sense problem with the entire notion. :slight_smile:

Common sense tells me that data such as “Object-ID” or “Material-ID” only makes sense when it applies to exactly one screen pixel at a time. It is therefore necessarily inconsistent and incompatible with the entire notion of “blurring,” which means that at least two possible sources now apply to the “blurry” pixel.

Hence my suggestion to use separate blurring steps to “fake” DOF, so that every step which might need to reference IDs faces no ambiguity. Such data basically becomes meaningless after things like blurring have been applied. You must “assemble the image” first, while the one-to-one correspondence with image pixels still holds, and then blur it.

This is the way it is handled in Blender right now. But Cryptomatte solves the mixing problem by cleverly encoding the IDs and storing multiple sets of IDs for each pixel. I’ll try to explain it as I understand it.

Imagine you have a pixel which is a combination of three different objects, all semitransparent (out of focus, motion blurred, etc.). For each object you have set which ID group it belongs to (groups themselves can be based on layers, grouping, etc.). The group ID is stored not as a simple number but as a hash value (calculated over the group name and possibly other labels), something like a5f798c2. For each group, its coverage is also calculated (in the render engine, based on sample counts), and the groups that cover a given pixel are sorted by coverage. This lets us limit the number of groups stored: we simply discard the ones with smaller coverage that don’t contribute much.

These ID-coverage pairs are stored as regular RGBA float image data, but they are non-color data, meaning the masks are not stored the traditional way, with RGBA channels or a single ID value per pixel. When we need to pull a mask, we choose an ID value, which can be done with a color picker, because we can assume the user picks a pixel where a solid object contains only one group, or where the group of interest has the majority. From the selected color (which is in fact non-color data encoded as color) we extract the ID we want the mask for, and on each image pixel we search for that ID. If the ID is present, we read the coverage value, which is our actual mask density for that group in that pixel.
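
In code terms, pulling a matte from such data might look roughly like this sketch (matching the hypothetical packing shown earlier, not the real file format):

```
# Given the picked ID, sum the coverage of every rank that matches it.
def extract_matte(layers, target_hash):
    """layers: [(R, G, B, A), ...] for one pixel, packed as (hash, coverage)
    pairs; returns the mask density for this pixel, 0.0..1.0."""
    coverage = 0.0
    for r, g, b, a in layers:
        if r == target_hash:
            coverage += g
        if b == target_hash:
            coverage += a
    return coverage
```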

One very neat thing that Cryptomatte allows is storing the original group/object/layer names in the EXR file and reading them back when pulling the mattes. The names are stored in the metadata as key-value pairs, and from the hash value we can find out what the name of the actual object or layer was. This makes it possible to pull mattes by object or layer name instead of poking around with a color picker.
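
A toy illustration of that lookup, with made-up names and hash strings:

```
import json

# The EXR metadata carries a JSON manifest mapping names to hash strings.
manifest = json.loads('{"Cube": "a5f798c2", "Suzanne": "13851a31"}')
hash_to_name = {h: name for name, h in manifest.items()}
print(hash_to_name['a5f798c2'])   # -> Cube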

The “problem” with Cryptomatte is that it must be supported by the render engine. We need the render engine to count how many samples from each group contribute to a pixel, because that is where the coverage information comes from. But fortunately it should be pretty straightforward to add this functionality to path tracers like Cycles.
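
Conceptually, the per-pixel coverage a path tracer would need is just sample counting; a toy sketch:

```
from collections import Counter

# Coverage is the fraction of camera samples in a pixel that hit each ID.
def pixel_coverage(sample_ids):
    counts = Counter(sample_ids)
    total = len(sample_ids)
    return {i: n / total for i, n in counts.items()}

# e.g. 16 samples: 10 hit object A, 4 hit object B, 2 hit the background
print(pixel_coverage(['A'] * 10 + ['B'] * 4 + ['bg'] * 2))
# -> {'A': 0.625, 'B': 0.25, 'bg': 0.125}
```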