Z-buffer Sub-sampling.

Z-buffers can’t have anti-aliasing, simply due to the way they represent information.

So… is it possible to render the Z-buffer at a higher resolution than the other render data in order to prevent blockiness? For instance, say for every pixel in the rendered image there are 4, 9 or 16 (or any other square number) pixels in the Z-buffer image. Now that I think of it, I guess the proper term would be Z-buffer sub-sampling.

Would the node system be able to properly use Z-buffers that are at a higher resolution than the other parts of the rendering?

If so, is it possible to store multi-layer EXR files with layers at different resolutions?

Well, that would be confusing if it did. I mean, a pixel at multiple depths away from the camera??? A pixel is just a dot of light coming from a surface. If it had a range of depths, it would be like representing a solid or something (though your idea might help with SSS, but that’s another topic).

If by blockiness you mean aliasing, enable OSA; 5 to 8 gives good results.

What problem are you trying to solve? I don’t understand the blockiness term… At edges, one dot could be the car and the next dot over could be the building that the car is in front of, so the Z would jump from 10 to 100. It would have to; there is nothing in between. At higher resolutions, the Z map just gets bigger and bigger, matching resolution.

Subsampling causes several samples to blend together to create an intermediate value. But Z values are exact and you can’t blend them together: if there is a cube in front of a tree, the edge of the cube doesn’t have a Z value halfway between the tree and the cube. The Z value is still on the surface of the cube.
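That point can be sketched with a toy example (all depth numbers made up): averaging Z sub-samples across an edge invents a depth that belongs to no surface, while picking the majority (or nearest) sample at least stays on a real one.

```python
# Toy illustration: why you can't blend Z values across an edge.
# Two surfaces: a cube at depth 10 and a tree at depth 100 (made-up numbers).
# Four sub-samples of one edge pixel: three hit the cube, one hits the tree.
subsamples = [10.0, 10.0, 10.0, 100.0]

# Blending (averaging) produces a depth where no surface exists:
blended = sum(subsamples) / len(subsamples)
print(blended)  # 32.5 -- neither the cube nor the tree

# Picking the majority sample stays on an actual surface:
majority = max(set(subsamples), key=subsamples.count)
print(majority)  # 10.0 -- the cube's surface
```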

If you are still running into aliasing for various reasons, try rendering your entire scene at 2x the resolution and then scaling it down.

What problem are you trying to solve? I don’t understand the blockiness term… at edges

I’m talking about the “blockiness” of the Z-buffer. When using the Z-buffer in compositing, you can get funky results around edges because it is not possible to anti-alias Z-buffers.

well, that would be confusing if it did. I mean, a pixel at multiple depths away from camera???

Yes, exactly. At the edges of objects in an image there is anti-aliasing in the rendered image, because the border between one object and another falls inside the pixel. The pixel has to represent two colors, so it is blended between the two. That’s how anti-aliasing works. BUT you can’t do that with Z-buffers, because then the Z-buffer for that pixel on the edge would be incorrect.

Subsampling the z-buffer would help the resulting composite be more precise with sub-pixel values so that it doesn’t look so aliased.

subsampling causes several samples to blend together to create an intermediate value. But Z values are exact and you can’t blend them together,

Yes, I realize that. But I’m not talking about blending them together. I just mean having a z-buffer at higher resolution so that when compositing, the compositor would have accurate sub-pixel z-buffer data.

Anti-aliasing is just changing the color of a pixel based on the colors of the pixels around it. It has nothing to do with Z depth. The blockiness of a Z-depth map is required in order to composite (using the Z Combine node) accurate pictures; the resultant picture is then anti-aliased to be smooth.

Ack Yes, I know all that. :rolleyes:

The problem comes when doing DOF blurring based on the z-buffer. This creates problems on those edge pixels because it leaves a “ring” around the object in focus.

It’s sort of similar to the type of problems you get when compositing sources with alpha channels that were rendered pre-multiplied rather than keyed.

Think of it this way: if you have a rendering that is rendered pre-multiplied and later gets composited, you will get some rings around the edges. But if the alpha channel were rendered at a much higher resolution, then the ring around the edges would be much less noticeable, because the added information from the higher-resolution alpha tells you how much of the color information in the pixel can be discarded.
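For readers unfamiliar with the pre-multiplied ring artifact being compared to here, a minimal sketch (all color and alpha values made up): a pre-multiplied edge pixel composited correctly is fine, but if the compositor treats it as straight (un-premultiplied) alpha, the edge color gets scaled by alpha twice and darkens into a visible ring.

```python
# Sketch of the pre-multiplied alpha "ring" (toy values, not Blender API).
# Edge pixel: a half-covered red object, stored pre-multiplied
# (its color channels are already scaled by alpha), over a blue background.
fg_premul = (0.5, 0.0, 0.0)   # red (1,0,0) * alpha
alpha = 0.5
bg = (0.0, 0.0, 1.0)          # blue background

# Correct "over" composite for a pre-multiplied foreground:
out = tuple(f + (1 - alpha) * b for f, b in zip(fg_premul, bg))
print(out)    # (0.5, 0.0, 0.5)

# Mistakenly treating the source as straight alpha multiplies by alpha
# a second time -- the edge darkens, which reads as a ring:
wrong = tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg_premul, bg))
print(wrong)  # (0.25, 0.0, 0.5)
```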

The reason I ask this is because it takes very little time to render a Z-buffer at a higher resolution, whereas it can take a LOT more time (and memory) to render EVERYTHING at a higher resolution. Especially when rendering stuff at 2K.

Not really; anti-aliased pixels are the result of light coming from multiple surfaces averaged into one, and that’s why a non-anti-aliased Z-buffer can cause problems when used with filters on an anti-aliased image source.

What problem are you trying to solve? I don’t understand the blockiness term…

The blockiness is just something that comes from filters that use a non-anti-aliased Z-buffer to do something to your image. While anti-aliasing the Z-buffer isn’t conceptually correct, it can sometimes help to make things look better. Here’s a quick test I did to show this: I rendered out a larger EXR version, then scaled it down in the comp to fake an AA Z-buffer for the Defocus node. While it’s not great, it does seem to reduce some of the aliasing (while introducing other problems, yes :slight_smile: )

http://mke3.net/blender/etc/fake_aa_zbuf.jpg
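The scale-down trick can be sketched in a few lines (toy data, not the actual comp node): a plain box average over each 2x2 block blends Z values (the faked AA above), while taking the block minimum keeps the nearest real surface instead.

```python
# Sketch: downscaling a 2x-resolution Z pass by 2x2 blocks (toy data).
# Left half is a near object (depth 10), right column a far one (depth 100),
# so the second 2x2 block straddles the edge.
hi_res_z = [
    [10.0, 10.0, 10.0, 100.0],
    [10.0, 10.0, 10.0, 100.0],
]

def downscale(z, combine):
    """Collapse each 2x2 block of z with the given combine function."""
    out = []
    for y in range(0, len(z), 2):
        row = []
        for x in range(0, len(z[0]), 2):
            block = [z[y + dy][x + dx] for dy in range(2) for dx in range(2)]
            row.append(combine(block))
        out.append(row)
    return out

# Averaging invents an in-between depth on the edge block (faked AA):
avg = downscale(hi_res_z, lambda b: sum(b) / len(b))
print(avg)   # [[10.0, 55.0]]

# Taking the minimum keeps the nearest actual surface:
near = downscale(hi_res_z, min)
print(near)  # [[10.0, 10.0]]
```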

But it seems like there’s a better solution - the Z combine node used to suffer from the aliasing problem terribly, but it looks like Ton’s built some kind of fake antialiasing into the node itself - check the examples at the bottom here: http://www.blender.org/cms/Composite__UV_Map__ID.830.0.html

Perhaps it would be good if other nodes that use Z such as defocus and vector blur could be updated to use this aa functionality too.

How about if you constructed a Z-delta buffer, which contains the differences in the Z depth of adjacent pixels, and used that to feed how much OSA needs to be done in the region? So:
[100 100 10 ] => [0 0 90]
[100 90 20] => [0 10 70]
or something like that, to show the edge clearly, and allow OSA to be done more along the edge. Something like this may already exist for toon edge rendering or edge detection.
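Reading the rows above as "delta is 0 for the first pixel, then the absolute difference from the left neighbour", the Z-delta pass is a one-liner (a sketch of the idea, not anything that exists in Blender):

```python
# Sketch of the Z-delta buffer idea: per-row differences between
# neighbouring Z values, which spike at object edges and could be
# used to drive extra OSA along those edges.
def z_delta(row):
    # The first pixel has no left neighbour, so its delta is 0.
    return [0] + [abs(a - b) for a, b in zip(row, row[1:])]

print(z_delta([100, 100, 10]))  # [0, 0, 90]
print(z_delta([100, 90, 20]))   # [0, 10, 70]
```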

ah, people finally get what I’m talking about. :smiley:

broken: I looked at the page about the z-buffering mask trick, and ton writes:

only 1 sample per pixel for Z is delivered to the compositor, so the masks can have small artefacts.

While the masking trick does help, there are still some artefacts from the fact that there is “only 1 sample per pixel for Z”. If the Z-buffer were rendered at a higher resolution, and the compositor understood what was going on when using sources at different resolutions, then you would have more than 1 sample per pixel for Z. This is what I’ve been trying to get at.
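To make the "more than 1 sample per pixel" point concrete, here is a toy sketch (hypothetical function and made-up depths, not the Z Combine node's actual code): with several Z sub-samples per pixel, a combine mask can be fractional coverage instead of a hard all-or-nothing decision from one sample.

```python
# Toy sketch: with 4 Z sub-samples per pixel, a combine mask can be a
# fractional coverage value instead of a hard 0/1 from a single sample.
def coverage_mask(z_subsamples, z_other):
    # Fraction of sub-samples closer to camera than the other layer's Z.
    return sum(1 for z in z_subsamples if z < z_other) / len(z_subsamples)

# Edge pixel: 3 of 4 sub-samples hit the near object (depth 10),
# one hits the far object (depth 100); the other layer sits at depth 50.
print(coverage_mask([10, 10, 10, 100], 50))  # 0.75 -- a soft edge

# With only 1 sample per pixel the mask snaps to 1.0 or 0.0 -- the artefact.
print(coverage_mask([10], 50))               # 1.0
```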