This is gonna be a rather technical question about the inner workings of Cycles (or pathtracers in general). So first of all, allow me to briefly recap my personal (clearly limited) current understanding of the process, for the sake of context and to then pose my question more clearly.
I (think I) understand that a pathtracer
- (per sample being rendered) traces a camera (or primary) ray from the focal point of the virtual camera straight “through” a given pixel of the image being rendered, in the camera’s viewing direction, into the scene.
- terminates that camera ray as soon as it hits/intersects the first tri on its way through the scene, and casts a secondary ray (or several, in case of branched PT) from the point of intersection. The direction of that secondary ray is determined by some sort of quasi-Monte-Carlo pattern (like a Sobol pattern) “projected” onto a hemisphere around the point of intersection (the “equator” of the hemisphere lying in the plane orthogonal to the surface normal at that point) and modulated by the BRDF/BSDF relevant at the point of intersection.
- basically repeats this last step until either the maximum number of bounces is reached, or the ray being traced hits a lamp (which naturally terminates the whole path), or it hits the world background (which naturally terminates the path as well).
- for each camera ray being traced per pixel, jitters the exact point “through” which the ray gets traced into the scene (from the focal point) on a sub-pixel level (most probably quasi-Monte-Carlo as well), for the sake of anti-aliasing.
- for the sake of (raytraced) DOF (if any), jitters the origin of each camera ray across the lens aperture (a disk in the lens plane, orthogonal to the camera’s viewing direction) on a per-sample-per-pixel basis, rather than moving the focal point itself.
- for the sake of (camera) motion blur, jitters the translation and/or rotation of the camera itself in world space, according to whatever determines its movement, on a per-sample-per-pixel basis.
- in case of bucket rendering (in other words, more than one tile being rendered per frame), has each CPU thread (or is this a per-core thing?) do as outlined above (plus some actual shading work I left out for the sake of simplicity) for each pixel within the boundaries of its current tile, as often as determined by the ‘Samples’ setting in case of pathtracing, or the ‘AA Samples’ setting in case of branched PT (in Cycles UI terminology). Once it has rendered the specified number of SPP for every pixel within the tile’s boundaries, it moves on to the next unrendered tile, if any.
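To make sure I’m describing that loop correctly, here’s roughly what I mean in toy Python (the scene, the names, and the numbers are all made up for illustration; this is obviously nothing like Cycles’ actual code):

```python
import random

MAX_BOUNCES = 4

def intersect(origin, direction):
    # Stand-in for a real scene query. Toy scene: a lamp "above"
    # (direction z > 0.9), a diffuse surface "below" (z < 0),
    # world background everywhere else.
    if direction[2] > 0.9:
        return ("lamp", 1.0)      # emission strength
    if direction[2] < 0.0:
        return ("surface", 0.8)   # BSDF weight (albedo)
    return None                   # world background

def sample_hemisphere():
    # Stand-in for a quasi-Monte-Carlo (e.g. Sobol) direction sample
    # on the hemisphere around the surface normal.
    return (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))

def trace(origin, direction):
    # One full path, following the steps above.
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        hit = intersect(origin, direction)
        if hit is None:
            return 0.0                   # hit world background: path ends
        kind, value = hit
        if kind == "lamp":
            return throughput * value    # hit a lamp: path ends
        throughput *= value              # modulate by the BSDF
        direction = sample_hemisphere()  # bounce: cast a secondary ray
    return 0.0                           # max bounces reached

def render_pixel(spp):
    # Per-pixel loop: spp camera rays, each jittered on a sub-pixel
    # level for anti-aliasing (here the jitter barely tilts the ray).
    total = 0.0
    for _ in range(spp):
        jx = (random.random() - 0.5) * 0.001
        jy = (random.random() - 0.5) * 0.001
        total += trace((0.0, 0.0, 0.0), (jx, jy, 1.0))
    return total / spp
```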
So with this in mind (anyone with more knowledge of the subject matter feel free to correct me, if I got something wrong there), let us imagine the following case:
- Say we’re rendering a simple sphere, positioned such that it covers, well, say 30% of the frame being rendered. So the other 70% of our rendered image is but negative space; the camera “sees” nothing but world background there.
- Let’s furthermore assume we’re not using an HDRI nor any LDRI backplate on our world background, at least (this is important) not for camera rays. In other words, as far as our camera rays are concerned, the world background returns nothing but a constant RGB(A) value (‘RGB 0,0,0’ I guess, or maybe ‘Null’ instead) wherever it’s evaluated during rendering.
- Our third assumption: we’re not using ‘Progressive Refine’, we’re rendering 16 samples per pixel, and resolution and tile size are set such that a total of 100 tiles is rendered per frame.
- We are not using raytraced DOF.
Now finally, here’s what I’ve been actually wondering about:
Given the scenario described above, we’ll now assume our friendly pathtracer is currently rendering one of the approx. 70 tiles not covering any geometry whatsoever (read: it only needs to render our constant world background for all pixels of the tile).
While it does so for a given pixel in the tile, would Cycles
- A) render the first sample, realize the camera ray hit the world background immediately, have black (or Null) returned due to the world background being set to be invisible to camera rays, and then skip tracing the other 15 rays, since it already knows there’s nothing there that could cause any variance anyway.
- B) render the first sample, ignore the above, and trace the other 15 rays regardless, for the sake of anti-aliasing (after all, some geometry might still be intersected somewhere at sub-pixel level).
I’d rather suppose it’s (B), though that sounds like a bit of overkill in anti-aliasing once we assume some more real-world-ish sample counts (like 16k rather than 16).
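Just to make the two options concrete, here’s how I picture them in toy Python (`sample_camera_ray` is a made-up stand-in which, for a pixel in one of our empty tiles, always returns the constant background value):

```python
BACKGROUND = 0.0  # constant world background, not visible to camera rays

def sample_camera_ray():
    # Stand-in for tracing one jittered camera ray; in a tile that
    # covers no geometry at all, every ray returns the background.
    return BACKGROUND

def render_pixel_a(spp):
    # Strategy (A): probe with one sample; if it returns the constant
    # background, skip the remaining spp - 1 rays.
    first = sample_camera_ray()
    if first == BACKGROUND:
        return first, 1          # (pixel value, rays actually traced)
    total = first
    for _ in range(spp - 1):
        total += sample_camera_ray()
    return total / spp, spp

def render_pixel_b(spp):
    # Strategy (B): always trace all spp rays, for anti-aliasing.
    total = sum(sample_camera_ray() for _ in range(spp))
    return total / spp, spp
```

Both arrive at the same pixel value here; (A) just gets there after a single ray instead of 16.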
Now if this is true, would it be practical/make sense for Cycles to have something like an explicit anti-aliasing parameter (in pathtracing mode) determining after how many samples it might as well skip the rest of the globally set sample count in such a case?
Come to think of it, wouldn’t that basically be the principle of adaptive sampling, or, more precisely, a highly specialized, small subset thereof (for a rather weird, not very production-like corner case) for which no variance would even need to be measured?
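In toy form, I imagine the general principle looks something like this (a made-up sketch, not Cycles’ actual adaptive-sampling implementation; `sample_fn`, `min_spp`, and `noise_threshold` are all my own names):

```python
import math

def adaptive_render_pixel(sample_fn, max_spp, min_spp=4, noise_threshold=0.01):
    # Adaptive sampling in miniature: keep a running mean/variance
    # (Welford's algorithm) and stop early once the estimated error
    # of the mean drops below a threshold.
    n, mean, m2 = 0, 0.0, 0.0
    for _ in range(max_spp):
        x = sample_fn()
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_spp:
            std_error = math.sqrt((m2 / (n - 1)) / n)
            if std_error < noise_threshold:
                break              # converged: skip the remaining samples
    return mean, n                 # (pixel value, samples actually taken)
```

A constant-background pixel (every sample returning 0.0) would stop right at `min_spp`, while a noisy pixel would keep sampling up to `max_spp`; so the “no variance to measure” corner case would simply fall out of the general mechanism for free.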
Hmmm, now I feel I probably answered my own question… well, after having typed for ages now, I’m gonna post this anyway, and if it ends up as a weird monologue-1-post-thread, so be it.