A question about pathtracing

This is gonna be a rather technical question about the inner workings of Cycles (or basically pathtracers in general). So first of all, allow me to briefly recap my personal (clearly limited) current understanding of the process for the sake of context and to then more clearly pose my question.

I (think) I understand that a path tracer (see the sketch after this list for the core per-sample loop)

  • (per sample being rendered) traces a camera ray (or primary ray) from the focal point of the virtual camera straight “through” a given pixel (of the image being rendered) in the camera’s viewing direction into the scene.
  • as soon as it (the camera ray, that is) hits/intersects the first tri on its way through the scene, it gets terminated, and a secondary ray (or several, in the case of branched PT) is cast from the point of intersection in a direction determined by some sort of quasi-Monte-Carlo pattern (like a Sobol sequence) “projected” onto a hemisphere around the point of intersection (the “equator” of the hemisphere lying in the plane orthogonal to the surface normal at that point) and modulated by the BRDF/BSDF relevant at the point of intersection.
  • this last step is basically repeated until either the maximum number of bounces is reached, the ray being traced hits a lamp (which naturally terminates the whole path), or it hits the world background (which terminates the path as well).
  • for each camera ray being traced per pixel, the exact point “through” which it (the ray, that is) gets traced into the scene (from the focal point) is jittered on a sub-pixel level (most probably quasi-Monte-Carlo jittered) for the sake of anti-aliasing.
  • the above-mentioned focal point might get jittered along the camera’s Z-axis (in Z-depth space, so to say) for the sake of (raytraced) DOF (if any), on a per-sample, per-pixel basis.
  • for the sake of (camera) motion blur, the translation and/or rotation of the camera itself may get jittered in world space according to whatever determines its movement, on a per-sample, per-pixel basis.
  • in case of bucket rendering (in other words, more than one tile being rendered per frame), each CPU thread (or is this a per-core thing?) does as laid out above (plus some actual shading stuff I left out for the sake of simplicity) for each pixel within the boundaries of the tile it currently renders, as many times as determined by the ‘Samples’ setting in the case of path tracing or the ‘AA Samples’ setting in the case of branched PT (in Cycles UI terminology). When it has rendered the specified number of SPP for every pixel within the tile’s boundaries, it does the same for the next unrendered tile, if any.
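For what it’s worth, here’s roughly how I picture the per-sample loop described above, as a minimal Python sketch, grayscale only and with all shading detail left out. This is definitely not Cycles’ actual code; `scene.intersect()`, `scene.background()`, `camera.generate_ray()` and `bsdf.sample()` are made-up placeholder interfaces:

```python
import random
from dataclasses import dataclass

MAX_BOUNCES = 8  # stand-in for the Max Bounces setting


@dataclass
class Ray:
    origin: tuple
    direction: tuple


def trace_path(scene, ray):
    """Follow one path until it hits a lamp, the world background,
    or the bounce limit (all scene/hit methods are hypothetical)."""
    radiance = 0.0     # grayscale for brevity; a real tracer carries RGB
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        hit = scene.intersect(ray)
        if hit is None:
            # Escaped into the world background: evaluate it, terminate the path.
            return radiance + throughput * scene.background(ray)
        if hit.is_light:
            # Hit a lamp: add its emission, terminate the whole path.
            return radiance + throughput * hit.emission
        # Pick the next direction on the hemisphere around the surface normal,
        # weighted by the BSDF (a QMC pattern like Sobol would drive this).
        direction, weight = hit.bsdf.sample(ray.direction, hit.normal)
        throughput *= weight
        ray = Ray(hit.position, direction)
    return radiance    # bounce limit reached


def render_pixel(scene, camera, px, py, samples):
    """Average `samples` jittered camera rays through pixel (px, py)."""
    total = 0.0
    for _ in range(samples):
        # Sub-pixel jitter for anti-aliasing; Cycles would use a QMC sequence here.
        u, v = px + random.random(), py + random.random()
        total += trace_path(scene, camera.generate_ray(u, v))
    return total / samples
```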

So with this in mind (anyone with more knowledge of the subject matter feel free to correct me, if I got something wrong there), let us imagine the following case:

  • Say we’re rendering a simple sphere and it’s positioned such that it covers, well, say 30% of the frame (being rendered). So the other 70% of our rendered image is but negative space; the camera “sees” nothing but world background there.
  • Let’s furthermore assume we’re not using an HDRI nor any LDRI backplate on our world background, at least (and this is important) not for camera rays. In other words, as far as our camera rays are concerned, the world background returns but a constant RGB(A) value (‘RGB 0,0,0’ I guess, or maybe ‘Null’ instead) wherever it’s being evaluated during rendering.
  • Our third assumption is that we’re not using ‘Progressive Refine’, we are rendering 16 samples per pixel, and resolution and tile size are set such that a total of 100 tiles are being rendered (per frame).
  • We are not using raytraced DOF.

Now finally, here’s what I’ve been actually wondering about:
Given the scenario described above, we’ll now assume our friendly path tracer is currently rendering one of the approx. 70 tiles not covering any geometry whatsoever (read: it only needs to render our constant world background for all pixels of the tile).
While it does so for a given pixel in the tile, would Cycles

  • A) render the first sample, realize the camera ray hit the world background immediately, have black or Null returned due to the world background being set to not be visible to camera rays, and then skip tracing the other 15 rays, because it already knows there’s nothing there which could cause any variance anyway.
  • B) render the first sample, ignore the above and trace the other 15 rays regardless, for the sake of anti-aliasing (after all, some geometry might still be intersected somewhere at the sub-pixel level).

I’d rather suppose it’s (B), although this sounds like a bit of overkill in anti-aliasing if we assume some more real-world-ish sample counts (like 16k rather than 16).

Now if this is true, would it be practical / make sense if Cycles had something like an explicit anti-aliasing parameter (in path tracing mode) to determine after which number of samples it might as well skip the rest of the globally set sample count in such a case?
Come to think of it, wouldn’t that basically be the principle of adaptive sampling, or, more precisely, a highly specialized, small subset thereof (for a rather weird, not very production-scenario-like corner case) for which no variance would need to be measured?
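Just to make that thought a bit more concrete, here’s a toy sketch of the kind of early-out I mean. It has nothing to do with how Cycles is actually implemented; `sample_fn` simply stands for tracing one camera ray for the pixel in question:

```python
import statistics


def render_pixel_adaptive(sample_fn, max_samples, min_samples=4, threshold=1e-4):
    """Toy adaptive sampling: after `min_samples`, stop as soon as the
    sample variance drops below `threshold`."""
    values = []
    for _ in range(max_samples):
        values.append(sample_fn())
        if len(values) >= min_samples and statistics.pvariance(values) < threshold:
            break   # no measurable variance, so skip the remaining samples
    return sum(values) / len(values)


# A pixel that only ever "sees" a black, camera-invisible background:
# the variance is exactly zero after min_samples, so 12 of the 16 samples
# are skipped (the corner case described above).
print(render_pixel_adaptive(lambda: 0.0, max_samples=16))
```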

Hmmm, now I feel I probably answered my own question… well, after having typed for ages now, I’m gonna post this anyway, and if it ends up just a weird monologue-1-post-thread, so be it. :slight_smile:

greetings, Kologe

You could of course do that, but you won’t gain that much performance, because the primary/view rays aren’t the expensive part of the algorithm. If you need it, you can still do something similar by using Branched Path Tracing, which offers you a lot more control (at the cost of a small overhead).
edit: Cycles in Path Tracing mode just samples a lot of rays per pixel and doesn’t do such an “optimization”. You have to keep in mind that you are constructing a special case (no background, no DOF, no motion blur, etc.) and there are a lot of such special cases. You don’t want to optimize your algorithm for each special case; you want one that handles all special cases as well as possible.

Sure, I do realize that :). In effect, the whole question was much less about believing such a thing would be of much help in anything like a real-world scenario; it was originally rather meant to help me understand the inner workings of path tracing better.
And on another note, I’d imagine things like the constant folding in material node trees introduced (by Sergey, iirc) a while ago would, in most production scenarios, rather not be of tremendous impact on effective performance either (things like a part of a node tree always evaluating to a constant value seem like a bit of a bizarre corner case too).

greetings,
Kologe

I’m not aware of the “constant folding” optimization you’re referring to, but material node-tree evaluation is definitely a big slowdown in path tracers, especially if you have to evaluate the trees at each ray hit. There have been basically two solutions used so far. OSL JIT-compiles the node tree down to, as you mentioned, remove parts that evaluate to a constant value with no texture-dependent read. Otherwise, there have been a few papers about baking the BSDF’s inputs down to textures as a pre-process. Of course this latter solution assumes you aren’t doing shader “tricks” like ray switches, but one could argue those are non-physical anyway.
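To illustrate the idea with a toy model (this is not how OSL or Cycles actually represent node trees): a subtree whose inputs are all constants, with no texture-dependent reads, can be collapsed into a single constant once, up front, instead of being re-evaluated at every ray hit.

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class Constant:
    value: float


@dataclass
class TextureLookup:       # depends on the shading point, so it cannot be folded
    name: str


@dataclass
class Mix:
    fac: "Node"
    a: "Node"
    b: "Node"


Node = Union[Constant, TextureLookup, Mix]


def fold(node: Node) -> Node:
    """Recursively replace constant-only subtrees with a single Constant."""
    if isinstance(node, Mix):
        fac, a, b = fold(node.fac), fold(node.a), fold(node.b)
        if all(isinstance(n, Constant) for n in (fac, a, b)):
            f = fac.value
            return Constant(a.value * (1.0 - f) + b.value * f)
        return Mix(fac, a, b)
    return node  # Constants and texture lookups are left as-is


# A Mix of two constants folds away; a Mix involving a texture does not.
print(fold(Mix(Constant(0.5), Constant(0.2), Constant(0.8))))             # Constant(value=0.5)
print(fold(Mix(Constant(0.5), TextureLookup("noise"), Constant(0.8))))    # stays a Mix
```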

One way to solve these kinds of problems while keeping the path tracing algorithm agnostic to specific situations is using vectorisation to tackle coherent raytracing problems, like Intel is doing with Embree (and game engines do all the time). Optimisation for specific cases is bad, but background sampling is one of the main chapters of any render engine. Adaptive sampling would help as well, as no variance would mean no more rays.
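Roughly the idea, as a toy sketch (a numpy batch instead of real SIMD, and nothing like Embree’s actual API): intersect a whole packet of coherent primary rays against the scene in one vectorised call; the rays that miss everything only ever need the constant background.

```python
import numpy as np


def intersect_sphere_batch(origins, directions, center, radius):
    """Intersect N rays ((N, 3) origins/directions) with one sphere at once;
    returns a boolean hit mask of shape (N,)."""
    oc = origins - center
    b = np.einsum("ij,ij->i", oc, directions)            # per-ray dot(oc, d)
    c = np.einsum("ij,ij->i", oc, oc) - radius * radius
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return (disc >= 0.0) & (t > 0.0)


# Four coherent camera rays, all starting at the origin, looking down -Z.
origins = np.zeros((4, 3))
directions = np.array([[0.0, 0.0, -1.0],
                       [0.1, 0.0, -1.0],
                       [0.9, 0.0, -1.0],   # this one misses the sphere
                       [0.0, 0.1, -1.0]])
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
hit_mask = intersect_sphere_batch(origins, directions,
                                  np.array([0.0, 0.0, -5.0]), 1.0)
print(hit_mask)   # [ True  True False  True]
# All rays in ~hit_mask can be shaded with the constant background in one step.
```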

And as for lil’ old me, I still find myself using BI most of the time, sometimes alongside Cycles but rarely if ever using Cycles to do the whole scene. I find that a combination of “ray-tracing” (BI) and “path-tracing” (Cycles) produces a more-controllable outcome.

Path tracing is wonderful for producing “soft box” style illumination – the area underneath that table over there looks terrific – but in a lot of images the outcome is flat and featureless. The lighting is even and perfect – and, boring as hell. :wink: BI, on the other hand, now gives you an easily-controlled and easily-predicted directional light-source that is coming from a certain spot. It is easy to calculate if, and if so where, the light from that source will hit something in the room; therefore (especially) where the shadows will fall. (And if you don’t want shadows from a light, turn 'em off. If you only want shadows, you can have that too.)

Shadows are a great deal of what gives any photograph its sense of “depth.” The photos that you see in magazines are lit using a combination of “soft boxes” and point light-sources. The latter tools are used to put highlights … and shadows … exactly where you want them to be, with exactly the shade and intensity that you want to see there. That’s why I find that it works well to use these two approaches side-by-side, using compositing as the “digital darkroom.” The result is not a simulation of anything real: it is absolutely contrived.