Question about samples.

Correct me if I’m wrong, but this is my current understanding of samples.

Samples are the number of light particles bounced off an object or a scene. More samples means more light bouncing around and better quality. In its simplest form, is that what samples are? Correct?

Now down to my question: How high can samples get before they are limited by pixel size?

How many samples do I need to have before my computer monitor cannot display them due to them being smaller than the pixels on my screen?

I’m not talking about the point where my eye can’t tell the difference anymore (although it would be nice to know just for reference), I just want the ceiling where my computer monitor can’t display them because they are too small.

Any help would be nice, thanks.

It’s been a few days on this, I’m still looking for some help.

Samples (at the moment) are the number of times a single pixel has been queried… sure, a ray can bounce off different objects and whatnot, but that's a different setting…

My understanding of what happens is that the first sample hits the object, and it may trace back to one light that is being emitted… say it gives the pixel a colour of red. The next sample may bounce onto another object and then onto the light… say this indirect light gives off a yellow colour. These two samples are then combined and averaged. This is why the fewer samples you have, the more 'noise' you have: all the pixels in an area are still resolving, finding their true average.
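The averaging described above can be sketched as a toy script. This is not what Cycles actually does internally; the two light contributions and their probabilities are made up purely to show how averaging more samples settles a pixel toward its true colour:

```python
import random

def sample_pixel(rng):
    # Hypothetical pixel: half the light paths find a red light directly,
    # half bounce indirectly into a yellow one (made-up RGB contributions).
    if rng.random() < 0.5:
        return (1.0, 0.0, 0.0)   # direct hit on the red light
    return (1.0, 1.0, 0.0)       # indirect bounce reaching the yellow light

def render_pixel(n_samples, seed=0):
    # Average n_samples independent light-path results, the way the
    # renderer accumulates passes into one pixel value.
    rng = random.Random(seed)
    total = [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        r, g, b = sample_pixel(rng)
        total[0] += r
        total[1] += g
        total[2] += b
    return tuple(c / n_samples for c in total)

# With few samples the green channel is noisy; with many samples it
# settles near its true average of 0.5.
print(render_pixel(4))
print(render_pixel(100000))
```

With 4 samples the green channel can land anywhere from 0.0 to 1.0 (that's the noise); with 100,000 it is very close to the converged value.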

What some people do is render at a higher resolution with fewer samples, then downsample it (so 2x or 4x the resolution… always make it a power-of-two multiple).
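That downsampling step is just block averaging. A minimal sketch, assuming a greyscale image stored as a 2D list (a real pipeline would use a proper image library and per-channel filtering):

```python
def downsample_2x(img):
    # img is a 2D list of grey values rendered at double resolution.
    # Each output pixel averages a 2x2 block, so four noisy pixels
    # merge into one smoother one -- trading resolution for noise.
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
            row.append(block / 4.0)
        out.append(row)
    return out

hi_res = [[1.0, 0.0], [0.0, 1.0]]   # tiny 2x2 "render"
print(downsample_2x(hi_res))        # one averaged pixel: [[0.5]]
```

The power-of-two advice exists so every output pixel maps onto a whole number of input pixels and no resampling filter has to guess in between.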

How many samples? Well… it all depends on your scene complexity. With many lights or a complex scene, I have had 10,000 samples not be enough; on other scenes, 100 samples is plenty. Pretty much it's on a scene-by-scene basis, not a 'this number is what you need' basis.

Pixels are a different thing altogether… that's how many dots make up one image, or how many dots are on your computer monitor. If you are only viewing the result on your monitor, your max render resolution should be the resolution of your monitor.

So basically, it's like that astrophotography technique where they take several shots of the same thing and then stack the pictures to get rid of the noise, and each "sample" is one shot?

Bishop, I think that's vaguely what happens, but if you look in the settings under the render tab you'll notice boxes that say "Max number of bounces" and "Min number of bounces". It's my belief that the samples bounce and bounce off of things until they've reached the max number of bounces, and then the brightness/shadows/colors are decided from that. But if a sample only bounces once (cube in the scene, sample enters, bounces off into space, never hits another object) then that's it.

I suppose I’m only answering the question for myself XD.
More objects, more bounces, more detail…more samples…

But I'm confused about what you mean by resolution. How do you render at a higher resolution with fewer samples?

I’m not an expert on such nitty-gritty definitions for things like sampling, but basically it’s like this.

With each pass done by Cycles, rays get fired from the camera in an even spread across the image. These samples then bounce around until they hit a light, and the result of that lightpath is rendered onto the screen in the form of a dot.

With each successive pass, more and more of these lightpaths are explored, and their results accumulate onto the render result. When enough of them have accumulated, the engine has rendered so many dots that they all blend together into a smooth, coherent result (a process known as convergence). For a single pixel, this means enough lightpaths involving it have been found that it has averaged out into what is known as the 'correct' result.
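Convergence has a well-known cost curve: Monte Carlo error shrinks roughly as one over the square root of the sample count. A small sketch, using a plain random number as a stand-in for one lightpath's contribution (its true converged value is 0.5 by construction):

```python
import random

def noisy_sample(rng):
    # Stand-in for one lightpath's contribution to a pixel; the true
    # ("converged") value of this toy distribution is 0.5.
    return rng.random()

def estimate(n, seed=0):
    # Accumulate n passes and return the running average, like the
    # render result converging over successive samples.
    rng = random.Random(seed)
    return sum(noisy_sample(rng) for _ in range(n)) / n

# Error falls off roughly as 1/sqrt(N): quadrupling the sample count
# only halves the remaining noise, which is why the last bit of grain
# in a render is so expensive to clean up.
for n in (16, 64, 256, 1024):
    print(n, abs(estimate(n) - 0.5))
```

This 1/sqrt(N) behaviour is why going from 100 to 400 samples helps visibly, while going from 5,000 to 5,300 barely does.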

In more advanced sampling algorithms, the correct result can be found more quickly through means like increasing how much some areas get sampled based on their noise level or path difficulty (adaptive or noise-aware sampling), sending a bunch of rays along similar paths when one produces a result (Metropolis sampling), or sending rays in the opposite direction (from the light to the camera) as well as from the camera, then deciding which ones to link up and render to the screen (bidirectional path tracing).
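Of these, adaptive sampling is the easiest to sketch: measure how much a pixel's samples still disagree, and spend extra samples only where they do. Everything below is illustrative, not any renderer's real code; the scene, noise levels, and threshold are invented:

```python
import random
import statistics

def shade(x, rng):
    # Hypothetical 1D "scene": pixels past x = 5 sit in a tricky,
    # caustic-like region with much noisier lightpaths (made-up numbers).
    noise = 0.5 if x > 5 else 0.05
    return 0.5 + rng.uniform(-noise, noise)

def adaptive_render(width=10, base=8, extra=64, threshold=0.01, seed=0):
    rng = random.Random(seed)
    image = []
    for x in range(width):
        samples = [shade(x, rng) for _ in range(base)]
        # Noise-aware step: if this pixel's samples still disagree a lot,
        # spend extra samples on it before moving on.
        if statistics.variance(samples) > threshold:
            samples += [shade(x, rng) for _ in range(extra)]
        image.append(sum(samples) / len(samples))
    return image

print(adaptive_render())
```

The payoff is that clean areas stop at the base sample count, so the total render budget concentrates on the pixels that actually need it.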