Brecht's easter egg surprise: Modernizing shading and rendering

Non-progressive on GPU is like Christmas!
This halves my render time, using CPU+GPU on the same machine with two Blender instances.

He deserves a BF contract! :slight_smile:

Brecht/DingTo:
In commit 59034, click on “text changed” for kernel_path.h and you can see there, in lines 1147, 1154 and 1160, that “rng” is now used where before (correctly) it was “&rng”.

It is correct now. We pass &rng as a pointer to kernel_path_trace_setup() already.

Yes, it is correct then.

Bao2: Better to read the actual source, this web viewer thing can be confusing. :slight_smile:

DingTo, it seems there is a problem in Cycles. In the 59044 buildbot for Linux, these trees appear floating, without the ground showing: (I have not checked in exactly which version it goes wrong, but at least buildbot 58461, which I also have, renders correctly.)
Perhaps it is not a Cycles problem but some change in particles or so. This is using the Pabellon Barcelona file. Can you replicate the bug?

http://i.minus.com/ibqJegOfPylL6X.jpg

That’s a wide range of 500 revisions to check. Do you render with Progressive or Non-Progressive?
Let me check.

Edit: Yes, if you just render Layer 6 (where the trees are), in current SVN the plane does not render. 2.68a is fine.

IS is not related to lighting, neutron tracking, or weather forecasting; it is a purely abstract mathematical concept related to the Monte Carlo method of solving integrals. You are fighting it by posting links to fundamental knowledge that was proven 200+ years ago. Ideal importance sampling, where the sampling pdf exactly matches the measured function, is the method with the fastest possible noise-cleaning speed.

If you have a problem with the abstract version, here is my ugly simplified one.

Imagine a pixel divided in half by some line, in any direction. Maybe it lies on a contrasting object boundary; it doesn’t matter. With uniform sampling you waste just as many precious samples on the dark side, which is barely visible and therefore has very low noise, as on the light side, which is where we need them. Importance sampling “redistributes” samples to the sub-pixel regions where they are needed, using more on the brighter part and fewer on the darker, so the variance-cleaning speed will be the best possible. Of course, that is not exactly the case with pixel filters, but I think you get the idea: details close to the center get more samples and the corners get fewer, all with the same weight (== 1 when the ideal pdf equals the filter shape, as in Cycles’ current code). In your proposed old method we waste samples in the corners, sampling noise that is already clean, and lack them at the center, where we need more because of the filter’s bell-curve peak.
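
To make the abstract point concrete, here is a minimal Python sketch (not Cycles code; the integrand and the 1-D setup are made up for illustration) of ideal importance sampling: when the sampling pdf exactly matches the integrand’s shape, every sample carries the same weight and the estimator’s variance vanishes, while uniform sampling wastes samples where the function is nearly dark.

```python
import math
import random

def f(x):
    # Made-up 1-D integrand, concentrated near x = 0
    # (like the peak of a filter's bell curve).
    return math.exp(-8.0 * x)

# Exact value of the integral of f over [0, 1].
EXACT = (1.0 - math.exp(-8.0)) / 8.0

def uniform_estimate(n, rng):
    # Uniform sampling: the dark tail gets as many samples as the peak.
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_estimate(n, rng):
    # Ideal importance sampling: pdf p(x) = f(x) / EXACT, sampled via
    # the inverse CDF of the truncated exponential on [0, 1].
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = -math.log(1.0 - u * (1.0 - math.exp(-8.0))) / 8.0
        p = f(x) / EXACT       # the pdf, proportional to f
        total += f(x) / p      # constant weight: f(x)/p(x) == EXACT
    return total / n

def sample_variance(estimator, trials, n, seed):
    # Empirical variance of an estimator over repeated runs.
    rng = random.Random(seed)
    vals = [estimator(n, rng) for _ in range(trials)]
    mean = sum(vals) / trials
    return sum((v - mean) ** 2 for v in vals) / trials
```

With the pdf matched to f, `importance_estimate` returns the exact integral for any sample count, while `uniform_estimate` still fluctuates; with a pdf only roughly proportional to the integrand (the realistic case in a renderer), the variance is reduced rather than eliminated.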

In your proposed old method we waste samples in the corners, sampling noise that is already clean, and lack them at the center, where we need more because of the filter’s bell-curve peak.

Imagine a light ray arriving at a grayscale sensor exactly between two pixels.
It is now split between the two pixels, and both pixels collect some photons.
Classic filtering works the same way…

With FIS you have “lens-type” pixels, where the center is clearer than the edges, and they do not allow photons to be split, so you get some “overlapping”. This is not “natural”. You should understand that you now need more light photons (= samples) to remove the noise,
because FIS does not allow mixing with neighbors. That’s it…

The result with FIS: harsher noise.
FIS itself is faster, but you need many more samples,
because there is no neighbor mixing.

More samples are more expensive than neighbor mixing.

TS1234, you still don’t understand importance sampling, so it’s pointless to try to convince you of anything. The paper that was shown to you demonstrates that sample sharing causes more variance in combination with importance sampling, unless there is equal sample weighting (such as with a box filter, which is poor in terms of antialiasing). It then proposes FIS as a means to still use a Gaussian filter.

FIS itself is faster, but you need many more samples,
because there is no neighbor mixing.

As it is, you don’t understand the paper at all, so you shouldn’t be making any such claims.
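
For readers following along, here is a small Python sketch of the two strategies being argued about (a toy 1-D setup, not the actual Cycles or paper code): classic filtering draws uniform sample positions and averages them with Gaussian filter weights, while filter importance sampling draws the positions from the filter distribution itself, so every sample gets equal weight. Both converge to the same filtered value.

```python
import math
import random

SIGMA = 0.5  # Gaussian pixel-filter width (arbitrary for this sketch)

def scene(dx):
    # Toy 1-D "scene": a hard black/white edge at the pixel center.
    return 1.0 if dx < 0.0 else 0.0

def filter_weight(dx):
    # Unnormalized Gaussian filter.
    return math.exp(-dx * dx / (2.0 * SIGMA * SIGMA))

def weighted_splat(n, rng):
    # Classic filtering: uniform sample positions, filter-weighted average.
    num = den = 0.0
    for _ in range(n):
        dx = rng.uniform(-2.0, 2.0)   # uniform over the filter support
        w = filter_weight(dx)
        num += w * scene(dx)
        den += w
    return num / den

def filter_importance_sampling(n, rng):
    # FIS: positions drawn from the filter distribution; equal weight 1.
    return sum(scene(rng.gauss(0.0, SIGMA)) for _ in range(n)) / n
```

By symmetry, the Gaussian-filtered value of this edge is 0.5, and both estimators approach it. The paper’s actual point, that sharing filter-weighted samples between neighboring pixels interacts badly with importance sampling of the light transport, does not show up in this single-pixel sketch; it only illustrates what the two weighting schemes do.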

Layman question here: how often does a sample land “exactly” between two pixels? Isn’t the probability of that the same as the probability that a randomly selected real number is an integer (that is, zero)?

For example, here’s an image I just rendered of a grid of 10x10 squares (using the Cycles Brick Texture). The border between them is white, with the width set to zero.


I don’t see any border. I shouldn’t, because the border region has no area, so it is infinitely improbable that the border will ever be sampled. I can thus only conclude that there is some non-infinitesimal border region in which a sample would be considered to fall between two pixels.
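
The geometric argument can be checked numerically. This is a hypothetical sketch (not Blender code): the chance of a uniform sample landing in a border band around a boundary is proportional to the band’s width, so a zero-width border is never hit.

```python
import random

def border_hit_fraction(width, n, seed=0):
    # Fraction of n uniform samples in [0, 1) that land within
    # width/2 of a boundary at 0.5 (a 1-D stand-in for the border).
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.random() - 0.5) < width / 2.0)
    return hits / n
```

Here `border_hit_fraction(0.2, 100000)` comes out near 0.2, while `border_hit_fraction(0.0, 100000)` is exactly 0.0: only a region with non-zero area can ever be sampled.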

Here is the .blend file I used to render that image. The “mortar size” is keyframed to go from 0.5 at frame 1 (all border) to 0.0 at frame 25 (no border, as seen above).

Which frame shows the appropriately-sized border region such that if a sample is taken in that region, it is split between its neighbors?
Pixel Border.blend (504 KB)

Well I hope this video where Importance Sampling is discussed settles it:

Thank you Bao2. I feel much more informed now. I think now I truly understand multiple importance sampling…

… And also the meaning of life.

Well, I was risking a ban of two months or so from some moderator here, but I think the video explains importance sampling clearly. The guy moves his leg several times to show the areas where the rays concentrate more.

Are those future new olympic athletes or gymnasts?

No, no, the outfits are just because it’s summer.

One can only wonder if the Blender Conference were in Greece, what would happen!

The bug in post 12481 seems to be solved in revision 59076.

DingTo has committed a new sky model for Cycles to his branch (Hosek/Wilkie, the exact same model that’s now used in Luxrender).
http://lists.blender.org/pipermail/bf-blender-cvs/2013-August/058341.html

This would have been quite tricky to emulate with the existing textures, so many will find it a nice addition.

Apparently he also committed the older Preetham model to his branch (still being used for OSL). He notes that very low values are needed to make sure the background isn’t completely washed out, but apparently you can’t get bright light into the scene in that case.

One can only wonder if the Blender Conference were in Greece, what would happen!

Don’t expect me to save Blender, Bao2. I won’t do it.