Brecht's easter egg surprise: Modernizing shading and rendering

I’ve got a semi-working Blackman-Harris filter in Cycles; I need someone who knows the Cycles code better than I do to take a look at it. On first tests, BH appears to be slightly blurrier in general than the Gaussian filter at the same filter width, but better at maintaining a bit of low-frequency detail, especially in the lined-disk test. Overall, though, I have the same sentiment as I do for any other filter without a negative lobe: the only person who will ever be able to tell the difference between them is the person who did the rendering. No human on the planet could put this and a Gaussian filter next to one another and know which is which.
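
For the curious, this is roughly the window function I’m evaluating. The coefficients are the standard 4-term Blackman-Harris values; the mapping of the filter width onto the window is my own guess at how it should slot into a filter table, not the actual patch:

```python
import math

# Standard 4-term Blackman-Harris window coefficients.
A0, A1, A2, A3 = 0.35875, 0.48829, 0.14128, 0.01168

def blackman_harris(x, width):
    """Filter weight at offset x from the pixel centre, for a filter of
    total width `width`. The window is ~0 at the edges and ~1 at x = 0."""
    if abs(x) > 0.5 * width:
        return 0.0
    t = 2.0 * math.pi * (x / width + 0.5)  # map [-w/2, w/2] onto [0, 2*pi]
    return (A0
            - A1 * math.cos(t)
            + A2 * math.cos(2.0 * t)
            - A3 * math.cos(3.0 * t))
```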

EDIT:
Waiting for comment from Bao ASAP!

Waiting to see people try to figure out which is which. Winner gets the “The eye sees what it wants to see” prize.

I am going back to the oculist tomorrow.

A question for those who know.

I have started trying to learn a bit about physically based shaders and OSL, and I have been reading a few papers here and there (man, I wish I had paid more attention in multivariable calculus and linear algebra).

My question is: where is the Fresnel term in Cycles’ glossy shader? I recall it used to be there and then it got axed. In all the microfacet shading papers I’ve read it’s there. It was also there in the OSL paper I read from Sony.

Nodes:
Using regular nodes, would a glossy shader plus a Fresnel node driving a mix node with one empty input closure socket work? I’m trying to control the Fresnel value of a single shader node, not a mix of two.
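
In bpy terms, here is a quick sketch of the setup I mean (the node type names are the stock identifiers; I haven’t verified this is the recommended arrangement): the Fresnel output drives the Fac of a Mix Shader whose first closure socket is left empty.

```python
import bpy

mat = bpy.data.materials.new("fresnel_glossy")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

fresnel = nodes.new('ShaderNodeFresnel')     # Fac = Fresnel(IOR, view angle)
glossy = nodes.new('ShaderNodeBsdfGlossy')
mix = nodes.new('ShaderNodeMixShader')
out = nodes.new('ShaderNodeOutputMaterial')

# Fresnel drives the mix factor; the first closure socket stays empty,
# so the glossy contribution fades out at facing angles.
links.new(fresnel.outputs['Fac'], mix.inputs['Fac'])
links.new(glossy.outputs['BSDF'], mix.inputs[2])
links.new(mix.outputs['Shader'], out.inputs['Surface'])
```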

OSL:
Would calling the glossy shader as microfacet_beckmann(N, roughness, ior) work?

P.S.

the microfacet model I found in most papers was this one:

f(l, v) = F(l, h) G(l, v, h) D(h) / (4 (n·l)(n·v))

F is the Fresnel term and D is the distribution function, which I figure you choose on the node as Beckmann, GGX, etc. …not too bothered about ‘G’ for the moment.
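
To make that concrete, here is a small toy evaluation of that formula (my own sketch, not Cycles code), with Schlick’s approximation standing in for F, the Beckmann NDF for D, and G simply set to 1 since I’m ignoring it for now:

```python
import math

def fresnel_schlick(h_dot_l, f0):
    """F: Schlick's approximation to Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - h_dot_l) ** 5

def beckmann_d(n_dot_h, alpha):
    """D: Beckmann normal distribution function with roughness alpha."""
    c2 = n_dot_h * n_dot_h
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)

def microfacet_brdf(n_dot_l, n_dot_v, n_dot_h, h_dot_l, alpha, f0=0.04):
    """f(l, v) = F(l, h) G(l, v, h) D(h) / (4 (n.l)(n.v)), with G = 1 here."""
    f = fresnel_schlick(h_dot_l, f0)
    d = beckmann_d(n_dot_h, alpha)
    g = 1.0  # placeholder; a real shader would use e.g. Smith's G term
    return f * g * d / (4.0 * n_dot_l * n_dot_v)
```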

@TS1234 not again, please…

Ideally, a sample should be taken equally from every pixel position,
and then weighted into neighbouring pixels.
This is how a correct filter MUST work.

just lol.

Edit:

For anyone who did not follow that old discussion of ours: he obviously does not understand what importance sampling means and why it is so much better. Its ideal noise-reducing property is a huge advantage of the current method used in Cycles, backed by strong mathematical proofs, and yet he still promotes archaic uniform sampling with weights.

storm,
but as you can see in the Cycles vs. Mitsuba image,
the noise is more visible in Cycles,
because it is not mixed into neighbouring pixels.

and storm, you are wrong again!

https://sites.google.com/site/isrendering/
Importance sampling offers a means to reduce the variance by skewing the samples toward regions of the illumination integral that provide the most energy. For instance, the direction of specular reflection or a bright light source within an environment more likely represent the final value of the integral than a random sample.

And the sample transformation I described has nothing to do with energy (light intensity): there are no formulas in Cycles that check energy.

do you understand?

The filter system is “dumb”: it does not check any energy.
Look at the source if you don’t believe me!

And this has NOTHING to do with importance sampling…

It is only a clever trick to “emulate” filtering…
but it is not 100% correct.

He’s right, Cycles uses a type of filtering I personally haven’t seen anywhere else. It doesn’t do what could traditionally be considered “filtering”, as far as I can tell. It gets around the complicated issue of padding around buckets (which apparently is a problem on the GPU) by using this “importance sample filter” method. Whether this is better or worse, though, is kind of a matter of preference. The filter type is never going to make or break your image as a whole. I’m not sure where I stand on this new filtering method, as I’ve never really had an issue with traditional filtering methods, but setting an image rendered in Cycles next to one rendered in Arnold, I can’t see the difference between the two as far as filters are concerned. Seems like a wash to me.

Here is what most people understand by importance sampling:
(which is what storm wrongly thinks is in dispute here, but that was never debated by me)


Whether this is better or worse, though, is kind of a matter of preference. The filter type is never going to make or break your image as a whole.

The current system is not correct… but most people do not notice this…
Most people only notice hard noise… that is by design of this filter system.

The whole image is not really wrong…
But then we could just remove such filtering from Cycles.
Only the box filter is correct at the moment,
but without neighbouring pixels we need more samples,
and the box filter is bad! It is the last choice for me.

Please compare the noise in Cycles and Arnold with the same settings (if possible?).

Maybe you’re misunderstanding the point of the Gaussian filter? It is an antialiasing filter, not a noise filter.

There are two ways of going about this: scattering a ray’s color over several pixels with a weighting function, or gathering a pixel’s color from rays that fall outside that pixel’s frustum. It is my understanding that Cycles does the latter because scatter operations are very expensive on GPUs.
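
Here is a quick sketch of the scatter (splatting) variant, roughly the way Mitsuba/Lux-style renderers accumulate samples; the buffers and the truncated Gaussian are hypothetical, just for illustration. Each sample touches every pixel inside the filter radius, which is exactly the operation that needs atomic adds on a GPU:

```python
import math

def gauss_weight(dx, dy, radius):
    """Truncated 2-D Gaussian filter weight."""
    r2 = dx * dx + dy * dy
    return math.exp(-2.0 * r2) if r2 <= radius * radius else 0.0

def splat(image, wsum, x, y, value, radius):
    """Scatter a single sample at film position (x, y) into every pixel
    whose filter support covers it. image and wsum are 2-D accumulation
    buffers; the final pixel value is image / wsum."""
    x0, x1 = int(math.floor(x - radius)), int(math.ceil(x + radius))
    y0, y1 = int(math.floor(y - radius)), int(math.ceil(y + radius))
    for py in range(max(y0, 0), min(y1 + 1, len(image))):
        for px in range(max(x0, 0), min(x1 + 1, len(image[0]))):
            w = gauss_weight(px + 0.5 - x, py + 0.5 - y, radius)
            image[py][px] += w * value
            wsum[py][px] += w
```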

Noise in Arnold and noise in Cycles are almost identical with same settings and render time for simple scenes.

A noise filter?
No!

If you shoot a sample exactly between 2 pixels,
you can mix its value into the neighbours or not.
Cycles doesn’t do this, so you need more samples to smooth that sample out.

But if you mix it into the neighbours, you get less noise.

Every other renderer does this: Mitsuba, Lux, YafaRay…
so you get “unsharp” noise.

The method Cycles uses is different. But it is not wrong. It will, in the limit, converge to the same result. It is a tradeoff, you have to live with more noise, but you get a more efficient and simpler GPU implementation in return.

I thought I read in another thread some time ago that trading off noise for a moire pattern here and there isn’t all that good because it takes a long time to go away.

The sharp, fine noise of Cycles also means it’s easier to get rid of in post-processing: all of the fireflies are just one pixel in size, which, for example, is a bit easier to remove than the 3x3 fireflies that Luxrender is known to create. I also read a thread on the Lux forum of someone trying to get the same sharpness as in Cycles, because of the massive number of samples needed to converge everything down to a similar level of sharpness.

In all, filtering the noise to soften it will basically mean trading ultra-high-frequency noise for lower-frequency noise, and I don’t know if that is desirable, as a lot of the noise filters in paint programs work best when the noise is at a frequency like that seen in Cycles.

@TS1234: importance sampling has nothing to do with what you are saying. It is a basic, fundamental method of reducing variance in Monte Carlo integration. You can sample uniformly and apply a weight; that is the dumb old method you are trying to promote. Importance sampling (http://en.wikipedia.org/wiki/Importance_sampling) is different, and it has a strong advantage in noise cleaning. Cycles uses it to sample the filter, increasing the sample probability in proportion to the filter curve, not the OLD DUMB STUPID NAIVE uniform sampling with a weight applied afterwards. When the filter curve is almost flat, you get the same result/performance/noise-cleaning speed. With a more complex filter shape, importance sampling gains a strong, mathematically proven property (see the link above, and google for details): the variance decreases MUCH faster. I absolutely do not understand why you keep spreading wrong information; read any probability theory book, this is basic and well known.
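
To illustrate the distinction with a toy 1-D example (my own sketch, using a tent filter because its inverse CDF is analytic; `radiance` is a stand-in for whatever the path tracer returns at a film offset): both estimators converge to the same filtered pixel value, but the importance-sampled one carries a constant weight of 1 per sample.

```python
import math
import random

def tent(x, radius):
    """Unnormalized tent filter of the given radius."""
    return max(0.0, 1.0 - abs(x) / radius)

def sample_tent(radius):
    """Draw a filter offset from the tent distribution via its inverse CDF."""
    u = random.random()
    if u < 0.5:
        return radius * (math.sqrt(2.0 * u) - 1.0)
    return radius * (1.0 - math.sqrt(2.0 * (1.0 - u)))

def radiance(x):
    """Stand-in for the path-traced value at film offset x."""
    return 0.5 + 0.5 * math.sin(3.0 * x)

def estimate_uniform(n, radius):
    """Old way: uniform sample positions, each weighted by the filter."""
    total = wsum = 0.0
    for _ in range(n):
        x = random.uniform(-radius, radius)
        w = tent(x, radius)
        total += w * radiance(x)
        wsum += w
    return total / wsum

def estimate_importance(n, radius):
    """Filter importance sampling: positions drawn from the filter itself,
    so every sample contributes with weight 1."""
    return sum(radiance(sample_tent(radius)) for _ in range(n)) / n
```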

@skv

The method Cycles uses is different. But it is not wrong. It will, in the limit, converge to the same result. It is a tradeoff, you have to live with more noise, but you get a more efficient and simpler GPU implementation in return.

Wrong again. It gets LESS noise. Are you all blind?

I agree with TS1234. It’s hard to compare, but renders in Lux, Octane and others have a softer light feel, which I believe is a result of this “bleeding” between pixels. It would be nice to have the option in Cycles if you don’t want the harder look.

storm, to be fair, there is a LOT more “hard” noise in Cycles at low sample levels than there is in other path tracers that use traditional filtering methods.

Part of me wants to believe that this is the best available option for image filtering. Another part of me says that if it were, it would be used by the best-in-class renderers across the board, since the paper is circa 2007, but it isn’t, which creates some doubt. However, images seem to converge to the same result regardless, and no one is going to notice which pixel filter you used anyway, so it’s pointless to argue about.

In path tracing, importance sampling will reduce noise. In this particular application, however, given the same number of rays, I would say it generates slightly more noise. In antialiasing, all samples may have different contributions to a single pixel, but they all contribute equally to the overall image. A sample that contributes little to one pixel may contribute a lot to a neighboring pixel.

By not splatting across pixels, the renderer is discarding information.

Are you all blind?

Until we have an apples to apples comparison (same render engine, only AA filter different), I can’t answer that. Putting a Mitsuba render next to a Cycles render tells me nothing, all I can do is theorize.

We are solving the integral of the light transport equation, remember? Using Monte Carlo (MC) or other methods (there are many others, btw!). The result will converge to the SAME PICTURE in Octane, Arnold, Cycles, anything: 2 + 2 = 4 no matter how you compute it. If we stick with MC, then importance sampling with the exact inverse CDF of the function (not always available, but in our case it is; the glossy BSDF, for example, has a not-ideal but close fit) is simply the ideal method. Period. The only thing better is a precalculated constant ^^.

I absolutely do not care whether you like the image after 10 samples or not. That is a road to nowhere: back to 1960s scanline rendering, or maybe the 1990s king-of-the-hill Z-buffer texture mapper with its unbelievably fast pure-hardware accelerator, a.k.a. the “gamer video card”. That is everything you love above in this thread: hardware AA sampling (yes, dedicated on-chip units), extremely wide cache buses, etc. I do not care about it. We are taking the next step: GI lighting.

You are looking at the wrong pixels, at the wrong noise. You make a scene full of gradients and switch “renderers”, searching for whichever makes a pleasant picture faster. That is WRONG. With GI you can invent new scenes, new shading, without the game-like feel that everyone on the planet has been fed thanks to games and cheap TV ads.

How about the noise in the 4th-order bounce from that left candle behind the vertical decor stripes, in motion? How fast can you clean that noise? That is such a complex task, needing so many samples, that it can only be measured with a strong mathematical method, or with a side-by-side comparison of week-long motion renders that a home user wouldn’t even consider doing.

The only way to test a correct filter in Cycles is to modify the code so that it mixes correctly into neighbouring pixels.
For simplicity, we should test with only one big tile and render the same number of samples.
Then you will see the difference.