Pixel-based (instead of picture-based) sample weighting ->
<b>done</b>
There is nothing visual to show here - only many code changes
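To illustrate what per-pixel weighting means, here is a minimal sketch (my own illustrative names, not the actual Cycles code): each pixel accumulates its own sample sum and its own weight, so the final color is resolved per pixel instead of from one picture-wide sample count.

```python
# Hypothetical sketch of pixel-based sample weighting; names are
# illustrative, not taken from the Cycles source.

def add_sample(accum, weight, x, y, color):
    """Accumulate one sample for pixel (x, y); the pixel keeps its own weight."""
    accum[y][x] = [a + c for a, c in zip(accum[y][x], color)]
    weight[y][x] += 1.0  # per-pixel sample count, not a global one

def resolve(accum, weight, x, y):
    """Final pixel color = weighted average of that pixel's samples only."""
    w = weight[y][x]
    return [a / w for a in accum[y][x]] if w > 0 else [0.0, 0.0, 0.0]

W, H = 2, 2
accum = [[[0.0, 0.0, 0.0] for _ in range(W)] for _ in range(H)]
weight = [[0.0] * W for _ in range(H)]
add_sample(accum, weight, 0, 0, [1.0, 0.5, 0.0])
add_sample(accum, weight, 0, 0, [0.0, 0.5, 1.0])
print(resolve(accum, weight, 0, 0))  # -> [0.5, 0.5, 0.5]
```

The point of the per-pixel weight is that different pixels may end up with different sample counts, which is exactly what the NPR idea below needs.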
My own concept, Noise Priority Rendering (NPR) ->
<b>done</b>
In theory it should eliminate noisy areas much faster.
It simply renders non-stable pixels (auto-detected in a learning process) with many more samples!
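The core of the idea can be sketched in a few lines (a sketch of my reading of the concept, with made-up numbers, not the actual implementation): pixels whose learned noise value is above a threshold get a multiple of the base sample count.

```python
# Illustrative sketch of the Noise Priority Rendering idea: noisy
# ("non-stable") pixels receive a boosted sample budget. The threshold,
# base count, and boost factor are assumptions for the example.

def samples_for_pixel(noise, base=16, boost=8, threshold=0.05):
    """Noisy pixels get boost * base samples; stable pixels keep the base."""
    return base * boost if noise > threshold else base

noise_map = [0.01, 0.20, 0.03, 0.50]  # learned per-pixel instability
budget = [samples_for_pixel(n) for n in noise_map]
print(budget)  # -> [16, 128, 16, 128]
```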
Render a specified shader / material / object (first path from camera) with more samples than the others.
So you can render a glass shader with more samples than any other shader ->
<b>done</b>
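A minimal sketch of this feature, as I understand it (the multiplier table and function names are assumptions, not the patch's API): the sample count for a pixel depends on which material the first camera ray hits.

```python
# Hedged sketch: per-material sample multipliers keyed on the material
# hit by the first path from the camera. Names are made up for illustration.

SAMPLE_MULTIPLIER = {"glass": 4, "glossy": 2}  # everything else: 1x

def samples_for_material(material, base=32):
    return base * SAMPLE_MULTIPLIER.get(material, 1)

first_hit = ["glass", "diffuse", "glossy"]
print([samples_for_material(m) for m in first_hit])  # -> [128, 32, 64]
```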
Stop rendering a specified material / object (first path from camera) after a defined number of samples ->
<b>done</b>
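The sample cap can be sketched the same way (again an assumed shape, not the patch itself): once a pixel whose first camera hit is the capped material reaches its limit, it is skipped in later passes.

```python
# Sketch of the per-material sample cap (assumed behaviour): pixels of a
# capped material are skipped once they reach their sample limit.

SAMPLE_CAP = {"diffuse": 64}  # stop sampling diffuse pixels after 64 samples

def should_sample(material, samples_done):
    cap = SAMPLE_CAP.get(material)
    return cap is None or samples_done < cap

print(should_sample("diffuse", 63))  # -> True
print(should_sample("diffuse", 64))  # -> False
print(should_sample("glass", 999))   # -> True (no cap set)
```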
Stop rendering at defined noise level (npp = noise per pixel) - canceled
Well count me as one of those who will be eagerly waiting for your first test results.
When you look at all of the other features currently being developed for Cycles (storm_st’s bidirectional sampling and volume rendering, Brecht’s render pass support, and the environmental importance sampling from Mike Farny), it is possible that we could eventually see a revolution in Open Source Rendering.
black = no noise (stable)
white = high noise (non-stable)
background is very stable
shadows are not so stable
diffuse is not so stable (cube)
glossy is very non-stable (monkey)
note that the shadows on the cube are much more non-stable than the cube itself
note that the glossy material looks stable in the image but very non-stable in the noise map (this is because there is nothing to reflect - otherwise you would see it)
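One plausible way to build such a noise map (an assumption on my part, not the author's code) is to track how much each pixel's running average still moves when a new sample arrives: a stable pixel's delta shrinks quickly, while a flickering pixel keeps jumping.

```python
# Sketch: per-pixel instability as the movement of the running mean.
# black = small delta (stable), white = large delta (non-stable).

def update_noise(mean, n, sample):
    """Return (new_mean, instability) after folding in one more sample."""
    new_mean = mean + (sample - mean) / (n + 1)
    return new_mean, abs(new_mean - mean)

mean, n = 0.0, 0
for s in [1.0, 0.0, 1.0, 0.0]:  # a flickering (non-stable) pixel
    mean, delta = update_noise(mean, n, s)
    n += 1
print(delta)  # delta stays large: the pixel still counts as non-stable
```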
While it seems like you’re doing a good job so far, I do wonder.
Does the noise analysis run on every pass or just every N passes? If it runs on every pass, you might be able to reduce overhead by letting the user have the analysis run only after every 100 passes or so.
It runs every pass - but it is not very time-consuming.
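For completeness, the suggested interval option would be trivial to add; a minimal sketch (hypothetical, the patch currently analyzes every pass):

```python
# Sketch of the suggested option: run the noise analysis only every
# N passes instead of on every pass. Interval value is an assumption.

ANALYSIS_INTERVAL = 100  # user-tunable, per the suggestion above

def maybe_analyze(pass_index, analyze):
    if pass_index % ANALYSIS_INTERVAL == 0:
        analyze(pass_index)

ran = []
for p in range(1, 301):
    maybe_analyze(p, ran.append)
print(ran)  # -> [100, 200, 300]
```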
new test image:
glossy is now without roughness (= a mirror) - it is stable
the other 3 monkeys have a glass shader
and soft shadows
look at the difference
In my first images above I see a small problem with "flickering" lights and shadows.
This generates higher noise values than are actually visible.
There is no direct way to fix this in Cycles.
Workaround:
every 16 samples/pixel a temporary image map is stored and then compared with the final image
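The workaround can be sketched like this (the interval comes from the post; the comparison details are my assumption): snapshot the image every 16 samples per pixel, then flag pixels that have drifted since the snapshot as non-stable.

```python
# Sketch of the described workaround: compare a snapshot taken every
# 16 samples/pixel against the current image; pixels that keep changing
# between snapshots are flagged as non-stable. Threshold is assumed.

SNAPSHOT_INTERVAL = 16

def flicker_map(snapshot, current, threshold=0.05):
    """Per-pixel: did the value move more than the threshold since the snapshot?"""
    return [abs(c - s) > threshold for s, c in zip(snapshot, current)]

snapshot = [0.50, 0.50, 0.50]  # image stored 16 samples ago
current  = [0.51, 0.80, 0.50]  # image now
print(flicker_map(snapshot, current))  # -> [False, True, False]
```

Averaging over snapshot intervals like this smooths out the sample-to-sample "flicker" that would otherwise inflate the noise values.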
I went to bed thinking on your approach and wondering how it would behave in complex scenarios (more complex geometry than suzanne on a plane, lighting, materials, DOF, Motion Blur…)… With that last image you just answered my question. It holds up perfectly!
I just figured that having it run only once every 100 passes or so would help prevent cases where an undersampled area is considered stable simply because it's an area where samples are slow to gather - like areas where caustics would form, where you would see very few samples at first simply because it's a difficult situation for the path tracer.
This could also be seen as a reason to make the noise-priority system tile-based when that is implemented, as you would sometimes get better results if the system waited for an entire tile to be noise-free rather than for individual pixels. The reasoning is roughly the same as above: you could have a caustic region that never converges because the system thinks it doesn't need more sampling, when in fact the area would just be slow to receive samples until the actively sampled regions become smaller and fewer.
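The tile-based criterion the post argues for can be stated in one line (a sketch of my reading of the suggestion, not an existing Cycles feature): a tile keeps receiving samples until every pixel in it is below the noise threshold, so slow-to-converge spots such as caustics are not starved just because their neighbours finished early.

```python
# Sketch of tile-based convergence: a tile is only "done" when ALL of
# its pixels are stable, not when most of them are. Threshold assumed.

def tile_converged(tile_noise, threshold=0.05):
    """A tile is done only when every pixel's noise is under the threshold."""
    return all(n <= threshold for n in tile_noise)

print(tile_converged([0.01, 0.02, 0.04]))  # -> True
print(tile_converged([0.01, 0.30, 0.02]))  # -> False (caustic-like pixel)
```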