Brecht's easter egg surprise: Modernizing shading and rendering

No, that’s another issue.

What I am saying is, a pixel in a rendered image with antialiasing (that is, multiple samples) is not the result of one point on one object, it is the average of all points sampled within that pixel. These points might belong to any number of objects. If you render an image in Cycles with 128 samples, that means 128 distinct points will have been sampled. If you render an ID pass, only one of these points (probably the first) will actually end up in the resulting image, as averaging doesn’t make sense.
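A toy illustration of why averaging is fine for beauty samples but meaningless for an ID pass (plain Python with made-up sample values, nothing Cycles-specific):

```python
# Toy illustration: averaging per-pixel samples works for colors,
# but not for object IDs (hypothetical sample data, not the Cycles API).

def average(samples):
    return sum(samples) / len(samples)

# Four samples within one pixel, hitting two different objects:
colors = [0.9, 0.9, 0.1, 0.1]   # grayscale radiance per sample
ids    = [7, 7, 3, 3]           # object ID per sample

print(average(colors))  # 0.5, a sensible antialiased pixel value
print(average(ids))     # 5.0, meaningless: no object has ID 5
```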

What worried me is that you said you’d have to render out at a large size. For someone wanting a final output of 1080p (animation), that sounds disastrous (time and money), but I guess there’s no way around that.

It’s really not that bad once you realize that rendering an image at 4x the resolution is the same number of samples as rendering it with 4x the samples. If you render it at 4x the samples, the averaging is done by the renderer; if you render it at 4x the resolution, you can average the samples yourself (by downscaling the image to the desired resolution).
Really, the only increase will be in the framebuffer memory requirements and the per-pixel compositing cost.
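The resolution-vs-samples equivalence can be sketched in a few lines of plain Python (a toy nested-list “image”, not any real renderer API): downscaling a render done at twice the resolution in each dimension by averaging 2x2 blocks performs exactly the averaging the renderer would have done with 4x the samples per pixel.

```python
# Sketch: averaging 2x2 blocks of a double-resolution render is the
# same operation the renderer performs with 4x the samples per pixel.

def downscale_2x(image):
    """Average each 2x2 block of a 2H x 2W image into one pixel."""
    h, w = len(image) // 2, len(image[0]) // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = (image[2*y][2*x] + image[2*y][2*x + 1] +
                     image[2*y + 1][2*x] + image[2*y + 1][2*x + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

hi_res = [[1.0, 0.0],
          [0.5, 0.5]]          # four "samples" of the same final pixel
print(downscale_2x(hi_res))    # [[0.5]]
```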

Ah yes, I see, I was thinking of samples and not resolution.

Yes, this is the difference from LuxRender and others.
It is easy to change and has nearly no speed impact, but needs more (V)RAM.

Cycles can only handle different fixed multipliers for diffuse, glossy, etc., but again globally for the whole image.
This is called non-progressive.

@Zalamander/Bao2, I see you have a lot of knowledge regarding render engines. Is Pixar subdivision suitable for displacement in Cycles? I’ve seen that this algorithm can perform displacement very well, even in real time. It would be great if OpenSubdiv could speed up development of Cycles.

I have a feeling that the ‘OpenSubdiv’ package wouldn’t be integrated directly into Cycles… but more into the mesh generation for Cycles, just as all the other modifiers in Blender are applied before render time.

I was actually just trying to do this the other day and it was really frustrating. The issue I was running into is that the shadow pass doesn’t take into account shadows created by GI or IBL. This is really important for me when I’m comping characters into live-action plates.

A shadow catcher material would really go a long way in making this more livable and making Blender more competitive.

Well Brecht works fast


Cool, so this works for the render engine as well. I thought it was more of a viewport thing to let you see a highly subdivided sub-d limit surface, seeing as Reyes renderers have always rendered sub-d surfaces to a high level because of their use of micropolygons. I remember that caught me by surprise the first time I used Aqsis.

But isn’t tile rendering a similar concept? With it enabled, Cycles computes a (fixed) number of samples in a restricted portion of the image. Can’t we, in a similar fashion, further subdivide the tile until we obtain individual pixel sampling, and then decide how many samples to spend?
The whole thing, it seems to me, is about postponing the decision of how many samples to shoot.
Right now this is done once at the beginning, just reading inputs like the total samples or the render-layer sample limit. Is this right?
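The idea in the question, deciding the sample count per pixel instead of once up front, could be sketched like this. This is purely illustrative Python; `per_pixel_budget` and the noise values are hypothetical, and Cycles’ tile scheduler does not actually work this way.

```python
# Toy sketch: split a tile down to single pixels and give each one a
# sample budget based on an estimated noise level, instead of fixing
# the sample count once at the start of the render.
# (Hypothetical function, not part of Cycles.)

def per_pixel_budget(noise_estimate, base_samples=16, max_samples=256):
    """More samples where the estimated noise is higher."""
    budget = int(base_samples * (1.0 + 8.0 * noise_estimate))
    return min(budget, max_samples)

# Hypothetical noise estimates for a 2x2 tile subdivided into pixels:
tile_noise = [[0.0, 0.1],
              [0.5, 1.0]]
budgets = [[per_pixel_budget(n) for n in row] for row in tile_noise]
print(budgets)  # [[16, 28], [80, 144]]
```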

Exciting news from the mailing list in terms of Cycles’ implementation of render-time subdivision surfaces.

In Cycles, OpenSubdiv will replace the incomplete subdivision surface implementation, to enable faster BVH builds and lower memory usage for subdivision surfaces, as well as Ptex rendering support.

Integration will require a lot of work inside Blender, however, so it will take some time before there is an official release with OpenSubdiv support. Support for multires and sculpt in particular will be complex. A plan for this work and a set of actions still has to be defined.

Nevertheless, Brecht is already on it, and expects to have a first WIP demo of Cycles-OpenSubdiv at Siggraph next week!

In other words, there’s already work underway for Pixar subdivision support for Cycles, but don’t expect it for the rest of Blender anytime soon as there’s going to be a lot of work that needs to be done.

Great!
Cycles should fully support multires; multires and Ptex in Cycles could really improve the workflow for creatures.

PTex support would make me so happy.

OpenSubdiv! And probably fully integrated into the BVH, with respect for motion blur like the other GSoC work. I can’t believe it.

Hi,
not sure where to post this, but with all the expansion of Blender’s color space expertise (which is great), the Curves visual modules have become a bit obsolete in some places. In many cases, due to linear space, there’s not enough resolution at the low end of the curve.
Not sure if these are the best solutions, but could there be checkboxes to switch between linear/log space, or even better, though probably not so beginner-friendly, a “gamma” option for the curve graph display?
Any thoughts, or is this an issue for me only?

Log is relevant for better compression of high-dynamic-range data,
but linear is good enough and has a nearly “unlimited” uncompressed range.
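A toy numeric illustration of that low-end resolution point, assuming a simple 10-stop log2 encoding and 256 code values (both hypothetical choices, not Blender’s actual internals):

```python
# Sketch of the trade-off: with a fixed number of code values, linear
# encoding spends very few of them on the dark end, while a log
# encoding spreads them across stops. (Illustrative math only.)
import math

LEVELS = 256  # e.g. an 8-bit display/UI resolution

def linear_codes_below(v):
    """Code values available below linear value v (0..1)."""
    return int(v * (LEVELS - 1))

def log2_codes_below(v, stops=10):
    """Same, for a simple 10-stop log2 encoding of v."""
    norm = (math.log2(max(v, 2.0 ** -stops)) + stops) / stops
    return int(norm * (LEVELS - 1))

dark = 0.01  # a darkish linear value
print(linear_codes_below(dark))  # only a couple of linear code values
print(log2_codes_below(dark))    # many more code values in log space
```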

Visually the curve can hardly go any lower, but on the right you can see it’d be great to change the y-axis gamma to have some resolution to control the shape in the darker region.
Of course Vector curves can be used here, but apart from them not having a combined channel, they cannot be used in the Scene > Color Management tab, and I guess in many more places where there can be a scale mismatch (addons, color/non-color conversions, etc.).

You don’t need a gamma curve here, you need the ability to zoom in to the graph, as seen in Nuke’s lookup node, for example.

I never said gamma node; what I tried to say is gamma of the y-axis display: changing the amount of non-linearity of the y axis so there is no need to zoom.
But I guess it’d be too complicated, so just a switch between color/non-color space would probably solve something.
On the other hand, I might not be seeing some obvious mistake I’m making.
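The display-gamma idea could be sketched like this: a minimal Python sketch assuming hypothetical `to_display`/`from_display` helpers (not an existing Blender option) applied only when drawing the graph, never to the stored data.

```python
# Sketch of a y-axis display gamma: remap curve values for *display
# only*, so the dark region gets more screen space without changing
# the underlying linear data. (Hypothetical helpers, illustrative only.)

def to_display(y_linear, gamma=2.2):
    """Linear curve value -> y position on the graph (0..1)."""
    return y_linear ** (1.0 / gamma)

def from_display(y_screen, gamma=2.2):
    """Graph y position -> linear curve value."""
    return y_screen ** gamma

# A small linear value occupies much more of the axis when displayed:
print(to_display(0.01))                          # ~0.12 of axis height
print(round(from_display(to_display(0.25)), 6))  # round-trips to 0.25
```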

DingTo / Brecht: In file
intern/cycles/util/util_task.cpp

I would delete lines 192 to 195:
else {
/* manual number of threads */
num_threads;
}

58457 :
“…
If anyone thinks bigger commits
would be better, please let me know…”

Well, I was going to keep my mouth shut, but since he asks, I’ll answer:
I was under the impression he had some kind of artificial-intelligence robot committing every change on every line he made…
Just work on your code at home, and when some relevant advance is done, commit it. Doing it for every line is too much.
My point of view, of course.