Cycles status (as of May 14th)

According to the Project Mango blog:

Here's an update on what we're working on in Cycles. Up to now, the features that we've added for the 2.64 release are BVH build time optimizations, exclude layers, motion/UV passes, a filter glossy option, a light falloff node, a fisheye lens (by Dalai Felinto), ray length access (by Agustin Benavidez), and some other small things.
There’s also now a page in the Cycles manual about Reducing Noise in renders. All of these tricks are used in production with other render engines and should apply to path tracers in general.
Currently there are still three major features on my list to implement: motion blur, volumetric rendering and a contribution pass. We already have a vector pass now for doing motion blur in compositing, but real raytraced motion blur would be nice as well. The code for this is mostly written, but work is needed to make it faster; currently it's slowing down our raytracing kernel even if there are no motion blurred objects.
http://mango.blender.org/wp-content/uploads/2012/05/cycles_motion_blur-540x303.png

Volumetrics should be a target for 2.65, so the release after the one we're working on now. There's already a developer who has patches to add volumetrics to Cycles; we'll need to review the design and add support for rendering smoke and point cloud datasets. Currently volumetrics are being rendered with Blender Internal, and in production they would probably end up in separate layers anyway, but it's not very convenient to mix render engines and have to switch back and forth.
The idea for the contribution pass is that it's like the Only Shadow material option in Blender Internal, but more flexible and interacting with indirect light, to help with compositing objects into footage. How exactly this will work is still unsure.
Everything else is related to optimizing performance in one way or another, either by making things simply render faster, reducing noise levels or adding tricks to avoid noise. Baking light to textures for static backgrounds may be added too, but if at all possible I'd like to avoid this. Most of the optimizations we are looking at will be CPU only; on the GPU there's not as much we can do due to hardware restrictions. Some directions we will look in:

  • Improving core raytracing performance (SIMD, spatial splits, build parameters).
  • Decoupling AA sample and light sample numbers. Currently one path is traced for each sample, but depending on the scene it might be less noisy to distribute samples in another way.
  • Better texture filtering and use of OpenImageIO for tiled image cache on the CPU, so we can use more textures than fit in memory.
  • Texture blurring behind glossy/diffuse bounces. Like the Filter Glossy option, this can help reduce noise at the cost of some accuracy, especially useful for environment maps or high frequency noise textures.
  • Non-progressive tile based rendering, to increase memory locality, which should avoid cache misses for main memory and the image cache.
  • Adaptive sampling, to render more samples in regions with more noise.
  • Better sampling patterns. I've been testing a few different ones, but they couldn't yet beat the Sobol patterns we use when using many samples (> 16); I still hope we can find something here.
  • Reducing memory usage for meshes (vertices, normals, texture coordinates, …).
  • Improving sampling for scenes with many light sources (just rough ideas here, not sure it will work).

Hopefully these changes, together with careful scene setups, will be sufficient to keep render times within reason, but beyond that we can still look at things like irradiance caching or other caching-type tricks. I hope to avoid these because they don't extend well to glossy reflections. I'd like to try to keep things unbiased-ish; it seems to be the direction many render engines are moving in, and it's easier to control, maintain and parallelize over many cores.

This was a really juicy post for me! As it was for a lot of you guys here, I bet!

One line in particular caught my attention, so here's my reply, followed by Brecht's answer:

It’s always a pleasure to hear Cycles news directly from the source! :wink:
One thing I'm curious about: "Adaptive sampling, to render more samples in regions with more noise."
I take it Cycles will become "noise aware"; my question is: will it just throw more samples at difficult areas in the same pass, or will it stop sending samples to areas that are already clean? The latter would require a sort of "smoothness parameter", which would be useful for always achieving renders with the same (visual) level of smoothness.

brecht says:
May 14, 2012 at 5:57 pm
I'm not sure yet how it will work exactly; it will depend on the algorithm. Making it really robust is very hard, and most algorithms already fail on simple things like soft shadows, so I haven't decided yet which one to use. Probably there will be a way to configure the integrator so that it renders to a certain smoothness level.
Thinking about this again, I'd say a visual smoothness parameter is indeed needed, because you can't guess how many samples a scene needs if its lighting setup changes over time in the shot; e.g. a light turns on/off, a window is opened/closed, a sun beam hits a glass… and so on…
What do you guys think about it?

I've had a render with difficult patches lately: while the rest of the image was fine with about 300 samples, those areas needed much more.
So I was dreaming about a solution where the user/artist could give Cycles an indication of where to concentrate its efforts. Imagine some kind of "sample density" layer in the UV editor while it is set to "Render Result": while the engine is rendering, you'd paint it like a weight paint. Red = high priority, lots of samples needed; blue = normal priority.

Of course, it'd only work for stills… And it would work best with the possibility to resume a render where we left off (just like in LUX).

If it has to be fully automated, on the other hand, I have no idea how one could have the engine decide which areas need samples… How can it tell the difference between high-contrast detail and noise, for instance?

Painting a "sample density" map would be suitable for still pictures. For animation we'd need something animatable, maybe in the shader.

Maybe rendering a reference frame (a complicated one) to the desired smoothness and then setting that as a goal for the other frames. Either the whole frame or a small area. Wasn’t there a thread about quantifying image smoothness?

I think you're talking about adaptive sampling, and I think that's actually been proposed: http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/ReducingNoise

The ability to resume a render like with LUX would be most valuable. Even for animations it means that certain frames with more noise than others could be refined without having to start from scratch. But this would require a special file format (much like the luxrender .flm) and for the scene to remain exactly the same.

Or, the rendered sequence could be cached internally somehow, and that cache could later be “dumped” as a regular image sequence.

I guess that’s a job for the algorithm Brecht was talking about in his answer.

By the way, here's a hack: while we can't resume renders, we can mix two noisy renders to obtain a cleaner one (just make sure the second has a different seed, or the noise will match the previous one).

What a great post from Brecht. Quite a relief, to be honest. Absolutely everything mentioned there is an important feature for any production renderer. But I want to bring extra attention to a few topics:

Decoupling AA sample and light sample numbers. Currently one path is traced for each sample, but depending on the scene it might be less noisy to distribute samples in another way.

In my experience, at least in Vray, it's substantially more efficient (faster with less noise) to increase the local samples for an effect (glossy shader, area light, etc.) and increase AA as necessary from there. To put it differently, it's better to have the minimum AA possible (as long as it gives smooth edges) and increase samples per effect/shader as needed.
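To show what that decoupling means structurally, here is a toy Python sketch of where the two sample loops sit; it is only my own illustration (no real shading, and `light_sample` is a made-up noisy stand-in), not Cycles or Vray code:

```python
# Toy illustration of decoupling AA (camera) samples from light samples.
# The total light-sample budget is the same in both configurations below;
# the practical win in a real renderer is that extra light samples are much
# cheaper than extra camera rays once the edges are already smooth.
import random

def light_sample():
    # stand-in for one shadow-ray/light sample: a noisy irradiance estimate
    return random.gauss(1.0, 0.5)

def render_pixel(aa_samples, light_samples_per_aa):
    total = 0.0
    for _ in range(aa_samples):                    # camera / AA samples
        lit = sum(light_sample() for _ in range(light_samples_per_aa))
        total += lit / light_samples_per_aa        # average over light samples
    return total / aa_samples                      # average over AA samples

random.seed(1)
print(render_pixel(aa_samples=64, light_samples_per_aa=1))   # pure path tracing style
print(render_pixel(aa_samples=8, light_samples_per_aa=8))    # decoupled/branched style
```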

Non-progressive tile based rendering, to increase memory locality, which should avoid cache misses for main memory and the image cache.
I was just about to ask around here whether Cycles would ever add that, and I'm glad I don't even have to anymore :cool:. I particularly miss this in Cycles because when rendering on the CPU, progressive sampling is fine for lighting and rough material setup, but fine-tuning a scene with subtle bumps or textured roughness on a glossy shader becomes a real pain. There are also the optimization advantages of tile rendering, mainly related to memory handling. I believe it also allows better implementation of other important features like proxy objects, micropolygon displacement and hair/curve primitive rendering, as all of these are dynamic objects created at render time on a per-tile basis. I'm not sure it's possible to have that with progressive refinement, at least not as efficiently.

I hope Brecht implements a way for the buckets to follow the mouse pointer during rendering (it's an incredibly useful feature for rendering regions). Actually, even the current progressive mode could benefit from such an idea with some kind of "raybrush" feature. Maybe I miss that because I don't have a powerful graphics card to help me here :(.

Adaptive sampling, to render more samples in regions with more noise.
Alright, here is another topic where I want to share my experience. Both Vray and mental ray have very efficient adaptive AA (Adaptive DMC in the former, Unified Sampling in the latter); unfortunately I think those are proprietary technologies. But I like the way they work, because they are very simple and logical. Vray's DMC (which stands for Deterministic Monte Carlo) I find more intuitive, and therefore a better reference for parameters.

In essence, there are 4 main parameters for the entire thing: min and max samples, noise threshold and adaptiveness. The noise threshold and max samples are the two limits on the sample count: max samples, as the name suggests, is the maximum number of samples the renderer is allowed to spend on a pixel, and the noise threshold is how sensitive the renderer is to contrast between samples. In practice, max samples keeps the renderer from getting stuck if some part of the image needs way too many samples (e.g. a rough reflection of a very bright source), while a low noise threshold lets the renderer sample deeply in shadow areas. And since this is unified in Vray, EVERY sample (BRDF, lights, GI, whatever) passes through the system and obeys the rules of the DMC sampler, so the user can effectively control the quality of the entire scene through these parameters alone.

Unified Sampling in mental ray is almost the same, except that it doesn't have an "adaptiveness" parameter like Vray. And actually, this makes a significant difference. In Vray this parameter controls how adaptive the AA engine is; I can't explain the details, but at a value of 0 it's not adaptive at all and will shoot the max AA samples right away, while at a value of 1 it's fully adaptive. By default it's at 0.85 and, curiously, this gives the best quality/speed ratio most of the time. What I think is worth mentioning about this parameter, though, is that when it's set to 1 (fully adaptive), it's somewhat difficult to get rid of the noise. And that's the case with Unified Sampling in mental ray.

http://elementalray.wordpress.com/2012/01/29/unified-sampling-redux/
http://interstation3d.com/tutorials/vray_dmc_sampler/demistyfing_dmc.html
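To make the parameters concrete, here is a toy Python sketch of a DMC-style adaptive pixel loop, written by me purely as an illustration (it is not the actual Vray or mental ray algorithm, and `shade_sample` is a made-up stand-in for a full pixel sample):

```python
# Toy sketch of a DMC-style adaptive sampler: keep adding samples to a pixel
# until the estimated noise falls below a threshold or max_samples is hit.
import random, statistics

def shade_sample():
    # stand-in for one full pixel sample (camera ray + shading + light sampling)
    return random.gauss(0.5, 0.3)

def adaptive_pixel(min_samples=8, max_samples=256, noise_threshold=0.01):
    samples = [shade_sample() for _ in range(min_samples)]
    while len(samples) < max_samples:
        mean = statistics.fmean(samples)
        # standard error of the mean as a crude per-pixel noise estimate
        err = statistics.stdev(samples) / len(samples) ** 0.5
        if err <= noise_threshold * max(mean, 1e-6):
            break                      # pixel considered clean enough
        samples.append(shade_sample())
    return statistics.fmean(samples), len(samples)

random.seed(2)
value, used = adaptive_pixel()
print(f"pixel value {value:.3f} after {used} samples")
```

An "adaptiveness" control in this picture would simply set how many of the max samples are always taken up front versus left to the noise test.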

I believe Brecht is quite aware of everything I said here, but I would love to know what he thinks about it…

Regards,
Eugenio

Non-progressive tile based rendering, to increase memory locality, which should avoid cache misses for main memory and the image cache.

Wait, is Brecht saying Cycles will render with a "Vray-like" method? Those tiny square buckets moving around? Isn't that a totally different method from the current progressive path tracing? I'm no expert at this, I'm just trying to understand. Any help appreciated.

But I believe this also allows better implementation of other important features like Proxy objects, micropolygon displacement and hair/curve primitives rendering

Sounds great.

Yes. But that doesn't mean progressive mode needs to be discarded. I don't know Brecht's plans, but it could be added as a new rendering mode for the user to choose from.

As a side option it would be neat, but if it's the only thing we'll be using, then it kills off a lot of what makes Cycles great.

I spoke with Mike Farnsworth a couple of weeks back about the possibility of being able to resume renders. Obviously some kind of code for this already exists, since in the rendered preview you can pause, restart, and add more samples without having to restart the whole render. There just needs to be a way to dump the per-pixel sample state into a file (.crf, Cycles render-state file perhaps?) and an option in the render panel to load a previously started render rather than use the selected camera in the scene. Probably not a trivial task, but also probably not a show-stopper either.
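Just to sketch what such a dump could contain, here is a hypothetical example in Python with numpy; the file layout, the names, and the `render_one_sample` stand-in are all made up for illustration, not an actual Cycles format:

```python
# Hypothetical ".crf"-style state dump: store the running sum of radiance plus
# the sample count, then resume later by simply continuing the accumulation.
import numpy as np

def save_state(path, accum, num_samples):
    np.savez(path, accum=accum, num_samples=num_samples)

def load_state(path):
    data = np.load(path)
    return data["accum"], int(data["num_samples"])

def add_samples(accum, num_samples, render_one_sample, count):
    for _ in range(count):
        accum += render_one_sample()   # full-frame radiance of one more pass
        num_samples += 1
    return accum, num_samples

# usage sketch: render 100 passes, save, later reload and add 300 more
width, height = 64, 64
render_one_sample = lambda: np.random.rand(height, width, 3)  # stand-in renderer
accum, n = add_samples(np.zeros((height, width, 3)), 0, render_one_sample, 100)
save_state("scene.crf.npz", accum, n)
accum, n = load_state("scene.crf.npz")
accum, n = add_samples(accum, n, render_one_sample, 300)
image = accum / n                      # final image = average of all passes
```

The catch mentioned above still applies: the scene has to stay exactly the same between sessions for the accumulated samples to remain valid.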

Any OpenCL updates? I'm getting an AMD card and would like to fire it up for rendering.

They could store it with the .blend as a datablock. It would be an internal format anyway. No need for other software to read it.

Some of my feelings about the current version of Cycles.

Well, in general it works great: it's based on nodes, which is fantastic (unlimited possibilities), it has a nice viewport preview, it works pretty fast in most cases involving the GPU, and so on. However, there are some things that bother me:

  1. no reload option for textures
  2. no texture preview in Textures panel (Properties window)
  3. no way to save the viewport render preview (the only thing you can actually do is Print Screen, if I'm correct)
  4. the split between Glossy and Diffuse shaders; why not keep them together? You can always set Glossy to 0 to disable it, and in 99% of cases you mix Diffuse with Glossy
  5. not very intuitive option names (the Light Path node is a great example; without documentation you have no idea where to plug it)
  6. no "Strength" option to help tweak a texture (for instance displacement); having to mix your image texture with e.g. a grey colour, or tweak it with extra nodes such as Brightness and Gamma, is weird; besides, these options should be available somewhere in the Textures panel in Properties (see the node sketch below).

So here are my few cents.
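For point 6, here is a small bpy sketch of the grey-mix workaround I mean; the node and socket names are the standard Cycles ones, but I'm only guessing at how people wire it up, and exact sockets may differ between Blender versions:

```python
# Fake a "strength" control for a displacement texture by mixing the image
# with mid grey through a MixRGB node (assumes a default Cycles node tree).
import bpy

mat = bpy.data.materials.new("displaced")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new("ShaderNodeTexImage")          # assign your image to tex.image
mix = nodes.new("ShaderNodeMixRGB")            # acts as the "strength" knob
out = nodes.get("Material Output")

mix.inputs["Color1"].default_value = (0.5, 0.5, 0.5, 1.0)  # neutral grey
mix.inputs["Fac"].default_value = 0.3          # 0 = flat grey (no variation), 1 = full texture
links.new(tex.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], out.inputs["Displacement"])
```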

Bucket rendering is essentially just border rendering; you will still be able to use whatever method you want to render with.

This is exciting because it opens the door to lighting solutions other than pure brute-force path tracing (which I believe Cycles uses right now). Algorithms like irradiance caching and many others could be implemented, and that would be a huge improvement for Cycles.

One thing I would love to see then is the ability to use different algorithms for primary and secondary bounces like in Vray; irradiance cache + brute force is a killer for interior renders :slight_smile:

@NinthJake @eugenio jr. - Thanks, that's great, can't wait to see updates. If Cycles implements progressive viewport rendering like it does right now for scene setup, keeps its powerful nodes, and adds bucket rendering for final output, it would be awesome.

Even Arnold seems to do progressive refinement up to a point, then starts bucket rendering to finalize the frame.

This bothers me too. Here's something even funnier: sometimes I create landscapes generated from textures and use brightness and contrast to tweak the affected area. Once I switch to the Cycles engine, I don't have these parameters available anymore.

I'm really waiting for volumetrics, though, to have those perfect clouds… yeah, you can composite them in, but we wants it integrated, dear, yes we wants it.

I agree with you on all the points except #4 and #5, because:
4) The "Diffuse shader" in Cycles is a Lambert/Oren-Nayar shader; these are meant to emulate specific real-world properties. Mixing a glossy component into the algorithm (at the math level) means creating something that may not be Lambert/Oren-Nayar anymore, and therefore giving unexpected results. On the other hand, Brecht wrote somewhere that he might implement an ubershader, a special node that mixes all kinds of shaders the way we do now with many nodes. But still, we can already do it with node groups, quite powerfully (see the sketch after this reply).
5) "Light Path" is actually the name of the paths light travels along in the scene, which is a pretty good name IMO. Also, it is an OSL name, so it probably shouldn't be changed even if that were possible.
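For point 4, here is a bpy sketch of the usual Diffuse + Glossy combination that such a node group (or a future ubershader) would wrap up; socket names follow the standard Cycles nodes, and the values are just examples:

```python
# Mix a Diffuse and a Glossy BSDF with a Mix Shader node (default Cycles tree).
import bpy

mat = bpy.data.materials.new("diffuse_glossy")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

diffuse = nodes.get("Diffuse BSDF") or nodes.new("ShaderNodeBsdfDiffuse")
glossy = nodes.new("ShaderNodeBsdfGlossy")
mix = nodes.new("ShaderNodeMixShader")
out = nodes.get("Material Output")

glossy.inputs["Roughness"].default_value = 0.1
mix.inputs["Fac"].default_value = 0.2          # 0 = pure diffuse, 1 = pure glossy
links.new(diffuse.outputs["BSDF"], mix.inputs[1])
links.new(glossy.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```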

Drooling!!

A thought on resuming renders: I've learned here that you can make a weighted mix of two noisy renders to obtain a cleaner one. So say you have a 400-pass image and a 100-pass one: if you mix them with a 4:1 ratio (mix factor 0.200) you get the equivalent of a 500-pass render.
So.
If you could start a render by specifying an image and the number of passes it already has, the software could, after the first pass, mix everything and carry on with the render. Am I right, or am I missing something?
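Here is a quick numpy check of that arithmetic (my own toy, nothing to do with Cycles internals): blending the two renders with weights proportional to their pass counts is exactly the same as averaging all the passes together.

```python
# Weighted mix of a 400-pass and a 100-pass render vs. a single 500-pass render.
import numpy as np

np.random.seed(0)
passes = np.random.rand(500, 32, 32, 3)        # stand-in for 500 noisy passes

img_400 = passes[:400].mean(axis=0)
img_100 = passes[400:].mean(axis=0)

factor = 100 / (400 + 100)                     # 0.200, i.e. the 4:1 ratio
mixed = img_400 * (1 - factor) + img_100 * factor
img_500 = passes.mean(axis=0)

print(np.allclose(mixed, img_500))             # True
```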

lsscpp - OK, about number 4: fine, then tell me what kind of material you can make using the Diffuse shader only, with no Glossy shader.

About Light Path - I understand its purpose, but it took me a while to figure out where to plug this node; that's really not intuitive at all.