Vray 3.0: pulling out all the stops to remain the industry leader.

I just stumbled upon the fact that they are about to release their new 3.0 version, and the list of new stuff is pretty long.
http://www.v-ray.com/features/

Two to five times the rendering speed, raytraced SSS, integration of nearly every new open standard related to rendering, progressive rendering, and OSL support, to name a few. I know Brecht obviously can’t develop Cycles at the same pace as a paid team, but it does show he’s taking it in the right direction, considering that Chaos Group is now using much of the same technology. It also strikes me that progressive rendering may continue to slowly eat away at the number of artists who still prefer tiled rendering, because the contrast in their videos between the instant overview of progressive mode and the slow reveal of tiled mode is pretty stark.

If there’s anything Brecht could take inspiration from, it would be the way they use dynamic tile sizing to speed up rendering. I’m not exactly sure what tile rendering strategy they’re using (the highlighted tiles pop up all over the place in the video), but I’m almost certain it gets a usable result out faster than the default Cycles strategy of starting in the center and going outward.
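For comparison, here’s a minimal Python sketch of what I mean by that center-first order (my own illustration, not actual Cycles code): tiles get sorted by their distance from the image center, so the middle of the frame resolves first.

```python
import math

# Minimal sketch of a center-first tile order (an illustration,
# not Cycles' actual implementation): sort tiles by the distance
# from their center to the image center.
def center_first_tiles(width, height, tile_size):
    cx, cy = width / 2, height / 2
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tx, ty = x + tile_size / 2, y + tile_size / 2
            tiles.append((math.hypot(tx - cx, ty - cy), x, y))
    tiles.sort()  # nearest-to-center tiles come first
    return [(x, y) for _, x, y in tiles]

# e.g. a 640x480 frame with 160px tiles starts near the middle:
print(center_first_tiles(640, 480, 160)[:3])
```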

I’m posting this here since I know there are Blender users who use this engine, and a Blender integration is in progress, so I wonder what those of you currently using it think of the upcoming update.

I like progressive even if it’s slower, just to get that full view of the scene and see whether the composition and general lighting look good. If I’m happy, I go back and render with the optimal tile size for the GPU/render size.

It’s interesting because while VRay has a massive following and heaps of features, Cycles is ahead in small areas like the ambient occlusion shader for GPU rendering.

How they provide distributed rendering using the progressive renderer, I have no idea…
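If I had to guess (pure speculation on my part, nothing from Chaos Group’s docs), each node could just send back its accumulated per-pixel sample sums plus a sample count, and the master would average them weighted by samples. Something like:

```python
# Speculative sketch of merging progressive buffers from several
# render nodes (not V-Ray's actual scheme). Each node returns the
# per-pixel sums of its samples plus how many samples it took.
def merge_progressive(buffers):
    total_samples = sum(count for _, count in buffers)
    merged = [0.0] * len(buffers[0][0])
    for sums, _ in buffers:
        for i, v in enumerate(sums):
            merged[i] += v
    # dividing the pooled sums by the pooled sample count gives the
    # same image as if one machine had taken all the samples
    return [v / total_samples for v in merged], total_samples

# two nodes, same 2-pixel image, different sample counts:
node_a = ([0.8, 1.6], 4)   # sums over 4 samples per pixel
node_b = ([1.2, 2.4], 8)   # sums over 8 samples per pixel
print(merge_progressive([node_a, node_b]))  # ([~0.167, ~0.333], 12)
```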

It’s interesting because Vray has had progressive refinement ever since Light Cache showed up. It’s kind of hidden because it’s a special mode that can only be activated if you’re using Light Cache for both the primary and secondary engines. They added it right after Maxwell came out and everyone wanted something like that. Vlado has always been quick to add features that other renderers have, as long as they’re good ideas. It’s also interesting that they’re trying to make it known that Vray now has a new, more physically accurate path tracer. And again, Vray has always had that; it’s just been called “Brute Force” up until now.

I think the reason they’re making such a big deal about path tracing is that, up until now, their “biased” methods have been getting a lot of flak for the artifacts they produce when used in animations. I worked at Bent Image Lab for many years and we used Vray for almost all of our commercials. It’s definitely an amazing renderer, but we always had to struggle to get rid of the blotchy artifacts. Toward the end of my time there we finally just started using “Brute Force” GI instead because of this.

I think Arnold has been getting a lot of attention lately due to its ease of use, but also due to being totally free of those kinds of artifacts. With Vray you have to spend a lot of time setting up and optimizing your render settings to get the optimum balance between speed and accuracy. With Arnold (as well as Cycles) all you have to worry about is whether you have enough samples to clear out the noise. So I think Vlado decided that Vray should move forward into the future instead of staying in the mud with Mental Ray and the others.
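(As a rule of thumb for the “enough samples” part, and this is standard Monte Carlo behavior rather than anything renderer-specific: noise falls off proportionally to 1/√N, where N is the sample count. So halving the visible noise costs roughly four times the samples, which is why the last bit of clean-up is always the expensive part.)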

Obviously, the only reason biased/approximate methods existed was that, up until now, pure path tracing has been too slow. But now that it’s become a lot more optimized, and with faster hardware, easier access to cloud-based render farms, and GPU rendering, it makes a lot more sense. Even in the past, pure path tracing was always seen as giving the best results; it used to serve as the benchmark of quality in all those papers about photon mapping and irradiance caching.

So I am really glad that Brecht chose this path. It’s interesting to note, too, that the latest release of Pixar’s RenderMan includes a new path tracing engine. It seems everyone is coming around to the idea. I’m sure Mental Ray will have one in its next release too. :wink:

@Ace: I know you don’t like bucket/tile rendering, but it has its place for final rendering. As I’ve said before, progressive refine is great for previewing your scene for surfacing and lighting. But for a final render, I need to see at least one tile rendered as quickly as possible so I can judge whether the samples are high enough; otherwise I might wait a really long time just to discover they weren’t. (If a frame splits into 60 tiles and the first one finishes in 30 seconds, I know within 30 seconds whether the settings hold up, and that the frame will take roughly half an hour.) It also makes a lot more sense for rendering animation. For stills and large renders, yeah, I can see where progressive refine would be better: you can just set the samples as high as they go and stop when it’s all clean. But you can’t do that for animation.

But just so we’re clear, I don’t think it’s one method or the other. It’s more like each has its place.

Yeah, progressive has been around for some time now. They also had progressive in VrayRT, so now they’re getting it for the third time, it seems xP

But yeah, I’m super excited for the 3.0 release, and since they hired Andrei a while ago, I hope they’re also aiming for an official Blender release now :slight_smile: