Why is Vray so good?

Wow,
I mean, really, why is Vray so good?
When lighting a scene, the chiaroscuro is phenomenal.
I have trouble recreating the effects using yafray…

Vray is developed full-time, yafray is not, OK, I get that.
Vray is more feature-complete, OK, I'm not using most of the features.
But why are Vray's colors and light deeper and its shadows more intense?
Did they develop some techniques concerning BRDFs, QMC etc. that are not described in publicly available papers? What is their secret?

But… a plus for Blender: I must say that over here people were impressed by the compositing of the Blender renders; especially NPR styles with a hint of AO go down well with us architects.

blend on, conquer world.
wzzl

It’s not all about the tech, it’s about how you use it.

Having Vray doesn't make your renders good automatically. Off the top of my head, look at robertT's work, how intense and vivid it is.

Of course, you're totally right.
The artist makes the art, not the brush.
But…
with Vray my initial results are better, which makes me wonder what is under the bonnet.

If you could post some examples it might help. Simple things like exposure, gamma correction and the like can make a big difference.

wizzleteet

That's a good question; I ask myself that every time I see what VRay produces here in our architects' office, especially for photorealism.

VRay looks like a damn good renderer and you can see why there's so much work produced with it, with Max of course, although it's now available for SketchUp as well.

I believe that many commercial raytracers are optimised to offer good results even when the settings chosen by the user are not that optimal.

Yafray is a simple and rudimentary raytracer, so it should be easy to use once you keep a couple of things in mind. I can show you photon-mapping tutorials for other apps that use the same trial-and-error approach as yafray users do.

In general, and particularly in this community, people embrace raytracing to compensate for poor knowledge of lighting by using the quickest solution, namely GI.

There are more important things than GI for a beginner (and for an expert), for instance multipass rendering, and Blender Internal has got it. In multipass rendering, GI is just another render pass.

2.43 really changed the way you think about rendering. It's no longer about tweaking values beforehand and hoping for the best.

Instead, you can now do a lot of that in the post-production phase with nodes. Render passes offer loads of possibilities. The real work begins after rendering now. (Of course, you can still do it the old-school way.)
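To make the "GI is just another render pass" point concrete, here is a minimal per-pixel compositing sketch in plain Python. The pass names and the combine formula are illustrative assumptions, not Blender's exact node math:

```python
# Per-pixel composite of separate render passes (a sketch; the pass
# names and the combine formula are illustrative, not Blender's exact math).
def composite(diffuse, ao, gi, spec):
    """Occlusion scales the diffuse term; GI and specular add on top."""
    return diffuse * ao + gi + spec

def gamma(value, g=2.2):
    """Simple display gamma applied in post, not in the renderer."""
    return max(0.0, min(1.0, value)) ** (1.0 / g)

pixel = composite(diffuse=0.6, ao=0.8, gi=0.1, spec=0.0)
graded = gamma(pixel)
```

The design point is that occlusion strength, indirect light and grading become separate post-production decisions in the node editor instead of settings baked into one monolithic render.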

bebraw: well said!

Yes, again, I agree totally; the compositing node setup is what I'm learning to master right now.
On topic, however: is it that Vray is so good because of this?
http://www.spot3d.com/vray/help/150R1/render_examples_advancedimap.htm
It uses various ways of using the irradiance map.
Especially the 'delone' method (sounds like Delaunay, and looks like it too) looks cool.
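For what it's worth, the irradiance map linked above is an irradiance cache: full GI is computed only at sparse sample points, and everything in between is interpolated from them. A toy sketch of that idea; the sample positions, values, radius and blend weights are all made up for illustration:

```python
import math

# Sketch of the irradiance-cache idea: expensive GI samples exist only
# at sparse points; nearby pixels reuse them via distance-weighted blending.
cache = [
    ((0.0, 0.0), 1.0),   # (position, irradiance) - illustrative values
    ((4.0, 0.0), 0.2),
]

def lookup(p, radius=3.0):
    """Blend cached samples within 'radius'; return None on a miss
    (a real renderer would shoot new GI rays there and cache the result)."""
    wsum, esum = 0.0, 0.0
    for q, e in cache:
        d = math.dist(p, q)
        if d < radius:
            w = 1.0 - d / radius
            wsum += w
            esum += w * e
    return esum / wsum if wsum > 0 else None
```

Most pixels become cheap interpolations, which is a large part of why irradiance-map GI renders so much faster than brute-force GI.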

grtz Wzzl
PS: I didn't start this thread to put down Blender or yafray; these are my tools of choice. I'm simply trying to understand Vray's technical merits, to counter concerns when I'm evangelizing Blender again… :eyebrowlift:
PS II: Here in the office, Blender is getting some raised eyebrows, sometimes too much so, I think, as I get the feeling they're intimidated. I showed them NPR renders, composited animations and Indigo renders. Apparently they're put off by the sheer scope of possibilities.
Give us SketchUp, they say.

@bebraw:

I think that's just one side of it. Even with the best node setups you can't get fundamental effects like real caustics, GI, SSS, blurry refractions, etc. You can fake them, but it's not the same…

Yes, a compositing setup is a whole new world of possibilities and very, very strong, but it can't replace rendering methods, as said above.

I think these features will be implemented sometime; it's just a matter of time, and knowing the developers it will be stunningly good and huge…

s.

simhar: You can use a renderer other than Blender's internal one to produce the GI pass, for instance. I know it's a bit of a hack, but the possibility exists. It's up to you to find the limits of compositing.

I've been using Vray a bit recently, and I'm amazed by just how darn fast it is. I can render out a very nice-looking simple shot with GI much faster than Blender with AO or any other raytracing effects. GI, area lights, sky lights, SSS, even more basic things like non-blurry reflection and refraction are all much faster in Vray than I ever expected them to be, and even on the outdated hardware I was using they seem fast enough to be practical in a lot of situations.

Really makes me wonder why Blender is so slow at raytracing. I have heard that the octree is simplistic and should be replaced by something better like a kd-tree or BVH. It's silly: hardly anyone works on Blender's renderer except Ton, yet every other month you see a new 'yet another homebrew raytracer' project crop up. I wonder why it's so enticing for people to start coding from scratch rather than work on Blender, which I would assume is much easier to get going in…
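The octree-vs-BVH point is easy to illustrate. Here is a deliberately tiny one-dimensional sketch (the interval "scene" and the cost counter are invented for illustration; real BVHs use 3D bounding boxes and heuristics like the surface-area heuristic):

```python
# Minimal 1-D BVH sketch: a hierarchy lets a query skip whole groups of
# primitives whose bounding interval it misses, instead of testing every
# primitive the way a flat list (or an overly coarse octree cell) does.
class Node:
    def __init__(self, lo, hi, prims=None, left=None, right=None):
        self.lo, self.hi = lo, hi
        self.prims, self.left, self.right = prims, left, right

def build(prims):
    """Split the sorted primitive intervals in half, top-down."""
    lo = min(p[0] for p in prims)
    hi = max(p[1] for p in prims)
    if len(prims) <= 2:
        return Node(lo, hi, prims=prims)
    mid = len(prims) // 2
    return Node(lo, hi, left=build(prims[:mid]), right=build(prims[mid:]))

def hit(node, x, tests):
    """Count primitives containing x, tallying interval tests made."""
    tests[0] += 1
    if not (node.lo <= x <= node.hi):
        return 0                     # whole subtree culled with one test
    if node.prims is not None:
        tests[0] += len(node.prims)
        return sum(1 for lo, hi in node.prims if lo <= x <= hi)
    return hit(node.left, x, tests) + hit(node.right, x, tests)

prims = sorted((i, i + 0.5) for i in range(16))
root = build(prims)
tests = [0]
n = hit(root, 3.25, tests)   # exactly one primitive contains 3.25
```

A single out-of-range check culls an entire subtree, so the test count stays well below the 16 checks a brute-force scan would need; that logarithmic behaviour is what a better acceleration structure buys a raytracer.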

@broken:

Blender has the sloooowest AO I've used :frowning:

I would say mentalRay's AO is six times faster, even in complex scenes and with high sampling…

Yes, Vray is pretty fast and often delivers stunning results without much effort…
The modo 202 renderer is also a fine example that delivers smooth ambient occlusion with 64+ samples in a decent timeframe…
BTW, they've just announced a speed boost of up to 1.4x in the upcoming 203!
Rendering GI with 4 or more cores is insane to watch: the rendering buckets really fly through the picture…

That should remind us that there is room for optimization within Blender's own raytracing methods. I really hope for a future kd-tree, or a faster algorithm taking advantage of a refactored mesh data structure…

Eh… well, speaking from my own experience, making significant changes to Blender’s renderer is not easy at all. I’ve tried before, and every time I run into a “I can’t figure this out” barrier. A lot of the problem is that Blender’s rendering architecture is not (to my knowledge) documented, so trying to navigate the code or figure out how various functions and data structures relate to each other is… frustrating.

I haven’t looked at the rendering code since the big refactor, so this may have changed. But I think that–historically–the reason no one else has worked significantly on Blender’s internal renderer is that the rendering code was sufficiently obtuse and under-documented that very few people other than Ton could (or would care to) figure it out. Rendering code in general tends to be very complicated and involved beyond the most simple of renderers, so it’s difficult to just jump in and figure it out without prior knowledge of how it works and how it’s organized.

I can easily understand why someone would be tempted to write their own renderer from scratch rather than spend the time necessary to decipher under-documented render code (whether Blender’s or any other software’s).

I totally agree with you, and I think the reasons it is the way it is come down to three things:

  • The naive goal of “making the best renderer ever”. While one can argue that some ppl are gifted and have revolutionary ideas on how to make a powerful renderer, it always fails on one specific point (sooner or later): manpower. You simply cannot compete with that. Also, users demand consistency and improvements over time and that type of commitment isn’t possible for pure hobbyists. Blender can provide that since it has a network of contributors to rely on, but a one man army will fall sooner rather than later.

  • There is also the learning experience. Many ppl will want to start from the ground up so they can take part in the whole development process and experiment along the way. Sadly, this often results in reinventing the wheel, and most of the experience could have been had by looking at existing work rather than starting over.

  • Programmers are control freaks (I know, I'm a programmer too). Sure, some more than others, but deep inside I think we all are. We want to know what makes our applications tick and we want to control the way they work. This total freedom can only be had with 100% involvement in all aspects. I think that many developers feel obstructed by the fact that there is an already established rendering engine, a userbase with expectations and a foundation with long-term goals to follow. It removes a lot of freedom, and I think that is what scares many developers. In reality it should not be a problem, and I don't really think it is, but it might be perceived as such. Also, the demands and responsibility towards the existing userbase might be especially off-putting for newcomers.

IMHO, the best solution would be to approach the developers of existing and upcoming open-source renderers, welcome them into the "Blender family", give them the support they need to get started, and offer an area to work on that gives them the freedom to experiment and try their own stuff. I think what scares most independent developers is the feeling that they are being lured in to do tedious gruntwork, like recoding or restructuring existing parts. If one can work around that issue and keep them connected with the community, I think there are many volunteers to be found.

The raytracing realm is quite a different animal from scanline rendering. You simply can't expect Blender to do the same things as true raytracers such as Vray, YafRay or Sunflow do.

IMO it is quite difficult to transform a scanline render engine into a raytracer, since they use completely different lighting models and handle scenes in totally different ways.

Anyway, a flexible internal scanline render engine plus support for external raytracers via plugins is not exclusive to Blender; the same approach is used in other professional apps.

In fact, this way you give other people the opportunity to develop their own software projects around Blender, which is a good thing. IMO we can't expect Blender to have or do everything. I smile every time I see people here asking for Blender to have pathtracing or true CAD tools.

IMO it is more important for Blender to create synergies with those third-party apps than to try to do everything on its own. Of course, it matters that those projects built around Blender are open source.

About all those new raytracers that keep appearing: not every one of them is going to survive, IMO, and their developers will probably join other teams once their personal project is halted (or will stop developing rendering engines altogether). We are in an expansion phase (driven by Blender's success) that will later be followed by a consolidation phase, IMO.

I think Alvaro is exactly right.

If you consider Yafray (which is a pure raytracer), in my experience it's faster than Blender at raytracing operations such as AO, reflections and so on, but it has also proved quite difficult to optimise and develop further (the lack of people doesn't help).

Rendering in the general sense, and getting up to a decent level of integration with Blender (or any other app), is quite annoyingly hard. Once you've got even your raytracing sorted, you've got the dull grunt work of implementing texture mapping that is fast and accurate relative to other renderers. Programmers would rather optimise existing features to hell than add boring, tedious ones which you could theoretically work around.

I do wonder how much room for trivial optimisation there is in Blender Internal, but to be honest I doubt it's much. The problem is that making substantial optimisations means changing the underlying structures, which requires lots of work and a great knowledge of the domain in which you operate - quite a rare thing.

Watch the video about shading rates: http://forums.luxology.com/discussion/topic.aspx?id=15322

I think it's one reason the modo renderer is fast: they have decoupled shading-rate values, so multisampling is only done where the contrast between neighboring pixels exceeds a threshold.

But I don't know if yafray or Blender do that as well anyway? :slight_smile:
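The decoupled shading-rate idea described above can be sketched in a few lines. The `shade` function, the threshold and the refinement cost below are toy stand-ins, not modo's actual algorithm:

```python
# Adaptive sampling sketch: sample every pixel once, then spend extra
# samples only where neighbouring pixels differ by more than a contrast
# threshold. 'shade' is a toy stand-in for a real shading function.
def shade(x, y):
    """Toy scene: a hard vertical edge at x == 4."""
    return 1.0 if x >= 4 else 0.0

def render(w, h, threshold=0.5, extra=4):
    base = [[shade(x, y) for x in range(w)] for y in range(h)]
    cost = w * h                         # one sample per pixel so far
    for y in range(h):
        for x in range(w - 1):
            if abs(base[y][x] - base[y][x + 1]) > threshold:
                cost += extra            # refine only across the edge
    return base, cost

img, cost = render(8, 8)
```

Only the pixel pairs straddling the edge trigger refinement, so the sample budget concentrates on the hard edge instead of being spent uniformly; flat regions stay at one sample per pixel.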

It's probably worth noting that YafRay isn't the only renderer out there (most of them free):

Sunflow

Toxic (awesome…truly.)

Aqsis

3delight - little known but awesome. Look at the Labrador dog: it isn't real! You can tell how good this renderer is; it was used for Superman Returns. $1000 a license, though.

Maxwell - awesome results, virtually no setup. Hefty price tag though, around $1000 again.

Brazil, Turtle… Vray. These are aimed at specific apps, but worth a look too.