Why is Vray so good?

Well, that depends on what you’re asking for. You certainly can’t expect it to do fisheye lenses, focal blur, or anything else that involves things before the first ray hit. But anything after that is quite plausible to do (given someone to code it, anyway).

Well, raytracing has already been integrated into Blender’s renderer, so now it’s really more a matter of optimizing the raytracing routines and adding features.
Several renderers out there use a hybrid scanline/raytracing architecture specifically because it’s faster than raytracing alone. Speeding up those first ray hits can mean a lot depending on the scene.
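Roughly, the idea looks like this (a toy Python sketch, every name below is made up): the scanline pass hands you the first visible surface per pixel, and rays only get cast after that, for reflections and the like.

```python
# Toy sketch of a hybrid scanline/raytracer -- all names hypothetical.
# Primary visibility comes from a cheap scanline pass; rays are cast only
# for secondary effects, i.e. "everything after the first ray hit".

def scanline_first_hits(scene, y, width):
    # Stand-in for a real scanline rasterizer: first visible surface for
    # each pixel in row y, or None for background.
    return [scene.get("hit") for _ in range(width)]

def trace_ray(scene, ray, depth):
    # Stand-in for the recursive raytracer, used only after the first hit.
    return 0.1 if depth > 0 else 0.0

def render_hybrid(scene, width, height):
    image = []
    for y in range(height):
        row = []
        for x, hit in enumerate(scanline_first_hits(scene, y, width)):
            if hit is None:
                row.append(0.0)  # background
                continue
            color = hit["base_color"]
            if hit["reflective"]:
                # The raytracing half of the hybrid starts here.
                color += trace_ray(scene, ("reflected", x, y), depth=1)
            row.append(color)
        image.append(row)
    return image

# e.g. render_hybrid({"hit": {"base_color": 0.5, "reflective": True}}, 4, 3)
```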

I agree. Support for additional external renderers would be extremely valuable, and I think adding support for the RenderMan spec would get us the most bang for our buck.
Of course, adding and maintaining such support is not an easy task.
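For anyone who hasn’t looked at the spec: “supporting RenderMan” mostly means being able to write out a RIB scene description. Here’s a rough sketch of the kind of file an exporter would emit (hand-written and minimal, not actual exporter output; any RI-compliant renderer such as Aqsis or 3Delight should accept it):

```python
# Hypothetical sketch of what a minimal Blender -> RenderMan exporter might
# write: a RIB (RenderMan Interface Bytestream) file. The scene below is a
# made-up example.

rib = """\
Display "out.tiff" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [40]
Translate 0 0 5
WorldBegin
  LightSource "pointlight" 1 "from" [2 2 -2] "intensity" [8]
  Surface "plastic"
  Sphere 1 -1 1 360
WorldEnd
"""

with open("scene.rib", "w") as f:
    f.write(rib)
# Then render with, e.g., `aqsis scene.rib` or 3Delight's `renderdl scene.rib`.
```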

The thing I’ve noticed about Vray, as compared to, say, Mental Ray, is that it’s easier to get good results “out of the box”. The basic settings will often look good, whereas achieving the same in Mental Ray might require a bit more tweaking.

Maybe this explains why open source renderers don’t always get the attention they deserve. From the forums of toxic renderer:

As you might know, I moved to Berlin, Germany at the end of June. I didn’t move for the pleasure of the currywurst (curry-sausage), but because I’ve been hired by mental images to work on mental ray (I’m part of the rendering core team). I must say that I love my job so far: this company is full of extremely talented and committed people, and working with them is a real pleasure.

Big biz poaches those eggs.

Also, for a Maxwell type renderer for Blender (that is free and awesome), check out Indigo.
http://www2.indigorenderer.com/joomla/
It would be good to see more support for this. This project is still in development, whereas sites like Toxic seem pretty quiet.

You forgot to mention Indigo. It’s so easy to use and renders perfectly every time. :smiley:

Nobody mentioned Kerkythea yet…
To my mind it’s the easiest and most complete free renderer to use. It lets the user get decent renders out of the box, and it has a GUI with a nice material editing system.
Its results can be compared to Vray’s (I’m talking about the pictures, not the technical background).
Moreover, being physically correct, it can use various GI solutions: photon mapping (very fast), path tracing, bidirectional path tracing, and unbiased MLT (similar to Indigo’s).
There’s also a Blender branch with Kerkythea integrated the way Yafray is, though it’s still a WIP.

www.kerkythea.net

Vray renders look great if you know a little about what you’re doing, and better if you know more. The same goes for Yafray, although Vray produces more scientifically accurate-looking renders faster. You can fake the look by tweaking Yafray’s settings.

But if you don’t want to spend a lot of time adjusting render settings and whatnot you can use Maxwell or Indigo. Maxwell and Indigo produce amazing renders that require minimal preset adjustments.

You don’t always save time by using compositing to enhance renders, and compositing doesn’t always yield better-looking results either. It can increase the time spent developing a 3D image, but it allows you to tweak the image in parts to get the desired effect. Artistic effects like DOF, specular bloom, faked SSS, refined shadows, etc. are some practical uses for Blender’s compositing features.

Multipass composites are more akin to hand-tweaking and eyeballing your renders to perfection. Some of us have done this kind of thing for years as 3D artists. It’s nice to have more accurate-looking GI rendering software to aid us nowadays.
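To illustrate, here’s the kind of per-pass math a compositor does, as a toy numpy sketch (all the passes below are random stand-ins, not real render output):

```python
import numpy as np

# Toy multipass composite: combine made-up render passes by hand instead of
# asking the renderer for the final look in one go.
h, w = 480, 640
beauty = np.random.rand(h, w, 3)   # stand-in for the main render pass
ao     = np.random.rand(h, w, 1)   # stand-in for an ambient occlusion pass
bloom  = np.zeros((h, w, 3))       # stand-in for a blurred specular pass

# Multiply AO into the beauty pass to refine shadows, then add bloom on top.
comp = np.clip(beauty * ao + bloom, 0.0, 1.0)
```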

But time is everything. Composited, faked scanline reflections and shadows, etc. render out in seconds or minutes for complex scenes, and you can get some nice-looking renders this way. Raytracing renderers like Vray and Yafray can take hours for the same scenes.

A better way to get more use out of renderers like Vray, Yafray, Maxwell, Indigo, etc. might be for scene backdrops. Then you can use Blender’s standard internal renderer and compositing features to overlay animated/still 3D object elements, shadows, reflections, etc. You can also bake GI lighting into buildings and so on, so that these objects aren’t included in a raytraced pass.

Use whatever works, I always say. :smiley:

Yes, but it’s often filled with noise and always takes hours and hours to render.

I’ve heard a lot about Nvidia Gelato… There was a massive thread here discussing it a while back. Maybe that’s a good alternative to the internal renderer?

I forgot PovRay. Ruiz renders out to PovRay, which is a wonderful alternative to Vray, Yafray, Indigo, etc. for some scenes. Here-

http://blenderartists.org/forum/showthread.php?t=56552&highlight=povray

Here is a post of mine regarding Gelato with some relevant links (including the thread you’re thinking of?). Keep in mind that this is an option only for those using nVidia graphics cards.


Well, raytracing has already been integrated into Blender’s renderer, so now it’s really more a matter of optimizing the raytracing routines and adding features…
It is the GI part of raytracing that would be difficult to implement, IMO. I’m talking about path tracing, bidirectional path tracing, photons, irradiance maps, etc., although I’ve read that certain photon-mapping techniques should be easy to implement in scanline engines.
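To give an idea of what’s involved, here’s a stripped-down sketch of the path tracing core (the scene and materials are stand-ins, not any real engine’s API): a recursive Monte Carlo estimate that needs a fast ray/scene intersection query and lots of samples per pixel to converge.

```python
import math, random

# Stripped-down path tracing core -- scene and materials are stand-ins.

def sample_hemisphere():
    # Uniform random direction on the hemisphere around +z.
    z = random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * random.random()
    return (r * math.cos(phi), r * math.sin(phi), z)

def radiance(scene, ray, depth):
    hit = scene["intersect"](ray)   # the query every GI method relies on
    if hit is None or depth >= 4:
        return 0.0
    # One random bounce per level; the randomness is where the noise and
    # the long render times come from.
    bounce = (hit["point"], sample_hemisphere())
    return hit["emission"] + hit["albedo"] * radiance(scene, bounce, depth + 1)

def pixel(scene, ray, samples=64):
    # Noise falls off as 1/sqrt(samples), hence hours-long renders.
    return sum(radiance(scene, ray, 0) for _ in range(samples)) / samples

# e.g. pixel({"intersect": lambda ray: None}, ((0, 0, 0), (0, 0, 1)))  -> 0.0
```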

Anyway, I believe that these specialised GI functions should be done by specialised external engines, as is done in other applications. For instance:

Mental ray has been integrated into Softimage|3D and Softimage|XSI, Autodesk 3ds max and VIZ, Alias Maya, Side Effects Software’s Houdini 5, SolidWorks PhotoWorks 2, and Dassault Systèmes’ CATIA V4 and V5 products
That’s the model that works in other applications: a strong, flexible scanline internal engine, plus integration with specialised external raytracing engines.

There are obvious advantages to this approach. First, it means opportunities for other open source projects created around Blender but with separate organization and development, such as Yafray, Sunflow, RenderMan engines, etc. (choose one)

When I say open source, I mean also:

  • With a public license more or less compatible with Blender’s, in order to create synergies, in the sense that features and ideas can be shared by different applications, as has happened between Blender and YafRay.
  • And with the cross-platform philosophy of Blender.

I knew about that and Kerkythea, but couldn’t remember their names! :yes:

Well, yes, those are rather involved to implement. But that’s the case with non-hybrid renderers as well.
It would be no more difficult to implement them in Blender’s renderer than anywhere else (assuming a similar quality of code base), because all GI happens after the first ray hit, and thus has nothing to do with the scanline part of the renderer. The only thing I can think of that might be an issue is figuring out how to deal with particles.
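To put the same point in code terms: everything a GI routine needs from the renderer is an intersection query, so it can’t tell whether the first hit came from a scanline pass or from a camera ray. A hypothetical sketch:

```python
# Hypothetical sketch of the separation: `intersect` is the only hook into
# the renderer that the GI code uses.

def one_bounce_gi(intersect, hit_point, sample_dir):
    # `hit_point` came from *either* a scanline pass or a camera ray;
    # the GI code neither knows nor cares which.
    hit = intersect(hit_point, sample_dir)
    return 0.0 if hit is None else hit["emission"]

# A scanline engine supplies `intersect` for secondary rays only; a pure
# raytracer also uses it for primary rays. The GI code is identical.
# e.g. one_bounce_gi(lambda p, d: {"emission": 0.2}, (0, 0, 0), (0, 0, 1))
```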

I guess I just don’t understand why you seem so against GI capabilities being added to Blender’s internal renderer, or even just optimizing its existing ray tracing features. External renderer support and internal render features aren’t mutually exclusive.
If I were forced to choose between the two, I would definitely prefer external renderer support (in particular for the RenderMan Interface spec; I’m far more interested in using Aqsis than any GI-capable renderers). But that’s not a choice we have to make.

Besides that, I get the impression that the Blender developers are much more interested in external renderer support than major new internal render features anyway.

I just want to point out that the first license is free, if somebody wants to try it.
http://www.3delight.com/en/index.php/products/3delight/3delight_pricing_and_licensing

I guess I just don’t understand why you seem so against GI capabilities being added to Blender’s internal renderer
I think that Blender’s AO needs to be improved, and the ‘participating media’ part of photon-mapping theory would probably be a nice addition to Blender, but with animation in mind.

Any GI addition to Blender should be compatible with animation, IMHO, which rules out most of the current algorithms.
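For context, AO of the kind Blender does boils down to hemisphere sampling like the toy sketch below (the occlusion query is a stand-in). The animation problem lives right here: if the random sample directions change every frame, the result flickers unless you use lots of samples or a stable, stratified pattern.

```python
import math, random

# Toy ambient occlusion estimate at one shading point, with the surface
# normal assumed to be +z for simplicity. `occluded` is a stand-in for a
# short-range ray query against the scene.

def ambient_occlusion(occluded, samples=32):
    hits = 0
    for _ in range(samples):
        z = random.random()                      # uniform hemisphere sample
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * random.random()
        if occluded((r * math.cos(phi), r * math.sin(phi), z)):
            hits += 1
    return 1.0 - hits / samples   # 1.0 = fully open, 0.0 = fully blocked

# e.g. ambient_occlusion(lambda d: d[2] < 0.3)  -> mostly unoccluded "sky"
```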