Yafray Vs. POVRay

I have seen example output from both of these ray tracers. The POV output was from jms (hey, how about translating the rest of your Python tutorial to English? <joke>). The Yafray output I have seen was from various people on #blenderchat.

I must say that each Yafray render that I have seen looked quite grainy. I was told on the channel that was because most people do not take the time to let Yafray fully render the picture. All the POVray things I have seen looked quite good. My question is, for my situation with a 700MHz Celeron, 64MB RAM and 4MB VRAM, which would be the better raytracer in terms of time vs. output quality?

Right now I do not have a need for raytraced output. I am just really getting comfortable with Blender, but I am looking toward the future.

I currently use a P3/600MHz with 392MB of RAM and an Nvidia GeForce256 video card with 32MB.

It all depends on the scene, but I always find Yafray to be much faster than POV. And using more samples lessens the grain, which only shows up if you use radiosity. I get good GI results with very little grain in 20 minutes on my 450MHz PII.

Let me say that, at least at this moment, POV-Ray has many more features than Yafray (the same for PovAnim vs. YaBle, of course), but Yafray and YaBle are actually much faster than PovAnim and POV-Ray. Both of them render at very high quality, IMHO.

Env

Ditto, you can eliminate grain in Yafray using the anti-noise filter, e.g. [shameless self promotion] :smiley:
http://tomhebel.www3.dotnetplayground.com/bo_with_string_25samples.png

Can you use YaBle to produce raytraced animation with Yafray?

I have seen the term “GI” used here a lot. What exactly is it, and can someone give me a reference that explains it in simple, non-technical terms?

Ditto, you can eliminate grain in Yafray using the anti-noise filter, e.g. [shameless self promotion] :smiley:

Now that is the kind of output that I am talking about. Was that done in Yafray or POVray? That is excellent.

Yes, though there are memory problems; nothing to do with YaBle, but a Blender Python problem.

I have seen the term “GI” used here a lot. What exactly is it, and can someone give me a reference that explains it in simple, non-technical terms?

GI is an acronym for “Global Illumination”, which Yafray actually does not do; not full GI, anyway. It tends to be mistaken for anything from very soft shadows to skylight to radiosity, but these are all just aspects of it. Global illumination means just that: rendering calculations that involve light wherever it originates from.
So say you are rendering a room scene with a window, a lamp on the ceiling, a table and a glass on that table. Now let’s say that the renderer is currently looking at a point on the table close to the glass. To calculate global illumination for that point, you have to consider all the light arriving at that point, wherever it comes from. For this particular scene that would be: the light from the lamp on the ceiling, the light coming from outside through the window, the light from the lamp and from outside bouncing off the walls onto the table, the light from the lamp and from outside refracted/reflected by the glass, the light bouncing off the walls and then refracted/reflected by the glass, and so on and so on… It is an infinite process.
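The “infinite process” above is what renderers truncate after a few bounces. As a rough illustration only (this is not how Yafray or POV actually implement it; the scene, the `direct_light`/`random_bounce` placeholders and all the numbers are made up), the recursive gathering idea looks something like this:

```python
import random

MAX_BOUNCES = 3  # real renderers cut the infinite recursion off somewhere


def direct_light(point):
    # placeholder: light reaching `point` straight from the lamp/window
    return 1.0


def random_bounce(point):
    # placeholder: pick a random direction and find the surface it hits
    return point + random.random()


def gather(point, depth=0):
    """Estimate total light at `point`: direct light plus indirect bounces."""
    light = direct_light(point)
    if depth < MAX_BOUNCES:
        # indirect light: light arriving at some other surface,
        # attenuated by the bounce, gathered recursively
        light += 0.5 * gather(random_bounce(point), depth + 1)
    return light


print(gather(0.0))  # prints 1.875 with these constant placeholders
```

Each extra bounce adds less light (here it halves per bounce), which is why cutting the recursion off early is a tolerable approximation.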

Yafray uses two methods to calculate approximations of GI. One is photon mapping, which can calculate things like ‘caustics’ (light directly refracted/reflected by other objects) as well as indirect light: ‘diffuse’ light bouncing off objects with other materials.
Then there is the pathlight, which does a limited form of so-called ‘pathtracing’; it only considers indirect light from basic colored materials (diffuse colors). So Yafray doesn’t do full global illumination.
In fact, very few renderers do; there are a lot more types of light interaction that are still ignored by many renderers, simply because there is not always a practical method yet to do them efficiently.
One recent addition to the GI spectrum is so-called ‘subsurface scattering’, which calculates what happens to light that travels through semi-transparent (translucent) materials. The effect has been known for a long time of course, but only recently have methods been developed to calculate it efficiently (even in realtime now). Not in Yafray yet, however.

Anyway, both of the methods in Yafray have advantages and disadvantages. The photonlight can calculate indirect lighting and caustics very fast, but is fairly difficult to set up; the parameters are not very intuitive. That has nothing to do with Yafray specifically; other renderers (like POV) have that same problem as well.
Then there is the pathlight, which can be very noisy depending on the scene. Fully visible skylight is very fast, but the same light shining through a tiny window into a room will cause lots of noise and grain if you don’t turn up the samples, which in turn can really add up in rendertime.
Another method people confuse with GI is the hemilight; however, this just calculates shadows, nothing more, but it is also faster if you want a ‘GI’ look.
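The grain-versus-samples trade-off described above is just Monte Carlo averaging at work: each pixel ends up as an average of random light-path samples, and more samples pull every pixel closer to the true value, at the cost of rendertime. A toy sketch (every value here is invented for illustration, not from any renderer):

```python
import random

random.seed(42)

TRUE_BRIGHTNESS = 0.5  # the value a perfect renderer would compute


def noisy_sample():
    # one random light path: correct on average, but individually noisy
    return random.uniform(0.0, 1.0)


def render_pixel(samples):
    # a pixel's brightness is the average of its random samples
    return sum(noisy_sample() for _ in range(samples)) / samples


def grain(samples, pixels=2000):
    # average per-pixel error across many pixels ~ the visible grain
    return sum(abs(render_pixel(samples) - TRUE_BRIGHTNESS)
               for _ in range(pixels)) / pixels


print(grain(4), grain(100))  # the grain shrinks as the sample count goes up
```

Doubling the quality roughly quadruples the sample count needed, which is why pathtraced scenes with difficult lighting (the tiny-window case) get expensive so quickly.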

POVray can do these things too, but uses an approximation that nevertheless has the potential to create much smoother pictures a lot faster, with fairly consistent results across different situations.

Don’t forget, though, that POV has a history of well over a decade of development behind it, and Yafray can’t really be compared to it yet.
I would certainly give jms’ povanim a try; he has also been working on that for a very long time, and it shows.

So in the end it all comes down to whatever you like best and/or find easier to use.

Don’t forget Blender itself however…

Wow, eeshlo, what an explanation! :smiley:
Thanks, many things are much more clear now.

Env

eeshlo finally did it. I’ve had a similar problem to env’s: understanding the different types of lights in YAFRay (or GI or radiosity, etc.). Thanx eeshlo.

BTW: how’s it going with YAFRay/YABle, guys?

I’m sure Yafray has the potential to become a great raytracer, but if you ever decide to go beyond scanline, I would recommend POV-Ray. The built-in textures are very powerful, and thanks to functions you can create your own procedural textures if none of the built-ins suit your purposes.

The problem with it, though, is that it’s text-based, and unless you come up with a Python script to write POV textures, you’ll have to code them directly in POV with no preview.
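Along the lines of the Python-script idea mentioned above, a minimal sketch of generating a POV-Ray texture declaration from Python might look like this (the `pov_texture` helper and the marble settings are illustrative assumptions, not an existing tool; the output is standard POV-Ray `#declare`/`texture` syntax):

```python
def pov_texture(name, color1, color2, turbulence=0.4):
    """Return a POV-Ray #declare block for a simple two-color marble texture."""
    return (
        f"#declare {name} = texture {{\n"
        f"  pigment {{\n"
        f"    marble\n"
        f"    turbulence {turbulence}\n"
        f"    color_map {{\n"
        f"      [0.0 color rgb <{color1[0]}, {color1[1]}, {color1[2]}>]\n"
        f"      [1.0 color rgb <{color2[0]}, {color2[1]}, {color2[2]}>]\n"
        f"    }}\n"
        f"  }}\n"
        f"}}\n"
    )


# print the declaration; a .pov scene could #include a file of these
print(pov_texture("WhiteMarble", (1, 1, 1), (0.3, 0.3, 0.4)))
```

Tweaking parameters in Python and regenerating the include file is still no substitute for a preview, but it beats hand-editing the same texture block over and over.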

I would not recommend povray (I have tried it), and DEFINITELY not recommend bringing up old threads.

POVray offers more powerful tools than Yafray, but needs a lot of practice and know-how.

I’m just going to play stupid here:
I see lots more pretty pictures from Yafray than from POVray.

Whether it’s that Yafray has been used more with Blender (and is integrated), or that it has nicer GI, the stuff I have seen from POVray, while very good, doesn’t have the same feel as Yafray’s slightly grainy, well-lit, photo-like quality.

POVray is probably more powerful and more mature, but I have rarely seen its procedural textures used that effectively; they LOOK procedural.

I like Yafray’s approach: take a simple mesh and render it, rather than trying to be an overly complex SDL.
I’m all for RenderMan-like shaders, but POVray’s SDL seems to show in the resulting scene.
I say let the modeler have control over that, and the renderer can take a bunch of verts and faces and render them.

  • Cam

Didn’t notice the date of the last post. If I recall correctly, I found it in a search and didn’t think to check.

That being said, that’s the beauty of the internet. Face to face, if you have something to say after the topic has changed, you’ve lost your chance. With message boards the conversation stays as long as the thread is available. If someone who didn’t even know the thread existed has something to add, why shouldn’t they be able to, if the thread isn’t locked?

I’ve tried every other free raytracer I could get my hands on, and POV-Ray is by far the best I’ve been able to find. It’s not that hard to use, either. I barely have a high school diploma, with only enough math to get me the required credits, and a lot of the features people consider more advanced are trivialities even to a near-dropout, possibly a mentally challenged one (not saying that as a disparaging remark to the mentally challenged; I’m just starting to suspect that I do have some mental impairments that weren’t caught, but my mental state isn’t at issue here, only used to show the ease of use of POV-Ray).

I’ve found that most people who don’t like POV dislike it for superficial reasons, such as no drag-and-drop editing (i.e. a text-based interface), no material preview (without having to sit through a fairly long render of the material showcase include file), and the like. I urge those people to spend some time with it; you’ll be able to do things with it that Yafray can’t handle.*

*Not putting down Yafray, but ten years have gone into the development of POV, and not even half that time has gone into Yafray, so naturally POV has more features.