I guess it’s something similar to trying to calculate the intersection of two graphs. In reality there is a precise value for the intersection point; you just spend time trying to get it as precise as possible. The more time you spend, the closer your result gets to the actual value. Or something like that…
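To make the analogy concrete, here is a toy Python sketch (nothing Indigo-specific, just the graph-intersection idea): each bisection step narrows the interval around the true crossing point, so more steps buy more correct digits, much like more render time buys less noise.

```python
import math

# Toy illustration of the analogy: the crossing of f and g has one
# exact value, and every bisection step only narrows in on it.
f = lambda x: math.cos(x)
g = lambda x: x

def intersect(lo, hi, steps):
    h = lambda x: f(x) - g(x)       # the intersection is a root of h
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if h(lo) * h(mid) <= 0:
            hi = mid                # crossing lies in [lo, mid]
        else:
            lo = mid                # crossing lies in [mid, hi]
    return (lo + hi) / 2.0

for steps in (4, 8, 16, 32):
    print(steps, intersect(0.0, 1.0, steps))  # converges to ~0.7391
```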
One problem seems to be that the paths can be annoying to set up, but once you get it to work, it’s good. You’ll also need to change the “scenefilepath” in “inifile.txt” to point at whatever you want to render. That file also changes every time you render: it resets to the default directories for textures and models, which is annoying.
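If editing that file by hand every time gets old, a tiny script could put your scene path back before each render. This is a hypothetical helper, not part of Indigo: the “scenefilepath” key is from the post above, but the key=value syntax and the scene file name are assumptions - check your own inifile.txt.

```python
# Hypothetical helper: restore the scenefilepath entry in inifile.txt,
# since the file resets to default directories after every render.
def set_scene(ini_path, scene_path):
    with open(ini_path) as f:
        lines = f.read().splitlines()
    with open(ini_path, "w") as f:
        for line in lines:
            if line.strip().startswith("scenefilepath"):
                line = "scenefilepath=" + scene_path  # assumed syntax
            f.write(line + "\n")

set_scene("inifile.txt", "myscene.xml")  # hypothetical scene file
```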
The major problem I had was that when I exported from Blender, I had the feeling it didn’t export right, because when I tried to open the .XML file it said it was, basically, corrupt. I rendered it anyway, and there weren’t any objects, just white, though it looks like it got the “world” right. I want to play with this more; I’ll report any progress.
What I do like about this renderer is that within a few minutes, perhaps even seconds (say 30 seconds), you can see where the render is going. If you don’t like it, you can immediately start tweaking the scene again. It could actually have a very nice workflow for rendering still images - the basic feedback is excellent. Perhaps it is one of the fastest renderers I have ever used for giving basic feedback. It progressively renders the whole scene at once, which is great.
Once you are happy with what you see, you can leave your computer working on it overnight to eliminate the noise, safe in the knowledge that there will be no rendering artefacts.
Another thing I like is that it doesn’t freeze or slow down your computer much. You won’t be able to use it for games, but for web browsing, work and even using Blender, my computer at least is only slightly less responsive than usual.
You don’t need to worry about losing your render either - at every step a .png is saved.
In summary, what it loses in speed it makes up for by keeping your computer in a usable state, almost running as a background process - even an application like Maple stays usable. It seems to be an asymptotically converging solution: the first few iterations make absolutely mindblowing progress relative to the later ones, letting you be certain you’ll be happy with the final render before letting it swallow up the hours.
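Some rough arithmetic behind that feeling, assuming the usual Monte Carlo behaviour of an unbiased renderer: the remaining noise falls roughly as 1/sqrt(samples), so every further halving of the noise costs four times the samples already spent.

```python
# Diminishing returns, assuming noise ~ 1/sqrt(samples per pixel):
# halving the remaining noise always needs 4x the samples so far.
for n in (1, 4, 16, 64, 256, 1024):
    print(f"{n:5d} samples/pixel -> relative noise ~ {1 / n ** 0.5:.3f}")
```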
I suppose it could be annoying to find after 20 hours that you weren’t happy with some subtle caustics and had to re-render, but on the whole I am impressed.
“Ladies and gentlemen, let me welcome you to My Favorite Rendering Engine With Blender Export Script thread number 3872!”
bmax, I don’t know what you are trying to say - the more free renderers that support Blender, the better. YafRay development is pretty much on hold and the Blender internal renderer can’t do everything. What is your point?
Koba
I took it for a spin, just for kicks, and this is what a 4h 40min render looks like (included example, not Blender export):
I had it running in the background on my AMD64 3000+ while I was working. Although I was using the computer at the same time, there should have been plenty of idle time for it to use, so results should be similar on other machines.
It is very cool, but too slow at the moment for any real production use. Hopefully that will improve in later versions. I immediately saw things that could have been done differently to improve performance, just from looking at how the application operated (although I doubt speed was ever a high priority for the developer, so I wouldn’t call it a flaw). Also, I have no idea how well it scales, performance-wise, with heavier geometry (my image is basically 4 primitives, with some advanced materials, I admit). Too bad it’s not open source ;(.
Man, take a look at the blurred reflections/refractions - those look good.
I would like to know how the image looked after 1, 2 and 3 hours, since you let it render for around 4.25 hours.
I am not sure why, but for some reason the colors in the renderings look a bit more realistic/natural than in other GI engines I’ve seen so far.
I am not sure if I’m imagining that or if it is really the case.
> Perhaps it is one of the fastest renderers I have ever used for giving basic feedback. It progressively renders the whole scene at once, which is great.
Yeah, instead of waiting an hour to see if Yafray bumpmaps are correct, or if anything is correct, this does it in several minutes.
That’s quite true - a better preview would be more than great!
I thought you can also render only borders in YafRay now, like in Blender, but I cannot get it to work. Maybe I misunderstood it.
But I guess the next version of Blender will maybe have a progressive preview; in my opinion that would make a lot of sense.
> But I guess the next version of Blender will maybe have a progressive preview; in my opinion that would make a lot of sense.
Not to me. Ton (the only renderer coder) just made render buckets - why would he turn around and make a progressive renderer? Besides, Blender doesn’t take that long to render, really. How fast would Blender be compared to other renderers, anyway?
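For anyone wondering what the actual difference is, here is a rough sketch of the two strategies (assumed shapes only, not Blender’s or Indigo’s real code): buckets finish one tile at a time, while a progressive renderer sweeps the whole frame again and again, which is what makes the early previews possible.

```python
def render_buckets(width, height, tile, sample):
    """Finish each tile completely before moving on to the next."""
    image = {}
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    image[(x, y)] = sample(x, y)  # final value in one go
    return image

def render_progressive(width, height, sample, passes):
    """Refine every pixel a little on each pass over the whole frame."""
    total = {(x, y): 0.0 for x in range(width) for y in range(height)}
    for n in range(1, passes + 1):
        for key in total:
            total[key] += sample(*key)
        yield {k: v / n for k, v in total.items()}  # preview each pass
```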
Well, in my opinion Indigo can’t be compared with YafRay, at least at this point. YafRay is a biased GI render engine that is quite well integrated with Blender, ready for some serious work, and relatively fast.
Indigo is an unbiased, physically based, non-pipelined GI render engine that is only ready for testing purposes and quite slow.
That is why I posted this thread: Indigo is something Blender users haven’t had before.
Besides, I believe that what YafRay development needs to wake it up is a bit of competition, on the quality side.
Well, I only remember how it looked about an hour or two into the process, and it was quite grainy (as in not acceptable). For example, it was somewhat hard to see the edge of the sphere on the far left. As you can see, the caustics are still grainy, but that could be fixed in post-pro. Also, the mirror reflection on the rightmost sphere is quite noisy (look at where the center sphere reflects).
Like I said, my test isn’t a good benchmark, so the best way is to try it out for yourself (it is very easy - no settings needed).
My take on the application is that it is very cool but impractical in its current state. Even if the performance were to increase tenfold, it might not be enough for animation renders (especially since post-processing has become so easy that one could do several test renders in the same time). Very realistic stills are a possibility.
> I am not sure why, but for some reason the colors in the renderings look a bit more realistic/natural than in other GI engines I’ve seen so far. I am not sure if I’m imagining that or if it is really the case.
I’m sure you’re right - it is one of the supposed benefits of “unbiased” renderers. I’m personally very impressed by the light distribution and how smoothly it interacts with shadows and surfaces. The light is supposedly calculated from real-world light physics (photon wave optics and such), and the process repeats towards infinity, refining the result as it goes. That is why the render is never considered “finished”: you simply interrupt the process when you are satisfied.
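In code terms the idea is just an incremental per-pixel average that you can stop whenever you like - a minimal sketch, with a random stand-in for tracing one light path:

```python
import random

# Minimal sketch of "never finished": each pass folds one more random
# light sample into a running per-pixel average; you stop when the
# noise looks low enough. The sampler below is a fake stand-in.
def pixel_estimate(sample_radiance, passes):
    mean = 0.0
    for n in range(1, passes + 1):
        mean += (sample_radiance() - mean) / n  # incremental average
    return mean

noisy = lambda: 0.5 + random.uniform(-0.3, 0.3)  # fake light-path result
for p in (10, 100, 10000):
    print(p, round(pixel_estimate(noisy, p), 4))  # converges toward 0.5
```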
It depends. The image is so grainy for the first couple of minutes that rendering a biased, low-fi image is probably both faster and more accurate feedback. If your bumpmap is high-res, it may take well over an hour before you can distinguish it from the render noise.
Currently it’s v0.2; let’s make it grow and optimize.
Maybe not everybody knows how much hype the Maxwell renderer is generating among architects… this could be a serious contender for static photorealistic architectural renderings.
And maybe a few remember the very early Maxwell pictures… these are better!