Again…pay attention to the developers and what they do.
Blender already has similar preview rendering; it’s also smarter
because you preview right on top of the area you need to check out.
This saves time and helps you focus on what’s important.
Furthermore, you can set up a node compositing window and set the
UV/Image window to preview mode, so any changes you make
will be displayed in that window; in fact, all you need is a refresh.
Very true, except for raytracing… there it can take a minute or more for one bucket to start appearing, while the IPR method would at least start to show the lighting as a very undetailed pass and refine from there…
I know it’s just a step in the new render recode, so we all have to wait until Ton is finished with it before anyone tries anything new in there. But let’s say this is good for history’s sake.
Rendering takes time no matter what. The render preview is free from
costly antialiasing so it’s fairly quick.
You also need a fast computer in order to enjoy this. I’d believe you’d
need an equally fast computer for the modo preview as well, especially
with hard shadows such as raytraced shadows. No way they did that
demo with a slow computer.
Blender’s scanline renderer is fairly fast.
Have a dual-processor setup? Select threads in the Render menu
and watch previews fly in seconds, even with hairy details.
Even with my “old” 2 GB, 1.8 GHz AMD 2500+ machine (running under
self-compiled Linux and a self-compiled, AMD-optimized Blender… to be fair), the preview window is fairly fast even with the complex scenario
I described earlier (with the hair).
But of course, improvements in the code + optimizations are
SO welcome, so get cracking with learning to code
JoOngle, I think the progressive rendering you see here is actually quite fast… It is probably much like FPrime (watch some of the demo vids), which is used in Lightwave.
But the rest of your point still stands! youngbatcat, you need to get out of your cave and start programming these features yourself :o)
youngbatcat,
All these suggestions, do you post these at the developers forum as well? You know this is mainly a user forum, and developers don’t check it as much as the other one; don’t forget the mailing lists as well.
That is a really cool tool. Not to slam our render engine, which I like a lot, but that one there is really cool. With one area light and GI, those are some very realistic-looking renders he got. http://content.luxology.com/modo/201/video/MouseProject-02.mov
Since I am not a coder, I have no clue how much work it would take to get Blender’s internal renderer to do that. I’m guessing it would take a lot of time, and Ton has a lot on his plate already.
MegaPOV seems to be capable of this. At least, while I was experimenting with it over the weekend, it seemed capable of providing similar feedback via progressive rendering.
Hmm, MegaPOV has this kind of rendering only when radiosity is activated, and Sunflow is a GI-based rendering engine whose approach is based on successive approximation of the final solution. MegaPOV, with no radiosity, acts like Blender internal and the majority of the rendering engines I know (Maya internal, Pixie, Cinema 4D’s rendering engine, etc.).
In fact, when you activate radiosity in MegaPOV, you get the progressive refinement, as you said… but you can also have the same type of refinement with standard rendering, although it’s not activated by default.
I’d have to look back, because it’s been a long time since I used this feature, but if I remember well, you can activate it by turning on Mosaic preview or something like that…
For now, even if we don’t have a multi-pass renderer, would it be possible to make one of the windows in Blender do a “render from view” in the actual Blender window at that size? Or even better, a render that uses a limited amount of CPU, renders in the window at a preset resolution, and restarts rendering if anything is changed? It could be an option in the “Draw Type” menu, as an extra entry above “Textured”. That way you can turn it off whenever you want to do something that will take a fair bit of CPU.
Having said this, and not being a programmer, I’m not sure how much work this would take, but my guess is a lot.
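The control flow of that idea (not any actual Blender code) can be sketched in a few lines: keep a version counter that every scene edit bumps, render at a preset low resolution, and throw the in-progress result away if the scene changed underneath it. The `render_scene` method here is a hypothetical placeholder for the real renderer.

```python
# Minimal sketch of the restart-on-change preview idea; render_scene()
# is a hypothetical stand-in for the actual renderer.

class PreviewLoop:
    def __init__(self, resolution=(160, 120)):
        self.resolution = resolution
        self.scene_version = 0       # bumped on every user edit
        self.rendered_version = -1   # version of the last completed preview
        self.image = None

    def scene_changed(self):
        """Called by the editor on any edit: invalidates the current preview."""
        self.scene_version += 1

    def step(self):
        """One iteration of the preview loop; re-renders only when stale."""
        if self.rendered_version != self.scene_version:
            version = self.scene_version
            image = self.render_scene(self.resolution)
            # Only accept the result if nothing changed while rendering.
            if version == self.scene_version:
                self.image = image
                self.rendered_version = version

    def render_scene(self, resolution):
        # Placeholder: a real implementation would invoke the renderer here.
        w, h = resolution
        return [[0] * w for _ in range(h)]
```

Driving `step()` from the UI’s idle loop would give the “limited CPU” behaviour: the preview only burns cycles while it is stale.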
FYI, they’re not quite the same…
The IPR in Sunflow starts on the whole screen, as a very pixelated image, while Blender’s is like the render engine we have now, rendering in tiles… both of which go hand in hand…
The advantage of the Sunflow one is that you get immediate feedback on whether the scene is lit correctly,
unlike having to wait for a tile to reach its destination…
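The coarse-to-fine strategy described above can be sketched like this (this is an illustration of the general technique, not Sunflow’s actual code): shade one sample per large block, paint the whole frame with those samples, then halve the block size each pass. A full, pixelated image appears after the first pass instead of one finished tile at a time. The `shade` function is a hypothetical stand-in for the real shading computation.

```python
# Sketch of progressive-refinement preview: every pass covers the whole
# frame, but early passes reuse one shading sample per large block.

def shade(x, y):
    # Hypothetical stand-in for the actual per-pixel shading computation.
    return (x + y) % 255

def progressive_passes(width, height, start_block=16):
    """Yield one full-frame image per pass, each pass twice as detailed."""
    block = start_block
    while block >= 1:
        image = [[None] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # One sample per block, copied across the whole block.
                image[y][x] = shade(x - x % block, y - y % block)
        yield image
        block //= 2
```

With `start_block=16`, the first pass costs roughly 1/256 of a full render, which is why the lighting becomes judgeable almost immediately.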
Yfkar, I don’t know of any other method… The render nodes take the same amount of time to re-render the scene as a normal render… even with anti-aliasing off…