Farsthary has been busy working on the final gathering part of his GI implementation, and he’s looking for help on making final gathering work like it does in the high-end renderers (no noticeable high-frequency noise). He’s looked at papers, but they mainly talk about photon mapping and say little about final gathering.
So not to worry, the lack of blog updates just meant he was busy reading papers and writing code. :yes:
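For anyone wondering what final gathering actually does, here is a rough, hypothetical Python sketch of the idea (not Farsthary’s code; the radiance_estimate function is just a stand-in for a photon map or irradiance cache lookup): at each shading point you shoot a set of stratified, cosine-weighted rays over the hemisphere and average what they see. The stratification is the usual trick for keeping high-frequency noise down compared to purely random sampling.

import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def radiance_estimate(point, direction):
    # Stand-in for tracing the gather ray and looking up the photon map
    # (or irradiance cache) at whatever surface it hits.
    return 0.5

def final_gather(point, normal, n=16):
    # Build an orthonormal basis around the surface normal.
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    tangent = normalize(cross(helper, normal))
    bitangent = cross(normal, tangent)

    total = 0.0
    for i in range(n):
        for j in range(n):
            # One jittered sample per stratum (i, j): this stratification is
            # what keeps the estimate from showing high-frequency noise.
            u = (i + random.random()) / n
            v = (j + random.random()) / n
            # Cosine-weighted direction in tangent space.
            r = math.sqrt(u)
            phi = 2.0 * math.pi * v
            local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u))
            direction = tuple(local[0] * tangent[k] +
                              local[1] * bitangent[k] +
                              local[2] * normal[k] for k in range(3))
            total += radiance_estimate(point, direction)
    # The cosine weight is folded into the sampling, so the result is a plain average.
    return total / (n * n)

print(final_gather((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))

The high-end renderers pile more tricks on top of this (irradiance caching, interpolation between gather points), but the stratified hemisphere gather is the common core.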
Me too, the internal renderer is better than most people think… combined with the compositing nodes there are unlimited tweaks you can do to get a nice final image. The only thing it lacks at the moment is GI, so why build a fully external render integration instead of “just” adding this module to the great internal renderer? I really think Blender’s internal renderer is one of the best.
I thought the idea (from the Wintercamp discussions) was not to go and develop a render API before some love is given to the internal renderer.
And the whole photon mapping project is, after all, a WIP; it’s not going to be finished tomorrow,
and it’s not even certain it will make it into 2.5, so there’s plenty of time left.
Besides, GI with a slowish raytracer is better than no GI at all.
That paper is awesome (as papers always are!) and hopefully it matches the photon mapping project’s needs. But I think that BI will still need proper physically based materials and lights (area lights, please!!!) to compete with the big guys (V-Ray and friends).
edit: btw, in the GSoC 2009 proposals there’s a raytrace speedup idea (hint hint hint…)
I’ve never personally understood the fascination with GI and physical accuracy. Think of every movie whose special effects amaze you (many of them photoreal): not one uses physically accurate techniques, not one uses, for example, a spectral rendering engine, and yet time and again we get people wishing for true GI in Blender.
Let’s assume for a second that Blender does get it: what then? People here will sit at their dual/quad-core machines throwing objects into the 3D view, hitting the render button and waiting 1-2 hours for a single frame to drop out, so it can be posted online with the expectation that people will be amazed by the quality, as if they personally had much to do with the quality that comes out?
Then they make a small change, hit the render button and wait another few hours for the machine to do all the work? We already see this, to a much worse extent, with Maxwell, SunFlow and Kerkythea tests.
As the Renderman paper posted here shows, it’s just not necessary. Look at the render times in the paper. What good is raytraced GI going to be if you have scenes with fluids, particles, smoke and so on? None whatsoever, because your machine just won’t be able to cope with it. Point-based approaches and image-based techniques get the job done much more quickly and let artists focus on making scenes look great instead of sitting through hours of down-time waiting for the machine to finish. Even if we reach a point where full raytracing of a complex scene takes under 5 minutes on a home computer (not an NVIDIA Tesla), the faster methods will take under 30 seconds, so very complex, noise-free animations can be done and not just stills.
And this is where an external system comes in. It’s not about accuracy, it’s about flexibility and to some extent compatibility. All the techniques mentioned above can be done right now without people having to hang on the smallest comments made by the best Blender developers. Look at Whiterabbit’s recent mapping tests where you can get noise-free GI-looking results within minutes. Compare this to his original raytraced tests which took hours to render out.
In summary, I really think that true GI in the internal engine is the wrong thing to wait for. BI just needs to be more like Renderman in design to allow experimentation and fast progression - true GI can actually be developed externally with that design. GLSL and nodes are great steps in the direction of letting artists visualize artwork and develop it more quickly, and this is really what everyone wants - being able to render out the images you think of. GI isn’t going to deliver this, because it’s too slow and inflexible to give any real control. What will deliver it is a flexible rendering engine that allows fast image generation, i.e. one with low computational requirements.
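To make the point-based idea a bit more concrete, here is a deliberately naive Python sketch of what such a renderer does, under the assumption I have it roughly right (real implementations like Pixar’s cluster the points into an octree and rasterise them; this brute-force loop only shows the per-point math): the scene is baked into small disks (“surfels”) carrying position, normal, area and outgoing radiance, and indirect light at a shading point becomes a weighted sum over those disks instead of a raytrace.

import math

def surfel_gather(p, n, surfels):
    # Very naive point-based GI: sum approximate disk-to-point form factors.
    # Real point-based renderers cluster surfels into an octree and rasterise
    # them around the shading point; this O(N) loop only shows what each
    # individual surfel contributes.
    indirect = 0.0
    for sp, sn, area, radiance in surfels:
        d = tuple(sp[i] - p[i] for i in range(3))
        dist2 = sum(c * c for c in d) + 1e-8
        dist = math.sqrt(dist2)
        w = tuple(c / dist for c in d)                          # direction to surfel
        cos_r = max(0.0, sum(n[i] * w[i] for i in range(3)))    # receiver cosine
        cos_s = max(0.0, -sum(sn[i] * w[i] for i in range(3)))  # sender cosine
        # Disk-to-point form factor approximation.
        form_factor = (cos_r * cos_s * area) / (math.pi * dist2 + area)
        indirect += radiance * form_factor
    return indirect

# One bright surfel hanging above the shading point.
surfels = [((0.0, 0.0, 2.0), (0.0, 0.0, -1.0), 0.1, 1.0)]
print(surfel_gather((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), surfels))

No rays are traced at gather time, which is why the noise-free, minutes-not-hours renders mentioned above are possible; the trade-off is that the baked point cloud is only an approximation of the scene.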
How much time does it take, then, to tweak the scanline render passes to “fake” a raytraced look?
And are there better scanline methods we don’t have? If the scanline renderer can still be used as you say, then what do we need, and how much extra time does a project take when it doesn’t use internal GI?
I’ve been pushing for a Renderman solution for some time. The good thing about it is that Renderman is an industry standard. Renderman can handle both Bolt and Pirates of the Caribbean with ease on a production level. It looks the part, and it’s FAST. The other plus that I mentioned before is that if Blender could import/export RIB files and compile RM shaders, then we would be an instant shoo-in at many smaller studios, if not a few larger ones as well.
On the other hand, the GLSL route is very intriguing. Many games today have amazing graphics that look as if they had been rendered for hours on a multi-core workstation. The big thing to remember is that they produce 30+ of these images per second! If we could produce images that look just like a screenshot from Crysis, that would be very cool indeed. Even if Blender, using the same techniques, were woefully slow compared to Crysis or another amped-up game engine, I wouldn’t mind waiting up to a full second for one of my renders!
All they need is a well-documented render API, and people could plug in whatever external renderer they wanted with a little glue code.
Looks like they’re discussing the details on the mailing lists, so as long as they don’t get bogged down with integrating a specific renderer (Lux), it shouldn’t be too hard to plug in something like NVIDIA’s Gelato or ATI’s hardware renderer (whose name I can’t remember).
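Something like this is what I mean by “glue code” - a completely hypothetical Python sketch, since the actual API is still being discussed (the toy scene format and the “myrenderer” binary are made up): the exporter walks the scene, writes it out in the external renderer’s format, shells out to the renderer, and gets an image back.

import os, subprocess, tempfile

def export_scene(scene, path):
    # A toy scene format; a real exporter would dump Blender's meshes,
    # lamps and camera in whatever format the external renderer reads.
    with open(path, "w") as f:
        f.write("camera %s\n" % " ".join(str(c) for c in scene["camera"]))
        for name, verts in scene["meshes"].items():
            f.write("mesh %s %d\n" % (name, len(verts)))
            for v in verts:
                f.write("v %f %f %f\n" % v)

def render_external(scene, output_image, renderer="myrenderer"):
    # 'myrenderer' is a placeholder, not a real program.
    scene_file = os.path.join(tempfile.gettempdir(), "scene.txt")
    export_scene(scene, scene_file)
    subprocess.call([renderer, scene_file, "-o", output_image])
    return output_image

# A one-triangle "scene" just to show the shape of the data.
scene = {
    "camera": (0.0, -5.0, 1.0),
    "meshes": {"tri": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]},
}
export_scene(scene, os.path.join(tempfile.gettempdir(), "scene.txt"))
# render_external(scene, "out.png")  # would need a real renderer on the PATH

Once the export/callback side is documented, swapping the renderer really is just a matter of writing a different export_scene and pointing at a different binary.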
Yeah, but a lot of us do. And when you use the point-based versions of Renderman, you can’t really tell the difference between a 4-hour render and a 10-minute one.
It’s just that I keep hearing the argument that we don’t need raytracing because Pixar doesn’t, etc…
But I never hear of or see an example of how to actually go about faking it all in Blender.
There is a reason a huge chunk of the motion graphics and product design industry jumped to Cinema 4D and Modo: it’s easier and faster to make pretty stuff thanks to the speed of rendering and output.
But my argument has no harshness to it. I am merely asking for some neat docs that show the time saved by “faking it” in Blender.
I myself gave up faking it out of frustration with the bleh look of the scanline output. I can wait for the raytrace speed and then totally alter the result via nodes. But the default scanline output looks off.
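Since nobody ever posts an example: the classic “fake GI” trick is a light dome, a hemisphere of low-energy lamps standing in for sky illumination. Here is a small, hypothetical Python sketch that only computes lamp positions and energies (hooking it up to actual Blender lamp objects is left out, since the scripting API differs between 2.4x and 2.5):

import math

def light_dome(rings=4, lamps_per_ring=8, radius=10.0, total_energy=1.0):
    # Place lamps on a hemisphere above the origin. Each lamp gets an equal
    # share of the total energy; more rings/lamps gives smoother, more
    # GI-looking shadows at the cost of render time.
    positions = []
    for r in range(1, rings + 1):
        elevation = (math.pi / 2.0) * r / (rings + 1)   # angle above the horizon
        z = radius * math.sin(elevation)
        ring_radius = radius * math.cos(elevation)
        for i in range(lamps_per_ring):
            azimuth = 2.0 * math.pi * i / lamps_per_ring
            positions.append((ring_radius * math.cos(azimuth),
                              ring_radius * math.sin(azimuth),
                              z))
    energy = total_energy / len(positions)
    return [(p, energy) for p in positions]

for pos, energy in light_dome():
    print("lamp at %.2f %.2f %.2f, energy %.4f" % (pos[0], pos[1], pos[2], energy))

Whether the result looks good enough compared to real GI is exactly the argument being had in this thread, but this is the kind of rig people mean when they talk about faking it.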
Don’t you think your point of view is very narrow? Not everybody does animation. Many also use 3D for product, architecture, and other purposes.
To this day I have never seen a non-GI rendering with the same realism as a GI rendering.
Period. You can get close - but not all the way there. Indigo, LuxRender and the like are not known for fast rendering.
The Renderman people might say they do not need GI - but does everything Pixar does look 100% natural either?
Do you think they put GI into Mental Ray or V-Ray just for fun?
YafaRay is nice and free; however, V-Ray is also very fast with GI.
You also need to keep in mind that motion pictures and stills are not the same. In a motion picture, many flaws are not visible, thanks to the frame rate, that would show up in a still image.
I would rather make a few mouse clicks and render a GI solution for 20 minutes than spend more than 20 minutes setting up a complex light rig to fake GI.
And effects like color bleeding and caustics, elements important for product rendering, cannot even be rendered realistically without GI - you can maybe fake them believably at best.