2.5’s way of opening up the UI fully to outside extensions is a huge step towards making Blender seamlessly interoperable with many different rendering engines, both free and commercial. That’s really promising.
I think of modelling, texturing and scripting as the core strengths of Blender. If any given tool works better for you for a certain task, then you should totally use it. Monolithic applications belong in the fossil record.
It also depends on the scene… I rendered an HD frame in Blender in 20 minutes; one with the same setup, but with interpolated Monte Carlo radiosity (3 bounces), AA, adaptive sampling, and nodal textures with anisotropic reflections and reflection blurring, rendered in 26 seconds in LightWave. A few more tweaks pushed it to 1 min. 3 sec. I don’t think I can tell you about the other stuff until the release of version 10, but it was faster. At any rate, I keep a few render engines on hand depending on the situation, both open source and commercial. I think BI is a good renderer with very good results, and in the future it could be a bit stronger, but with the development of Yafa and Lux, is there a reason to improve BI?
At least to keep it alive… I am not skilled enough to go back and forth between Blender and some other alien renderer; I really need a straightforward way to see, as fast as possible, what my scene looks like DURING the modelling, animating and lighting process.
Does Yafa support normal maps? I never got a clear answer on this. IMO it doesn’t.
Is lux fast? Faster than the ‘slow’ BI?
Every time we complain about Blender, it’s always the same answer: ‘You haven’t paid for this’. But we did, I did! So many hours of work…
An example:
A ZBrush + 3DCoat license costs ~750 euros. It’s the most powerful digital sculpting combination I know.
Don’t spoil your talent trying to achieve the same results with Blender; it’s pointless and a waste of time.
Yes, a good artist makes masterpieces with whatever tool he has. But…
When the only tool you have is a hammer, everything around you starts looking like nails…
Whether this makes the artist better, I don’t know. It never improves the toolmaker, that’s for sure.
In my opinion BI is a joke.
You can render almost everything with it, but please, don’t joke…
You will spend days simulating radiosity instead of one hour setting up GI in other 3D software.
Radiosity and photon mapping are algorithms from before the year 2000, and path tracing goes back even further. So how long do people need to wait to see GI in Blender Internal? Another 10 years? I hope not.
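For what it’s worth, the core of a path tracer is just a Monte Carlo estimate of the rendering equation, nothing BI couldn’t do in principle. A toy sketch in plain Python (my own illustration, unrelated to any renderer’s actual code), estimating the irradiance on a surface under a uniform sky:

```python
import math
import random

def irradiance_uniform_sky(sky_radiance, n_samples, seed=1):
    """Monte Carlo estimate of E = integral of L * cos(theta) over the hemisphere.

    For a uniform sky the exact answer is L * pi, which the estimate
    converges to as n_samples grows."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Uniform hemisphere sampling: pdf = 1 / (2*pi), and cos(theta)
        # is then uniformly distributed in [0, 1).
        cos_theta = rng.random()
        total += sky_radiance * cos_theta * 2.0 * math.pi
    return total / n_samples

estimate = irradiance_uniform_sky(1.0, 200_000)
print(abs(estimate - math.pi) < 0.05)  # → True
```

The noise falls off as 1/sqrt(n_samples), which is exactly why a naive tracer is slow and why production engines add importance sampling and caching on top.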
I think after 2.6 is finished, the Blender Foundation really should start to work on the many things that slipped through the cracks in the rush for new features.
-Blender should have a decent, fast direct-lighting and GI render engine.
If the Blender Foundation doesn’t have the manpower to write a renderer on its own, then
Ton should merge YafaRay into Blender. Especially now, when the shader rework needs to be done too. And I must say, I don’t believe a render API is a cure for the lack of GI in Blender.
-A better procedural texture engine, with full support for solid textures and a bunch of presets like types of wood, bricks, marble, etc.
-The whole particle and physics system… I agree with Ton, right now it’s not usable for production.
-An official material library for architecture.
You just don’t know how hard programming a renderer is. You can pay all the money you want to a programmer fresh out of graduation, and I am sure he will not be able to do it.
Changing the renderer needed the changes they made from 2.49 to 2.5. Those weren’t made to create a more beautiful app but to prepare it for all the future changes it will have. The renderer is one of those changes.
Imo it would be best to focus on a stable render API. Every now and then this discussion of what to do with BI pops up. Then the visualization people shout “the best thing would be LuxRender or some other unbiased engine”, others shout that YafaRay would be the best, etc.
I personally wouldn’t like to see any of those integrated in Blender. They are all good render engines for special purposes, but not for everything. LuxRender, for example, is a great tool to make freaking awesome photoreal pictures, but it’s weak on animations: due to its physical nature it is slow and hard to do cheats of any sort with.
Currently I am developing a Blender 2.5 to RenderMan exporter and am interested in VFX, so I could shout “integrate Aqsis/Pixie with Blender, then you have the flexibility to do anything you like!!!”, but I won’t, because I know that’s not what other people might need.
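For those wondering what such an exporter actually produces: RenderMan-compliant renderers like Aqsis and Pixie consume RIB streams, so the exporter’s job is basically walking the scene and printing statements like these. A deliberately tiny sketch (a hypothetical helper of my own, not my actual exporter code):

```python
def sphere_rib(radius=1.0, fov=45.0, image="out.tiff"):
    """Emit a minimal RIB stream: one frame, one sphere, default shading."""
    lines = [
        'Display "%s" "file" "rgba"' % image,
        'Projection "perspective" "fov" [%g]' % fov,
        'Translate 0 0 5',  # move the scene away from the default camera
        'WorldBegin',
        # RIB sphere: radius, zmin, zmax, sweep angle in degrees
        '  Sphere %g %g %g 360' % (radius, -radius, radius),
        'WorldEnd',
    ]
    return "\n".join(lines)

print(sphere_rib())
```

A real exporter adds transforms, shader bindings and mesh data on top, but the flow is the same: serialize Blender’s scene graph into this declarative stream and hand it to whichever RenderMan renderer you like.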
Another thing is that there are a few programs out there with good render engines integrated (Cinema4D, LightWave (imho), Modo), but the ones that are used the most (at least in the film/games industry) are the three from Autodesk: 3ds Max, Maya and Softimage. They all have a big base of paid full-time developers but really don’t focus on their render engines. Look at their internal render engines: they are far more outdated than Blender’s Internal. But they have awesome render APIs and therefore a huge list of render engines you can choose from.
The list of Blender’s developers is much smaller, and few of them are getting paid.
So in my opinion it would really be the best thing to make the render API as stable as possible, keep everything related to it as flexible as possible, and then focus on anything else but the ancient render engine. There is just no need for it.
At the moment the render API feels very much half-baked. There are some hard-coded areas that are absolutely useless in external engines (render layers and passes, for example).
Improving Blender’s internal engine to make it competitive with modern render engines would take far too many resources that are just not there, so that won’t happen any time soon. And if the developers keep heading for this aim, I’m afraid the render API will remain half-baked and no one will be satisfied in the near future.
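To illustrate what a “stable render API” buys you, here is a toy version of the pattern in Python: the host defines one narrow interface plus a registry, and internal and external engines plug in on equal footing. All names here are made up; the real Blender 2.5 API is different:

```python
class RenderEngine:
    """Host-side contract: an engine only sees scene data and returns a result."""
    name = "abstract"

    def render(self, scene):
        raise NotImplementedError

# The host owns one registry; engines never need to know about each other.
ENGINES = {}

def register(engine_cls):
    ENGINES[engine_cls.name] = engine_cls
    return engine_cls

@register
class ToyInternal(RenderEngine):
    name = "TOY_INTERNAL"

    def render(self, scene):
        return "flat-shaded %d objects" % len(scene["objects"])

@register
class ToyLuxLike(RenderEngine):
    name = "TOY_LUXLIKE"

    def render(self, scene):
        return "path-traced %d objects" % len(scene["objects"])

scene = {"objects": ["cube", "lamp", "camera"]}
for name in sorted(ENGINES):
    print(name, "->", ENGINES[name]().render(scene))
```

The point of the pattern is that the host never hard-codes anything engine-specific (which is exactly the complaint about render layers and passes above): anything outside this narrow interface is the engine’s own business.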
I have seen talk in the LuxRender forums about trying to render some of the scenes from Sintel sometime in the future; they did acknowledge that to do that they would have to get one or more of their GI methods fast enough for animation, and maybe bring in new material features.
Lux’s photon mapper is working again (though SPPM will probably supersede it in the near future), and there’s talk of a new material in development that would work well for effects like halos. Seeing Sintel rendered in Lux would show that it can replace BI for most people.
There is the instant GI integrator available in Luxblend25; it’s somewhat limited in the effects it can do compared to photon mapping and bidirectional methods, but results come fast.
I have to say I don’t quite see the difference anyway…
And yes, Yafa is much faster and better-looking than BI. AFAIK, YafRay was originally designed precisely to be a beefed-up render engine for Blender.
Is lux fast? Faster than the ‘slow’ BI?
Yes, it’s faster, both in setup time and render time, if you want photorealism. If you want plastic-looking materials without GI, though, BI is much faster.
In my opinion PM (photon mapping) is still the standard in both visualisation and animation; it is the best solution right now and will be for the next few years.
Take a look at V-Ray or mental ray: those two render engines hold a huge part of the industry.
I don’t believe a render API is a cure for the lack of GI in Blender Internal. Only a few users will be ready to pay for external renderers like V-Ray/mental ray.
The next problem is, I don’t think mental images or Chaos Group will write an official exporter for Blender any time soon. And it is a problem for someone who is willing to pay a lot of money for a commercial renderer to get a renderer without any exporter support (especially in production).
That’s why I think users should have a GI-capable, full-featured renderer in Blender, with render passes, layers and full compatibility with all other Blender features.
Blender wants to be an alternative to commercial software, and it can’t be that without a decent built-in renderer, or without officially supporting one of the open source renderers like YafRay or any other. By supporting I mean full feature compatibility with baking, materials, compositing, particles, etc., and full documentation.
Why do people keep talking as if mental ray and V-Ray are the only external render engines out there, and as if the reason to have GI in BI is that not everyone can pay for those engines?
LuxRender and YafaRay have nice GI-based features, are free and open source, and have WIP exporters for Blender 2.5.
Why do people only see what they want to see?
I’ve already written it: YafaRay is a damn good renderer.
Maybe people always talk about mental ray and V-Ray because those two are standard in the industry, are fast, and are the best documented I have ever seen.
I know LuxRender and YafaRay have decent GI, and they are open source. And I can’t wait for the exporters to be finished, but this is irrelevant to BLENDER INTERNAL and its GLOBAL ILLUMINATION!
Luxblend25 at least is now in a state where you can access and use nearly everything available in LuxRender, provided you download a build of Lux 0.8 (it very recently added support for LuxRender displacement, the new firefly rejection, and the new glossy translucent material).
But why try to bring true GI with caustics and dispersion to BI when the shading refactor has to be done first to even make that possible? The render exporters will likely be in a fully usable state by the time that happens, and even then Blender would not have many render-centric devs for BI compared to the external engines. Getting the render API done and seeing support in external renderers for things like the node trees may actually be less work.
I thought all of this had been discussed before. Blender needs a shader refactor to make it friendlier to more modern render methods, as well as some heavy construction on BI itself, which sounds as if it’s pretty tough for new coders to start hacking on. The biggest obstacle in the way of this happening (besides manpower and time) is probably the backwards compatibility issue.