What rendering improvements are needed?

I think Renderman’s approach is good for BI (more controllable), as used in movies (3Delight) or animations (PRMan),
but I never use Renderman; the renderer that impresses me is Kerkythea (presets, materials).
And for now the results of BI look old and dark (see the simple scenes in Finished Projects), so GI and light bouncing are needed. Thanks

For me as an artist, renderers come and go. I think if Blender gets a good renderer API going, people might catch up and start coding towards Blender instead of us chasing them.

I want the workflow of the internal renderer, with no application swapping (this might be doable just by passing command-line stuff to the external renderer and having it render into Blender’s GUI, with Blender UI buttons/sliders etc. for the external renderer).

A compositing workflow with the IndexOB pass and multiple render passes is a must; if a way superior external renderer can’t do that, I will use the internal renderer instead.

baking does also belong to this topic, right? :slight_smile:

I would like to see antialiased baking and some more baking modes (lighting, shadows, lighting+shadows, …).

Is there a coder around here who can comment on a full rewrite, if that’s even feasible or not?

In general ‘full rewrites’ are dangerous and prone to failure. Usually incremental rewrites are the way to go.

How about just making sure all mesh and particle settings are exportable so people aren’t limited?

Otherwise, some form of extensibility (external shading language, shading operations, etc.) would be nice.

I don’t know about the most efficient algorithms, but from a user’s point of view, what I would like in BI is:

  • Global illumination.
  • Good IBL, integrating the Smart IBL technique.
  • Progressive results: progressive detailing, noise reduction, adaptive AA, etc. Being able to interactively select regions that need noise elimination or extra detail, instead of recalculating the whole scene: for instance, box-selecting zones, or “painting” the zones that still need refinement.
  • Integration with the 3D view, allowing preview and, for instance, interactive focus/DOF tweaking and materials/lights tweaking.
  • GPU calculations where possible.
  • Good speed.

Progressive/interactive results can make the workflow a lot faster without much need for incredibly efficient algorithms. There is a lot of testing before the final render, and if those steps can be sped up by a good preview workflow, the whole project is done quicker. And the final render itself can be sped up by adding detail interactively only where it’s needed.
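Just to make the idea concrete, the progressive/adaptive scheme above can be sketched in a few lines: keep a running per-pixel mean and variance, and spend further samples only on pixels whose noise estimate is still above a threshold. This is purely a hypothetical illustration (the `shade()` stand-in especially), not Blender code:

```python
import random

def shade(x, y):
    """Stand-in for one noisy Monte Carlo sample of a pixel."""
    return 0.5 + random.uniform(-0.2, 0.2)

def progressive_render(width, height, passes=32, noise_target=0.01):
    n = [[0] * width for _ in range(height)]       # samples per pixel
    mean = [[0.0] * width for _ in range(height)]  # running mean
    m2 = [[0.0] * width for _ in range(height)]    # sum of squared deviations
    for _ in range(passes):
        converged = True
        for y in range(height):
            for x in range(width):
                # variance of the mean = sample variance / n;
                # skip pixels that are already below the noise target
                if n[y][x] > 1 and m2[y][x] / (n[y][x] - 1) / n[y][x] < noise_target:
                    continue
                converged = False
                s = shade(x, y)
                n[y][x] += 1
                delta = s - mean[y][x]
                mean[y][x] += delta / n[y][x]        # Welford's running update
                m2[y][x] += delta * (s - mean[y][x])
        if converged:
            break  # every pixel has converged; stop early
    return mean
```

The same bookkeeping would let a UI show an ever-improving image and accept user-painted “refine here” masks by simply resetting the threshold for those pixels.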

One little feature I’ve always wanted is ambient occlusion with bump/normal map support (it just adds that bit of quality).


I think the “Use shader” option under Indirect Lighting in the render branch addresses this directly.

I think, for now: stabilize the render branch and merge it in.
And then, on a new branch, do a partial rewrite of the whole renderer aiming for speed, GI, and the refactored shaders (this will take several years, I guess)…

Sweet, I haven’t played with the render branch for ages — will check this out.

Honestly, it seems that every single app, even the one-man-band operations, has some kind of decent GI solution. Why hasn’t Blender got one? It’s really silly.

Why doesn’t the BF approach one of these small apps, or even contact one or two of the Yafaray coders, and say “Hey, you’re doing this for free right now. We’ll actually PAY you REAL MONEY to come and write us a render engine.”

Yafaray itself found a few GSoC students to implement progressive photon mapping. So would it really be THAT difficult for the BF to find someone to get things going? Why can’t we set our friend from Cuba on the case?

I’m pushing for a Renderman solution as it’s an existing standard, and it would probably be easier to find someone able to implement it. Secretly, though, I would like to see the work on the Render Branch completed, if it’s possible to do. The impression given is that the main architect has walked away and everyone else has thrown their hands up and said “Hell, I don’t know how it works!”, which doesn’t inspire confidence that it’s going to get done. I would love to be proven wrong, though.

Well, I must admit I share the same feeling and agree with every single word…

I think Brecht has been the author of many, many things in Blender, but he also shares responsibility for the direction development has taken so far.

I’m grateful, of course, but when you say and repeat that GI has been a standard in CG for at least… um, ten years? Blender can’t afford to lack it. And when you start hoping something can change with the render25 branch, you suddenly read something like this:

well, you feel a bit discouraged and tend to think things can’t actually change in Blender Internal… :frowning:
I mean, if Brecht is the only one able to understand Blender Internal, and if he’s leaving the Foundation now, we only have two possibilities:

1_ the renderer will most likely stay in this state for ages, or
2_ a new developer is finally found to work on it, and we can also hope she/he is a little more open to hearing community needs and inclined to translate them into the real features the CG world got a decade ago.

I put myself on their side here and second this thought fully.
Why don’t we have GI yet?

BTW, we already have a GI branch for Blender, but it looks like it was stopped.
Or at least I am not sure what its future is.

Let me share a “dream” here:
remember SharpConstruct? That was an old, clever, free experimental sculpting application developed years ago by… Nicholas Bishop! Now he’s on the Blender team (not paid, AFAIK) and is the main multiresolution & sculpt developer. So, these days there has been a lot of hype around a new biased/unbiased renderer called Mitsuba. It’s open-source software, it’s quite impressive in its first beta stage, fast, stable, rich in terms of algorithmic approaches, multithreaded, network-capable, etc… In every aspect it seems that the developer is a really talented guy. So, if Bishop got involved in Blender, then maybe…
Anyone else like my dream?

It’s total bull (excuse my French) to go for Renderman. The current (free) implementations are far from production-ready, fully integrating them with Blender is not an option, and writing a new Renderman implementation for Blender would take at least ten times longer than bringing photon mapping with final gathering, caustics, micropolygon displacement, and MLT to BI. And as a cherry on top, one could probably implement a light cache clone too, just because there’s plenty of time left to do the research. All of that for BI. If the programmer’s fast, he or she could probably learn Japanese in that same period of time as well.

What it boils down to is this: if you want Renderman for Blender, write an exporter. Renderman was designed to be exported to.

[/rant]
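For what it’s worth, a minimal exporter along those lines is not much code. The sketch below emits a tiny RIB file from a made-up scene dictionary; the `scene` layout is invented for illustration (a real exporter would walk `bpy.data`), but the RIB statements themselves (`Display`, `Projection`, `WorldBegin`, `Sphere`, …) are standard:

```python
def export_rib(scene, path=None):
    """Emit a minimal RIB scene description; write it to `path` if given."""
    lines = []
    # Image options come before the world block in RIB.
    lines.append('Display "%s" "file" "rgba"' % (scene["name"] + ".tif"))
    lines.append("Format %d %d 1" % (scene["width"], scene["height"]))
    lines.append('Projection "perspective" "fov" [%g]' % scene["fov"])
    lines.append("WorldBegin")
    for obj in scene["objects"]:
        lines.append("AttributeBegin")
        lines.append("Translate %g %g %g" % tuple(obj["location"]))
        lines.append('Surface "%s"' % obj["shader"])  # name of a compiled shader
        if obj["type"] == "sphere":
            r = obj["radius"]
            # Sphere radius zmin zmax thetamax
            lines.append("Sphere %g %g %g 360" % (r, -r, r))
        lines.append("AttributeEnd")
    lines.append("WorldEnd")
    rib = "\n".join(lines) + "\n"
    if path:
        with open(path, "w") as f:
            f.write(rib)
    return rib
```

Any RenderMan-compliant renderer (PRMan, 3Delight, Aqsis, …) should accept a file like this, which is exactly why “write an exporter” is the sane integration path.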

Every support Blender has for external renderers works through exporting, how did you have in mind a Renderman solution would work in Blender?

I’m not convinced that supporting the Renderman standard would be more work than implementing your above-mentioned features in Blender.

Renderman is a great standard but I think it is way beyond the scope of what most people would be able to deal with.

To make the most of it you need to be able to program the shaders and compile them.
Nicely, Maya’s 3Delight integration does that for you.
As long as we do not have a node system like Maya’s with 3Delight, I see no reason to even consider Renderman.

Did you guys even really work with such a system or did you read about it?

However, there is a free license for 3Delight for non-commercial use, I think.


@lsscpp:

I have shared your dream for many years already, sadly…
I am asking myself if it is time to wake up and get a reality check.

Finally, someone who gets (one of) the point(s) :slight_smile:

Did you guys even really work with such a system or did you read about it?

They’ve probably been watching Toy Story :ba:

Have a look at the specs and have a look at the existing implementations. That should be enough to temper the enthusiasm for Renderman (as a renderer to replace BI, that is).

Renderman is much more than what we’re all whining about here. It’s like killing an elephant with bubblegum. It might be possible, but the elephant is way too big.