Just an idea, not a request: LuxRender/Cycles code merge?

I've been playing with LuxRender for Blender and have to say the latest builds are great. LuxVR with OpenCL works well even on old kit (like my AMD HD 5850). What chance is there that Dade and Brecht/Ton could come to some mutual agreement about development? If not, nothing lost, but at least now, as an AMD card user, I have something to test scenes on quickly. Have to say LuxVR works sweetly; shame Cycles can't find the magic way to get running on OpenCL AMD cards yet, but LuxVR gives me hope, even for people like me on old hardware. I did a quick vid to show how well this works even on Cypress-based boards; it's actually smoother than in the video, but I'm using the free CamStudio with real-time video compression through Xvid at full HD 1080p, and it runs a little nicer without that.

Blender 2.68 with SmallLuxGPU 4 on a naff AMD card

Hi 3DLuver, Brecht thought about splitting the Cycles code into smaller parts to get it to compile with the AMD compiler.
That would be almost as much work as a rewrite, and nobody knows whether it would even work afterwards.
Cycles' OpenCL backend works on Nvidia, so it is an AMD problem, not a Cycles one.
I don't think you can "merge" code the way we non-developers imagine it.

Cheers, mib.

Considering what the original goals and paradigms of Cycles and Luxrender were, I would think that the codebases would be so different from each other that merging them would take an incredible amount of work and even then not guarantee that their strengths and features are neatly combined.

For one thing, Cycles is node-based and its codebase is a lot newer than Luxrender's (the latter was one of the first consumer path-tracing engines to be released). Things like this are just one of many ideas that sound a lot easier than they really are.

@mib, yep, agreed, it's a problem with AMD's compiler. I've written my own simple OpenCL tracer before, and the funny thing was it worked on Nvidia cards (I sent it to a few friends who had non-AMD kit, though at that point, about 9 months back, it also didn't run on Intel CPUs) but not on AMD, even though I have an AMD card.

The AMD compiler sticks much closer to the spec than Nvidia's (and even then, AMD sometimes rejects OpenCL code that looks valid, and for the love of god I can't understand why), which means you can get away with OpenCL code on Nvidia cards that doesn't strictly conform to the standard.

Agreed, there will be big differences between the engines, but LuxRender has shown it can handle complex BRDFs and sampled lighting models and still work, so a solution isn't as hard to find as you may think. I'm working on a new generation of path tracer at the moment that uses SVOs and DAGs. I know it sounds odd, but I refuse to use CUDA out of principle, and things already look like they're coming together.

With my engine I'm using OpenGL rasterisation for direct lighting, which, to be honest, is faster than ray/path tracing, and only using path tracing for secondary rays. I'm not even looking at using shading data from a normal path tracer's BRDF system; I just record the colour info from hits on bounces/absorption and run it through a deferred rendering system, so I can do all the material shading models through OpenGL as well, with a PBS system separate from the ray-trace engine.

That means an ubershader can deal with everything efficiently in OpenGL compute. I'm still playing, but I may end up only path tracing shadows (more like beam tracing) and reflections; to be honest, even refraction doesn't need to be physically correct for the real-time environment I'm looking at. Mixing rasterisation with composited path-traced shadows and reflections looks like the better idea at this point.

@Ace, I'm not really thinking of what you would consider merging code bases, but of taking ideas from each other to solve problems, even simple implemented things like decent MIS, bidirectional path tracing, etc. All I can say is I was impressed after playing with the latest builds of LuxGPU, even with my old 5850 GPU. I think the Lux devs have done a great job :)

But copying features from Lux would mean doing a lot more than just copying the code over. Luxrender's implementation of MIS and bidirectional sampling works with different implementations of BSDFs, different variables, different material paradigms (like a special data type for Fresnel information), etc.
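For anyone unfamiliar with what MIS actually computes, here is a minimal sketch of Veach's power heuristic, the weighting that MIS schemes are generally built around. This is the generic textbook form, not Lux's or Cycles' actual code:

```python
# Minimal sketch of Veach's power heuristic (beta = 2), the weight MIS
# uses when combining two sampling strategies (e.g. BSDF sampling vs.
# light sampling). Generic textbook form; not engine code.

def power_heuristic(n_f, pdf_f, n_g, pdf_g):
    """Weight for a sample drawn from strategy f, given that strategy g
    could also have generated it. n_* are sample counts, pdf_* densities."""
    f = (n_f * pdf_f) ** 2
    g = (n_g * pdf_g) ** 2
    return f / (f + g) if (f + g) > 0.0 else 0.0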

If you want an example of an effort that brings these features into Cycles in a way that can work with the way it does things, look at Storm_st’s work and how he’s been able to get increasingly good results with his code (even though it will be a while before it’s clean enough for final inclusion).

I know that, mate. I'm talking about OpenCL implementation structures, not pick-and-mix code. Yeah, I've been keeping an eye on Storm for a while, clever lad. My work is more based on real-time path tracing and game engine design, mixing up every bit of code I can find with proven real-time architectures. I kid you not, I'll be surprised if a decent-quality real-time engine can't reproduce 95% of offline renderers within 12 months. If I can get my hands on Mantle soon, I'll have a better idea of what can be done.

Just to note, there actually has been talk from Brecht about splitting the kernel into smaller chunks because, when you look at how large it is right now, splitting it up could actually make it work better and improve performance for Nvidia cards too. Just don't expect such a thing in the very near future, because of the refactoring work that would be needed to make it happen.
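To illustrate the idea of splitting a megakernel: instead of one huge per-thread function doing ray generation, intersection, and shading in a single compiled unit, a "split" or wavefront design runs each stage as its own small kernel over an array of path states. The Python below is only a stand-in for GPU code, with invented names and a toy "scene", to show the restructuring; it is not Cycles' actual kernel layout:

```python
# Wavefront-style "split kernel" sketch (Python stand-in for GPU code).
# Each stage runs over the whole array of path states, so each compiled
# kernel stays small. All names and the toy scene are illustrative.

def generate(n):
    # one path state per pixel; the payload is just an index here
    return [{"ray": i, "hit": False, "radiance": 0.0} for i in range(n)]

def intersect(states):
    # toy intersection "kernel": even-numbered rays hit something
    for s in states:
        s["hit"] = (s["ray"] % 2 == 0)

def shade(states):
    # toy shading "kernel": hits contribute unit radiance
    for s in states:
        s["radiance"] = 1.0 if s["hit"] else 0.0

states = generate(4)
intersect(states)
shade(states)
image = [s["radiance"] for s in states]
```

Each stage being its own kernel is exactly what keeps the individual compilation units small enough for a fussy compiler, at the cost of writing path state out to memory between stages.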

If I had to recommend trying something, it would be to port SLG's stackless BVH build/traversal code to Cycles. There's a strong hint from stripping Cycles down to the bare bones (e.g. removing all but AO rendering) that the BVH code is breaking the AMD compiler's neck pretty much by itself. The existing BVH traversal was written by NVIDIA developers for NVIDIA hardware.
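For readers unfamiliar with the term, a stackless traversal replaces the per-thread stack (expensive on GPUs) with precomputed "skip" links: nodes are stored in depth-first order, and on a miss the traversal jumps straight past the whole subtree. Here is a toy sketch of the idea, with 1-D intervals standing in for AABBs and a point standing in for a ray; this illustrates the technique, not SLG's actual implementation:

```python
# Toy stackless BVH traversal via precomputed skip links ("ropes").
# Nodes are in depth-first order; on a miss, jump to node.skip instead
# of popping a stack. Illustrative only, not SLG's code.

class Node:
    def __init__(self, lo, hi, skip, prims=None):
        self.lo, self.hi = lo, hi   # interval "bounding box"
        self.skip = skip            # index to jump to on a miss
        self.prims = prims          # leaf primitives, or None if inner

def traverse(nodes, x):
    hits, i = [], 0
    while i < len(nodes):
        n = nodes[i]
        if n.lo <= x <= n.hi:               # "ray" overlaps this node
            if n.prims is not None:         # leaf: test primitives
                hits.extend(p for p in n.prims if p[0] <= x <= p[1])
            i += 1                          # advance depth-first
        else:
            i = n.skip                      # miss: skip whole subtree
    return hits

# depth-first layout: root over [0, 10], then its two leaf children
nodes = [
    Node(0, 10, skip=3),                         # root
    Node(0, 5, skip=2, prims=[(0, 2), (3, 5)]),  # left leaf
    Node(5, 10, skip=3, prims=[(6, 9)]),         # right leaf
]
```

The loop has no recursion and no stack array, which is precisely what makes this formulation friendlier to compilers that choke on the deep, register-hungry stack-based version.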

@Zalamander, yep, I think you might be right, after a quick browse. The problem when I wrote my little path tracer a while back was with the const, __constant and __global qualifiers. Even to this day I don't know why AMD's compiler hates things like this, but I never said I was a GPGPU lad. In fact, up until 11 months ago I'd never written a single line of OpenCL, and if I can learn it, anyone, even with shit for brains like me, can do it. Just for fun I'm going to look through the BVH code for Cycles tomorrow; I actually have my own stackless BVH code that I know works, which maybe I can punch and kick into helping out.

Not possible, the ideas behind the renderers are too different. Cycles goes the Arnold-like way: Sobol QMC as the main feature to mitigate the 1/sqrt(N) rule, rays fired only from the camera sensor (exploiting the great discrepancy properties of the pixel array), the pipeline kept as simple as possible, all data and the main loop inside the GPU. That direction dictates the current Cycles code decisions and data structures. It actually works very well for direct light and low-bounce indirect light, especially on a top GPU. I have trouble "beating" that scheme using smarter algorithms, like Luxrender's, as they carry some overhead.
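To see why a low-discrepancy sequence mitigates the 1/sqrt(N) rule, here is a toy comparison of a 1-D van der Corput sequence (the simplest low-discrepancy sequence, standing in for Sobol) against plain pseudo-random sampling on a known integral. The integrand, sample count, and seeds are invented for the illustration:

```python
import random

# Toy QMC-vs-MC comparison: estimate the integral of x^2 over [0, 1]
# (true value 1/3) with a van der Corput sequence and with plain
# pseudo-random numbers. Illustrative only.

def van_der_corput(i, base=2):
    # radical inverse of i in the given base
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(samples):
    return sum(x * x for x in samples) / len(samples)

N = 1024
err_qmc = abs(estimate([van_der_corput(i + 1) for i in range(N)]) - 1 / 3)

errs_mc = []
for seed in range(5):
    rng = random.Random(seed)
    errs_mc.append(abs(estimate([rng.random() for _ in range(N)]) - 1 / 3))
avg_err_mc = sum(errs_mc) / len(errs_mc)
```

With these settings the QMC error typically comes out an order of magnitude below the pseudo-random average, closer to 1/N scaling than 1/sqrt(N).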

Luxrender uses PBRT as its base, and the main goal was physically correct behaviour. For example, IOR depends on wavelength in a non-linear, complex way; the same goes for the glossy metal BSDF and many others. It's just different. The default Luxrender sampler is bidirectional MLT, which has big advantages but at the same time some drawbacks, like no QMC (it tends to clear noise more slowly over time on average scenes).
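As a concrete example of wavelength-dependent IOR, a spectral renderer can evaluate something like Cauchy's empirical dispersion equation per wavelength. The coefficients below roughly approximate BK7 glass (common optics-reference values, used here purely for illustration; they are not taken from Luxrender's code):

```python
# Sketch of wavelength-dependent IOR via Cauchy's empirical dispersion
# equation n(lambda) = A + B / lambda^2 (wavelength in micrometres).
# Coefficients approximate BK7 glass; illustrative values only.

def cauchy_ior(wavelength_um, A=1.5046, B=0.00420):
    return A + B / wavelength_um ** 2

n_blue = cauchy_ior(0.4861)    # Fraunhofer F line
n_yellow = cauchy_ior(0.5893)  # D line
n_red = cauchy_ior(0.6563)     # C line
```

Blue light bends more than red (n_blue > n_red), which is exactly the dispersion a spectral renderer resolves per wavelength and an RGB engine approximates with a single IOR.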

And even more, mainline Luxrender and SLG are very different too, even after the recent attempt to merge them. It will take more time until the SLG core supports all the features of "main" Luxrender, and the state of the AMD compiler will not help that process, trust me :).

A few examples are the "svm" shader nodes, the BVH builder, non-physically-correct things like colour clamping, and Cycles' real-time-friendly multires sampling of the first frame (a big deal for many people, as you can do many things in real time even on huge scenes; although it is simple and could easily be done in Lux, it still must be done and tested).

So you cannot just take them and unite them. But sure, some ideas and parts can be merged: the multires preview window and the node system from Cycles (as I understand it, some nodes are already done); the better, correct metal BSDF from Luxrender; maybe Kelemen-style MLT for complex stills, real-time tone mapping, complex lens filters.

Just to say, the differences between the future Lux 2.0 and Cycles will to some extent be smaller than between Lux 2.0 and Lux 1.3/the original.

Firstly, the license is Apache 2.0, there are clamping options, different output passes, and more LuxRender features being ported to the LuxCore/SLG/Lux2 design. There's also a proper C++/Python API that will work well for integration into commercial/free software.

The Sobol sampler in SLG is more Cycles-like as well, and the MLT sampler in SLG appears to work better than the one in BigLux.

As Zalamander said, SLG's BVH code could be useful, and AFAIK the LuxRays intersection code was designed as a base for building other render engines on, i.e. SLG.

Off-topic: Why not just help the lux guys integrate their renderer into the Blender viewport like it works for Cycles? Would give Lux a big boost in userbase…

+1 on that.

P. Monk

+1

If Lux were better integrated with Blender (and it seems it is going in that direction), there would be no need for Blender Internal. I wish LuxRender and Blender could join forces and replace BI; everybody complains it's full of hacks and too old anyway.
Baking will be handled by Cycles, at least that's the plan, isn't it? So Blender Internal is only useful for materials for the BGE, and it will become obsolete for the BGE too after the implementation/refactoring of the GLSL code.

+1 from me as well.

Luxrender won’t ever be able to replace BI. The internal render engine is still great for quick animations that don’t require a lot of physical correctness. Which is the absolute opposite of what Luxrender does.

If someone writes that LuxRender can replace the internal render engine, it just shows that that person has no idea about render engines and their types.

The issue isn't biased vs. unbiased rendering (being the opposite of Luxrender). Nobody seems to take on the task of refactoring BI, or of fixing long-standing issues like caustics.
If someone were proposing to write a fast unbiased renderer for Blender, it would be super news, but I don't think anyone is going to do it.

Besides, if you're going for a decent look for a render, BI takes longer to set up. The more complicated the scene gets, the harder it is to manage the BI setup. Frankly, I hardly use Internal for anything other than baking and smoke rendering. I'm sure someone uses it for quick animation rendering, but from what we have on the table, LuxRender is the best choice for a decent renderer other than Cycles, biased or not.

Might as well introduce an unbiased renderer and allow some custom non-physical light or material setups, instead of keeping an old, hard-to-set-up biased renderer.

If anybody has a fast, fully functioning biased renderer waiting to be integrated into Blender, feel free to jump in.

“Besides, if you're going for a decent look for a render, BI takes longer to set up. The more complicated the scene gets, the harder it is to manage the BI setup”

What?

I wasn’t talking about biased vs unbiased rendering methods. I was saying that Luxrender is really sloooooooooooooow even for rendering single frames. I wouldn’t even try to make an animation in Lux when BI could just blaze through it in a couple of seconds.

To be clear I am not talking about rendering architectural animations here, I don’t think anyone would try to do that in BI anyway. I am talking about things like BBB, Sintel or any Pixar-like animation. For those kinds of animation it would be insane to try to render them in Lux.

And the funny thing is that I was able to more readily get results when I started to use Cycles than when I tried to get images out of Luxrender. What didn’t help in this case is that Luxrender’s fireflies were larger (4-16 pixels in area) and its images harder to de-noise because of the filtering in place.

What also didn't help was that metropolis sampling made it a bit difficult to sample darker areas, because its original purpose is geared towards resolving bright caustics, light from small sources, and glass in brighter areas. Scenes I thought would do well with Lux's algorithms instead saw Lux struggle to get rid of the noise. (Though the core program now has VCM sampling, so I wonder how much better it handles complex scenes now.)

What I don't get is how some interior scenes in Cycles can actually turn out better than in Lux, even though Cycles only used generic path tracing.