Noise removal for Cycles (this paper has code to build a good implementation)

The only thing is, though, that any requirement for pre-processing will significantly reduce the chance of an implementation getting into Cycles. I remember that one of Brecht’s goals for Cycles is for the user to avoid having to mess with, and then wait through, a period of pre-processing before they can see anything.

And I do agree with this for the most part. I remember doing renders in BI where you might have to wait half an hour for the SSS pre-processing to complete before you could actually see the final result. Avoiding pre-processing would also have another benefit in that it results in cleaner code and more predictable results (i.e. Brecht not having to deal with issues related to the order in which the pre-passes are rendered).

@Ace Dragon, well, you can feel good, my friend. For example, early tests of the same system in Luminous, pre-baking the bidirectional data, took about 850 ms; with this technique, even with massive scenes, you wouldn’t have to wait longer than the BVH build generally takes anyway.

After speaking with another contact, he pointed me to this paper, which comes with source code and is also very interesting for reconstructing low-sample images. I’m sure this has uses:

Link:

Regarding this paper: Dade from the LuxRender team had a go at it, but the main issue was using it at higher sample counts.

IIRC, it required storing the samples before applying the filter, which (with LuxRender) needed approximately 23 GB for a 256-sample image at 600x600 pixels. Below is the thread if anyone wants to read over it (it starts at the post about this paper):

http://www.luxrender.net/forum/viewtopic.php?f=8&t=3218&hilit=filtering&start=10#p76836
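To put that figure in perspective, here is a rough back-of-envelope calculation (my own numbers, simply assuming 4-byte floats; the actual per-sample layout in LuxRender may well differ):

```cpp
#include <cstdio>

int main()
{
    // Figures quoted above: 600x600 pixels, 256 samples, ~23 GB of sample storage.
    const long long width = 600, height = 600, samples_per_pixel = 256;
    const long long total_samples = width * height * samples_per_pixel;  // 92,160,000

    const double stored_bytes = 23.0 * 1024.0 * 1024.0 * 1024.0;         // ~23 GB
    const double bytes_per_sample = stored_bytes / total_samples;        // ~268 bytes
    const double floats_per_sample = bytes_per_sample / sizeof(float);   // ~67 floats

    printf("%lld samples -> %.0f bytes (~%.0f floats) per sample\n",
           total_samples, bytes_per_sample, floats_per_sample);
    return 0;
}
```

In other words, each sample ends up carrying on the order of 60-70 floats of data, presumably feature vectors such as positions, normals, texture colours and the random parameters the filter needs, which is why the memory requirement grows so quickly with resolution and sample count.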

Holy…! :smiley: Looks like I will have to upgrade my computer then :wink:

Turning Cycles into a raster engine with bidirectional tricks is not the answer. So much visual quality lost compared to other production renderers, no physically accurate materials, huge memory requirements, and failures in corner cases. It will be great for in-engine game cinematics. Not so great when compared to other production engines.

There are examples of bidirectional path tracers that are not realtime or rasterisation engines; the example I was giving with Luminous was to show that the setups of these realtime downgrades can still be used with high-end path tracers, like Brecht said with irradiance caches and photon mapping (though those are pre-process steps that would have to be done).

My interest in such realtime-compatible ideas is that ultimately I’d like to see the full rendering features of Blender also being used within the BGE pipeline. Let’s face it, the BGE as it stands is nowhere near high enough quality, and in my eyes the only reason it’s still in Blender is for physics animation bakes. I’m thinking that if we can structure the advances of Cycles around a scalable system that can also serve realtime applications in a scaled-down form, Blender could be in the fight for the many new things coming in realtime (Blender Cycles realtime rendering through the cloud, a Blender game engine through the cloud), with the long-term aim of home-machine gaming using this kind of bidirectional path tracing via rasterisation, and, longer term, full realtime bidirectional path tracing with no rasterisation at all.

As Cycles is being built from the ground up, and with examples like Brigade 2’s realtime path tracing and the Luminous engine’s realtime path tracing with ray bundles through rasterisation, building a new top-of-the-line game engine using the same advances just makes sense.

And just look at how the massive movie studios like Pixar do things: they don’t use physically correct indirect illumination in ANY of their production rendering. They use point-cloud data from a pre-bake system (also usable through Houdini) that gives a very good approximation of indirect lighting. When a frame contains 300 GB of texture data alone, let alone the model data, such a system takes render times down from 24 hours a frame to 2 hours a frame. Like the old saying goes: go with what works, not what’s physically correct.

This is no longer true, actually: Pixar is now doing a ton of raytracing for its latest movies, including for indirect illumination. For Monsters University I don’t know if they even used point-based GI anymore.

Anyway, I think it’s very unlikely that Cycles will ever be used for realtime game engine lighting. That should not influence the design decisions we make for offline rendering.

Ahhh, that’s interesting (I thought Pixar was still using that pipeline). Come on Brecht, think creatively, lol. I agree that Cycles development shouldn’t be influenced where it doesn’t make sense, but photon mapping and irradiance caching are being used in realtime rendering as pre-baked shadow/reflection mapping techniques. What I’m hoping is that many things from the Cycles dev work can be reused for realtime applications, just scaled down. After all, these realtime approaches still use ray casting and ray tracing.

Couldn’t Cycles have a selection of kernels for different approaches and material systems? E.g. could Cycles be used for plain one-sample-per-pixel, Whitted-style ray tracing? Such a mode, combined with baked shadow/reflection maps and shadow mapping, could be turned into a nice realtime engine. The BGE is looking very old, and I think now would be the point to carry the advances in Cycles over to a realtime engine, even if that process takes a few years (e.g. maybe HLSL/GLSL plus a bidirectional path-tracing ray-bundle mode). And I would love to see Blender Cycles being able to run in the cloud. What are your opinions on the BGE?
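To be clear about what I mean by “Whitted-style”: one deterministic primary ray per pixel, direct lighting plus mirror bounces only, no Monte Carlo and therefore no noise. Here is a toy sketch I put together to illustrate the idea (nothing to do with the actual Cycles kernel code):

```cpp
// Toy Whitted-style tracer: one deterministic primary ray per pixel,
// diffuse direct lighting from a single point light plus perfect mirror
// reflection. Illustrative only -- a single hard-coded sphere.
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
static Vec   add(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec   mul(Vec a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec   norm(Vec a) { return mul(a, 1.0f / std::sqrt(dot(a, a))); }

static const Vec sphere_center = {0.0f, 0.0f, -3.0f};  // unit sphere
static const Vec light_pos     = {2.0f, 2.0f,  0.0f};  // single point light

static bool hit_sphere(Vec o, Vec d, float &t)
{
    Vec oc = sub(o, sphere_center);
    float b = dot(oc, d), c = dot(oc, oc) - 1.0f;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    t = -b - std::sqrt(disc);
    return t > 1e-4f;
}

// No random numbers anywhere, so no noise -- but also no soft shadows,
// no glossy blur and no indirect (GI) lighting.
static float trace(Vec o, Vec d, int depth)
{
    float t;
    if (!hit_sphere(o, d, t))
        return 0.1f;                                    // background

    Vec p = add(o, mul(d, t));
    Vec n = norm(sub(p, sphere_center));
    Vec l = norm(sub(light_pos, p));
    float color = 0.7f * std::fmax(0.0f, dot(n, l));    // diffuse direct light

    if (depth < 3) {                                    // follow the mirror bounce
        Vec r = sub(d, mul(n, 2.0f * dot(d, n)));
        color += 0.3f * trace(p, r, depth + 1);
    }
    return color;
}

int main()
{
    // One primary ray per pixel -> a tiny ASCII "render".
    for (int y = 0; y < 20; y++) {
        for (int x = 0; x < 40; x++) {
            Vec d = norm(Vec{(x - 20) / 20.0f, (10 - y) / 10.0f, -1.0f});
            putchar(trace(Vec{0.0f, 0.0f, 0.0f}, d, 0) > 0.3f ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```

A mode like that gives you an essentially fixed cost per pixel, which is why I think it maps so much better onto a realtime budget than full path tracing does.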

This is the doc I’ve been referencing. Could you have a look and tell me if Cycles could have a similar approach as a mode within the kernel (not as part of your offline rendering pipeline)?

Link:
http://www.jp.square-enix.com/info/library/pdf/Real-Time%20Bidirectional%20Path%20Tracing%20via%20Rasterization%20(preprint).pdf

http://www.jp.square-enix.com/info/library/pdf/Real-Time%20Bidirectional%20Path%20Tracing%20via%20Rasterization%20Supplemental%20Material.pdf

The only way the game engine will get a big improvement is if a bunch of people dedicate a lot of time to working on it. There is no way you can create an advanced game engine lighting system as a sort of happy side effect of Cycles. I do not believe it is possible to scale down an offline render engine to a game engine without rewriting most of it.

OK, you’ve made me want to cry, lol. Can you have a look at the above paper by Square Enix and tell me how hard an implementation would be, even if we had to start from scratch and couldn’t use any ideas from Cycles? I’m willing to make this my holy grail and rope in others if you think the above could be done for a new Blender game engine, hopefully with a bit of help from you boys and girls.

Do note that this is based on the assumption that render engines and game engines are similar in terms of code. From what I’ve been reading from Brecht, the code you would find in a graphics engine designed for games has enough differences that you can’t just convert one to the other and vice versa.

Consider the fact that Cycles is a physically-based path tracer; you can’t just dumb down some of the algorithms and code and get an engine for use in games. You might try to argue against this by citing the Brigade real-time path tracer, but the reality is that Brigade was designed from the ground up to render real-time graphics for games, and there’s a reason why you’re not going to see a merging of the Brigade and Octane engines (both of which are now under one company) anytime soon.

Render engines implement their core algorithms in C++ (and GPU compute kernels), while game engines are built around the OpenGL/Direct3D rasterization pipeline and GLSL/HLSL shaders. As you can see, there’s a lot of dissimilarity in the foundations used to create graphics and images, and by no means can you easily unify them or convert one to the other.

As far as I can tell, the method as presented does not scale to a full game world; it’s only research at this point. So whether this method will turn out to be practical for games, and how long it would take to implement, are open questions.

The game engine has many areas that need improvement. It would be great to have good indirect lighting, but the material system with basic direct lighting already needs a lot of work to be competitive.

You might be interested in this tech demo video they did:

Looks like the system works. I’d love it if you could let this paper roll around in the back of your mind so we can come up with something. Cheers.

I doubt that tech demo is using the techniques from the last paper you linked, unless they found some way to speed it up at least 10x.

If people want to develop Blender I can give advice or review as usual, but I’m not going to start focusing on the game engine…

Implementing something like Luminous in Blender just isn’t going to happen. It would take a team of dozens, even hundreds, at least a couple of years to get it ready for primetime. It is cutting-edge, truly next-gen VPL technology that won’t even run on most of today’s hardware.
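(For anyone wondering what VPL means: virtual point lights, i.e. indirect light is approximated by scattering lots of tiny point lights off the primary light sources and then gathering them at every shading point. Conceptually it boils down to a loop like the simplified sketch below, which is my own generic illustration; the actual ray-bundle and rasterization machinery in Luminous is far more involved.)

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// A "virtual point light": a spot where light bounced off a surface, treated
// as a tiny light source of its own when computing indirect illumination.
struct VPL { Vec3 position, normal, flux; };

// Stub visibility query. In a real engine this is the expensive part, and
// answering it with rasterized shadow maps is exactly the kind of shortcut
// that makes these techniques realtime (and hardware-hungry).
static bool visible(Vec3 /*from*/, Vec3 /*to*/) { return true; }

// Gather indirect light at shading point p (normal n, diffuse albedo) by
// summing the contribution of every VPL.
Vec3 gather_indirect(Vec3 p, Vec3 n, Vec3 albedo, const std::vector<VPL> &vpls)
{
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (const VPL &v : vpls) {
        Vec3 d = sub(v.position, p);
        float dist2 = dot(d, d);
        float inv_len = 1.0f / std::sqrt(dist2);
        Vec3 w = {d.x * inv_len, d.y * inv_len, d.z * inv_len};
        float cos_p = std::fmax(0.0f, dot(n, w));            // receiver cosine
        float cos_v = std::fmax(0.0f, -dot(v.normal, w));    // VPL cosine
        if (cos_p == 0.0f || cos_v == 0.0f || !visible(p, v.position))
            continue;
        float g = cos_p * cos_v / std::fmax(dist2, 0.01f);   // clamped to hide spikes
        sum.x += albedo.x * v.flux.x * g;
        sum.y += albedo.y * v.flux.y * g;
        sum.z += albedo.z * v.flux.z * g;
    }
    return sum;
}
```

Both the quality and the cost scale with how many VPLs you can afford and how cheaply you can answer those visibility queries, which is why it needs such beefy hardware.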

@m9105826, yep, that was my point from the beginning. This is proof it can be done, and by the looks of it the main engine was done by the two guys. You’re right that it would be a long-term project and it would take many people contributing, but surely that’s the whole point of such a thing. If word gets out that Blender is going to start developing a cutting-edge realtime open-source engine, more and more people would join, I’m sure. And over the years, as the tech gets better, so would performance. The BGE has had its day; I’m for aiming high and starting on something new (even if it does take years, the rewards would be worth it).

It’s one thing to advocate the idea of a potentially cutting-edge project, but it’s another thing to get your hopes up over the idea that it will snowball into a project with a lot of developers. Look at Cycles, for instance: I know at least one or two people here who thought the same thing in terms of attracting a large team, but as of now the entire project has had maybe a handful of contributors other than Brecht.

It’s basically an issue of getting so excited and optimistic about something that it ends up crushing you emotionally when things don’t pan out the way you dreamed. By all means dream big, but try to keep at least some hold on reality.

What that Square-Enix paper shows is essentially fancy shadow maps, and it suffers from all the problems that shadow maps imply. You can see light bleeding already with very coarse geometry in the top images.
The great thing about ray-tracing-based methods is that, given enough sampling, they will reliably recover very fine structural detail at the subpixel level.
Given that it relies on rasterization hardware, it also doesn’t fit the architecture of any ray tracer. So go cheer for it to be implemented in the game engine forums, but don’t expect people working on offline rendering to be interested in this :wink:
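For reference, the core of every shadow-map technique is a depth comparison along the lines of the generic sketch below (my own illustration, not code from the paper). The bias you need to hide self-shadowing acne is also what lets light leak through thin or coarsely tessellated geometry, whereas a shadow ray simply tests the real geometry and has no such trade-off.

```cpp
#include <cmath>

// Generic shadow-map lookup, written CPU-style to show what the GPU shader does.
// depth[] stores, per texel, the distance from the light to the first occluder
// seen along that direction when the map was rendered.
struct ShadowMap {
    const float *depth;
    int resolution;
};

// Returns 1.0 if the shading point is lit, 0.0 if it is in shadow.
// u, v           : shading point projected into the light's view, in [0, 1]
// light_distance : distance from the light to the shading point
float shadow_test(const ShadowMap &sm, float u, float v, float light_distance)
{
    int x = (int)std::fmin(sm.resolution - 1.0f, std::fmax(0.0f, u * sm.resolution));
    int y = (int)std::fmin(sm.resolution - 1.0f, std::fmax(0.0f, v * sm.resolution));
    float occluder_depth = sm.depth[y * sm.resolution + x];

    // The bias hides the "acne" caused by the map's limited resolution and
    // precision, but it also lets light bleed past geometry that is thinner
    // than the bias -- one source of the bleeding mentioned above.
    const float bias = 0.05f;
    return (light_distance - bias <= occluder_depth) ? 1.0f : 0.0f;
}
```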