Bidirectional Path Tracing and other speed improvements on Cycles

I remember Brecht also had other jobs to do at the Blender Foundation at some point in the past, and could only spend two days a week working on Cycles. If this is still the case, maybe a simpler way to spend the money is to hire someone to do all of Brecht's other jobs, such as bug fixing, so that he could spend a full work week working on Cycles. There is a wider pool of people who could do bug fixes than there are people who can work on Cycles.

We should also pay attention to who is adding functionality to Cycles at the moment, because they seem like likely candidates to help improve it: they are already familiar with the code, and it's more useful for the people who are already working on Cycles to be able to do more work on it.

More developers working on Cycles would be great! If we get good patches for features we'll of course review those and help get them into trunk; it doesn't matter if they come from volunteers, paid developers or Blender Foundation employees.

From the Blender Foundation point of view, the way we do fundraising for development is through open movie projects and now the Blender Development Fund. Donations there are not for one specific project, though; where the money goes gets decided by the project administrators. Usually it goes to developers who have been contributing as volunteers, so they can work on a specific project.

If people want to set up funding themselves, to get a developer working specifically on Cycles, that’s great though.

If there are enough donations and there's a good candidate, the development fund could perhaps get involved too, but it's too early for that now, when we don't know yet if enough funds would be available and if there is a candidate.

Small side note regarding unidirectional path tracing: that's just another name for the regular path tracing that Cycles uses. I don't know what kind of magic Arnold might have under the hood, but I'm pretty sure the performance differences are mostly down to more low-level optimized ray tracing and shading code, better sampling for common sources of noise, and good defaults and controls to disable noisy effects.

That's the kind of thing that is the priority for me now. I don't expect we'll be able to match them anytime soon, but we can still make a lot of progress. It's a different focus than typical physically based render engines have, though.
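For anyone wondering what "regular" (unidirectional) path tracing actually does, here is a deliberately tiny sketch in Python: one diffuse sphere under a constant sky, nothing more, and definitely not Cycles' actual code. The whole algorithm is a single loop per camera ray, which is also why it maps to GPUs far more easily than the bidirectional techniques discussed later in the thread:

```python
import math, random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def intersect_sphere(origin, direction, center=(0.0, 0.0, 3.0), radius=1.0):
    """Return (hit point, normal) for the one sphere in our toy scene, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    if t < 1e-4:
        return None
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    return hit, normalize(tuple(h - k for h, k in zip(hit, center)))

def cosine_sample_hemisphere(normal):
    """Importance-sample a bounce direction proportional to cos(theta)."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    axis = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(axis, normal))
    v = cross(normal, u)
    return normalize(tuple(r * math.cos(phi) * u[i] +
                           r * math.sin(phi) * v[i] +
                           math.sqrt(max(0.0, 1.0 - u1)) * normal[i] for i in range(3)))

def trace_path(origin, direction, albedo=0.7, sky=1.0, max_bounces=8):
    """One camera path: keep bouncing until the ray escapes to the 'sky' light."""
    throughput, radiance = 1.0, 0.0
    for bounce in range(max_bounces):
        hit = intersect_sphere(origin, direction)
        if hit is None:
            radiance += throughput * sky      # escaped: collect environment light
            break
        origin, normal = hit
        throughput *= albedo                  # diffuse bounce; cosine sampling cancels cos/pi
        direction = cosine_sample_hemisphere(normal)
        if bounce > 3:                        # Russian roulette keeps paths finite without bias
            p = min(throughput, 0.95)
            if random.random() > p:
                break
            throughput /= p
    return radiance

# Average many independent paths per pixel and the noise goes down -- that
# independence is exactly what makes this "unidirectional" form GPU-friendly.
print(sum(trace_path((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)) for _ in range(2000)) / 2000)
```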

How is the render API coming along?
From my point of view, with Cycles fairly aimed at animation, the render API might be a very valuable solution for people who need to do different kinds of rendering jobs (archviz, interiors, realistic light studies etc…).
As awesome as Cycles is (it really is, Brecht!) I still value Jay's point. Lux is doing very well with many cutting-edge techs. Yaf is robust, and although dev has slowed lately they have MLT and Irradiance Caching waiting in the backstage of their GSoC branches, not to mention the hardcore bleeding-edge research approach of Mitsuba or Renderspud or Appleseed, or indeed the RenderMan approach of Aqsis.
Lots and lots of clever minds doing clever engines for just about any task. And they all seem to be implementing interactive rendering as we speak… (maybe except Yaf… but hey…) :slight_smile:

Unfortunately they've been waiting in the backstage since last year, and the last time I read through their forums, Google had turned down their request to participate again last summer.

Meanwhile Lux continues to come along nicely. They now have a fancy new layered material as well as normal map support, the sampling code is being rewritten, and they're also continuing to work on the SPPM integration; one reason that has slowed down is the research being done at the same time to get it right.

And Cycles, well, this forum has plenty of opinions of how it’s been coming along since it was first revealed.

Still, it would be nice to know how the render API is coming along. Since Cycles is supposed to be a plugin, theoretically any other renderer can plug into Blender just as well as Cycles does, right?

For example, will this hopefully be the end of V-Ray not having access to animation data, or of having to export a new mesh for every frame in order to get a deforming character rendered?

That has always been the plan, more or less. Cycles plugs into Blender using the render API, which has been enhanced to allow the level of integration you see now.

So the plugin developers working with V-Ray, LuxRender, Thea, YafaRay, Octane etc… will be able to have the same or a similar integration with Blender.

Correct!
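For the curious, the hook any external engine registers through looks roughly like this on the Python side, as of the 2.6-era API (a minimal sketch with made-up names; Cycles' own integration goes deeper, through the native side):

```python
import bpy

class ExampleExternalEngine(bpy.types.RenderEngine):
    """Bare-bones engine registered through the render API (illustrative only)."""
    bl_idname = "EXAMPLE_EXTERNAL"           # hypothetical identifier
    bl_label = "Example External Renderer"   # name shown in the engine dropdown
    bl_use_preview = False

    def render(self, scene):
        scale = scene.render.resolution_percentage / 100.0
        size_x = int(scene.render.resolution_x * scale)
        size_y = int(scene.render.resolution_y * scale)

        # A real exporter would translate the scene and call out to the
        # external renderer here; this just fills the result with flat grey.
        pixels = [[0.2, 0.2, 0.2, 1.0]] * (size_x * size_y)

        result = self.begin_result(0, 0, size_x, size_y)
        result.layers[0].rect = pixels
        self.end_result(result)

def register():
    bpy.utils.register_class(ExampleExternalEngine)
```

How close V-Ray, Lux or Octane can get to Cycles' level of integration then comes down largely to what the API exposes (animation data, deforming meshes, viewport feedback) rather than to anything Cycles-specific.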

Ok, yesterday I wrote to Brecht and today I had a brief talk with him over the IRC channel… A couple of hours later he answered my email in a more official way and, as you could've read, he also answered here (thanks!)… The only thing left for me to do is to contact Wenzel, as he seems to be our dream option (in a couple of hours I'll get in contact with him), and after his feedback I'll get back to you. (Don't hold your breath though… I'm actually very pessimistic; such a smart cookie is probably swamped with work, and being hired to work on Cycles is not his dream job)…

Another very important thing… I'll quote some of Brecht's words from his email:

(…) Also understand that to speed up development, this developer needs to be able to finish and support the project; getting an early prototype working can be done quickly, but taking it to a fully working, integrated feature and handling any bugs and issues takes a lot longer.

I’m not sure who to suggest, it’s not so easy to find someone like that, most people I know like that already have a well paid job :slight_smile:
He also stressed something important… The kind of talent and work needed for the developments we want is not cheap… Hiring a coder for this job is gonna be expensive…

But this is no reason to be bummed. As soon as we find a great developer and have some idea of prices, I plan to keep the ball rolling by posting on BlenderNation and other sites to help spread the word about the funding…

Anyway, be patient and give me a couple of days while I contact Wenzel or other possible developers.

Greets.
tuqueque.

What about the people who worked on and made patches for Blender's old render engine in the past?

Mfoxdogg - (Tangent shading patch)
Matt Ebb - (Worked on BI's volumetric system as well as its capability to render blurry reflections, refractions and shadows; his alias here is 'Broken')
Jaguarandi - (The GSoC work that vastly optimized BI’s raytracer)
Uncle Ziev - (The GSoC lightcuts project (never completed though))
etc…

Their skill might be able to translate to the Cycles engine and its shading paradigm, and they also have the advantage of having worked with Blender code, but I'm not one to say if they will decide to jump on board.

All right guys… A couple hours ago I wrote to Wenzel and in like 30 minutes I got his reply :)…

This is what he said:

I’m certainly flattered that you want to recruit me for the job! Unfortunately I don’t think I’ll be able to help. I’m in the last part of my PhD with still about a year to go, hence most of my time these days goes into research, which is definitely incompatible with implementing production-ready code.

Even if I did have the time, I'd be reluctant to take the job -- I'll try to explain, since it may be helpful for your discussion.

1. Efficiently implementing complex rendering algorithms on the GPU is really, really hard! It's possible to port a path tracer to the GPU, as many have demonstrated... but now try doing that for something like Veach-MLT. What would the OpenCL kernel of that look like? The mind shudders. I feel like GPU development is just not ready for the kinds of complex control flow that these algorithms take for granted.

2. GPU architectures and coding models are changing at a fast rate. If you've got some CUDA code from 1-2 years ago, I doubt that it will compile today. And even if it magically does or can be fixed, it will run poorly on a GPU from 2011, since the microarchitecture has changed considerably. Who would be willing to undertake such a big development project if it will likely require a major redesign in a few years, when the development model and architecture have converged in yet another direction? Nobody really knows what the parallel development environment will look like in 2 years...

3. More advanced rendering algorithms (BDPT, ERPT, Kelemen & Veach-MLT) don't play well with many of the tricks that are used in production rendering. You can't have "invisible" light sources. Things like reciprocity and energy conservation of scattering models (or lack thereof) can become a huge issue. This means that users who write custom shaders must have a firm grasp of the underlying physics, or the shaders will wreak havoc with the rendering technique. This is not an issue in Mitsuba, since it just doesn't allow non-physical tricks, but I imagine that it could create problems in a production system where people expect to be able to tweak all sorts of things.

4. From my own experience, I can tell you that implementing one of the fancier rendering techniques on top of an existing path tracer just won't work. Unless the base system was designed with this in mind already, it will need a major overhaul (the added rendering techniques will violate assumptions that one took for granted with only a unidirectional path tracer in mind).

So... I'm sorry to spoil the fun and enthusiasm! My response boils down to this: whoever might work on such a project won't be done in just a few months. And it's arguable if it is really a good idea at this particular point in time, since the development models are still in flux.

Wenzel

PS: The video which is posted in the first post of the forum discussion references a video by Dietger van Antwerpen (http://graphics.tudelft.nl/~dietger/), who implemented some of these algorithms on the GPU. I don't know how general/maintainable these implementations are, since no code was ever released (as far as I know). But he might be a good person to ask about this as well.

Part of my answer to him was:

I actually got ahead of your suggestion: a couple of hours ago I emailed Jacco Bikker and Dietger van Antwerpen with the same proposal. And I still have a couple of other names on hold… I've done my share of digging already!..
Wenzel also said in a later email:

(…) I’m quite curious about other people’s opinions (particularly Brecht and Dietger’s – they may disagree about some of these points)
Greets.
tuqueque.

Heh, well that’s an interesting reply. Also a very reasonable one.
It’s way more useful than a plain “NO!” :slight_smile:
I'd love to hear Brecht's and Dietger's opinions on those points as well…
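To make Wenzel's points 1 and 4 a bit more concrete for the non-coders following along, here is a rough sketch of the bookkeeping difference between the two families of algorithms (hypothetical data structures, not taken from Cycles or Mitsuba):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class UnidirectionalPathState:
    """All a plain path tracer keeps per path in flight: a handful of values,
    the same size no matter how long the path gets, so huge numbers of paths
    can stay resident on a GPU."""
    ray_origin: Vec3
    ray_direction: Vec3
    throughput: Vec3
    radiance: Vec3
    bounce: int

@dataclass
class PathVertex:
    """One stored scattering event (position, normal, pdfs, ...)."""
    position: Vec3
    normal: Vec3
    pdf_forward: float
    pdf_reverse: float

@dataclass
class BidirectionalPathState:
    """A bidirectional or Metropolis-style integrator has to keep whole eye and
    light subpaths around so it can connect and re-weight their vertices:
    per-path storage grows with the maximum path length, and the connection
    logic branches heavily -- the 'complex control flow' Wenzel mentions."""
    eye_vertices: List[PathVertex] = field(default_factory=list)
    light_vertices: List[PathVertex] = field(default_factory=list)
```

None of this says it can't be done on a GPU; it just illustrates why bolting it onto an integrator written around the small fixed-size state is a major overhaul rather than a patch, which is his point 4.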

Whilst I am in no position to argue with the technicalities of Wenzel's arguments, it would appear that he is more pessimistic about the future of GPU rendering than necessary.

  1. No doubt attempting to wrestle existing code into OpenCL is difficult - this simply means that a very different mindset is required in order to tackle something like this. This is something that has happened right from the very start of computing and has ever been thus; I myself was once tasked with writing a filing/database system for a 4K storage device (don't ask why). The previous attempts by other developers failed when they tried to shoehorn an existing system into that device. I solved the issue by drawing out every bit (yup - every bit) in a spreadsheet, and then allocating things visually. Oh, they laughed, until they saw how quickly my functioning prototype was cranked out. My point is, different problems need different methods for solving them. No one is arguing whether or not it is possible to do; only how long it will take.

  2. No doubt CUDA is going to evolve as Nvidia creep ever closer to Intel's tail. Trying to predict 2 years ahead in computing usually results in hilarious conclusions. Nonetheless, since Blender is not a project to be sniffed at, and Cycles would clearly represent a very important part of that, and factoring in Nvidia's involvement to date (can't remember where I came across some mention of a patch they provided or somesuch), there is a distinct possibility that the Cycles project will in fact directly influence the direction CUDA takes, making the ongoing development of Cycles easier.

  3. If the worst outcome for the foreseeable future is that Cycles will be restricted to physical-only possibilities, I think most users could live with that.

  4. This is an issue with most software. Adding a grammar-correction system to a word processor that was originally designed as a notepad with knobs on, for example, could wreak havoc with the existing code. This has been done, though, in countless other areas of computing, when one might have argued for scrapping the code base and starting again. Arguing that path tracing is a special case is not supported by the evidence. In most cases, at the time, no doubt the argument was trotted out that the project was a "special case".

In case I have given the impression that I have a beef with Wenzel for daring to blaspheme Cycles and the idea of hiring another developer, kindly note that this is not the case. I have dabbled with writing software on and off for almost thirty years, both as an amateur and professional, and have played with some seriously weird kit as a software engineer. My comments above reflect that, and I wrote them in the hope that they provide some useful fodder for the conversation.

I can give you a prediction: soon we'll need more devs for ARM compatibility. :wink:
I don't know how closely anyone here follows hardware development, but ARM holds a bigger market share in CPUs than Intel and AMD combined, and Nvidia noticed this soon enough and now presents their new… well, it has no name - yet:

It's a board with a Tegra ARM CPU and a CUDA-capable GPU. Cheap, and in quantity the means to produce future supercomputers.
Full story: http://blogs.nvidia.com/2011/12/looking-for-a-few-good-codenames/

So especially in the area of raytracing it's a good guess this combination will prevail. I am not sure, though, how good ARM's RISC design is for raytracing, or whether anything will happen on that front; maybe a RISC ARM dedicated to raytracing IS. Who knows.

While I was researching path tracing terminology I came across an article about an approach "that sits between unidirectional path tracing and bidirectional path tracing". The author calls it two-way path tracing and states one of its advantages as (emphasis mine):

Like unidirectional path tracing, you only need to track a fixed amount of state, regardless of maximum path length. This is potentially nice for GPU implementations where you usually want to avoid hitting memory and have a large number of paths in flight.

I can’t say I understood the details, but it might be an interesting approach, so I’m putting it out here for those who do understand. :slight_smile:
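The "fixed amount of state" remark is the easiest part to put numbers on. A hand-wavy back-of-the-envelope (the byte counts and path counts are assumptions, not measurements of any real renderer):

```python
# Assumed sizes: a plain path tracer carries roughly one ray, a throughput and
# a few counters per path; a bidirectional one stores every subpath vertex.
BYTES_PER_FIXED_STATE = 64        # assumed: origin, direction, throughput, misc
BYTES_PER_STORED_VERTEX = 64      # assumed: position, normal, pdfs, BSDF info
PATHS_IN_FLIGHT = 1_000_000       # a GPU wants many paths in flight to stay busy
MAX_PATH_LENGTH = 16

unidirectional = PATHS_IN_FLIGHT * BYTES_PER_FIXED_STATE
bidirectional = PATHS_IN_FLIGHT * 2 * MAX_PATH_LENGTH * BYTES_PER_STORED_VERTEX

print(f"unidirectional: {unidirectional / 2**20:.0f} MiB")  # ~61 MiB
print(f"bidirectional:  {bidirectional / 2**20:.0f} MiB")   # ~1953 MiB
```

Which, if those guesses are anywhere near reality, is presumably why the "two-way" approach tries to keep per-path state fixed, like a plain path tracer does.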

There are other similar papers… One could be http://www.rendering.ovgu.de/rendering_media/downloads/publications/EyePathReproject.pdf

BTW… http://raytracey.blogspot.com/ is an awesome place to find news about realtime/GPU raytracing.

As I said earlier, I sent an email to Jacco Bikker and Dietger van Antwerpen… And 30 minutes ago I also wrote to Mike Farnsworth (RenderSpud's developer)… And I still have some names on the list, but I want to allow some time to get feedback from those I already contacted.

Man, I turn around for one minute and y’all go talking behind my back… :wink:

Actually, I'm pretty flattered by the "Dr. Farnsworth" comment. I have a former coworker at Electronic Arts who used to call me that, but everyone else just calls me 'Farny', as there are too many Mikes in the world. No, I don't have a Ph.D., although someday I might get one for the heck of it.

Working on Cycles sounds like a blast, and a real challenge. I wrote a GPU path tracer (just a toy one) a while back using OpenCL, and it was both really fun and really frustrating. C’est la vie.

Aside from my wife getting on my case about spending too much time programming, I could still work on both RenderSpud and Cycles, as RenderSpud is intended to be more of a standalone, multiplatform production renderer. I currently work at Tippett Studio doing R&D for film VFX (we use mostly Pixar's RenderMan, and I am in charge of studio-wide shaders and am one of the two primary developers for our fur/hair system, used in quite a good handful of movies now), so production rendering / animation are my main areas of interest. And I'm obviously a former game graphics developer, so I'm familiar with the real-time side of the world too.

As for my experience integrating RenderSpud into blender as a render engine, I used the python interface rather than the native to-the-metal stuff because I wanted to make it as agnostic to the blender version as possible (there was a lot of version/api churn during the 2.5x series). I had to write a proper python wrapper for RenderSpud to do it, which was fun. I understand Cycles did it the other way; and considering how long it takes to go from blender native -> python -> RenderSpud when translating a scene, I can imagine why avoiding the python side would be a good idea.
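To give an idea of what that per-frame translation step looks like, here is a rough sketch against the 2.6-era API from memory (attribute names shifted a lot during the 2.5x churn, and `external_renderer` is a made-up stand-in for whatever wrapper the engine exposes):

```python
import bpy

def export_frame(scene, external_renderer):
    """Push every renderable mesh through Python into an external engine.
    Every vertex passes through the interpreter, which is the overhead a
    native integration avoids."""
    for obj in scene.objects:
        if obj.type != 'MESH' or obj.hide_render:
            continue
        # to_mesh() applies modifiers, so deforming characters come through per frame.
        mesh = obj.to_mesh(scene, True, 'RENDER')
        verts = [tuple(obj.matrix_world * v.co) for v in mesh.vertices]
        faces = [tuple(p.vertices) for p in mesh.polygons]
        external_renderer.add_mesh(obj.name, verts, faces)   # hypothetical wrapper call
        bpy.data.meshes.remove(mesh)
```

A native-side integration skips that per-vertex Python traffic entirely, which matches Farny's guess about why Cycles went the other way.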

But, I can’t commit to it in this very post; I have to check in with relevant parties and mull it over. I’m a little torn as this is all very flattering, but I was trying to round up some artists I work with to see if they could help me work on a short with RenderSpud to help me refine and improve it. There are just too dang many awesome projects to work on.

Anyway, 'nuff of that, if you all want to know more about my experience, go read my blog. I’ll keep an eye on this thread now, sorry for being late.

-Farny

PS - A note about Arnold; I've talked a bit with the developer, Marcos Fajardo, and he told me it was a straight path tracer written in C. The Sony Imageworks guys (I know Larry Gritz and talk to him every so often) have a variation on Arnold that is all OSL, obviously. Tippett Studio split work on the Smurfs with them and we had to comp our cat (Azrael) with their Smurfs. Arnold is a solid renderer, but it doesn't do anything beyond production-tweaked vanilla path tracing, as far as I know. RenderSpud's bidirectional path tracing is definitely not production ready, but the path tracer itself is somewhat mature, as it's quite a bit easier to implement and maintain.

You could get your name changed to Daren or Darryl, for instance, and shorten it to Dr. :smiley:
You just have to get the Ph.D. so you can work on Cycles and it can get an "Approved by Dr. Farnsworth" seal of quality :smiley:
And to those who don't get it: "I don't want to live on this planet anymore" :smiley:

Ok, first of all, an hour ago I got Dietger's response to my email… This is what he said:

Thank you very much for the offer. It definitely sounds like a cool project and I already saw some impressive cycles results. However, I am afraid I have to decline. The main reason being that I currently work at NVIDIA ARC on the iray renderer.

Regards and success with this project,
Dietger
So this most probably means that he can’t work on any related project due to contract restrictions…

But no problem, since we finally have a possible developer in the awesome Mike "Farny"!..

Mike, later I'll PM you to talk in more depth about the subject and maybe get you and Brecht in contact so you can also discuss more technical aspects… I'll report back to this thread.

@lsscpp, thanks for suggesting Mike!

P.S.: Mike, judging by your reply, you seem to be such a cool guy, very easygoing, with a great attitude, willing to talk to the community, and with incredible talent and lots of experience in the production world… You seem to be the PERFECT fit!.. I'd be very happy if you could be THE developer!

Let's cross fingers, people!

P.P.S.: Wenzel, you are also a great guy!.. I could say the same to you!

I’m still checking things to make sure I’m truly allowed to (I think I can, but I would be remiss if I didn’t find out for sure) and to figure out how much time I could dedicate to it, so it ain’t done 'till it’s done. :slight_smile:

As for whether I’m a cool guy, you never know – in person I could be an asinine jerk, or just totally bizarre and nerdy. Don’t let your guard down, we programmers are a fickle lot. :eyebrowlift:

-Farny

I’ll say–my wife’s a programmer…