New Cycles Network Rendering Prototype

I just saw this thread. I am always willing to support add-ons and whatever will speed up my workflow; I have given hundreds to the development of the B-Maxwell add-on. As my workflow is now becoming inclusive of Cycles, I depend on network rendering to finish my work in a timely manner. I use Ranch Computing for a lot of my Maxwell Render work. I am interested in a Cycles and AWS workflow, I am just not familiar with it. I am also not familiar with any sort of development, how to compile scripts, or special Blender builds. I like everything to be packaged as an add-on, and I am not sure how that works with what you have created here. But I am definitely interested; I just need to better understand how it all works.

yup, here ^ i’m coming along…

Hi asmithey, this is not an add-on, it is a change to the Blender source code, about 1000 lines of code across several files.
I am not sure whether this would be possible as an add-on, but I think it is not.
Please take a look at the patch tracker > https://developer.blender.org/D2808
You need to patch the source code and recompile Blender.
You need at least a few days to get familiar with compiling, depending on your computer skills.
The developer Lukas Stockner published experimental builds > https://blenderartists.org/forum/showthread.php?318765-Blender-2-7x-development-thread&p=3231216&viewfull=1#post3231216
They are a few weeks old but working.

Cheers, mib

I’m very sorry to hear about your financial issues. That really sucks. I really hope things work out for the best, though.

On another note: I think the Auto Tile Size add-on will fix the “Same tile size for both GPU and CPU” issue. As far as I can tell while rendering on our network (Deadline), it seems to automatically set the tile size depending on whether it’s CPU or GPU rendering. When you network render with Blender on Deadline you can’t actually watch it rendering, because it uses command-line rendering (as far as I can tell), but the render times seem consistent with what I see when rendering in the app.

So, you might want to try that. As long as you have set values in Auto Tile Size for GPU and CPU, it should try to render with the GPU first, and when it doesn’t find a GPU it will fall back to the CPU and use the Auto Tile Size value for that.
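For what it’s worth, here is a minimal sketch of that per-device tile size idea in Blender Python. It is not the actual Auto Tile Size code; the property paths assume a 2.7x/2.8x-era API where scene.render.tile_x / tile_y still exist, and the 256/32 values are just illustrative.

```python
import bpy

# Minimal sketch of a per-device tile size, NOT the actual Auto Tile Size
# add-on code. Assumes a 2.7x/2.8x-era API where tile_x / tile_y exist;
# the tile values are illustrative, not what the add-on computes.
GPU_TILE = 256  # hypothetical preferred GPU tile size
CPU_TILE = 32   # hypothetical preferred CPU tile size

scene = bpy.context.scene
tile = GPU_TILE if scene.cycles.device == 'GPU' else CPU_TILE
scene.render.tile_x = tile
scene.render.tile_y = tile
```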

So far it’s been one of the coolest features for us with our mixed CPU/GPU farm. There is no option like this for any other renderer that I know of.

@LordOdin
What is Theory? A studio? If so, where are you located? Do you have a website?

@3DLuver
Do you take on contract work? If so, I’d love to know whether you are interested in (and capable of) integrating Cycles into Nuke. Also, how difficult would it be to create an archive format for Cycles, like Houdini’s IFD or Arnold’s ASS file?

Well, as you’re a good lad, and as Lukas works directly with you guys, I have no issue posting the build this one last time to help you out. At the end of the day, the reason I took it down was to avoid great devs like Lukas thinking I was taking the piss by creating builds with his and other devs’ work. But if you guys are good with it, then I’m more than happy. Again, though, it is a build with my personal branding, as explained in earlier posts in this thread.

Lukas Network Build:

- Rewritten version of Lukas’ old micro-jitter scramble for Sobol and CMJ
- Sergey’s MikT patch
- Brecht’s hard edge bevel shader
- Milan’s adaptive sampling for CPU (works with Network mode)

Maybe some other stuff I’ve forgotten, but NO AO approximation stuff at all, just to make that clear. Even the original build I posted had no AO approx.

@IndyLogic, as far as I know the Auto Tile Size add-on (and if it did do what you said, I wouldn’t have started the patch in the first place) is for separate rendering. I’m talking about using the CPU and GPU to render the same frame, even on a local machine, but being able to set separate CPU and GPU tile sizes (most important for OpenCL in the future).

And for network rendering, since you can run CUDA and CPU mode at the same time, being able to use bigger tiles for the GPU and smaller tiles for the CPU is very important.

The Cycles server would kinda have to pick the tile size, though.

start cycles_server --announce-to 127.0.0.1 --device CUDA

I’m not sure if Network Rendering supports Placeholder and No Overwrite, but maybe that could be a useful workaround for using GPU and CPU for animation:
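For context, Placeholder and No Overwrite are standard render settings, and this is a minimal sketch of how that workaround is usually wired up. The output path and the “one GPU instance plus one CPU instance” setup are my assumptions, not something from this thread.

```python
import bpy

# Sketch of the Placeholder / No Overwrite workaround for animations:
# every Blender instance (e.g. one set to GPU, one to CPU) writes to the
# same output directory, claims the next missing frame with a placeholder
# file, and skips frames that already exist. The property names are the
# standard RenderSettings ones; the path and multi-instance setup are
# assumptions for illustration.
rd = bpy.context.scene.render
rd.use_placeholder = True        # write an empty frame file as a "claim"
rd.use_overwrite = False         # never overwrite frames another instance made
rd.filepath = "//render/frame_"  # shared output path (illustrative)

# Each instance then renders the animation, e.g.:
#   blender -b shot.blend -a
```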

By the way, I’m not sure how feasible it really is in practice to use the GPU and CPU at the same time for production with Cycles, because reports keep appearing of differences between GPU and CPU results, mainly with hair and volumetrics.

Well, like I said to someone the other day, rendering with CPU and GPU at the same time already works in Cycles. Hit spacebar, type “debug”, and in the debug window enter 256. Then go to the OpenCL section, enable the split kernel and ALL devices, and render.
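If you prefer to flip the same switch from scripting, this is roughly it. Only bpy.app.debug_value is certain here; the remaining steps stay in the Debug panel because the exact property names vary between builds of that era.

```python
import bpy

# Unlock the extra Cycles "Debug" panel described above. Setting the debug
# value to 256 is what exposes it; the remaining steps (OpenCL section ->
# split kernel, ALL devices) are done in that panel, since their RNA names
# differ between 2.7x-era builds.
bpy.app.debug_value = 256

# Then render as usual, for example:
# bpy.ops.render.render(write_still=True)
```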

The only problem right now is that it’s not any faster in my tests on an old i5 with my FirePro W9100 16 GB, because there is no way to set separate CPU and GPU tile sizes. If you could set the CPU tile to 32x32 but the GPU to, say, 756x756 (my FirePro actually renders loads faster at FULL render resolution, e.g. one tile of 1920x1280, than at 128, 256 or 512; about 800x800 comes close to a full single tile but is still slower), then that would be solved.

That’s why I started the patch. The idea: it takes the GPU render tile size and sets the CPU tile size as a division of the GPU size. E.g. with a 512x512 GPU tile size you can pick a divisor of say 4, 8, 16 or 32 with presets in the render panel, so 512x512 divided by 16 gives a 32x32 CPU tile size. The CPU then renders as many small tiles as are needed to fill its assigned GPU-sized tile of 512x512. This way it doesn’t end up breaking any other tile-related code like the denoiser: the GPU tile size decides the denoise tile size, but the CPU may have to render many smaller tiles into the same tile area as a GPU tile before the denoiser runs.
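Just to make the arithmetic explicit, here is my reading of that scheme as a tiny sketch, not the actual patch code; the names and the print are purely illustrative.

```python
# Sketch of the proposed tile-size scheme as I understand it from the
# post above, not the actual patch: the GPU tile size is authoritative
# and the CPU tile size is derived from it by a divisor preset.
GPU_TILE = 512          # GPU (and therefore denoiser) tile size
DIVISOR = 16            # render-panel preset: 4, 8, 16, 32, ...

cpu_tile = GPU_TILE // DIVISOR           # 512 // 16 = 32
cpu_tiles_per_gpu_tile = DIVISOR ** 2    # 256 CPU tiles fill one 512x512 tile

print(cpu_tile, cpu_tiles_per_gpu_tile)  # -> 32 256
```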

Any inconsistency between GPU and CPU is a bug. Whether you’re trying to render on CPU and GPU at the same time or not, the devs take any difference between a CPU render and a GPU render very seriously, as consistency is 100% key; if that doesn’t hold, the renderer is useless. Which is why commits to one render type, CPU or GPU, that give different results in testing will NEVER get committed to master. They have to produce the same outcome.

@3DLuver, hi
Yes, OpenCL of course. But at least on my Intel, OpenCL on the CPU sucks compared to plain CPU for Cycles. And also, of course, CUDA is better than OpenCL for NVIDIA.

Regarding inconsistencies… yes, that is so, they are treated as bugs in the tracker. But unfortunately there are some cases where there is no solution in sight.

Hey YAFU, they must have been cleared up, because I just rendered an animation about three weeks ago for a job that used smoke on a mixed GPU/CPU render farm. We use GTX 1080 Tis in all our new machines, and the rest of them are lower-end 780s and 980s, so those just drop down to CPU.

No inconsistencies that we’ve noticed at all. Maybe there’s an issue with hair, though; I didn’t use that. So far it’s been as smooth as butter. :wink:

By the way, I really wish we could all come up with a much more precise terminology for this technique. What we’re talking about here is taking a bunch of networked computers and having them all contribute to rendering a single frame (or multiple frames, but only one frame at a time). What I usually think of when I read “Network Rendering” is a bunch of frames of an animation being distributed and rendered by a network of computers.

Calling this “Network Rendering” is far too general and could mean a number of different things. V-Ray calls this ‘Distributed Rendering’. C4D calls it Team Render, I think. There are a few others that I’ve heard too: cooperative rendering, single-frame distributed rendering, distributed cooperative… etc. Basically, anything other than “Network Rendering” would be fine with me. :wink:

Hi.
Good to know!
Yes, I did not mean that the problem is always there. I know that from time to time a report about this appears, and I just wondered how often it happens and whether it is feasible for production. Those are two cases I know of because I opened them, but I suppose there are other problematic cases out there found by other users.

By the way, in your case, where the first part was rendered with the GPU and the rest with the CPU, if there were an inconsistency you would notice it mainly at the frame where it switches from GPU to CPU. But I guess inconsistencies in interleaved frames, or even mixed tiles, would be more noticeable and annoying.

Yes, it is a studio. We did the Man in the High Castle stuff. And that horrible Ray and Clovis cartoon, lol.

Look in my description :wink:

While that is true, like 3DLuver said, any difference is a bug and would be a high priority for the devs to fix.

Being able to render with each device’s best compute mode is very important, so you aren’t wasting power and time just to get a render that’s 3 seconds faster. Even if it is faster, I’ll refuse to use it, because I’m not going to use twice as much power to save 3 seconds a frame.

3DLuver, you should first think about your income; try to find paid coding work outside the Blender realm.
If you’re into C++ you should be able to get a job somewhere these days.
Having Blender on your CV as a project contributor doesn’t sound bad either.
It proves you can work in an online, global team.
If you have other skills that earn some money, that’s fine too, but your life comes first.
Keep Blender just as a hobby, like most people do, whenever you have time for it.
Online reputation is a fun thing, but it’s not worth betting your life on.
You don’t have to do this at all.

Well, yeah, funding development through donations is a pretty hard road.

For example, I built up my TurboSquid account and brought it to double-diamond status to be able to support global open-source development such as the blenderfund, GIMP (which, unfortunately, takes funds through GNOME), Inkscape, IFCopenshell and Godot, as well as (pretty much random) individual devs and artists such as Bartius Crouch, pyroevil, mifth, Øyvind Kolås, Martins Upitis, etc. But there are too many recipients, so I can’t provide more than 100-150 EUR per recipient, including a 50 EUR monthly Patreon pledge.
As a result, donations are scheduled for months ahead.

Tellingly, I don’t use any kind of builds at all except official ones, so, for example, there is little chance for me to come across a dev like you.

So one needs to be very careful about building development on a donationware funding system.
I still don’t understand how even the blenderfund keeps going, so I just believe in it like magic.

When it comes to donating to various artists and FOSS applications, you might want to prioritize your donations as to what will bring you the most benefit for your own work.

For instance, fund the development of important applications before you start funding the work of artists (because the applications enable the creation of the content that allows you to actually make that money).

That brings up the increasing popularity of artists setting up Patreon accounts. Whatever happened to the concept of working for your money (either through a job or through setting up a business)? It does seem, to an extent, that Patreon has become a way for people to fund their lives so they can avoid the job market entirely and simply do the things they enjoy.

Good idea, but if I need something for my work, I order tools directly from developers I work closely with, on a commercial basis.
I just buy a dev’s time to solve my problems if things get serious.

The main idea of the donation account is to cover more ambitious, enthusiast-driven development, so the main criterion for donation is not usefulness but awesomeness. Most of these things I will never use myself (such as pyroevil’s Molecular, LeoMoon’s HDRI, or the fracture build), but this is the way I show appreciation that they were written.

However, it is nothing more than my personal project.