Talking about "production readiness" of features

You divide those 400 h by however many CPUs your render farm has, but yeah, that's too much even for a farm

Sorry, my bad, it’s 400+ hours of RENDER time, not chronological hours. Edited for clarity.

And render times depend on the project. Small projects are about 0.5–1 hr on your own machine. But I’m talking medium size here (read: theatrical NPR animations), and that means rendering on a farm, though still counted in render hours.

Big projects (read: Hollywood level) take far more time, and you really need a big farm. And at this level, we’re talking YEARS of render time.

But then again, rendering is still not a problem with Cycles, since it supports rendering on a farm. So it’s not the problem I’m focused on in this thread.

He said 400 hours per frame originally. That implies rendering an animation. In that case, you would not use distributed rendering (which Cycles doesn’t even have) to spread one frame across multiple PCs; instead, each PC would crunch its own frame of the animation. Even then, the whole job would still take 400 hours of wall-clock time if that was the longest frame in the animation.
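To illustrate the point above: when a farm distributes whole frames to nodes, the wall-clock time is bounded below by the single longest frame, no matter how many nodes you add. A minimal scheduling sketch (hypothetical frame times, not real farm data):

```python
# Frame-level farm scheduling sketch: each node renders whole frames.
# Wall-clock time cannot go below the longest single frame.
import heapq

def farm_wall_time(frame_hours, num_nodes):
    """Greedy longest-first assignment of frames to the least-loaded node."""
    frames = sorted(frame_hours, reverse=True)
    nodes = [0.0] * num_nodes          # accumulated hours per node
    heapq.heapify(nodes)
    for f in frames:
        earliest = heapq.heappop(nodes)  # node that frees up first
        heapq.heappush(nodes, earliest + f)
    return max(nodes)                    # makespan = busiest node

# 999 one-hour frames plus one pathological 400-hour frame, 100 nodes:
print(farm_wall_time([1.0] * 999 + [400.0], 100))  # -> 400.0
```

Adding more nodes shrinks the total only until the longest frame dominates, which is exactly why a single 400-hour frame stays a 400-hour problem.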

NPR animations should definitely not take that long. What usually takes long to render is complex shading… think of a closeup of a glass optics scope with many refraction layers in 4K. That could take 10+ hours even on modern hardware.

I know quite a few people doing Hollywood-grade stuff, and it’s rarely that long these days. Yes, 10–20 hours per frame occasionally happens on high-res frames (8K) with very complex shading, but those are more the exception than the rule. It’s becoming increasingly important to have good tools for rapid iteration, which let creative folks try out more variants before committing to one, rather than being limited by hardware speed.

Remember, I’m talking about “render hours”. The chronological time depends on the size of the render farm. I’m not talking about rendering on a single PC.

For reference (since personal experiences vary), the following is an answer given by a former Disney/DreamWorks animator to the question “How long would it take to render a 2 hour 3D photorealistic movie using the best PC specs possible?” two years ago:

The “photorealistic” animals in Disney’s live action The Jungle Book took as long as 40 hours per frame to render. The animal is just a single element, and there are multiple elements assembled for a shot. Just the animal element at 40 hours per frame on a single render node would take about 6.6 million render hours for a 2 hour film (2k at 24fps). Not every frame would take 40 hours. Without more information about your film, let’s assume 50% to 100%, so somewhere between 380 and 750 render years (if every frame was perfect on the first render). A shot usually has at least 2 elements: background and character, so double that 760 to 1500 render years.

The fastest desktop CPU currently available is the Intel i9–7980xe. It has 18 cores, but they’re clocked slower than I’d expect any of the farm machines above were, and RAM limitations (128GB) are going to make it inefficient. I think the best we could do on desktop tech would be 4 nodes on a 6 or 8 core CPU. Photorealistic rendering also requires a huge amount of memory per render instance - I’d want 64GB per render node, but I don’t think any current Intel desktop motherboard supports more than 128GB. That’s going to make it hard with desktop tech to get more than 2 or 4 nodes per machine.

So if a desktop configuration can support 4 render nodes (4 render hours per hour), about 200 to 400 years.
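The quoted arithmetic can be sanity-checked in a few lines (my own back-of-the-envelope using the quote’s numbers; my frame total comes out slightly above the quoted 6.6 million render hours, but the orders of magnitude agree):

```python
# Sanity check of the render-hour arithmetic in the quote above:
# 40 h/frame worst case, 2-hour film at 24 fps.
FPS = 24
FILM_SECONDS = 2 * 3600
HOURS_PER_FRAME = 40

frames = FILM_SECONDS * FPS                  # 172,800 frames
render_hours = frames * HOURS_PER_FRAME      # ~6.9 million render hours
render_years = render_hours / (24 * 365)     # ~789 render years on one node

# 50%-100% of frames at full cost, two elements per shot, 4 desktop nodes:
low = 0.5 * render_years * 2 / 4
high = 1.0 * render_years * 2 / 4
print(frames, render_hours, round(low), round(high))
```

The result lands in the same ~200 to ~400 year range the quote arrives at for a 4-node desktop setup.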

Of course, nowadays GPU rendering is all the rage, but it still requires expensive cards to take advantage of it (at the Hollywood level). The alternative? Unreal Engine, as used by The Mandalorian production, or any future realtime engine that can take advantage of a GPU render farm. (I think I saw an article describing four workstations with several NVIDIA Quadros (64 GB of VRAM each, IIRC) rendering the big dome used to film the series.)

Unreal Engine isn’t exactly a viable alternative yet. Disney really hyped up the virtual production on The Mandalorian, but what they didn’t mention was that virtually every shot had to be rotoscoped and recomped anyway.


It’s been interesting to read what everyone considers to be ‘production ready’ and exactly what ‘production’ even means. In any case, while Blender appears to be production ready for the short Open Movies that the Blender Studio makes (and let’s face it, Spring and Agent 327 are pretty darn good), there seems to be a general view that it’s not able to handle ‘Big Data’.

I noticed some discussion over exactly what big data is and the need for specific examples or numbers. This reminded me that one can actually get access to an example of big production data.

I would assume we can agree that a recent Disney 3D animated movie is fair to call big production data, and that if Blender could handle it, Blender could reasonably be considered Production Ready.

So, check it out and grab the Moana Island data.

If you can get Blender to read/edit and render some or all of it, then we may have a common starting base for what ‘heavy data’ looks like, how Blender handles it, and hence what needs to be improved. Assuming, of course, that this is the level of production readiness we want Blender to reach.

Very true. But paraphrasing the series, “that will be the way” in the [not so far] future. :slight_smile:

The discussion about Disney/Hollywood level started with an error on my part (that’s what I get for posting at 4:00 AM; never again), and it should not really matter here. Rendering is not the problem, and Cycles X is a step in the right direction. No, the problem is in Blender, not Cycles. And honestly, we should talk about medium-grade production, since it is far more realistic to get in contact with a medium-size studio for evaluation and feedback, and results are far more attainable. Blender can already meet the requirements of indie/small studios (well… Richard may disagree, and for some reason I think he’s not the only one), despite needing more workarounds than it really should :slight_smile:


Only 3 months ago there was a paper demonstrating distributed rendering with up to 16 GPUs without a significant performance drop (over 94% efficiency). One of the test scenes was exactly this Moana Island scene. The approach does not depend much on the render engine, since it works at the memory-management level and hence requires minimal code changes — and to prove that, they modified Cycles.

So Cycles is one paper away from this “common starting base”.
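To put that efficiency figure in wall-clock terms, here is a small illustrative calculation (the 400-hour figure is from earlier in this thread, not from the paper):

```python
# What "over 94% efficiency on 16 GPUs" means for wall-clock time.
def wall_time(single_gpu_hours, num_gpus, efficiency):
    """Hours on a multi-GPU setup, given parallel efficiency in (0, 1]."""
    return single_gpu_hours / (num_gpus * efficiency)

# A hypothetical 400 render-hour job spread over 16 GPUs at 94% efficiency:
print(round(wall_time(400.0, 16, 0.94), 1))  # -> 26.6 hours instead of 400
```

In other words, 94% efficiency means 16 GPUs behave almost like an ideal 15×–16× speedup.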


I think thetony20 was talking more about editor performance than rendering performance there.

I’m taking a crack at downloading the dataset to look at it myself. I know Blender doesn’t tend to handle importing single very large files very well, but depending on the format, it may be possible to split it up in something else and bring the total dataset into Blender piecemeal.

Well, really both editor and rendering performance, looking at Blender as a whole (yes, I know Cycles is really just a plugin and you can plug in other renderers), but many people use Blender more or less as it comes.

So while it may well be good if Cycles is one paper away from being able to render that sort of data, yes you still need to be able to create it, edit it, shade it, animate it, etc.

I’ve not downloaded it myself — I just don’t have that level of internet bandwidth or data — but looking at the readme, it sounds like a lot of the ‘objects’ come in their own OBJ files. If so, you can slowly load data in and see at which point Blender ‘breaks’, however one defines breaking short of a total crash.

From the readme file on the website:

The scene is made up of 20 elements containing meshes with more than 90 million unique quads and triangles along with 5 million curves. Many of the primitives are instanced many times giving rise to a total of more than 28 million instances of everything from leaves and bushes to debris and rocks. When everything is fully instantiated the scene contains more than 15 billion primitives.

I’ll try, but I don’t think my 970 can handle opening this file, nor do I have the system RAM to support it.

[Derailing] And speaking about the not so distant future…

(includes Blender cameo :slight_smile: )

Ok, here you go:


XSI was a great package, but they made the same miscalculation a lot of people make about Maya. Maya is not just a DCC, it’s a complete platform. XSI wasn’t that. It was too tied into Mental Ray when it first came out (the whole package was built around the renderer), and it relied on MS technology (ActiveX etc.) for extensibility. They originally wanted you to use an embedded webpage (XSI had a web viewer built in) for extensible user UIs. It was a complete mess. Meanwhile, even 3ds Max had more extensibility out of the gate. Maya is extensible down to the very core of the application; you can literally build almost anything with it as the base. That’s why Maya is not going anywhere in the industry. The only software that kind of did the same was Fabric Engine, but they made the mistake of using a proprietary language that no one wanted.

By the time XSI addressed the extensibility issues, Maya was completely entrenched in the industry, and studios weren’t budging. Whole studios had built their custom tools and pipelines around Maya; there was nothing XSI could do. XSI did have some success as a game development tool, since it had great tech for that at the time.


It’s already being worked on for Mantaflow

Do you have a source saying that the short release cycles were introduced to reach 3.0 soon after 2.8?

Currently, there are no official GPU fluids testable by users. And if they aren’t testable now, they will not be in an official release in 2 months.

What those reports share is the fact that Mantaflow’s developer stopped his work on GPU fluids to work on 2D fluid simulations in a separate branch. Work done in a new branch will probably not end up in an official release this year.

But fluids are out of scope, yes. I made the mistake of writing “GPU physics” in my reply, but it was a reply about cloth and particles.

I did not write that it was introduced soon after 2.8.
That is my unofficial summary of the past 3 years of improvisation.

None of this convinced me.

After the disappointment that 2.80 would have obvious gaps and instability, the choice was made to speed up the release cycle to reach a satisfying release as soon as possible.
That does not make sense from a development perspective: the incompressible time needed to reach such a goal still has to pass.
But from a marketing point of view, telling users that what they want could arrive in the next release, a few months away — that makes sense.
And by the same logic, abandoning the series numbering from .0 to .9 in favor of .0 to .3, to reach a round number like 3.0 as soon as possible, makes sense too.

2.90, 2.91 and 2.93 should be seen as 2.84, 2.85, 2.86 releases if you want to compare the period we are living through with earlier ones.
Maybe the LTS releases should be compared to the old b or c releases.

With 2 releases per year, Blender’s release cycle would still be faster than comparable software’s.
And it would improve Blender’s reputation to deliver one safe LTS release per year, after months of feature testing of half the novelties, instead of a promise of 2 years of bugfixing.

3 releases per year does not seem to match an agenda that would suit volunteer developers, who may still be studying or working at universities.

4 releases per year is too much. On paper, it may match the agenda, but in reality they are not succeeding in maintaining it.

2 releases per year, as you are suggesting, would mean the developers get a flood of feedback twice a year. With that amount of users and feedback, important reports would have to be found within a huge pile of bug reports.

I agree with you that the initial quality of LTS versions should be better. But this could be achieved by slightly adjusting the way releases are handled. One way would be to spend more time on them. Another could be to literally take the previous release and simply make it the LTS version, meaning it has already gone through lots of user testing and gotten plenty of bug fixes.

Could you explain why you think that?

That should have no impact on feedback.

Yes, we could expect twice as many duplicate reports to close per release. But developers would have half as many releases to deal with, so that should be equivalent.
If somebody files a duplicate report because they did not find the relevant existing one, that has nothing to do with how the release is numbered.
Yes, a release would last twice as long. But the months in a year are not doubled; in the end, users still have the same amount of time in a year to file duplicates.

In fact, we could expect knowledge of a bug’s existence to spread more widely among user communities. As a consequence, a doubling of duplicates should not happen.
We can also consider that some duplicate reports are caused by the releases themselves.
Around a release, people are more inclined to try Blender for the first time and wrongly report a long-standing issue as a new bug.
By decreasing the number of such events, you should decrease the periods when the bug tracker is massively visited.

Students who worked on a Google Summer of Code project want to use it to promote themselves the following year.
That is easier with a short release cycle.
The job is done in summer, integrated into official Blender in autumn, feedback from the official release is handled and bugfixed in winter, and they can be proud of it and show it off in spring.
It is much easier for them to stay invested that way than to say: “I worked a whole semester on that this year, but it will only be publicly acclaimed at the beginning of next year.”

I am not talking about duplicate reports or anything like that. People mostly use the official releases; that’s why most reports come in after an actual release. With fewer releases containing more changes each, instead of the reports being spread out over more releases, they would all arrive in just two waves.