Talking about "production readiness" of features

Everything mentioned in the reply to you applies to polygon and object performance. So, the answer is yes.
If you care about use cases other than the ones mentioned, you will have to be more specific than polygon and object performance.

4 Likes

The problem is that Richard believes it is quite obvious that Blender cannot manage “microscopic” sets of around 25k+ objects without freezing for half an hour or so.

Sorry Richard, but you really need to put numbers on the table for people to have a perspective of what Blender can and cannot do. Until now, Blender Institute movies have been done at a quite smallish scale, so Blender is tuned for that. I mean, you can’t ask for miracles (a.k.a. what is normal for packages like Houdini, for example) on workstations with as little as 16/32 GB of RAM. Feelings are good, but numbers are better.

For devs: rendering is not the problem. For medium-size production renderings, 400+ render hours per frame is quite standard, and Cycles can already manage these (more or less) without great complications. No, the problem is in Blender itself: it simply doesn’t scale to scenes where a certain degree of complexity is required.

Pablo’s plan to crack this is a real starting point, but it is just that: a plan. Nobody is looking at the problem right now, because nobody is seeing the problem. And that’s frustrating, I believe.

Edit: I didn’t mean to reply to DeepBlender, sorry… Damned be Discourse.
Edit 2: clarified the 400+ units of time.

That’s what I was aiming for with my questions. There was some revision around the 2.78–2.79 time frame, I think, that added some extra cleanup/sanity checks after objects get removed, and that made deleting an object from a scene with a couple thousand objects very slow from one build to the next. I think I filed a ticket at the time (I hope I did - I even wrote a python script to create those scenes), but since depsgraph was in development, I was assuming that depsgraph and 2.80 would not have that problem to begin with. I actually haven’t checked since.
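For context, a stress-test script of the kind mentioned above is easy to sketch. This is my own illustrative version (the object count and the use of cubes are arbitrary choices of mine, not from the original ticket; it must be run inside Blender, since `bpy` only exists there):

```python
# Illustrative stress test: build a scene with thousands of objects,
# then time how long deleting a single object takes.
# Run from Blender's Python console or Text Editor (bpy is Blender-only).
import time
import bpy

def build_test_scene(n_objects=2000):
    """Fill the scene with n_objects cubes laid out on a grid."""
    for i in range(n_objects):
        bpy.ops.mesh.primitive_cube_add(location=(3 * (i % 100), 3 * (i // 100), 0))

def time_one_delete():
    """Delete the currently selected object and return the elapsed seconds."""
    start = time.perf_counter()
    bpy.ops.object.delete()
    return time.perf_counter() - start

build_test_scene()
print(f"Deleting one object took {time_one_delete():.3f}s")
```

Running something like this across builds (e.g. 2.79 vs. 2.80) would show whether the post-delete cleanup slowdown described above is still present.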

I know, it’s easy to see the problem in front of you and wonder why nobody fixes this obvious flaw in Blender. It’s because there’s a million ways of using Blender and what’s an everyday obvious use case for one is an exotic obscure use for the other. That’s why precise and detailed bug reports are essential.

7 Likes

If you use Linux, you might try GafferHQ.

Yes and no. Until we have at least texture-caching and mip-mapping support in official Cycles builds, it does not scale well yet.

greetings, Kologe

2 Likes

That’s such BS. These days, I’d say frames take about 0.5–1 hour on average. Render times have gotten significantly better in the past few years due to optimizations and the rise of GPU renderers. But even a dozen years ago, when neither hardware nor software was there yet and movie-quality frames could take a dozen hours each, render times over 20 hours per frame were already considered non-standard, regardless of the size of the studio/production, while 400-hour frames would be considered extremely rare.

No production could even function if that were, as you say, standard. You can’t just tell the client/director that you’ll have a single frame of the next iteration ready in 17 days.

2 Likes

You divide those 400 hours by however many CPUs your render farm has, but yeah, that’s too much even for a farm.

Sorry, my bad, it’s 400+ RENDER hours, not chronological hours. Edited for clarity.

And render times depend on the project. Small projects are about 0.5–1 hour on your machine. But I’m talking medium size here (read: theatrical NPR animations), and that means rendering on a farm, but still counting render hours.

Big projects (read: Hollywood level) take far more time, and you really need a big farm. At that level, we’re talking render YEARS.

But then again, rendering is still not a problem with Cycles, since it supports rendering on a farm. So it’s not the problem I’m focused on in this thread.

@razin
He said 400 hours per frame originally. That implies rendering an animation. In that case, you would not use distributed rendering (which Cycles doesn’t even have) to spread one frame across multiple PCs; each PC would crunch its own frame of the animation. It would still take 400 hours if that was the longest frame in the animation.
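To make that distinction concrete, here’s a tiny back-of-the-envelope sketch (the per-frame times are made up for illustration): with per-frame farm rendering, wall-clock time is bounded below by the slowest frame, no matter how many nodes you add.

```python
# Per-frame farm rendering: each node renders whole frames, so the
# slowest frame sets the minimum wall-clock time for the animation.
frame_times = [2, 5, 400, 3, 8]        # hypothetical render hours per frame

wall_clock_floor = max(frame_times)    # with nodes >= frames, all frames start at once
total_render_hours = sum(frame_times)  # the "render hours" figure people quote

print(wall_clock_floor, total_render_hours)  # 400 418
```

Adding more nodes shrinks total elapsed time toward that floor, but only distributed (single-frame) rendering could push it below the slowest frame.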

@stargeizer
NPR animations should definitely not take that long. What usually takes long to render is complex shading… think of a closeup of a glass optics scope with many refraction layers in 4k. That could take 10+ hours even on modern hardware.

I know quite a few people doing Hollywood-grade stuff, and it’s rarely that long these days. Yes, 10–20 hours per frame occasionally happens on high-res frames (8k) with very complex shading, but those are the exception rather than the rule. It’s becoming increasingly important to have good tools for rapid iteration, which allow creative folks to try out more variants before committing to one, instead of being forced into a choice by hardware speed limitations.

Remember, I’m talking about “render hours”. The chronological time depends on the size of the render farm. I’m not talking about rendering on a single PC.

For reference (since personal experiences vary), the following is an answer a former Disney/DreamWorks animator gave two years ago to the question “How long would it take to render a 2 hour 3D photorealistic movie using the best PC specs possible?”:

The “photorealistic” animals in Disney’s live action The Jungle Book took as long as 40 hours per frame to render. The animal is just a single element, and there are multiple elements assembled for a shot. Just the animal element at 40 hours per frame on a single render node would take about 6.6 million render hours for a 2 hour film (2k at 24fps). Not every frame would take 40 hours. Without more information about your film, let’s assume 50% to 100%, so somewhere between 380 and 750 render years (if every frame was perfect on the first render). A shot usually has at least 2 elements: background and character, so double that 760 to 1500 render years.

The fastest desktop CPU currently available is the Intel i9–7980xe. It has 18 cores, but they’re clocked slower than I’d expect any of the farm machines above were, and RAM limitations (128GB) are going to make it inefficient. I think the best we could do on desktop tech would be 4 nodes on a 6 or 8 core CPU. Photorealistic rendering also requires a huge amount of memory per render instance - I’d want 64GB per render node, but I don’t think any current Intel desktop motherboard supports more than 128GB. That’s going to make it hard with desktop tech to get more than 2 or 4 nodes per machine.

So if a desktop configuration can support 4 render nodes (4 render hours per hour), about 200 to 400 years.
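For what it’s worth, the quote’s arithmetic can be sanity-checked. This is my own back-of-the-envelope version; it lands a bit above the quote’s “6.6 million”, presumably due to rounding or slightly different frame-count assumptions on the answerer’s side:

```python
# Back-of-the-envelope check of the render-hour figures in the quote.
fps = 24
film_hours = 2
frames = film_hours * 60 * 60 * fps      # 172,800 frames for a 2-hour film
hours_per_frame = 40                     # worst case cited for the animal element
render_hours = frames * hours_per_frame  # 6,912,000 (quote says "about 6.6 million")

render_years = render_hours / (24 * 365) # ~789 render years if every frame took 40h
print(frames, render_hours, round(render_years))
```

At 50% of the worst case, that comes down to roughly 395 render years, in the same ballpark as the quote’s 380–750 range for a single element.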

Of course, nowadays GPU rendering is all the rage, but it still requires expensive cards to take advantage of it (at the Hollywood level). The alternative? Unreal Engine, as used by The Mandalorian production, or any future realtime engine that can take advantage of a GPU render farm. (I think I saw an article describing the use of 4 workstations with several NVIDIA Quadros (64 GB VRAM each, IIRC) to render the big dome used to film the series.)

Unreal Engine isn’t exactly a viable alternative yet. Disney really hyped up the virtual production on The Mandalorian, but what they didn’t mention was that virtually every shot had to be rotoscoped and recomped anyway.

1 Like

It’s been interesting to read what everyone considers to be ‘production ready’ and exactly what ‘production’ even means. While Blender appears to be production ready for the short Open Movies that the Blender Studio makes (and let’s face it, Spring and Agent 327 are pretty darn good), there seems to be a general view that it’s not able to handle ‘Big Data’.

I noticed some discussion over exactly what big data is and the need for specific examples or numbers. This reminded me that one can get access to an example of big production data.

I would assume we could agree that a recent Disney 3D animated movie would be fair to call big production data and that if Blender could handle that, it could be reasonably considered Production Ready.

So, check it out: www.disneyanimation.com/data-sets/ and grab the Moana Island data.

If you can get Blender to read/edit and render some or all of it then we may have a common starting base for what ‘heavy data’ looks like, how Blender handles it and hence what needs to be improved. Assuming of course that this is the level of production readiness we want Blender to be able to do.

Very true. But paraphrasing the series, “that will be the way” in the [not so far] future. :slight_smile:

The discussion about the Disney/Hollywood level started with an error on my part (that’s what I get for posting at 4:00 AM; never again), and it shouldn’t really matter here. Rendering is not really the problem, and Cycles X is a step in the right direction to evolve things. No, the problem is in Blender, not Cycles. And we honestly should talk about medium-grade production, since it is far more realistic to get in contact with a medium-size studio for evaluation and feedback, and the results are far more attainable. Blender can already meet the requirements of indie/small studios (well… Richard may disagree, and for some reason I think he’s not the only one), albeit with more workarounds than there should be :slight_smile:

1 Like

Only three months ago there was a paper describing distributed rendering with up to 16 GPUs without a significant performance drop (over 94% efficiency). One of the scenes was exactly this Moana Island scene. It doesn’t depend much on the render engine, since it works at the memory-management level and hence requires minimal code changes; to prove that, they modified Cycles.

So Cycles is one paper away from this “common starting base”.

2 Likes

I think thetony20 was talking more about editor performance than rendering performance there.

I’m taking a crack at downloading the dataset to take a look at it myself. I know Blender doesn’t tend to handle importing single very large files very well, but depending on the format, it may be possible to split it up a bit in something else and bring the total dataset into Blender piecemeal.

Well, really both editor and rendering performance, looking at Blender as a whole (yes, I know Cycles is really just a plugin and you can plug in other renderers), but many use Blender more or less as it comes.

So while it may well be good if Cycles is one paper away from being able to render that sort of data, yes you still need to be able to create it, edit it, shade it, animate it, etc.

I haven’t downloaded it myself, as I just don’t have that level of internet bandwidth or data, but looking at the readme, it sounds like a lot of the ‘objects’ come in their own OBJ files. If so, you can slowly load data in and see at which point Blender ‘breaks’, depending on how one defines breaking, outside of just a total crash.

From the readme file on the website:
"
The scene is made up of 20 elements containing meshes with more than 90 million unique quads and triangles along with 5 million curves. Many of the primitives are instanced many times, giving rise to a total of more than 28 million instances of everything from leaves and bushes to debris and rocks. When everything is fully instantiated the scene contains more than 15 billion primitives.
"

I’ll try, but I don’t think my 970 can handle opening this file, nor do I have the system RAM to support it.

[Derailing] And speaking about the not-so-distant future…

(includes Blender cameo :slight_smile: )

Ok, here you go:

1 Like

XSI was a great package, but they made the same miscalculation a lot of people make about Maya: Maya is not just a DCC, it’s a complete platform. XSI wasn’t that. It was too tied into Mental Ray (the whole package was built around the renderer) when it first came out, and it relied on MS technology (ActiveX etc.) for extensibility. They originally wanted you to use an embedded webpage (XSI had a web viewer built in) for extensible UI. It was a complete mess. Meanwhile, even 3DSMax had more extensibility out of the gate. Maya is extensible down to the very core of the application; you can literally build almost anything with it as the base. That’s why Maya is not going anywhere in the industry. The only software that kind of did the same was Fabric Engine, but they made the mistake of using a proprietary language that no one wanted.

By the time XSI addressed the extensibility issues, Maya was completely entrenched in the industry, and studios weren’t budging. Whole studios had built their custom tools and pipelines around Maya. There was nothing XSI could do. XSI did have some success as a game-development tool, since it had great tech for that at the time.

2 Likes