Do you find Blender limiting when it comes to scene size?

I’ve started work on a scene with a vast number of polys and found that memory usage shoots to over 700 MB with fewer than 400,000 polygons in the scene. The 3D view can get pretty slow with as few as 200,000 polys, or even fewer. Compared to other 3D apps I’ve seen, which can handle millions of polygons, Blender is way behind in handling high polycounts in both the 3D view and the renderer. I know 2.42 will greatly help with rendering large scenes, as needed for the Orange project, but I’m not sure about the 3D view. It makes me wonder whether getting Blender to render large, elaborate scenes, or handle a million or several million polygons in the 3D view, is even realistic.

And this is on a 2.8 GHz computer with 512 MB of RAM.

Come to think of it, even scenes in recent computer games are larger.

Well, the number of polys Blender can handle depends on your graphics card, not your processor. What type of graphics card are you using?

I don’t think Blender uses the GPU. I’m using a GeForce FX 5200, but it should be powerful enough to handle more if that were the case. This is also just plain solid view, and the 700 MB render is with just materials, two lights, and raytracing off.

I have OpenGL 2.0 too.

With an ATI X1600 it starts to drag at 700,000 faces… and rotating sucks eggs, as does navigating after too long.

But think about this a bit: you’re saying the render uses 700 MB and you have 512 MB of RAM. That means it’s having to swap out to disk quite a bit.

With 400,000 polys, that’s roughly 1.75 KB per poly. However, how large are your textures? OSA and MBlur will drive memory use up as well.
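To make the arithmetic above concrete (a rough sketch; the texture size is a made-up example for illustration, not a figure from the poster’s scene):

```python
# Rough memory arithmetic for the figures quoted above.
render_bytes = 700_000_000   # ~700 MB reported at render time
polys = 400_000

per_poly = render_bytes / polys
print(f"{per_poly / 1000:.2f} KB per poly")   # 1.75 KB per poly

# Textures can easily dominate that: one uncompressed 2048x2048 RGBA
# texture alone is 16 MB, before OSA or motion-blur buffers are counted.
tex_bytes = 2048 * 2048 * 4
print(f"{tex_bytes / 2**20:.0f} MB")          # 16 MB
```

So a handful of large textures plus oversampling buffers can account for a big slice of that 700 MB before the geometry itself is even counted.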

As far as the 3D view goes: remember that all the 3D data must be streamed to the graphics card for every frame. If you have a lot of vertices, that can slow things down.

Granted, Blender does have issues with large scenes, but this might help explain some of what you’re seeing.

In this case multiple display options would be handy. When you have a huge scene with a few big meshes, you could select a mesh and display it as a simple object rather than as a detailed mesh, instead of displaying the complete scene as simple objects. I use that display mode a lot when I have huge scenes: I name my meshes, display the names, view them as simple cubes, and only when I enter edit mode does it show all the vertices. :slight_smile:

I’m not sure if this can already be done: selecting an object and choosing how to display it, instead of setting it for the complete scene.

A few things to keep in mind:

Blender currently does not use OpenGL display lists or vertex arrays to cache state changes/drawing. Either of these would probably yield a huge improvement in viewport speed, but display lists would only give significant results for objects that are not deformed, and vertex arrays are not always well supported on all cards (though this may not be much of an issue anymore).

Secondly:

Blender deals with high poly counts much better if all the polys are in the same object. The reason is that multiple objects require multiple 4x4 transformation matrices, which means costly state changes during OpenGL drawing.
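The per-object overhead described above can be illustrated with a toy cost model (the microsecond figures here are invented purely for illustration, not measured Blender numbers):

```python
# Toy cost model of viewport drawing (numbers are invented for
# illustration only): each object pays a fixed setup cost for its
# 4x4 matrix load and state changes, plus a per-vertex streaming cost.
PER_OBJECT_US = 50.0   # assumed per-object matrix/state overhead (us)
PER_VERTEX_US = 0.01   # assumed per-vertex submission cost (us)

def frame_cost_us(num_objects, verts_per_object):
    return num_objects * (PER_OBJECT_US + verts_per_object * PER_VERTEX_US)

# Same 400,000 vertices either way:
one_big_mesh = frame_cost_us(1, 400_000)    # ~4,050 us per frame
many_small = frame_cost_us(4_000, 100)      # ~204,000 us per frame
# The fixed per-object overhead dominates when the scene is split
# into thousands of small objects.
```

Real numbers depend entirely on the card and driver, but the shape of the result is the point: splitting the same geometry across many objects multiplies the fixed setup cost.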

The solution to all of this is to open your favorite code editor and write several thousand lines of C… :smiley:

Cheers,
Xarf

Here are a few tips that might help you.

Get more RAM. A gig runs about 70-80 bucks; get two if you can. If money is an issue, you could just get another 512 MB pretty cheap. I recommend Crucial or Corsair memory.

Get a better video card. This sucks, I know, given the increasing prices. Blender does use your graphics card, as drawing is OpenGL accelerated. The fact that you have OpenGL 2.0 means nothing, really: Blender doesn’t use it (yet). Make sure it’s nVidia, which has much better GL compatibility than ATI.

Perhaps a more practical tip, and an old Photoshop trick, is to force Windows (if you’re running Windows) to write/read its page file on a separate hard drive. Not a separate partition: dedicate an entire drive to this. Make sure it’s fast: 7200 rpm, not 5400. You don’t need a big drive, of course; an old 10 GB one would do just fine. If you run Linux, put your swap partition on this drive.

One thing that slows Blender down is using shaded/textured mode vs solid. Also, I think you can reduce memory usage by reducing the number of undo steps.

Yeah, I think vertex arrays would be a good option. The trouble, as you say, is supporting older cards. Part of me wishes there were less legacy support, because at some point you’re holding back the majority in favour of the minority. Another part of me wants to keep it, because I was quite pleased to see not long ago that I could run Blender in the Virtual PC emulator. Still, that’s a small part, and in general I think some optimization could be done while still supporting the majority of GPUs.

That’s odd; I actually found the opposite. I can get far higher counts with duplicated objects of a few polys each (2 million+ on a GeForce 6600) than with one object with many polys (about 300-500,000).

You have to be careful when talking about render polygons versus display polygons. RenderMan can easily churn out 2 million polys and my Mac mini won’t even blink, but it won’t display anywhere near that in real time. Also, 3D games can handle more because they have seriously optimized engines that take advantage of the fact that their meshes generally don’t change.

:wink:

I’m not sure if this can be done already, to select and object and choose how to display it, instead of the complete scene.

Yup. Here’s a pic:

Other than that, use layers. For rendering, you could do different passes.

Ian

I work as an IT admin for a medium-sized business. My boss’s rule of thumb is that it takes 512 MB of RAM for Windows XP to run at a decent speed. 256 is doable, but not practical.

I assume you are using Windows? If so, try the same scene in Linux and see what you get. For that matter, I would love to try it out on my machine: if you e-mail it to me, I’ll test it in Linux (and IRIX, just for fun) and see how it goes.

Well, on my relatively basic system (2.4 GHz, 1 GB of 333 MHz RAM, nVidia 6600 with 128 MB)… I can manage a scene of 700,000 polys without much problem, and even though it lags a lot, it doesn’t even crash in the 3D view with 8 million polys…

As far as I know, Blender is actually good at managing heavy scenes compared to other apps, but I might be wrong on this since I never really compared; I’m just repeating what people have told me.

I don’t much mind the rendering times of larger scenes; I think that should be expected, and in any case, isn’t the renderer being recoded to use less memory? What really irks me is the usability of Blender when the file size gets big. Past 50 MB things start to slow; past 75 MB it begins to annoy; over 100 MB it grinds. Even when using bounding-box display mode and only displaying layers with a few objects, deleting objects can take seconds or even minutes.

I use Windows XP for all my Blender work, and I have managed million-plus-poly scenes in under 512 MB when using the new particle system. I know there’s hope with the new render system in 2.42; I think we may see larger rendered scenes become possible once the recode is complete.

Have any tests been done comparing render times for large scenes between the recoded 2.42 renderer and the 2.41 renderer?

And I generally never go into the shaded view, I usually work with solid view and wireframe if I have to.

I know you can hide the vertices you’re not working with in a single high-poly object to speed things up, but I generally haven’t used that much since Z-buffering was implemented for the 3D view.

Coding a few thousand lines of C? Someone else will have to do that, as I don’t know that language, or any major language for that matter.

There’s a vertex array button in Blender OpenGL settings. Is it supposed to do something?

That’s for the game engine only.

Martin

A standard computer game doesn’t even reach into the millions.

In a game you’ve got too much happening to have everything rendered, so whatever the player can’t see doesn’t get drawn. That allows for high levels of detail, but the polycount is no higher than a few hundred thousand (nowhere near millions).

When it comes to Blender, everything gets drawn, even the stuff the user can’t see, so it will go much slower compared to games.
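A minimal sketch of the culling idea described above, assuming a simple field-of-view test around the camera’s view axis (the coordinates and field of view are made-up example values):

```python
# Games cull before drawing: only objects inside the camera's field of
# view are submitted to the GPU, so the drawn polycount stays far below
# the total polycount of the level.
import math

def visible(cam_pos, view_dir, fov_deg, point):
    """Crude visibility test: is `point` within fov_deg of the view axis?

    view_dir is assumed to be a unit vector.
    """
    offset = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(d * d for d in offset))
    if dist == 0:
        return True
    # Cosine of the angle between the view direction and the offset.
    cos_angle = sum(d * v for d, v in zip(offset, view_dir)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

cam = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, -1.0)   # unit vector: camera looks down -Z
objects = [(0, 0, -10), (0, 0, 10), (3, 0, -5), (20, 0, -1)]
drawn = [o for o in objects if visible(cam, forward, 90, o)]
# Only the objects in front of the camera survive the cull.
```

Real engines use full frustum planes plus occlusion and portal tests rather than a single cone check, but the principle is the same: geometry behind or beside the camera never reaches the GPU, while a general-purpose viewport has to be prepared to draw any of it.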