Hello Blenderheads,
I am trying to render a large scene using the Blender Cycles render engine. The scene itself exceeds my GPUs' RAM limit. For this reason I am going to try to split the scene up using either scenes or layers, so that I can render each element separately.
My question is:
Which should I use, layers or scenes?
I have created a test .blend file so that I could play around with compositing three different movie clips together. Sadly, my attempts create transparency issues, and the composite also appears washed out.
My actual project will be rendered in Cycles; so far the renders produce out-of-memory errors on my GTX 560 Ti (448) + GTX 550, which are both only 1 GB cards.
How would I go about optimizing my scenes for GPU rendering without spending a dime?
Use scenes. Blender dumps the memory after each scene is rendered, so this helps prevent crashes. Just be sure that you don't overload the compositor, because even with the Save Buffers option you'll only be able to read back the render layer(s) from the scene in which you started the render. All other scenes will have to be re-rendered. This usually only becomes problematic with high-definition visualizations.
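If it helps, here is a rough sketch of wiring two scenes together in the compositor from Python; the scene names ('Background' and 'Foreground') are just placeholders for however you end up splitting your project:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Start from an empty node tree.
for node in list(tree.nodes):
    tree.nodes.remove(node)

# One Render Layers node per scene you split off.
bg = tree.nodes.new('CompositorNodeRLayers')
bg.scene = bpy.data.scenes['Background']   # placeholder scene name
fg = tree.nodes.new('CompositorNodeRLayers')
fg.scene = bpy.data.scenes['Foreground']   # placeholder scene name

# Alpha Over the foreground onto the background, then send to the Composite output.
alpha_over = tree.nodes.new('CompositorNodeAlphaOver')
composite = tree.nodes.new('CompositorNodeComposite')

tree.links.new(bg.outputs['Image'], alpha_over.inputs[1])  # bottom layer
tree.links.new(fg.outputs['Image'], alpha_over.inputs[2])  # top layer (needs alpha)
tree.links.new(alpha_over.outputs['Image'], composite.inputs['Image'])
```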
As for the GPU optimization, someone else is gonna have to answer that one.
At one point I found what appears to be a way to reduce a scene's memory usage: have the scene make heavier use of Ngons rather than quads, perhaps because you would then have the absolute minimum number of triangles needed to define the surface.
So if you have a mesh with a dense grid of quads to get some smaller details in, you can slash the triangle count significantly by using the 'dissolve' or 'limited dissolve' tool on parts of it. Though if your scene is mostly organic modeling with subsurf, it won't help that much, because you would need to keep most of the quads.
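As a minimal sketch of doing that from the Python console (the 5° angle limit is just an example value, not something from this thread):

```python
import bpy
from math import radians

obj = bpy.context.active_object  # the dense mesh you want to simplify

# Enter Edit Mode, select everything, and run Limited Dissolve.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# angle_limit merges near-coplanar faces into Ngons; 5 degrees is only an example.
bpy.ops.mesh.dissolve_limited(angle_limit=radians(5))
bpy.ops.object.mode_set(mode='OBJECT')

print(obj.name, "now has", len(obj.data.polygons), "faces")
```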
Another method is with displaced meshes: keep a copy on another layer for safekeeping, then apply the existing modifiers and add a Decimate modifier to it. Also turn down the subsurf levels for far-away or tiny objects, as you won't notice the reduction in detail in the render.
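A rough sketch of the decimate-and-subsurf part in Python (the object name and the 0.5 ratio are placeholders, not values from the thread):

```python
import bpy

obj = bpy.data.objects["DistantRock"]  # hypothetical far-away object

# Add a Decimate modifier to cut the face count; 0.5 keeps roughly half the faces.
dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = 0.5

# Lower the Subdivision Surface level used at render time for small or distant objects.
for mod in obj.modifiers:
    if mod.type == 'SUBSURF':
        mod.render_levels = 1  # the drop in detail won't be visible at a distance
```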
The problem lies with the format they chose as well… AVIs don't support alpha channels… so Alpha Over won't work.
Solution: render to a format which has an alpha channel. If you are doing compositing work, I highly suggest you render as an image sequence, say as PNG (or EXR if you need the flexibility).
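For reference, a minimal sketch of setting that up via Python (the output path is just an example):

```python
import bpy

render = bpy.context.scene.render

# Render to a PNG image sequence with an alpha channel instead of a movie file.
render.image_settings.file_format = 'PNG'   # or 'OPEN_EXR' if you need the flexibility
render.image_settings.color_mode = 'RGBA'   # keep the alpha channel for Alpha Over
render.filepath = '//renders/shot01_'       # example path; frame numbers are appended
```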
I didn’t pay attention to the format. Rendering to ANY type of movie file in the first place is a BAD, BAD idea. Always render to an image sequence first (.png, .jpg, .exr, etc.) and then convert to a movie file, or else you’ll be very sorry if Blender crashes. I once rendered a QuickTime movie, and 6 hours into the render a lightning storm crashed my machine and the .mov file was completely useless.
Also, be sure that Blender isn’t putting anything into places where “nothing” ought to be. No “ambient light,” for instance. A single frame, in all areas where there is “nothing there,” should have (R, G, B, A) = (0, 0, 0, 0) and you should verify that.
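One way to sanity-check that on a rendered RGBA PNG, assuming you have Pillow installed (the file name and sample coordinates are only examples):

```python
from PIL import Image  # assumes Pillow is available

img = Image.open("frame_0001.png").convert("RGBA")  # example file name

# Sample a pixel in an area that should be empty; it must come back (0, 0, 0, 0).
r, g, b, a = img.getpixel((10, 10))  # example coordinates in an "empty" region
if (r, g, b, a) != (0, 0, 0, 0):
    print("Warning: background is not truly empty:", (r, g, b, a))
```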
Compositing, in a very general and all-inclusive sense, is always a reliable way to reduce both resource consumption and time … if you carefully plan for it. You’re breaking the total problem down into a series of smaller problems that can be combined, in a process that is exactly like the “mix-down” that occurs when recording a pop song. The renderer now produces raw materials, not “the finished product.” It’s a totally different way of looking at the problem, and it applies equally to BI, Cycles, and external render engines. Really, it’s a different approach to the workflow.