How to speed up rendering of an animation recorded simultaneously from multiple points of view?

I’m rendering an animation recorded from multiple points of view and then integrating the results through the compositor.

The current solution is to create multiple scenes, one for each point of view. Details are reported here:

Is there any way to speed up the process?

Note that I will proceed with batch rendering after an initial render of a few frames, and that the scene is built through scripting.

Now it is very slow because:

  1. Rendering of the sub-scenes is done at the resolution of the final scene, and the result is then cropped and scaled down.

  2. I don’t think any of the computations that are independent of the point of view are reused between the scenes; maybe baking could help, but the animation is not very rich… I only have textures on an armature.
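To put rough numbers on point 1, here is a back-of-the-envelope sketch of the waste (all resolutions below are assumed examples, not taken from the actual project): rendering every sub-view at the final composite resolution and scaling down costs far more pixels than rendering each insert at the size it actually occupies.

```python
# Back-of-the-envelope pixel counts (all numbers are assumed examples).
final_res = (1920, 1080)           # final composite resolution
insert_res = [(480, 270)] * 8      # eight small sub-views at their real size

forced = len(insert_res) * final_res[0] * final_res[1]   # rendered at full size
needed = sum(w * h for w, h in insert_res)               # rendered at own size

print(forced // needed)  # -> 16: each forced full-size render costs ~16x the pixels
```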

The current alternative is to render each sub-image, store it to disk, and recompose in a second phase… I wonder what’s the gain of having a compositor then.
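The second-phase recomposition is mostly layout arithmetic. A minimal sketch in plain Python, using nested lists as stand-in "images" (in practice you would use PIL/ImageMagick or the compositor itself):

```python
# Paste each sub-image (a 2D list of pixel values) into a larger sheet
# at its grid position. Tile size and column count are assumed examples.
def paste(sheet, tile, x0, y0):
    for y, row in enumerate(tile):
        for x, px in enumerate(row):
            sheet[y0 + y][x0 + x] = px

tile_w, tile_h, cols = 2, 2, 3          # assumed tile size and sheet columns
tiles = [[[i] * tile_w for _ in range(tile_h)] for i in range(6)]

rows = len(tiles) // cols
sheet = [[0] * (tile_w * cols) for _ in range(tile_h * rows)]
for i, tile in enumerate(tiles):
    paste(sheet, tile, (i % cols) * tile_w, (i // cols) * tile_h)

print(sheet[0])  # first sheet row: tiles 0, 1, 2 side by side -> [0, 0, 1, 1, 2, 2]
```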

Yes, I have had the same issue, take this render:

Node tree in compositor:

Three scenes are composited together. I tried setting the resolution for each scene so it fitted the space allocated for it, but Blender refused to honour this, rendering everything at the resolution of the overall picture (the glider). It would be far quicker if I could render the two inserts at their own resolutions; then I would only have to transform them rather than scale them as well.

So I would like a resolution (excuse the pun) to this issue also please!

Cheers, Clock. :beers:


Of course, one thing that you can do immediately is to reduce the size of the other renders so that they are no larger than they need to be to serve as an insert…

@sundialsvc4 No, that’s exactly the point… Blender automatically sets the size of the other renders to the size of the final output…

Why not run three instances of Blender at the same time?

Mmh, I’m not sure it is faster… communication between the instances can be complex.

Render every scene separately into an image sequence at the resolution you need, load these rendered sequences into the comp, and at the end render the comp.
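The suggested workflow can be sketched as a simple per-scene loop. This is plain Python with a hypothetical `render` callable standing in for Blender's actual render operator; the path pattern and scene names are made up for illustration:

```python
# Render each scene to its own image sequence at its own resolution,
# so the comp only ever reads finished frames back from disk.
def render_all(scenes, render):
    """scenes: name -> (width, height). Returns output path per scene."""
    outputs = {}
    for name, (w, h) in scenes.items():
        path = f"renders/{name}/frame_####.png"   # per-scene image sequence
        render(name, w, h, path)                  # render at its native size
        outputs[name] = path
    return outputs

calls = []
outputs = render_all({"main": (1920, 1080), "insert": (480, 270)},
                     lambda *args: calls.append(args))
print(len(calls))  # -> 2, one render invocation per scene
```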


Thanks @kesonmis. This is the alternative I hoped to avoid… the compositor is quite useless then. I could just compose everything in Python with another script (why bother loading Blender?).

Why do you want to avoid separating the comp from the render? It has more upsides than rendering everything in one go, the first and foremost being that needing to change just one of the multiple elements that make up your final comp won’t require you to re-render everything.

You can comp stuff in Python, it makes no difference; the main point is to speed up your workflow by eliminating the redundant work that is inevitable if you try to do everything in a single go.


Hello Sir! I, for my part, don’t want to separate things. I have just rendered the video I showed in my post (all 2,830 frames of it), but I would like to have different size settings for each of the scenes rather than have Blender do the lot at the overall (biggest) scene’s size, which is just a waste of effort for my GPU, since the result is then scaled down. Are there any plans in 2.8 to cure this issue, or even to fix it in 2.79? Because of this problem, my renders took nearly twice as long to process as they would have if Blender recognised the different scene sizes.

Rendering each composited frame makes sense to me, I am 100% behind you there!

Cheers, Clock. :beers:


Well, to my understanding, separating the render from the compositor can only be done by saving renders to disk. I have about 40 sub-scenes to render per frame, and just saving and loading them may take more time than rendering them at their actual resolution.

Also, I know that rendering the same frame from multiple viewpoints can be highly optimized, as I suggest in the question (, cannot be checked as is down right now…)

If your stuff renders faster than reading files from disk, there isn’t much to speed up anyway; it is already fast.

Multiple-viewpoint rendering can be optimized, but this doesn’t help with rendering multiple scenes in Blender in any way at the current stage. Multiview rendering is a special subset of views where it is known to be the exact same scene from relatively similar viewpoints. Rendering a bunch of arbitrary views from arbitrary sub-scenes does not fall into this category and, without light caching or the like, won’t lend itself to optimization easily.


Well, in my case I have to render a lot of these animations (~100), so I need all the speed I can get.

The only thing different in each scene is the viewpoint, as I say in the title. What I’m trying to do is similar to generating renderings of a game sprite from different perspectives and putting them all together in a sprite sheet.
Having a scene for each viewpoint was the only way suggested to combine multiple viewpoints in the compositor.
So the lights and objects are exactly the same, and each scene is created by linking data.

I actually don’t see a reason not to back-propagate the size of the actual render to the render layer, but apparently my usage is too specialized.

Another question: if I just render for each camera… is it better to render the whole animation for one camera at a time, or to do all the perspectives and then advance the frame?

Scene size should back-propagate, that is how I would expect it to work, but the compositor is just too basic in its current state.

Regarding rendering, I think rendering one view for all frames, then the next view, etc., might be faster, but with BI I’m not sure if any data can be kept in memory at all; probably it is all trashed before rendering the next frame. In Cycles, the Persistent Images checkbox should keep something around, but what, and how much it actually helps with animation, I don’t know.
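For what it's worth, the two loop orderings being compared can be sketched in plain Python (`render_frame` here is a hypothetical callback standing in for the actual render call): camera-outer keeps one view's data warm across frames, while frame-outer regroups the work by frame instead.

```python
# Camera-outer: finish all frames for one camera before moving on.
def camera_outer(cameras, frames, render_frame):
    for cam in cameras:
        for f in frames:
            render_frame(cam, f)

# Frame-outer: render every camera for a frame, then advance the frame.
def frame_outer(cameras, frames, render_frame):
    for f in frames:
        for cam in cameras:
            render_frame(cam, f)

order = []
camera_outer(["A", "B"], [1, 2], lambda c, f: order.append((c, f)))
print(order)  # -> [('A', 1), ('A', 2), ('B', 1), ('B', 2)]
```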


One thing also to consider is that, even though you are rendering the same scene from a number of different viewpoints, you are quite likely not using every one of the frames from any of them.

Therefore, as I say in a tip: “Edit, Then Shoot.” Work through the entire presentation from start to finish, determining (using fast OpenGL Preview renders) exactly where the “cuts” are going to fall and exactly which frames are going to be visible in the final show from each of those [named …] cameras. It’s important that you have decided on the “final cut” before you proceed with this rendering and assembly process. Put in at least temp music, and make sure that you’ve got the “rhythm” that you want.

Then, render only these frames to MultiLayer OpenEXR files. You can then use the compositor to insert the material into the appropriate film segments.
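The "render only these frames" bookkeeping amounts to collecting, per camera, the frame ranges the final cut actually uses. A small sketch (the cut list below is an invented example):

```python
# Given the final cut list, collect only the frames each named camera
# actually contributes, so nothing outside the cut is ever rendered.
cuts = [  # (camera, first_frame, last_frame) -- assumed example cut list
    ("CamA", 1, 48),
    ("CamB", 49, 120),
    ("CamA", 121, 160),
]

frames_per_camera = {}
for cam, start, end in cuts:
    frames_per_camera.setdefault(cam, set()).update(range(start, end + 1))

print({c: len(f) for c, f in sorted(frames_per_camera.items())})
# -> {'CamA': 88, 'CamB': 72}
```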

Rendering to an appropriate output frame size (just a little bit bigger than what you need) simply saves time when building the inserts. Just be sure that the images are not smaller, so that the compositor is down-sampling the data.

Consider first doing this using OpenGL Preview-generated material, so that you can be absolutely certain that your “final cut” is correct with the inserts actually in place. (Use the Stamp feature on these previews.) Work the Preview renders just as hard as you can to save the most time and to minimize re-work.