In order to drastically speed up the render time of an animation, I would like to restrict the render region strictly to the animated zone and complete the render with the static part of the image as a background. I thought I could easily achieve this in the compositor, but I failed (I’ve just started to tame this amazing application; I’m not an experienced user at all). My intuition was to simply compose the image using an “Alpha Over” node, possibly with a “Scale” node to force the size, and then restrict the rendered region on the camera. Although in the Viewer node the result matches the goal, the final render only produced an image corresponding to the restricted render region, the rest of the image remaining transparent.
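In script terms, the node setup I mean is roughly this (an untested sketch; the background image path is a placeholder):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True  # enable the compositor node tree
tree = scene.node_tree
tree.nodes.clear()

# The static part of the image, rendered once and loaded as a background
bg = tree.nodes.new("CompositorNodeImage")
bg.image = bpy.data.images.load("//background.png")  # placeholder path

# The current, region-limited render
rl = tree.nodes.new("CompositorNodeRLayers")

# Optional Scale node to force the render layer to the output size
scale = tree.nodes.new("CompositorNodeScale")
scale.space = "RENDER_SIZE"

# Put the rendered region over the static background
over = tree.nodes.new("CompositorNodeAlphaOver")
composite = tree.nodes.new("CompositorNodeComposite")
viewer = tree.nodes.new("CompositorNodeViewer")

tree.links.new(rl.outputs["Image"], scale.inputs["Image"])
tree.links.new(bg.outputs["Image"], over.inputs[1])     # bottom input
tree.links.new(scale.outputs["Image"], over.inputs[2])  # top input
tree.links.new(over.outputs["Image"], composite.inputs["Image"])
tree.links.new(over.outputs["Image"], viewer.inputs["Image"])
```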
I can hardly imagine this goal can’t be achieved; it looks like a typical use case to me. Can anyone share the trick?
Hi,
Thanks for your reply.
I’ve deleted my original file. Here is what I’m pretty sure was my attempt to achieve this result.
In the compositor, the viewer shows a good result (the cube is displayed on top of the background). But when I run the render, the area around the region defined for the camera remains transparent.
Note: as a new user, I can’t attach the blend file. I haven’t touched any parameter, except the Transparent option (not needed for my real goal), which I checked in order to confirm the render doesn’t mask the background.
Note: my goal was to make the reels turn, so for the animation render, to restrict the render to the region strictly containing those two reels.
Oh, that’s interesting. In fact I could reproduce the issue. When I select ‘Viewer Node’ instead of ‘Render Result’ in the image editor, I see the correct result.
The fun part is that after clicking around for a while on various settings, it suddenly also looks okay for ‘Render Result’.
Obviously a bug. Actually I have seen a lot of similar issues in 2.90, about stuff not getting updated (for different things, e.g. hair or texture spaces). As I understand it, there was a major rework of the dependency graph handling code. Apparently it still needs some fixing…
Actually I’m still using version 2.83 (so much to learn, no need to rush for new features). I can hardly believe this goal isn’t a standard one for an animation project.
So I think it would be logical for Blender to give the result I’m looking for with these simple parameters, and I’m still convinced there’s a way to achieve this in Blender anyway. The fact that you finally got the result by touching parameters here and there proves it. I’m gonna try.
Blender remains an absolutely amazing application. And for free. It’s a pure gem. Bless those programmers.
Thx
Hmm, okay, I just checked: the behavior seems to be the same back to at least version 2.79.
In any case, it seems the ‘Render Result’ output is only updated after re-rendering, but then the image looks like it should. I find that surprising too, as I remembered it differently. Maybe someone else can enlighten us.
Still, as a workaround, you can select the Viewer Node in the image editor and save the result from there.
Thx [sorry for the delay]
For a single-frame project, your workaround does the job. But for an animation: is there a way to redirect the automatic production and saving of each frame so that it is sourced from the Viewer node?
I’m pausing this project (there are so many things to explore and do with this amazing app. Purely amazing).
But there’s no doubt I will have to come back to this problem sooner or later.
Thx again. There’s one thing absolutely great about this app: its community. I tried to learn 3ds Max last millennium, by myself, with a shitty book (very few tutorials online back then, no YouTube; you had to find your way by yourself). What a pain, for a very poor result. I’ve learned more about Blender in a few months than about 3ds Max in a few years.
When you render an animation, wouldn’t you do the tweaking of the compositing on a single frame, then render the animation with the same compositor settings for all frames? In that case the problem shouldn’t arise, no? Or what is your workflow?
Hi,
Actually I don’t get it. It might be a combination of my Blender and English language weaknesses :/
To my understanding, I proceed as you describe (i.e. tweak the compositing on a single frame, then render the animation with the same compositor settings for all frames), but the problem still arises. So maybe I don’t proceed as you and I think…
In the image editor (which I had never opened before your suggestion… so much to learn), I can select the Viewer Node and then effectively save the current frame, perfectly composited. But when rendering an animation, the saving is done automatically (so far I have actually only used the direct video output, but I could use frame-by-frame output if needed), so I can’t figure out how to use this workaround in that case.
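Could a script automate that? A minimal, untested sketch using a render handler (as far as I understand, ‘Viewer Node’ is Blender’s internal name for the viewer image; the output path pattern is a placeholder):

```python
import bpy

def save_viewer(scene):
    # After each frame renders, save the compositor Viewer Node result
    viewer = bpy.data.images.get("Viewer Node")
    if viewer is not None:
        path = f"//composited_{scene.frame_current:04d}.png"  # placeholder pattern
        viewer.save_render(bpy.path.abspath(path))

bpy.app.handlers.render_post.append(save_viewer)
```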
Note: during this investigation, I discovered the “Render Region” checkbox. Until now, to switch from region to full render (and back), I was in the habit of redefining the region each time. I’ve done it a hundred times (a million times) :/ . So it’s a great discovery to me :) (I should really always look for a less laborious way to perform an action. If something is laborious in Blender, you’re probably going about it the wrong way).
Hmm, that is indeed strange. For me it works like this: the render output gets the compositing applied, but only the compositing as it was set up before rendering (not after). So to me the problem is just that you cannot tweak the compositing after rendering.
So you’re saying the compositing is not done for the produced animation frames? Did you enable Output Properties -> Postprocessing -> Compositing? If that is not the problem, can you post an example blend file?
It is a very common technique to render one frame of a background that isn’t moving, and to composite this with one or more strips that consist of “characters moving against shadow-catchers.” These are output to separate MultiLayer OpenEXR files and then “comp’d” together to produce the final image. In fact, I’d say that most CG images are composites.
The famous (film) photographer Ansel Adams observed that “a picture is captured in the camera, but it is made in the darkroom.” The same principles generally apply here. The final scene is built.
(Note: “shadow catchers” are invisible objects corresponding to scene surfaces, upon which shadows fall. You then capture only the shadow information [channel …], omitting the “catcher” itself. When composited, the strip now appears to cast realistic shadows.)
I fear it’s not strange. Language (I’m French… if miraculously we share the same native language) and Blender knowledge weaknesses are probably the cause of some misunderstanding here.
It’s not clear to me: you ask me to post a file (which I cannot do; error message: “Sorry, new users can not upload attachments.”). Does this mean you managed to get the “good” result? If it works for you, I’m really willing to upgrade my English level to solve my problem.
So I’m gonna try to make a complete, clear, and precise statement:
Situation:
default cube in Blender (Output Properties > Postprocessing > Compositing checked by default)
Transparent checked in the Render Properties > Film section
compositor set up as in the previous post
camera render restricted to a region around the cube (in script form, see the sketch below)
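(An untested sketch of that setup, with placeholder region values:)

```python
import bpy

scene = bpy.context.scene
scene.render.use_compositing = True   # Output Properties > Postprocessing > Compositing
scene.render.film_transparent = True  # Film > Transparent

# Render region around the cube (fractions of the frame; placeholder values)
scene.render.use_border = True
scene.render.border_min_x = 0.3
scene.render.border_max_x = 0.7
scene.render.border_min_y = 0.3
scene.render.border_max_y = 0.7
scene.render.use_crop_to_border = False  # keep full frame size; outside stays transparent
```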
The problem:
the background is not taken into account outside the region; there, the render remains transparent (but the composition is OK in the Viewer node). Inside the region it is taken into account (the compositing works fine there)
same behavior for a single-frame render and for an animation render
for a single frame, as you mention, I can use “image editor > select the Viewer Node > save” as a workaround, but I don’t understand how I could use this trick for an animation render
doing a full render first, without region limitation, doesn’t solve the problem
As mentioned by sundialsvc4, another workaround would be not to restrict the render region, but:
to first produce a render without the moving elements, to be used as the background in the compositor,
then to set all the static elements as shadow catchers, and render the animation with the moving parts on. It should work. It would even be an elegant solution; technically, strictly, the right one, no? BUT very laborious :/ (a scene can contain many elements; mine remain simple so far).
My “solution” was primitive but quick to set up. And to me, it should work; it should even be the default behavior. I mean, I can hardly see a case where the current behavior would be productive.
Thx.
I still don’t understand why my trick doesn’t work, but this solution is elegant (when I posted my problem, I didn’t know about the shadow catcher; I just used it a few days ago for my first camera-match render).
It’s probably the right way to proceed.
But it’s very laborious if your scene contains hundreds of elements. Do you have to check them one by one? Is there a shortcut?
Ah, now I get it. Sorry for the misunderstanding. Apparently when restricting the render region, the compositing on the render result is restricted to that region, too.
Hmm, I also don’t know of a setting that would help. If all you need for the compositing is the alpha over, you could do it with a different tool, e.g. ImageMagick.
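For example, with the frames rendered as PNGs at full frame size (‘Crop to Border’ off), something like this untested sketch could paste each frame over the background (it assumes ImageMagick’s composite command is installed; the paths are placeholders):

```python
import glob
import subprocess

# Paste each region-rendered frame (transparent outside the region)
# over the static background image.
for frame in sorted(glob.glob("frames/*.png")):    # placeholder input folder
    out = frame.replace("frames/", "composited/")  # placeholder output folder
    # ImageMagick: composite <overlay> <background> <output>
    subprocess.run(["composite", frame, "background.png", out], check=True)
```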
If you need/want to do it in Blender, probably the best you can do is to actually render the full image, but such that there is nothing (much) to do outside the region of interest, so rendering will be fast. You could hide the objects from camera view, or alternatively place a rectangular mask object in front of the camera, with visibility set to camera only and holdout checked. Like this:
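In script form, the mask could be set up roughly like this (an untested sketch; the property paths are the 2.8x ones, and you would still have to shape and position the plane so it covers everything except the animated region):

```python
import bpy

# Add a plane to act as the mask in front of the camera
bpy.ops.mesh.primitive_plane_add()
mask = bpy.context.active_object
mask.name = "CameraMask"  # placeholder name

# Holdout: camera rays stop here, leaving zero alpha in the render
mask.is_holdout = True

# Visible to camera rays only, so lighting and shadows are unaffected
vis = mask.cycles_visibility  # 2.8x path; 3.x uses obj.visible_* flags instead
vis.camera = True
vis.diffuse = False
vis.glossy = False
vis.transmission = False
vis.scatter = False
vis.shadow = False
```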
Thx a lot for taking the time to help me.
The holdout option (which I didn’t know about until now) seems to do the job. I’ve checked that surrounding objects masked by the holdout grid nevertheless cast shadows on the visible objects. Perfect. What is not totally clear to me: does Blender nevertheless process the render of those masked objects? (I’m gonna run a test.) If not, this solution is the perfect workaround to easily achieve what I wanted.
But tonight I realized another problem, or what might be a problem, with this process (compositing a render region into a background instead of using sundialsvc4’s solution). For an animation, the static part of the rendered region would probably be contaminated by Cycles render noise; I mean, this noise would probably be animated (due to the fact that parts of the rendered image are changing). So after composition I would get one region where static objects are contaminated by animated noise and another (the background) where the noise is static. In that case, the seam of the composition would probably be obvious. It could be a sufficient reason for my process (even using your solution) to be inefficient for an animation project, and the reason why sundialsvc4’s solution is the efficient one (even if I still don’t get how you can efficiently switch all the static elements of a scene to shadow catchers).
I’ll probably try both solutions (just after finishing my current project, a simple graphic animation for an opening title, using Eevee to keep the whole process very light) to answer those questions.
Once again, thanx a lot. I hope I will someday reach a sufficient level in Blender to be an active part of the Blender community. This community really makes the difference (between 3ds Max and Blender, I tried an alternative solution: a $50 Japanese app, powerful (though less so than Blender) but without any community. I lost a few years trying to find my way).
Thx
I’d say, yes and no. When a ray from the camera hits the holdout plane, it will be terminated there, so there is no further work to be done for the ray. But the hidden objects will still be hit by indirect rays for pixels in the visible region (which is what you want for shadows, indirect light, etc.). But that is just the same as when you define a render region. And there might be some overhead for the tracer to determine that the holdout plane is actually the foremost object, but I assume that it is small.
Heh, you’ve found something there that bugs me too. I wish there were more options to change materials/visibility per render layer. The only things I know of are to put stuff into collections, which can have some attributes changed per layer (overall visibility, holdout, indirect only), or the material override per layer, but that one can only change all materials of all objects in the scene at once.
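For the “hundreds of elements” question above: a small script can at least toggle shadow catcher for a whole collection in one go (an untested sketch; in 2.8x the flag lives under obj.cycles, and ‘Static’ is a placeholder collection name):

```python
import bpy

# Mark every mesh object in the "Static" collection as a Cycles shadow catcher
static = bpy.data.collections.get("Static")  # placeholder collection name
if static is not None:
    for obj in static.all_objects:           # includes nested collections
        if obj.type == "MESH":
            obj.cycles.is_shadow_catcher = True  # 2.8x; obj.is_shadow_catcher in 3.x
```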