Cycles: Experiment with shadow catcher

I think people forget how much compositing and colour grading goes into integrating effects into footage. There is no magic button; you have to do some of the work. This is simply another step closer to easily compositing shadows into scenes, not a solve-every-scenario feature. It is leaps and bounds ahead of how shadows were composited previously; just put a little effort into working them into your scenes.

I noticed as well that the shadow is too light (in layman’s terms, because I don’t know better than that).
I tried to solve it as follows:
a) Reduce the environment light. (That shouldn’t be a solution, because the environment light should match what’s in the picture.)
b) Strengthen the lights. (Not really a solution either.)
c) Put two alpha-over nodes in a row, each having the same input, so you get double the shadow.
See also the thread on one of the latest tutorials about the shadow catcher, where I ask the same question.

Alright,

so I’ve actually made a practical example to show you why Cycles’ shadow catcher in its current state just won’t work.

First, let’s break down what you usually need in order to successfully integrate CG elements into real life footage:
1, A plate/footage
2, An HDRI captured at the location and point of interest where the CG objects will be inserted
3, A 3D scene with a camera positioned to match the position and orientation of the camera that shot the plate/footage
4, The 3D scene also needs to contain stand-in geometry or scanned geometry of the main elements present on the plate, in order to effectively catch shadows and cast reflections

It is not easy to find a good source of an HDRI with matching plate(s) that is free to share, so I’ve made my own synthetic, but still quite valid, example: https://www.dropbox.com/s/a3elcjiqbh3rvyg/ShadowCatcherTest.zip?dl=0

The archive above contains 4 files:
1, Plate
2, Matching HDRI
3, .blend scene containing both stand-in geometry for the objects already captured on the plate, as well as new CG objects to be inserted on the plate
4, A reference image

This is the reference image:


The wooden planks, concrete wall, window, golden teapot and pepper mill are objects captured with the plate.

The chrome teapot, red teapot and glass torus are CG objects inserted after reconstructing the “shot” plate in the CG scene.

Things you see on the plate that must happen:
1, CG objects cast the same shadow color as the objects shot on location
2, Thanks to the stand-in geometry, objects shot on location are both reflected and refracted on/in the CG objects
3, The ShadowCatcher solution needs to employ differential shading to be able to dissolve (not just add) reflections of the CG objects on top of the objects shot on the plate, assuming stand-in geometry is in place. In our picture, it also means that the introduced CG objects correctly reflect in the objects that are already on the plate.
4, ShadowCatcher automatically projects the plate on top of the stand-in geometry, so for example in the reflection of the chrome teapot, the reflections of the planks are correctly distorted, and it really looks like the chrome teapot belongs in the scene.
5, This is extremely important: ShadowCatcher catches shadows not only for direct eye rays, but also for reflection and refraction rays. So the stand-in geometry you see reflected behaves exactly the same as it looks directly. It is a surface that has no shading of its own, but is shadowed by the scene lighting and receives bounced GI light from the CG objects.

If shading in reflections were just diffuse, you would see that the lighting on the stand-ins in reflections does not match the lighting of the stand-ins when viewed directly, because light that was already there, captured on the plate, would be added twice, and so would the shadows.
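The double-adding described above is exactly what differential shading avoids: you add only the difference the CG objects cause, so light already captured on the plate is counted once. A hedged single-pixel sketch in plain Python (the function name and values are my own for illustration, not Blender API):

```python
def differential_composite(plate, with_cg, without_cg, covered):
    # Where a CG object covers the pixel, use the CG render directly;
    # elsewhere, add only the *difference* the CG objects caused
    # (shadows darken, bounce light brightens) to the real plate.
    if covered:
        return with_cg
    return max(plate + (with_cg - without_cg), 0.0)

plate = 0.6        # captured pixel value from the footage
without_cg = 0.6   # stand-in render, no CG objects
with_cg = 0.45     # same render, but a CG object casts a shadow here

out = differential_composite(plate, with_cg, without_cg, covered=False)
assert abs(out - 0.45) < 1e-9  # shadow subtracted once, not added twice
```

If the two stand-in renders are identical (the CG objects change nothing at that pixel), the plate passes through untouched, which is the whole point.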

If shading in reflections were just an emission or holdout material, then you would not see caught shadows in reflections, resulting in a very ugly, bright, fake look around contact areas.

So, you have the file, you have the reference. Anyone can now try it and see how poor the current Cycles solution is :slight_smile:

No, it does not. I do it almost on a daily basis. Fakery such as tinting the shadows manually in compositing, using AO, and so on, is the workflow of the early 2000s. Every major production renderer currently has a correct, fully working solution that makes the result accurate and minimizes the amount of manual tinkering. I use it on a daily basis, and it has never failed me.

The issue is that not many people knew how to use it back then, so they did terrible things. And some people these days either still don’t, or just don’t use a renderer that has any capable solution, Cycles being a prime example.

I don’t have much to say on this particular matter, but I will agree to the points regarding the need for a WYSIWYG workflow while rendering the image.

I know that Blender has a compositor, but in my opinion, the only way you could guarantee heavier use of the compositor by purely creative types (without expecting everything to be done before that stage) would be to actually have the compositor run every N passes while the render is in progress, then show the composited result in the render window or rendered viewport, regardless of whether you’re using the progressive refine option.

In fact, I think the idea above might ultimately be better in the long run; it would really speed up iteration time if you did not have to stop the render to see the final composite.
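As a rough sketch of that scheduling idea, assuming a compositor refresh every N samples plus a final one when the render completes (plain Python, not Blender code; the function name is hypothetical):

```python
def composite_refresh_points(total_samples, n):
    # Sample counts at which the compositor would be re-run:
    # every n samples, plus once at the very end if needed.
    ticks = [s for s in range(1, total_samples + 1) if s % n == 0]
    if not ticks or ticks[-1] != total_samples:
        ticks.append(total_samples)
    return ticks

# For a 100-sample render refreshed every 32 samples, the artist
# would see the composite at samples 32, 64, 96 and 100:
assert composite_refresh_points(100, 32) == [32, 64, 96, 100]
```

The render itself keeps accumulating samples in between; only the composite view is refreshed at these points, so the overhead stays bounded.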

I will have to take your word for it, as I am not a professional production artist/compositor. But most of the video content (TV, digital content, etc.) I have ever worked on (as an Art Director) goes through several sessions of colour grading (DaVinci) and compositing (typically Nuke or Flame). So my comments come from outside the Blender experience, but from post sessions with people who do this for a living (film and TV). That includes shot video, raw from the camera. I can’t think of a single time that shadows, highlights, objects, characters, etc. have not gone through colour grading of some kind.

Again, I am not the one doing the final compositing work, so maybe my observations are flawed, but I still think my comment stands: the shadow catcher is a massive improvement over what was previously in place for Cycles (AKA nothing).

So a few hours ago, I did dabble with the shadow catcher option a little, and I’m left to wonder if a key element is missing if the compositor is to be used.

So now we have the code to calculate just the shadow from a mesh light, yet the shadow pass remains in a state where it doesn’t support it. Would it be possible to extend it to support mesh lights based on this new code? It’d make compositing easier, as it would only be a matter of using the shadow pass in such cases.
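For context, the classic shadow-pass workflow the post alludes to multiplies the plate by the shadow pass (1.0 = unshadowed, 0.0 = fully shadowed) and then puts the CG render over the result. A minimal single-pixel sketch in plain Python (names and values are my own for illustration, not Blender API):

```python
def shadow_pass_composite(plate, shadow_pass, cg_rgb, cg_a):
    # Darken the plate by the shadow pass, then composite the CG
    # render over the shadowed plate with a standard "over" operation.
    shadowed = plate * shadow_pass
    return cg_rgb * cg_a + shadowed * (1.0 - cg_a)

# A pixel with no CG coverage, half in shadow: the plate is darkened.
out = shadow_pass_composite(plate=0.8, shadow_pass=0.5, cg_rgb=0.0, cg_a=0.0)
assert abs(out - 0.4) < 1e-9
```

This only works where the shadow pass actually contains the shadow in question, which is exactly why mesh-light support in the pass matters.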


With the split kernel, principled shader, denoiser, and speed boosts where possible, I did not have great expectations for the shadow catcher in 2.79.
I don’t deny that it is not satisfying if you want to use it for a scene more complex than a subject and a ground.

IMO, the fact that we cannot cancel out the shadow of a CG object inside an already-shadowed area of the shadow catcher object is more problematic.
The condition of ignoring self-shadowing is not satisfied.
In red: what is present in the real footage.
In green: what is wanted.
In black: what will be problematic after the alpha-over.
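One common way to satisfy that self-shadowing condition is a ratio trick: measure how much light the CG objects removed from the catcher relative to the light it would receive without them, so areas that are already dark on the plate are not darkened again. A hedged sketch in plain Python (the function name and values are my own, not Blender code):

```python
def catcher_shadow_factor(lit_with_cg, lit_without_cg):
    # Ratio of light reaching the catcher with vs. without the CG
    # objects. 1.0 means "no extra shadow", 0.0 means fully occluded.
    if lit_without_cg <= 0.0:
        return 1.0  # already black on the plate: nothing left to catch
    return min(lit_with_cg / lit_without_cg, 1.0)

# Inside a real shadow on the plate, the stand-in receives little
# light either way, so the factor stays at 1.0 (no double shadow):
assert catcher_shadow_factor(0.05, 0.05) == 1.0

# In a lit area occluded by a CG object, the factor drops:
assert abs(catcher_shadow_factor(0.2, 0.8) - 0.25) < 1e-9
```

The factor is then multiplied onto the plate (or used as alpha for a black overlay), so the CG shadow only appears where the CG objects actually removed light.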


Hey all. So what’s the final assessment of the current condition of Sergey’s shadow catcher tool? As in, what still needs to be improved? A list of things, not a long drawn-out explanation. I know Sergey and others said it’s hard to get Cycles to do a good shadow catcher because of the type of render engine it is and how it internally handles shadows. But I think there are a lot of clever and experienced programming minds out in the wild who would like to take a crack at it. There are, I am sure, hundreds of out-of-work render engine programmers around the world, or recent college graduates, who could tackle this. I’d like to still get this fixed for 2.79x even though 2.8 is in development. From what I can see, 2.79x will be around for a long time, lol… Thanks.

I’m no programmer, although I wish I could help in that department. However, I think that this setup I created works much better. It allows the object to interact with the scene more: to a reasonable extent, colors bleed from diffuse objects, and caustics are cast from translucent and reflective materials. Look through this and let me know what you think. I would think someone could easily integrate this into Cycles. The key things are the scene node setup (load an image into the far-left node) and the plane object (put the same image into the material’s texture node). Add lights and objects and render away.



Changed image to include better example
Blender file below.
composite test.blend (1.19 MB)

Sergey had a challenge with the shadow catcher in Cycles because, from what I understand, the type of render engine Cycles is does not lend itself well to managing shadows directly. So coding a shadow catcher for it presented some technical challenges in the fine details. Now, a clever mind may be able to come up with an algorithm that works around those challenges, but first the fine details of what is missing or wrong in the current implementation need to be identified. Thus my question…

If you backtrack and read all my posts in this thread, you will find every single requirement of a proper shadow catcher implementation, as well as a description of which of them the current implementation doesn’t handle correctly. It cannot get any more detailed than that.

Rawalance, does my setup meet all of your criteria?