Scene Exposure Tweaking, how to exclude backplate?

Hi,

I’m doing some practice with camera tracking and compositing. I have footage shot with my DSLR and I want to composite some 3D objects onto the plate.

Now I wish to use the Filmic Blender color management and real-world-unit light intensities (see this for more info: https://blendergrid.com/news/cycles-physically-correct-brightness ), tweaking the exposure with the slider in the color management tab to fine-tune the 3D objects’ lighting and exposure so they match the lighting and exposure of the backplate.

The problem is that if I put my rendered 3D objects on the plate using an “alpha over” node in the compositor, the exposure slider affects the whole view, including the backplate itself. Is there a way to exclude the backplate from the color transformation? Maybe a node with inverted settings to compensate for the exposure adjustments made in the scene settings tab?

Thanks in advance!

Instead of the Exposure slider on the Colour Management panel, use the ASC CDL node in the compositor, one for each input if necessary, to match the cgi to the backplate. Then you can adjust them independently. https://blender.stackexchange.com/questions/55231/what-is-the-the-asc-cdl-node
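If you want to set that up quickly, something like this could work (a rough Python sketch only; the file path and slope value are placeholders, and the `correction_method` enum name is from memory). It keeps the CDL correction on the CG branch only, so the backplate passes through untouched:

```python
import bpy

# Minimal sketch: layer the render over the backplate and put an ASC CDL-style
# Color Balance node on the CG branch only. Path and slope are placeholders.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new('CompositorNodeRLayers')       # scene-referred CG
plate = tree.nodes.new('CompositorNodeImage')          # display-referred backplate
plate.image = bpy.data.images.load('/path/to/backplate.png')

cdl = tree.nodes.new('CompositorNodeColorBalance')
cdl.correction_method = 'OFFSET_POWER_SLOPE'           # ASC CDL mode (enum name assumed)
cdl.slope = (0.25, 0.25, 0.25)                         # e.g. roughly -2 stops on the CG only

alpha_over = tree.nodes.new('CompositorNodeAlphaOver')
composite = tree.nodes.new('CompositorNodeComposite')

tree.links.new(render.outputs['Image'], cdl.inputs['Image'])
tree.links.new(plate.outputs['Image'], alpha_over.inputs[1])   # background: backplate
tree.links.new(cdl.outputs['Image'], alpha_over.inputs[2])     # foreground: graded CG
tree.links.new(alpha_over.outputs['Image'], composite.inputs['Image'])
```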

Thanks, I guess the equivalent of the exposure is the slope, to be set to small values to get the equivalent of a negative exposure value in the color management.

I’m very keen to investigate this because I would really like to simulate a camera workflow with real values. With real sun and sky intensities (about 441 W/m² and 28 W/m²), the render is overexposed at the default exposure of 0. So to expose it correctly I need to lower the exposure, like I would with a real camera (lowering the exposure time and/or increasing the f-number).

The exposure slider in the color management section is very handy when the render is pure CG with no photographic elements to match. But in this particular case, with the footage backplate, I’m forced to use the Color Balance node. That’s why I’d like to know how the slope slider relates to the exposure.

A combination of the exposure slider and the contrast presets is the same as adjusting the power and slope on the CDL node.
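If I’m not mistaken, the exposure slider works in photographic stops on the scene-referred linear data, so the matching slope is just 2 raised to the exposure value (the ASC CDL transform being out = (in × slope + offset) ^ power). A tiny sketch of that relationship:

```python
# Sketch: the exposure slider scales scene-referred linear values by 2**exposure,
# so the CDL slope that reproduces it is simply that factor.
def exposure_to_slope(stops):
    return 2.0 ** stops

print(exposure_to_slope(-2.0))  # 0.25 -> a -2 stop exposure matches a slope of 0.25
print(exposure_to_slope(1.0))   # 2.0  -> a +1 stop exposure matches a slope of 2.0
```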

Make sure all your input data is scene referred however. Matching scene referred renders with display referred backplates is surely an exercise in frustration. :slight_smile:

If the photograph is yours, DCRAW and a linear .tif is a good starting point.
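For example, something along these lines should give you a linear 16-bit TIFF to start from (a sketch only; the filename is a placeholder):

```python
import subprocess

# Rough sketch: -4 asks dcraw for linear 16-bit output, -T writes a TIFF
# instead of a PPM. 'IMG_0001.CR2' is a placeholder filename; dcraw should
# write a .tiff next to the source file.
subprocess.run(['dcraw', '-4', '-T', 'IMG_0001.CR2'], check=True)
```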

Hey, thanks for the info. The footage I’m working on is a .mov video from my Canon 700D DSLR, so not much control over the input file. Just 8 bits per channel.

What do you mean by scene referred/display referred?

The short answer - Scene referred is the source data used to ‘record’ the image, while display referred is that data transformed for display on a device, often an 8bit gamma 2.2 sRGB display format, shown on a computer monitor.

The (slightly) longer answer - http://www.vfxio.com/PDFs/ACES_v02.pdf

The Filmic LUTs are based on the ACES system.

So, your render data is scene referred, but the .mov is possibly compressed, encoded, 8bit, gamma 2.2, sRGB, and so display referred.

The problem you have now, as you mentioned, is the lack of flexibility when compositing the .mov. Firstly it’s very likely already compressed, so it will suffer more data loss when compressed again to make the final movie, and you have less leeway when grading it to match the cgi.

Great! Very informative.

I was pretty familiar with this topic, but I had totally ignored the scene/display referred terminology.

As far as I understand, if I had linear 16-bit raw footage (i.e. scene referred input data) I could composite all my 16/32-bit linear CG onto the footage, working in linear space in Blender, and use the exposure slider without any problem (or almost).

My only curiosity is:
When shooting raw video with an ACES compliant camera, every pixel has a certain value stored in it, which changes linearly with the light intensity that hits that pixel on the sensor.
To make the whole ACES system work, the value stored should be the exact luminance value, not an arbitrary voltage number which may change from sensor to sensor.

Is that the case?
If so, this is a good reason to use real-world intensity units and physically correct shaders in Blender (as I would like to do): if I use, say, a 60 W point lamp in Blender at 2 m from an almost-white diffuse 3D paper sheet, and then shoot (almost) the same scene with a good camera in real life, the 32-bit raw pixel values stored for a given area of the shaded sheet in the render should reasonably closely match the values in the real footage.
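Just to put a rough number on that idea (back-of-the-envelope only, assuming the lamp emits its 60 W isotropically, the paper faces it head-on, and it reflects as a Lambertian surface with a made-up albedo of 0.8):

```python
import math

# Back-of-the-envelope sketch with idealised assumptions: isotropic 60 W point
# source, paper at normal incidence, Lambertian reflection, albedo 0.8.
power_w = 60.0       # lamp radiant power
distance_m = 2.0     # lamp-to-paper distance
albedo = 0.8         # "almost white" diffuse paper

# Irradiance on the paper from an isotropic point source (inverse square law)
irradiance = power_w / (4.0 * math.pi * distance_m ** 2)   # ~1.19 W/m^2

# Outgoing radiance of a Lambertian reflector
radiance = albedo * irradiance / math.pi                    # ~0.30 W/(m^2*sr)

print(f"irradiance ~ {irradiance:.2f} W/m^2, radiance ~ {radiance:.2f} W/(m^2*sr)")
```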

Anyway, given the limited video capabilities of my camera (what you said is probably exactly right: compressed, encoded, 8 bits per channel, gamma 2.2 sRGB profile), all I can do is take the footage as it is, work on the CGI separately, try to color correct/grade it to match the look of the display referred footage, possibly add a grade at the end of the pipeline, and reduce the export compression to limit the additional data loss from double compression.

I tried Magic Lantern to shoot raw video, but the 700D is probably not capable of handling the bitrate, or at least my class 10 SD card @ 95 MB/s isn’t.

I was thinking of at least linearizing the footage right after the input node with an inverse gamma correction, but with compression and only 8 bits per channel… meh… I’m skeptical.
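For reference, if the .mov really is plain sRGB encoded, that linearization would be the inverse of the standard sRGB transfer function rather than a straight 2.2 gamma, something like:

```python
def srgb_to_linear(v):
    """Inverse of the standard sRGB transfer function, v in 0..1."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# e.g. an 8-bit code value of 128 linearises to roughly 0.215, not 0.5
print(srgb_to_linear(128 / 255))
```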

By the way thanks again for your answers!

I don’t have any experience working with ACES, but I believe the ideal is that it should allow you to work backwards to undo the transforms and get back to the original scene referred data.

Oh, careful now, please don’t say white in a thread troy_s might read. :slight_smile: White doesn’t exist in the real world.

I can’t say they will match accurately but it will certainly be a good starting point and will make compositing and grading easier. You would be able to create custom transforms so that they will match exactly :slight_smile: https://blenderartists.org/forum/showthread.php?428779-Colour-Science-(Was-Blender-Filmic)&p=3211832&viewfull=1#post3211832

Definitely convert the .mov to an image sequence of linearised .tif or .exr. It won’t add any data, but it won’t do any harm either.
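One possible way to do the conversion (just a sketch; the paths are placeholders, the exact pixel format flag may depend on the ffmpeg build, and the linearisation still has to happen afterwards, in the compositor or with an inverse sRGB transform):

```python
import subprocess

# Sketch only: dump the .mov into a 16-bit TIFF frame sequence with ffmpeg.
# 'backplate.mov' and the 'frames' folder are placeholders; the folder must
# already exist.
subprocess.run([
    'ffmpeg', '-i', 'backplate.mov',
    '-pix_fmt', 'rgb48le',
    'frames/frame_%04d.tif',
], check=True)
```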

For better answers pop over to the Colour Science thread, or visit The Rabbit Hole at stackexchange. Or indeed post any questions you have at Blender Stack Exchange. There are lots of knowledgeable people there. https://blender.stackexchange.com/

Thanks for the links organic!

Oh, careful now, please don’t say white in a thread troy_s might read. :slight_smile: White doesn’t exist in the real world.

Ahahah, thanks for the advice! Obviously, by white I meant the “whitest” object I can find (with the highest albedo, as close to 1.0 as I can get).