Creating a Scale of Lighting, wide exposure

I’m having fun putting a scene together very slowly, but in the right “order”: the setup, the objects or placeholders, examining angles. When it comes to the lighting, I’d like to create a situation where I have to adjust the exposure of the render, just like a camera or camera phone has to adjust its settings to account for light, instead of adjusting the light to account for the render. Does that explain it well?

The environment light is boosted by an Area light with the Portal setting enabled, and the fluorescent lights (green) are totally invented since they’re not blackbody emitters. There are ceiling lights and all that jazz, and a hallway where the light falls off significantly toward the bathrooms.

I haven’t been afraid to set light strengths way up high to help create blowouts or overexposed areas, but Filmic stretches really well. I used to be skeptical about Filmic’s range, and maybe still am a little, but here’s the question: have you organized a scene with wide exposure values successfully for a still or animation? Not using an HDRI, by the way!

The image so far is Cycles, about 512 samples with denoising on. It’s rendered at 50% size, and the render time is massive because the computer I’m using is nearly a decade old with a “slim-profile” video card! I’m lucky it works at all!

Intuitively, I would suggest that you must “adjust the light to account for the photogra … I mean, the render.”

These days, “our phones” have amazing abilities to compensate for “whatever we might throw at them” and to “produce an acceptable photograph, regardless.” But you really can’t be sure what they’ve actually done, and you certainly can’t rely on it.

“The renderer” is actually very much like a piece of photographic film – although video, strictly speaking, is not like “film” at all – in that it must somehow assimilate whatever is put in front of its “lens.” Therefore, as the great photographer Ansel Adams pointed out in articulating his famous “Zone System,” you need to carefully control what it “sees.” And you need to do this, not only for “practicality,” but also for dramatic effect.

I know exactly what you mean, and there’s an absolutely fabulous addon to do it (which also gives you a bunch of other really handy camera-control features) called Photographer.

It sets a standard for brightness values regardless of where they come from: images, mesh lights, sun, etc. This way you can use real-world values and get really nice results without having to play with them too much. When you change the ISO, aperture, and shutter speed of the ‘camera’ (settings Photographer gives you), it changes the DOF, the motion-blur length (if enabled), and the brightness automatically.
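For anyone curious about the arithmetic those camera settings drive, here’s a rough stdlib-only sketch (my own approximation of the standard EV100 formula, not Photographer’s actual code):

```python
import math

def ev100(aperture: float, shutter: float, iso: float) -> float:
    """Exposure value normalized to ISO 100, from the f-number,
    the shutter time in seconds, and the ISO sensitivity."""
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

# The classic “sunny 16” rule: f/16 at 1/125 s, ISO 100 lands near EV 15.
bright_day = ev100(16, 1 / 125, 100)

# Quadrupling the ISO brightens the image by two stops,
# so the normalized EV drops by 2.
faster_film = ev100(16, 1 / 125, 400)
```

Each whole step in EV is one photographic stop, i.e. a doubling or halving of linear brightness, which is why matching exposure across very different light sources becomes simple addition in EV space.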

As for how forgiving Filmic is, that is part of its design. If you want to more closely match a particular camera, you’ll need to export the EXR and run it through a profile for that camera, though there is a lot to learn if you want to do that stuff right. An alternative is to just crush the whites a little bit after the render rather than trying to push lights so bright that they blow out.
This is because Filmic (if I recall correctly) has 22 stops of dynamic range, while 15 is pretty good for professional photography cameras these days. That means ‘pure white’ in Filmic is more than 100 times brighter than white on a professional camera.
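Taking those recalled numbers at face value, the “more than 100 times” figure is just stop arithmetic, since each stop doubles the light:

```python
def stops_to_ratio(stops: float) -> float:
    """Convert a difference in photographic stops to a linear
    brightness ratio: each stop is a factor of two."""
    return 2.0 ** stops

# 22 stops (Filmic, per the recollection above) minus 15 stops
# (a good professional camera) leaves 7 stops of extra headroom.
headroom = stops_to_ratio(22 - 15)
```

Seven stops of difference is a factor of 128 in linear light, which is where the “more than 100 times brighter” claim comes from.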

When I take even an “ordinary” photo on my iPhone, the phone actually captures three exposures and then combines them, discussed here:

Now, it’s easy enough to apply that same algorithmic processing to a CG-produced image … but if you are looking for a truly professional-looking result in a complex lighting situation (like photos taken on this set promise to be), the results can easily wind up looking like a compromise. And if you need to render them (in particular) to printed output – to the CMYK color space – you’re going to find light and dark spots with loss of detail.

The trick is to light the scene in such a way as to bring the overall tonal-range down with lighting, so that areas which are close to one another but noticeably different in illumination – e.g. the lighted display case with dark storage shelves just behind it – all render nicely at the same time and can readily be viewed (after JPG compression has done its murder), or printed on a decent device. Blender’s “histogram” tool is a go-to for quickly assessing the relative levels in the scene and in differing colors.
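To make the histogram idea concrete, here’s a hypothetical stdlib-only sketch of what a luminance histogram computes (not Blender’s implementation), assuming linear RGB pixels and the standard Rec. 709 luma weights:

```python
def luminance(r: float, g: float, b: float) -> float:
    # Rec. 709 weights: green dominates perceived brightness.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def histogram(pixels, bins: int = 8):
    """Bucket pixel luminance into `bins` equal ranges over [0, 1].
    A pile-up in the first or last bucket means crushed shadows
    or blown highlights."""
    counts = [0] * bins
    for r, g, b in pixels:
        y = min(max(luminance(r, g, b), 0.0), 1.0)
        idx = min(int(y * bins), bins - 1)  # clamp 1.0 into the top bin
        counts[idx] += 1
    return counts

# Two bins: one black, one mid-grey, one white pixel.
demo = histogram([(0, 0, 0), (0.5, 0.5, 0.5), (1, 1, 1)], bins=2)
```

If the display case and the shelves behind it land in adjacent buckets rather than at opposite ends, they’ll survive JPG compression and printing together.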

Camera magic-tricks are marvelous but they only go so far. (Apple’s example pictures are carefully crafted to make the HDR result look very good, but notice that the three exposures are not very different from each other, nor do they need to be.) As my photo mentor told me: "Look at the light."

That was a really thoughtful response, sundialsvc4, and in real life I’m a fanatic for HDR, tonemapping, and experimenting with compression techniques.

The essence of this post was actually the opposite, not to bring a range down, but to experiment with what it takes in order to create a loss of detail!

Strange, right? :slight_smile: I’m just a little bored with perfectly balanced renders because now that’s the look of 3D. The “Photographer” addon comment was the reminder I needed. I’d heard of that tool but totally forgot about it.

All this balance has eliminated stark lighting as a tool for composition. In advertising, for example, contrast is an important way of drawing attention to the right product instead of the whole scene. Plus, I’m not too proud to say that NOTHING I make is so good that it needs the full attention of good, balanced lighting! I need to hide some stuff, man! :slight_smile:

The Photographer addon is a good way to set your initial strengths, as you can adjust lighting levels to match what would be familiar in real life. This has a few problems:

  1. Light doesn’t bounce around enough (contributions being cut off, or insufficient bounces).
  2. Specular light transport is usually lost, as we tend to turn off caustics to help the noise situation.
  3. Image stamping of metadata reacts to exposure (unless fixed recently, can’t check right now), which makes the feature unusable. Maybe this should have been a compositing feature instead?
  4. In the past I’ve had serious issues using Eevee with “proper lighting and exposure”, even some crazy bugs - although I expect bugs to be fixed now or eventually.

When you have set the exposure and light levels and find the image underexposed, you can start tweaking the lights beyond reasonable realism to mimic the real-life trickery they would use on set or on location, e.g. dimming down the HDRI background to mimic ND filters on windows, and adding “set lights” to compensate for all that “lost light”.
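That ND-filter trick translates directly into numbers. A quick hedged sketch (the `dimmed_strength` helper is my own illustration, not an addon feature), assuming a filter’s ND factor is the fraction of light it cuts:

```python
import math

def nd_to_stops(nd_factor: float) -> float:
    """An ND filter's factor gives its light reduction in stops:
    ND8 transmits 1/8 of the light, i.e. 3 stops darker."""
    return math.log2(nd_factor)

def dimmed_strength(original: float, nd_factor: float) -> float:
    """Strength to give the HDRI background to mimic taping
    that ND filter over the windows."""
    return original / nd_factor

# Mimicking an ND8 on the windows: drop the HDRI strength to 1/8.
dimmed = dimmed_strength(4.0, 8)
```

Then your “set lights” add back just enough controlled light indoors, exactly as a gaffer would on location.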

I highly recommend watching some film-lighting masterclasses on YouTube.