Following some tutorials and courses about lighting in Blender, I ran into some doubts, so I would like to ask some questions that apply to both rendering engines (Cycles and EEVEE):
1 - In one tutorial the instructor suggests using the Exposure value (Render Properties > Color Management > Exposure) to increase the light intensity instead of using the Power parameter of the light in the scene (Light Object Data > Power), assuming that the World is disabled (or has 0 Strength). I don’t think this is correct: you should always leave the Exposure value at 0 in Color Management and increase light intensity with Power. What do you think?
2 - Filmic in Color Management.
In Color Management, Filmic is the default value, but I think this is wrong because Filmic is a sort of LUT, so it is something you should apply after your lighting, during the grading process. Correct?
Okay, these are my first questions about general lighting setup, and I’d be glad to get feedback in order to go deeper into the topic of lighting in the correct way.
By the way, if someone can suggest a course or tutorial that is REALLY WELL DONE and, of course, UPDATED (I started with Blender 2.8, no experience before it), that would be great!
Thanks for a reply!
I am not a color management scientist, but I use EEVEE extensively and I developed an add-on for lighting your scene in EEVEE. That took quite some research and experimenting, and I think I have enough experience to answer you from practice (rather than from theory).
Like you, I used to leave the Exposure at 0, and often I still do. But imagine you have a real camera and you want to take a picture of your family at night, sitting around a candle. What are you going to do when you can barely see anything in the camera? Are you going to bump up the strength of the candle’s light, or are you going to change the exposure in your camera? So the answer is: it might be a good habit to start with exposure at 0, but it’s just a reference.
Now, to get an idea of how strong your light is, you can do the following: add a monkey, give it a material, and set each of its R, G and B values to 0.18 (in the color picker). That is called mid-grey. You can then adjust your light or exposure so that it shows grey in False Color (Color Management > View Transform). Mind that it’s not possible to make the whole monkey grey, but you will see some grey areas: blue indicates it’s too dark, yellow indicates it’s too bright, and in between you see a small area of grey. So, if you render this out, you will see that your mid-grey is now indeed mid-grey in the render: 0.5, between black and white. That’s one point of reference. The other two points of reference you need are the exposure and the strength of the light. Specifying the strength of the light is quite difficult already, because do we really know what a watt is, exactly? At least you have something to hold on to. Next, adjust the strength of your light as well as possible (try to get real-life values), and then adjust the exposure. Or, explained differently: get your phone, turn off auto-exposure and take some pictures indoors and outdoors. Don’t play with the exposure, and count how many of your pictures are a success (1 out of 20, maybe?).
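The mid-grey trick above can also be turned into numbers. A minimal sketch in plain Python (no Blender API; the 0.18 scene-referred mid-grey is the value from the post, the function name is mine, and I’m assuming exposure acts as a 2^EV multiplier on scene values):

```python
import math

MID_GREY = 0.18  # scene-referred middle grey, the value used with False Color

def exposure_to_mid_grey(measured):
    """EV offset needed to bring a measured scene-referred value to mid-grey.

    Assuming exposure multiplies scene values by 2**ev, solving
    measured * 2**ev == MID_GREY gives ev = log2(MID_GREY / measured).
    """
    return math.log2(MID_GREY / measured)

# A patch reading 0.72 (two stops too bright) needs -2 EV to land on mid-grey:
print(exposure_to_mid_grey(0.72))  # → -2.0
```

This is only a sanity-check tool: the False Color view does the same comparison visually.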
That sounds a bit strange: apply Filmic after lighting? The problem we had with the default view is that the light (in our high dynamic range digital scene) is clipping (on the low dynamic range of our monitor) in the viewport. And that clipping also distorts the colors: they don’t show up correctly, and they look really ugly once you realize that. In layman’s terms (forgive me @troy_s, I am not a color scientist), Filmic is squeezing the high dynamic range of your digital scene (where the values are much different than on your screen) in a way that lets you see the whole of that range on your monitor. The process of squeezing from the scene-referred domain to our display domain (our monitor) is not as straightforward as you might think, and quite a mind-boggling process, but Filmic is the best answer to that so far in Blender.
So I am not sure if I understood your second question, or where you are coming from. Another thing: if you save your render in high dynamic range, like with .exr, Filmic is not applied to the file. So Filmic is for you to see the whole high dynamic range squeezed into the low dynamic range of your monitor. In case you render to .png or .jpg, surely you need to post-process (use another program, or use Blender but then in the default view and not Filmic anymore) and play with the levels or brightness and contrast.
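The EXR-versus-png/jpg point can be sketched in a few lines. This is a hypothetical helper, not Blender’s actual file I/O; it only shows why values above 1.0 survive in float EXR but get clipped in 8-bit output:

```python
def to_8bit(value):
    """Display-referred 8-bit quantization: values outside 0..1 are clipped."""
    clipped = min(max(value, 0.0), 1.0)
    return round(clipped * 255)

# Example scene-referred values: a shadow, mid tone, a lamp, a sun glint.
scene_values = [0.05, 0.5, 4.0, 16.0]

# A float EXR keeps the list as-is; an 8-bit file clamps everything over 1.0:
print([to_8bit(v) for v in scene_values])  # → [13, 128, 255, 255]
```

The lamp and the sun glint become the same flat 255 in the 8-bit file, which is exactly the information a later grade can no longer recover.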
Okay Peetie, thank you for your answers.
But unfortunately there are some things, above all about photography, that are not correct.
Exposure is a parameter in Blender, but you don’t find any Exposure parameter on a real camera. On a real camera, exposure is a value you obtain through different parameters: distance of the camera from the subject, aperture (f/…), shutter speed, and obviously light intensity. There is also another problem in your candlelight example: as a good photographer and/or lighter, you have to decide what your target is and focus the attention on it. Not everything should be perfectly lit or visible.
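For reference, the way those camera parameters combine into a single value is standard photography math, sketched below (the function name is mine):

```python
import math

def ev100(aperture, shutter, iso):
    """Exposure value normalized to ISO 100.

    Standard definition: EV100 = log2(N^2 / t) - log2(ISO / 100),
    where N is the f-number and t the shutter time in seconds.
    """
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

# Sunny 16 rule at f/16, 1/100 s, ISO 100 lands near EV 15:
print(round(ev100(16, 1 / 100, 100), 2))  # → 14.64
```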
For these reasons, when I have a solid general knowledge of lighting in Blender, I’m going to use real camera parameters (I’d like to test the Photographer add-on, which seems very good!).
In other words, because everything is possible in CG, I prefer to pump up the candle light’s intensity rather than touch an unreal parameter like the Exposure of a 3D program. This is a photographer’s and filmmaker’s approach: I try to do in computer graphics what I would do if I were on a real set, with natural light or electric studio lights.
For these reasons, what you say here hardly makes sense to me:
Knowing how intense the light I am using is, is not a big problem for me: I simply use different parameters to measure it, such as watts, lumens, lux… Again, this is my approach as a filmmaker and photographer, based on real physical lighting. Of course, I need real lights and a real camera add-on (i.e. https://blendermarket.com/products/photographer for the camera, and https://blendermarket.com/products/extra-lights), I mean a camera and lights with real physical parameters. This is maybe more complex, but for me and my knowledge it seems to be much, much better.
About Filmic, yes, I understood: it is only a mode for displaying what you are watching on your monitor. Standard corresponds to sRGB; Filmic is a new display mode with a bigger dynamic range, and it lets you really see how your final render can look. So you can be sure whether something is over- or underexposed (sometimes people want that, above all underexposure, for artistic and contrast needs!), or you can know exactly how much you can increase a light’s intensity without clipping, and vice versa for underexposure.
It is great!
And of course, as you mentioned, it won’t be applied to the final render because, and this is a strong rule, ALL renders should be done in EXR. No way around it!
Okay, thanks again for your help!
On a majority of DSLRs, this is false. The ISO is a gain applied to sensor data. In higher end cameras, the Exposure Index is applied to the collected data, and also equates to a gain. This is ideally applied to the radiometric-like data, before the camera rendering transform.
I’d stress that no matter what anyone tells you, you are always viewing the “data” of the radiometric-like RGB through the rendering transform of something. Using the display linear 1:1 is itself a rendering transform, and an extremely problematic one, whether folks like it or not.
Camera rendering transforms are not simply “tone maps” and in fact may incorporate a number of more sophisticated transforms to the data. Thinking that “having the data” is useful is an unfortunate thought process that some people get caught up in; the camera rendering transform is at least as important as the data itself. It is entirely about what ends up presented to the audience or author.
Why is this relevant? In many instances the creative cycle is a feedback loop between the image maker and what they are seeing; they will always make the creative decisions in relation to that rendering transform.
Choose it wisely, and be sure that everyone is under the same understanding.
The most important point: It is impossible to escape a rendering transform. There is no such thing as “no transform”, no matter who tries to convince you of such; all data is rendered and warped to whatever choices are made under the viewing context inclusive of the rendering transform.
ISO (sorry, I forgot to mention it) is another parameter, like aperture, shutter speed, or light intensity on set, used to set up the exposure; it is NOT the exposure itself. (I mean on a real camera; it is the same whether it is a DSLR, mirrorless or pro camera.)
By the way, I am curious to try the Photographer add-on to see if it has ISO as well…
About the Filmic and the other things you said, I frankly don’t think I said otherwise or maybe I didn’t understand what you mean. I am not a TD Lighter.
In other words, I didn’t understand what you mean when you said:
It is definitely my fault that I don’t understand your technical terms. I’m not a TD Lighter but a filmmaker who makes 3D graphics. But I’m very eager to learn, so if you can explain yourself in simpler terms, maybe I can understand better what you mean …
Anyway, really many thanks for your explanations!
See you soon!
The point was that in real cameras, Exposure is a very “real” thing that shifts the data values in much the same way a physically-plausible path tracer generates data values. The umbrella of Exposure is a scaling of the data, and that slider does exactly that, applied to the rendering transform. The Film panel bakes it into the data.
The two statements above are where it is worth tracing your thinking on.
An EXR contains ratios of linearized data, or in some cases, absolute units.
The image is formulated however, not in the data, but rather the rendering transform.
If someone cares about the image, it is critical to pay attention to the rendering transform; the data isn’t the whole equation.
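Troy’s point that exposure is “a scaling of the data” can be made concrete. A minimal sketch in plain Python (the helper name is mine; I’m assuming the slider applies a 2^exposure gain to the scene-referred values before the view transform, which matches how photographic stops work):

```python
def apply_exposure(rgb, exposure):
    """Scale scene-referred RGB by 2**exposure, one photographic stop per unit."""
    gain = 2.0 ** exposure
    return [channel * gain for channel in rgb]

pixel = [0.18, 0.18, 0.18]          # scene-referred middle grey
print(apply_exposure(pixel, 1.0))   # +1 EV doubles every channel → [0.36, 0.36, 0.36]
print(apply_exposure(pixel, -1.0))  # -1 EV halves it → [0.09, 0.09, 0.09]
```

Under this reading, the Color Management slider and a light’s Power both end up multiplying the same radiometric-like data; the slider just scales everything at once, on the way to the rendering transform.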
I agree, but again, I didn’t say otherwise. And again, Exposure on a real camera is not a tool or a button or a menu entry you can set, like the Exposure parameter in Blender. In real photography, exposure is a value (EV) you obtain by setting up aperture, ISO, shutter speed, light intensity…, so the point is that on a real camera, when you set one or more of these parameters, you know exactly what you are doing. In Blender, with the Exposure value in Color Management, I don’t really know what it means, what it corresponds to…
Does increasing the Exposure value mean increasing the intensity of all the lights used in the scene? Or maybe it means opening the aperture of the 3D camera? Or what? For these reasons, I always leave this value at 0 and use only the parameters I can control, which have a real correspondence with real lighting.
I am sorry, but I don’t understand what you mean. In other words: what is wrong, what is good, and why? All said in more comprehensible words, and maybe with some examples!
It does. You can set up sunny 16 with ISO 100, shutter 1/100 s, and aperture f/16, and the exposure will be set automatically. This is named Hazy Sun (in my dated version; it might have been updated now to match the new sun values, not sure) and sets exposure to -6.64. The reference 0 matches office lighting (ISO 640, shutter 1/50 s, and aperture f/5.6), and 2 matches home interior lighting (ISO 400, 1/8 s, f/5.6; seems reasonable according to my own D80 with only a UV filter on it). ISO 100 through ISO 51200 are available. You can set it up manually or by adjusting an EV value. Shutter and aperture can be set to drive motion blur and depth of field. Highly recommended add-on; it should be standard, especially the exposure readout that puts you in the right ballpark for what you’re trying to do.
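Those numbers are internally consistent with the standard EV formula. A sketch that reproduces them, assuming the add-on sets Blender’s exposure slider to (reference EV minus scene EV) with a reference near EV 8; that 8 is my inference from the figures quoted above, not a documented constant:

```python
import math

REFERENCE_EV = 8.0  # inferred so the office-lighting preset lands at ~0

def ev100(aperture, shutter, iso):
    """Standard EV normalized to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

def slider_exposure(aperture, shutter, iso):
    """Hypothetical mapping from camera settings to Blender's exposure slider."""
    return REFERENCE_EV - ev100(aperture, shutter, iso)

print(round(slider_exposure(16, 1 / 100, 100), 2))  # sunny 16 → -6.64
print(round(slider_exposure(5.6, 1 / 50, 640)))     # office lighting → 0
print(round(slider_exposure(5.6, 1 / 8, 400)))      # home interior → 2
```

All three presets quoted in the post fall out of one formula, which suggests the slider really is just an EV offset relative to a fixed reference.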
Exposure is only important if you want physical relations between your lights. Unlike a real camera, a render will save (using .exr) all lighting information anyway and all you have to do is develop it later. A camera captures a limited range of the lighting information, therefore exposure is important to set right before shooting. In addition, using a very high ISO in a render won’t introduce noise or other degradation as in a real camera.
The two main workflows:
If you have lighting assets you know to be correct - place them in your scene and expose for them.
If you don’t know the lighting values, set the exposure using Photographer addon, then place lights in the scene adjusting their strength to be ballpark correct.
Ignore the above, and end up with problems causing additional work once you add correct lighting assets or a sun where you didn’t intend to. If you need to light your scene in sun/daylight in addition to interior lights, you can fake the environment light to better match interior light, but you know you’re cheating and you know how to get out of it.
I’m not sure if I’m up to date with my version, but recently Nishita sun&sky texture was updated to use absolute values. That means we finally have a fixed reference point to tune our other lights to, for proper asset production.
As for scene lighting, it depends on your goal. I’m no archviz wiz, but I’m guessing you mostly rely on natural light with artificial accents if any. Or rely on artificial lighting for a nighttime scene. But you’ll likely not think like a cinematographer or gaffer for this kind of work.
Allow me to give it a try; I can allow myself to use layman’s terms. We have our scene (also called “scene-referred data”, “radiometric-like data”, “before the view transform”) on one side, and on the other side our monitor: the display-referred domain.
The problem is that the values in the scene-referred data are much different from the values on our monitor. It’s not possible to have it 1:1; that would probably mean we’d need a sun inside our monitor.
Translating the values from scene-referred data into another language that your monitor understands is not that straightforward. It’s not like we just turn the volume knob down on a stereo. There are all kinds of transformations, and each program might have its own view transforms. What Troy said means, I believe: often people worry about keeping that scene-referred data intact. That’s good, but in the end you also, and always, need to transform it, so that we can see it on the monitor and show it to people. And about this latter part, not everyone seems to care about the view transforms; they only think “Yes, I have the scene-referred data, so it’s all good, end of story, success guaranteed.” They should instead think: OK, we have our scene-referred data. And now? How are we going to represent it? How are we going to transform it to make it suitable for the cinema, TV, monitor, etc.? How do we bring these real-world values into values on our monitor (between 0 and 1)?
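In code terms, the squeeze might be pictured like the sketch below. This is a simple Reinhard-style curve, NOT Filmic’s actual math; it only illustrates how an unbounded scene-referred range gets compressed into the 0..1 display range:

```python
def squeeze(scene_value):
    """Toy tone curve: maps [0, inf) into [0, 1), compressing highlights hardest."""
    return scene_value / (1.0 + scene_value)

# Scene-referred values spanning many stops all land inside the display range:
for v in [0.18, 1.0, 4.0, 16.0, 100.0]:
    print(v, "->", round(squeeze(v), 3))
```

Note how a 6x jump from 16 to 100 barely moves the displayed value; the real Filmic transform makes far more careful choices about where and how to compress, which is exactly why it is more than a volume knob.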
Hi CarlG, and many thanks for your reply.
Something you said is not clear for me, sorry. For example, here you say:
What does it mean? I mean, when you are shooting on set with real lights and a real camera, you have a physical relation between your lights, okay.
Instead, what happens in 3D software is very different. In fact, in a 3D environment there is no physical correspondence between the lighting sources (unless you use a version of the camera and lights with real physical parameters, as in the add-ons I mentioned), and therefore the virtual lighting of a 3D program is only an approximation of reality: the parameters used try to get closer to what happens in the real world. So both the various lights of Blender and the World try to simulate real lighting through parameters that often DO NOT correspond to those of the real, physical world.
This is where the problems arise because the subjectivity of the people comes into play, often at random or in any case without a solid foundation of REAL PHOTOGRAPHY, trying to be as realistic as possible. But as mentioned, this is a madhouse of subjectivity which in my humble opinion does not lead to anything good!
Obviously there is not only realistic lighting, and in fact you can illuminate as you want, arriving at excellent or perhaps unusual, abstract lighting by combining the parameters offered by the software in a more or less random way. But you don’t have solid reference points to move around …
Instead, if you base yourself on the parameters of real photography, not only can you get realistic and convincing results, but you can access absolutely unrealistic lighting but knowing how you did it, moving on a system of solid references.
So, in short, I don’t understand why you shouldn’t have physical relations between your lights in both cases, real photography and the 3D environment…
I don’t understand this difference: in real photography you can shoot with a real camera in RAW. And with the RAW format you can store all the lighting information included in the dynamic range of the sensor.
The question here is what the difference is between OpenEXR (3D software) and RAW (real camera) in terms of exposure. Or, said in other words, what do you mean when you say:
[quote=“CarlG, post:10, topic:1247168”] A camera captures a limited range of the lighting information, therefore exposure is important to set right before shooting.
…because also a RAW file can be worked in post.
So, obviously you have to set the exposure correctly on set, when you are shooting, but why should it not be important in 3D software? I mean, the correct or desired exposure should be important, and done, both in the case where you are on a set with a real camera and in the case where you are in a 3D scene.
About your suggested workflow:
Okay, this is my workflow I am trying to build inside Blender.
In this case, I don’t understand what you mean by “If you don’t know the lighting values …”.
You perhaps mean that when using 3D software like Blender, which uses lighting parameters and tools that have no real physical match (they just try to simulate it as best as possible), you need to be aware that you are not using physically correct lighting values, and this means that your workflow becomes unreferenced, random and subjective, even if in the end you can still arrive at a very convincing and effective result!
This, if that’s what you mean, is exactly what I was trying to explain above: if you can light with the physically correct parameters of photography, you have absolutely more control over your virtual 3D lighting.
And in fact this is exactly what I mean too and that I would like to avoid. This is why I try to base myself on real photography (even when maybe I will go and do an absolutely abstract work!):
Obviously the 3D software must offer you the tools to simulate physically correct lighting (a physically correct camera, physically correct lights…) and, as mentioned, I have found some add-ons, which we have talked about, that allow this approach.
Again, you can also use the existing tools, not physically correct, which try (in vain) to simulate real lighting, but for me it is a waste of time …, it seems to me all too reliant on chance and subjectivity.
Of course, it could be an ideal approach for unrealistic works, of which there are many! For example, with mograph I often do unrealistic lighting; sometimes it is really random, moving the sliders of a light’s parameters or of the Color Management to the right or left…
But for these things there are the LUTs on real cameras, and the monitor display modes in 3D programs.
In other words, on a real set, you create the lighting and make the exposure by checking what will be captured through a control monitor. When you change a LUT on a control monitor, that is, the display mode, the exposure also changes. But I don’t mean the physical exposure, which is what is imprinted on the sensor and will be saved as such in the RAW format. I mean the exposure that you can see with your own eyes by checking the monitor. So, if for example you are using a log LUT, what you see on your camera monitor is generally a very flat image with very little contrast. But if you change the LUT to something like Rec.709 instead, the contrast changes quite a lot and you will have underexposed areas and overexposed areas.
So I was wondering if the LUTs or Cinepictures, in short the display modes of a camera monitor, correspond to the display modes offered on the monitor by the 3D program. It seems to me the answer is yes.
Therefore, when you light a virtual scene in a 3D program, you should see the image as you want the final render to become. Obviously, by exporting to OpenEXR, you can then change a lot of things, but I think it’s essential to have a clear idea of what is happening. And to have a correct vision of the lighting and the mood that you want to obtain in 3D software, I believe it is very important to use an adequate display mode, such as Filmic, which has a wide dynamic range.
In short, I believe it is absolutely correct that you should create your scene and make your lighting by displaying it in the most faithful way to the result you want. Your job as a lighter ends there.
Then, when the scene is reproduced on who knows what device, monitor or smartphone, it is clear that it will no longer be as you saw it on your monitor, because it depends precisely on the playback medium. But you, as a 3D artist, have to start with the result you want to achieve in front of your eyes.
Just like the reproductions in catalogs of works of art: there is not one reproduction of Guernica that corresponds to another, much less to the original that you can see at the museum. However, this does not mean that Picasso painted at random; he was faithful to the pictorial idea he had in his head, and he was the first to admire it before his eyes. Likewise, I believe, a 3D artist should behave.
We are probably saying the same things but in different terms.
For me, in my experience this is not possible. Everything in Blender (if you have Filmic enabled) stays in Filmic transform view (strength of light, compositing etc). What you see there in Blender needs post-processing and is not the end-result.
(Because I mainly use Blender to develop assets and add-ons, I post on social media to showcase my work, so I am not making video productions. For me it means that I would like to see a node in the compositor called “After Filmic transform” or “Push pixels after here”, so that whatever I do after that node I can post-process, making the render ready to post directly on social media as a .png, .jpg, etc. So what I often do is save the .jpg or .png and adjust the levels, brightness or contrast in an external program; or I believe we can load it in the VSE while Blender is still open, and the VSE seems to be separate and uses the default view. In case I am going to make video productions, I would of course do it differently and save everything in a format like .exr and load that into Fusion, DaVinci Resolve, Nuke, Natron, etc. It’s not what I generally do so far.)
But now with the updated sun&sky texture, we do have an absolute reference point. Meaning I can have a light asset with a lightsource that has the correct relation to sun&sky strength. Previously? A bloody nightmare to be honest - nobody could give the answers to lamp light strengths, i.e. a zenith 5500k sun vs a 2700k 60w incandescent bulb with 2.7% efficacy. Now maybe, finally, that can be laid to rest.
You should indeed, at least as a starting point. But previously we couldn’t. Add a sun and a sky texture: what strengths would you use, and why? Now add a household 700 lumen bulb; what should its strength be to be relatively correct against the sun and sky? Keep in mind Blender lights operate in watts, something that is not readily available to bulb customers, so how do you get there? I do know now, thanks to an add-on, but still, what a nightmare battle. I haven’t tested this against the updated sun & sky yet, though.
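The lumen-to-watt battle can at least be written down. A hedged sketch of the physics: luminous flux equals radiant watts times the luminous efficacy of the radiation, at most 683 lm/W (the ideal, at 555 nm), scaled by the source’s efficiency. The 2.7% incandescent figure is the one quoted above, and Blender’s exact watt convention may differ, so treat the result as ballpark only:

```python
MAX_EFFICACY = 683.0  # lm per radiant watt at 555 nm (CIE standard value)

def lumens_to_watts(lumens, efficiency):
    """Approximate radiant watts for a bulb rated in lumens.

    efficiency: fraction of the ideal 683 lm/W the source achieves
    (e.g. ~0.027 for incandescent, per the figure in this thread).
    """
    return lumens / (MAX_EFFICACY * efficiency)

# A 700 lm household bulb at 2.7% efficiency comes out around 38 W:
print(round(lumens_to_watts(700, 0.027), 1))
```

An LED bulb with much higher efficiency would come out far lower for the same 700 lm, which is exactly why the lumen rating on the box cannot be typed into Blender’s watt field directly.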
It’s important if you save to jpg or regular png, as they crop away information outside the range (similar to a camera’s tonemapped jpg). Raw keeps several bits’ worth of extra information, leaving you some headroom in post without causing stepping. But not all the information, which is why we need to shoot a bunch of exposures and assemble them into HDRs (like exr). A lot of HDRs from multiple exposures still clip high brightness values like the sun. My D80 can’t shoot the sun without clipping, even at the fastest shutter, lowest ISO, and stopped down as much as possible.
As for on-set lighting, I mean they use a lot of tricks you wouldn’t normally consider replicating when lighting an interior for CG: fake backdrops, silver reflectors, cheating the inverse square law and so on. Instead, we have other tricks, like camera invisibility, or unrealistic linear falloff if we need it. Look up some cinematography lighting workshops on YouTube.
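The falloff cheat mentioned above is easy to see in numbers. These are toy functions, not Blender’s actual node setup, comparing physically plausible inverse-square falloff with an unrealistic linear falloff:

```python
def inverse_square(power, distance):
    """Physically plausible: doubling the distance quarters the intensity."""
    return power / distance ** 2

def linear_falloff(power, distance):
    """CG cheat: doubling the distance only halves the intensity."""
    return power / distance

print(inverse_square(100, 1), inverse_square(100, 2))  # → 100.0 25.0
print(linear_falloff(100, 1), linear_falloff(100, 2))  # → 100.0 50.0
```

The linear cheat keeps the far side of a room from going dark without blowing out the near side, which is why gaffers fight the inverse square law with flags and bounces while CG artists can simply swap the curve.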
Build an interior scene. Using the Photographer add-on, you know you should set your exposure to around +2, iirc. Now light that scene with an HDRI: you don’t know what absolute values it contains, so you adjust its strength until the image looks about right at +2. It’s not accurate, but you’ll be in the correct ballpark when you bring in your lighting assets, if they are set up with absolute values (using the sun & sky as a reference).
I don’t disagree. The developers have disagreed with me when I raised the issue. LuxCore has lights where you can choose the unit; you can always test there and adjust in Cycles to match, but yeah, what a workflow… Even the Filament render engine uses physical camera parameters to control exposure.
I said render, not the image after post processing…
So, you make your lighting and render out your scene; THEN you can leave it as it is, or post-process it in compositing. It’s your choice…
Your workflow seems to be okay for your social media purposes, no problem, but the standard workflow in a studio or freelance pipeline is to export your CGI to OpenEXR, ready for compositing.
And it is great! But it is only about sunlight.
Anyway, for example with the Extra Lights add-on, you also have many sorts of lights with correct physical parameters! Maybe in the near future, real physical lights will be added to Blender…
Of course, but we are saying the same things…
Okay for on-set photography, but I was talking about exposure in 3D software such as Blender, because even in this case you still have to make a correct exposure, even if you save to linear OpenEXR, if you want a simpler and more effective post-production.
The point for me was the parameter called Exposure (in Color Management), which makes no sense to me; or better, I can’t understand what it means, what values it corresponds to in physical terms. In other words, if I set Power on a light, I know it means increasing the intensity of that light source. If I set Aperture or ISO in the Photographer add-on we mentioned, it means opening or closing the aperture, or changing the sensor sensitivity. But if you pump the Exposure value in Color Management up or down, what does it mean? I think this is a virtual parameter which tries to emulate a sort of “global exposure” gain, up or down, without a correct correspondence to the real world. For this reason the Exposure default value is 0! And I always leave it so.
I have yet to start with Photographer, so I cannot answer, but anyway, keep in mind that there can be different approaches. My workflow has a cinematography approach: it means that there is not a sort of “good” reference exposure, but you have to decide WHERE you would like to take your exposure, where your focus area is, and set up your exposure for that place. This creates contrast in photography, and it is the magic of illumination. It means you can have a different contrast ratio in your image, different exposure zones (which create the contrast), and for this reason it makes no sense to me to have a +2 (iirc?). In your frame there will be zones, e.g., 1 or 2 or 3 stops over- or underexposed compared to others. There is no unique exposure value, unless you have a single flat color in your frame.
Yes, of course, and consider that everything is complicated by the fact that the virtual simulation of real lights is done by different rendering engines, each of which tries to do this job with its own parameters. This makes it all very complicated if you rely on the virtual parameters, since one rendering engine calls the intensity of light Power, another Strength, another Gain, mixing improper terms borrowed from other sectors and circumstances. This is why I insist on, or at least try, basing my workflow on real data and not on what the software gives. So the goal of my discussion, the most important point, is to understand how the real, physically accurate parameters of photography and lighting correspond to the parameters of the software in use and/or its plugins and add-ons.
Yes, of course. Mine was not a criticism, but an observation. Everyone is free to use the method they prefer. I’m just saying that I prefer to move from photography and real lighting because I can thus have control and awareness of what I do. But obviously you can also totally or almost totally ignore photography and still be able to illuminate a scene in an excellent way.