Hi there! Long post incoming. I know that you didn’t ask for so much detail, but I hope to share my knowledge as best I can. And if someone sees errors in there, please join the conversation!
First, Photographer and Lightpacks will always be two different products. Photographer is a Python add-on to help you light and render your scenes using a physically based workflow, while Lightpacks are libraries of lamp assets that can work on their own. There is no intention to merge them.
About the 360 HDRIs, I will always recommend reading Unity’s Artist-Friendly HDRI creation tutorial. It walks through how to make sure pixel values are extracted from RAW files without applying any curve correction to them. For instance, stay away from Adobe tools if you want to create proper HDRIs.
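To make that concrete, here is a minimal sketch of a linear RAW extraction using the rawpy library (this is just an illustration of the idea, not the tutorial’s exact pipeline, and the file name is made up):

```python
import rawpy
import imageio

# Demosaic a RAW file without any tone curve or auto-brightening,
# so the output stays scene-linear.
with rawpy.imread("bracket_0001.CR2") as raw:  # hypothetical file name
    rgb = raw.postprocess(
        gamma=(1, 1),           # linear transfer curve, no gamma applied
        no_auto_bright=True,    # keep the original exposure level
        output_bps=16,          # 16-bit output to preserve precision
        use_camera_wb=True,     # fixed in-camera WB (see the 6500K note below)
        output_color=rawpy.ColorSpace.sRGB,  # sRGB primaries, linear values
    )

imageio.imwrite("bracket_0001_linear.tiff", rgb)
```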
I still haven’t found software that creates HDRIs with accurate absolute radiance out of the box. HDRMerge is probably the one that tries to be the most accurate, but it has its own problems. What happens is that, when merging brackets into 32-bit images, software often applies an arbitrary exposure level (based on the average exposure value of your bracket, or something similar): it is already doing a pre-exposure adjustment. I guess the assumption is that if artists saw a fully white or extremely dark image after merging to HDR, they would think something is broken. Same thing if you used an HDRI in Cycles and saw nothing with the default strength of 1: you would think something is wrong with the engine.
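Here is a rough sketch of the classic merge to show where that arbitrary step sneaks in. Each bracket is divided by its shutter time so all frames land on the same relative radiance scale, then blended with weights; the `normalization` line at the end is exactly the pre-exposure I’m talking about. The weighting function and file handling are simplified, not how any specific tool does it:

```python
import numpy as np

def merge_brackets(images, shutter_times):
    """Merge linear bracketed exposures into a relative-radiance HDR image.

    images: list of float arrays in [0, 1], already linear (no tone curve).
    shutter_times: exposure time in seconds for each image.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, shutter_times):
        # Simple hat weighting: trust mid-tones, distrust clipped/noisy pixels.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        acc += w * (img / t)          # divide by shutter time -> common radiance scale
        weight_sum += w
    hdr = acc / np.maximum(weight_sum, 1e-6)

    # This is where most tools silently apply a pre-exposure so the result
    # "looks reasonable", e.g. anchoring to the average bracket exposure.
    # It keeps ratios intact but throws away any absolute calibration.
    normalization = np.mean(shutter_times)
    return hdr * normalization

# hypothetical usage, with brackets already demosaiced to linear float arrays:
# hdr = merge_brackets([img_fast, img_mid, img_slow], [1/1000, 1/60, 1/4])
```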
Then, when you do lookdev, you usually capture a color chart in your HDRI so you can adjust the grey square to be exposed to a certain value, and do white-balance corrections. That’s another pre-exposure step. You do that because you want to be able to swap HDRIs to check your objects in different lighting conditions without having to change anything in the camera or compositing. For ease of use, you are baking the exposure and color correction into the HDRI instead of doing it at the end of the pipe.
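For clarity, this is roughly what that lookdev-style baking amounts to (grey values and the 0.18 target are just illustrative numbers, not a standard anyone imposes):

```python
import numpy as np

def bake_lookdev_exposure(hdri, grey_patch_rgb, target_grey=0.18):
    """Lookdev-style pre-exposure: scale the whole HDRI so the chart's grey
    patch lands on a chosen value, and neutralize its tint.
    This is exactly the baking step I suggest avoiding for absolute HDRIs."""
    grey = np.asarray(grey_patch_rgb, dtype=np.float64)
    wb_gains = grey.mean() / grey          # white balance: make the grey patch neutral
    exposure_gain = target_grey / grey.mean()  # push the grey patch to the target value
    return hdri * wb_gains * exposure_gain

# hypothetical usage, with the grey patch sampled from the merged HDRI:
# hdri_baked = bake_lookdev_exposure(hdri, grey_patch_rgb=[0.21, 0.26, 0.24])
```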
Back to the physically based lighting workflow. What I suggest with Photographer and Lightpacks is to not bake the exposure and white balance into the HDRI, and instead do it properly “in post”, like real cameras do. To be accurate, real cameras don’t do exposure in post; they use a mix of physics and electronics for exposure adjustments. But in 3D rendering, exposure is a post-rendering adjustment.
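In render terms, “exposure in post” just means multiplying the linear render by a scale factor. A tiny sketch, with made-up names:

```python
# Exposure as a post-render adjustment: a plain multiply on linear pixels.
# exposure_ev is the number of stops you want to push or pull.
def apply_exposure(linear_pixels, exposure_ev):
    return linear_pixels * (2.0 ** exposure_ev)
```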
It’s easy to get rid of white balance corrections: set your camera to 6500K and leave it there while shooting your multi-exposure bracket, and do not make any color modifications after merging to 32-bit.
The absolute exposure is a bit more problematic.
First issue, to make things a bit more difficult: render engines didn’t all pick the same way to output pixel brightness. Some output luminance (LuxCore), some went for radiance (Cycles). The difference is a factor of 683. So when I create absolute HDRIs, let’s be clear that they are calibrated for Cycles, so that you don’t have to do any strength adjustment there. You might need adjustments for other render engines (Photographer fixes it for LuxCore, for instance).
In the end, it’s all about keeping the correct ratios between light values; which absolute scale you use doesn’t matter.
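As an illustration of that single constant factor (683 lm/W is the luminous efficacy of the 555 nm reference used to define the lumen), here is a sketch of moving an HDRI strength between the two conventions. The exact direction depends on how each engine defines its units, so treat this as the gist of the idea, not as Photographer’s literal code:

```python
# Radiometric vs. photometric scale: the same HDRI needs one constant
# factor when moving between a radiance-based and a luminance-based engine.
LUMINOUS_EFFICACY_555NM = 683.0  # lm/W

def radiance_to_luminance_scale(strength):
    """Strength for an engine that reads pixel values as luminance (cd/m^2),
    given a strength calibrated for a radiance-based engine like Cycles."""
    return strength * LUMINOUS_EFFICACY_555NM

def luminance_to_radiance_scale(strength):
    """The inverse direction."""
    return strength / LUMINOUS_EFFICACY_555NM
```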
In practice, to make your HDRI absolute, you have to bake an exposure adjustment into the HDRI image (in the 2D editing software of your choice) so that when you plug the HDRI into Cycles and set the camera exposure to match one of the RAW files you used for the merge, the brightness of the scene also matches. You can eyeball it, or you can use a color chart to be more accurate, but there is no magic bullet there.
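A sketch of how I’d formalize that calibration, assuming you sample the same chart grey patch in the reference RAW and in a test render made with matching camera settings (the numbers below are placeholders, and Photographer itself may do this differently):

```python
import math

def ev100(aperture, shutter_time, iso):
    """Standard exposure value at ISO 100 for the reference RAW frame."""
    return math.log2((aperture ** 2) / shutter_time * 100.0 / iso)

def calibration_scale(grey_in_raw, grey_in_render):
    """Factor to bake into the HDRI pixels so that, with the virtual camera
    set to the same aperture/shutter/ISO as the reference RAW, the rendered
    chart grey matches the real one. Both greys are linear values of the
    same patch, so a simple ratio is enough."""
    return grey_in_raw / grey_in_render

# hypothetical usage: reference frame shot at f/8, 1/125s, ISO 100
# print(ev100(8.0, 1 / 125, 100))            # sanity-check the exposure value
# scale = calibration_scale(0.184, 0.121)    # bake `scale` into the HDRI pixels
```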
Avoid using Filmic tonemapping for this comparison, as its highlight compression could affect brightness perception; I compare RAW files that have a linear response curve against an sRGB/Standard render in Blender.
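In Blender, setting up that comparison view is a few lines of Python (‘Standard’ is the stock sRGB view transform):

```python
import bpy

# Compare against the RAW with a plain sRGB transform, not Filmic:
# Filmic's highlight rolloff would skew the brightness comparison.
scene = bpy.context.scene
scene.view_settings.view_transform = 'Standard'
scene.view_settings.look = 'None'
scene.view_settings.exposure = 0.0
scene.view_settings.gamma = 1.0
```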
Now… let’s talk about the ugly things. Throughout this whole process, a lot of inaccuracies creep in at every stage of the image processing.
- Cameras from different brands (even different models of the same brand) have different color science. They don’t capture colors the same way, even in the RAW file. You want to reduce color adjustments during demosaicing and make sure values are read as linear without applying any curve, but that doesn’t mean the pixels you get in the output have the exact scientific tint they should have.
- Lens attenuation (which I mention in my last video) is probably slightly different between brands, and it is also different between lenses. That means the same exposure settings on two cameras could give slightly different brightness results. Now, if we also decide to remove lens attenuation like Unreal does, then this introduces an exposure bias between real cameras and CGI (see the sketch after this list). Is it the right way to do it? Honestly, I’m not sure, but I trust John Hable given his impressive experience, and I consider Unreal the new 3D graphics standard that a lot of others will probably follow. I am open to discussing which value should be the default one.
- 32-bit merging software does things differently from one tool to the next. Trust me, I tried them all; they all give different results, whatever you feed them (linear-response-curve TIFFs, out-of-camera RAWs…). Some seem more accurate than others, but they might fail in specific cases, with highly saturated colors for instance. I definitely recommend Affinity over Photoshop, but is it 100% accurate? I can’t say.
- Color gamuts: there will be a revolution soon. Right now I still create HDRIs in sRGB (linear curve, but sRGB gamut), which means I am probably throwing away a bunch of the color information my camera captured. sRGB is still the king of the Internet at the moment, but it is an old, limited standard, and we should all move to ACES soon. Wide-gamut monitors will become more mainstream with HDR displays, and we should support DCI-P3 and Rec.2020.
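To put a rough number on the lens attenuation bias mentioned above: in the ISO saturation-based exposure equation, lens transmittance and vignetting are lumped into an attenuation term q, with 0.65 being the commonly quoted value. Dropping that term is roughly equivalent to using q = 0.78. Take the constants as illustrative, not as Unreal’s exact code:

```python
import math

def max_luminance(aperture, shutter_time, iso, q=0.65):
    """Saturation-based sensor luminance: L_max = 78 / (S * q) * N^2 / t.
    q lumps together lens transmittance, vignetting and incidence angle;
    0.65 is the value commonly quoted for the ISO standard."""
    return 78.0 / (iso * q) * (aperture ** 2) / shutter_time

# Same settings, with and without the lens attenuation term:
settings = (8.0, 1 / 125, 100)               # f/8, 1/125s, ISO 100 (arbitrary)
with_lens = max_luminance(*settings, q=0.65)
no_lens = max_luminance(*settings, q=0.78)   # ~= dropping attenuation (78 / 0.78 = 100)
print(with_lens / no_lens)                   # factor of 1.2 between the two conventions
print(math.log2(with_lens / no_lens))        # ~0.26 EV of exposure bias
```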
So can all of this be used for scientific research? Nope!
Can it help you render photorealistic images faster? For sure.
Should you only rely on physical lighting to make cool art? Definitely not. Photographers and cinematographers fake a lot of things, and so should you.