Insta360 One X HDR panos for Blender environments etc

This is the problem: you can’t make sense of lumens without identifying the colour of the light.
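To illustrate with a rough sketch (the efficacy figures below are ballpark assumptions for illustration, not measurements), the same lumen rating implies very different radiant power depending on the spectrum:

```python
# Rough sketch: the radiant watts behind a lumen rating depend on the
# spectrum's luminous efficacy of radiation (lm per radiant W).
# These efficacy figures are ballpark assumptions for illustration only.
EFFICACY_LM_PER_W = {
    "2700K incandescent": 15.0,   # most radiated energy is infrared
    "white LED": 300.0,           # nearly all radiation is visible
}

lumens = 800.0  # a typical bulb rating
for source, lm_per_w in EFFICACY_LM_PER_W.items():
    print(f"{source}: ~{lumens / lm_per_w:.1f} radiant W for {lumens:.0f} lm")
```

Same lumens, roughly a 20x difference in radiant watts. Hence the colour question.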

Again, I don’t know what an “artist” is. I know that the folks who perform their crafts are typically very well educated and experienced people.

Calculating the emission strength can indeed be done via the value in question.

Huh? What’d I say?

What value? And how? I don’t believe that you don’t know exactly what we’re asking.

Then I am not experienced enough, but am ready to fill that gap. It’s not that radiant flux is an impossible-to-understand concept; it’s about ease of use and following what the whole industry is doing.

Of course the intensities of lights will vary depending on the colors; you can even use a dark red instead of a bright red… Shouldn’t we use wavelength instead?

Thankfully, light bulb packaging also states the color temperature of the light.

As far as color goes, we’re talking about 2500-3500-ish Kelvin for typical household lighting. The difference can’t be that extreme.

@chafouin:
Oh, sorry. I get it now :slight_smile:

It’s a great question. Under an RGB model, however, the colour is specified by chromaticity. That is, many spectral combinations yield the same chromaticity to a standard observer, similar to how narrow-band CFL lights manage to simulate a particular chromaticity.

Again though, you can’t calculate the energy without going back through the chromaticity of the reference working space lights. That makes the calculation vastly more convoluted, and sadly doesn’t solve the issue at hand; it makes it more obfuscated and complex.

This is chromaticity though, not luminance. If one doesn’t boost the value of the RGB color to its maximum brightness, then the energy value doesn’t make much sense, I guess?

Of the three energy axes, X, Y, and Z, the Y axis happens to be luminance! It’s part of the clever design of the XYZ model the CIE came up with.

The larger issue is how that is communicated in the software, which can get quite tricky to keep track of.
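A minimal sketch of that, using the standard IEC 61966-2-1 linear-sRGB-to-XYZ matrix, where the middle row is exactly the relative luminance:

```python
import numpy as np

# Linear sRGB (D65) -> CIE XYZ, per IEC 61966-2-1.
# The middle row produces Y, i.e. relative luminance.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

rgb = np.array([1.0, 0.0, 0.0])  # pure red at full intensity
X, Y, Z = SRGB_TO_XYZ @ rgb
print(Y)  # ~0.2126: pure red carries only ~21% of white's luminance
```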


Then maybe you da man to ask :slight_smile:
Based on this, I use 441 as sun strength (4500 K). For sRGB, my sun disk will clip in full white reflection at EV -11.5. -11.5 + 20 = +8.5 EV - that’s a lot of shadow detail. As a range for the camera, sure. But a probe having 24 EV?

Sunny 16 gets me to -6.64 and everything lit by direct sunlight is properly exposed (although above I didn’t account for atmosphere). Are these numbers in the correct ballpark according to you?
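If it helps anyone check numbers like these, EV at ISO 100 is just log2(N²/t), adjusted for ISO; a quick sketch:

```python
import math

def ev100(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV normalized to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100.0)

# Sunny 16: f/16 at a shutter of 1/ISO seconds, ISO 100.
print(ev100(16, 1 / 100))  # ~14.64
```

(The -6.64 above presumably sits on a different zero reference; offset conventions vary between tools.)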

I also quite recently read an article or saw a video by some of the big players in development regarding sun hdri shooting. It was very informative, but I’ll be damned if I’m able to relocate it.

Okay, first off, a lot of information has cropped up since I last checked this thread and it will take me some time to process it all, so sorry if I say something stupid or redundant. Anyway…

As far as I know, 24 EV is the maximum EV range found on Earth, meaning it should encompass the sun in clear conditions, so the disc should not clip in the image itself. Having said that, if you use an X-stop filter to get the sun in one shot beyond the initial bracketing, it is just cosmetic and wouldn’t work right. I don’t know - I’ve only used a filter once so far because mine has a terrible colour cast.
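As a rough sanity check on bracket counts (all the values below are assumptions; adjust the per-shot range and step to your camera):

```python
import math

scene_range_ev = 24.0   # claimed maximum range found on Earth
camera_range_ev = 12.0  # assumed usable range of a single exposure
step_ev = 2.0           # assumed bracketing step

extra = max(0.0, scene_range_ev - camera_range_ev)
shots = 1 + math.ceil(extra / step_ev)
print(shots)  # 7 brackets at 2 EV spacing cover the full 24 EV
```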

I remember seeing the -11.5 EV somewhere. Might have been one of the recurring numbers for my sets of shots in Hugin. Without filters the sun was always clipped at 1/8000, f/22, ISO 100… the sun is always much, much stronger than anything around it. While I can’t say anything for sRGB, if you correct the gamma curve to get a linear response you see just how much darker everything gets, which could account for the numerical difference. I checked a few of my Hugin files for EV and I had the “proper” shots around 10, 11… so in an imprecise/general way, yeah, I’d say it’s correct.

I’d be interested in that very informative video too. I did stumble on the Unity HDR guidelines linked from chafouin’s video, and what I read was very good. You can never go wrong with more info to go through. Thanks for that blendergrid thing too; didn’t know it existed.

It really doesn’t take that long to make a good HDRI with digital camera equipment once you have the pipeline set up. The only problem I have is trees and leaves moving in the wind. A device that could quickly capture a huge dynamic range over a 360° sphere would be nice.

For completely stationary scenes, I wonder if one could hack the electronics in this to capture multiple images at various stops and stitch them later manually.

Just tried it for giggles. Exposing for outdoor shooting on this overcast day, then taking it inside with the overcast daylight coming into a closed-curtain living room, bouncing around to enter a slit in a door to a dark room. That’s a 12 EV gap to expose for the room (ISO 800, f/4, 1/2000s vs 2s), with no extra glass on an f/4 Nikon pro lens. That’s one stop below the OC/8 rule, which overexposed some. I’d say that would be more than enough contrast to keep in a probe. My camera maxes out at 1/4000 and f/22, and I don’t have any heavy-duty ND filters, so I have no experience getting the full sun; frankly, I prefer adding a sun as it converges faster.
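(The 12 EV figure is easy to verify: with ISO and aperture fixed, the gap is just the log2 ratio of the shutter times.)

```python
import math

# Same ISO and aperture, so the EV gap is the shutter-time ratio.
fast, slow = 1 / 2000, 2.0
print(math.log2(slow / fast))  # ~11.97, i.e. ~12 EV
```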

The built-in ones are very low res, but appear good contrast-wise. Unfortunately they have some very nasty pixels.

I’ll see if I can hunt around for that video.

To capture the sun, an ND 3.0 filter would be the required minimum, in addition to a high shutter speed, small aperture, etc. The units are log10, which means the 3.0 filter is roughly a 10 EV optical density. As you discovered, they are extremely tricky to manufacture without non-uniform spectral absorption, which would require characterizing the filter itself. Rather fussy.
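For reference, the density-to-stops conversion is just:

```python
import math

def nd_stops(optical_density: float) -> float:
    """Optical density d attenuates by 10**d; in stops that's d * log2(10)."""
    return optical_density * math.log2(10)

print(nd_stops(3.0))  # ~9.97, hence "ND 3.0 is about 10 EV"
```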

Bear in mind that the sRGB specification characterizes a display. That is, the specification describes how an ideal sRGB display behaves. When an image is encoded “to sRGB”, the nonlinearly encoded code values are there to prepare it for said display.

This means you cannot “invert” the sRGB transfer function and get scene ratios. It is roughly the equivalent of taking an aesthetic output from Filmic + contrast, which is sRGB-ready, and using the inverse sRGB transfer function to derive scene values. It doesn’t work.

The raw sensor values, on the other hand, are more or less linear radiometric quantities.
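For clarity, here is the sRGB inverse transfer function itself (a sketch per IEC 61966-2-1); the point is that applying it to a display-referred image only recovers display-linear values, never scene-linear ratios:

```python
def srgb_decode(v: float) -> float:
    """Inverse sRGB transfer function (IEC 61966-2-1), for 0..1 code values.

    Note: this yields display-linear values. If the image went through a
    view transform like Filmic, this does NOT recover scene-linear ratios.
    """
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(srgb_decode(0.5))  # ~0.214 display-linear
```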

Temporal artifacting is of course a serious issue. There is some discussion of the Wiener filter here, which might be useful.
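If anyone wants to experiment, SciPy ships an implementation; a minimal sketch, assuming the merged HDR is a float NumPy array:

```python
import numpy as np
from scipy.signal import wiener

# Hypothetical merged HDR as a float array, shape (H, W, 3).
hdr = np.random.rand(64, 64, 3).astype(np.float64)

# Wiener-filter each channel with an assumed 5x5 local window.
filtered = np.stack([wiener(hdr[..., c], mysize=5) for c in range(3)], axis=-1)
```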

But if you go ND, you go much higher ND while you’re at it, so that you can shoot at wider apertures for sharpness. f/22 is something we’d do out of necessity when not having filters. We prefer around f/8-f/11, or whatever f-stop is the sharpest - I’ve seen claims that wide open is best for some lenses.

I’m not doing this anymore, but I would likely find using filters too cumbersome and live with either painting in the sun or matching a real sun lamp, shooting manual/tethered depending on the equipment at hand. I’m using real sun lamps now, as they converge faster and don’t require the high HDRI sampling that slows things down considerably. Finally we got a proper sun size property in Blender.
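For reference, setting that up via Blender’s Python API might look like the following sketch (2.8+; the 0.526° angular diameter and the 441 W/m² strength mentioned earlier in the thread are assumed values):

```python
import math
import bpy

# Create a sun lamp with an explicit angular size (Blender 2.8+ API).
sun_data = bpy.data.lights.new(name="Sun", type='SUN')
sun_data.energy = 441.0               # W/m^2, the strength discussed above
sun_data.angle = math.radians(0.526)  # assumed solar angular diameter

sun_obj = bpy.data.objects.new("Sun", sun_data)
bpy.context.collection.objects.link(sun_obj)
```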

Found the thing I mentioned, it was Unity.
https://docs.unity3d.com/uploads/ExpertGuides/High_Dynamic_Range_Imaging_HDRI.pdf

So, after all that, it seems that I can’t use my 360 OneX camera to shoot HDRIs.

If you have manual control, it shouldn’t be a problem to grab more shots. At least you don’t have to stitch.

From my experiments so far, it’s a .DNG with about 4 stops maximum. Maybe there’s a way to cheat it better. I’m still trying, because the one-click convenience is awesome. Also, the company may do a software upgrade to give us more range?
