Lens simulation


Hi!

After getting hyped by this great tutorial https://www.youtube.com/watch?v=jT9LWq279OI and finding out about the new “Ray Portal BSDF” shader, I just had to create a lens simulator :smiley:

Coming from a Houdini workflow, I decided to simulate the lens virtually in a custom OSL shader. This gives me full control over everything without having to leave my cozy coding environment.

Lenses are implemented from real-world lens patents: to implement a new lens I simply enter the lens data from the corresponding patent. I need to do a few manual adjustments per lens, since the aperture position and size, plus the lens cutoff height, are not given in the patent.
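
For context, the per-surface data copied out of a patent is just a short table of numbers; roughly like this as an OSL sketch (the values below are placeholders, not a real patent or my actual shader):

```
shader lens_prescription(
    // One entry per surface: radius of curvature, distance to the next surface,
    // and the glass index behind the surface (1.0 = air gap). Values are made up.
    output float radius[6]     = { 38.0, 140.0, -45.0, 45.0, -140.0, -38.0 },
    output float thickness[6]  = {  6.0,   4.0,   8.0,   6.0,    4.0,   0.0 },
    output float ior[6]        = { 1.62,  1.00,  1.00,  1.58,   1.00,  1.00 },
    // The hand-tuned values mentioned above; the patent does not list them.
    output float aperture_pos  = 31.0,   // distance from the first surface
    output float aperture_size = 9.0,
    output float cutoff_height = 21.0)
{
    // Data only; in practice this would live inside the tracing shader itself.
}
```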

So far I have calculated chromatic aberration in a non-realistic way, simply shifting the RGB rays based on an imaginary number I think looks good. A random color is simulated per sample (R, G or B); this introduces more noise in the image since more samples will be required to average the result to white light. It would be cool to simulate the wavelengths in a realistic way, but that is a little bit above my math knowledge at the moment.
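
Roughly what that per-sample split boils down to (a minimal sketch with made-up names, not the actual shader code): one random value per camera sample picks a channel, shifts the IOR by the imaginary amount, and weights the chosen channel by 3 so the samples average back to white.

```
void chroma_sample(float rand,           // uniform 0..1, one per camera sample
                   float ior_base,       // nominal glass IOR
                   float shift,          // the made-up dispersion strength
                   output float ior,     // IOR to trace this sample with
                   output color weight)  // per-channel weight for this sample
{
    if (rand < 1.0 / 3.0) {
        ior    = ior_base - shift;       // red bends the least
        weight = color(3.0, 0.0, 0.0);
    } else if (rand < 2.0 / 3.0) {
        ior    = ior_base;               // green stays put
        weight = color(0.0, 3.0, 0.0);
    } else {
        ior    = ior_base + shift;       // blue bends the most
        weight = color(0.0, 0.0, 3.0);
    }
}
```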

Since the optimal sensor position for a given focus distance varies heavily between lenses, focal lengths and other factors, I decided to take a more mathematical approach instead of guessing or building a lookup table. I first trace a single ray from the desired focus distance to the camera; given the ray's hit position on the Y axis after tracing through the lens, I get the optimal sensor position for focusing at the desired distance. I then proceed with the normal lens simulation based on this new sensor position. This slows down the render by roughly 8.6%, but it's totally worth it. Manually setting the sensor position is also possible.
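
The last step of that calculation is just a line/axis intersection; a minimal sketch, assuming the optical axis runs along Z and using my own names (the per-surface tracing itself is left out):

```
// Given the single focus ray after it has been refracted through every surface,
// return the Z where it crosses the optical axis; that is where the sensor goes.
// p_exit / d_exit are the ray origin and normalized direction behind the last
// surface (assumed names, not the real shader's variables).
float optimal_sensor_z(point p_exit, vector d_exit)
{
    // Solve p_exit.y + t * d_exit.y == 0 for t (the ray started slightly
    // off-axis at the focus distance, so it converges back towards the axis).
    float t = -p_exit[1] / d_exit[1];
    return p_exit[2] + t * d_exit[2];
}
```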

I also added a lens schematic mode where I can debug and view the lens shape; this includes the ray used for calculating the optimal sensor position.

The rendering performance does not seem to slow down significantly when rendering through the lens. I think the real performance killer is rendering DOF in general, not necessarily the lens itself; I need to do some benchmark tests on it. Also, instead of changing the size of the simulated lens aperture we can simply change Blender's F-Stop. This will more or less have the same effect but greatly improve the render speed, since we're not trying to sample light through a small hole when we want to reduce the bokeh size.

A great side effect of using the “Ray Portal BSDF” shader is that most of the AOV passes ignore it; the light ray is simply redirected.

Here are some results:

Early focus test:

Canon 85mm f1.5


patent: DE1387593U (old German lens)


Kodak Petzval 1950


Minolta Fisheye 1978


Canon 53mm f/1.4

Outside view of the lens. I added a toggle to flip the image for convenience, since the original lens image is upside down.

45 Likes

Hi! This is very cool! I was hoping someone would do something like this with this shader. I have been experimenting with similar things for a while. I have a setup that creates the lens as actual geometry from patent info with geometry nodes, but this is quite a bit slower to render than normal DOF: blendswap link

I have also been playing with implementing the whole lens in a single shader on a plane, but in this case I have not used actual patent data, but just tried to recreate typical lens effects with simple artistic controls: blendswap link

You seem to combine both ideas, very nice! I would love to see your setup/code if you are willing to share.

Regarding chromatic aberration: you don't need a hard split between RGB channels; if you take a white noise sample of a virtual wavelength, you can use it to adjust the IOR at the same time as adjusting the resulting color. I did this in both setups, but in the real geometry lens it is disabled because it is so much slower. Also, if we want to be physically accurate, I don't know if the Abbe numbers that are usually listed in the patents are enough to get a useful chromatic simulation.
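
A rough sketch of what I mean (made-up names and ramp, the actual file does it a bit differently): the same uniform value drives both the sample colour and the IOR offset.

```
void wavelength_sample(float rand,            // uniform 0..1 per camera ray
                       float ior_base,
                       float dispersion,      // artistic strength
                       output float ior,
                       output color weight)
{
    // Map 0..1 onto a crude blue -> green -> red ramp. The weights are picked
    // so the average over a uniform sample works out to white (1, 1, 1).
    color blue  = color(0.0, 0.0, 4.0);
    color green = color(0.0, 2.0, 0.0);
    color red   = color(4.0, 0.0, 0.0);
    weight = (rand < 0.5) ? mix(blue, green, rand * 2.0)
                          : mix(green, red, (rand - 0.5) * 2.0);

    // The blue end of the ramp gets the higher IOR (normal dispersion), so the
    // same value that colours the sample also bends it.
    ior = ior_base + dispersion * (0.5 - rand);
}
```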

(and just a note: the AOV passes ignoring this shader is not really a side effect, it was very much intentional; it's basically the whole reason the shader was written in the first place :slight_smile: )

3 Likes

The results look great!
I could definitely see this as an addon for Blender. :wink:

1 Like

Thanks! Haven’t yet decided what to do with it… :thinking:

1 Like

Thanks!

I guess rendering the lens as actual geometry is always going to be heavy; I wonder if it makes any visual difference.

This is very much a WIP so far; the chromatic aberration I have now is just a quick implementation. I'll test out your version and check how it performs. My guess is that it's going to be much slower since it takes longer to converge to white light; maybe only a few hard colors could be enough. Did you use a lookup table for the colors?

The only problem I have with the AOVs is that, for example, the depth and position passes seem to be based on the first sample or something, so if those pixels hit the lens wall they will be black, making those passes useless.

Not sure what I will do with the code, but you can contact me on discord if you want to test it out :slight_smile: heinzelnisse#5900

1 Like

Implemented a lookup table to get a continuous lerp of the chromatic aberration; to my surprise it did not affect the render time compared to my old method!

It consists of 16 different colours; a random 0.0-1.0 value lerps between them linearly.

Chromatic aberration seems to increase the render time by 60% (noise threshold set to 0.01, 4096 samples).
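
In shader terms the lookup itself is tiny, something like this (placeholder colours, not the actual ramp):

```
// 16-entry colour table; a uniform 0..1 value lerps linearly between the two
// nearest entries.
color chroma_lookup(float rand)
{
    color table[16] = {
        color(0.0, 0.0, 1.0), color(0.0, 0.3, 1.0), color(0.0, 0.6, 1.0),
        color(0.0, 0.9, 0.9), color(0.0, 1.0, 0.6), color(0.0, 1.0, 0.3),
        color(0.0, 1.0, 0.0), color(0.3, 1.0, 0.0), color(0.6, 1.0, 0.0),
        color(0.9, 0.9, 0.0), color(1.0, 0.6, 0.0), color(1.0, 0.3, 0.0),
        color(1.0, 0.0, 0.0), color(1.0, 0.0, 0.3), color(1.0, 0.0, 0.6),
        color(1.0, 0.0, 0.9) };

    float f = clamp(rand, 0.0, 1.0) * 15.0;  // 15 segments between 16 entries
    int   i = (int) floor(f);
    if (i > 14) i = 14;                      // keep i+1 inside the table
    return mix(table[i], table[i + 1], f - i);
}
```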

3 Likes

I did indeed use a lookup table; there are multiple versions in the file, some images and a color ramp (as you probably already saw based on your results). The version that does not really go through all colors but has a mixed/white part in the middle renders even faster, but can obviously never produce the full rainbow effect.

Position and depth should both work, they do in my tests… maybe something to do with OSL? Depth is just calculated as a straight distance from the surface to the camera object though; it does not take the additional distance added by the portal into account.

(And I just sent a request on discord, id same as here)

Cool, I haven't checked your blend files yet. The mixed white part is a good idea, definitely going to check it out!

As you said on Discord, I think the main issue with the depth pass is how Blender's position/Z-depth passes sample the values; it would be nice if they chose a median value instead of taking the first value registered (I guess that's how it works).

Rewrote the whole schematics render, it's now in 3D!

Debugging single lenses:

This comes in handy when implementing cylindrical lenses:

Got some issues with anamorphic lenses; they are always out of focus…

4 Likes

I was talking to Zircron45 and he helped me debug some lenses and also taught me how anamorphic lenses work. He tried to import the OSL shader into a larger scene, and it seems like OSL does not work unless you tailor your scene to that specific purpose. Ran into some bugs and a max-node limitation with OSL. It's more or less useless on larger scenes.

I therefore decided to do some tests with Blender's native nodes, and this is the result (Canon 85mm f1.5):



The ray tracing part of my OSL code is quite simple; I had to simplify some things due to the lack of loops, but everything seems to work so far. The setup is more or less procedural but limited to 10 lenses max (can easily be expanded). Changing the input parameters will result in a different lens.

I wonder how I can optimize this more; it renders 33% slower than the OSL code. There seem to be no break conditions or switch nodes in material nodes, so I have to trace through the whole lens even though the ray was marked as a non-hit ray at the start of the lens. Maybe the mix node only evaluates one of the inputs if the factor is set to 0.0 or 1.0? Need to do some tests…

I think I'll focus on this for now instead of implementing more OSL features. Having a working lens is much more valuable than one that only works in a small test scene.

3 Likes

As far as I understand, Blender does not support an OSL camera. This mechanism is available in other renderers, for example in the combinations Cinema4D+Octane and 3ds Max+V-Ray.
I am sure that an OSL camera in Blender/Cycles could greatly simplify the solution of such problems.

Yeah, and you can write your own lens shader in Houdini with CVEX. I hope they implement a better way to do it or improve the current OSL implementation, but it does not seem like it is a priority at the moment.

Bumped into some optimization issues with Blender nodes: even if you have a mix node with factor 0 or 1, both input values will be evaluated. This means that every feature I add will have an impact on the render speed, even when it's not enabled.

I found a decent way to build my system: instead of having a large multi-purpose solver that can handle any case, I built this modular system for each lens element:
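
In shader terms, what each of these element modules has to compute is roughly this (a sketch in OSL with my own names, assuming spherical surfaces and the optical axis along Z; it is not the actual node group):

```
// Intersect the ray with a spherical surface, flag a miss if it lands above the
// cutoff height, and refract with Snell's law.
void trace_surface(float radius,        // surface radius of curvature
                   float center_z,      // z of the sphere centre
                   float cutoff,        // element half-diameter
                   float eta,           // ior_before / ior_after
                   output point  p,     // ray origin, updated in place
                   output vector d,     // ray direction (normalized), updated
                   output int    valid) // set to 0 once the ray misses or TIRs
{
    // Ray/sphere intersection: |p + t*d - c|^2 = r^2
    point  c  = point(0.0, 0.0, center_z);
    vector oc = p - c;
    float  b  = dot(oc, d);
    float  h  = b * b - (dot(oc, oc) - radius * radius);
    if (h < 0.0) { valid = 0; return; }

    // Take the nearer hit in front of the ray (good enough for a sketch; a real
    // implementation would pick the intersection based on the sign of the radius).
    float t = -b - sqrt(h);
    if (t < 0.0) t = -b + sqrt(h);
    p = p + t * d;

    // Block rays that hit outside the usable part of the element.
    if (sqrt(p[0] * p[0] + p[1] * p[1]) > cutoff) { valid = 0; return; }

    // Refract; refract() returns a zero vector on total internal reflection.
    vector n = normalize(p - c);
    if (dot(n, d) > 0.0) n = -n;        // make the normal face the incoming ray
    vector r = refract(d, n, eta);
    if (dot(r, r) == 0.0) { valid = 0; return; }
    d = normalize(r);
}
```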


Choosing a lens is as simple as changing the output of my camera shader:

I also added a focus system that maps the input distance to the optimal sensor and rack positions based on carefully selected preset values. For example, I have set the optimal sensor position at 1 m, 2 m, 3 m etc., and the values are interpolated linearly.
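
The mapping itself is just a 1D table with linear interpolation, roughly like this (placeholder sensor values and my own names; only the sensor part is shown, the rack position works the same way):

```
// Map a focus distance in metres to a sensor position by lerping between
// pre-solved presets at 1 m, 2 m, ... 5 m (values below are placeholders).
float sensor_from_focus(float focus_m)
{
    float presets[5] = { 52.10, 51.40, 51.15, 51.02, 50.94 };

    float f = clamp(focus_m, 1.0, 5.0) - 1.0;   // 0..4 across the preset range
    int   i = (int) floor(f);
    if (i > 3) i = 3;                           // keep i+1 inside the table
    return mix(presets[i], presets[i + 1], f - i);
}
```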

Anamorphic lens:



With chromatic aberration:

I LOVE this project (and all other projects that did a similar thing). As a camera nerd and photographer, this is amazing to see!

I understand this might not be the kind of thing you were going for, but is there a chance you could add more modern lenses? I know the “effects” might not be as strong due to technical advancements in the last few decades, but I’d love to see that too.

Looking forward to where you might take this project!

Thanks!

Yeah, more modern lenses have in general fewer imperfections, which makes the result less visually pleasing, and it's harder to find the lens patent due to companies hiding their secrets (correct me if I'm wrong here, I've only done a few Google searches on lens patents :sweat_smile:). And modern lenses in general have way more elements, which will slow down the render and is tedious to implement.

That being said, any lens should work (not aspherical, yet? :thinking:), and I chose the ones I have now due to simplicity and because they were easy to find. Do you have a lens patent that you want me to test out?

You're right, it probably doesn't make a lot of sense to do more modern lenses. I found one patent drawing for the Sigma 35mm f/1.2 lens (I think, according to a comment): https://www.sonyalpharumors.com/triple-fast-lens-patent-sigma-designed-the-35mm-40mm-and-50mm-f-1-2-lenses/

It's basically a perfect illustration of your points: lots of lenses, tedious to implement and slow to render :stuck_out_tongue_closed_eyes:
Probably not the best idea to do one of those…

This one is aspherical (can't do that yet), but I can try to implement it anyway by making the aspherical elements spherical :slight_smile:

I guess the aspheric lens data is not optional… :sweat_smile:

2 Likes

Haha, ok, nevermind! :smiley:

Implemented aspherical lenses; this is an Arri Zeiss Master Prime:
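
For the curious: the surface shape being solved here is the standard even-asphere sag, something like this (my own naming; unlike a sphere there is no closed-form ray intersection, so the hit point has to be found iteratively):

```
// Even-asphere sag: height h above the axis -> surface depth z along the axis.
// R = base radius of curvature, k = conic constant, A4/A6 = polynomial terms.
float asphere_sag(float h, float R, float k, float A4, float A6)
{
    float h2    = h * h;
    float conic = h2 / (R * (1.0 + sqrt(1.0 - (1.0 + k) * h2 / (R * R))));
    return conic + A4 * h2 * h2 + A6 * h2 * h2 * h2;
}
```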

6 Likes