Thumbnail Update:
Hey everyone! Hope you have fun looking at whatever projects I post here.
COPPER:
Here’s another illusion for you. Can you spot the seams?
CHROMIUM:
Man, the JPG really takes the quality down a notch, huh?
Here is a bubble I made using a thin-film shader I adapted! Sadly, even at 3000 samples, some of the reflections are still noisy. I would denoise it, but it produced really bad artifacts around some of the brighter reflections. Later, I’ll be able to render it with more samples… unless someone has another idea for reducing the noise!
BUBBLE IN DRESDEN STATION:
HDRI from HDRI Haven
Also, I’ll be posting the node group soon (based on Pruster’s code). Look forward to it!
I made a simple animation with the thin-film shader! Of course, it’s of a bubble.
BUBBLE IN PRELLER DRIVE:
https://www.youtube.com/watch?v=o6HoOB2M9X0
Still frame:
I realize the movement of the colors is very fast, but at this point, I’m too lazy to fix it. I will say, I think I improved the material from the render in the post above, the lighting here is also better for bubble reflections (albeit with some trickery), and it does make a perfect loop! (I also figured out some denoising settings that actually worked, but, it seems, only in this specific render.)
Maybe I should have looped it in the Youtube video to demonstrate that while still keeping the quality… something to keep in mind for later, I guess.
Also, if you would like to try out the shader, click this download link: Thin-Film Interference.blend (5.37 MB)
Hope you guys enjoy using it!
copper and chromium pictures are looking really good. well done and keep it going!
@m_squared, Thank you! I want to make more optical illusions soon.
INTERFERENCE MATERIAL EXAMPLES:
So after a few months of mulling over the interference node group and trying to make sure it’s perfect, I think it is finally finished for now. I had a great time doing the research, deciphering the script, and building it; I’m proud of how it turned out.
Here’s the link to download it: https://www.dropbox.com/s/05eeeqv242kzwo9/Thin-Film%20Interference%20Nodes%20Materials%20Library.blend?dl=0
@pruster, Thanks for the initial script you created, which helped me so much throughout the process. Oh, and I won’t forget that you ported over the materials you created with your interference script when I couldn’t!
Here’s the link to find the original script: https://blenderartists.org/forum/showthread.php?403299-UPDATE-v1-6-Cycles-PBR-thin-film-interference-and-metals
Hope you all enjoy it!
After looking into some methods for creating a more realistic glass material, I was able to create glass with dispersion and absorption.
For dispersion, I never really did like the method of combining three different glasses, so I took inspiration from this method of using a texture to mimic the use of different IORs and Glass colors. The texture is from a modified pixel texture.
For absorption, I modified the node groups from these two sources: 1 and 2.
The node group is still unfinished, but I will eventually post it. Right now, I’m looking to refine the nodes more and maybe add some caustics effects. Also, due to the small-scale texture, the render stays a little noisy even at high sample counts, so I find it’s best to always denoise the image. Of course, the final goal is to look as good as the Prism glass shader on the Blender Market.
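For anyone curious, here’s roughly what those two ideas boil down to, sketched in plain Python rather than nodes (the Cauchy coefficients and the absorption color below are just illustrative numbers, not the actual values in my node group):

```python
import math

def cauchy_ior(wavelength_nm, A=1.458, B=3540.0):
    """Cauchy dispersion: the IOR rises toward shorter wavelengths.
    A and B here loosely resemble fused silica; purely illustrative."""
    return A + B / (wavelength_nm ** 2)

def beer_lambert(absorption, distance):
    """Beer-Lambert absorption: per-channel transmittance after the ray
    travels `distance` through the medium."""
    return [math.exp(-sigma * distance) for sigma in absorption]

# blue light bends more than red:
print(cauchy_ior(450.0), ">", cauchy_ior(650.0))
# a greenish medium: red and blue are absorbed faster than green
print(beer_lambert([0.8, 0.1, 0.6], distance=2.0))
```

The dispersion texture in the render is essentially picking a different `cauchy_ior` result per pixel, and the absorption tint deepens with ray length exactly as `beer_lambert` does.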
(WIP) FISHBOWL:
At some point (which of course, may turn out to be never), I’m going to draw a simple fish swimming around in it.
And it’s perfect, my friend! I’ve just loaded these materials in and I don’t trust my eyes. This is a masterpiece, I’m really impressed!
Thank you for sharing
It’s been more than a year since I last posted, and I want to get back into it.
I’ve been working on a possible extension/update for the thin film interference nodes, and part of it has already been completed. It allows one to simulate the repetition of a stack of thin films.
Imagine it like this:
1 unit stack: → A B C →
3 unit stack: → (A B C) (A B C) (A B C) →
Using a formula for a matrix to the nth power, any number of repetitions can be simulated. This means we can more accurately simulate materials with more extensive thin film structures, such as a pearl.
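To give an idea of what the nodes are doing, here’s a rough Python sketch of the repetition trick using 2x2 characteristic matrices (normal incidence and lossless layers for simplicity; the indices and thicknesses are placeholder, vaguely pearl-like values, not my actual node settings):

```python
import cmath

def layer_matrix(n, d_nm, wl_nm):
    """Characteristic matrix of one thin film at normal incidence."""
    delta = 2 * cmath.pi * n * d_nm / wl_nm  # phase thickness
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def mat_mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """m**n by repeated squaring -- the 'matrix to the nth power' trick."""
    result = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def reflectance(m, n0=1.0, ns=1.53):
    """Reflectance from the total characteristic matrix."""
    (m11, m12), (m21, m22) = m
    num = n0*m11 + n0*ns*m12 - m21 - ns*m22
    den = n0*m11 + n0*ns*m12 + m21 + ns*m22
    return abs(num / den) ** 2

# one (A B) unit: a hard high-IOR layer plus a thin low-IOR layer
unit = mat_mul(layer_matrix(1.68, 400, 550), layer_matrix(1.33, 30, 550))
print(reflectance(mat_pow(unit, 100)))  # 100 repetitions, O(log N) multiplies
```

The point is that 100 repeated units cost only about seven matrix multiplies instead of 100, which is what makes the pearl-scale stacks feasible in nodes.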
There are some limitations, which I might talk about later, but for now, I would love to hear some feedback about the new pearl material made with the new nodes:
Which material looks the most pearl-like to you?
After making the pearl material, I decided to try to implement the spectral calculation formulas found here: Click!
Unfortunately, I have been unable to generalize the formulas to multiple layers and lossy films, much less the hundreds of layers used for the pearl material. Still, I was able to implement the calculations in the paper, so it’s a start.
Here’s a linear thickness gradient (50 to 1000 nm) of a SiO2 layer on Si:
Top: Interference calculated with 3 samples (the current nodes)
Middle: Colors from the chart here
Bottom: Spectral calculations
0 to 15 microns:
Top: Interference calculated with 3 samples (current nodes)
Bottom: Spectral calculations
Edit: Looking back at the paper, it seems the convergence is gradual, but to get those results, I might need to perform another integration…
Silly me! The reason the convergence wasn’t gradual was the fit functions: namely, I sent the wrong calculations into the output. Coincidentally, even with the incorrect output, the test comparisons to the chart still matched. Here’s an update of the silicon image:
Top: Functions w/ mistake
Middle: Colors from the chart
Bottom: Corrected functions
In the following render, the thickness only reaches 2000 nm to converge. It makes much more sense than the 15 micron shenanigans.
Top: Interference calculated with 3 samples (current nodes)
Bottom: Corrected functions
I also tested whether using more accurate fit functions at the cost of a little speed was worth it. It’s not.
Top: Difference between the tests
Middle: Less accurate functions
Bottom: More accurate functions
Nice tests! I wonder if your calculations take wave amplitude doubling and cancelation into account; that especially would make a difference in color intensity.
The color chart in your last post shows a little shift to the right the thicker the material gets. The reason for this is that the trichromatic lambdas are not at the same wavelength “distance” from each other. Just try changing one lambda and its IOR and k value; then you’ll see a difference in the color distribution.
I guess it would be optimal, if you have the original IOR values and their lambdas from the color-chart tests, to get as close as possible with your test renderings, as a comparison of how accurate your current calculations are at the moment.
For speeding things up, the middle render in your last render test is good enough.
Thanks!
The canceling and doubling of the wave amplitudes should be working correctly. If I understand correctly, the reason it becomes less saturated is due to the color fringes produced by different wavelengths overlapping and averaging out.
The shift is actually due more to the chart being given with uneven intervals between colors, while my gradient is evenly spaced. Though, the colors it has difficulty replicating seem to be the nearly white ones.
Yeah, it would likely be best to calculate the colors with the correct IOR values and compare those to the node group. I wonder how closely the node group approximates the true colors. That could be fun to test and compare!
After the most recent opal thread got me interested in diffraction gratings for the second time, I revisited some nodes I made to simulate the effect.
After spending a few days trying to understand the full formulas for calculating reflectance and transmittance from a diffraction grating, I decided the math would be too much to implement with Blender nodes. If I understood it correctly, I could have modified the thin film nodes to simulate even layers of diffraction gratings! Of course, the magnitude of such a monstrosity would require something like the thin film nodes for hundreds of n and k values for the multiple modes in the positive and negative directions. Not possible in Blender but fun to think about!
So looking back at the simple diffraction grating formula with a better understanding of how they work, I fixed up some problems with the original nodes.
Important notes about the above formula:
With these notes in mind, we can create a textured normal map from θ_m and a textured color map from the range of wavelengths.
Here are some results of the fixed nodes:
Top Row: Normal Gratings
Bottom Row: Gratings without 0 order reflections
In the bottom row, the 0 order reflections have been removed, so why can we still see them? This is a result of fixing the problem in point (4); the displacement angle is set to zero when arcsine does not evaluate correctly.
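The fix from point (4) can be sketched in a few lines of Python (using one common sign convention of the grating equation; this is an illustration of the idea, not the literal node setup):

```python
import math

def diffraction_angle(theta_i, order, wl_nm, d_nm):
    """Grating equation d*(sin(theta_m) - sin(theta_i)) = m*lambda,
    solved for the outgoing angle theta_m. Returns None when the order
    is evanescent (arcsin argument outside [-1, 1]); the caller then
    sets the displacement angle to zero, i.e. plain reflection."""
    s = math.sin(theta_i) + order * wl_nm / d_nm
    if abs(s) > 1.0:  # point (4): no real outgoing direction exists
        return None
    return math.asin(s)

# normal incidence, 1000 nm slit spacing, green light:
for m in (-1, 0, 1, 2):
    print(m, diffraction_angle(0.0, m, 550, 1000))
```

Without the `abs(s) > 1.0` guard, the arcsine fails for the evanescent orders, which is exactly the artifact shown in the next image.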
This is what it looks like when we don’t fix the problem:
Here’s a sequin material based on the nodes from the original grating material thread:
Left: Sequins
Right: No sequin texture
Here is the untextured cloth with and without 0 order reflections:
The problem with this material is how terribly it converges, especially when it reflects itself. It would be nice not to require two small textures for the simulation, but I haven’t yet found a way around them. The biggest problem is that we have no procedural way to determine where light is coming from. Basically, this can’t be used in the opal material.
Yes, it’s a little bit annoying. But if you think about it again, it’s clever how it is now, because you have to reverse engineer how the shader should work. In fact, you have to “tell” your shader how it should behave when light hits the material; based on IOR, view, etc., you get the reflection back, and so on.
If you look at your lookdev test-render balls, your HDRI has not only one light source; it has the window light and reflections, the sunlight, and maybe practical lights baked into the HDRI too.
The same goes for scene lights, mesh lights, etc. You rarely have only one light in the scene.
If you tell the shader what its properties are, it’s not limited to one light source; you can use it in basically all scenes, if it’s built based on PBR, of course.
My only idea is to take the object’s mesh normals for the calculations; maybe subdivide it even more for finer results.
Here, I found this little code; maybe it helps to implement a working solution as simply as possible.
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch08.html
That’s actually a very cool way of thinking about it! The shader reacts to everything in the scene because of how it was designed.
The shader does use the normals! Check out this graphic Secrop posted: Click! We see the angles θ and β which correspond to θ_i and θ_m, respectively. To be able to sample the direction that the light is diffracted in, θ_m, we need to rotate the normal by some amount such that θ_i and θ_m are equal–the reflection of light. In the case of the graphic, that’s the angle, φ.
φ = θ_i - (θ_i + θ_m)/2; it’s the difference between the current normal angle and the angle we want it to be at.
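A quick numerical sanity check of that half-angle (plain Python, with angles measured from the normal in the plane of incidence):

```python
import math

def rotation_angle(theta_i, theta_m):
    """Rotate the normal by phi so that specular reflection about the
    rotated normal sends the incoming ray into the diffracted direction."""
    return theta_i - (theta_i + theta_m) / 2  # == (theta_i - theta_m) / 2

theta_i, theta_m = math.radians(50), math.radians(20)
phi = rotation_angle(theta_i, theta_m)
# relative to the rotated normal, incidence and reflection angles match:
incident = theta_i - phi
reflected = theta_m + phi
print(incident, reflected)  # both equal (theta_i + theta_m) / 2
```

So after the rotation, an ordinary specular reflection about the perturbed normal reproduces the diffracted direction, which is why the trick works inside a regular glossy shader.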
By setting the shader to only display effects for the camera, the results look and converge much better. The following are at half the samples of the previous renders:
I also played around with the grating spaces of some disks:
Left: CD
Right: DVD
For now, I’ll be working on a parallax material to try to simulate the volumetric effects in Opals. From my initial impressions, it’s simpler than any of the recent materials I’ve been working on, but it’s just as magical!
I’ll check my logic when I have time…
Careful, because this is only valid for rays in the plane perpendicular to the Tangent.
That graphic is only a 2D representation of that plane.
For all other rays, the equation D*(sin(In)+/-sin(Out)) = m*WL
becomes a bit more complex, as D' = D * dot(cross(In, Out), Tang)
Thanks for telling me! Ah, is that why you have the factor that scales sin(In)?
The cross between the In and Out vectors results in a tangent vector. The In, Out, and normal vectors are coplanar, so the Out vector isn’t necessary in these calculations, simply the normal. Taking the dot product of the calculated tangent with the object’s tangent gives the cosine of the angle between them, to be used as a scaling factor. Although, I’m not quite sure why it’s modifying sin(In) instead of D. And, going by trigonometric rules, why isn’t it modifying through division instead of multiplication?
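Here’s my reading of that scaling factor, as a plain-vector sketch (I’ve normalized the cross product so only its direction matters; that normalization is my assumption, not something spelled out in the formula):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def effective_slit_distance(d, incoming, outgoing, tangent):
    """D' = D * dot(cross(In, Out), Tang): scale the slit spacing by the
    cosine between the incidence plane's axis and the grating tangent."""
    axis = normalize(cross(incoming, outgoing))
    return d * dot(axis, tangent)

# In and Out in the plane perpendicular to the tangent: factor is +/-1,
# recovering the simple 2D case from the graphic
print(effective_slit_distance(500.0, (0, 0.6, 0.8), (0, -0.6, 0.8), (1.0, 0.0, 0.0)))
```

When the rays leave that plane, the factor drops below 1 and the effective slit spacing shrinks, which shifts where each order lands.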
Modifying the tangent vector to be perpendicular to the incoming vector through the Gram-Schmidt process could be another way to solve this, right?
No, it doesn’t! Only the normal vectors are perpendicular to the tangent vector. The incoming and outgoing rays come from everywhere, and their cross product is most of the time not parallel to the tangent.
My usage of it is just to scale the slit distance in relation to the incoming/outgoing plane. It’s not perfect (it doesn’t account for polarization, nor frequency shifting), and it still has plenty of details to solve…
(And about the OSL script, you might be right about the angle range… sadly, it’s not as simple as [-pi/2, pi/2]… it depends on the Incoming vector, though I haven’t found the correct relation yet.)
Hmm… tangent vector was the incorrect word to use. I should have said, “a vector perpendicular to the In and Out vectors”, is that right?
I’ve been playing around with the scaling factor, and the results seem to make more sense. At about D = 500 nm, the colors reflected should be around purple and blue. Without the scaling factor, my shader reflected all colors (the first column in the test renders of post 14 shows 500 nm diffraction gratings). With the scaling factor, the reflected colors are blue and purple.
I rationalized the range, [-π/2, π/2], for the outgoing vector by thinking about the diffraction orders. When we plug in orders for some specific set of incoming angles, distances, and wavelengths, the outgoing angles must be physically plausible; outgoing angles that don’t make sense would be anything going into the surface. So it seemed to me that any order that satisfies the equation and doesn’t produce anything physically incorrect would be a possible order. In that case, the range of possible outgoing angles is independent of the incoming angle, but when we solve for the range of orders, the rounding of those orders corrects the range of the outgoing angles, making it dependent on the incoming angle. I don’t know if my reasoning is sound, though.
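To make that concrete: requiring the arcsin argument to stay inside [-1, 1] pins down the possible orders, and the resulting bound really does depend on the incoming angle (a Python sketch, using the convention sin θ_m = sin θ_i + mλ/D):

```python
import math

def order_range(theta_i, wl_nm, d_nm):
    """All integer orders m with |sin(theta_i) + m*wl/d| <= 1,
    i.e. every physically realizable outgoing direction."""
    lo = d_nm * (-1 - math.sin(theta_i)) / wl_nm
    hi = d_nm * (1 - math.sin(theta_i)) / wl_nm
    return range(math.ceil(lo), math.floor(hi) + 1)

# normal incidence, d = 1000 nm, lambda = 550 nm: orders -1, 0, 1
print(list(order_range(0.0, 550, 1000)))
# tilting the incoming ray shifts which orders survive
print(list(order_range(math.radians(60), 550, 1000)))
```

Every order this returns yields an arcsin argument inside [-1, 1] by construction, which is why the clamp becomes unnecessary once the order range is correct.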
Also, while writing this, I realized that if we do have the correct range of orders, then it’s not necessary to clamp the calculated outgoing angles to 0 when the argument of the arcsin function is outside [-1, 1], because with the correct orders, no values will be produced outside that range. Here’s the result:
Edit: After having tangent spaces on my mind because of my research into parallax occlusion mapping, I wanted to test the calculations for the diffraction grating in tangent space. Interestingly, without the scaling factor on sin(In), the results are completely different, but with it, the results are exactly the same.