Yet another fake SSS

Hi,

To save you having to render from the front and back, I think you can use the ZInvert option on your object’s material. You won’t need to worry about perspective distortion either.

I.e.:
Renderlayer1: The object has a material without ZInvert enabled.
Renderlayer2: The object has a material with ZInvert enabled.
Subtract the Z-pass of renderlayer2 from that of renderlayer1, then normalise. You now have your depth map, which you can use in the way you’ve already described.
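
In node terms that is just a subtract and a normalise. For anyone who wants to script it, here is a sketch against today’s bpy compositor API (the “Front”/“Back” layer names are assumptions, and older builds call the Depth socket “Z”):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# One render-layer node per pass: "Front" rendered normally, "Back" with
# the ZInvert-style backface depth. Layer names here are illustrative.
front = tree.nodes.new('CompositorNodeRLayers')
front.layer = 'Front'
back = tree.nodes.new('CompositorNodeRLayers')
back.layer = 'Back'

# Thickness = back-face depth minus front-face depth.
sub = tree.nodes.new('CompositorNodeMath')
sub.operation = 'SUBTRACT'
tree.links.new(back.outputs['Depth'], sub.inputs[0])
tree.links.new(front.outputs['Depth'], sub.inputs[1])

# Normalise so the thickness map spans 0..1, ready to drive the SSS mix.
norm = tree.nodes.new('CompositorNodeNormalize')
tree.links.new(sub.outputs['Value'], norm.inputs['Value'])

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(norm.outputs['Value'], comp.inputs['Image'])
```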

Even if Blender did have renderer support for SSS materials, it wouldn’t go about it by calculating every bounce of light underneath the surface of a material; doing that would require an unbiased raytracer like Indigo. The unbiased method already exists and is fairly easy to set up. The only problem with that is it takes a tiny bit longer to render.

Nvidia’s rendering engine Gelato, and 3Delight, both have the power to compute SSS from physical parameters such as IOR and thickness without unbiased raytracing; in fact, they do it fairly quickly. Papers have shown that SSS lighting can be approximated with an equation that takes into account the incident lighting from one side, the refracted lighting from the other side, the thickness and the normal. These equations may not be 100% perfect, but the render engines that use them don’t do anything 100% perfectly either; if you want 100% perfection, download Indigo.

SSS and translucency are basically the same effect; in fact, translucency is a type of SSS. The main difference is that because translucency is applied to single-sided objects, there is no need to calculate the amount of light leaving the material on the incident side, or the thickness of the material. To calculate these things quickly you need a way of getting the depth of the object. Once that is done, the light hitting the object can be blurred based on constant values for IOR and particle density, and the amount of light applied to the back side of the object can be blurred and weakened based on the refracted light hitting that side, the thickness of the material, and constants such as density, back scattering, forward scattering and phase.
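
For illustration, the simplest such approximation is plain exponential (Beer-Lambert) falloff through the measured thickness. A minimal Python sketch, where sigma_t is just an illustrative tweakable and not a value from any of those renderers:

```python
import math

def backlit_intensity(incident, thickness, sigma_t):
    """Beer-Lambert style attenuation: light entering the far side falls
    off exponentially with the distance travelled through the material.
    sigma_t (extinction coefficient) is a made-up tweakable -- higher
    values mean a denser, more opaque material."""
    return incident * math.exp(-sigma_t * thickness)

# e.g. a backlight of strength 1.0 through 0.5 units of a wax-like material:
print(backlit_intensity(1.0, 0.5, 2.0))  # ~0.37
```
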
This method gets more complicated once you add multi-layered SSS materials, but approximation equations exist for those too; most of them, however, are less single equations than multi-layered n/k profile data and the like.
I suppose one way of applying an SSS material with this method to only one object in a whole scene would be to use the newly added object index data as a mask for the effect. I also suppose that if this custom node setup can be condensed into an algorithm-like thing which only acts on one object and uses the view camera in ortho mode, it could be turned into a material type, but the Blender CVS would have to be hacked to check for material type and data at compositing time.
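
A rough sketch of that object-index masking idea, written against today’s bpy names (the object name is illustrative, and the pass has to be enabled on the view layer):

```python
import bpy

# Tag the SSS object and pull an ID mask from the object index pass.
obj = bpy.data.objects['Suzanne']        # illustrative name
obj.pass_index = 1
bpy.context.view_layer.use_pass_object_index = True

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')
mask = tree.nodes.new('CompositorNodeIDMask')
mask.index = 1
tree.links.new(rl.outputs['IndexOB'], mask.inputs['ID value'])
# mask.outputs['Alpha'] is 1 on the object and 0 elsewhere -- feed it into
# the factor of a Mix node so the fake SSS only touches this object.
```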

@ rawpigeon, that’s a good suggestion, I didn’t think of that. I’ll give it a try and see how it works. This could save us a lot of time and perspective alignment problems.

@ everyone else, I am aware of the physics behind SSS, and physically accurate SSS probably isn’t the way to go, since the Blender internal renderer focuses on speed rather than physical accuracy. Indigo is capable of real SSS, but its integration is still lacking in many ways (no nodes, no particles or halos, no procedural textures). The bottom line is I am not looking for 100% accurate SSS, but for a visual hack with an acceptable result.

mpan3, you are right.
If we do an animation, no one will see that it’s fake,
but it has to be quick, without a lot of tweaking.
Keep at it, it’s a good idea.
Locke

keep it up & keep us posted!

If you want a real-world example of sub-surface scattering - the way light scatters below the surface of the skin - look at a sharp-edged shadow on skin. You may notice a red glow around the edge of the shadow.

And doesn’t “real” SSS take years and years to render? After all… if it’s not for science, who cares if it’s physically accurate? :stuck_out_tongue:

Hmm… it would be nice to get this working with a UV map…
-> spherical UV --> bake Z-values to it --> based on this, calculate the thickness…
The problem is that you’d have to remake such a UV map for every single frame, so it could be a mess to do with animations…
The advantage, however, would be full 360° coverage :slight_smile:

This is a really interesting approach, and its results are quite amazing :slight_smile:

I fiddled around with it a bit, and I found that a big problem is that you are fixed to one particular camera and object position, tweaking might take some time, and all SSS objects have to be positioned at similar coordinates.

So I tried to find a more universal way, and I think I just came up with something fast and easy to use, also in animation.
The drawback is even more loss of accuracy.

The idea I had was to produce something similar to the Z-buffer, but more dynamically. I first tried to copy the scene and simply replace the materials with a Fresnel material, but that turned out to be very lighting-sensitive and inaccurate.
Then I simply used a normal-input ramp shader from black to white on the objects, turned off all specularity and enabled tangent shading. I could then control the amount and range of SSS with the shader’s colour values.
But that gave false results when there was another object behind the actual SSS object, as it was simply not taken into account for the SSS.
So I added a bit of transparency to the fake material, so it gets darker when an object is behind it. Several SSS objects also add up quite well this way.
The only problem I have encountered so far is that you can’t use Ambient Occlusion in the fake scene (in the actual scene you can), because it brightens up even black areas very strongly.

Then I simply treated the scene with these materials like the one in the tutorial posted here, except that I didn’t blur the fake layer, as it gave an unpleasant white glow around the objects (which will turn into a pleasant glow if you have a full scene). I think the result is not too bad, apart from some untweaked values (you can actually see behind Suzanne’s ear very well, due to the transparency).

Sorry for my bad English explanation, but here is an image, so you can judge this technique for yourself…

http://img81.imageshack.us/img81/2273/ssstestwt9.th.jpg

Material settings and some example blend file would help immensely here! :wink:

I was hoping to tweak it a bit more, but well, here is the blend, as I don’t have time to work on it right now…

The best thing is to get the material settings from the blend file, as I don’t have time to post them now either. Just remember that there are two scenes used under the “Scene” tab, and that the ramp shader’s mode is Subtract…

Meanwhile I did a small animation test, which I aborted, as it gave weird results in places…

Blend:
http://home.arcor.de/rinne88/cg/SuzanneSSS.blend

have fun, I might update it tomorrow to enhance it a bit, and at least give better material names than material.001 ^^

Subsurface scattering is a directional property: light enters at one point and exits at another, but it is related to the light position, and how deep it goes into the object depends on the material properties.
I don’t understand how this works with the light setup; maybe I’m missing something.
Btw, visually it looks good.

If we have IOR, wouldn’t someone be able to adjust it to simulate the SSS, so we could have an SSS option as well?

Not to be Mr. Negativity here, but these really just look like they’re backlit with some translucency/transparency, as do all the fakes so far.

Technically these aren’t good solutions, because they don’t take into account forward scattering, or the amount of light emitted by the object due to the incoming light on the opposite side. However, I have figured out a way to compute this using this basic method: I added a plane to the scene from the front camera view and turned translucency on, which shows the distribution of light through the scene. The added plane can then be used as a point to mirror the camera from, if a node or material is ever created. Here is a picture of my node setup.
http://img294.imageshack.us/img294/18/ssstestlb0.th.jpg
My current setup is rough and still can’t really do IOR, although by changing the blurring and so on, you should be able to create the effect of IOR. I used perspective in this one, just to see what would happen, and it still worked! To keep the effect applicable only to that object, I used the object index pass. Also, to keep the scattering values from affecting the specular, I split them up into passes and recombined them after the effects.
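
For anyone trying to reproduce that pass split, here is a hedged Python sketch of the idea (the pass socket names vary between Blender versions and engines, so treat them as placeholders): blur only the diffuse lighting, then add the untouched specular back on top.

```python
import bpy

layer = bpy.context.view_layer
layer.use_pass_diffuse_direct = True     # diffuse lighting pass
layer.use_pass_glossy_direct = True      # specular/glossy lighting pass

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')

# The blur stands in for the scattering; only the diffuse pass goes through it.
blur = tree.nodes.new('CompositorNodeBlur')
blur.size_x = blur.size_y = 20           # scatter radius, eyeballed

# Then the untouched specular is added back on top.
mix = tree.nodes.new('CompositorNodeMixRGB')
mix.blend_type = 'ADD'

tree.links.new(rl.outputs['DiffDir'], blur.inputs['Image'])
tree.links.new(blur.outputs['Image'], mix.inputs[1])
tree.links.new(rl.outputs['GlossDir'], mix.inputs[2])
```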

Basically I took the depth buffer and used that to influence the brightness of the object. In general, the thicker the object, the darker it is.
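
In node terms that mapping is just normalize, invert, multiply. A small sketch, assuming the thickness arrives on the Depth socket (the two-layer subtract trick from earlier in the thread; older builds call the socket “Z”):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new('CompositorNodeRLayers')

# Normalize depth to 0..1, flip it so thick = dark, multiply into the image.
norm = tree.nodes.new('CompositorNodeNormalize')
inv = tree.nodes.new('CompositorNodeInvert')
mul = tree.nodes.new('CompositorNodeMixRGB')
mul.blend_type = 'MULTIPLY'

tree.links.new(rl.outputs['Depth'], norm.inputs['Value'])
tree.links.new(norm.outputs['Value'], inv.inputs['Color'])
tree.links.new(rl.outputs['Image'], mul.inputs[1])
tree.links.new(inv.outputs['Color'], mul.inputs[2])
```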

Ultimately, SSS is working when it looks ‘right’. I mean, we could have the most physically correct renderer in the world and still get a bad image out of it because it wasn’t set up correctly. But then, I also realize that SSS is highly subjective.

mpan3, if you try some different light setups you will soon realize that this method doesn’t work: the relationship between light and scatter is important. Here I see only camera-aligned back scattering, but there is forward scattering as well, and, most importantly, multiple scattering (which you surely can’t do with this method). Btw, why didn’t you use the invert Z-buffer flag on the material? For the back Z-buffer it is probably easier than finding the inverse camera location.
If we could do the same thing (finding the depth) from the light’s position, that could be a very good way to do forward/back scattering.
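
One way that light-position idea might be sketched, assuming today’s bpy API (the lamp name is illustrative): drop a throwaway camera at the lamp and render the Z pass from there, shadow-map style.

```python
import bpy

scene = bpy.context.scene
lamp = bpy.data.objects['Lamp']              # illustrative name

# A throwaway camera placed exactly at the light, looking the same way.
cam_data = bpy.data.cameras.new('LightDepthCam')
cam = bpy.data.objects.new('LightDepthCam', cam_data)
scene.collection.objects.link(cam)
cam.matrix_world = lamp.matrix_world

old_camera = scene.camera
scene.camera = cam
bpy.ops.render.render()    # the Z pass is now depth as seen from the light
scene.camera = old_camera
```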

The reverse Z-buffer doesn’t do the effect you are thinking of; I’ve tried it. Maybe between now and the next Blender release we can write a script that sets up the nodes automatically for each object (and/or light), with any camera position, that works only on objects with a blah.scatter material.
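
A rough sketch of what such a script might look like, assuming the blah.scatter naming convention and today’s bpy names: walk the scene, give every scattering object a unique pass index, and wire up an ID-mask node for each.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new('CompositorNodeRLayers')

index = 1
for obj in scene.objects:
    mats = getattr(obj.data, 'materials', None) or []
    if any(m and m.name.endswith('.scatter') for m in mats):
        obj.pass_index = index               # unique ID per scatter object
        mask = tree.nodes.new('CompositorNodeIDMask')
        mask.index = index
        tree.links.new(rl.outputs['IndexOB'], mask.inputs['ID value'])
        index += 1
# Each mask output can then drive its own blur/mix chain per object.
```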

Where can I find the paper from Nvidia?
I’ll take a look at it; maybe I can find something interesting to improve my own system. It could be used as a mask for back scattering.
As for the inverse Z-buffer, I tried it yesterday and the result seems quite similar.

While we’re using multiple cameras for SSS techniques, does anybody know how to link a specific camera to a specific render layer, so that when you click render it changes cameras automatically and you don’t have to do it for each node?
And odd: whenever I try the inverse Z-buffer, it’s just as if it had a normal Z-buffer.
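
As far as I know, render layers share the scene’s camera, so the usual workaround is one scene per camera (as in the blend posted above), switched by script. A hedged sketch, with illustrative scene and camera names:

```python
import bpy

# Give each scene its own camera once...
for scene_name, cam_name in [('Scene', 'Camera.front'), ('FakeScene', 'Camera.back')]:
    bpy.data.scenes[scene_name].camera = bpy.data.objects[cam_name]

# ...then render every scene in turn; each uses its own camera automatically.
for scene in bpy.data.scenes:
    bpy.ops.render.render(scene=scene.name)
```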