Yet another fake SSS

Maybe you can get all the Z’s by using the Z-value and get the right values in just a single layer and scene?
This is a very promising thing! I hope it will be implemented in Blender soon, as a node…
Also, for later, it would be nice to have more “render modes” - the unrealistic render engine will be added quite soon… So maybe you could also add an unbiased render mode… with physically correct SSS! And for less accurate but very good-looking SSS, there should be a simple node that automatically gets the depth of a mesh and the direction of the light…
These realtime things also take the normals into account, right? Maybe there is something missing in this system, as it doesn’t check normals at all?

blender file
It doesn’t work quite so well with spot lights, but the effect seems to be very realistic with lamps and normal lights. Unfortunately, in this file the light maps are in a different scene, so if you change the lighting in one scene you have to remake the lightmap scene. This can easily be changed to use layers instead, so that both the lightmaps and the objects depend on the same lights.
Oh, btw, the light is right behind his ear, showing off forward scattering.

Hello All,

Currently rendering out a rotating-camera video test on my Davy Jones model using a modified setup of oodmb’s file. All the pieces required are on 2 layers in the same scene, and clicking Render/Anim once gives you a completed image. I’m also using spotlights, mainly out of curiosity to see what effect happens. The cameras are now focused on an empty in the centre of the scene, so it’s a matter of moving/rotating that. I guess scaling the empty moves the camera(s) out; I haven’t checked. Animating the character and all the other stuff (including moving the lights about) shouldn’t be a problem (will try it on an animated character soon.)

What I want to try and get working next…

  1. Use one set of lights instead of two. Might work, might not. Can parent the lights on layer 2 to their counterparts on layer 1, but would be easier to keep track of with one set of lights only.

  2. Exporting to multipass / rendering out TGA or PNG for each pass. This is more of an ‘I don’t know what I’m doing’ thing rather than a problem with this method. I can export a series of .exr files OK, but reimporting them into this node setup isn’t quite working for me yet, even in the patched OpenEXR build. While you can tweak the settings on one frame before you render out the whole animation, one might want to change them later in the process when the rest of a scene is added…

  3. This should be a relatively simple one, I hope - rendering a model with an SSS/skin material and, for example, metal armour with no SSS whatsoever. I can make a new layer with an alpha map for a separate object, but what about a different material on the same object? Will see how that goes.

  4. For fun, perhaps mixing the toon shading method and this method, see what happens. Probably just double up the effect.

Will post the .blend and the video when rendered. This method looks like it might be quite useful once people get it all ironed out how they like it.

Cheers,
Ben.

P.S.
Finished Rendering. Main thing I noticed with this variation of the method is that as the deflector plane moves side on to a light (I’m assuming that would be for lamps/spots and all lights), the effect is lost. This isn’t so bad if you have a still camera, but a moving one might become an issue. Anyway, see for yourselves. (p.s. not happy with the colours, half of that is the compression.)
http://uploader.polorix.net//files/12/FlatTGA.zip
http://uploader.polorix.net//files/12/SSSDavyRoughTest.MP4

P.P.S.
Note: Rendering at higher resolutions gives me much more control/quality, particularly on really thin objects, due to the general nature of the blur node.

To be honest, BenDansie, these renders have nothing to do with subsurface scattering; the look isn’t right.

That looks really good, although it needs a bit more contrast in the scattering area; this can be done with the Map Value node. If you look really closely at the tentacles while it’s moving, you’ll notice flickering of light on them, proving forward scattering like in Nvidia’s demo. Also, I think it would look better if I plugged a multiplier back in at the end with the original depth stuff, so that it would look a tad more like multiple scattering and not just forward scattering. My nodes that use backscattering as well aren’t even half done; currently they rely on a blurred Phong specular map applied to the model in nodes, which turns out very weird.

For a different object insertion, you just apply the scattering object with a certain object index to a mix node and mix it with the rest at a different level. For two different materials on the same object it might get more complicated; you could mix the object index node with a colour key node and work from there.
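The mix-by-object-index idea can be sketched in plain Python (illustrative names only, not actual Blender node or API calls): build a mask from the index pass, then blend the scattering layer over the rest where the mask is set.

```python
# Sketch of mixing a scattering render layer with the rest of the image
# using an object index pass. All names are illustrative, not Blender API.
def mix_by_index(scatter_px, rest_px, index_pass, target_index=1):
    """Per-pixel mix: take scatter_px where the index matches, rest_px elsewhere."""
    out = []
    for s, r, idx in zip(scatter_px, rest_px, index_pass):
        mask = 1.0 if idx == target_index else 0.0
        out.append(mask * s + (1.0 - mask) * r)
    return out

# Tiny 4-pixel example: pixels 1 and 2 belong to the scattering object (index 1).
scatter = [0.9, 0.8, 0.7, 0.6]
rest    = [0.1, 0.1, 0.1, 0.1]
indices = [0, 1, 1, 0]
print(mix_by_index(scatter, rest, indices))  # [0.1, 0.8, 0.7, 0.1]
```

In the compositor this is just an index mask feeding the factor input of a Mix node; the per-material case would need a second mask (e.g. a colour key), as suggested above.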
The reason the deflector plane wouldn’t work on its side is that it’s not really a deflector plane, it’s more of a catcher plane. The plane really only shows the translucency from the back. This could be fixed with another render layer with a diffuse material applied, mixed with the translucency render layer, although fixing it won’t make it more realistic.

Now that I think about it, this could also be done by mixing a layer taken from the front and one from the back.

I got shadow-buffer SSS working on it after about an hour’s worth of tweaking, then tried to close it and bam, it crashed. Btw, with the Map Value at -3 and 0.2, you notice more of the SSS effect. Also, remove one of the specular screens; it’s adding extra light and taking up time.

OK, after some more advanced fiddling I expanded the node system to better compensate for backscattering and multiple scattering, and even better for forward scattering. Note that there is probably no way this is at all a reasonable method of doing things, and my model looks like some sort of jungle cat because I was going crazy with colors, not because the system is weird.
here is a picture of the monster node setup:
http://i16.tinypic.com/4bfrv44_th

and here is a picture of the result
http://i16.tinypic.com/2zxwjkk.jpg

blender file

Again, guys, this has nothing to do with SSS. I don’t want to be harsh, but first you must understand what SSS is, and maybe after that you can try to do some node setup for it…

Do you understand what SSS is? Maybe you can explain it to us - that is, unless it’s something other than the predictable scattering and diffusion of light through a medium.


http://graphics.ucsd.edu/~henrik/images/subsurf.html

There is a part of the page where he compares more traditional rendering (BRDF: Bidirectional Reflectance Distribution Function, which I remember trying with 3DS Max) and SSS.

http://graphics.ucsd.edu/~henrik/papers/bssrdf/ <-- the two images compare BRDF to SSS.

Look at the shadows and slight translucency. Even the shadows look blurry because the light diffuses in the material. Diffused photons = diffuse shadows (that’s how I understand it :D)

For human skin, the light that passes through the skin is absorbed at certain wavelengths by the blood flowing under the skin. When this light is diffused back to the observer, it is slightly reddish, but not too much.

My understanding of SSS is very basic. So don’t quote me on this :smiley:

If you are interested, you can also read some theoretical stuff in this paper by the author of the webpage. I found it interesting (if you skip the parts where he babbles about maths. I suck at maths…)

http://graphics.ucsd.edu/~henrik/papers/bssrdf/bssrdf.pdf

:slight_smile: but from the look of it, yes, there is still a long way to go before faking SSS looks right, like in the example I cited previously.
mpan3’s technique is great for simple SSS (marble statues, wax candles, jade buddhas :p) but skin is another problem, called multi-layered SSS. Why multi? Because the skin is composed of several layers (epidermis, dermis, fat, blood vessels) and each one of these has different SSS properties :D.

Software like Maya and 3DS Max boasts about its SSS shaders. But these are great for simple SSS materials, just like mpan3’s method - hence the impression that the SSS-skinned character looks like a wax statue…

But as I said earlier, don’t quote me on that :wink:

@ oodmb: I don’t know if you use reversed Z-buffers now, or if you figured out how it works…
But if I understood you correctly, you tried to blend two materials with the material nodes, one with the Z-buffer reversed, the other one without.

This is what I tried first, too :wink:
But you actually have to do it in two layers/scenes! One has to have the models with the correct Z-buffer, the other one has to have the models with the reversed Z-buffer!
This way works quite well :wink:
You don’t need to care about perspective at all!
Trying to blend two materials won’t work, as the blended result has its own buttons like tangent, radiosity… and reversed Z-buffer…
Have a try :wink:

At first I tried it with material nodes, but then with the scenes. From my experiments, it seems as though reversing the Z-buffer does not return thickness data, but the normal scene depth data. Because of the nature of Z-buffers, they end up looking the same.
More info on my experiment: I have one long box in the scene, cube side facing the camera, a short cube next to the long box positioned closer to the camera, and an identical cube positioned next to the other short cube but further from the camera. After the data has been calculated, it should show that the short cubes are both light-coloured and the long box is dark-coloured. However, in the same way a normal depth pass would, the cube closer to the camera is much darker than the further cube, and is the same shade as the long box.
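The cube experiment can be sketched in plain Python with made-up camera-space depths (all numbers and function names are illustrative, nothing here is Blender API). A depth pass only sees the front face, so the near short cube and the long box are indistinguishable; actual thickness (back minus front) is what the fake SSS would need.

```python
# Each object is (z_front, z_back) along the view ray; depths are made up.
long_box  = (2.0, 10.0)   # front face at z=2, back face at z=10
near_cube = (2.0, 4.0)    # short cube close to the camera
far_cube  = (8.0, 10.0)   # identical short cube, further away

def depth(obj):      # what a (possibly reversed) Z pass encodes
    return obj[0]

def thickness(obj):  # what the fake SSS actually needs
    return obj[1] - obj[0]

# A depth pass cannot tell the near short cube from the long box:
print(depth(near_cube) == depth(long_box))          # True
# But their thicknesses differ, and the two short cubes match each other:
print(thickness(near_cube) == thickness(far_cube))  # True
print(thickness(long_box))                          # 8.0
```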

Hmmm… wait…
You mean it didn’t give you a thickness value?
I got one… I think, at least…
You mean the whole thing gets darker when it’s closer to the camera than when it’s farther? Hmmm…
Could be… It would be nice to be able to just calculate the thickness with a depth pass… with more than 256 possibilities -> better quality…
Your method is great, but a mess…

Yep, I think the problem is that Zinvert is specifically for shadows, not depth.
The distance pass is a float; the Map Value turns it into 256 steps.

Minor note: It doesn’t turn it to 256, it just maps those float values within a range of 0.0-1.0. You still have float precision.
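For reference, Blender’s Map Value node computes out = (in + offset) * size, with optional min/max clamping. A tiny Python sketch (the function name is mine), using the -3 / 0.2 values mentioned earlier in the thread:

```python
# Map Value node behaviour: out = (in + offset) * size, optionally clamped.
# The result stays a float; nothing is quantised to 256 steps here.
def map_value(z, offset, size, use_min=True, use_max=True):
    out = (z + offset) * size
    if use_min:
        out = max(out, 0.0)
    if use_max:
        out = min(out, 1.0)
    return out

# With offset=-3 and size=0.2, depths 3..8 map onto the 0.0..1.0 range:
print(map_value(3.0, offset=-3.0, size=0.2))  # 0.0
print(map_value(8.0, offset=-3.0, size=0.2))  # 1.0
```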

I thought about using the whole RGB spectrum for 16,777,216 or even 1,003,003,001 steps (first value = 256^3, second = 1001^3).
You would just have to find a formula that counts through the whole colour range in a proper way…
(And as blur is used, it also needs a feature that blurs the whole thing without mixing the values up wrongly…)
I had the idea to use it as:
RGB = 100, 10, 1
so the minimum would be “0 0 0” = black, the second would be “0 0 1” (which wouldn’t translate to blue, but to a very, very dark neutral grey), and the 257th or 1002nd value = “0 1 0”, which is already much brighter, but still very dark!
50% grey would be… “500 0 0” or “127 127 128”
pure white (logically) = “1000 1000 1000” or “255 255 255”

A bit complicated, but I think it’s possible to get this to work.
Do you understand what I mean?

I used “1001” because we start counting at 1 but computers start counting at 0, and so there is one hidden possibility… (pure black isn’t 1 1 1, but 0 0 0…) Correct me if I’m wrong! :smiley:
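The base-256 variant of that packing idea can be sketched as follows (illustrative Python, not a Blender feature). Note the caveat raised above: blurring such an encoded image would mix the values wrongly, because each channel wraps around independently of the others.

```python
# Pack a step count in [0, 256**3) into R, G, B, with R most significant.
def encode(steps):
    r = steps // 256**2
    g = (steps // 256) % 256
    b = steps % 256
    return r, g, b

def decode(r, g, b):
    return r * 256**2 + g * 256 + b

print(encode(1))     # (0, 0, 1) -> decodes to a tiny value, near-black, not blue
print(encode(256))   # (0, 1, 0) -> one step of the middle channel
print(decode(*encode(1234567)))  # 1234567 - the round trip is lossless
```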

If it remains a float, the problems you are thinking of are solvable using curves or color ramps; it’s not a computational problem. Float is incredibly precise. The problem you are wondering about would be compression, which happens after the composite and depends on the file type you choose.

Here’s how I think mpan3’s method could work in a direction dependent way.

A special lamp should do the Z-buffer subtraction from its point of view, and also make a normal shadow buffer. It should then apply the “thickness mask” it has made to the degree of shadowing that the back-faces get from the normal shadow buffer.
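A rough sketch of that lamp-space thickness idea, with an arbitrary exponential falloff standing in for whatever response curve a real implementation would use (all names and numbers are illustrative, not Blender internals):

```python
import math

# From the lamp's point of view, thickness is the distance between the first
# and last surface hit along each light ray; backface illumination falls off
# with that thickness. The falloff constant is arbitrary.
def backface_light(z_front, z_back, intensity=1.0, falloff=1.5):
    thickness = z_back - z_front          # lamp-space Z-buffer subtraction
    return intensity * math.exp(-falloff * thickness)

# Thin part of the mesh (an ear): a lot of light leaks through.
print(backface_light(z_front=1.0, z_back=1.2))
# Thick part (the skull): almost none.
print(backface_light(z_front=1.0, z_back=5.0))
```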

I think that this is very similar to how blender’s “translucency” setting is supposed to work. However, even when maxed out, it doesn’t illuminate much of the backface at all.