SSS in Blender? Easy!!! Tutorial online.

Wow, SSS is possible in the compositor! I’m guessing one reason to do it in the compositor is that you can tweak it without having to re-render over and over.

I can see more realistic liquids, skin, wax, and a whole cache of materials with this. :smiley:

Also, I may try to use this on a white mesh and blur it in the compositor to give the impression of a cloud with real internal scattering. :spin:

ZanQdo: You should post your computer spec when telling people your render took 4 seconds :slight_smile:

I know you are running a Core 2 Duo 2.4 :slight_smile:

@brecht or others:

I’m not technically skilled enough to understand the whole paper, so could you please describe the process? How is it possible to do SSS as a post-process… what’s the way to achieve this?

Thanx
S.

simhar, in the paper points are generated evenly distributed over the surface, and their diffuse color is then computed. To compute the SSS for a particular pixel it looks at surrounding points and makes a weighted average of them using a falloff function.

As a node, rather than generating and computing the diffuse color of our own points, each pixel in the pass is used as a point, and the 3d position and area of the point for that pixel is computed with the Z and normal pass. This means of course that not all of the surface is covered, but the results still look practically the same, except at edges and on thin surfaces.
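Brecht’s description above can be sketched roughly in Python. This is a toy illustration only: the function name, the exponential falloff, and the foreshortening-based area estimate are my assumptions, not the actual node’s code.

```python
import numpy as np

def screen_space_sss(color, z, normal, radius=5, falloff_scale=0.1):
    """Toy gather step: every pixel in the pass acts as a sample point;
    its 3D position is reconstructed from the Z pass, and the output is
    a falloff-weighted average of the surrounding pixels' diffuse color."""
    h, w = z.shape
    # Reconstruct rough 3D positions: pixel coords for x/y, Z pass for depth.
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pos = np.dstack([xs, ys, z])

    # Approximate per-point area from the normal pass: surfaces tilted
    # away from the camera cover more area per pixel (foreshortening).
    area = 1.0 / np.maximum(np.abs(normal[..., 2]), 0.1)

    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # 3D distance from this pixel's point to its neighbours.
            d = np.linalg.norm(pos[y0:y1, x0:x1] - pos[y, x], axis=-1)
            # Falloff function: closer points contribute more.
            wgt = area[y0:y1, x0:x1] * np.exp(-d * falloff_scale)
            out[y, x] = (color[y0:y1, x0:x1] * wgt[..., None]).sum((0, 1)) / wgt.sum()
    return out
```

Because only visible pixels exist as sample points, surface regions hidden from the camera contribute nothing, which is why the results differ at edges and on thin surfaces, as Brecht notes.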

Really nice with backscatter! I did a quick test, and I’m surprised how nice it looks. Is anybody aware of situations where the method used in this node won’t work at all?

http://mathiaspedersen.com/imagehost/misc/brechts_scatter_back1.jpg

Great work Brecht!
//Mathias

That’s really neat =)!

M.h.p.e.
Is that with the current build or a new patch? And how did you do it?

Funny how one person like Brecht can make me go 100 times more wow than Microsoft
… well, pretty lady, he’s got the right method …

I’m wondering if this would work for transparent materials (ice, glass, etc.). This method looks good where you just see the surface of the object, but I doubt it would work for something where you can see ‘into’ the object?

(I haven’t read the whole thread yet, so this may be a moot question)

Since it computes using a normals pass, I would imagine that if you had a transparent object held in front of an SSS one, the combined normals pass would only show the normals of the transparent object. What you’d have to do is render the two objects separately, use the SSS post-processing method on the SSS object, and then composite the two. It wouldn’t refract/reflect the SSS, though. In fact, I’d imagine that all reflections would have problems with this.
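The render-separately-then-composite workaround described above amounts to a plain alpha-over of the two layers. A minimal sketch, with a hypothetical helper name and assuming straight (non-premultiplied) alpha; in practice this would be Blender’s AlphaOver compositor node:

```python
import numpy as np

def alpha_over(fg_rgb, fg_alpha, bg_rgb):
    """Composite the foreground layer (the transparent object, rendered on
    its own) over the background layer (the SSS object after the scatter
    node has been applied to its layer alone)."""
    a = fg_alpha[..., None]                   # broadcast alpha over RGB
    return fg_rgb * a + bg_rgb * (1.0 - a)
```

As noted, this only stacks the layers; the foreground glass would not refract or reflect the scattered background.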

Also, Blender probably only renders an 8-bit z-pass so SSS calculations on large scenes might be inaccurate. For example if you had a candle flickering in the background of a large room. This might also affect animation if the values change too quickly.

It also might not accurately calculate color absorption.

It’s with the new patch. ZanQdo updated his build at graphicall.org. I just downloaded the
back-scatter example scene Brecht provided, and figured out how it worked. It’s not simple,
but the results are worth it!

//Mathias

Ok, thanks for the tip. I’ll show some results when I get it working =).

Brecht, you kick ass!

That backscatter example file is strange. First of all, the second scatter node doesn’t have the “out Z” and “out Normal” slots. Secondly, the add node has no factor. What is going on? And what is with the blend texture? I must say this is quite tweaky.

… It seems the lil plus button hides the non-connected sockets and random values. I had to pull the path lengths quite far to get the light through. Seems my model was big or something…

Falgor, yes, as I mentioned, it’s a hack. The blend texture has nothing to do with it; it’s still there from the original .blend I got the model from. The out Z and normal are only needed for back scatter. In the case of front scatter you could connect the same Z and normal again, but it only makes the setup more complex.

osxrules, the Z buffer is more than 8 bits, there are no problems with that. I can’t think of issues with color absorption…

The RenderMan SSS shader looks good. I don’t think this node integrated into the internal renderer would be as fast, for the sole reason that in RenderMan the shading rate can be decoupled from visibility, so it only has to do SSS once per pixel, and not 5–16x as in Blender.

Brecht, what are the path length and diffuse reflectance actually? And could the “back z” be done with just using the z-invert in the material buttons? Thanks for this awesomeness =).

I would like to ask if you could add an option to scale all three reflectance values and all three path length values in ratio, so you don’t always have to edit each one separately. For changing the “penetrability” this would be good, since all three values would be scaled in proportion. And if you wanted, you could “unlock” the ratio lock and tweak the RGB values separately; this would be the default, of course.
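The requested ratio lock would just multiply all three channels by one factor, preserving their relative proportions. Something like this hypothetical helper (not an actual Blender option):

```python
def scale_locked(rgb, factor):
    """Ratio-locked tweak: scale all three path length (or reflectance)
    channels by one factor, keeping the R:G:B ratio intact."""
    return tuple(v * factor for v in rgb)

# Halving the overall "penetrability" while keeping the channel ratios:
scale_locked((1.0, 0.5, 0.25), 0.5)  # -> (0.5, 0.25, 0.125)
```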

Here is what I came up with a bit of testing:

http://www.falgor.net/pics/backscatter.jpg

Here is the blend

I wonder if this means we’ll have tons of SSS tests in the test forum?

All the images with SSS are good and I can’t wait to play with it, but I see that once this is out no one will have an excuse to not put SSS in certain materials anymore;)

OMFFFFFFG

This is gonna be amazing, I’ll be using this in my next piece…

Here is what I’ve managed.

http://www.christianstoray.com/postimages/minotaur15nodes.jpg

http://www.christianstoray.com/postimages/minotaur15sss.jpg

Because I was using IDs, I had to render at 1200px without OSA and then scale it down to 600, otherwise I’d get strange aliasing on the edges. Also, it appears this node does not work well with fog. I think it would be better as a material node rather than a composite node. It’s still cool, though.
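The manual anti-aliasing trick above (render at double size without OSA, then scale down) is effectively a 2x box filter over the finished image. A minimal sketch, assuming a NumPy image array (hypothetical helper, done here instead of in an image editor):

```python
import numpy as np

def downscale_2x(img):
    """Average each 2x2 block of pixels: 1200px rendered without OSA
    becomes a 600px image with the edge aliasing smoothed out."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim odd edges
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])
```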

Now that we have two different ways of getting SSS effects using a camera render from the other side of the model, wouldn’t it be useful to have a setting under render layers that renders the image from the opposite side, with respect to the focus point (dof dist) of the camera? Then we wouldn’t have to set up multiple cameras, multiple layers, and sometimes even multiple scenes.
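The suggested setting would amount to mirroring the camera position through its focus point. As plain vectors (a hypothetical helper, not an existing Blender option; real code would move an actual camera object and flip its view direction):

```python
def mirrored_camera(cam_pos, focus_point):
    """Reflect the camera position through the focus point (dof dist),
    giving the spot a second camera would occupy to render the model
    from the other side. The mirrored camera must also look back toward
    the focus point, which is not handled by this position-only sketch."""
    return tuple(2 * f - c for c, f in zip(cam_pos, focus_point))
```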

Howitzer: Brecht said that the node form was just an easier way of previewing before actually integrating things into the render engine.