Please make SSS backscattering work in any direction (not just front and back)

Okay, I’ll be quick: unless SSS backscattering is reworked to work on every face, in every direction, with unlimited lights, certain gas effects (like a gas explosion I’ve been working on) will remain impossible to do without faking or really cumbersome object setups using dupliverts to make up for it.

I like the SSS, but the limitation that backscattering only works front to back, rather than in all directions on all faces, is a serious one. It makes good VFX work using proper SSS techniques next to impossible unless you resort to faked node setups or other cumbersome workarounds. Even the humble candle partially falls victim to this.

With an SSS system like this, Blender technically still lacks true SSS; what it has looks more like a GI approximation. I hope the Peach team is aware of this; I’m sure many would like this limitation removed.

To see what I’m talking about, make four cubes that are part of the same object, put an omni light in the middle, and enable SSS.

EDIT: About faking with nodes, I suppose you could try to build an entire SSS algorithm out of lots of vector curve and math nodes mixed with an emit texture, but you’d have to be a math genius, and I don’t even know trigonometry.

ALSO: If you remove the limitation, make it optional. I don’t care if it’s twice as slow with it on; for certain things this is critical.

3D CG is all fake anyway: stage magic. I say treat it accordingly.

When I say ‘fake it’, I mean doing SSS the old ways, before 2.44.

I have the material and setup ready to render; now all it needs is this one limitation removed. Looking at the listed limitations of the current SSS system, I never knew they could so easily prohibit good volumetric gas effects.

Well, in any case, it’s going to be a long time before the next release, even if they do decide to “remove this limitation” as you have pointed out. Go ahead and render what you have and show us what you mean.

If you’re trying to do such realistic effects from real life, you can do two things:

  1. use Indigo, or:
  2. eat beans and produce realistic gas.

oognoepje : and then use a 3d scanner! :slight_smile: :slight_smile: :slight_smile:

> To prove what I’m talking about make 4 cubes that are part of the same object, put an omni light in the middle, and set SSS.

The current SSS algorithm is a mathematical fake of true SSS. For what it is intended to do, the SSS effect is practically indistinguishable from true brute-force SSS calculations.

So what you are asking for is really the next step. And Brecht has said it already: the current SSS effect is not truly volumetric. It doesn’t scan for subsurface structures. So no, you won’t get SSS if you hold your chicken-embryo model (with all its internal organs) against a strong light. Likewise, those explosions or self-illuminating clouds are a completely different ball game.
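For context, fast SSS of this kind is typically built on the dipole diffusion approximation (Jensen et al., SIGGRAPH 2001) rather than on volumetric path tracing, which is exactly why internal structures can’t show through: the profile only depends on distance along the surface. A minimal sketch of that diffusion profile, with illustrative coefficients (assumption on my part — Blender’s exact constants may differ):

```python
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance profile R_d(r) from the dipole approximation.
    r: distance along the surface; coefficients are illustrative, in 1/mm."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient

    # Fresnel-based boundary term placing the virtual source
    Fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + Fdr) / (1.0 - Fdr)

    z_r = 1.0 / sigma_t_prime            # depth of the real point source
    z_v = z_r * (1.0 + 4.0 * A / 3.0)    # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)   # distance to real source
    d_v = math.sqrt(r * r + z_v * z_v)   # distance to virtual source

    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3
    )

# The profile falls off smoothly with surface distance r, with no
# knowledge of what is inside the object:
profile = [dipole_Rd(r, sigma_a=0.05, sigma_s_prime=1.0) for r in (0.5, 1.0, 2.0, 4.0)]
```

Because the whole model is this 1D falloff curve, it can be evaluated very fast, but it can never reveal organs, bones, or other internal geometry the way a true volumetric solution would.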

I’ve seen in one of those SIGGRAPH RenderMan tricks papers that for Finding Nemo they did some quite amazing stuff with SSS showing internal structures using shadow-buffer techniques, and it rendered quite fast too. If I can find that paper again, I’ll post a link.

Before the current SSS there were node-based solutions, before that shadow-buffer solutions, and before that texture- and lighting-based solutions. People have always overcome limitations with ingenious trickery that got convincing results. Those techniques were quite fast, they are still available, and when they aren’t, some other rendering engine has them.

If you wait for full-blown SSS, then when you get it you’ll wait for better volumetric effects, and when you get that you’ll wait for something else, until you want Blender to do all the work for you. You’ll never realize all your ideas if you’re always waiting on a couple of features.

Why not spend time with the masters in this forum and learn how to use the tools at your disposal? Like Guitar87 said, show us your render and we’ll try to show you how to make it better. When I say “we” I exclude myself, as I suck at such things, but I’ll try to help in any way possible :slight_smile:


What he said.

hey guys

Fakes often simply don’t work if your goal is real realism.

No matter how much you try, some physical effects are close to impossible to fake.

And then it matters whether you are working in stills or in motion.

In motion you can ignore a few details, because you might not even have time to see them.

In stills it is very different.

We will use Blender as an animation and modeling system to replace MAX:

  1. to give students a free tool
  2. to teach them basic rendering

However, SSS or caustics, for example, are things you will only get for real with software that can calculate them. That is currently a problem in Blender, because its internal engine is falling behind where the market is going.

But with Indigo or others those situations can be overcome.

However, for beginners Blender will be more than enough to start with, and car renderings can already be done quite well.

I’ve been trying the shadow buffers, and early results are promising; I think they would serve as a temporary solution for now.
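For anyone curious what the shadow-buffer trick looks like in principle: the light’s depth map stores the distance to the first surface the light hits, so the difference between that and the depth of the point being shaded estimates how much material the light traveled through, and Beer-Lambert attenuation turns that thickness into a transmittance. A toy sketch (names and values are illustrative, not Blender’s actual internals):

```python
import math

def translucency(depth_in_shadow_map, depth_of_point, sigma_t=0.8):
    """Estimate transmitted light through an object using a shadow buffer.

    depth_in_shadow_map: distance from the light to the first surface
        along this ray (read from the light's depth map).
    depth_of_point: distance from the light to the point being shaded.
    sigma_t: extinction coefficient; higher means denser material.
    """
    thickness = max(0.0, depth_of_point - depth_in_shadow_map)
    # Beer-Lambert falloff: light decays exponentially with the amount
    # of material it has to pass through.
    return math.exp(-sigma_t * thickness)

# A point right at the lit surface passes nearly all the light;
# a point three units deep inside the object passes much less.
front = translucency(2.0, 2.0)   # thickness 0
deep  = translucency(2.0, 5.0)   # thickness 3
```

This is exactly why the technique renders fast: it reuses a depth map the renderer already computes for shadows, at the cost of ignoring actual scattering inside the volume.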

I say temporary because hopefully good volume SSS that works with raytracing will make its way into Blender.

I agree with cekuhnen; there are times when faking just doesn’t do it, or you have such a mess of lights and whatnot that it becomes a huge headache to deal with. I’ve done this for one of my videos with particles and SSS: you get nice results, but with tons of things hanging off the object that you have to figure out how to move with the scene. Sometimes having the option to use a real algorithm is nice.

Oh, and Indigo is not free and does not run on Mac. That is not a solution. Right now I see Yaf(a)ray as the only solution moving forward. Yafray has nice fake SSS, but it’s slower than anything I’ve ever seen.

Waiting until someone builds you a full SSS implementation isn’t a solution either.

I really don’t understand this; the current Blender SSS is the best solution in terms of speed and usability, at least until some genius writes a new paper.
I think it’s better to ask for more control (a texture channel for the radius is the only important thing that’s missing) and maybe some optimization than for another SSS shader.

Indigo most certainly is free! It’s not open source, but Ono has never charged a dime for any copy of Indigo. Although it won’t run on a Mac natively, it will run on a Mac using VMware or Parallels. Since OS X is basically just BSD, I wouldn’t be a bit surprised if someone could get it working with WINE.

Yaf(a)ray will be great IF and WHEN he gets it done. Considering the lack of development lately, I’m not holding my breath.

Right, Indigo is not open source. That doesn’t bother me as much as the lack of platform support: VMware is slow, and Parallels requires an Intel Mac. Neither solves the problem, and neither does Wine. It might be a great engine, but it’s just not accessible everywhere the way Blender and Yafray are. I expect Yaf(a)ray will probably be released as slowly as Yafray was. I guess the point is that we really need a good integrated realistic engine to coexist with BI.

> Oh and Indigo is not free and does not run on Mac. This is not a solution. Right now I see Yaf(a)ray as the only solution moving forward. Yafray has nice fake SSS that is slower than anything I’ve ever seen.

Maybe Luxrender will have it?

3delight has true SSS built in and someone recently posted an exporter:

3delight is pretty fast and the results are very good. Also, with programmable shading you get a lot of flexibility, so you can do close to anything you want. For example, you could have implemented SSS long before anyone else had it; RenderMan was used in The Matrix to achieve illumination models that would be hard to do with other rendering engines.

Also, because of the programmability, you can do texture-based falloffs for the SSS effect if you want. In Final Fantasy: The Spirits Within, they used RenderMan for the hair and were able to use the AO from the body to shade the hair, since you don’t want to be raytracing 100,000+ hair strands.
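The texture-based falloff idea is simple to sketch: sample a painted map at the shading point and use it to scale the effective scatter radius before applying the falloff, so areas like ears or wax can be made more translucent than the surrounding material. A toy illustration (the function name and parameters are hypothetical, not any engine’s API):

```python
import math

def sss_falloff(dist, texture_value, base_radius=1.5):
    """Texture-modulated scatter falloff.

    dist: distance from the lit point to the shaded point.
    texture_value: sample of a painted map in (0, 1] that scales
        the scatter radius per point.
    """
    radius = base_radius * max(texture_value, 1e-6)  # guard against zero
    return math.exp(-dist / radius)

# Same distance, but the point painted with a larger texture value
# scatters light farther (weaker falloff):
weak   = sss_falloff(1.0, 0.2)   # small radius, fast falloff
strong = sss_falloff(1.0, 1.0)   # full radius, slow falloff
```

This is essentially the “texture channel for radius” control people are asking for above, expressed as a shader-side computation.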

It also has micropolygon displacement, supports animation, true motion blur, area lights and so on.

Unbiased renderers look nice, but they are so slow. I don’t think any will be good enough to replace a good, flexible, programmable biased rendering engine any time soon.

osxrules, you’ve got a weird way of thinking. The man says he doesn’t want engines that aren’t free, and then you’re starting a bunch of talk about 3delight… :ba:

3delight is free for personal / educational work oogsnoepje.