It's just a simple blend texture that acts on the emit value. Next I will try to use a texture on the lamp and let that affect the emit value. I'm just beginning to understand Caronte's DoF method. I think this method can be used to achieve fake translucency and fake subsurface scattering.
In this link, http://zj.deathfall.com/trans.htm , they explain how to do it using shadow buffers. Now, I know that Blender uses shadow buffers too, and that's the point where I lost it. I have to study this topic in depth to translate it to a Blender rendering. Maybe some of you can help me out?
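For reference, here is the core of the trick as I understand it, boiled down to a few lines of Python (the names are mine and it is untested, just to show the math, not Blender's actual shadow buffer code): the shadow buffer already stores the distance from the lamp to the first surface it hits, so the distance from the lamp to the point you are shading, minus that stored depth, tells you how much material the light had to travel through, and you can run that thickness through an exponential falloff to fake light bleeding through:

import math

def fake_translucency(dist_point_to_lamp, depth_from_shadow_buffer, density=2.0):
    # Thickness the light crossed inside the object: how far the shaded
    # point is from the lamp, minus where the lamp first hit a surface.
    thickness = max(dist_point_to_lamp - depth_from_shadow_buffer, 0.0)
    # Exponential (Beer-Lambert style) falloff; 'density' is a made-up knob.
    return math.exp(-density * thickness)

# Example: the shadow buffer saw the front of the object at 4.0 units,
# the back face we are shading sits at 4.3 units from the lamp, so the
# light crossed 0.3 units of material:
print(fake_translucency(4.3, 4.0))   # about 0.55 of the light gets through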
It makes an object only receive shadows; otherwise it is transparent
(so you can cast shadows on invisible objects).
I've played with shadows in Blender and am annoyed about how they work: the degree of shadow (with only shadow pressed on the lamp) depends on how directly the object is facing the lamp.
I would like to find a way to set all of the normals of the object to face the lamp, so effectively we are using just the depth buffer.
Of course this would be more limited than that shader you pointed to; the light wouldn't bend as it entered an object.
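Conceptually I mean something like the sketch below; I do not know whether Blender's Python API actually lets you override vertex normals (it probably recalculates them), so take the names as hypothetical, plain vector math only:

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def lamp_facing_normals(vert_positions, lamp_pos):
    # Point every vertex normal straight at the lamp so the usual N.L
    # (Lambert) term becomes 1 everywhere, leaving only the shadow
    # buffer's depth test to decide how dark the surface gets.
    normals = []
    for p in vert_positions:
        to_lamp = (lamp_pos[0] - p[0], lamp_pos[1] - p[1], lamp_pos[2] - p[2])
        normals.append(normalize(to_lamp))
    return normals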
OK, I've been messing around with some of the material and texture settings. I think I've made some progress, but it is still faking, and I'm certain it's been done a thousand times before in Blender. I'm still trying to make a Blender translation of this depth buffer thing with its native materials and textures. I will keep you posted. Meanwhile, here are some examples:
I’ve been thinking about the same lately.
Unfortunately we can’t use raytrace tricks like in the example you offered.
It would be nice if we could do something with the thickness of a mesh in the direction it’s facing the camera.
That way you could fake some absorbance; if you combine that with some emittance, or with gradient textures mapped to the lamp's center, it could be done as well.
No refraction though.
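Just to pin down what I mean by faking absorbance from thickness (rough numbers, nothing Blender-specific, the knobs are made up): fade the colour with an exponential based on the thickness along the view direction and add a little emit on top:

import math

def fake_absorbance(colour, thickness, absorb=1.5, emit=0.1):
    # Beer-Lambert style fade: thicker parts of the mesh let less of the
    # colour through; 'absorb' and 'emit' are invented tuning values.
    transmit = math.exp(-absorb * thickness)
    return tuple(min(c * transmit + emit, 1.0) for c in colour)

# A thin part of the mesh (an ear lobe) versus a thick part (the skull):
print(fake_absorbance((1.0, 0.6, 0.5), 0.1))   # stays bright
print(fake_absorbance((1.0, 0.6, 0.5), 2.0))   # goes dark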
What surprised me in the tutorial is that they used some sort of hack as well, and it seemed as though it was done in Maya.
I always thought that Maya could do this stuff already.
But keep in mind: caustics, SSS, refraction, G.I., HDRI, DoF, fur, etc. are just minor functionalities. Look at the Final Fantasy movie and the lack of success it had.
The main things that make a rendered picture or animation pleasant to look at are (in order of importance):
The theme
Animation
Lighting/shadow
Special effects: caustics, HDRI, G.I., etc.
The reason the Final Fantasy movie flopped is that they reversed the list above. For that movie, I think photorealism was of paramount importance and the theme came at the very last. The animation was horrible. In a subconscious kind of way, the facial animation and lip sync were so unrealistic that it made me nauseous. Your brain just won't accept a realistic face that talks like a Firebird doll. You see, it's not compulsory to put SSS objects, caustics objects and such in the scene. Look around: you don't see those every day, do you?
OK, that was a side step. It would be fun to have true SSS in Blender, and I'm certain it will one day be incorporated into Blender. I myself am trying out some routines in the code that could produce true SSS. I just saw another paper on this topic that supposedly can do hardware-accelerated SSS/translucency. I'm looking into that too. As a quick fix, someone could build some code into Tuhopuu where the shadow buffer could be used for easier and more believable translucency.