Here's a really small render of the living room that Exar (our clan: me + Blenergetic + UltraX + someone else that I always forget) was working on a while back:
Blenergetic: The base mesh of the building and the stairs
UltraX: some lamps (not shown… I forgot to include them… sorry man)
Me: The couch, ceiling fan, bevelling on the stairs, textures, plants, pots, table, background, lighting setup, carpet.
Anyway, I used irradiance caching (probably not with the best parameter setup).
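For anyone curious what irradiance caching actually does: expensive hemisphere samples get stored and reused wherever a cheap error metric says they are still valid. A minimal Python sketch of Ward-style caching (the flat-list lookup and all names here are illustrative, not the actual renderer code, which would use a spatial structure):

```python
import math

class CacheRecord:
    """One cached irradiance sample: position, surface normal, computed
    irradiance, and a validity radius (e.g. the harmonic mean distance
    to surrounding geometry)."""
    def __init__(self, pos, normal, irradiance, radius):
        self.pos = pos
        self.normal = normal
        self.irradiance = irradiance
        self.radius = radius

def weight(p, n, rec):
    # Ward-style error metric: shrinks with distance and normal deviation
    d = math.dist(p, rec.pos)
    ndot = max(0.0, min(1.0, sum(a * b for a, b in zip(n, rec.normal))))
    return 1.0 / max(d / rec.radius + math.sqrt(1.0 - ndot), 1e-6)

def lookup(cache, p, n, tolerance=0.1):
    """Blend nearby records whose weight beats 1/tolerance; return None
    if none qualify, meaning the caller must compute a fresh hemisphere
    sample and store it as a new record."""
    total_w = total_e = 0.0
    for rec in cache:
        w = weight(p, n, rec)
        if w > 1.0 / tolerance:
            total_w += w
            total_e += w * rec.irradiance
    return total_e / total_w if total_w > 0.0 else None
```

The `tolerance` knob is basically the "parameter setup" part: lower it and you get more fresh samples (slower, cleaner), raise it and cached values get stretched further (faster, blotchier).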
I've used the living room mesh in a lot of my other test scenes too.
Hmm… I haven't really generated any proper documentation yet, but look in the World buttons, GI panel.
Caustics should work for all lamps; just make sure TraShadow is on. But caustics work best with sky textures (a big light source)… anyway, the caustics you get won't be very appealing… in future releases I'll probably have them handled by photon mapping. No caustics-only lamps though… because those suck.
I stopped those because they were so slow!! lol. Actually I'm considering implementing shadow buffers for all lamps or something.
Anyway, I will revisit distributed raytraced shadows when I implement photon mapping. Henrik mentions a very cool optimization for soft shadows called "shadow photons".
What these do is shoot photons from the light source that transmit through objects and help identify the areas that are in shadow… using these we can either intelligently shoot shadow rays and REALLY cut down on them (at least 60% fewer, if not more) or estimate the shadow directly from the map.
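Roughly, a shadow-photon pass could look like this (a toy Python sketch with sphere-only geometry; all names are mine, and it boils Jensen's description down to a lit/shadow/penumbra classification where only the penumbra case still needs shadow rays):

```python
import math

def trace_all_hits(origin, direction, spheres):
    """All sphere intersections along a ray, sorted by distance.
    Spheres are ((cx, cy, cz), radius) tuples; direction is normalized."""
    hits = []
    ox, oy, oz = origin
    dx, dy, dz = direction
    for (cx, cy, cz), r in spheres:
        lx, ly, lz = cx - ox, cy - oy, cz - oz
        b = lx * dx + ly * dy + lz * dz
        disc = b * b - (lx * lx + ly * ly + lz * lz) + r * r
        if disc < 0.0:
            continue
        s = math.sqrt(disc)
        for t in (b - s, b + s):
            if t > 1e-6:
                hits.append((t, (ox + dx * t, oy + dy * t, oz + dz * t)))
    hits.sort()
    return hits

def shoot_shadow_photons(light, directions, spheres):
    """Unlike regular photons, the ray keeps going past the first hit:
    the first surface gets a direct photon, every later surface along
    the same ray gets a shadow photon (it is occluded by the first)."""
    direct, shadow = [], []
    for d in directions:
        hits = trace_all_hits(light, d, spheres)
        if hits:
            direct.append(hits[0][1])
            for _, p in hits[1:]:
                shadow.append(p)
    return direct, shadow

def shadow_query(point, direct, shadow, radius=0.5):
    """Classify a shading point from nearby photons; only 'penumbra'
    still requires actual shadow rays."""
    near_d = any(math.dist(point, p) < radius for p in direct)
    near_s = any(math.dist(point, p) < radius for p in shadow)
    if near_d and near_s:
        return "penumbra"
    if near_s:
        return "shadow"
    if near_d:
        return "lit"
    return "penumbra"  # no information: fall back to shadow rays
```

The big ray savings come from the first two cases: regions covered purely by direct or purely by shadow photons never spawn a shadow ray at all.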
OH! And if you've turned on AO, make sure you turn it off before picking path tracing… if AO is on it will use both AO and path tracing (making things SLOW)…
Unless you intended to use AO + path tracing, this is not recommended.
I can’t seem to be in #blenderchat at the same time you are.
Anyway, for SSS the approach Jensen took is to cache irradiance ‘samples’ in an octree and then use those samples to calculate the actual response for each raytracing sample.
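The "actual response" step boils down to summing a diffusion kernel over the cached irradiance samples. A sketch using the standard dipole R_d from Jensen et al. 2001, with a flat list standing in for the octree (the coefficients are roughly the paper's marble values; everything else here is illustrative, not the code I wrote):

```python
import math

def dipole_Rd(r, sigma_s_prime, sigma_a, eta=1.3):
    """Diffuse reflectance R_d(r) of the dipole model (Jensen et al. 2001):
    a real source below the surface plus a mirrored virtual source above it."""
    sigma_t_prime = sigma_s_prime + sigma_a
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Boundary condition via the internal diffuse reflectance Fdr(eta)
    Fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + Fdr) / (1.0 - Fdr)
    zr = 1.0 / sigma_t_prime            # real source depth
    zv = zr * (1.0 + 4.0 * A / 3.0)     # virtual source depth
    dr = math.sqrt(r * r + zr * zr)
    dv = math.sqrt(r * r + zv * zv)
    return (alpha_prime / (4.0 * math.pi)) * (
        zr * (sigma_tr * dr + 1.0) * math.exp(-sigma_tr * dr) / dr ** 3 +
        zv * (sigma_tr * dv + 1.0) * math.exp(-sigma_tr * dv) / dv ** 3)

def sss_response(x, samples):
    """Sum the diffusion response over cached irradiance samples.
    Each sample is (position, irradiance, area); a real implementation
    gathers them hierarchically from the octree instead of a flat list."""
    total = 0.0
    for pos, E, area in samples:
        r = math.dist(x, pos)
        total += dipole_Rd(r, 2.6, 0.0041) * E * area  # marble-ish coefficients
    return total
```

The octree exists precisely because this sum is over every sample on the surface: distant clusters can be lumped into one aggregate term instead of being visited one by one.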
The difficulty in Yafray was to access the geometry in a non-invasive way and also, truthfully, to understand the raytracing process from the code.
But I managed to get the SSS to work in that environment.
Yet there are 2 major problems with the method as I see it:
1- Generating samples at an optimal density over the surface is memory intensive. SSS introduces metric components into the image, namely the mean scattering depth, and this in turn determines the density of sampling that should be performed.
Diego and I came up with a beautiful and efficient way to generate the samples, by calculating the intersection of a model with a grid, much in the way the marching cubes method uses a grid to generate meshes.
The advantage is that the resulting sample set does not depend on the original resolution of the mesh, and thus there is a considerable memory and time saving.
On a small model this is manageable, but on a large model the memory requirements are incredible, for the payoff of a minimal contribution to the scattering. I did not implement the octree method because I saw no point in building a complete octree to 'use' only samples that might contribute very little to the scattering. So my method is slow and memory intensive at the same time!
I also found that the scattering does not seem to respond well to the various materials Jensen provided. In some cases bad artifacts appeared, although this could also be caused by not caching the irradiance, or by my lack of understanding of how the global photon map is supposed to work in Yafray. Apart from marble and ketchup, skin and milk looked terrible.
2- Determining the response for SSS in a GI setting is unclear to me.
Jensen made a comment about that in his last SSS paper and even gave a reduced formula for it. But because a) I don't fully understand how GI is really done in any raytracer, and b) I wasn't sure how it was done in Yafray, I didn't get it to work.
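Going back to point 1: the grid-based sample generation can be sketched like this (a toy Python version where an implicit sphere SDF stands in for the real model, and cell centers stand in for proper edge interpolation; all names are mine, not Diego's or my actual code):

```python
import itertools
import math

def grid_surface_samples(sdf, lo, hi, n):
    """Walk a regular n^3 grid over the cube [lo, hi]^3 and emit one
    sample per cell whose corners straddle the surface (sign change in
    the signed distance function), in the spirit of marching cubes.
    Sample density depends only on grid spacing, never on how finely
    the original mesh happens to be tessellated."""
    h = (hi - lo) / n
    samples = []
    for i, j, k in itertools.product(range(n), repeat=3):
        corners = [sdf(lo + (i + a) * h, lo + (j + b) * h, lo + (k + c) * h)
                   for a, b, c in itertools.product((0, 1), repeat=3)]
        if min(corners) < 0.0 < max(corners):
            # Cell center as the surface sample; a real implementation
            # would interpolate the zero crossing along the cell edges.
            samples.append((lo + (i + 0.5) * h,
                            lo + (j + 0.5) * h,
                            lo + (k + 0.5) * h))
    return samples

def sphere_sdf(x, y, z, r=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.sqrt(x * x + y * y + z * z) - r
```

Picking the grid spacing from the mean scattering depth ties the sample density directly to the material, which is exactly the metric component mentioned under point 1.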
We got some cool images though!
Anyway, I'll try to check the chat room for a while. What time zone are you in?