One thing to be careful about when thinking about usability is not to just extrapolate your own experience and assume it's the same for everyone. I know you do architecture, so alpha-mapped leaves probably come up a lot, but I don't think it's a big issue for most other Blender users. I've used Blender for a long time, full time, every single work day, and I've never once needed transparent shadows. I'm not saying that makes them useless, just that this probably isn't causing headaches for as many people as you think.
There are definite performance impacts here. It's not a good idea to commit changes like this, which will cause performance problems, without fully understanding their implications or at least running some benchmarks first, and I think you should revert it until there's a better solution.
When that feature first went in, I felt the same way: it seemed illogical to have to set it on the receiving material, and I was annoyed it wasn't on by default. But after recently working in the raytracer code, I understand why it is, and should be, that way.
Here’s how raytraced shadows work:
- On the currently rendered surface, for each sample of each ray-shadow-casting lamp, a ray is traced from the current point on the surface to the light source.
- If that ray intersects another piece of geometry, the point is considered to be in shadow (since geometry is blocking the line of sight between the lamp and that point on the surface).
- For soft shadows, this process is repeated many times, aiming at different points on the area light, and then an average is taken to see what proportion of rays are blocked, and therefore how dark the shadow is at that point.
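The steps above could be sketched roughly like this (a simplified illustration, not Blender's actual C code; all names here are hypothetical, and each occluder is reduced to a plain intersection-test callable):

```python
def trace_shadow_ray(surface_point, light_point, occluders):
    """Return True if any occluder blocks the segment from the current
    surface point to the light (i.e. the point is in shadow)."""
    return any(hit(surface_point, light_point) for hit in occluders)

def soft_shadow_factor(surface_point, light_samples, occluders):
    """Repeat the shadow trace toward different points on the area light
    and average the results: 0.0 = fully lit, 1.0 = fully in shadow."""
    hits = sum(trace_shadow_ray(surface_point, p, occluders)
               for p in light_samples)
    return hits / len(light_samples)
```

The key point is that each shadow ray is a cheap yes/no intersection test; nothing about the occluding geometry's material needs to be evaluated.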
For transparent shadows the process is much more complicated. Rather than just checking for an intersection and considering the point in shadow, the renderer has to examine the material of the geometry that was hit and find out its alpha value. That isn't as simple as reading the alpha variable, since alpha can be affected by textures, so the entire material/texture stack must be shaded just to find the alpha value at that intersection point. Not only can this be very time consuming if there are lots of procedurals involved, but with soft shadows it gets multiplied even further. For example, if you're rendering a surface that's softly shadowed by a lamp with a sample count of 5 in the UI, the shadow-casting material must be shaded internally 5 * 5 = 25 times per pixel just to calculate the shadow opacity, regardless of whether that material is even transparent at all, and none of that shading is something you see in the final render! This is a huge overhead, and it's of course totally inefficient to calculate all this when (as in most situations) none of your materials cast transparent shadows.
A smarter way to do it could be to inspect the texture stack at the start of the render process, find out whether each material affects alpha at all or will always be totally opaque, then cache that information and use it to skip the transparent shadow calculations. But that needs to be implemented by someone first, so I recommend the recent commit be reverted until something like that happens.
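That precomputation pass might look something like this (a minimal sketch under assumed data structures; the field names and the per-material check are hypothetical):

```python
def can_cast_transparent_shadow(material):
    """One-time, up-front check: if the material's alpha is 1.0 and no
    texture in its stack maps to alpha, it can never cast a transparent
    shadow, so the expensive shading can be skipped for it."""
    if material["alpha"] < 1.0:
        return True
    return any(tex["affects_alpha"] for tex in material["textures"])

def build_opacity_cache(materials):
    """Precompute, per material, whether transparent shadow shading is
    ever needed; shadow rays hitting an always-opaque material can then
    fall back to the cheap boolean intersection test."""
    return {m["name"]: can_cast_transparent_shadow(m) for m in materials}
```

The cache is built once per render, so the per-ray cost in the common all-opaque scene drops back to a plain intersection test.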