Here’s all I did:
Rotate the default cube a little, and add a plane just off its surface (this file uses a Shrinkwrap modifier to make that easy, but it’s not needed).
Assign a Transparent BSDF to the plane.
Assign a Principled shader to the box, turn clearcoat up to 0.5, and give it a mid-tone pink, because why not?
Add an area light, small and close enough that you can see the edge of its reflection over the box and the edge intersecting the plane.
I used the default camera and rendered. (A rough script version of this setup is sketched below.)
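For anyone who wants to reproduce it without my file, here’s a minimal sketch of the setup as a script, assuming Blender 2.8x/2.9x and the default startup scene. The exact rotations, color, and light values are my approximations, not the original file.

```python
import math
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Rotate the default cube a little.
cube = bpy.data.objects["Cube"]
cube.rotation_euler = (math.radians(15), math.radians(10), 0.0)

# Principled shader on the cube: clearcoat 0.5, mid-tone pink.
pink = bpy.data.materials.new("CubePink")
pink.use_nodes = True
bsdf = pink.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.4, 0.5, 1.0)
bsdf.inputs["Clearcoat"].default_value = 0.5
cube.data.materials.clear()
cube.data.materials.append(pink)

# A plane just off the cube's surface, with a Transparent BSDF.
bpy.ops.mesh.primitive_plane_add(size=1.0, location=(0.0, 0.0, 1.05))
plane = bpy.context.active_object
trans = bpy.data.materials.new("PlaneTransparent")
trans.use_nodes = True
nodes = trans.node_tree.nodes
nodes.remove(nodes["Principled BSDF"])
shader = nodes.new("ShaderNodeBsdfTransparent")
trans.node_tree.links.new(shader.outputs["BSDF"],
                          nodes["Material Output"].inputs["Surface"])
plane.data.materials.append(trans)

# A small area light, close enough to throw a visible highlight edge.
bpy.ops.object.light_add(type='AREA', location=(1.5, -1.5, 3.0))
light = bpy.context.active_object.data
light.size = 0.25
light.energy = 100.0
```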
For some reason, I’m getting a reflection? shadow? Something from the plane, even if it’s offset from the box pretty generously.
Am I doing something wrong? It’s not noticeable with some materials, but Very noticeable in others, and depends on lighting.
In my case, I was using an image for a logo that gets applied and couldn’t figure out why I was getting a slightly different color/sampling/Something behind the plane the logo was sitting on.
Well, your light is extremely narrow on the surface, and opacity is always a bit of a problem, but anyway, try Render Properties → Sampling → Integrator: Branched Path Tracing.
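If you’d rather flip it from a script, this should be the equivalent (property names from the pre-3.0 Cycles API, where Branched Path Tracing still exists). Since transparency is involved, raising the transparent bounce limit is also worth a try:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.progressive = 'BRANCHED_PATH'  # vs. the default 'PATH'

# Extra headroom for stacked transparent surfaces; 16 is an arbitrary example.
scene.cycles.transparent_max_bounces = 16
```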
Well thank you. That did it. Of course, render time on another test went from 27.7 to 56.6, so That hurts.
And yes, the light is very narrow for that. I was just trying to replicate the issue on a really simple scene. I’m getting the same problem on a scene with the light much further away.
But the material has an orange peel to it. I tried DECALmachine as well, to be sure it wasn’t just my method of adding a graphic. Then I narrowed it down to the clearcoat on the material behind it causing the issue anywhere a light creates a highlight.
I guess I have some reading up to do on the differences between Path Tracing and Branched Path Tracing now, because I can see I don’t need as many samples to clear up the noise. But the documentation keeps saying the settings are multiplied by the AA samples, and I can’t find any place to change the AA samples.
I still don’t know what the documentation is referring to with AA samples, but I was able to get the render time down even further by using adaptive sampling. I just set the noise threshold, then cranked the sample count way up.
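For reference, the adaptive sampling setup looks roughly like this in Python (properties from 2.83+ Cycles; the threshold and sample count here are just example values, not the ones from my scene):

```python
import bpy

scene = bpy.context.scene
scene.cycles.use_adaptive_sampling = True
scene.cycles.adaptive_threshold = 0.01  # noise threshold: lower = cleaner but slower
scene.cycles.samples = 4096             # crank the max samples way up
scene.cycles.adaptive_min_samples = 0   # 0 = let Cycles pick the minimum automatically
```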
I’d be pretty interested in watching a decent video of someone going through several scenes to talk about how all the settings affect each scene in terms of speed, quality, etc., and what they ended up using in the end. There’s no one-size-fits-all approach, too many scenes are different, and everything I’ve seen so far has just been “Use these settings to make Blender render faster.” And then you get there, and tip #1 is just “use fewer samples.” Yeah, no kidding, you dork.
I want to see the approach to setting up a single product shot, with a transparent shadow on the ground. Then a full interior with natural lighting. Then a full interior with interior lighting. Then a character with hair, and maybe lots of displacement, if those would work paired together. Then a landscape scene with Tons of geometry, instanced and not instanced. Particles? Fluids? Basically, all the scene types that would make a difference.
Does it ever make sense to render on CPU, other than for the memory limitations of a GPU? And in that case, how do you deal with the memory limitations of the GPU, and how do you then optimize for CPU? Maybe the scene can be split and composited? Is there a good workflow for that, or do you just have to set up the two renders in two different scenes?
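(For anyone scripting comparisons like that, toggling the device itself is straightforward, I believe; the 'CUDA' below is only an example and depends on your hardware:)

```python
import bpy

# Pick the compute backend in the add-on preferences.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'  # or 'OPTIX', 'OPENCL', 'NONE', per your hardware

# Then choose per-scene whether Cycles uses it.
scene = bpy.context.scene
scene.cycles.device = 'GPU'  # fall back to 'CPU' when the scene won't fit in VRAM
```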
OK, OK, so it’d need to be a pretty in-depth, full training. But that’s what I want to see, and Current. At this point, training this stuff in 2.79 is probably a bit old.
Alas, but behold, the greater goal will be achieved: to fight the soul-eating naga monster… oops, sorry, different context. The greatest challenge will always be: do I need this or that to catch up to my goals, and what are they? (No, nothing about never reaching them…)

For me, it’s always fascinating how people in different sub-branches of the computer-edited-image world reach their targets with completely different approaches. @BlenderBob, for example, said something in one of his videos about cinematic models made in Maya (?) that were so far in the background, and so dark or blurred, that you can’t see them at all… and it took him weeks. He knows the 3D industry.

There are so many people out there who had fantastic videos at first, and now their YT thumbnails are just clickbait and sponsored by whatever (which is mostly overpriced for a hobbyist, I think), or the vids are trash right from the beginning. I’ve been having great fun using an i3 with NO extra GPU for some weeks now, and I’ve learnt a ton of things in the last few weeks here on BA, answering some real problems and facepalming at some questions, but I think they don’t understand RTFM (and Blender’s user manual is not sooo bad).
Oh, no. It’s actually one of the better manuals, for the most part. And I did find that you control the AA samples with the direct samples, then change the individual samples like diffuse, glossy, transmission, etc. separately, but those act as multipliers of the AA count. The manual isn’t so clear about that, because it doesn’t say where the AA comes from. Its example says: “To get the same number of diffuse samples as in the path tracing integrator, note that e.g. 250 path tracing samples = 10 AA Samples × 25 diffuse samples. The Sampling panel shows this total number of samples.”
But it never really tells you where the AA samples come from. I got there, but I had to do some messing around first.
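In script terms, what I ended up poking at looks like this (branched-path properties from the pre-3.0 API; the sample counts mirror the manual’s 10 × 25 = 250 example rather than my actual scene):

```python
import bpy

scene = bpy.context.scene
scene.cycles.progressive = 'BRANCHED_PATH'
scene.cycles.aa_samples = 10        # the "AA Samples" the manual multiplies by
scene.cycles.diffuse_samples = 25   # 10 AA x 25 diffuse = 250 path-tracing-equivalent samples
scene.cycles.glossy_samples = 4     # every per-type count is multiplied by the AA samples
scene.cycles.transmission_samples = 4
```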
And yes, I’m sure we’re all guilty of spending a bunch of time on some silly detail that you don’t end up seeing. Or making one specific element look Awesome, while the rest of the image is just Meh in comparison, so that really good element ends up getting overlooked. I always make myself look at whatever I’m working on every time I get up and come back to my desk, so I can get more of that 10 ft view and keep that to a minimum. That’s harder to do with animations, though, because you have to step back and watch them; it’s not just a single image to look at.