Yes, splitting the mesh into different objects does help with memory use.
Also, the default values of the Subdivision settings on the Render Properties tab are too high, IMO; I changed my defaults to look like this:
And it helps a LOT. It obviously depends on the scene and the final resolution you’re rendering at, but I think that’s a good starting point.
And a final tip, which I think is a bug in Cycles: set the Subsurf modifier’s Levels Viewport to 0. If you set it higher, the mesh gets subdivided again on top of the value you set in the Subdivision settings.
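As a rough illustration of why that stray viewport level hurts (a plain-Python sketch, not Blender code, and the numbers are hypothetical): each Catmull-Clark level splits every quad into four, so if the modifier’s level stacks on top of the Subdivision settings, the levels add up and the quad count explodes.

```python
def subdivided_quads(base_quads, levels):
    """Each Catmull-Clark subdivision level splits every quad into 4."""
    return base_quads * 4 ** levels

base = 1000  # hypothetical base mesh size

# Modifier's Levels Viewport left at 2, plus (say) 2 more levels from
# the Subdivision settings: the levels stack, so the mesh is diced as
# if it had 4 levels, not 2.
print(subdivided_quads(base, 2))      # quads with the modifier alone
print(subdivided_quads(base, 2 + 2))  # quads when the levels stack
```

With these numbers that is 16,000 quads versus 256,000, which is why zeroing out the modifier’s viewport level makes such a difference in memory use.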
The situation around OpenSubdiv in general is getting better (thanks to the work by Hans), but the devs unfortunately don’t appear to be in any hurry to fix the issue you’re talking about, since microdisplacement is treated as a low-priority feature even though it’s standard in most render engines.
Microdisplacement is still worth using, though, because from 2.8x on it actually interpolates UV coordinates correctly (though your scene might look different in the viewport).
What is “proper”? It’s not a property you can measure directly; you can only base it on observations. This setup allows control over the falloff curve’s shape and strength; I never needed anything more, and it’s very intuitive. I’m not exposing the normal input because leaving it unconnected can lead to a garbled data channel, but I made this before I knew I could use a dummy bump node to fix that.
Well, a diffuse shader is an extreme simplification of a volume shader, but it still does the job. I had a quick look at it in the other thread, and I guess it’s OK. I’m not sure if a dot product is quicker than 1 − Facing, but the latter does allow a normal input (which I don’t expose by default anyway).
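For reference, a small numeric sketch (plain Python, not node code) of how the two relate: as far as I know, the Layer Weight node’s “Facing” output at the default blend of 0.5 is just 1 − |N·I|, so “1 − Facing” and the dot product give the same value there, and the blend parameter only warps the curve. The power remap below is my assumption about that warping, not verified against the Cycles source.

```python
def layer_weight_facing(cos_theta, blend=0.5):
    # Sketch of a Layer-Weight-style "Facing" falloff.
    # cos_theta = |dot(N, I)|, the cosine between normal and view dir.
    # At blend = 0.5 the response is linear: facing = 1 - cos_theta.
    # The remap for other blend values is an assumption on my part.
    t = max(0.0, min(1.0, cos_theta))
    if blend != 0.5:
        b = max(1e-5, min(1.0 - 1e-5, blend))
        b = 2.0 * b if b < 0.5 else 0.5 / (1.0 - b)
        t = t ** b
    return 1.0 - t

# At the default blend, 1 - Facing recovers the plain dot product,
# so either setup computes the same falloff value.
cos_theta = 0.3
print(round(1.0 - layer_weight_facing(cos_theta), 6))  # 0.3
```

So at the default blend the two are mathematically identical, and any speed difference would come down to how the nodes are compiled, not the math.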
I thought he was implementing the full paper Thea was based on, which IIRC includes some heavy trig math that I find completely unnecessary; I’m unable to find it at the moment. Roughness changes with the physical state as well: a deflated rubber balloon is much rougher than an inflated one, and stretching skin makes roughness change from isotropic to anisotropic, so things can get ridiculously complex really fast. So to me, artistic control beats costly accuracy very often. But hey, I love doing senseless things myself sometimes (too often, actually).
Well, there are 20 or 21 hired devs to handle 13 modules.
That means fewer than two developers per module.
And of course, it doesn’t work out that neatly, because things are a little more complex.
Developers don’t all have the same experience. Some joined the team recently; some have been part of it for more than a decade.
There is a need for people with cross-cutting knowledge to maintain harmony between modules.
And they are human beings, not CPU cores; they need to be able to break out of a certain routine from time to time.
The Render & Cycles module is shared between four developers: two of the most experienced, Brecht and Sergey; Stefan, who is employed by Tangent Animation; and Lukas (I don’t know who employs him, or whether he is still a student, but he does a great job).
They try to build a solid organization with what they have.
I don’t expect them to have only one developer per module.
That would be fragile management: what would happen if that person quit?
If they had more than 26 hired developers, we could probably expect one developer dedicated to each module.
And Cycles would probably be one of the top-priority modules to receive one.
This is an interesting paper, and the future research should be interesting, but frankly it’s not ready to drop into a renderer…
From the conclusion: “Determining when to use our method is another important aspect for future investigation. Attempting many connections that are ultimately unsuccessful can consume a large amount of computation. While glints would benefit…”
For the non-renderer developers who read papers like these: keep in mind that Brecht and the other Cycles devs have usually read them too.
I think there was hope that, given the liberal Cycles license, more development would happen outside Blender. Brecht is also one of Blender’s chief architects, so he can’t devote all his time to Cycles. And then there is E-Cycles, which offers improved render times, at a cost.
Genuine question: instead of experimenting with these new (impressive nonetheless) papers, why is it not possible to implement an already-working caustic solution like the one in LuxCore?
My understanding was that LuxCore uses a “hybrid back/forward path tracing” approach, which is not exactly what we usually call BiDir since it is much faster, but it might reuse the same bidirectional tracing code.
Thanks for taking the time to answer!
Dr. Zsolnai-Feher answered in the comments that he would also like to see that technology in Blender. And as we know from his videos, he very often uses Blender to illustrate examples. So I guess it’s not that unrealistic to consider implementing it. And wasn’t the Mitsuba renderer used for some of those techniques? So it’s more than just theory.