Edit: Another render engine that I think makes photorealistic results a bit easier to obtain is FStorm, which is unfortunately only available for 3ds Max. See some examples: https://fstormrender.com/photos/gallery/
Things I think Blender could only benefit from:
better scene measurements (to deliver realistic results, objects, surroundings/atmosphere and camera need to work together)
better tweaking of all things Camera (like what the Photographer Addon or K-Cycles do, but included in Blender natively)
A simple “Save…” and “Load…” button for the tonemapping/“taste” settings like in the Corona screenshot (from the Corona forum, not mine), instead of always having to add some sharpness and glare in the Blender compositor (see the sketch after this list).
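To illustrate the kind of manual compositor work I mean, here is a minimal sketch that wires up a glare and sharpen pass through Blender's Python API. The node choices and values are just an example setup, not a recommendation:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                      # enable the compositor node tree
tree = scene.node_tree
tree.nodes.clear()

# Render input and final output
rlayers = tree.nodes.new("CompositorNodeRLayers")
composite = tree.nodes.new("CompositorNodeComposite")

# Subtle glow around bright areas, roughly what a lens would do
glare = tree.nodes.new("CompositorNodeGlare")
glare.glare_type = 'FOG_GLOW'
glare.mix = -0.9                            # keep the effect subtle

# A touch of sharpening on top
sharpen = tree.nodes.new("CompositorNodeFilter")
sharpen.filter_type = 'SHARPEN'
sharpen.inputs['Fac'].default_value = 0.1

tree.links.new(rlayers.outputs['Image'], glare.inputs['Image'])
tree.links.new(glare.outputs['Image'], sharpen.inputs['Image'])
tree.links.new(sharpen.outputs['Image'], composite.inputs['Image'])
```

Having a save/load preset for exactly this kind of chain is what I'm missing natively.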
Yes, the “shader response” is better in FStorm. There is a certain real-life randomness in the pixel noise. But do note that this is a raw render with no post, and half the shaders are just simple enough to resemble the original. The artificial lighting is also not matched to the original, and the colors are off. But there was no point in refining the scene. It did its job in proving that Cycles is good enough.
I also tested Octane. And although I liked the results more, I discarded it, and the reason is simple: it was nowhere near as elegant to use as Cycles (at least with Blender).
My point is, don’t expect that you can put anything into a scene and FStorm will turn it into photorealistic viz.
I should be more clear with my words. Sorry, I didn’t mean FStorm is just realistic out of the box. I’ve been trying realism with Cycles, and it can do it for sure. Your render looks awesome! I can’t really put my finger on what exactly it is that makes stuff look more real, maybe you got a point with the ‘random pixel noise’.
My points are mainly based on a) ease of use and b) time invested.
Here’s a screenshot of one of my tests so you know I’m not just discussing with you guys without having tried it first. And I’m sure I could make it even better. The test was to get a result out of Blender that is as good (contrast, sharpness etc.) as if it were done with post-production.
I think that obviously we could generally speak about mean sizes… so, for example, we could say that an adult person could be 1.65-1.75 meters tall… the same for a tree…
For example…this is a list with 3 scale classes
There is a real-life scale for a person. We all know the average size or height of humans in general. My point is that trees have a very wide range of heights. Some can be very big and very tall, so @Hikmet’s suggestion that the potential reason I am having that issue is that it’s not real scale is ridiculous.
He is entitled to his opinion of course. Here is an example of tree heights:
EDIT: @Hikmet Btw I appreciate you trying to help. Thanks
I know that, so what’s the solution to the tree trunk and branches shading issue? Reduce the tree scale because Cycles has a problem with the shadow terminator?
Yep, that was pretty vague, sorry. I meant: Blender Units only make sense in Blender, while m, cm, mm etc. are measurements that are fixed to something real, that work in reality and in any (or most) other 3D software.
I worked with 3ds Max and C4D, and still use MoI3D besides Blender. I can’t even tell you why exactly, but it feels harder to do in Blender, even though from a modelling standpoint it isn’t. Maybe because scale needs to be applied in the process, maybe because if I model something that is, say, 20x20x5 centimeters, the base grid doesn’t adapt on its own, and the camera and light look huge and I have to change them. But I don’t want to be thinking about these things as a user. Cool to have the option to change it, but it shouldn’t be necessary imo.
Rhino, for example, asks you what measurement you want for your new project. Takes all the guesswork out. Hope it makes more sense now.
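For what it’s worth, the closest scene-level equivalent in Blender can be scripted and saved into the startup file. A minimal sketch, with the centimeter setup purely as an example choice:

```python
import bpy

unit = bpy.context.scene.unit_settings

# Use real-world metric units instead of abstract Blender Units
unit.system = 'METRIC'
unit.length_unit = 'CENTIMETERS'   # display lengths in cm
unit.scale_length = 1.0            # keep 1 Blender Unit = 1 m internally
```

It doesn’t fix the grid, camera and light defaults not adapting, which is the part I’d like handled automatically.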
Staying closer on topic though: I think scale makes or breaks the illusion of realism as much as color, light or the optical effects that a real camera/lens would have.
I think the debate about (camera) post FX is kind of beside the point of this thread.
“Correlation does not imply causation” comes to my mind.
While it is true that post (in general) has an enormous effect on image quality, and that these kinds of solutions implemented directly into the render camera/viewport can be enormously productive, their effect should be separated from the issue at hand and treated as a separate category that is merely related/correlated.
When it comes to compositing in general, all artists using whatever DCC plus any renderer are on equal footing. Everything that is done after rendering is a matter of choice; there are several options, no limitation on using any of them, and no dependency on what came before.
There is not that much more skill/time/effort necessary either to do these kinds of effects in Nuke, Resolve/Fusion or AfterFX (or whatever else there is).
Artists can be grouped by their choice of comp tools, but one can hardly separate them by looking at the end results.
Even if we only look at the results achieved with Blender+Cycles alone, we will get a fruit salad of apples and oranges as a result, since Blender artists use all the compositing solutions that are available.
We have not observed a pattern that relates a specific compositing tool to more visual quality coming from a purely technical advantage.
Based on this I would reject any meaningful relationship between the availability of post FX in Blender (or any other renderer) and the maximum amount of photorealism Cycles (or any other renderer) can achieve.
The abilities of the compositor are equally distracting when judging Cycles itself.
Everything can happen in post, a seasoned compositor can “fix” almost everything given enough time, but what happens in post should stay in post.
If we want to look at the problem of photo realism more scientifically we need to average out or nullify this part of the equation (compositing as a whole) to get a clear picture of and from Cycles itself.
I use the same compositing workflow and the same tools for images I render in Cycles and in Karma (and whatever other renderer I might get my hands on), everything that happens after rendering is equal, therefore I can totally ignore its effect on the end result.
…so… what does make this (some kind of) photorealistic (not that I claim it is)… when we talk about “correct” or “better” light distribution? In my example the upper light is less bright and the seats
aren’t lit by this yellow light from the cupboard, so I tried to add some under the “desk”…
and also the chairs seem to have the right shape of shadow, but there is some light source missing, because the lower part is more in shadow.
So then, isn’t this (adding fill lights that aren’t real emitters) what artists did before PBR, and still do in real-time engines, to get a “more realistic” look, like the “extra light” known in every game engine?
Is it really Cycles that doesn’t get the light right, or does the artist:
not know which parameters give better lighting
do a not-so-good lighting job (placing light sources at proper locations)
( Since it seems to use Oren-Nayar… when using roughness > 0 ??)
And I didn’t even mention this special light-probe thing where you use dedicated data for every light source, matching the actual light bulb model you want to have in the lamps.
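If this refers to measured light distribution data (IES profiles, which is my assumption), Cycles can use those too via the IES Texture node on a light. A minimal sketch, with the file path as a placeholder for a manufacturer profile:

```python
import bpy

# Create a point light and enable its node tree
light = bpy.data.lights.new("Bulb", type='POINT')
light.use_nodes = True
nodes = light.node_tree.nodes
links = light.node_tree.links

# Drive the emission strength with a measured IES distribution
ies = nodes.new("ShaderNodeTexIES")
ies.mode = 'EXTERNAL'
ies.filepath = "//bulb_profile.ies"   # placeholder path to an IES file

emission = nodes["Emission"]          # default node created by use_nodes
links.new(ies.outputs['Fac'], emission.inputs['Strength'])
```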
If you aren’t getting enough light in an interior scene, have you tried increasing the bounces and min bounces settings? The defaults in Cycles won’t give you the full brightness of an enclosed room (a sketch of the relevant settings follows below).
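For reference, these are the light-path settings I mean, set via Python; the numbers are only an illustrative starting point for an interior, not recommended values:

```python
import bpy

cycles = bpy.context.scene.cycles

# Allow more indirect bounces so light can reach the back of an enclosed room
cycles.max_bounces = 16
cycles.diffuse_bounces = 8
cycles.glossy_bounces = 8
cycles.transmission_bounces = 12
cycles.transparent_max_bounces = 16

# Raise the minimum bounces before Russian-roulette termination kicks in
cycles.min_light_bounces = 4
cycles.min_transparent_bounces = 4
```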
By the way, do the chairs in this image have a shading issue? Do the legs lack some sharp edges? They look different from the other versions of the same image higher up in this thread.
I was referring to using Oren-Nayar… as mentioned above, that “other” renderer produces “better” images… or other artists do more “tests” and tweak until satisfied…