A Cycles discussion that started with MaterialX, then became about open source standards and color

Sorry to beat that horse again, but this again sounds like tone mapping is the problem for you - have you looked into this thread? Troy Sobotka is currently developing a successor called AgX that could maybe solve some of the problems the old Filmic had.

I have yet to see a single render done in Cycles which rivals the realism that I see on a daily basis from other render engines…

Check this out: https://www.youtube.com/watch?v=myg-VbapLno

This isn’t a quality comparison, it’s a speed comparison, but it shows that the image quality is on the same level as other engines, and the author explains well why this is difficult to measure. You really need to measure noise and speed, look into the interesting speed optimizations, and investigate features and rendering accuracy.

LuxCoreRender is a bit more accurate, since it uses spectral rendering. Spectral results can be approximated using RGB, especially when using the ACEScg gamut, but LuxCoreRender is still more accurate than Cycles or Corona.

As far as I know, V-Ray has an interesting global illumination optimization that trades accuracy for speed.

It should be noted that every image can be made to look better simply by increasing contrast, so major differences are often caused by tone mapping and color correction, not by the rendering engine.
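To make that concrete, here is a toy sketch of the point - the curve and numbers below are my own illustrative example, not any engine’s or Filmic’s actual transform:

```python
# Toy contrast curve: pivot a power function around middle grey (0.18).
# Values above the pivot get brighter, values below get darker, so the image
# "pops" more - independent of which renderer produced the pixels.

MIDDLE_GREY = 0.18

def add_contrast(x, strength=1.2):
    # x is a non-negative linear scene value; strength > 1 raises contrast.
    return MIDDLE_GREY * (x / MIDDLE_GREY) ** strength

shadow, highlight = 0.09, 0.36
print(add_contrast(shadow))     # darker than 0.09
print(add_contrast(highlight))  # brighter than 0.36
```

Two differently tone-mapped versions of the same render can therefore look further apart than the same scene rendered in two different engines.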

Maybe you can describe more technically what is wrong in Cycles? Comparing different artists’ work doesn’t measure much, because the more skilled artist may be the one using the other engine.

There are minor differences, but given enough samples, unbiased path tracers give similar results.

It’s a bit like asking why does the ARRI Alexa look more cinematic than the Sony A7r?

As far as I know, the Sony A7r shoots 8-bit video formats at a lower bit rate, and the sensor readout is slower, so there are major technical differences.

Modern rendering engines can render to 32-bit OpenEXR, so there is no dynamic range issue - please be more specific about what is lacking in Cycles.

Sorry to beat that horse again, but this again sounds like tone mapping is the problem for you

I recommend switching to an ACES workflow. I manually set it up in Blender 2.92.

There is a downside: Cycles sky textures are in the sRGB gamut, so whenever I render a sky, I render it to an HDR OpenEXR and import it into ACEScg. Light sources I just calculate manually in ACEScg.
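For reference, converting a linear-sRGB light colour into ACEScg by hand is a single 3×3 matrix multiply. The coefficients below are the commonly published Bradford-adapted sRGB→ACEScg matrix, rounded, so treat the exact digits as approximate:

```python
# Approximate Bradford-adapted matrix: linear sRGB (D65) -> ACEScg (AP1).
# Rounded coefficients; rows sum to ~1, so white maps to white.
SRGB_TO_ACESCG = [
    [0.61310, 0.33952, 0.04738],
    [0.07019, 0.91636, 0.01345],
    [0.02062, 0.10957, 0.86982],
]

def srgb_to_acescg(rgb):
    # Convert a linear sRGB triplet into ACEScg primaries.
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in SRGB_TO_ACESCG)

print(srgb_to_acescg((1.0, 1.0, 1.0)))  # white stays (almost exactly) white
print(srgb_to_acescg((1.0, 0.0, 0.0)))  # sRGB red sits well inside the AP1 gamut
```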

Nah - I tried it and I personally prefer Filmic over ACES. Just @Midphase probably doesn’t.
I render to Filmic Log and do color grading in Fusion/Resolve → Filmic sRGB.

1 Like

All I did was scroll the gallery for 2 minutes:

Lots of links

In mountains
Romelu Lukaku - Inter 20/21
Botanical Retreat
Åland Islands House - CGI
Millbeck Farm - The Lake District
Human Progress
Houseplants study
Another Arch Viz scene
Entrance Hall
Lego Star Wars X-Wing
Keychron K2 v2 Keyboard
3rd May Avenue Office Park

is a bit of an overstatement.

I highly advise against it, unless you’re working in a pipeline where it’s demanded from third parties. It doesn’t solve anything. A better transform is in development, but already usable. Better to invest time in learning professional color grading and photo editing.

1 Like

Professional color grading and photo editing is based on ACES, is it not?
By professional, I mean “fits into the pipelines typically used by companies”.

You can use ACES and be very, very bad at color grading; the two don’t have much to do with each other. You can PM me if you wanna know more, I don’t wanna derail the thread.

3 Likes

I highly advise against it, unless you’re working in a pipeline where it’s demanded from third parties.

I found it very useful even though I work alone. It makes it easy to import camera footage from different cameras along with render layers, so every video source is then easy to composite and grade. Also, my camera gear shoots 8-bit per channel, so I’ve done a hack in Blender: I set the in-camera dynamic range to be more compressed, depending on what I’m shooting, and then import that compressed dynamic range back to linear.
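The 8-bit trick can be sketched roughly like this - the curve below is a generic log-style stand-in of my own, not the actual in-camera curve or the poster’s Blender modification:

```python
import math

A = 64.0  # curve steepness; purely illustrative

def log_encode(x):
    # Compress linear light into [0, 1] with a log-style curve before 8-bit storage.
    return math.log1p(A * x) / math.log1p(A)

def log_decode(y):
    # Invert the curve on import to recover linear light.
    return math.expm1(y * math.log1p(A)) / A

def quantize8(x):
    # Simulate storing the value in an 8-bit channel.
    return round(x * 255) / 255

shadow = 0.002  # a deep-shadow linear value
naive = quantize8(shadow)                            # 8 bits spent in linear
via_log = log_decode(quantize8(log_encode(shadow)))  # 8 bits spent through the log curve
print(naive, via_log)  # the log route lands much closer to 0.002
```

Spending the 8 bits on a log-shaped signal spreads the codes more evenly over the scene’s dynamic range, which is why the round trip back to linear loses less in the shadows.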

Also, I found that rendering in the ACEScg colorspace gives more accurate light calculations than sRGB. In my opinion, CGI often looks bad because people work in a limited color gamut, and you can’t avoid that when using ready-made assets or Cycles sky textures.
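A quick way to see why the working gamut changes the light math - a renderer multiplies light by albedo componentwise in whatever primaries it works in - is to shade the same saturated light and surface in both spaces and compare. The matrices here are the commonly published, rounded sRGB↔ACEScg pair, so the exact digits are approximate:

```python
# Componentwise light * albedo in two working gamuts.
SRGB_TO_AP1 = [
    [0.61310, 0.33952, 0.04738],
    [0.07019, 0.91636, 0.01345],
    [0.02062, 0.10957, 0.86982],
]
AP1_TO_SRGB = [
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
]

def apply(m, v):
    # 3x3 matrix times an RGB triplet.
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

light  = (1.0, 0.0, 0.0)   # saturated red light, linear sRGB
albedo = (0.8, 0.2, 0.2)   # reddish surface, linear sRGB

in_srgb = tuple(l * a for l, a in zip(light, albedo))  # shade in sRGB primaries

# Shade in ACEScg primaries instead, then convert back to sRGB to compare:
l_ap1 = apply(SRGB_TO_AP1, light)
a_ap1 = apply(SRGB_TO_AP1, albedo)
in_ap1 = apply(AP1_TO_SRGB, tuple(l * a for l, a in zip(l_ap1, a_ap1)))

print(in_srgb)  # (0.8, 0.0, 0.0)
print(in_ap1)   # a noticeably different, dimmer red
```

Same scene description, different bounce colour - the kind of gamut-dependent difference that accumulates in indirect lighting.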

That’s why I haven’t invested in better camera equipment - when I composite with renderings, the rendering will look crap anyway. So I shoot in the sRGB gamut, use sRGB assets, calculate direct light sources and some materials from spectral data into ACEScg to get the light calculations a bit better, and then color grade and output in sRGB. This way I can reduce the difference between the rendering output and the camera.

A big drawback is that this workflow exports OpenEXR frames, and cheaper render farms may only support MPEG-4 output, which already has a compressed dynamic range.

1 Like

Ahhh sorry, I totally misread your post and added a “not” between “do” and “Cycles”. I have corrected it. :blush:

Not really. I mean it’s…complicated.

Being friends with two of the top colorists in the world, I have it on good word that they don’t work in ACES unless the client insists on it. That’s not to say that they don’t use ACES transforms when getting ACEScg shots back from the VFX team, but the color grade portion of it typically doesn’t.

Many colorists working within the Blackmagic Resolve pipeline tend to prefer the Resolve Color Managed workspace, which is similar to ACES but “feels” better to work with in the way the wheels respond.

2 Likes

You haven’t worked in film, I suppose. That’s where I come from - 30 years of experience in production and post. Technical language isn’t a thing most of the time, and I’m not even quite sure what exactly a “technical artist” is nowadays, as pretty much every single creative out there leverages technology in some way, shape or form.

I was working on a sci-fi project about a year ago, and I ran some tests of the same shot using Cycles, Octane (in Blender) and 3Delight in Houdini. There were substantial differences between the look and feel of the three render engines. Of the three, Cycles looked the most “cartoonish”, if that helps. The way the materials looked, and the way the light reflected off of them, seemed the most artificial. 3Delight and Octane were considerably closer to what I would expect to see in a studio-level film.

The catch is that Cycles was the fastest, easiest, and by far the best integrated with Blender. This is why I said that the pros outweigh the cons for me. At the end of the day, the best renderer in the world isn’t worth it if you’re not able to reach your production goals.

Ironically, for some types of shots (particularly sci-fi exteriors), I actually believe that EEVEE can look more photorealistic than Cycles, due to its relatively unreliable GI and a certain “grit” to the look that makes it well suited to some of those shots.

I’m a big fan of Paul Chadeisson. I believe he works in Blender but renders in Octane if I’m not mistaken. I think his artwork and his style would be difficult for Cycles to match exactly, I would love to see something rendered in Cycles that matched this level of cinematic quality…but I understand that we’re stepping heavily into subjectivity here.

https://www.artstation.com/pao

2 Likes

Being friends with two of the top colorists in the world, I have it on good word that they don’t work in ACES unless the client insists on it. That’s not to say that they don’t use ACES transforms when getting ACEScg shots back from the VFX team, but the color grade portion of it typically doesn’t.

This is getting too far off topic, so maybe we should discuss the color grading stuff in another thread.

My point was that when rendering, the ACEScg gamut gives much better results in the light calculations than sRGB, even when the output target is sRGB. So tuning procedural textures/volumetrics with RGB values calculated from spectral data, and doing the same for light sources, gives CGI what is often missing. You need to take this into account when comparing path tracers.

Sure, but if we’re comparing apples to apples, then working in Rec.709/sRGB in other render engines should also result in non-photorealistic results, but it doesn’t.

My typical pipeline is to render out to linear EXR, and then all of the post-processing happens in Resolve.

P.S.

I’m also comparing not just my own work, but what I see posted in the gallery on a daily basis vs. what I see posted on the Octane or the Arnold user groups.

P.P.S.

Out of all the Cycles renders that I have seen, the one who gets perhaps the closest in terms of realism is Ian Hubert, and I don’t believe he’s working in ACEScg.

I think his artwork and his style would be difficult for Cycles to match exactly, I would love to see something rendered in Cycles that matched this level of cinematic quality…

Uh… I love the short film, but I misread your text at first. I thought you meant that this video looks artificial because it is rendered using Cycles, and that it has some crisp, cartoonish look - and I was wondering why it looks like that.

This is rendered using Arnold: https://www.youtube.com/watch?v=C4pcg7bXgmU

And I would expect to get similar rendering quality out of Cycles.

I’ve done space rendering tests too, and I can get a more realistic look using Cycles. I don’t know how much of the visuals in Migrants is intended aesthetic and how much is a result of the chosen renderer.

I also read an article where a guy said the decision to use Octane for his low-budget short animation was about high rendering performance, not quality.

I get all of that, and I have seen several side by side renders and there indeed is a qualitative difference between each renderer.

If we are interested in improving Cycles, it is important to pin down exactly what the issue is. If you can’t say anything other than subjective adjectives, there is no way forward.

For us, the job of the modern technical artist is to take subjective feedback and turn it into technical improvements. I had a client ask me to make a render more glamorous; I added some skinny area lights to help define the reflections better, plus a subtle glare. I understand that the client (or director) doesn’t always have the full technical understanding of exactly what we are going to do to make the image look more “gritty” or “ambient” or “glamorous”. Since we have that skill, it gives us the ability to translate handwaving into technical details.

Software developers are even deeper into the technical nitty-gritty, and even farther away from the subjective side. We can help serve as a bridge there. It’s tough, for sure, but it’s even tougher for the devs. Pinning down the technical issues turns them into actionable items for the devs.

Not sure if this was meant as a dig or not, but you are right. I work in an industry that is bigger than film, like most industries are. Hollywood is small potatoes nowadays, and working in film isn’t the same clout hammer it used to be.

2 Likes

For us, the job of the modern technical artist is to take subjective feedback and turn it into technical improvements.

I’m a software developer myself, so I’m very deep in algorithms.

In my opinion, Cycles should aim for realism: replicating the Cornell box, adding a chrome sphere, with all surfaces measured. Ground truth is measured reality, not “cinematic”. I also understand that once a certain level of realism is achieved, it can be optimized for performance, keeping the same level of quality while using tricks the human eye can’t see. It should also remain useful for real work.

So once we have realistic output, we can also tune the shaders to get an intended artistic look.

1 Like

I feel you, and agree to a point.

I definitely think ground-truth reality is the best baseline, and using scientific principles to rigorously define that baseline is a good approach. But we are still artists, and sometimes I just want a metal material that is kinda rough without looking up n/k values and surface finishing specifications.

At the end of the day, whatever rendering model you are using is just a model, and is subject to all of its shortcomings. Reducing those shortcomings is a noble goal, but there are limits to the science we can do. What’s the IOR of garbage juice dripping out of a dumpster? There is no scientific answer to that. It’s a complex material, and it would be expensive to rigorously define it - and if someone threw a bunch of rotten oranges into that dumpster, the science is completely different than if the dumpster was full of rainwater and old chow mein.

4 Likes

What’s the IOR of garbage juice dripping out of a dumpster?

About 1.42. If it is water, it is 1.33, but it can also be as high as 1.47. It varies and depends on wavelength, so accurate modelling requires spectral rendering and dispersion, but in Cycles and similar renderers it is good enough to take the average IOR at 555 nm and use that as a base; calculating the Specular value from an IOR of 1.42 gives 0.376511.
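For anyone wondering where 0.376511 comes from: it is consistent with the Principled BSDF mapping in Blender, specular = ((ior - 1) / (ior + 1))^2 / 0.08, which rescales normal-incidence Fresnel reflectance so that IOR 1.5 lands on the default Specular of 0.5:

```python
def specular_from_ior(ior):
    # Fresnel reflectance at normal incidence (F0), rescaled by 1/0.08 so
    # that IOR 1.5 maps to a Specular value of 0.5.
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 / 0.08

print(round(specular_from_ior(1.42), 6))  # 0.376511 (the garbage-juice average)
print(round(specular_from_ior(1.33), 3))  # water, 0.251
```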

There is no scientific answer to that.

That interesting juice is usually cellular fluid from decaying organic garbage; it has been measured and it varies, depending on what proteins are in it.

1 Like

And there’s the rub. The point I was making is that those variations have a significant effect, and they are where the scientific method might leave you lacking, so you’d need to approximate based on the hard science and your innate artistic sense of what looks right.

I’m reminded of the spherical cow:

Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help from academia. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, notebooks crammed with data, where the task of writing the report was left to the team leader. Shortly thereafter the physicist returned to the farm, saying to the farmer, “I have the solution, but it works only in the case of spherical cows in a vacuum.”

from https://en.wikipedia.org/wiki/Spherical_cow

The real world doesn’t fit nicely into our models, and the tools we use as artists need to have both realistic underpinnings based on science and easy, flexible controls to tweak them so that it looks right.

2 Likes