Hi, I've been watching some Blender RenderMan tutorials, and the tutors really highlight how ACES improves the render quality. I do not have RenderMan, but I am aware Blender can load ACES configurations and also ships the AgX view transform.
I tried AgX a few times, but I did not compare it with Filmic. I just noticed that it demands some exposure tweaking to make the image pop a bit more. I am trying to convince my colleagues to adopt a new color management setup.
I am curious to hear from people who understand this in depth and have used both of them: which one would you recommend?
AgX is under active development. What you saw was the basic inset result, and there is a “punchy” look to “pop” the image.
But as Troy has disclosed, he is planning an update soon that will make the basic inset “pop” enough that the additional adjustment is no longer needed. As you can read in the thread linked above by Joseph, we are all speculating about how Troy's new approach works, but we need to wait for the actual update to be sure.
I have one version of Blender with ACES and another with Filmic.
I did not like working with ACES directly because it doesn't give you a good preview of the scene. Everything looks either a bit dark or a bit washed out, and even when I know it isn't, I find it hard to set up a scene correctly if I can't see how it will look. Every view transform besides Standard (sRGB) gives more acceptable results in the viewport, but on a monitor like mine, which is not wide gamut, it still feels hard to work.
For me, it's hard to accept something like “oh, it looks awful, but I know it's good for some purpose”, because I don't feel confident that way.
I also ran some tests saving files using the ACES installation and the new Blender feature that exports files with the ACEScg transform, and I found that when the Blender version is opened in a photo-editing program that accepts 32-bit OpenEXR files, the colors look much more saturated. But again, is it really better? How can I know what I can trust from my senses and what I can't?
For now, since I don't work with anything as demanding in terms of quality as animation or film, I feel more comfortable using Filmic to build the scene and exporting what's necessary to ACEScg afterwards. Meanwhile, I will try to learn more about the differences and see how Blender adapts to ACES in the future.
Sorry, I did not follow the discussion in the other post.
Oh, and yes, I'm not talking about RenderMan, just about my experience with Cycles and the viewport in Blender. And it's probably completely out of place, because I don't fully understand the matter.
Ok, to be honest, I really don’t understand a thing, and looking at the other post I got even more lost. That’s Greek to me.
RenderMan basically requires the use of ACEScg for effective rendering; RM has built-in sRGB material conversion (including on-the-fly conversion of native texture files to .tex), but there is a lot to consider when using different color spaces in production.
If you're interested in pitching a new color space, then I suggest working through your pipeline end-to-end to ensure that achieving your look-dev and color goals is even possible. Example: you may find Blender is compatible with a particular color space, but then learn that the next app in your pipeline doesn't have 3D LUT capability for the view transform, or has some other issue. Test your desired workflow locally and get all your ducks in a row before proposing the change.
Thank you, this was helpful. If I got it right, there is a disparity between what you see on the screen and how the render looks. Well, then, implementing it does not sound beneficial to me either.
It is kind of essential that I get what I see. So, hopefully, Blender will widen its native color management support.
If you work in Blender you are already working with a wide color space, but with Filmic you get a reasonable representation of it, compressed to be viewable on an sRGB monitor, because it's impossible to represent a wide color space on an sRGB monitor without some kind of color space transform like Filmic. It's like trying to watch the colors of a movie on a black-and-white television: it simply will not work.
If you export your file and override the Filmic color space, saving your image as an OpenEXR, you now have the option to save it in the ACEScg color space, among other possibilities that I normally don't use. The fact is, in Blender or any other software, if you are using an sRGB monitor, all you can see of ACES are approximate representations; you will never be able to fully visualize its potential. That doesn't mean working with ACES is useless, because there are many cases where it is important, and since it's becoming a widely accepted standard, it can be useful as a bridge between Blender and other software that uses ACES natively.
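For anyone curious what that ACEScg export actually does under the hood, here is a small sketch (my own toy code, not Blender's implementation) of the core step: a 3x3 matrix that converts scene-linear Rec.709 values, Blender's default working space, into AP1 (ACEScg) values. The coefficients are the commonly published Bradford-adapted D65→D60 conversion, rounded:

```python
# Toy conversion from scene-linear Rec.709 to ACEScg (AP1).
# Coefficients: commonly published Bradford-adapted D65->D60 matrix, rounded.
REC709_TO_ACESCG = [
    [0.61310, 0.33952, 0.04738],
    [0.07019, 0.91635, 0.01345],
    [0.02062, 0.10957, 0.86981],
]

def rec709_to_acescg(rgb):
    """Multiply one linear RGB triple by the 3x3 conversion matrix."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in REC709_TO_ACESCG)

# A pure Rec.709 red becomes a mix of all three AP1 primaries:
print(rec709_to_acescg((1.0, 0.0, 0.0)))  # roughly (0.613, 0.070, 0.021)
# White stays white (each row sums to ~1), so neutral values are preserved:
print(rec709_to_acescg((1.0, 1.0, 1.0)))
```

Note this only re-encodes the data; it says nothing about how the image looks, which is still the job of a view transform.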
Actually, since ACEScg's AP1 primaries are not actual colors (they are just mathematical results outside the spectral locus), there will never be a display capable of displaying ACEScg. This is one of the issues of working in the ACEScg working space: you will have values that are complete nonsense, not colors at all. It's like looking at a 2D world map, drawing a cross in a spot even more southern than the South Pole, and saying we will build the house there.
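That point can be checked numerically. Below is a small sketch of mine (not from any particular library) that converts an ACEScg value to linear sRGB using the commonly published inverse of the Bradford-adapted matrix, rounded. The pure AP1 green primary comes out with negative red and blue components, i.e. a value no sRGB display can physically show:

```python
# Toy conversion from ACEScg (AP1) to scene-linear Rec.709/sRGB.
# Coefficients: commonly published inverse Bradford-adapted matrix, rounded.
ACESCG_TO_REC709 = [
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
]

def acescg_to_rec709(rgb):
    """Multiply one ACEScg triple by the 3x3 conversion matrix."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in ACESCG_TO_REC709)

# The pure AP1 green primary lands outside the sRGB gamut:
r, g, b = acescg_to_rec709((0.0, 1.0, 0.0))
print(r, g, b)  # red and blue are negative, green is above 1.0
```

Negative components like these are exactly the “cross south of the South Pole” values: perfectly valid as math, meaningless as displayed color.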
What folks need to understand is that the rendered values coming out of a render engine are just data; the view transform is what produces the image. sRGB monitors, P3 monitors, etc. are media for images: we are forming an image on media with different characteristics, like painting with watercolor vs. oil paint. We are not trying to “approximate” or “replicate” the rendered data; we are forming an image out of the data. When we look at an apple on the table and paint it on a canvas, we are not replicating the exact color of the apple: we take what the real apple looks like and process that information into an image interpretation in a certain medium (paint, or a monitor).
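To make the data-versus-image distinction concrete, here is a deliberately simple toy view transform. This is NOT Blender's Filmic or AgX; it is a Reinhard-style compression followed by the standard sRGB encoding, both chosen by me purely for the sketch. The point is just that open-domain scene data of any magnitude gets formed into a bounded display image:

```python
def toy_view_transform(scene_linear, exposure=0.0):
    """Form a display value from open-domain scene data.

    NOT Blender's Filmic or AgX -- a Reinhard-style compression plus the
    standard sRGB encoding, purely to illustrate the idea of image formation.
    """
    x = scene_linear * (2.0 ** exposure)  # exposure adjustment, in stops
    x = x / (1.0 + x)                     # compress [0, inf) into [0, 1)
    # sRGB transfer function: the display medium's encoding
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# Scene-linear 4.0 (far above display white as raw data) still forms
# a displayable value below 1.0:
print(toy_view_transform(4.0))
```

Filmic and AgX are far more sophisticated (per-channel behavior, desaturation toward white, gamut handling), but they occupy the same slot in the pipeline: data in, image out.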