While lighting a scene, I came up with the idea of setting the Cycles viewport render to grayscale. I found that ignoring all color is useful for judging a lighting setup. Of course it would be easy to set up an override material in the Render Layer settings, using an ambient occlusion material, but that is not the same as a grayscale render. It would also be possible to render the scene and convert everything to grayscale in the compositor, but that comes too late, because it is not real time. I made some tests with the RGB curves under the Color Management tab, but I didn't get useful results.
So, does anyone have an idea of how to set the viewport render to grayscale?
Is there really no chance? Did I find something that Blender cannot do? I would be happy to get more information. In the meantime I found a way to achieve a grayscale viewport render, but it is a very crude one: I simply open my NVIDIA settings panel and turn the saturation down to zero. Now I see the rendered viewport, like everything else on my desktop, in grayscale, which is great for checking the lighting of a scene. When everything looks fine, I simply close the graphics card's settings panel without saving and everything is back to normal.
If more artists around BA were knowledgeable about color, you might have been given the correct answer by now. Sadly, this is not the case.
Of course it is possible, and of course it is elegant.
The simple answer is that to go from a given colorspace to greyscale, you multiply each RGB channel by its primary's luminance weight and sum the results. For sRGB / REC.709 primaries, which are the implied primaries of Blender's default reference space, the weights are 0.2126729 for red, 0.7151522 for green, and 0.0721750 for blue. That weighted sum is the single value you then assign to all three of red, green, and blue.
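As a minimal sketch, the weighting described above looks like this (the function name is just for illustration):

```python
# Convert an sRGB / Rec.709 scene linear RGB triplet to greyscale by
# weighting each channel with its primary's luminance contribution.
WEIGHTS = (0.2126729, 0.7151522, 0.0721750)

def to_greyscale(r, g, b):
    # Weighted sum of the channels gives the luminance-correct grey value.
    y = WEIGHTS[0] * r + WEIGHTS[1] * g + WEIGHTS[2] * b
    # Assign that single value to all three channels.
    return (y, y, y)

# A pure green of 1.0 is far brighter to the eye than a pure blue of 1.0:
print(to_greyscale(0.0, 1.0, 0.0))  # (0.7151522, 0.7151522, 0.7151522)
print(to_greyscale(0.0, 0.0, 1.0))  # (0.0721750, 0.0721750, 0.0721750)
```

Note that the three weights sum to 1.0, so an achromatic input such as (v, v, v) maps to itself.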
So knowing this basic math, how can we do this in Blender?
The answer is via OpenColorIO (OCIO).
OCIO provides a configurable method to implement color transforms to and from the reference space. One solution would be to add a display View transform on the output imaging chain using a matrix that leverages the correct D65 sRGB luminance weights. The result would be in display linear, so we would then finally need to apply the proper transfer curve.
The steps are:

1. Create a matrix file that holds the correct luminance values for a D65 sRGB to greyscale conversion.
2. Create the entry in the config.ocio file and add it to the Views.
First, we create a new empty file called srgbgreyscale.spimtx in the /<VERSION>/datafiles/colormanagement/luts directory. We fill this file with the correct matrix format in the SPI notation and save it:
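The spimtx format is a 3×4 matrix: a 3×3 matrix with a fourth offset column. Using the D65 sRGB luminance weights from above on every row (so each output channel becomes the same weighted sum), the file body would be:

```
0.2126729 0.7151522 0.0721750 0
0.2126729 0.7151522 0.0721750 0
0.2126729 0.7151522 0.0721750 0
```

The config.ocio additions could then take roughly the following shape. This is a sketch based on the OCIO v1 syntax used by Blender's bundled configuration; the colorspace names ("Linear", "sRGB") and the display section should be matched against your own config rather than copied blindly. The transform applies the greyscale matrix in scene linear, then hands off to the sRGB transfer curve:

```yaml
displays:
  sRGB:
    - !<View> {name: sRGB Greyscale, colorspace: sRGB Greyscale}

colorspaces:
  - !<ColorSpace>
    name: sRGB Greyscale
    family: display
    bitdepth: 32f
    from_reference: !<GroupTransform>
      children:
        - !<FileTransform> {src: srgbgreyscale.spimtx, interpolation: linear}
        - !<ColorSpaceTransform> {src: Linear, dst: sRGB}
```

These entries extend the existing displays and colorspaces sections; they do not replace them.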
Now open Blender, load a still image, select “View as Render”, and set the output display transform to sRGB Greyscale. Want to view it in sRGB color again? Simply flip the transform to Default. If you are rendering, you don’t need to set the “View as Render” toggle. The Color Management display view transforms are located in the Properties view under Scene.
Here is a public Gist if you are spooked out about modifying your configuration. Simply copy the config.ocio file to a backup and replace it with the version in this Gist. Add the srgbgreyscale.spimtx file to the luts directory, and, assuming you don’t have a custom OCIO setup, all should work fine.
Sorry for the late answer; I unfortunately lost track of this thread and only came back to it by coincidence.
I just tested your files - great work! This really helps me out a lot, and it also gives me a deeper understanding of how the whole color management thing works.
Are you more deeply involved in Blender development? Is it possible to get this into trunk?
Thank you for the fast reply. As mentioned before, I think the greyscale color management function will help out a lot. AO and clay renders are a good thing, but for checking lighting situations they are only second-best solutions. Imagine an interior architecture render with a full clay override: you're stuck, because all the windows become opaque.
I am looking forward to having it in trunk. We will see what happens.
For architectural work in particular, you might want to consider some custom display referred transforms that grab six or seven stops of light above your middle grey mapping to get closer to photorealism. This too is possible via OCIO configurations, and worth exploring.
No. In terms of latitude, with the default sRGB view transform, where middle grey is pegged at roughly 0.2 scene linear, the output transform only manages a little over two stops of upper roll-off before hitting display referred white. Contrast that with roughly a century of photography capturing seven or so stops above middle grey.
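The "little over two stops" figure can be checked directly, since a stop is a doubling of light:

```python
import math

# Display referred white corresponds to scene linear 1.0, and middle
# grey sits at roughly 0.2 scene linear under the default transform.
middle_grey = 0.2
display_white = 1.0

# Number of doublings (stops) between middle grey and clipped white.
stops_above_grey = math.log2(display_white / middle_grey)
print(round(stops_above_grey, 2))  # 2.32
```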
As a result, the idea of “photorealism” is impossible to attain with the default sRGB transform. To accomplish a more photorealistic result, changing the transform will help tremendously.
With this I am unfortunately out of my depth; I am not that deep into the matter. Give me some time to come back with an understanding of what you meant - I first need to learn about this. Maybe an example would help.
First, never apologize. We are all constantly learning bits. It just so happens that this bit is rather critical, and frequently misunderstood.
Second, apologies for the delay on this response. I hope that this bit of effort helps you and hopefully someone else wrap their head around an important subject.
All of this largely falls under the banner of colour management for reasons that are related to the management of the pixels and their intensities.
I’d heavily encourage you to post this question on Blender’s StackExchange site for posterity reasons, as I’m reasonably sure many folks would glean a little from the longwinded, rambling, and irrational bit of drivel I am about to spew here. I’d answer your question there as well.
Finally, an apology on all information that you are already familiar with. I have included the basic skeleton here for those that might not be familiar.
What is “Photorealistic”?
We hear this term bandied about in many places, but not many people stop to ask what photorealism actually is. I’ll focus on the “photo” part of the term and how it relates to your question. Aesthetics are a learned and emergent response, which means our cognitive world associates photography with “photorealistic”.
Computer Generated Work
Prior to ray tracing and other physical-emulation models of CGI, we used procedural guesstimations and approximations. With ray tracing, we gained a much closer approximation to our real-world models; we can model a real, physical-world scene much more closely and accurately than we could previously.
Photography is also a convention. It is the mapping of some external set of values to a recording model. In this case, the manner with which film and later DSLRs capture light is of critical importance. Photographic emulsion had a log-like recording format of physical world values. That is, the medium is nonlinear in nature.
Dividing Our Models
In visual effects, later-era terminology divides imaging into two separate and discrete domains: the Scene Referred and the Display Referred. Technically, the display referred domain can represent any device, output, or limited-range medium. The difference between the two is that the scene referred domain seeks to keep the real-world physical ratios of light intact, while the display referred domain can include all sorts of encoding and creative adjustments.
Units of Light
Light, being energy, is measured on a linear scale: one unit plus one unit of light is two units, and two units plus another unit is three. However, due to the nature of our perceptual systems, we would not perceive that last addition as an equal increase in brightness; our perceptual system is nonlinear. To see an equivalent increase in light, we roughly need to double the previous quantity. This doubling or halving of light is referred to as a “stop” in photographic terms, and closely matches our perceptual response to physical-world quantities of light.
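The stop relationship above can be sketched as a base-2 logarithm of the intensity ratio (the function name is just for illustration):

```python
import math

def stops_between(a, b):
    """Number of photographic stops between two linear light intensities."""
    return math.log2(b / a)

# Doubling the light is exactly one stop; quadrupling is two stops.
print(stops_between(1.0, 2.0))  # 1.0
print(stops_between(1.0, 4.0))  # 2.0

# Going from 2 units to 3 adds only ~0.58 of a stop, which is why that
# jump looks perceptually smaller than going from 1 unit to 2.
print(round(stops_between(2.0, 3.0), 2))  # 0.58
```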
Mapping of Values
When we took photos on film emulsion, or when you capture an image on your DSLR or pocket camera, the scene’s values are mapped in a unique fashion. If we consider that old film stocks could capture anywhere from fifteen or more stops of total dynamic range, we can begin to see how those values would be mapped to the display referred. The following is a very rough and inaccurate representation of this sort of effect. Here, the maximum value a film-like medium can record is set to 1.0, while the minimum is set at 0.0.
Of key importance here is to note that with any photographic image, notions of black and white do not exist until the transform to the display referred domain is made. That is, black and white do not exist in the scene referred domain.
From Film to Cycles
Cycles is a ray tracing engine that seeks to model physical phenomena. As such, the internal reference model used in Cycles is scene referred, with values ranging from zero to infinity. Also like our physical scene, there is no notion of black or white in Cycles, there is merely the scene. How we wish to interpret that scene is up to the display referred viewing transform.
By default, for reasons of encoding, the display referred viewing transform is a straight linear-to-sRGB mapping as outlined in the reference specification. It is set up so that if you import an sRGB image and render it, it can theoretically look precisely as you imported it. While this might seem to make sense, it is actually at the heart of the crippling issue with the default viewing transform.
The following is the mapping of scene referred linear values to the default sRGB viewing transformation. This is what you see in the render view and also what is “baked” into the data of all display referred formats such as JPEG, PNG, TIFF, etc. at 8 bits per pixel.
Here we can see that the default viewing transformation performs a hard cut at the scene referred 1.0 that gets mapped literally to the display referred 1.0.
How Does this Influence Creative Work?
The problem is that this mapping is very much not at all like a photo. Sadly this also greatly influences how imagers think about imaging. Of course 1.0 is maximum and of course white exists! As such, the disparity between the two models leads imagers to begin to fudge and fake things in order to get the results they expect.
This means that when creating photorealistic work, imagers will begin to cheat things to fit within that default viewing transform. When the imager cheats her scene referred values to fit under the very low bar of the default viewing transform’s 1.0, the physics of the scene, the very values that deliver a portion of that photorealistic output, become mangled and broken.
All of this is a cascading bit of complex mangling that ends up with a bunch of things faked, and much more work for an imager, where highlights fail to roll off in a way similar to a photo and entire ranges of lights are botched and crippled.
So as we can see, we ideally would be taking more care in how we map our scene values to our view. We can accomplish this a number of ways, most notably by having several “canned” viewing transforms in our OpenColorIO configuration. Of course, there are other ways as well.
In particular, we could take our upper value of our scene and use the White Point mapping under the Color Management panel to grab a higher scene value and bend it down into the display referred domain. With a tweak to the curve, we could come up with something much more film-like with only a few moments of time. This might look something akin to the following image.
Note that we have left display referred black pointing to 0.0 in the scene referred domain here. This is to demonstrate a quick path to a solution, but it should be noted that black could equally be mapped to a much higher scene value.
Also note how the magic value of 0.2 is mapped to the middle of the display referred output range. This is because display referred image formats assume a nonlinear output, and as such, the default “middle grey” point you are currently working with is linearized to approximately 0.2. This value becomes nonlinearly mapped to about the middle range of the display referred output. The curves adjustment would need to bend 0.2 to approximately 0.5.
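The "0.2 maps to about the middle" claim can be checked against the sRGB transfer function from the reference specification (IEC 61966-2-1):

```python
def srgb_oetf(x):
    """sRGB transfer function: linear value -> nonlinear encoded value."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# Linear middle grey of ~0.2 lands near the middle of the encoded range.
print(round(srgb_oetf(0.2), 3))  # ~0.485
```

So a curves adjustment in a film-like view transform has to do this kind of bending: scene linear 0.2 ends up at roughly encoded 0.5.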
The values above middle grey are important here as they are the key focus of the film-like mapping. We want a curved roll off from the above middle grey values to the final scene referred value. Note how the image selects some values above 16.0. This is because in film terms, a typical film-like response would capture six or seven or eight stops above middle grey, and as such, this sample maps about six stops above middle grey to the display referred diffuse white.
Longwinded Spewing and an Apology
So the above is a rough approximation as to the importance of a viewing transform. While one can use the Color Management tab to manually map values and tweak the curve, it can be automated into a view transform for quickly getting results.
There are some terms that refer to this as tonemapping, which it most certainly is. However, the basics behind it are worth exploring and explaining to get a solid grasp of the concepts.
Hopefully this explanation hasn’t misled or confused further. It was kept as brief as possible to communicate the concept at hand.
Black and white don’t exist in the scene referred model. Imagers should learn to control and select what black is and what white is in their display referred output.
“Photorealistic” consists of a portion of the transform to the photo medium. That transform involves mapping the correct ratios of light to the display referred output.
Without a proper knowledge of the division between the scene referred and display referred domain, imagers may lean on an inappropriate mental model and begin to cheat, fudge, and fake values. This cascades into a broken scene referred model where light begins to behave incorrectly and mangles the imager’s goal.
One additional bit that hooks into this is the color picker values. The reason setting the intensity slider in the color picker to the middle yields RGB values of 0.214 rather than 0.5 is that the picker’s colors (and slider positions) are corrected for sRGB gamma, while the numeric values are in scene referred linear space. This seeming inconsistency can sometimes be a source of confusion.
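The 0.214 figure falls straight out of the inverse sRGB transfer function; decoding an encoded value of 0.5 back to linear gives:

```python
def srgb_eotf_inverse_encode_to_linear(v):
    """Inverse sRGB transfer: nonlinear encoded value -> linear value."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# The slider at its visual midpoint (encoded 0.5) corresponds to:
print(round(srgb_eotf_inverse_encode_to_linear(0.5), 3))  # 0.214
```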
My two cents about tonemapping: it is very often called HDR photography or HDR imaging, but most of the time no high dynamic range is actually preserved, and the relative values are mangled beyond recovery. When white paper and the sun end up with the same pixel values, no high dynamic range has been preserved by any stretch of the imagination.
I just managed to get it running in 2.93.2. There was some trouble with an extra space in one of the definitions, so make sure you check the whitespace for consistency. I can share the config file if it helps.