Which monitor color space is best to work in?

I bought a new monitor that covers the DCI-P3 color space.

Before I bought this new monitor, I always used sRGB, and the content we usually watch (YouTube videos) is sRGB too.

So I’m still wondering whether DCI-P3 is beneficial for Blender CG/VFX and color grading in DaVinci Resolve or not…

I heard that DCI-P3 was made specifically for cinematic color and that its gamut is wider than sRGB’s. But ironically, we usually watch content through YouTube, which is based on sRGB.

Then say we set our monitor color space to DCI-P3, finish the color grading, and post it on YouTube.
The colors I set up on the DCI-P3 monitor are no longer visible in YouTube on Chrome… right?

So is it really beneficial to work in the DCI-P3 monitor color space for Blender CG/VFX and DaVinci Resolve color grading?

Whenever someone mentions a color space on this website I’ll just post this:


Technically DCI-P3 is a wider gamut than older standards like sRGB, and it works best for HDR content.
The biggest complaint these days is that content like TV shows is getting too dark.

People really complain, vocally, that they can't see jackshit.

It all depends heavily on the use case. Do you create HDR output?
You should tailor your hardware to the majority of the potential audience you want to cater to.

Same thing with audio: sure, you can mix exclusively for Dolby Atmos and other systems like 7.1 or 5.1,
but if you don't also downmix it, most people will hardly hear anything, and some won't hear jackshit.

tl;dr: keep in mind whom you create content for.
Don't only use hugely expensive ultra-duper content-creator gear; also use or buy a shitty speaker, headphones, and screen, so you know what others will experience.
Edit: Don't forget mobiles. People enjoy content on those devices too.


Leaving aside all the complexity implicit in

A lot of operations you might care about in a color grading workflow assume positive RGB triplets and might fail if you plug in anything negative.
In wider-gamut spaces, a larger portion of physically realizable colors are going to land on such positive values.
You can, of course, go too wide, making your primaries into “colors” that aren’t even realizable by highly coherent single-peak lasers, which causes another set of issues. But P3 is a solid step up over sRGB while remaining entirely in the realm of realizable colors.
Even if, in a final step, you map to sRGB, having a pipeline in between that can handle more colors is gonna be beneficial, I think

Some operations also just work better if you don’t fully saturate your colors, i.e., not only do they not want negative values, they aren’t great if anything is zero either. And if your working gamut is larger than your target gamut, you can guarantee that, except for black, all colors in your target gamut will not only be non-negative but, in fact, strictly positive.

Though I think P3 actually shares its blue with sRGB? Meaning that isn’t quite a guarantee: if you only use P3’s blue, you’re going to end up using only sRGB’s blue as well, with the other components being 0.
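To sanity-check both of those claims, here’s a quick numeric sketch. It assumes Display P3 (the P3 primaries with a D65 white point, which is what monitors actually ship) and derives the conversion matrices from the published chromaticities with numpy rather than quoting them from a spec:

```python
import numpy as np

def rgb_to_xyz(primaries, white):
    """Build a linear-RGB -> XYZ matrix from (x, y) chromaticities."""
    # Each primary's XYZ at Y=1, placed as a column.
    m = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in primaries]).T
    wx, wy = white
    w = np.array([wx / wy, 1.0, (1 - wx - wy) / wy])
    # Scale the columns so that RGB = (1, 1, 1) lands on the white point.
    return m * np.linalg.solve(m, w)

D65  = (0.3127, 0.3290)
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
P3   = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]  # Display P3 (D65)

p3_to_srgb = np.linalg.inv(rgb_to_xyz(SRGB, D65)) @ rgb_to_xyz(P3, D65)
srgb_to_p3 = np.linalg.inv(p3_to_srgb)

print(p3_to_srgb @ [0, 1, 0])  # pure P3 green: red channel goes negative in sRGB
print(p3_to_srgb @ [0, 0, 1])  # pure P3 blue: only the blue channel is nonzero
print(srgb_to_p3 @ [0, 1, 0])  # pure sRGB green: strictly positive in P3
```

Pure P3 green comes out with a negative red channel in sRGB; pure P3 blue stays on the blue axis (only its magnitude changes, since the shared primary carries a slightly different luminance weight in each space); and pure sRGB green is strictly positive in P3.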

All that said, you may easily reason that it’s best to simply work in the target gamut all the way, as you don’t want to ever deal with values that are negative in the target, since it might be nontrivial to map to that in the end. It really just depends on what you wanna do.


It makes me sad to say this, as I want to think the best of this community, but there are a handful of users on this site that make it their passion in life to swarm any “color” related threads with thousands of pedantic, condescending, and insult-laden replies belittling everyone and everything. These users make it their goal to beat down anyone foolish enough to actually seek real answers with seemingly-academic sledgehammers of jargon and nonsense. The best possible thing you can do for your mental health (and if you actually want a real answer), @babypopmusic , is delete this thread and ask your questions anywhere else before they show up


As a fellow owner of a laptop with a P3 screen: realistically, I don’t find it of much value. The biggest concrete benefit is that the monitor will cover the full sRGB range, which is what really matters. Apart from that, P3 content is not common at all, and whatever image you produce will most likely be displayed on an sRGB display, or viewed through a web browser that doesn’t support P3 natively, as you say. I think selling DCI-P3 screens has just become a marketing trick to signal that a monitor is good quality (though if a monitor can cover that range, it probably is indeed not bad).
If you need to use P3 because of the specific requirements of a client or project, they’ll be explicit about it. In most cases, it’s better to just target the most common medium.


Oh no, no please. Not another Color Management post…


This reminds me of an interview I read with Steve Vai after he had released his Passion and Warfare album. In the interview he explained that his method for checking that his music was mixed right was to listen to it through a crappy set of headphones, a small radio speaker, and a small television set, so he could check whether the sounds he was expecting to hear actually came through on the kinds of speakers that regular people (non-audiophiles) might use. I thought that was brilliant.


The video you linked about sound is enlightening! For so many years I’ve been swearing about this, and now I know the only thing I can do is take the chill pill…


Well, it’s a fair point: we don’t all have the same needs regarding color, and we’re not always willing to learn everything necessary to make the right decisions. And our choices probably look completely wrong from an expert’s POV.

Maybe the first thing we should do is consider the OP’s situation and how much effort they’re willing to put into learning/testing in order to get an appropriate workflow.

This reminds me of an old personal story: at some point (~2000/2010) PCs and Macs didn’t have the same gamma, which was never an issue for me since I worked either on a PC or on a Mac, so I never bothered investigating it any further.
But once I had to work on a project where editing was done on a Mac, the greenscreen, comp, and FX work on a PC, and then back to the Mac for the final edit and client presentation.

It turned out that the color pipeline was completely messed up between our PCs, the editing station, and the final monitor used to show the work to the client.
Basically no one was knowledgeable about color, the monitors weren’t calibrated, and the software probably wasn’t configured correctly either.
As a result, my work, which looked fine on my PC, turned out quite ugly and really washed out at the final presentation.

The client wasn’t convinced at all and didn’t believe that just fixing the color management would solve it, so I had to redo everything more “contrasty” and tone down all the effects that had looked good on my PC in the first place.

But if we had done all that on the Mac, it’s possible we would never have had to look into the issue, and might not have taken it seriously either. On top of that, it’s very likely that what we sent to clients looked pretty different from what they saw during the presentation, and no one cared/was aware of that…

Funnily enough, the same story happened more than once at that company, since no one ever invested the time to fix these issues and make sure everything worked…

Now, if I were a color expert who had spent many years solving these kinds of problems, I might turn out crazy too, half schooling, half insulting people… whatever the question is… who knows :smiley:

I bet that having both P3 and sRGB as standards can lead to similar situations, where what is seen on one monitor is completely different on another…
Hopefully color is more of a concern nowadays, so it’s probably easier to avoid bad situations…


I think as long as you are able to achieve your artistic vision, you are doing it right in an end result sense.
What might be “wrong” is only workflow-wise, in the sense that you might have taken a path that involved a ton of micro-corrections to counteract distortions, where you essentially could have achieved a better result by doing something more principled, if you will.
I mean, if you want absolutely perfect control over the outcome, nobody is stopping anybody from literally hand-placing pixels one by one.
Even then, I suppose it’s technically not “wrong” so much as it’s highly inefficient. - I’ve seen some very impressive pixel art though. Some people are incredible at that sort of fine control for entire complex high resolution scenes with little to no repetition.

Interoperability between different workplaces, or targeting multiple different viewing conditions (cinemas, TVs, computers, phones, various assumed ambient light conditions) at the same time, makes it more complicated though. In that case, you really could “do it wrong”, as whenever you fix it for one thing, it’s gonna look awful for all the other things. Although even then, if you are OK with doing the aforementioned micro-corrections several times over for every single target viewing condition, nothing will stop you from doing so. It’ll just be a huge pain to do. One that most likely could be avoided by simply having somebody set up proper color management to begin with.

None of that is really about how useful having a wide gamut monitor is though: Most critical parts of a given workflow aren’t really gonna depend that much on how wide your gamuts are.

So to tie that back to OP’s original question: certainly, if you’re gonna do something meant for wide-gamut cinema projectors, it’ll be helpful to have access to those wide gamuts on your screen to get a decent idea of how it’ll look in the end.
If you don’t care about that and only intend for things to be viewed on The Internet™, then yeah, you probably won’t get much use out of that particular aspect of your screen.

FWIW it’s possible to upload HDR video to YouTube. Presumably those then would also render in wider gamuts when possible? Not sure though. It seems to mostly be a test feature “because they can” with all sorts of pretty test stock imagery but little to no “regular” usage.

I have major regrets on that front. Those should never be necessary or acceptable.


Thanks for the mental health advice haha. Well… since I’m quite new here, I didn’t know color was such a sensitive issue on this site. Maybe I’d better not ask about it here. But thanks for the advice!

Thank you very much

That’s clear advice, I guess! Thank you very much for sharing your experience!

[quote=“kram10321, post:4, topic:1530108”]
Even if, in a final step, you map to sRGB, having a pipeline in between that can handle more colors is gonna be beneficial, I think
[/quote] Understood, I see… Thanks!

That’s what I thought too… haha. Thanks for the advice!

Oh… I see. Thanks for sharing your own experience! This makes sense to me!

I think so too! It’s a great example thank you :slight_smile:


I understand what you mean. I kept using my crappy small monitor too, but it eventually died. It literally doesn’t work anymore. That’s why I recently bought a new monitor. But as I mentioned above, since I’m used to sRGB, it’s hard to get used to DCI-P3 color. That’s why I made this post, to get some advice…


Ohh, the video you shared was so interesting! Thank you very much for sharing it indeed. I guess this one also clearly resolves my question!


I won’t post about “color management” again!


Yeah, since these HDR monitors are quite new to me, I’m curious what the difference between HDR and LDR actually is. Back in the day, the gamma difference between Mac and PC affected everything between 0 and 1, but those extremes stayed the same.

Now with HDR vs. LDR, is it just that the image is brighter but everything stays relative, so 25%, 50%, 75% gray feel relatively similar on both HDR and LDR, and HDR is simply brighter with more saturated colors?
Or is it a bit more subtle than that: maybe 50% is relatively similar, but 90% is way brighter? In that case an image graded in P3 would look really wrong on another monitor… and then it’s possible that some colors shift and aren’t exactly the same? If that makes sense…

Last but not least: how aware is the software? If I save something while working on a P3 monitor (jpg, png, h264…), given that I’ve configured the software to work with P3, maybe like so:

What will happen if I view that image on another computer? Is there some kind of automatic conversion, or should I set it up myself? And what about things like image viewers or web browsers?
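Not an answer to the conversion question, but regarding “is it just brighter, or more subtle”: it’s more subtle. HDR signals typically use the PQ transfer function (SMPTE ST 2084), which assigns absolute luminance to code values very nonlinearly, so midtones stay close to SDR levels while the top end extends far beyond. A rough numeric sketch (the gamma-2.2, 100-nit SDR display here is a simplifying assumption):

```python
import math

# PQ (SMPTE ST 2084) EOTF constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_nits(v):
    """PQ code value (0-1) -> absolute luminance in cd/m^2 (nits)."""
    p = v ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def sdr_nits(v, peak=100.0, gamma=2.2):
    """Simple gamma-2.2 SDR display assumed to peak at 100 nits."""
    return peak * v ** gamma

for v in (0.25, 0.5, 0.75, 0.9):
    print(f"{v:0.2f}: SDR ~{sdr_nits(v):6.1f} nits   PQ ~{pq_nits(v):7.1f} nits")
```

Around a 50% code value the two are within a factor of a few (~92 nits for PQ vs ~22 nits for the gamma display), but by 90% PQ is near 3,900 nits while the SDR display is still under 80. So midtones stay comparable while the top end diverges wildly, which is also why an HDR grade naively reinterpreted as SDR looks wrong.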


Currently, Blender only directly supports HDR view on Mac AFAIK:

In the Display menu at the very bottom there. It’s greyed out for me because I’m on Windows.

And it doesn’t have a dedicated View Transform for this scenario:
All the View Transforms basically remap everything into the 0–1 range, except for Standard, which simply clamps everything into the 0–1 range. If you activate HDR, I think it simply doesn’t clamp the top end, and presumably applies a different, more logarithmic curve to the output.

That means that, while Standard would then make use of the full range of your screen, it would not do things like Filmic’s “go to white”; instead it gives you very, very saturated bright colors, which may not be what you want.
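To illustrate that difference, here’s a toy sketch. It uses a plain Reinhard curve as a stand-in for a “go to white” transform (this is not Blender’s actual Filmic math) and compares it to a hard clamp as the same scene-linear red gets brighter:

```python
import numpy as np

def standard_clamp(rgb):
    """'Standard'-style display mapping: just clip into 0-1."""
    return np.clip(rgb, 0.0, 1.0)

def reinhard(rgb):
    """Toy 'go to white' curve (a stand-in, not Blender's actual Filmic)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb / (1.0 + rgb)

red = np.array([1.0, 0.25, 0.25])  # a saturated scene-linear red
for stops in (0, 2, 4):
    c = red * 2.0 ** stops  # open up the exposure
    print(stops, standard_clamp(c), reinhard(c).round(3))
```

The clamp keeps the color at maximum saturation right up until the lesser channels also cross 1.0 and then snaps straight to pure white, while the curve rolls everything off toward white gradually.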

That isn’t a fundamental limitation of the method, I think: it’d just be a matter of building a view transform that is purpose-built for HDR view. This is just another OCIO config, so you wouldn’t even have to rebuild Blender. Just add whatever HDR-targeting OCIO profile you like (if you have access to one) to the OCIO.config in
The 4.2 in that path comes from the Blender version, so if you’re using something else, adjust accordingly.

Note that this is only what I’ve heard; I wasn’t able to see it for myself, because it’s Mac-only. Changing that would require more extensive changes to Blender.
