Lightroom Classic: Color space control. Why can't I control what color space I edit in?

  • Question
  • Updated 2 months ago
  • Answered
  • (Edited)
LR Develop only uses the ProPhoto color space.
No monitor can show all the colors in this space, and no printer can print them.
So why is LR limiting me to this color space? It's like editing blind.

Ryan Speth

  • 3 Posts
  • 0 Reply Likes

Posted 2 months ago


Carlos Cardona

  • 473 Posts
  • 98 Reply Likes
ProPhoto is the BIGGEST color space, so it’s not “limiting” you, since it contains all the other spaces. If your monitor only shows sRGB, then that’s what you’re editing in. If AdobeRGB, then that’s your effective color space. Exactly. So not blind.
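
To make that concrete, here's a rough sketch (my own illustration, nothing Lightroom exposes) using the third-party colour-science Python package; the RGB_COLOURSPACES keys and the RGB_to_RGB call are how I understand that library's API, so treat it as a sketch rather than gospel. A strongly saturated colour that ProPhoto can describe simply falls outside what sRGB, and hence an sRGB monitor, can represent:

# Illustration only: assumes the third-party `colour-science` package
# (`pip install colour-science`); not part of Lightroom.
import numpy as np
import colour

prophoto = colour.RGB_COLOURSPACES["ProPhoto RGB"]
srgb = colour.RGB_COLOURSPACES["sRGB"]

# A heavily saturated green, expressed as linear ProPhoto RGB values.
linear_prophoto = np.array([0.0, 1.0, 0.0])

# Re-express the same colour as linear sRGB.  Components below 0 or above 1
# mean the colour exists in ProPhoto but cannot be shown on an sRGB display.
linear_srgb = colour.RGB_to_RGB(linear_prophoto, prophoto, srgb)
print(linear_srgb)                  # out-of-range components => out of sRGB gamut
print(np.clip(linear_srgb, 0, 1))   # roughly what an sRGB monitor would actually show

The clipped version is what you see on screen; the unclipped numbers are still there in the file.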

Ryan Speth

  • 3 Posts
  • 0 Reply Likes
The first image shows the color space that my Eizo works in.
The second is the ProPhoto color space.
If a color space container is larger than what a monitor can show, then you are working blind when it comes to color.
So I restate my point: if the monitor can't show it but the color space still produces it, you are working blind.


Ryan Speth

  • 3 Posts
  • 0 Reply Likes
Sorry, my post should read: why am I being limited to this color space container?


Ruurd van Dijk

  • 32 Posts
  • 6 Reply Likes
It's not a limit. It's always best to edit in the 'largest' color space, even if your monitor doesn't support all of it. 'Degrading' to a smaller color space, together with converting to e.g. JPEG, should always be the very last steps in your workflow. That's why you can only choose those settings during Export.
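
For what that last step looks like in practice, here's a minimal sketch using Pillow's ImageCms (LittleCMS) bindings. It's only an illustration of the principle, not what Lightroom does internally, and the file names and the ProPhoto profile path are hypothetical:

# Illustration of 'convert down only at export', using Pillow's ImageCms.
# File names and the ProPhoto profile path are hypothetical.
from PIL import Image, ImageCms

master = Image.open("edited_prophoto_master.tif")   # hypothetical wide-gamut master (treated as 8-bit RGB here)
prophoto_icc = "ProPhoto.icm"                       # hypothetical path to a ProPhoto RGB profile
srgb_profile = ImageCms.createProfile("sRGB")       # Pillow's built-in sRGB profile

# Last step of the workflow: squeeze the wide-gamut data down into sRGB...
for_web = ImageCms.profileToProfile(master, prophoto_icc, srgb_profile, outputMode="RGB")

# ...and only then bake it into an 8-bit JPEG, with the sRGB profile embedded.
srgb_bytes = ImageCms.ImageCmsProfile(srgb_profile).tobytes()
for_web.save("export_for_web.jpg", quality=90, icc_profile=srgb_bytes)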

Carlos Cardona

  • 473 Posts
  • 98 Reply Likes
If the monitor can’t show it, then the color space is not “producing it”! Extra information you can’t use is not limiting you, it’s just digital trash. You are editing...you know what, I’m done trying to explain it to you, you aren’t getting a simple explanation.

Johan Elzenga, Champion

  • 2167 Posts
  • 897 Reply Likes
When you output your images, for example to a printer, you will use yet another color space, because the printer is CMYK-based. To be able to utilize all the colors the printer can produce, you need to work in a color space big enough to encompass all these possible output color spaces. That is why your working space needs to be as big as ProPhoto RGB.
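
That is also essentially what soft proofing is for: previewing, on the monitor, how the image will land inside a particular printer/paper gamut before you commit. A rough sketch of the idea with Pillow's ImageCms (both profile paths are hypothetical, and Lightroom's own soft-proofing is far more polished):

# Sketch of the soft-proofing idea with Pillow's ImageCms; profile paths are hypothetical.
from PIL import Image, ImageCms

image = Image.open("edited_master.tif")              # hypothetical master file
working_profile = "ProPhoto.icm"                     # hypothetical ProPhoto working-space profile
monitor_profile = ImageCms.createProfile("sRGB")     # stand-in for the actual monitor profile
printer_profile = "Lab_Glossy_Paper.icc"             # hypothetical printer/paper profile

# Working space -> monitor, but simulating the printer on the way, so colours
# the printer cannot reproduce are flagged before you ever hit Print.
proof = ImageCms.buildProofTransform(
    working_profile, monitor_profile, printer_profile,
    "RGB", "RGB",
    flags=ImageCms.FLAGS["SOFTPROOFING"] | ImageCms.FLAGS["GAMUTCHECK"],
)
preview = ImageCms.applyTransform(image, proof)
preview.show()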

Edmund Gall

  • 33 Posts
  • 10 Reply Likes
Hi Ryan! Forgive me if I include things you already know in my response, as colour management is a complex issue and I don't know how much you know. Perhaps the following article can help clarify: sRGB vs Adobe RGB vs ProPhoto RGB https://photographylife.com/srgb-vs-adobe-rgb-vs-prophoto-rgb

My understanding is (subject to correction by anyone reading this):
  1. You are correct that if your monitor only displays a percentage of the small sRGB colour space, then chances are you can't see what's happening to the pixels in the gap between your monitor's colour space and the ProPhoto colour space. So, if I understand your question, you're asking why we should process in ProPhoto when we can't see the full range of colour data (i.e. 'editing blind').

  2. A digital image file is basically just a stream of numbers. When we process that image, we're just changing those numbers. The colour space provides the framework for interpreting the numbers into the correct colours. The wider the colour space used during processing, the more room we have to change those numbers (though it's also a function of other factors, e.g. bit-depth per colour channel). It is entirely possible that some of that processed pixel data will move from the invisible part of the gamut to the visible part.

  3. Your DSLR camera produces either a JPEG image file or a raw image file. If you configured it to record JPEG, then the colour space embedded in that initial file will be either sRGB, or Adobe RGB (which is wider than sRGB but smaller than ProPhoto). If you configured your camera to assign sRGB, then you're starting with an image that has been initially encoded using the smaller gamut. Although you can process that using larger working colour spaces, the full range of colours won't be available (because it was lost when the image was first created by the camera). Red 240 in sRGB is a different colour to Red 240 in Adobe RGB (which is different to Red 240 in ProPhoto).

  4. If you configure your camera to produce raw image files, then you're getting the most amount of data at the start of your process: raw files don't have any colour profile assigned (though the JPEG preview files embedded within them will be encoded using either sRGB or ARGB, depending on your camera's settings).

  5. Simplistically, you can convert from a larger colour space to a smaller one that matches what your monitor displays, but colours outside the smaller gamut get clipped, so the conversion isn't reversible: converting back up to the larger space can't recover what was thrown away. The numbers on their own don't tell you the colour either; a pixel encoded as sRGB 210/100/50 is not the same colour as ARGB 210/100/50 or ProPhoto 210/100/50 (see the sketch after this list).

  6. If Adobe allowed us to set the working colour space (i.e. the colour space LR uses while manipulating the image data) to sRGB, then starting from a camera-assigned sRGB JPEG nothing visible would change: we'd start with the smaller gamut and end with the smaller gamut.

  7. However, if we started with an ARGB JPEG or a raw file, we'd effectively be dumping a significant amount of colour data. Why would we wish to do that? As I said in Pt #2 above, if any of our processing actions (e.g. increasing saturation) pushes the resulting pixel colour value outside sRGB, that result simply can't be recorded. Coupled with processing in just 8 bits/channel, we'd also have fewer shades of each channel (R, G or B) available, which may result in banding.

  8. Hence, as others stated above, the better method is to always ensure you have the most colour data and widest gamut available at each stage of your processing, until you're ready to output the processed image for a specific purpose. If you're producing an image for display on the web, you'd be converting to 8-bit sRGB JPEG via LR export. If you're producing for prints in a high-end lab, you might be converting to 16-bit TIFF with a larger colour space embedded.

  9. Thus, by forcing all users to use the widest colour space possible as the working colour space, LR is not really 'limiting' us: it's giving us the most room possible to complete the required processing changes to the image file's colour data without losing the results (because they got pushed outside the range of a smaller, non-ProPhoto working gamut).

  10. Btw, in PS you can configure the working colour space and bit depth. The same rationale described above covers why you'd want to process using 16 bits per channel even if your camera only produces a 12-bit or 14-bit raw file and you will eventually produce an 8-bit JPEG: 16 bits gives a couple of orders of magnitude more shades per colour channel (65,536 levels vs 256), so we can push the file harder before facing a major risk of banding or artefacts in the processed image (see the sketch after this list). Thus, PS gives you the control you desire, but if we don't understand the impact of the settings we choose, we can end up with a poorer or inappropriate output file.
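
To put some rough numbers on points 5 and 10, here's a small sketch of my own using the third-party colour-science package (not anything Adobe ships; the RGB_COLOURSPACES, RGB_to_RGB and cctf_decoding calls are my understanding of that library's API):

# Illustration of points 5 and 10; assumes `pip install colour-science`.
import numpy as np
import colour

srgb = colour.RGB_COLOURSPACES["sRGB"]
prophoto = colour.RGB_COLOURSPACES["ProPhoto RGB"]

# Point 5: the same 8-bit numbers mean different colours in different spaces.
# Take the colour that 210/100/50 means *in sRGB* and re-express it in ProPhoto:
triplet = np.array([210, 100, 50]) / 255.0                   # 8-bit values scaled to 0..1
linear = colour.cctf_decoding(triplet, function="sRGB")      # undo the sRGB tone curve
print(colour.RGB_to_RGB(linear, srgb, prophoto))             # same colour, different numbers

# Point 5, continued: large -> small is lossy.  Clip a saturated ProPhoto colour
# into sRGB and convert back; the original values are not recovered.
saturated = np.array([0.05, 0.95, 0.10])                     # linear ProPhoto, near-pure green
clipped = np.clip(colour.RGB_to_RGB(saturated, prophoto, srgb), 0.0, 1.0)
print(colour.RGB_to_RGB(clipped, srgb, prophoto))            # no longer equal to `saturated`

# Point 10: bit depth is just how many shades per channel are available.
print(2 ** 8, 2 ** 16)                                       # 256 vs 65,536 levels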
Hope this helps...