I guess I may have missed part of the context of the discussion, as I have not yet read all of the replies. So, to set a context for our local discussion: I consider the only valid environment in which to perform any color, exposure, or noise-related post processing to be the RAW file itself. A RAW image is, for all intents and purposes, an exact digital record of the signal that was on the sensor at the time of exposure. So long as you keep an image in its "original digital signal" form, you are not actually processing RGB pixels; you are processing a signal that can be interpreted in a variety of ways. You don't even need to think of a RAW as a matrix of discrete R, G, and B pixels. You can think of it as a 2D waveform of pure luminance, plus a separate 2D waveform of chrominance (color information described by vision-accurate green-magenta and blue-yellow opponent axes). You can adjust color information independently of exposure information, or both independently of noise information (or concurrently process the image through a variety of other representations along a pipeline toward the final result you see on your screen), so long as you work with the original digital signal.
When it comes to RGB images, those should only ever crop up in the final stages of your image processing. By the time you start working on a 16-bit RGB TIFF, your color balance, exposure tuning, curves adjustments, and noise removal should have already been done. Personally, I'll only start doing content-related adjustments (content-aware fill, spot healing, patching, etc.) after I've done all of that. At that point I am working with an appropriately "tuned and ready" PP-Master image. Since that PP-Master is still RAW, with a bunch of non-destructive edits in the edit history of my RAW editor that can be overruled at any time, I can always tweak it further as if it were a digital signal (rather than an RGB pixel matrix). From that PP-Master, I'll generate a CC-Working 16-bit TIFF, for the purposes of using content-aware tools in Photoshop to clean up content, sharpen, and do other "final cleanup". From the CC-Working, I save out a CC-Master, which has the same original size and dimensions as the RAW. Once I have a full-sized CC-Master, only then do I really feel free to start scaling and cropping for different output mediums, or applying the very LAST edit: output sharpening. You could think of the progression of edit stages like a small tree. At the root is the original unprocessed RAW, from which a series of edits progresses:
Code:
                                  / -> 8x10 Print, Cropped, Sharpened, 600ppi
RAW -> "PP-Master" -> CC-Master ----> 17x22 Print, Cropped, Sharpened, 300ppi
                                  \ -> 750x500 Web, Sharpened
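If it helps, here's how I'd sketch that branching workflow in code (a toy Python model of my own; the stage names and edit lists are just illustrative, not any real editor's API):

```python
# Hypothetical sketch of the branching edit workflow described above.
# Stage names and edit lists are illustrative only.

raw = {"name": "RAW", "edits": []}

def derive(parent, name, new_edits):
    """Create a derived stage that keeps the full edit lineage."""
    return {"name": name, "edits": parent["edits"] + new_edits}

pp_master = derive(raw, "PP-Master",
                   ["white balance", "exposure", "curves", "noise reduction"])
cc_master = derive(pp_master, "CC-Master",
                   ["content-aware cleanup", "sharpening"])

# Every output branches from the same CC-Master; the last edit is always
# output sharpening, applied per medium.
outputs = [
    derive(cc_master, "8x10 Print 600ppi",  ["crop", "scale", "output sharpen"]),
    derive(cc_master, "17x22 Print 300ppi", ["crop", "scale", "output sharpen"]),
    derive(cc_master, "750x500 Web",        ["scale", "output sharpen"]),
]

for o in outputs:
    print(o["name"], "<-", len(o["edits"]), "edits in lineage")
```

The point of the structure is that every output branch shares the same lineage back to the untouched RAW at the root.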
To the rest of your last post and to your question:
RGB as "pixels in a matrix", if you perceive it in a model of, say, hue, saturation, and brightness, has low resolution of hue and saturation at the extremes of brightness. I don't see a way around that (perhaps you do?).
Since exposure tuning, color balance, and noise removal should always be performed on a RAW, they are effectively being done on the original digital signal. As long as you perform all of that work in a non-destructive RAW editor, you are never really working with a pixel matrix in an HSB model. You are working with a linear signal, attenuated by a camera profile tone curve (which is what gives it meaningful, good-looking contrast, a highlight shoulder, and a shadow foot), and rendered to the screen with your edits applied. The signal might as well be as fluid as it was on the sensor (there are some limitations, but we can get to that later). You can shift exposure around within the available dynamic range, which in a tool like Lightroom spans at least 8 stops (+/- 4 EV) from the Exposure slider alone, and more when you use tools like highlight recovery, shadow lifting, and white and black point tuning.
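To make the "linear signal plus tone curve at render time" idea concrete, here's a toy sketch (my own simplified model, not Lightroom's actual pipeline; the gamma curve is just a stand-in for a real camera profile tone curve):

```python
import numpy as np

# Toy model: RAW values are linear, an exposure edit is a multiply in
# linear space, and the tone curve is applied only at render time.

def tone_curve(linear):
    """Simple gamma stand-in for a camera profile tone curve."""
    return np.clip(linear, 0.0, 1.0) ** (1 / 2.2)

def render(linear_signal, exposure_ev):
    """Re-render the ORIGINAL linear signal with the current settings."""
    adjusted = linear_signal * (2.0 ** exposure_ev)  # +1 EV doubles the light
    return tone_curve(adjusted)

linear = np.array([0.02, 0.10, 0.25, 0.50])  # made-up linear sensor values

# Both renders start from the same original linear data.
base = render(linear, 0.0)
plus_one = render(linear, +1.0)
print(base)
print(plus_one)
```

Note that the exposure change happens in linear space, before the tone curve, which is why it behaves like changing the light itself rather than brightening already-rendered pixels.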
Nothing actually changes the original signal in a non-destructive RAW editor. What you see on the screen is simply a rendering of the digital signal contained within your RAW image. Successive tweaks with the same tool, such as Exposure, are not compounded edits on top of edits of RGB pixel data. If that were the case, you would only be able to make a few edits before your image started showing a pronounced loss of fidelity and quality...and not long after that it would start to melt into meaningless "pixel mud" due to the error introduced by each and every successive edit. In actuality, in a non-destructive RAW editor, each successive edit is simply a change to a rather small set of instructions for the rendering engine, which reprocesses the original digital signal and renders the image you see on your screen. After every single tiny little edit. ;)
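Here's a toy demonstration of why that matters (my own sketch, not any real editor's internals): compounding destructive 8-bit edits accumulates quantization and clipping error, while re-rendering from the original does not.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(0.0, 1.0, 10000)   # stand-in for image data

def to_8bit(x):
    """Quantize to 8-bit levels, the way a destructive editor stores pixels."""
    return np.round(np.clip(x, 0.0, 1.0) * 255) / 255

# Destructive: each edit reprocesses the PREVIOUS quantized result.
destructive = to_8bit(original)
for ev in [+2, -2] * 10:                  # 20 edits that should cancel out
    destructive = to_8bit(destructive * 2.0 ** ev)

# Non-destructive: the edits only change an instruction list; there is
# exactly one render from the original at the end.
instructions = [+2, -2] * 10
net_ev = sum(instructions)                # = 0
non_destructive = to_8bit(original * 2.0 ** net_ev)

print("destructive max error:    ", np.abs(destructive - to_8bit(original)).max())
print("non-destructive max error:", np.abs(non_destructive - to_8bit(original)).max())
```

The destructive chain clips every value above 0.25 at the first +2 EV step and never gets that information back, while the non-destructive render comes out identical to the original.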
Assuming you did not clip your highlights when making an ETTR photo, every last scrap of information in those highlights is not only recoverable, it also contains full, maximally rich color information. Color precision in the analog signal peaks as each sensor pixel approaches saturation (its full-well capacity), since each sensor pixel represents a luminous value for a single specific color (as opposed to an interpolated RGB composite pixel). The red, green, and blue sensor pixels have not yet been interpolated into RGB pixels when you are editing a RAW. If you pull down your highlights by a stop, by two stops, by four stops, the original signal is simply reprocessed with a different set of instructions...different offsets, adjustments, and attenuations. You aren't reprocessing previously processed pixels; you are always adjusting the original, and for all intents and purposes "fluid", digital signal. Same goes for curves. Same goes for saturation. Same goes for white balance. Whether you adjust white balance by 1000 K, 5000 K, or 10000 K, the final result will still look accurate, because you are always going back to that original digital signal and simply re-rendering.
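White balance is a nice example of how simple these "instructions" can be at the raw stage: before demosaicing, it is essentially a per-channel gain. A toy sketch (my own illustrative numbers, not any real camera's multipliers):

```python
# One made-up linear sensel triplet (red, green, blue photosites).
raw_r, raw_g, raw_b = 0.30, 0.50, 0.62

def apply_wb(r, g, b, r_gain, b_gain):
    """Re-render from the ORIGINAL raw values with new channel gains.
    Green is conventionally left as the reference channel."""
    return (r * r_gain, g, b * b_gain)

# Warm and cool settings are both computed from the same raw triplet,
# so even a huge white-balance swing never compounds earlier renders.
warm = apply_wb(raw_r, raw_g, raw_b, r_gain=2.0, b_gain=1.2)
cool = apply_wb(raw_r, raw_g, raw_b, r_gain=1.4, b_gain=1.9)
print(warm)
print(cool)
```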
So, instead of thinking about your photographs as matrices of RGB pixels...think about them as an original, fluid digital signal until the moment you create a CC-Master as a 16-bit TIFF image. From that point on, you'll always have your original, full-quality, fluid digital signal in the PP-Master (your RAW plus a bunch of rendering and noise removal instructions), as well as a CC-Master with the undesirable content cleaned up. You can always re-tune exposure and create a second CC-Master, or copy the CC-Master to crop, scale, sharpen, and publish to different mediums.
A little bit on digital signals. While I called the digital signal "fluid" above, because it is effectively a "baked" representation of the true analog signal from the sensor, it is not fluid without limits. The analog signal on a sensor can be tuned and adjusted at will, shifting exposure around within the dynamic range, without any loss whatsoever, until you actually expose. Once you do expose, you get a digital signal that is very flexible, but not quite as good as the real thing. For one, you have baked-in noise, and noise of all forms. You obviously have your photon noise, but you also have FPN (fixed pattern noise), HVBN (horizontal and/or vertical banding noise), thermal noise, quantization noise, non-uniform pixel response noise, etc. Each of those forms of noise can be represented by a distinct component in a frequency-domain (Fourier or wavelet) decomposition of the digital signal. As such, technically speaking, they could be removed without adversely affecting the image. The tools to really do that don't quite exist in readily usable forms...so for now we can generally treat noise as a fixed part of our digital signal that will always be rendered along with everything else, some of which might be removed by a tool like Lightroom (which may affect other components of your signal and introduce some blur), or by Topaz DeNoise, Nik Dfine, etc. for slightly better results.
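To illustrate the frequency-component idea with the simplest possible case, here's a toy 1-D sketch (mine, not a production denoiser): fixed-frequency banding noise occupies a single FFT bin, so it can be notched out without touching the rest of the signal.

```python
import numpy as np

n = 256
x = np.arange(n)
scene   = np.sin(2 * np.pi * x / n)              # slow "image content" (bin 1)
banding = 0.2 * np.sin(2 * np.pi * 32 * x / n)   # fixed-pattern banding (bin 32)
signal  = scene + banding

spectrum = np.fft.rfft(signal)
spectrum[32] = 0                                  # notch out only the banding bin
cleaned = np.fft.irfft(spectrum, n)

print("max error before:", np.abs(signal - scene).max())
print("max error after: ", np.abs(cleaned - scene).max())
```

Real noise is messier than this (photon noise is broadband, and banding frequencies overlap real detail), which is why practical tools inevitably blur some signal along with the noise.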
Additionally, a digital signal is represented in limited-precision (12/14-bit) form as discrete integers, rather than in infinite-precision form as real numbers. As such, certain artifacts are intrinsic to the digital signal. These limitations ultimately put a hard cap on just how far we can push, pull, stretch, compress, and otherwise massage a RAW image and still produce an aesthetically pleasing result, even in the best and most advanced of RAW editors.
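A quick sketch of what that precision limit means in practice (a toy model of my own, not any camera's actual encoding): a 14-bit RAW stores integers from 0 to 16383, so deep shadows contain very few distinct levels, and a big exposure push reveals the steps as posterization.

```python
import numpy as np

levels = 2 ** 14 - 1                      # 14-bit full scale (16383)
scene = np.linspace(0.0, 0.001, 1000)     # a very dark, perfectly smooth gradient
stored = np.round(scene * levels)         # the quantized integer RAW values

distinct = len(np.unique(stored))
print("distinct levels in this dark gradient:", distinct)

# Pushing the region up 10 stops just scales the few available steps:
pushed = (stored / levels) * 2 ** 10
step = np.diff(np.unique(pushed)).min()
print("step between adjacent tones after a +10 EV push:", step)
```

A thousand smoothly varying scene values collapse to only a handful of stored levels, and no amount of pushing in the RAW editor can invent the tones in between.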