COLOR

  • Sensitivity of human eye

    http://www.wikilectures.eu/index.php/Spectral_sensitivity_of_the_human_eye

     

    http://www.normankoren.com/Human_spectral_sensitivity_small.jpg

     

    The spectral sensitivity of the eye is influenced by light intensity, which determines the level of activity of the cone and rod cells. This is the main characteristic of human vision. Sensitivity to individual colors, in other words wavelengths of the light spectrum, is explained by the RGB (red-green-blue) theory. This theory assumes that there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, and that their mutual interaction allows us to perceive all colors of the spectrum.

     

    http://weeklysciencequiz.blogspot.com/2013/01/violet-skies-are-for-birds.html

     

     

    The sensitivity of human eyes to light increases as light intensity decreases. In daylight conditions the cone cells respond, and the eye is most sensitive at 555 nm. In dark conditions the rod cells respond, and the eye is most sensitive at 507 nm.

     

    As light intensity decreases, cone function declines and rhodopsin accumulates in the rods. This activates the rods, allowing them to respond to light stimuli at much lower intensities.

     

    https://www.nde-ed.org/EducationResources/CommunityCollege/PenetrantTest/Introduction/lightresponse.htm

    The three curves in the figure above show the normalized response of an average human eye to various amounts of ambient light. The shift in sensitivity occurs because two types of photoreceptors, cones and rods, are responsible for the eye’s response to light. The curve on the right shows the eye’s response under normal lighting conditions; this is called the photopic response. The cones respond to light under these conditions.

     

    As mentioned previously, cones are composed of three different photo pigments that enable color perception. This curve peaks at 555 nanometers, which means that under normal lighting conditions, the eye is most sensitive to a yellowish-green color. When the light levels drop to near total darkness, the response of the eye changes significantly as shown by the scotopic response curve on the left. At this level of light, the rods are most active and the human eye is more sensitive to the light present, and less sensitive to the range of color. Rods are highly sensitive to light but are comprised of a single photo pigment, which accounts for the loss in ability to discriminate color. At this very low light level, sensitivity to blue, violet, and ultraviolet is increased, but sensitivity to yellow and red is reduced. The heavier curve in the middle represents the eye’s response at the ambient light level found in a typical inspection booth. This curve peaks at 550 nanometers, which means the eye is most sensitive to yellowish-green color at this light level. Fluorescent penetrant inspection materials are designed to fluoresce at around 550 nanometers to produce optimal sensitivity under dim lighting conditions.

     

    Read more: Sensitivity of human eye
  • Gamma correction

    http://www.normankoren.com/makingfineprints1A.html#Gammabox

     

    https://en.wikipedia.org/wiki/Gamma_correction

     

    http://www.photoscientia.co.uk/Gamma.htm

     

    https://www.w3.org/Graphics/Color/sRGB.html

     

    http://www.eizoglobal.com/library/basics/lcd_display_gamma/index.html

     

    https://forum.reallusion.com/PrintTopic308094.aspx

     

    Basically, gamma is the relationship between the numerical value of a pixel and the brightness of that pixel as it appears on the screen. Generally, gamma is just about defining relationships.

    Three main types:
    – Image gamma: encoded in images
    – Display gamma: encoded in hardware and applied at viewing time
    – System (or viewing) gamma: the net effect of all gammas when you look at the final image. In theory this should flatten back to a gamma of 1.0.

     

    Our eyes, and different camera and video recording devices, do not capture luminance linearly.
    Different display devices (monitors, phone screens, TVs) do not display luminance linearly either. So both need to be corrected, hence the gamma correction function.

    The human perception of brightness, under common illumination conditions (not pitch black nor blindingly bright), follows an approximate power function (note: no relation to the gamma function), with greater sensitivity to relative differences between darker tones than between lighter ones, consistent with Stevens’ power law for brightness perception. If images are not gamma-encoded, they allocate too many bits (or too much bandwidth) to highlights that humans cannot differentiate, and too few bits (or too little bandwidth) to the shadow values that humans are sensitive to, which would then require more bits/bandwidth to maintain the same visual quality.
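
    As a concrete illustration, here is a minimal sketch, assuming a pure power-law encode with gamma 1/2.2 (real sRGB adds a small linear segment near black), of how gamma encoding redistributes 8-bit code values toward the shadows:

    GAMMA = 1 / 2.2  # typical encoding gamma (assumed here)
    
    def encode(linear):
        # Map a linear light value in [0, 1] to an 8-bit code value
        return round(255 * linear ** GAMMA)
    
    # Stored linearly, 1% and 2% gray would land on codes 3 and 5;
    # gamma-encoded they are pushed far apart, preserving shadow detail:
    for lin in (0.01, 0.02, 0.5, 1.0):
        print(f"linear {lin} -> code {encode(lin)}")  # 31, 43, 186, 255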

    https://blog.amerlux.com/4-things-architects-should-know-about-lumens-vs-perceived-brightness/

    Cones manage color receptivity; rods determine how large our pupils should be. The larger (more dilated) our pupils are, the more light enters our eyes. In dark situations, our rods dilate our pupils so we can see better. This impacts how we perceive brightness.

     

    https://www.cambridgeincolour.com/tutorials/gamma-correction.htm

    A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

     

    Display gamma
    The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values.

     

    Gamma encoding of images is used to optimize the usage of bits when encoding an image, or the bandwidth used to transport it, by taking advantage of the non-linear manner in which humans perceive light and color. Human response to luminance is also biased, being especially sensitive in dark areas.
    Thus the human visual system has a non-linear response to the power of the incoming light: a fixed increase in power will not produce a fixed increase in perceived brightness.
    We perceive a value as half as bright when it is actually about 18% of the original intensity, not 50%. As such, our perception is not linear.

     

    You probably already know that a pixel can have any ‘value’ of Red, Green, and Blue between 0 and 255, and you would therefore think that a pixel value of 127 would appear as half of the maximum possible brightness, and that a value of 64 would represent one-quarter brightness, and so on. Well, that’s just not the case.
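
    A quick check of that claim, assuming a standard display gamma of 2.2:

    value = 127 / 255          # normalized pixel level, about 0.5
    luminance = value ** 2.2   # relative light output of the display
    print(f"{luminance:.0%}")  # ~22% of maximum brightness, not 50%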

     

    Pixar Color Management
    https://renderman.pixar.com/color-management


    – Why do we need linear gamma?
    Because light behaves linearly in the real world, so rendering computations only work properly when applied to linear values.

     

    – Why do we need to view in sRGB?
    Because the resulting linear image is not suitable for viewing, but contains all the proper data. Pixar’s “it” viewer can compensate by showing the rendered image through an sRGB look-up table (LUT), which is identical to what the final image will be after the sRGB gamma curve is applied in post.

    This would be simple enough if every software would play by the same rules, but they don’t. In fact, the default gamma workflow for many 3D software is incorrect. This is where the knowledge of a proper imaging workflow comes in to save the day.

     

    Cathode-ray tubes have a peculiar relationship between the voltage applied to them, and the amount of light emitted. It isn’t linear, and in fact it follows what’s called by mathematicians and other geeks, a ‘power law’ (a number raised to a power). The numerical value of that power is what we call the gamma of the monitor or system.

     

    Thus, gamma describes the nonlinear relationship between the pixel levels in your computer and the luminance of your monitor (the light energy it emits) or the reflectance of your prints. The equation is:

    Luminance = C * value^gamma + black level

    – C is set by the monitor Contrast control.

    – Value is the pixel level normalized to a maximum of 1. For an 8 bit monitor with pixel levels 0 – 255, value = (pixel level)/255.

     

    – Black level is set by the (misnamed) monitor Brightness control. The relationship is linear if gamma = 1. The chart illustrates the relationship for gamma = 1, 1.5, 1.8 and 2.2 with C = 1 and black level = 0.
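
    As a sketch, the formula above transcribes directly to code (using C = 1 and black level = 0, as in the chart):

    def monitor_luminance(pixel_level, gamma, C=1.0, black_level=0.0):
        value = pixel_level / 255            # normalize 8-bit level to [0, 1]
        return C * value ** gamma + black_level
    
    # A middle pixel level under the gammas plotted in the chart:
    for g in (1.0, 1.5, 1.8, 2.2):
        print(f"gamma {g}: level 128 -> {monitor_luminance(128, g):.3f}")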

     

    Gamma affects middle tones; it has no effect on black or white. If gamma is set too high, middle tones appear too dark. Conversely, if it’s set too low, middle tones appear too light.

     

    The native gamma of monitors – the relationship between grid voltage and luminance – is typically around 2.5, though it can vary considerably. This is well above any of the display standards, so you must be aware of gamma and correct it.

     

    A display gamma of 2.2 is the de facto standard for the Windows operating system and the Internet-standard sRGB color space.

     

    The old standard for Macintosh and prepress file interchange was 1.8. It is now 2.2 as well.

     

    Video cameras have gammas of approximately 0.45 – the inverse of 2.2. The viewing or system gamma is the product of the gammas of all the devices in the system – the image acquisition device (film+scanner or digital camera), color lookup table (LUT), and monitor. System gamma is typically between 1.1 and 1.5. Viewing flare and other factors make images look flat at a system gamma of 1.0.

     

    Most laptop LCD screens are poorly suited for critical image editing because gamma is extremely sensitive to viewing angle.

     

    More about screens

    https://www.cambridgeincolour.com/tutorials/gamma-correction.htm

    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRT’s. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

    About black level (brightness). Your monitor’s brightness control (which should actually be called black level) can be adjusted using the mostly black pattern on the right side of the chart. This pattern contains two dark gray vertical bars, A and B, which increase in luminance with increasing gamma. (If you can’t see them, your black level is set too low.) The left bar (A) should be just above the threshold of visibility opposite your chosen gamma (2.2 or 1.8); it should be invisible where gamma is lower by about 0.3. The right bar (B) should be distinctly visible: brighter than (A), but still very dark. This chart is only for monitors; it doesn’t work on printed media.

     

    The 1.8 and 2.2 gray patterns at the bottom of the image represent a test of monitor quality and calibration. If your monitor is functioning properly and calibrated to gamma = 2.2 or 1.8, the corresponding pattern will appear smooth neutral gray when viewed from a distance. Any waviness, irregularity, or color banding indicates incorrect monitor calibration or poor performance.

     

    As another test of whether a computer monitor is properly hardware-adjusted and can display shadow detail in sRGB images, one should see the left half of the circle in the large black square very faintly, while the right half should be clearly visible. If not, one can adjust the monitor’s contrast and/or brightness settings; this alters the monitor’s perceived gamma. The image is best viewed against a black background.

     

    This procedure is not suitable for calibrating or print-proofing a monitor. It can be useful for making a monitor display sRGB images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace.

     

    On some operating systems running the X Window System, one can set the gamma correction factor (applied on top of the existing gamma value) with the command xgamma -gamma 0.9 to set the factor to 0.9, or xgamma alone to query the current value (the default is 1.0). On OS X systems, gamma and other related screen calibrations are made through the System Preferences panel.

     

    https://www.kinematicsoup.com/news/2016/6/15/gamma-and-linear-space-what-they-are-how-they-differ

    Linear color space means that numerical intensity values correspond proportionally to their perceived intensity. This means that colors can be added and multiplied correctly. A color space without that property is called “non-linear”. Below is an example where an intensity value is doubled in a linear and a non-linear color space. While the corresponding numerical values in linear space are correct, in the non-linear space (gamma = 0.45, more on this later) we can’t simply double the value to get the correct intensity.
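
    A small sketch of that doubling example, using the article’s encoding gamma of 0.45:

    encode = lambda linear: linear ** 0.45      # to non-linear (gamma) space
    decode = lambda coded: coded ** (1 / 0.45)  # back to linear space
    
    # Doubling in linear space is correct: 0.25 -> 0.5.
    # Naively doubling the gamma-encoded value is not:
    doubled = 2 * encode(0.25)
    print(decode(doubled))   # ~1.17, far brighter than the intended 0.5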

     

    The need for gamma arises for two main reasons: The first is that screens have been built with a non-linear response to intensity. The other is that the human eye can tell the difference between darker shades better than lighter shades. This means that when images are compressed to save space, we want to have greater accuracy for dark intensities at the expense of lighter intensities. Both of these problems are resolved using gamma correction, which is to say the intensity of every pixel in an image is put through a power function. Specifically, gamma is the name given to the power applied to the image.

     

    CRT screens, simply by how they work, apply a gamma of around 2.2, and modern LCD screens are designed to mimic that behavior. A gamma of 2.2, the reciprocal of 0.45, darkens the gamma-encoded (brightened) images back down, restoring the original image.

    Read more: Gamma correction
  • The Forbidden colors – Red-Green & Blue-Yellow: The Stunning Colors You Can’t See

    www.livescience.com/17948-red-green-blue-yellow-stunning-colors.html

     

     

    While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.

     

    https://en.rockcontent.com/blog/the-use-of-yellow-in-data-design

    The back of the retina is covered in light-sensitive neurons known as cone cells and rod cells. There are three types of cone cells, each sensitive to different ranges of light. These ranges overlap, but for convenience the cones are referred to as blue (short-wavelength), green (medium-wavelength), and red (long-wavelength). The rod cells are primarily used in low-light situations, so we’ll ignore those for now.

     

    When light enters the eye and hits the cone cells, the cones get excited and send signals to the brain through the visual cortex. Different wavelengths of light excite different combinations of cones to varying levels, which generates our perception of color. You can see that the red cones are most sensitive to light, and the blue cones are least sensitive. The sensitivity of green and red cones overlaps for most of the visible spectrum.

     

    Here’s how your brain takes the signals of light intensity from the cones and turns them into color information. To see red or green, your brain finds the difference between the levels of excitement in your red and green cones. This is the red-green channel.

     

    To get “brightness,” your brain combines the excitement of your red and green cones. This creates the luminance, or black-white, channel. To see yellow or blue, your brain then finds the difference between this luminance signal and the excitement of your blue cones. This is the yellow-blue channel.
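
    Put schematically, a toy sketch of the three opponent channels described above (not a physiologically calibrated model; the cone values are illustrative numbers in [0, 1]):

    def opponent_channels(red_cone, green_cone, blue_cone):
        red_green = red_cone - green_cone      # red-green channel
        luminance = red_cone + green_cone      # black-white (luminance) channel
        yellow_blue = luminance - blue_cone    # yellow-blue channel
        return red_green, luminance, yellow_blue
    
    # Light that excites red and green cones together reads as bright yellow:
    print(opponent_channels(0.9, 0.8, 0.1))    # ~(0.1, 1.7, 1.6)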

     

    From the calculations made in the brain along those three channels, we get four basic colors: blue, green, yellow, and red. Seeing blue is what you experience when low-wavelength light excites the blue cones more than the green and red.

     

    Seeing green happens when light excites the green cones more than the red cones. Seeing red happens when only the red cones are excited by high-wavelength light.

     

    Here’s where it gets interesting. Seeing yellow is what happens when BOTH the green AND red cones are highly excited near their peak sensitivity. This is the biggest collective excitement that your cones ever have, aside from seeing pure white.

     

    Notice that yellow occurs at peak intensity in the graph to the right. Further, the lens and cornea of the eye happen to block shorter wavelengths, reducing sensitivity to blue and violet light.

    Read more: The Forbidden colors – Red-Green & Blue-Yellow: The Stunning Colors You Can’t See

LIGHTING

  • Rec-2020 – TVs new color gamut standard used by Dolby Vision?

    https://www.hdrsoft.com/resources/dri.html#bit-depth

     

    The dynamic range is a ratio between the maximum and minimum values of a physical measurement. Its definition depends on what the dynamic range refers to.

    For a scene: Dynamic range is the ratio between the brightest and darkest parts of the scene.

    For a camera: Dynamic range is the ratio of saturation to noise. More specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above camera noise.

    For a display: Dynamic range is the ratio between the maximum and minimum intensities emitted from the screen.

     

    The Dynamic Range of real-world scenes can be quite high: ratios of 100,000:1 are common in the natural world. An HDR (High Dynamic Range) image stores pixel values that span the whole tonal range of real-world scenes. Therefore, an HDR image is encoded in a format that allows the largest range of values, e.g. floating-point values stored with 32 bits per color channel. Another characteristic of an HDR image is that it stores linear values, which means the value of a pixel from an HDR image is proportional to the amount of light measured by the camera.
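
    Since dynamic range is a ratio, it can also be expressed in photographic stops, where each stop is a doubling of light. A quick sketch:

    import math
    
    def stops(ratio):
        # Convert a contrast ratio to photographic stops (doublings of light)
        return math.log2(ratio)
    
    print(stops(100_000))   # ~16.6 stops for a 100,000:1 natural scene
    print(stops(256))       # 8 stops, the theoretical limit of 8-bit data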

     

    For TVs HDR is great, but it’s not the only new TV feature worth discussing.

     

    Wide color gamut, or WCG, is often lumped in with HDR. While they’re often found together, they’re not intrinsically linked. Where HDR is an increase in the dynamic range of the picture (with contrast and brighter highlights in particular), a TV’s wide color gamut coverage refers to how much of the new, larger color gamuts a TV can display.

     

    Wide color gamuts only really matter for HDR video sources like UHD Blu-rays and some streaming video, as only HDR sources are meant to take advantage of the ability to display more colors.

     

     

    www.cnet.com/how-to/what-is-wide-color-gamut-wcg/

     

    Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed through a pixel; the other aspect is how broad a range of colors can be expressed (the gamut).

     

    Image rendering bit depth

     

    Wide color gamuts include a greater number of colors than what most current TVs can display, so the greater a TV’s coverage of a wide color gamut, the more colors a TV will be able to reproduce.

     

    When we talk about a color space or color gamut we refer to the range of color values stored in an image. Perceiving these colors also requires a display that has been tuned to resolve these color profiles as faithfully as possible. This is often referred to as a ‘viewer LUT’.

     

    So this usually comes paired with an increase in bit depth, going from the old 8-bit system (256 shades per color, with the potential of over 16.7 million colors: 256 red x 256 green x 256 blue) to 10-bit (1,024 shades per color, with access to over a billion colors) or higher, like 12-bit (4,096 shades per channel, for about 68 billion colors).

    The advantage of higher bit depth is the ability to adjust and bias colors with minimal loss.

    https://photo.stackexchange.com/questions/72116/whats-the-point-of-capturing-14-bit-images-and-editing-on-8-bit-monitors

     

    For an extreme example, raising the brightness of a completely dark image allows for better reproduction, independently of the reproduction medium, thanks to the amount of data available at editing time:

     

    https://www.cambridgeincolour.com/tutorials/dynamic-range.htm

     

    https://www.hdrsoft.com/resources/dri.html#bit-depth

     

    Note that the number of bits itself may be a misleading indication of the real dynamic range that the image reproduces — converting a Low Dynamic Range image to a higher bit depth does not change its dynamic range, of course.

    • 8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
    • 16-bit images (i.e. 48 bits per pixel for a color image) resulting from RAW conversion are still considered Low Dynamic Range, even though the range of values they can encode is significantly higher than for 8-bit images (65,536 versus 256). Note that converting a RAW file involves applying a tonal curve that compresses the dynamic range of the RAW data so that the converted image shows correctly on low dynamic range monitors. The need to adapt the output image file to the dynamic range of the display is the factor that dictates how much the dynamic range is compressed, not the output bit depth. By using 16 instead of 8 bits, you will gain precision but you will not gain dynamic range.
    • 32-bit images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. Unlike 8- and 16-bit images, which can take a finite number of values, 32-bit images are coded using floating-point numbers, so the range of values they can take is practically unlimited. It is important to note, though, that storing an image in a 32-bit HDR format is a necessary condition for an HDR image but not a sufficient one: when an image comes from a single capture with a standard camera, it remains a Low Dynamic Range image.

     

     

    Also note that bit depth and dynamic range are often confused with one another, but they are separate concepts and there is no direct one-to-one relationship between them. Bit depth is about capacity; dynamic range is about the actual ratio of values stored.
    The bit depth of a capturing or displaying device gives you an indication of its dynamic range capacity, that is, the highest dynamic range the device would be capable of reproducing if all other constraints were eliminated.

     

    https://rawpedia.rawtherapee.com/Bit_Depth

     

    Finally, note that there are two ways to “count” bits for an image: either the number of bits per color channel (BPC) or the number of bits per pixel (BPP). A bit (0 or 1) is the smallest unit of data stored in a computer.

    For a grayscale image, 8-bit means that each pixel can be one of 256 levels of gray (256 is 2 to the power 8).

    For an RGB color image, 8-bit means that each one of the three color channels can be one of 256 levels of color.
    Since each pixel is represented by 3 colors in this case, 8-bit per color channel actually means 24-bit per pixel.

    Similarly, 16-bit for an RGB image means 65,536 levels per color channel and 48-bit per pixel.

    To complicate matters, when an image is classified as 16-bit, it just means that it can store a maximum of 65,536 distinct values. It does not necessarily mean that it actually spans that range. If the camera sensor cannot capture more than 12 bits of tonal values, the actual bit depth of the image will be at best 12-bit, and probably less because of noise.
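
    The bits-per-channel versus bits-per-pixel arithmetic above, spelled out in a short sketch:

    def levels(bits_per_channel):
        return 2 ** bits_per_channel           # discrete levels per channel
    
    for bpc in (8, 12, 14, 16):
        print(f"{bpc}-bit/channel: {levels(bpc):>6} levels, "
              f"{bpc * 3} bits/pixel, {levels(bpc) ** 3:,} possible RGB colors")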

    The following table attempts to summarize the above for the case of an RGB color image.

     

     

    Type of digital support | Bit depth per color channel | Bit depth per pixel | F-stops | Theoretical maximum dynamic range | Reality
    8-bit | 8 | 24 | 8 | 256:1 | most consumer images
    12-bit CCD | 12 | 36 | 12 | 4,096:1 | real maximum limited by noise
    14-bit CCD | 14 | 42 | 14 | 16,384:1 | real maximum limited by noise
    16-bit TIFF (integer) | 16 | 48 | 16 | 65,536:1 | bit depth in this case is not directly related to the dynamic range captured
    16-bit float EXR | 16 | 48 | 30 | 65,536:1 | values are distributed more densely in the (lower) darker tones than in the (higher) lighter ones, allowing a more accurate description of the tones most significant to humans; normalized 16-bit floats can represent thirty stops of information with 1024 steps per stop (eighteen and a half stops over middle gray, eleven and a half below), and denormalized numbers provide an additional ten stops with decreasing precision per stop
    http://download.nvidia.com/developer/GPU_Gems/CD_Image/Image_Processing/OpenEXR/OpenEXR-1.0.6/doc/#recs
    HDR image (e.g. Radiance format) | 32 | 96 | “infinite” | 4.3 billion:1 | real maximum limited by the captured dynamic range

    32-bit floats are often called “single-precision” floats, and 64-bit floats are often called “double-precision” floats. 16-bit floats therefore are called “half-precision” floats, or just “half floats”.

     

    https://petapixel.com/2018/09/19/8-12-14-vs-16-bit-depth-what-do-you-really-need

    On a separate note, even Photoshop does not handle a full 16 bits per channel. Photoshop does use 16 bits per channel, but it treats the 16th bit differently: it is simply added to the value created from the first 15 bits. This is sometimes called 15+1 bits. It means that instead of 2^16 possible values (65,536) there are only 2^15 + 1 possible values (32,768 + 1 = 32,769).
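
    The arithmetic, spelled out:

    print(2 ** 16)      # 65,536 values for true 16-bit
    print(2 ** 15 + 1)  # 32,769 values for Photoshop's 15+1 bit mode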

     

    Rec-601 (for the older SDTV format, very similar to Rec-709) and Rec-709 (the HDTV recommended set of color standards, at times also referred to as sRGB, although not exactly the same) are currently the most widespread color formats and hardware configurations in the world.

     

    Beyond those you can find the larger P3 gamut, more commonly used in theaters and in digital production houses (with small variations and improvements to color coverage), as well as in most of the best 4K/WCG TVs.

     

    And a new standard, referred to as Rec-2020 (UHDTV), is now being promoted against P3.

     

    It is still debatable whether this will be adopted at the consumer level beyond P3, mainly due to the lack of hardware supporting it, but initial tests do suggest it would be a future-proof investment.

    www.colour-science.org/anders-langlands/

     

    Rec. 2020 is ultimately designed for television, and not cinema. Therefore, it is to be expected that its properties must behave according to current signal processing standards. In this respect, its foundation is based on current HD and SD video signal characteristics.

     

    As far as color bit depth is concerned, it allows for a maximum of 12 bits, which is more than enough for humans.

    Comparing standards, Rec-709 covers 35.9% of the human visible spectrum, P3 covers 45.5%, and Rec-2020 covers 75.8%.
    https://www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    Comparing coverage to hardware devices

     

    Note that all the new standards generally score very high on the Pointer’s Gamut chart, with Rec-2020 scoring 99.9% vs P3 at 88.2%.
    www.tftcentral.co.uk/articles/pointers_gamut.htm

    https://www.slideshare.net/hpduiker/acescg-a-common-color-encoding-for-visual-effects-applications

     

    The Pointer’s gamut is (an approximation of) the gamut of real surface colors as seen by the human eye, based on the research of Michael R. Pointer (1980). What this means is that every color that can be reflected by the surface of an object of any material is inside the Pointer’s gamut, basically establishing a widely respected target for color reproduction. Visually, Pointer’s gamut represents the colors we see around us in the natural world. Colors outside it include those that do not occur naturally, such as neon lights and computer-generated colors possible in animation, which would partially be accounted for by the new gamuts.

    cinepedia.com/picture/color-gamut/

     

    Not all current TVs can support the full spread of the new gamuts. Here is a list of modern TVs’ color coverage in percentage:
    www.rtings.com/tv/tests/picture-quality/wide-color-gamut-rec-709-dci-p3-rec-2020

     

    There are no TVs that can come close to displaying all the colors within Rec.2020, and there likely won’t be for at least a few years. However, to help future-proof the technology, Rec.2020 support is already baked into the HDR spec. That means that the same genuine HDR media that fills the DCI P3 space on a compatible TV now, will in a few years also fill Rec.2020 on a TV supporting that larger space.

     

    Rec.2020’s main gains are in the number of new tones of green that it will display, though it also offers improvements to the number of blue and red colors as well. Altogether, Rec.2020 will cover about 75% of the visual spectrum, which is a sizeable increase in coverage even over DCI P3.

     

     

    Dolby Vision

    https://www.highdefdigest.com/news/show/what-is-dolby-vision/39049

    https://www.techhive.com/article/3237232/dolby-vision-vs-hdr10-which-is-best.html

     

    Dolby Vision is a proprietary end-to-end High Dynamic Range (HDR) format that covers content creation and playback through select cinemas, Ultra HD displays, and 4K titles. Like other HDR standards, the process uses expanded brightness to improve contrast between dark and light aspects of an image, bringing out deeper black levels and more realistic details in specular highlights — like the sun reflecting off of an ocean — in specially graded Dolby Vision material.

     

    The iPhone 12 Pro gets the ability to record 4K 10-bit HDR video. According to Apple, it is the very first smartphone that is capable of capturing Dolby Vision HDR.

    The iPhone 12 Pro takes two separate exposures and runs them through Apple’s custom image signal processor to create a histogram, a graph of the tonal values in each frame. The Dolby Vision metadata is then generated based on that histogram. In layman’s terms, it is essentially doing real-time grading while you are shooting. This is only possible due to the A14 Bionic chip.

     

    Dolby Vision also allows for 12-bit color, as opposed to HDR10’s and HDR10+’s 10-bit color. While no retail TV we’re aware of supports 12-bit color, Dolby claims it can be down-sampled in such a way as to render 10-bit color more accurately.

     


    Resources for more reading:

    https://www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    wolfcrow.com/say-hello-to-rec-2020-the-color-space-of-the-future/

     

    www.cnet.com/news/ultra-hd-tv-color-part-ii-the-future/

     

    Read more: Rec-2020 – TVs new color gamut standard used by Dolby Vision?
  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html

     

     

    Note. The Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. However, if you’re interested in identifying the brightest areas, you might want to look into other methods like thresholding, histogram analysis, or edge detection, through OpenCV for example.

     

    Here is an OpenCV example:

     

    # bottom left coordinates = 0,0
    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # recent OpenCV builds need this to read EXR
    
    import numpy as np
    import cv2
    
    # Load the HDR or EXR image as-is, without conversion to 8-bit
    image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)
    
    # Calculate the luminance from the HDR channels. OpenCV loads channels in
    # BGR order, so the usual RGB luminance weights are listed in reverse.
    luminance = np.dot(image[..., :3], [0.114, 0.587, 0.299])
    
    # Set a threshold value based on estimated EV
    threshold_value = 2.4  # Estimated threshold value based on 4.8 EV
    
    # Apply the threshold to identify bright areas. The luminance array holds the
    # calculated luminance of each pixel; threshold_value is a user-defined cutoff
    # separating "bright" from "dark" areas in terms of perceived luminance.
    thresholded = (luminance > threshold_value) * 255
    
    # Convert the thresholded image to uint8 for contour detection
    thresholded = thresholded.astype(np.uint8)
    
    # Find contours of the bright areas
    contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    
    # Create a list to store the bounding boxes of bright areas
    bright_areas = []
    
    # Iterate through contours and extract bounding boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
    
        # Adjust y-coordinate based on bottom-left origin
        y_bottom_left_origin = image.shape[0] - (y + h)
    
        # Store as (x1, y1, x2, y2)
        bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))
    
    # Print the identified bright areas
    print("Bright Areas (x1, y1, x2, y2):")
    for area in bright_areas:
        print(area)

     

    More details

     

    Luminance and Exposure in an EXR Image:

    • An EXR (Extended Dynamic Range) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
    • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
    • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

     

    The luminance line calculates the luminance of each pixel in the image using a weighted sum of the red, green, and blue channels. The three float weights correspond to 0.299 for red, 0.587 for green, and 0.114 for blue (listed in reverse in the code above because OpenCV loads channels in BGR order).

     

    These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

     

    Here’s the breakdown of the float values:

    • 0.299: Weight for the red channel.
    • 0.587: Weight for the green channel.
    • 0.114: Weight for the blue channel.

     

    The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.

     

    For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

     

    To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

    1. Linear and Logarithmic Scales:
      • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
      • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
    2. Conversion Between Scales:

      • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.

      • One common method is to use a power function. For instance, you can use a power function to convert EV to a linear intensity value.



       

      threshold_value = base_value * (2 ** EV)



      Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value.


    3. Choosing the Base Value:
      • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
      • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.

     

    Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

    Here’s an example of how you might determine a base_value to achieve this:

     

    # Define the dynamic range of the image in EV (kept for reference)
    dynamic_range = 12
    
    # Choose the desired number of EV above middle gray for thresholding
    desired_ev_above_middle_gray = 2
    
    # Middle gray is ~0.18 in linear values; each EV above it doubles the
    # luminance, following the earlier formula threshold = base_value * 2 ** EV
    threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)
    
    print("Threshold Value:", threshold_value)  # 0.18 * 4 = 0.72
    Read more: HDRI Median Cut plugin