COLOR

  • Sensitivity of human eye

    http://www.wikilectures.eu/index.php/Spectral_sensitivity_of_the_human_eye

    http://www.normankoren.com/Human_spectral_sensitivity_small.jpg

    The spectral sensitivity of the eye is influenced by light intensity, which determines the relative activity of the cone and rod cells; this is a defining characteristic of human vision. Sensitivity to individual colors, in other words to wavelengths of the light spectrum, is explained by the trichromatic RGB (red-green-blue) theory. This theory assumes that there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, and that their mutual interaction allows us to perceive all colors of the spectrum.

    http://weeklysciencequiz.blogspot.com/2013/01/violet-skies-are-for-birds.html

     

     

    Sensitivity of the human eye to light increases as light intensity decreases. In daylight conditions the cones respond, and the eye is most sensitive at 555 nm; in darkness the rods respond, and the eye is most sensitive at 507 nm.

    As light intensity decreases, cone function gives way to rod function: the lower light prompts an accumulation of rhodopsin, which activates the rods and allows them to respond to light stimuli of much lower intensity.

     

    https://www.nde-ed.org/EducationResources/CommunityCollege/PenetrantTest/Introduction/lightresponse.htm

    The three curves in the figure above show the normalized response of an average human eye to various amounts of ambient light. The shift in sensitivity occurs because two types of photoreceptors, called cones and rods, are responsible for the eye’s response to light. The curve on the right shows the eye’s response under normal lighting conditions; this is called the photopic response, and the cones respond to light under these conditions.

     

    As mentioned previously, cones are composed of three different photopigments that enable color perception. This curve peaks at 555 nanometers, which means that under normal lighting conditions, the eye is most sensitive to a yellowish-green color.

    When the light levels drop to near total darkness, the response of the eye changes significantly, as shown by the scotopic response curve on the left. At this level of light the rods are most active; the human eye is more sensitive to the light present and less sensitive to the range of color. Rods are highly sensitive to light but are composed of a single photopigment, which accounts for the loss in ability to discriminate color. At this very low light level, sensitivity to blue, violet, and ultraviolet is increased, while sensitivity to yellow and red is reduced.

    The heavier curve in the middle represents the eye’s response at the ambient light level found in a typical inspection booth. This curve peaks at 550 nanometers, which means the eye is most sensitive to a yellowish-green color at this light level. Fluorescent penetrant inspection materials are designed to fluoresce at around 550 nanometers to produce optimal sensitivity under dim lighting conditions.
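    As a rough illustration of the photopic/scotopic shift described above, the sketch below models both curves as Gaussians centered at 555 nm and 507 nm. This is a simplifying assumption for clarity: the real luminous-efficiency functions are tabulated CIE data, and the 80 nm width used here is arbitrary.

      import numpy as np

      def luminous_efficiency(wavelength_nm, peak_nm, width_nm=80.0):
          # Toy Gaussian model of a normalized luminous-efficiency curve.
          # The 80 nm width is an illustrative assumption, not a CIE value.
          return np.exp(-0.5 * ((wavelength_nm - peak_nm) / width_nm) ** 2)

      wavelengths = np.linspace(380, 780, 401)            # 1 nm steps
      photopic = luminous_efficiency(wavelengths, 555.0)  # cone-driven, daylight
      scotopic = luminous_efficiency(wavelengths, 507.0)  # rod-driven, darkness

      # At 650 nm (red) the scotopic value is well below the photopic one,
      # matching the reduced sensitivity to red in near-darkness.
      print(photopic[wavelengths == 650], scotopic[wavelengths == 650])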

     

    Read more: Sensitivity of human eye
  • Björn Ottosson – OKlch color space

    https://abhisaha.com/blog/interactive-post-oklch-color-space

     

    Björn Ottosson proposed OKlch in 2020 to create a color space that can closely mimic how color is perceived by the human eye, predicting perceived lightness, chroma, and hue.

     

    The OK in OKLCH comes from Oklab, the underlying color space, which Ottosson named for being an “OK” approximation of perceptual uniformity.

    • L: Lightness (the perceived brightness of the color)
    • C: Chroma (the intensity or saturation of the color)
    • H: Hue (the actual color, such as red, blue, green, etc.)
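    The C and H components are just the polar form of the a and b axes of the underlying Oklab space. A minimal sketch of that Oklab-to-OKLCH conversion (the sample Oklab values are made up for illustration):

      import math

      def oklab_to_oklch(L, a, b):
          # Oklab (L, a, b) -> OKLCH (L, C, H).
          C = math.hypot(a, b)                        # chroma: distance from gray
          H = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in degrees
          return L, C, H

      print(oklab_to_oklch(0.70, 0.10, 0.05))  # illustrative values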

     

     


    Read more: Björn Ottosson – OKlch color space
  • Eye retina’s Bipolar Cells, Horizontal Cells, and Photoreceptors

    In the retina, photoreceptors, bipolar cells, and horizontal cells work together to process visual information before it reaches the brain. Here’s how each cell type contributes to vision:

     

    1. Photoreceptors

    • Types: There are two main types of photoreceptors: rods and cones.
      • Rods: Specialized for low-light and peripheral vision; they help us see in dim lighting and detect motion.
      • Cones: Specialized for color and detail; they function best in bright light and are concentrated in the central retina (the fovea), allowing for high-resolution vision.
    • Function: Photoreceptors convert light into electrical signals. When light hits the retina, photoreceptors undergo a chemical change, triggering an electrical response that initiates the visual process. Rods and cones detect different intensities and colors, providing the foundation for brightness and color perception.

     

    2. Bipolar Cells

    • Function: Bipolar cells act as intermediaries, connecting photoreceptors to ganglion cells, which send signals to the brain. They receive input from photoreceptors and relay it to the retinal ganglion cells.
    • On and Off Bipolar Cells: Some bipolar cells are ON cells, responding when light is detected (depolarizing in light), and others are OFF cells, responding in darkness (depolarizing in the absence of light). This division allows for more precise contrast detection and the ability to distinguish light from dark areas in the visual field.
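    A toy way to picture the ON/OFF split: each pathway signals one sign of local contrast relative to the background. A sketch (values are arbitrary units, not physiological measurements):

      def bipolar_response(center, background):
          # ON cells depolarize when the center is brighter than the background;
          # OFF cells depolarize when it is darker.
          contrast = center - background
          on_signal = max(contrast, 0.0)    # ON pathway: light increments
          off_signal = max(-contrast, 0.0)  # OFF pathway: light decrements
          return on_signal, off_signal

      print(bipolar_response(0.8, 0.5))  # bright spot -> ON active
      print(bipolar_response(0.2, 0.5))  # dark spot  -> OFF active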

     

    3. Horizontal Cells

    • Function: Horizontal cells connect photoreceptors to each other and create lateral interactions between them. They integrate signals from multiple photoreceptors, allowing them to adjust the sensitivity of neighboring photoreceptors in response to varying light conditions.
    • Lateral Inhibition: This process improves visual contrast and sharpness by making the borders between light and dark areas more distinct, enhancing our ability to perceive edges and fine detail.
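    Lateral inhibition can be sketched as each photoreceptor’s output being reduced by a fraction of its neighbors’ average, which exaggerates the step at a light/dark border. A toy 1-D version (the 0.5 inhibition weight is illustrative, not physiological):

      import numpy as np

      def lateral_inhibition(signal, inhibition=0.5):
          # Subtract a fraction of each sample's neighbor average.
          padded = np.pad(signal, 1, mode="edge")
          neighbors = (padded[:-2] + padded[2:]) / 2.0
          return signal - inhibition * neighbors

      # Output overshoots on the bright side of the edge and undershoots
      # on the dark side, making the border more distinct.
      edge = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
      print(lateral_inhibition(edge))  # [ 0.5   0.5   0.75 -0.25  0.    0.  ]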

     

    These three cell types work together to let the retina preprocess visual information, emphasizing contrast and adjusting for different lighting conditions, before the signals are sent to the brain for further processing and interpretation.

     

     

    Read more: Eye retina’s Bipolar Cells, Horizontal Cells, and Photoreceptors
  • Image rendering bit depth

    The terms 8-bit, 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, measured in bits per pixel.

     

    https://en.wikipedia.org/wiki/Color_depth

     

    In color technology, color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

     

    When referring to a pixel, the concept can be defined as bits per pixel (bpp).

     

    When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.
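    The relationship between the two conventions is simple arithmetic: bits per pixel equals bits per component times the number of channels. For example:

      def bits_per_pixel(bits_per_component, channels=3):
          # bpp from bpc for an RGB image (3 channels).
          return bits_per_component * channels

      print(bits_per_pixel(8))   # 24 bpp: a classic "8-bit" RGB image
      print(bits_per_pixel(32))  # 96 bpp: 32-bit float RGB, as described below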

     

    Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

     

     

    Here’s a simple explanation of each.

     

    8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
    They can store around 5 stops of light, and each pixel carries a value from 0 (black) to 255 (white).
    As a comparison, DSLR cameras can capture ~12-15 stops of light and use RAW files to store the information.
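    The number of integer code values doubles with every added bit, which is why bit depth is often discussed alongside stops (each stop is a doubling of light). A quick sketch of the level counts referenced in this section:

      def code_values(bits):
          # Discrete levels representable at an integer bit depth.
          return 2 ** bits

      for bits in (8, 16):
          print(bits, "bits ->", code_values(bits), "levels,",
                "range 0 to", code_values(bits) - 1)
      # 8 bits -> 256 levels, range 0 to 255
      # 16 bits -> 65536 levels, range 0 to 65535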

     

    16-bit: This format uses 16 bits of fixed-point (integer) data to represent color values for each pixel. With 16 bits, you can have 65,536 discrete levels of color, allowing for relatively high precision and smooth gradients. However, it has a limited dynamic range, meaning it cannot accurately represent extremely bright or dark values. It is commonly used for regular images and textures.

     

    16-bit float: This format, commonly referred to as “half-precision,” uses the same 16 bits per value but stores them as floating-point numbers instead of fixed integers. Floating-point numbers allow for more precise calculations and a larger dynamic range. Here the 16 bits are split between a mantissa (the significant digits of the color value) and an exponent, which controls the range of magnitudes that can be represented. The 16-bit float format provides better accuracy and a wider dynamic range than regular 16-bit, making it useful for high-dynamic-range imaging (HDRI) and computations that require more precision.
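    NumPy’s float16 follows the standard half-precision layout (1 sign bit, 5 exponent bits, 10 mantissa bits), so it can be used to see both the wide range and the limited precision:

      import numpy as np

      x = np.float16(10000.0)
      print(x + np.float16(1.0))        # 10000.0: the +1 is lost to rounding,
                                        # since only ~3-4 significant digits fit
      print(np.finfo(np.float16).max)   # 65504.0: top of the dynamic range
      print(np.finfo(np.float16).eps)   # ~0.000977: relative precision step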

     

    32-bit: 32-bit images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits to represent each color value and offers the highest precision and dynamic range of the three options. With 32 bits you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.

     

    Bits and HDR coverage

    High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

     

    A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

     

    On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

     

    The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be prominent or perceptible; such images usually span around 8-10 stops of exposure.

     

    However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

     

    To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.
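    This falloff can be measured directly: the absolute quantization step of a half float grows with the magnitude of the stored value. A sketch using NumPy’s float16 as a stand-in for a 16-bit float channel, with exposures expressed as stops above 18% middle gray:

      import numpy as np

      middle_gray = 0.18
      for stops in (0, 4, 8, 12):
          value = middle_gray * 2.0 ** stops   # linear value at this exposure
          stored = np.float16(value)           # quantize to half precision
          print(stops, "stops:", abs(float(stored) - value))
      # The absolute error grows with exposure, which is where banding
      # in bright highlights comes from.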

     

    To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

     

    In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.

     

    HDRIs used for lighting purposes are usually slightly convolved (pre-blurred) to improve sampling speed and remove specular artifacts. For that reason, 16-bit float HDRIs tend to be the most used in CG work.

     

    Read more: Image rendering bit depth
  • What is a Gamut or Color Space and why do I need to know about CIE

    http://www.xdcam-user.com/2014/05/what-is-a-gamut-or-color-space-and-why-do-i-need-to-know-about-it/

     

    In video terms, gamut normally refers to the full range of colours and brightness that can be either captured or displayed.

     

    Generally speaking, all color gamut recommendations try to define a reasonable level of color representation based on the available technology and hardware. REC-601 represents the old TVs. REC-709 is currently the most widely distributed solution. P3 is mainly available in movie theaters and is now being adopted in some of the best new 4K HDR TVs. Rec2020 (a wider space than P3 that improves on visible color representation) and ACES (full coverage of visible color) are other common standards which are seeing major hardware development these days.

     

     

    To compare and visualize different solutions (across video and printing), most developers use the CIE color model chart as a reference.
    The CIE color model is a color space model created in 1931 by the International Commission on Illumination, known as the Commission Internationale de l’Eclairage (CIE). It is also known as the CIE XYZ color space or the CIE 1931 XYZ color space.
    This chart represents the first defined quantitative link between distributions of wavelengths in the electromagnetic visible spectrum and physiologically perceived colors in human color vision. In other words, it maps the range of color a typical human eye can perceive through visible light.

     

    Note that while human perception is quite wide, and generally speaking biased towards greens (we are apes, after all), the range of colors available in nature, generated through light reflection, tends to be a much smaller subset. This subset is defined by Pointer’s gamut.

     

    In short: a color gamut is a representation of color coverage, used to describe the data stored in images relative to the available hardware and viewing technologies.
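    On the CIE xy chart, each RGB gamut is a triangle whose corners are the chromaticities of its primaries, so asking whether a color fits a gamut is a point-in-triangle test. A sketch using the published Rec.709 and Rec.2020 primaries (the test point is arbitrary):

      def _sign(p, a, b):
          return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

      def inside_gamut(xy, primaries):
          # True if the CIE xy point lies inside the triangle of R, G, B primaries.
          r, g, b = primaries
          d1, d2, d3 = _sign(xy, r, g), _sign(xy, g, b), _sign(xy, b, r)
          has_neg = d1 < 0 or d2 < 0 or d3 < 0
          has_pos = d1 > 0 or d2 > 0 or d3 > 0
          return not (has_neg and has_pos)

      REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # R, G, B
      REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

      point = (0.25, 0.65)                 # a saturated green
      print(inside_gamut(point, REC709))   # False: outside Rec.709
      print(inside_gamut(point, REC2020))  # True: covered by Rec.2020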

     

    Camera color encoding from
    https://www.slideshare.net/hpduiker/acescg-a-common-color-encoding-for-visual-effects-applications

     

    CIE 1976

    http://bernardsmith.eu/computatrum/scan_and_restore_archive_and_print/scanning/

     

    https://store.yujiintl.com/blogs/high-cri-led/understanding-cie1931-and-cie-1976

     

    The CIE 1931 standard has been replaced by a CIE 1976 standard. Below we can see the significance of this.

     

    People have observed that the biggest issue with CIE 1931 is its lack of perceptual uniformity in chromaticity: the color space, plotted in rectangular coordinates, is not visually uniform.

     

    The CIE 1976 standard (also called CIELUV) was created by the CIE in 1976. It was put forward in an attempt to provide a more uniform color spacing than CIE 1931 for colors at approximately the same luminance.

     

    The CIE 1976 standard colour space is more linear, and variations in perceived colour between different people have also been reduced. The disproportionately large green-turquoise area of CIE 1931, which cannot be generated with existing computer screens, has been reduced.
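    The 1976 u'v' coordinates are a direct projective transform of the 1931 xy coordinates, chosen so that equal distances on the chart correspond more closely to equal perceived colour differences:

      def xy_to_uv_prime(x, y):
          # CIE 1931 (x, y) -> CIE 1976 (u', v') chromaticity coordinates.
          denom = -2.0 * x + 12.0 * y + 3.0
          return 4.0 * x / denom, 9.0 * y / denom

      # The D65 white point maps from (0.3127, 0.3290) to roughly (0.1978, 0.4683):
      print(xy_to_uv_prime(0.3127, 0.3290))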

     

    If we move from CIE 1931 to the CIE 1976 standard colour space we can see that the improvements made in the gamut for the “new” iPad screen (as compared to the “old” iPad 2) are more evident in the CIE 1976 colour space than in the CIE 1931 colour space, particularly in the blues from aqua to deep blue.

     

     

    https://dot-color.com/2012/08/14/color-space-confusion/

    Despite its age, CIE 1931, named for the year of its adoption, remains a well-worn and familiar shorthand throughout the display industry. CIE 1931 is the primary language of customers. When a customer says that their current display “can do 72% of NTSC,” they implicitly mean 72% of NTSC 1953 color gamut as mapped against CIE 1931.

    Read more: What is a Gamut or Color Space and why do I need to know about CIE

LIGHTING