COLOR

  • OLED vs QLED – What TV is better?

     

    LG, Philips, Panasonic and Sony sell TVs using the OLED system.
    OLED stands for “organic light emitting diode.”
    It is a fundamentally different technology from LCD, the major type of TV today.
    OLED is “emissive,” meaning the pixels emit their own light.

     

    Samsung is branding its best TVs with a new acronym: “QLED.”
    QLED (according to Samsung) stands for “quantum dot LED TV.”
    It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.”
    QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.

     

    OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

    QLED, as an improvement over conventional LED LCD, significantly improves the picture quality. QLED can reportedly produce an even wider range of colors than OLED, which says something about this newer tech. QLED is also claimed to produce up to 40% higher luminance efficiency than OLED technology, and many tests conclude that QLED is far more efficient in terms of power consumption.

     

    When analyzing a TV’s color, it may be beneficial to consider at least three elements:
    “Color Depth”, “Color Gamut”, and “Dynamic Range”.

     

    Color Depth (or “Bit-Depth”, e.g. 8-bit, 10-bit, 12-bit) determines how many distinct color variations (tones/shades) can be viewed on a given display.
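
    As a quick illustration, and assuming a standard three-channel RGB display, bit depth translates into distinct tones per channel (and total colors) like this:

    # Tones per channel and total colors for common bit depths
    for bits in (8, 10, 12):
        tones_per_channel = 2 ** bits          # e.g. 8-bit -> 256 tones
        total_colors = tones_per_channel ** 3  # three channels: R, G, B
        print(f"{bits}-bit: {tones_per_channel} tones/channel, "
              f"{total_colors:,} total colors")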

     

    Color Gamut (e.g. WCG, wide color gamut) determines which specific colors can be displayed from a given “Color Space” (Rec.709, Rec.2020, DCI-P3), i.e. the color range.
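
    For a rough, illustrative comparison of gamut sizes, one can compute the area of each space’s primary triangle in CIE 1931 xy chromaticity coordinates (a crude proxy, since xy area is not perceptually uniform); the primaries below are the published values for each standard:

    # Rough gamut-size comparison: area of each color space's primary triangle
    # in CIE 1931 xy chromaticity coordinates (shoelace formula for a triangle)
    def triangle_area(p):
        (x1, y1), (x2, y2), (x3, y3) = p
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

    primaries = {
        "Rec.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
        "DCI-P3":   [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
        "Rec.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
    }

    for name, p in primaries.items():
        print(f"{name}: xy area = {triangle_area(p):.4f}")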

     

    Dynamic Range (SDR, HDR) determines the luminosity range of a specific color – from its darkest shade (or tone) to its brightest.

     

    The overall brightness range of a color will be determined by a display’s “contrast ratio”, that is, the ratio of luminance between the darkest black that can be produced and the brightest white.
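
    A minimal sketch of that ratio, with placeholder luminance values in nits (cd/m²):

    # Contrast ratio = brightest white luminance / darkest black luminance
    white_nits = 1000.0   # placeholder peak white, in nits
    black_nits = 0.0005   # placeholder black level; OLED approaches 0 per pixel
    print(f"Contrast ratio: {white_nits / black_nits:,.0f}:1")  # 2,000,000:1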

     

    Color Volume is the “Color Gamut” + the “Dynamic/Luminosity Range”.
    A TV’s Color Volume will not only determine which specific colors can be displayed (the color range) but also each color’s luminosity range, which has an effect on its “brightness” and “colorfulness” (intensity and saturation).

     

    The better the colour volume in a TV, the closer to life the colours appear.

     

    QLED TV can reportedly express nearly all of the colours in the DCI-P3 colour space and, of those colours, 100% of the colour volume, thereby producing an incredible range of colours.

     

    With OLED TV, when the image is very bright, the percentage of the colour volume produced by the TV drops significantly. The colours get washed out, down to around 70% colour volume, and picture quality drops with them.

     

    Note. OLED TV uses organic material, so it may lose colour expression as it ages.

     

    Resources for more reading and comparison:

    www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    www.newtechnologytv.com/qled-vs-oled/

     

    news.samsung.com/za/qled-tv-vs-oled-tv

     

    www.cnet.com/news/qled-vs-oled-samsungs-tv-tech-and-lgs-tv-tech-are-not-the-same/

     

  • Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking

    https://bottosson.github.io/misc/colorpicker

     

    https://bottosson.github.io/posts/colorpicker/

     

    https://www.smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/

     

    One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also the non-linear coding called gamma. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.
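
    A small sketch of what the gamma part means in practice: naively averaging the gamma-encoded sRGB values of blue and white gives a different (darker) midpoint than mixing in linear light and re-encoding, which is one ingredient of the odd-looking gradients (the remaining purple cast is a perceptual hue issue that linear mixing alone doesn’t fix):

    # sRGB transfer functions (IEC 61966-2-1)
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    blue, white = (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)

    # Midpoint computed directly on gamma-encoded values (what naive lerps do)
    naive_mid = [(a + b) / 2 for a, b in zip(blue, white)]

    # Midpoint mixed in linear light, then re-encoded for display
    linear_mid = [linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
                  for a, b in zip(blue, white)]

    print("naive sRGB midpoint: ", [round(c, 3) for c in naive_mid])
    print("linear-light midpoint:", [round(c, 3) for c in linear_mid])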

     

     

    There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. The dress is a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

     

    I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation prediction from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them.

     

    IPT was designed to have better hue uniformity. In experiments, they asked people to match light and dark colors, saturated and unsaturated colors, which resulted in a dataset for which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.

     

    In the Munsell color system, colors are described with three parameters designed to match the perceived appearance of colors: Hue, Chroma and Value. The parameters are designed to be independent and to each have a uniform scale, which results in a color solid with an irregular shape. Modern color spaces and models, such as CIELAB, CAM16 and Björn Ottosson’s own Oklab, are very similar in their construction.

     

     

    By far the most used color spaces today for color picking are HSL and HSV, two representations introduced in the classic 1978 paper “Color Spaces for Computer Graphics”. HSL and HSV were designed to roughly correlate with perceptual color properties while being very simple and cheap to compute.

     

    Today HSL and HSV are most commonly used together with the sRGB color space.

     

     

    One of the main advantages of HSL and HSV over the different Lab color spaces is that they map the sRGB gamut to a cylinder. This makes them easy to use since all parameters can be changed independently, without the risk of creating colors outside of the target gamut.
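
    A small illustration using Python’s built-in colorsys module (which implements the classic HSV/HSL transforms over unit RGB values): sweeping hue at full saturation and value always lands back inside the sRGB cube, so no out-of-gamut colors can be produced:

    import colorsys

    # Sweep hue at full saturation/value; every result stays inside [0, 1]^3,
    # i.e. inside the sRGB gamut, because HSV is just a remapping of the RGB cube.
    for h10 in range(10):
        h = h10 / 10
        r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
        assert 0.0 <= min(r, g, b) and max(r, g, b) <= 1.0
        print(f"h={h:.1f} -> rgb=({r:.2f}, {g:.2f}, {b:.2f})")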

     

     

    The main drawback, on the other hand, is that their properties don’t match human perception particularly well.
    Reconciling these conflicting goals perfectly isn’t possible, but given that HSV and HSL don’t use anything derived from experiments relating to human perception, creating something that makes a better tradeoff does not seem unreasonable.

     

     

    With this new lightness estimate [derived earlier in the post], we are ready to look into the construction of Okhsv and Okhsl.

     

     

  • Anders Langlands – Render Color Spaces

    https://www.colour-science.org/anders-langlands/

     

    This page compares images rendered in Arnold using spectral rendering and different sets of colourspace primaries: Rec.709, Rec.2020, ACES and DCI-P3. The SPD data for the GretagMacbeth ColorChecker are the measurements of Noboru Ohta, taken from Mansencal, Mauderer and Parsons (2014) colour-science.org.

     


LIGHTING

  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html

     

     

    Note. The Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. If you’re interested in identifying the brightest areas, you might want to look into other methods such as thresholding, histogram analysis, or edge detection, through OpenCV for example.

     

    Here is an OpenCV example:

     

    # bottom left coordinates = 0,0
    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # may be required for EXR support in some OpenCV builds

    import numpy as np
    import cv2

    # Load the HDR or EXR image as-is, without modification
    image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

    # Calculate the luminance from the HDR channels.
    # Note: OpenCV loads channels in BGR order, so the Rec.601 weights
    # (0.299 R, 0.587 G, 0.114 B) are applied in reverse.
    luminance = np.dot(image[..., :3], [0.114, 0.587, 0.299])

    # Set a threshold value based on estimated EV
    threshold_value = 2.4  # Estimated threshold value based on 4.8 EV

    # Apply the threshold to identify bright areas.
    # threshold_value is a user-defined cutoff separating "bright" from "dark"
    # areas in terms of calculated luminance.
    thresholded = (luminance > threshold_value) * 255

    # Convert the thresholded image to uint8 for contour detection
    thresholded = thresholded.astype(np.uint8)

    # Find contours of the bright areas
    contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Create a list to store the bounding boxes of bright areas
    bright_areas = []

    # Iterate through contours and extract bounding boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)

        # Adjust y-coordinate based on bottom-left origin
        y_bottom_left_origin = image.shape[0] - (y + h)

        # Store as (x1, y1, x2, y2)
        bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))

    # Print the identified bright areas
    print("Bright Areas (x1, y1, x2, y2):")
    for area in bright_areas:
        print(area)

     

    More details

     

    Luminance and Exposure in an EXR Image:

    • The EXR (extended dynamic range) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
    • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
    • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

     

    The luminance line calculates the luminance of each pixel in the image using a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used to perform this calculation (applied in reverse in the OpenCV example above, since OpenCV loads channels in BGR order).

     

    These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

     

    Here’s the breakdown of the float values:

    • 0.299: Weight for the red channel.
    • 0.587: Weight for the green channel.
    • 0.114: Weight for the blue channel.

     

    The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.
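
    As a tiny worked example of the weighting, pure red, green, and blue each map to a different gray even though each channel is “fully on”:

    # Rec.601 luma weights: the eye is most sensitive to green, least to blue
    weights = (0.299, 0.587, 0.114)

    for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
        luma = sum(w * c for w, c in zip(weights, rgb))
        print(f"pure {name}: luma = {luma}")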

     

    For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

     

    To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

    1. Linear and Logarithmic Scales:
      • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
      • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
    2. Conversion Between Scales:

      • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.

      • One common method is to use a power of two, since each EV step doubles or halves the linear intensity. For instance:



       

      threshold_value = base_value * (2 ** EV)



      Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV converts the logarithmic EV to a linear intensity value (a short numeric check of this relationship follows the list below).


    3. Choosing the Base Value:
      • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
      • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.
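
    As a quick numeric check of that EV-to-linear relationship (base_value here is middle gray, chosen only as an illustrative anchor):

    # Each +1 EV doubles the linear threshold; each -1 EV halves it
    base_value = 0.18  # middle gray, used as an arbitrary anchor

    for ev in [-2, -1, 0, 1, 2, 4.8]:
        threshold_value = base_value * (2 ** ev)
        print(f"EV {ev:+}: threshold = {threshold_value:.4f}")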

     

    Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

    Here’s an example of how you might determine a base_value to achieve this:

     

    # Define the dynamic range of the image in EV (context for picking thresholds)
    dynamic_range = 12

    # Choose the desired number of EV above middle gray for thresholding
    desired_ev_above_middle_gray = 2

    # Middle gray (0.18) doubled once per EV above it
    threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

    print("Threshold Value:", threshold_value)  # 0.72