COLOR

  • Björn Ottosson – How software gets color wrong

    https://bottosson.github.io/posts/colorwrong/

     

    Most software around us today is decent at accurately displaying colors. Processing of colors is another story, unfortunately, and is often done badly.

     

    To understand what the problem is, let’s start with an example of three ways of blending green and magenta:

    • Perceptual blend – A smooth transition using a model designed to mimic human perception of color. The blending is done so that the perceived brightness and color varies smoothly and evenly.
    • Linear blend – A model for blending color based on how light behaves physically. This type of blending can occur in many ways naturally, for example when colors are blended together by focus blur in a camera or when viewing a pattern of two colors at a distance.
    • sRGB blend – This is how colors would normally be blended in computer software, using sRGB to represent the colors. 

     

    Let’s look at some more examples of color blending to see how these problems surface in practice. The examples use strong colors, since that makes the differences more pronounced, and use the same three ways of blending colors as the first example.

     

    Instead of making it as easy as possible to work with color, most software makes it unnecessarily hard by doing image processing with representations not designed for it. Approximating the physical behavior of light with linear RGB is an easy first step, but more work is needed to create image representations tailored to image processing and human perception.
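
    As a rough illustration of the difference (a minimal Python sketch, not code from the article; the helper names are made up), the following blends two sRGB colors both directly on the encoded values and in linear light by decoding with the sRGB transfer function first:

    import numpy as np

    def srgb_to_linear(c):
        # Inverse sRGB transfer function (per channel, values in 0..1)
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        # Forward sRGB transfer function
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    def blend_srgb(a, b, t):
        # Naive blend: interpolate the encoded sRGB values directly
        return (1 - t) * np.asarray(a) + t * np.asarray(b)

    def blend_linear(a, b, t):
        # Physically based blend: decode to linear light, interpolate, re-encode
        return linear_to_srgb((1 - t) * srgb_to_linear(a) + t * srgb_to_linear(b))

    green, magenta = [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]
    print("sRGB blend:  ", blend_srgb(green, magenta, 0.5))    # darker midpoint
    print("linear blend:", blend_linear(green, magenta, 0.5))  # brighter, closer to how light mixes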

     

    Also see:

    https://www.pixelsham.com/2022/04/05/bjorn-ottosson-okhsv-and-okhsl-two-new-color-spaces-for-color-picking/

    Read more: Björn Ottosson – How software gets color wrong
  • What is the resolution and view coverage of the human eye? And at what distance is a TV best viewed?

    https://www.discovery.com/science/mexapixels-in-human-eye

    About 576 megapixels for the entire field of view.

     

    Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. Assuming a visual acuity of about 0.3 arc-minutes per pixel, the number of pixels would be:
    90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).

     

    At any one moment you do not actually perceive that many pixels, because your eye moves around the scene to take in the detail you want. But the human eye covers a larger field of view, close to 180 degrees. Let’s be conservative and use 120 degrees for the field of view. Then we would see:

    120 * 120 * 60 * 60 / (0.3 * 0.3) = 576,000,000 pixels (576 megapixels).
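
    As a quick check, the same arithmetic can be wrapped in a small Python helper (a minimal sketch; the function name is just illustrative, and 0.3 arc-minutes per pixel is the acuity assumed in the calculations above):

    def eye_megapixels(fov_deg_h, fov_deg_v, acuity_arcmin=0.3):
        # Convert each field-of-view axis to arc-minutes, divide by the acuity
        # to get a pixel count per axis, then multiply the two axes together
        px_h = fov_deg_h * 60 / acuity_arcmin
        px_v = fov_deg_v * 60 / acuity_arcmin
        return px_h * px_v / 1e6

    print(eye_megapixels(90, 90))    # ~324 megapixels for a 90 x 90 degree window
    print(eye_megapixels(120, 120))  # ~576 megapixels for a conservative 120 degree field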

    Or, by another estimate:

    7 megapixels for the 2 degree focus arc… + 1 megapixel for the rest.

    https://clarkvision.com/articles/eye-resolution.html

     

    Details in the post

    Read more: What is the resolution and view coverage of the human eye? And at what distance is a TV best viewed?
  • No one could see the colour blue until modern times

    https://www.businessinsider.com/what-is-blue-and-how-do-we-see-color-2015-2

     

    The way that humans see the world… until we have a way to describe something, even something as fundamental as a colour, we may not even notice that it’s there.

     

    Ancient languages didn’t have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew, not Icelandic. And without a word for the colour, there’s evidence that people may not have seen it at all.

    https://www.wnycstudios.org/story/211119-colors

     

    Every language first had a word for black and for white, or dark and light. The next word for a colour to come into existence — in every language studied around the world — was red, the colour of blood and wine.

    After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colours to appear in every language is blue.

     

    The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.

    https://mymodernmet.com/shades-of-blue-color-history/

     

    Considered to be the first ever synthetically produced color pigment, Egyptian blue (also known as cuprorivaite) was created around 2,200 B.C. It was made from ground limestone mixed with sand and a copper-containing mineral, such as azurite or malachite, which was then heated between 1470 and 1650°F. The result was an opaque blue glass which then had to be crushed and combined with thickening agents such as egg whites to create a long-lasting paint or glaze.

     

     

    If you think about it, blue doesn’t appear much in nature — there are hardly any animals with blue pigments (except for one butterfly, the obrina olivewing; all other animals generate blue through light scattering), blue eyes are rare (also blue through light scattering), and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue?

     

     

    So before we had a word for it, did people not naturally see blue? Do you really see something if you don’t have a word for it?

     

    A researcher named Jules Davidoff traveled to Namibia to investigate this, where he conducted an experiment with the Himba tribe, who speak a language that has no word for blue or distinction between blue and green. When shown a circle with 11 green squares and one blue, they couldn’t pick out which one was different from the others.

     

    When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?

     

    Davidoff says that without a word for a colour, without a way of identifying it as different, it’s much harder for us to notice what’s unique about it — even though our eyes are physically seeing the blocks in the same way.

     

    Further research led to wider discussions about color perception in humans. Everything that we make is based on the fact that humans are trichromatic: a television uses only three colors, and our color printers are likewise built around a small set of inks. But some people, and in particular some women, seemed to be more sensitive to color differences… mainly because they are simply more aware, or because of the job that they do.

    Eventually this led to the discovery of a small percentage of the population, referred to as tetrachromats, who developed an extra cone with sensitivity to yellow, likely due to a genetic mutation.

    The interesting detail is that even among tetrachromats, only those who had a reason to develop, label, and work with the extra color sensitivity actually learned to use their innate ability.

     

    So before blue became a common concept, maybe humans saw it. But it seems they didn’t know they were seeing it.

    If you can see something yet don’t notice it, does it exist? Did colours come into existence over time? Not technically, but our ability to notice them… may have…

     

    Read more: No one could see the colour blue until modern times

LIGHTING

  • Tracing Spherical harmonics and how Weta used them in production

     

    A way to approximate complex lighting in ultra-realistic renders.

    All SH lighting techniques involve replacing parts of standard lighting equations with spherical functions that have been projected into frequency space using the spherical harmonics as a basis.

    http://www.cs.columbia.edu/~cs4162/slides/spherical-harmonic-lighting.pdf
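
    As a toy illustration of that projection step (a minimal sketch, not Weta’s implementation; the helper names are made up), the following Python code projects a simple directional function onto the first two SH bands with uniform Monte Carlo sampling over the sphere, then reconstructs it:

    import numpy as np

    def sh_basis_band01(d):
        # Real spherical harmonics for bands 0 and 1, evaluated at a unit direction d = (x, y, z)
        x, y, z = d
        return np.array([
            0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x,    # Y_1^1
        ])

    def project_onto_sh(f, n_samples=10000, seed=0):
        # Monte Carlo projection: c_i = (4*pi / N) * sum_j f(d_j) * Y_i(d_j),
        # with directions d_j drawn uniformly over the unit sphere
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n_samples, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        coeffs = np.zeros(4)
        for d in dirs:
            coeffs += f(d) * sh_basis_band01(d)
        return coeffs * (4.0 * np.pi / n_samples)

    def reconstruct(coeffs, d):
        # Band-limited (low-frequency) reconstruction of f in direction d
        return float(coeffs @ sh_basis_band01(d))

    # Toy "environment light": brightest toward +z, black toward -z
    env = lambda d: max(d[2], 0.0)
    coeffs = project_onto_sh(env)
    print(coeffs)
    print(reconstruct(coeffs, np.array([0.0, 0.0, 1.0])))  # approximate intensity looking straight up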

     

    Spherical harmonics as used at Weta Digital

    https://www.fxguide.com/fxfeatured/the-science-of-spherical-harmonics-at-weta-digital/

    Read more: Tracing Spherical harmonics and how Weta used them in production
  • Rendering – BRDF – Bidirectional reflectance distribution function

    http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function

    The bidirectional reflectance distribution function is a four-dimensional function that defines how light is reflected at an opaque surface.

    http://www.cs.ucla.edu/~zhu/tutorial/An_Introduction_to_BRDF-Based_Lighting.pdf

    In general, when light interacts with matter, a complicated light-matter dynamic occurs. This interaction depends on the physical characteristics of the light as well as the physical composition and characteristics of the matter.

    That is, some of the incident light is reflected, some of the light is transmitted, and another portion of the light is absorbed by the medium itself.

    A BRDF describes how much light is reflected when light makes contact with a certain material. Similarly, a BTDF (Bi-directional Transmission Distribution Function) describes how much light is transmitted when light makes contact with a certain material.
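
    To make the idea concrete, here is a minimal Python sketch (only an illustration, not the model used in the linked references; all names are made up) that evaluates a simple analytic BRDF made of a Lambertian diffuse term plus a normalized Blinn-Phong specular lobe:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def brdf_lambert_blinn_phong(wi, wo, n, albedo, specular, shininess):
        # wi: unit direction toward the light, wo: unit direction toward the viewer,
        # n: surface normal, albedo/specular: reflectance colors, shininess: Blinn-Phong exponent
        wi, wo, n = normalize(wi), normalize(wo), normalize(n)
        if np.dot(n, wi) <= 0 or np.dot(n, wo) <= 0:
            return np.zeros(3)  # light or viewer below the surface: nothing is reflected
        # Lambertian diffuse term: constant over the hemisphere
        diffuse = np.asarray(albedo) / np.pi
        # Normalized Blinn-Phong specular lobe around the half vector
        h = normalize(wi + wo)
        spec = np.asarray(specular) * (shininess + 2) / (2 * np.pi) * np.dot(n, h) ** shininess
        return diffuse + spec

    # Reflected radiance from one light: L_o = f(wi, wo) * L_i * cos(theta_i)
    n = np.array([0.0, 0.0, 1.0])
    wi = normalize(np.array([0.0, 0.5, 1.0]))
    wo = normalize(np.array([0.0, -0.5, 1.0]))
    f = brdf_lambert_blinn_phong(wi, wo, n, albedo=[0.8, 0.2, 0.2], specular=[0.04] * 3, shininess=64)
    print(f * 1.0 * np.dot(n, wi))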

    http://www.cs.princeton.edu/~smr/cs348c-97/surveypaper.html

    It is difficult to establish exactly how far one should go in elaborating the surface model. A truly complete representation of the reflective behavior of a surface might take into account such phenomena as polarization, scattering, fluorescence, and phosphorescence, all of which might vary with position on the surface. Therefore, the variables in this complete function would be:

    • incoming and outgoing angle
    • incoming and outgoing wavelength
    • incoming and outgoing polarization (both linear and circular)
    • incoming and outgoing position (which might differ due to subsurface scattering)
    • time delay between the incoming and outgoing light ray

    Read more: Rendering – BRDF – Bidirectional reflectance distribution function
  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html

     

     

    Note: the Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. However, if you’re interested in identifying the brightest areas, you might want to look into other methods such as thresholding, histogram analysis, or edge detection, through OpenCV for example.

     

    Here is an OpenCV example:

     

    # bottom left coordinates = 0,0
    import numpy as np
    import cv2

    # Load the HDR or EXR image as-is, without clipping or tone-mapping
    # (reading EXR files may require the OPENCV_IO_ENABLE_OPENEXR=1 environment variable)
    image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

    # OpenCV loads channels in BGR order; reverse them to RGB before weighting
    rgb = image[..., :3][..., ::-1]

    # Calculate the luminance as a weighted sum of the R, G and B channels
    luminance = np.dot(rgb, [0.299, 0.587, 0.114])

    # Set a threshold value based on estimated EV
    threshold_value = 2.4  # Estimated threshold value based on 4.8 EV

    # Apply the threshold to identify bright areas.
    # The luminance array contains the calculated luminance value for each pixel;
    # threshold_value is a user-defined cutoff separating "bright" from "dark" areas.
    thresholded = (luminance > threshold_value) * 255

    # Convert the thresholded image to uint8 for contour detection
    thresholded = thresholded.astype(np.uint8)

    # Find contours of the bright areas
    contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Create a list to store the bounding boxes of bright areas
    bright_areas = []

    # Iterate through contours and extract bounding boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)

        # Adjust the y-coordinate for a bottom-left origin
        y_bottom_left_origin = image.shape[0] - (y + h)

        # Store as (x1, y1, x2, y2)
        bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))

    # Print the identified bright areas
    print("Bright Areas (x1, y1, x2, y2):")
    for area in bright_areas:
        print(area)

     

    More details

     

    Luminance and Exposure in an EXR Image:

    • An EXR (Extended Dynamic Range) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
    • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
    • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

     

    The luminance line is calculating the luminance of each pixel in the image using a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used to perform this calculation.

     

    These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

     

    Here’s the breakdown of the float values:

    • 0.299: Weight for the red channel.
    • 0.587: Weight for the green channel.
    • 0.114: Weight for the blue channel.

     

    The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.

     

    For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

     

    To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

    1. Linear and Logarithmic Scales:
      • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
      • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
    2. Conversion Between Scales:

      • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.

      • One common method is to use a power function. For instance, you can use a power function to convert EV to a linear intensity value.



       

      threshold_value = base_value * (2 ** EV)



      Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value.


    3. Choosing the Base Value:
      • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
      • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.

     

    Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

    Here’s an example of how you might determine a base_value to achieve this:

     

    # Define the dynamic range of the image in EV (used here only as context for choosing the offset)
    dynamic_range = 12

    # Choose the desired number of EV above middle gray for thresholding
    desired_ev_above_middle_gray = 2

    # Calculate the threshold value as middle gray scaled by 2 ** EV,
    # following threshold_value = base_value * (2 ** EV) from above
    threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

    print("Threshold Value:", threshold_value)  # 0.72
    Read more: HDRI Median Cut plugin
