COLOR

  • Polarised vs unpolarized filtering

    A light wave that is vibrating in more than one plane is referred to as unpolarized light. … Polarized light waves are light waves in which the vibrations occur in a single plane. The process of transforming unpolarized light into polarized light is known as polarization.

    en.wikipedia.org/wiki/Polarizing_filter_(photography)

     

    Light reflected from a non-metallic surface becomes polarized; this effect is maximum at Brewster’s angle, about 56° from the vertical for common glass.
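
    As a quick check of that figure, Brewster's angle depends only on the refractive indices of the two media: θB = arctan(n2/n1), measured from the surface normal. A minimal sketch (the 1.5 index for common glass is a typical textbook value, not something stated above):

    import math

    def brewster_angle(n1: float, n2: float) -> float:
        """Angle of incidence (degrees, from the surface normal) at which
        reflected light is fully polarized."""
        return math.degrees(math.atan2(n2, n1))

    # Air (n ~ 1.0) to common glass (n ~ 1.5): about 56.3 degrees
    print(brewster_angle(1.0, 1.5))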

     

    A polarizer rotated to pass only light polarized in the direction perpendicular to the reflected light will absorb much of it. This absorption allows glare reflected from, for example, a body of water or a road to be reduced. Reflections from shiny surfaces (e.g. vegetation, sweaty skin, water surfaces, glass) are also reduced. This allows the natural color and detail of what is beneath to come through. Reflections from a window into a dark interior can be much reduced, allowing it to be seen through. (The same effects are available for vision by using polarizing sunglasses.)
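
    For light that is already polarized, the fraction an ideal polarizer passes follows Malus's law, I = I0 · cos²θ, where θ is the angle between the light's polarization plane and the filter's transmission axis; this is why rotating the filter dials the glare reduction in and out. A minimal sketch (the helper name is illustrative):

    import math

    def transmitted_fraction(angle_deg: float) -> float:
        """Malus's law: fraction of already-polarized light passed by an ideal
        polarizer rotated angle_deg away from the light's polarization plane."""
        return math.cos(math.radians(angle_deg)) ** 2

    print(transmitted_fraction(0))   # 1.0   aligned: everything passes
    print(transmitted_fraction(90))  # ~0.0  crossed: the reflected glare is absorbed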

     

    www.physicsclassroom.com/class/light/u12l1e.cfm

     

    Some of the light coming from the sky is polarized (bees use this phenomenon for navigation). The electrons in the air molecules cause a scattering of sunlight in all directions. This explains why the sky is not dark during the day. But when looked at from the sides, the light emitted from a specific electron is totally polarized.[3] Hence, a picture taken in a direction at 90 degrees from the sun can take advantage of this polarization. Use of a polarizing filter, in the correct direction, will filter out the polarized component of skylight, darkening the sky; the landscape below it, and clouds, will be less affected, giving a photograph with a darker and more dramatic sky, and emphasizing the clouds.

     

    There are two types of polarizing filters readily available, linear and “circular”, which have exactly the same effect photographically. But the metering and auto-focus sensors in certain cameras, including virtually all auto-focus SLRs, will not work properly with linear polarizers because the beam splitters used to split off the light for focusing and metering are polarization-dependent.

     

    Polarizing filters reduce the light passed through to the film or sensor by about one to three stops (2–8×) depending on how much of the light is polarized at the filter angle selected. Auto-exposure cameras will adjust for this by widening the aperture, lengthening the time the shutter is open, and/or increasing the ASA/ISO speed of the camera.
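
    Those stop and factor figures are just powers of two: each stop halves the light, so a 2–8× reduction corresponds to 1–3 stops. A minimal sketch of the conversion (illustrative helpers, not tied to any camera API):

    import math

    def stops_to_factor(stops: float) -> float:
        """Light-reduction factor for a given number of stops (each stop halves the light)."""
        return 2.0 ** stops

    def factor_to_stops(factor: float) -> float:
        """Stops of exposure compensation needed for a given filter factor."""
        return math.log2(factor)

    print(stops_to_factor(3))   # 8.0 -> an 8x light reduction is 3 stops
    print(factor_to_stops(2))   # 1.0 -> a 2x light reduction is 1 stop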

     

    www.adorama.com/alc/nd-filter-vs-polarizer-what%25e2%2580%2599s-the-difference

     

    Neutral Density (ND) filters help control image exposure by reducing the light that enters the camera so that you can have more control of your depth of field and shutter speed. Polarizers or polarizing filters work in a similar way, but the difference is that they selectively let light waves of a certain polarization pass through. This effect helps create more vivid colors in an image, as well as manage glare and reflections from water surfaces. Both are regarded as some of the best filters for landscape and travel photography as they reduce the dynamic range in high-contrast images, thus enabling photographers to capture more realistic and dramatic sceneries.

     

    shopfelixgray.com/blog/polarized-vs-non-polarized-sunglasses/

     

    www.eyebuydirect.com/blog/difference-polarized-nonpolarized-sunglasses/

     

    Read more: Polarised vs unpolarized filtering
  • No one could see the colour blue until modern times

    https://www.businessinsider.com/what-is-blue-and-how-do-we-see-color-2015-2

     

    The way that humans see the world… until we have a way to describe something, even something as fundamental as a colour, we may not even notice that it’s there.

     

    Ancient languages didn’t have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew, not Icelandic cultures. And without a word for the colour, there’s evidence that they may not have seen it at all.

    https://www.wnycstudios.org/story/211119-colors

     

    Every language first had a word for black and for white, or dark and light. The next word for a colour to come into existence — in every language studied around the world — was red, the colour of blood and wine.

    After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colours to appear in every language is blue.

     

    The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.

    https://mymodernmet.com/shades-of-blue-color-history/

     

    Considered to be the first ever synthetically produced color pigment, Egyptian blue (also known as cuprorivaite) was created around 2,200 B.C. It was made from ground limestone mixed with sand and a copper-containing mineral, such as azurite or malachite, which was then heated between 1470 and 1650°F. The result was an opaque blue glass which then had to be crushed and combined with thickening agents such as egg whites to create a long-lasting paint or glaze.

     

     

    If you think about it, blue doesn’t appear much in nature — there are almost no animals with blue pigments (except for one butterfly, the obrina olivewing; all other animals generate blue through light scattering), blue eyes are rare (also blue through light scattering), and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue?

     

     

    So before we had a word for it, did people not naturally see blue? Do you really see something if you don’t have a word for it?

     

    A researcher named Jules Davidoff traveled to Namibia to investigate this; there he conducted an experiment with the Himba tribe, who speak a language that has no word for blue and no distinction between blue and green. When shown a circle of 11 green squares and one blue square, they couldn’t pick out which one was different from the others.

     

    When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?

     

    Davidoff says that without a word for a colour, without a way of identifying it as different, it’s much harder for us to notice what’s unique about it — even though our eyes are physically seeing the blocks in the same way.

     

    Further research led to wider discussions about color perception in humans. Everything that we make is based on the fact that humans are trichromatic: a television only uses 3 color primaries, and our color printers are built around a similarly small set of inks. But some people, in particular some women, seem to be more sensitive to color differences… mainly because they’re more aware of them, or because of the job that they do.

    Eventually this led to the discovery of a small percentage of the population, referred to as tetrachromats, who developed an extra cone sensitive to yellow, likely due to genetic variation.

    The interesting detail is that even among tetrachromats, only those who had a reason to develop, label and work with that extra color sensitivity actually learned to use their innate ability.

     

    So before blue became a common concept, maybe humans saw it. But it seems they didn’t know they were seeing it.

    If you see something yet can’t recognize it, does it exist? Did colours come into existence over time? Not technically, but our ability to notice them… may have…

     

    Read more: No one could see the colour blue until modern times

LIGHTING

  • Cinematographers Blueprint 300dpi poster

    The 300dpi digital poster is now available to all PixelSham.com subscribers.

     

    If you have already subscribed and would like a copy, please send me a note through the contact page.

    Read more: Cinematographers Blueprint 300dpi poster
  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html

     

     

    Note. The Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image; for that, you might want to look into other methods such as thresholding, histogram analysis, or edge detection, through OpenCV for example.

     

    Here is an OpenCV example:

     

    # bottom left coordinates = 0,0
    import numpy as np
    import cv2
    
    # Load the HDR or EXR image as-is, without tone mapping or conversion.
    # (Reading .exr files with cv2.imread may require the OPENCV_IO_ENABLE_OPENEXR
    # environment variable, depending on how your OpenCV build was configured.)
    image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)
    
    # Calculate the luminance from the HDR channels.
    # OpenCV loads channels in BGR order, so the Rec. 601 weights are listed blue-green-red.
    luminance = np.dot(image[..., :3], [0.114, 0.587, 0.299])
    
    # Set a threshold value based on estimated EV
    threshold_value = 2.4  # Estimated threshold value based on 4.8 EV
    
    # Apply the threshold to identify bright areas.
    # The luminance array holds the calculated luminance of each pixel;
    # threshold_value is a user-defined cutoff separating "bright" from "dark" areas.
    thresholded = (luminance > threshold_value) * 255
    
    # Convert the thresholded image to uint8 for contour detection
    thresholded = thresholded.astype(np.uint8)
    
    # Find contours of the bright areas
    contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    
    # Create a list to store the bounding boxes of bright areas
    bright_areas = []
    
    # Iterate through contours and extract bounding boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
    
        # Adjust the y-coordinate for a bottom-left origin
        y_bottom_left_origin = image.shape[0] - (y + h)
    
        # Store as (x1, y1, x2, y2)
        bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))
    
    # Print the identified bright areas
    print("Bright Areas (x1, y1, x2, y2):")
    for area in bright_areas:
        print(area)

     

    More details

     

    Luminance and Exposure in an EXR Image:

    • The EXR (OpenEXR) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
    • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
    • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

     

    The luminance line calculates the luminance of each pixel in the image as a weighted sum of the red, green, and blue channels. The three float values 0.299, 0.587 and 0.114 are the weights used to perform this calculation (listed in blue-green-red order in the code above, since OpenCV stores channels as BGR).

     

    These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

     

    Here’s the breakdown of the float values:

    • 0.299: Weight for the red channel.
    • 0.587: Weight for the green channel.
    • 0.114: Weight for the blue channel.

     

    The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.
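
    For comparison, OpenCV's own BGR-to-grayscale conversion applies these same Rec. 601 weights internally, so on an ordinary 8-bit image a manual dot product and cv2.cvtColor should agree up to rounding. A minimal sketch (the image path is a placeholder; the weights are listed blue-green-red because OpenCV stores channels in BGR order):

    import numpy as np
    import cv2

    img = cv2.imread('your_image_path.jpg')  # 8-bit BGR image

    # Manual luminance: Rec. 601 weights, reordered to match the BGR channel layout
    manual = np.dot(img[..., :3].astype(np.float32), [0.114, 0.587, 0.299])

    # OpenCV's built-in conversion uses the same weighting internally
    builtin = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

    print(np.abs(manual - builtin).max())  # small, rounding differences only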

     

    For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

     

    To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

    1. Linear and Logarithmic Scales:
      • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
      • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
    2. Conversion Between Scales:

      • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.

      • One common method is to use a power function. For instance, you can use a power function to convert EV to a linear intensity value.



       

      threshold_value = base_value * (2 ** EV)



      Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value (see the sketch of both conversion directions after this list).


    3. Choosing the Base Value:
      • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
      • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.
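
    Putting the two scales side by side, here is a minimal sketch of the conversion in both directions, assuming middle gray sits at 0.18 in linear scene-referred values (the helper names are illustrative):

    import math

    MIDDLE_GRAY = 0.18  # assumed linear value for middle gray

    def ev_to_linear(ev_above_middle_gray: float) -> float:
        """Linear luminance sitting the given number of stops above middle gray."""
        return MIDDLE_GRAY * (2.0 ** ev_above_middle_gray)

    def linear_to_ev(luminance: float) -> float:
        """Stops above (positive) or below (negative) middle gray for a linear value."""
        return math.log2(luminance / MIDDLE_GRAY)

    print(ev_to_linear(2))     # 0.72 -> 2 stops above middle gray
    print(linear_to_ev(2.88))  # 4.0  -> 2.88 in linear is 4 stops above 0.18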

     

    Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

    Here’s an example that uses middle gray (0.18) as the base_value and derives a threshold from a chosen number of EV above it:

     

    # Define the dynamic range of the image in EV (context from above; not used in the formula)
    dynamic_range = 12
    
    # Choose the desired number of EV above middle gray for thresholding
    desired_ev_above_middle_gray = 2
    
    # Calculate the threshold value: middle gray (0.18) doubled once per EV,
    # matching threshold_value = base_value * (2 ** EV) above
    threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)
    
    print("Threshold Value:", threshold_value)  # 0.72
    Read more: HDRI Median Cut plugin
  • Christopher Butler – Understanding the Eye-Mind Connection – Vision is a mental process

    https://www.chrbutler.com/understanding-the-eye-mind-connection

     

    The intricate relationship between the eyes and the brain, often termed the eye-mind connection, reveals that vision is predominantly a cognitive process. This understanding has profound implications for fields such as design, where capturing and maintaining attention is paramount. This essay delves into the nuances of visual perception, the brain’s role in interpreting visual data, and how this knowledge can be applied to effective design strategies.

     

    This cognitive aspect of vision is evident in phenomena such as optical illusions, where the brain interprets visual information in a way that contradicts physical reality. These illusions underscore that what we “see” is not merely a direct recording of the external world but a constructed experience shaped by cognitive processes.

     

    Understanding the cognitive nature of vision is crucial for effective design. Designers must consider how the brain processes visual information to create compelling and engaging visuals. This involves several key principles:

    1. Attention and Engagement
    2. Visual Hierarchy
    3. Cognitive Load Management
    4. Context and Meaning

     

     

    Read more: Christopher Butler – Understanding the Eye-Mind Connection – Vision is a mental process


Subscribe to PixelSham.com RSS for free