COLOR

  • Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking

    https://bottosson.github.io/misc/colorpicker

     

    https://bottosson.github.io/posts/colorpicker/

     

    https://www.smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/

     

    One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also the non-linear coding called gamma. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.
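
    As a minimal sketch of the point above (not Ottosson’s own code): interpolating a blue-to-white gradient directly on gamma-encoded sRGB values versus interpolating in Oklab. The matrices below are copied from the publicly posted Oklab reference implementation; their inverses are computed numerically for brevity, and the gradient endpoints are arbitrary.

    import numpy as np

    # Forward matrices from the public Oklab reference implementation:
    # linear sRGB -> LMS, then non-linear LMS -> Lab.
    M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
                   [0.2119034982, 0.6806995451, 0.1073969566],
                   [0.0883024619, 0.2817188376, 0.6299787005]])
    M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
                   [1.9779984951, -2.4285922050,  0.4505937099],
                   [0.0259040371,  0.7827717662, -0.8086757660]])

    def srgb_to_linear(c):
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * np.clip(c, 0, None) ** (1 / 2.4) - 0.055)

    def srgb_to_oklab(rgb):
        return np.cbrt(srgb_to_linear(rgb) @ M1.T) @ M2.T

    def oklab_to_srgb(lab):
        lms = (lab @ np.linalg.inv(M2).T) ** 3
        return linear_to_srgb(lms @ np.linalg.inv(M1).T)

    blue, white = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])
    for t in np.linspace(0, 1, 5):
        naive = (1 - t) * blue + t * white  # per-channel lerp on encoded sRGB: drifts towards purple
        ok = oklab_to_srgb((1 - t) * srgb_to_oklab(blue) + t * srgb_to_oklab(white))  # steadier hue
        print(f"t={t:.2f}  sRGB lerp {np.round(naive, 3)}  Oklab lerp {np.round(np.clip(ok, 0, 1), 3)}")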

     

     

    There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. The dress is a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

     

    I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation prediction from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them.

     

    IPT was designed to have better hue uniformity. In experiments, they asked people to match light and dark colors, saturated and unsaturated colors, which resulted in a dataset for which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.

     

    In the Munsell color system, colors are described with three parameters designed to match the perceived appearance of colors: Hue, Chroma and Value. The parameters are designed to be independent and to each have a uniform scale, which results in a color solid with an irregular shape. Modern color spaces and models, such as CIELAB, CAM16 and Björn Ottosson’s own Oklab, are very similar in their construction.

     

     

    By far the most used color spaces today for color picking are HSL and HSV, two representations introduced in the classic 1978 paper “Color Spaces for Computer Graphics”. HSL and HSV were designed to roughly correlate with perceptual color properties while being very simple and cheap to compute.

     

    Today HSL and HSV are most commonly used together with the sRGB color space.

     

     

    One of the main advantages of HSL and HSV over the different Lab color spaces is that they map the sRGB gamut to a cylinder. This makes them easy to use since all parameters can be changed independently, without the risk of creating colors outside of the target gamut.
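
    A small illustration of this cylinder property, using Python’s standard colorsys module (the starting color below is arbitrary): every (h, s, v) triple in [0, 1] maps back to an in-gamut RGB triple, so each slider can be moved independently without ever leaving the gamut.

    import colorsys

    h, s, v = 0.58, 0.85, 0.95  # an arbitrary starting color
    for new_s in (0.0, 0.25, 0.5, 0.75, 1.0):  # sweep saturation, hold hue and value fixed
        r, g, b = colorsys.hsv_to_rgb(h, new_s, v)
        assert 0.0 <= r <= 1.0 and 0.0 <= g <= 1.0 and 0.0 <= b <= 1.0  # always in gamut
        print(f"s={new_s:.2f} -> RGB({r:.3f}, {g:.3f}, {b:.3f})")

    By contrast, sweeping the chroma axis of a Lab-style space in the same way can produce values outside the sRGB gamut, which then have to be clipped or gamut-mapped.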

     

     

    The main drawback, on the other hand, is that their properties don’t match human perception particularly well.
    Reconciling these conflicting goals perfectly isn’t possible, but given that HSV and HSL don’t use anything derived from experiments relating to human perception, creating something that makes a better tradeoff does not seem unreasonable.

     

     

    With this new lightness estimate, we are ready to look into the construction of Okhsv and Okhsl.

     

     

    Read more: Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking
  • Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

    https://jo.dreggn.org/home/2018_manuka.pdf

     

    http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/

     

    The Manuka rendering architecture has been designed in the spirit of the classic Reyes rendering architecture. At its core, Reyes is based on stochastic rasterisation of micropolygons, facilitating depth of field, motion blur, high geometric complexity, and programmable shading.

     

    This is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade-on-hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure which is then used by path sampling logic to evaluate contributions and to inform what further rays to cast through the scene.

     

    Over the years, however, the expectations have risen substantially when it comes to image quality. Computing pictures which are indistinguishable from real footage requires accurate simulation of light transport, which is most often performed using some variant of Monte Carlo path tracing. Unfortunately this paradigm requires random memory accesses to the whole scene and does not lend itself well to a rasterisation approach at all.

     

    Manuka is both a uni-directional and bidirectional path tracer and encompasses multiple importance sampling (MIS). Interestingly, and importantly for production character skin work, it is the first major production renderer to incorporate spectral MIS in the form of a new ‘Hero Spectral Sampling’ technique, which was recently published at Eurographics Symposium on Rendering 2014.

     

    Manuka proposes a shade-before-hit paradigm instead, minimising I/O strain (and some memory costs) on the system by leveraging locality of reference: pattern generation shaders are run before light transport simulation by path sampling is executed, “compressing” any BVH structure as needed and limiting duplication of source data.
    The difference with Reyes is that instead of baking colors into the geometry, Manuka bakes surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up-front and baked into the geometry.

     

    The main drawback with this method is that geometry has to be tessellated to its highest, stable topology before shading can be evaluated properly, which leads to a high cost to first pixel. Even a basic four-vertex quad becomes a much more complex model with this approach.

     

     

    Manuka uses the RenderMan Shading Language (RSL) for programmable shading [Pixar Animation Studios 2015], but we do not invoke RSL shaders when intersecting a ray with a surface (often called shade-on-hit). Instead, we pre-tessellate and pre-shade all the input geometry in the front end of the renderer.
    This way, we can efficiently order shading computations to support near-optimal texture locality, vectorisation, and parallelism. This system avoids repeated evaluation of shaders at the same surface point, and presents a minimal amount of memory to be accessed during light transport time. An added benefit is that the acceleration structure for ray tracing (a bounding volume hierarchy, BVH) is built once on the final tessellated geometry, which allows us to ray trace more efficiently than multi-level BVHs and avoids costly caching of on-demand tessellated micropolygons and the associated scheduling issues.
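
    A toy, self-contained sketch of the difference between the two paradigms (illustrative only; the names below are hypothetical and none of this is Manuka’s API): in shade-on-hit, shading and its texture I/O run interleaved with light transport, while in shade-before-hit all shading is baked onto the tessellated vertices first and transport only reads the baked closures.

    def expensive_shader(point):
        # Stand-in for pattern generation / texture lookups.
        return {"albedo": (0.5, 0.5, 0.5), "at": point}

    # Shade-on-hit: shaders run at every hit, interleaved with tracing,
    # so texture I/O is scattered throughout light transport.
    def radiance_shade_on_hit(hit_points):
        total = 0.0
        for p in hit_points:
            closure = expensive_shader(p)   # shading + texture I/O happen mid-transport
            total += sum(closure["albedo"]) / 3.0
        return total

    # Shade-before-hit: shade all (tessellated) vertices up front, baking closures;
    # light transport afterwards only reads the baked data.
    def bake_closures(vertices):
        return {v: expensive_shader(v) for v in vertices}   # all shading/I/O done up front

    def radiance_shade_before_hit(hit_points, baked):
        total = 0.0
        for p in hit_points:
            closure = baked[p]              # no shading during transport, just a lookup
            total += sum(closure["albedo"]) / 3.0
        return total

    vertices = [(x, 0.0) for x in range(4)]
    baked = bake_closures(vertices)
    print(radiance_shade_on_hit(vertices), radiance_shade_before_hit(vertices, baked))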

     

    For the shading reasons above, in terms of AOVs, the studio’s approach is to combine complex shading with ray paths in the render rather than pass a multi-pass render to compositing.

     

    For the spectral rendering component, the light transport stage is fully spectral, using a continuously sampled wavelength which is traced with each path and used to apply the spectral camera sensitivity of the sensor. This allows the renderer to faithfully support any degree of observer metamerism in the camera footage it is intended to match, as well as complex materials which require wavelength-dependent phenomena such as diffraction, dispersion, interference, iridescence, or chromatic extinction and Rayleigh scattering in participating media.
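
    For the spectral side, a rough self-contained sketch of the hero-wavelength idea from the published Hero Wavelength Spectral Sampling paper (illustrative only, not Manuka code; the wavelength range and count are assumptions): one hero wavelength is sampled per path and the companions are obtained by rotating it through the sampled range, so a single path carries several wavelengths that can later be combined with MIS.

    import random

    LAMBDA_MIN, LAMBDA_MAX = 380.0, 700.0  # nm; assumed sampled range
    N_WAVELENGTHS = 4                      # wavelengths carried per path (assumed)

    def sample_hero_wavelengths(rng=random):
        span = LAMBDA_MAX - LAMBDA_MIN
        hero = LAMBDA_MIN + rng.random() * span
        # Companion wavelengths are equally spaced rotations of the hero wavelength.
        return [LAMBDA_MIN + ((hero - LAMBDA_MIN) + j * span / N_WAVELENGTHS) % span
                for j in range(N_WAVELENGTHS)]

    print(sample_hero_wavelengths())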

     

    As opposed to the original Reyes paper, we use bilinear interpolation of these BSDF inputs later when evaluating BSDFs per path vertex during light transport. This improves temporal stability of geometry which moves very slowly with respect to the pixel raster.

     

    In terms of the pipeline, everything rendered at Weta was already completely interwoven with their deep data pipeline, and Manuka very much was written with deep data in mind. Manuka does not so much extend the deep capabilities as fully match the already extremely complex and powerful setup Weta Digital enjoys with RenderMan. For example, an ape in a scene can be selected, its ID is available, and a NUKE artist can then paint in 3D, say, a hand and part of the way up the neutral-posed ape.

     

    We called our system Manuka, as a respectful nod to Reyes: we had heard a story from a former ILM employee about how Reyes got its name from how fond the early Pixar people were of their lunches at Point Reyes, and decided to name our system after our surrounding natural environment, too. Manuka is a kind of tea tree very common in New Zealand which has very many very small leaves, in analogy to micropolygons in a tree structure for ray tracing. It also happens to be the case that Weta Digital’s main site is on Manuka Street.

     

     

    Read more: Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline
  • What Is The Resolution and view coverage Of The human Eye. And what distance is TV at best?

    https://www.discovery.com/science/mexapixels-in-human-eye

    About 576 megapixels for the entire field of view.

     

    Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene, and assume the eye resolves about 0.3 arc-minutes per “pixel”. The number of pixels would be:
    90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).

     

    At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let’s be conservative and use 120 degrees for the field of view. Then we would see:

    120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.

    Or.

    7 megapixels for the 2 degree focus arc… + 1 megapixel for the rest.
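
    A quick check of the arithmetic in the two quoted estimates, using the same 0.3 arc-minutes-per-“pixel” figure:

    ARCMIN_PER_PIXEL = 0.3  # assumed eye resolution used in the estimates above

    def megapixels(fov_degrees):
        pixels_per_side = fov_degrees * 60 / ARCMIN_PER_PIXEL  # degrees -> arc-minutes -> pixels
        return pixels_per_side ** 2 / 1e6

    print(megapixels(90))   # 324.0 MP for a 90 x 90 degree window
    print(megapixels(120))  # 576.0 MP for a conservative 120 x 120 degree field of view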

    https://clarkvision.com/articles/eye-resolution.html

     

    Details in the post


    Read more: What Is The Resolution and view coverage Of The human Eye. And what distance is TV at best?

LIGHTING

  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html

     

     

    Note. The Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. However, if you’re interested in identifying the brightest areas, you might want to look into other methods like thresholding, histogram analysis, or edge detection, through openCV for example.

     

    Here is an OpenCV example:

     

    # bottom left coordinates = 0,0
    import numpy as np
    import cv2

    # Load the HDR or EXR image as-is, without modification
    # (recent OpenCV builds may require the OPENCV_IO_ENABLE_OPENEXR=1 environment variable for EXR)
    image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

    # Calculate the luminance from the HDR channels.
    # OpenCV loads channels in BGR order, so reverse them to RGB before applying the Rec. 601 weights.
    rgb = image[..., :3][..., ::-1]
    luminance = np.dot(rgb, [0.299, 0.587, 0.114])

    # Set a threshold value based on estimated EV
    threshold_value = 2.4  # Estimated threshold value based on 4.8 EV

    # Apply the threshold to identify bright areas.
    # The luminance array contains the calculated luminance values for each pixel in the image.
    # The threshold_value is a user-defined cutoff separating "bright" and "dark" areas of the image.
    thresholded = (luminance > threshold_value) * 255

    # Convert the thresholded image to uint8 for contour detection
    thresholded = thresholded.astype(np.uint8)

    # Find contours of the bright areas
    contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Create a list to store the bounding boxes of bright areas
    bright_areas = []

    # Iterate through contours and extract bounding boxes
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)

        # Adjust y-coordinate based on bottom-left origin
        y_bottom_left_origin = image.shape[0] - (y + h)

        # Store as (x1, y1, x2, y2)
        bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))

    # Print the identified bright areas
    print("Bright Areas (x1, y1, x2, y2):")
    for area in bright_areas:
        print(area)

     

    More details

     

    Luminance and Exposure in an EXR Image:

    • An EXR (Extended Dynamic Range) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
    • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
    • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

     

    The luminance line is calculating the luminance of each pixel in the image using a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used to perform this calculation.

     

    These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

     

    Here’s the breakdown of the float values:

    • 0.299: Weight for the red channel.
    • 0.587: Weight for the green channel.
    • 0.114: Weight for the blue channel.

     

    The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.

     

    For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

     

    To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

    1. Linear and Logarithmic Scales:
      • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
      • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
    2. Conversion Between Scales:

      • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.

      • One common method is to use a power function. For instance, you can use a power function to convert EV to a linear intensity value.



       

      threshold_value = base_value * (2 ** EV)



      Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value.


    3. Choosing the Base Value:
      • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
      • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.

     

    Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

    Here’s an example of how you might determine a base_value to achieve this:

     

    # Define the dynamic range of the image in EV
    # (context only: it bounds how far above middle gray a threshold can usefully go)
    dynamic_range = 12

    # Choose the desired number of EV above middle gray for thresholding
    desired_ev_above_middle_gray = 2

    # Calculate the threshold value: following threshold_value = base_value * (2 ** EV),
    # each EV above middle gray (0.18) doubles the linear threshold
    threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

    print("Threshold Value:", threshold_value)  # 0.72
    Read more: HDRI Median Cut plugin
  • Polarised vs unpolarized filtering

    A light wave that is vibrating in more than one plane is referred to as unpolarized light. … Polarized light waves are light waves in which the vibrations occur in a single plane. The process of transforming unpolarized light into polarized light is known as polarization.

    en.wikipedia.org/wiki/Polarizing_filter_(photography)

     

    Light reflected from a non-metallic surface becomes polarized; this effect is maximum at Brewster’s angle, about 56° from the vertical for common glass.

     

    A polarizer rotated to pass only light polarized in the direction perpendicular to the reflected light will absorb much of it. This absorption allows glare reflected from, for example, a body of water or a road to be reduced. Reflections from shiny surfaces (e.g. vegetation, sweaty skin, water surfaces, glass) are also reduced. This allows the natural color and detail of what is beneath to come through. Reflections from a window into a dark interior can be much reduced, allowing it to be seen through. (The same effects are available for vision by using polarizing sunglasses.)
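
    The standard relation that quantifies this (Malus’s law, not stated in the article): an ideal polarizer transmits a fraction cos²θ of already-polarized light, where θ is the angle between the light’s polarization plane and the filter’s axis.

    import math

    incident = 1.0  # already-polarized incident intensity (arbitrary units)
    for theta_deg in (0, 30, 60, 90):
        transmitted = incident * math.cos(math.radians(theta_deg)) ** 2  # Malus's law
        print(f"theta={theta_deg:3d} deg -> {transmitted:.2f} of the polarized light passes")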

     

    www.physicsclassroom.com/class/light/u12l1e.cfm

     

    Some of the light coming from the sky is polarized (bees use this phenomenon for navigation). The electrons in the air molecules cause a scattering of sunlight in all directions. This explains why the sky is not dark during the day. But when looked at from the sides, the light emitted from a specific electron is totally polarized.[3] Hence, a picture taken in a direction at 90 degrees from the sun can take advantage of this polarization. Use of a polarizing filter, in the correct direction, will filter out the polarized component of skylight, darkening the sky; the landscape below it, and clouds, will be less affected, giving a photograph with a darker and more dramatic sky, and emphasizing the clouds.

     

    There are two types of polarizing filters readily available, linear and “circular”, which have exactly the same effect photographically. But the metering and auto-focus sensors in certain cameras, including virtually all auto-focus SLRs, will not work properly with linear polarizers because the beam splitters used to split off the light for focusing and metering are polarization-dependent.

     

    Polarizing filters reduce the light passed through to the film or sensor by about one to three stops (2–8×) depending on how much of the light is polarized at the filter angle selected. Auto-exposure cameras will adjust for this by widening the aperture, lengthening the time the shutter is open, and/or increasing the ASA/ISO speed of the camera.
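
    A small sketch of the exposure compensation implied here (the base shutter time is an arbitrary example): a 2–8× filter factor corresponds to 1–3 stops, and an auto-exposure camera compensating purely with shutter speed would lengthen the exposure time by that same factor.

    import math

    base_shutter = 1 / 500  # example unfiltered shutter time, in seconds
    for filter_factor in (2, 4, 8):
        stops = math.log2(filter_factor)              # stops of light lost to the polarizer
        compensated = base_shutter * filter_factor    # longer shutter keeps the exposure constant
        print(f"{filter_factor}x filter = {stops:.0f} stop(s) -> shutter {compensated:.4f} s")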

     

    www.adorama.com/alc/nd-filter-vs-polarizer-what%25e2%2580%2599s-the-difference

     

    Neutral Density (ND) filters help control image exposure by reducing the light that enters the camera so that you can have more control of your depth of field and shutter speed. Polarizers or polarizing filters work in a similar way, but the difference is that they selectively let light waves of a certain polarization pass through. This effect helps create more vivid colors in an image, as well as manage glare and reflections from water surfaces. Both are regarded as some of the best filters for landscape and travel photography as they reduce the dynamic range in high-contrast images, thus enabling photographers to capture more realistic and dramatic sceneries.

     

    shopfelixgray.com/blog/polarized-vs-non-polarized-sunglasses/

     

    www.eyebuydirect.com/blog/difference-polarized-nonpolarized-sunglasses/

     

    Read more: Polarised vs unpolarized filtering