COLOR

  • What is a Gamut or Color Space and why do I need to know about CIE

    http://www.xdcam-user.com/2014/05/what-is-a-gamut-or-color-space-and-why-do-i-need-to-know-about-it/

     

    In video terms, gamut normally refers to the full range of colours and brightness that can be either captured or displayed.

     

    Generally speaking, all color gamut recommendations try to define a reasonable level of color representation based on available technology and hardware. Rec.601 represents the old TVs. Rec.709 is currently the most widely distributed solution. P3 is mainly available in movie theaters and is now being adopted in some of the best new 4K HDR TVs. Rec.2020 (a wider space than P3 that improves on visible color representation) and ACES (full coverage of visible color) are other common standards seeing major hardware development these days.

     

     

    To compare and visualize different solutions (across video and printing), most developers use the CIE color model chart as a reference.
    The CIE color model is a color space model created in 1931 by the International Commission on Illumination, known as the Commission Internationale de l’Éclairage (CIE). It is also known as the CIE XYZ color space or the CIE 1931 XYZ color space.
    This chart represents the first defined quantitative link between distributions of wavelengths in the electromagnetic visible spectrum and physiologically perceived colors in human color vision. Or basically, the range of color a typical human eye can perceive through visible light.

     

    Note that while human perception is quite wide, and generally speaking biased towards greens (we are apes after all), the colors available in nature, generated through light reflection, tend to occupy a much smaller region. This is defined by Pointer’s gamut.

     

    In short, a color gamut is a representation of color coverage, used to describe the data stored in images against available hardware and viewing technologies.
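
    To make the coverage idea concrete, here is a minimal Python sketch (not from the article) that tests whether a CIE 1931 xy chromaticity lies inside the Rec.709 gamut triangle. The primaries and the D65 white point are the published Rec.709 values; the point-in-triangle test uses barycentric coordinates.

    # Published Rec.709 primaries as CIE 1931 xy chromaticities
    REC709 = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}

    def inside_gamut(x, y, gamut=REC709):
        """True if the xy point lies inside the gamut triangle."""
        (x1, y1), (x2, y2), (x3, y3) = gamut["R"], gamut["G"], gamut["B"]
        # Barycentric coordinates of (x, y) relative to the triangle
        d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
        b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
        return a >= 0 and b >= 0 and (1.0 - a - b) >= 0

    print(inside_gamut(0.3127, 0.3290))  # D65 white point -> True
    print(inside_gamut(0.7347, 0.2653))  # spectral red (~700 nm) -> False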

     

    Camera color encoding from
    https://www.slideshare.net/hpduiker/acescg-a-common-color-encoding-for-visual-effects-applications

     

    CIE 1976

    http://bernardsmith.eu/computatrum/scan_and_restore_archive_and_print/scanning/

     

    https://store.yujiintl.com/blogs/high-cri-led/understanding-cie1931-and-cie-1976

     

    The CIE 1931 standard has since been complemented by the CIE 1976 standard. Below we can see the significance of this.

     

    People have observed that the biggest issue with CIE 1931 is its lack of perceptual uniformity: the three-dimensional color space in rectangular coordinates is not visually uniform, so equal distances on the chart do not correspond to equal perceived color differences.

     

    The CIE 1976 colour space (also called CIELUV) was created by the CIE in 1976. It was put forward in an attempt to provide a more uniform colour spacing than CIE 1931 for colours at approximately the same luminance.

     

    The CIE 1976 standard colour space is more perceptually linear, and variations in perceived colour between different observers have also been reduced. The disproportionately large green-turquoise area of CIE 1931, which cannot be generated with existing computer screens, has been reduced.

     

    If we move from CIE 1931 to the CIE 1976 standard colour space, we can see that the improvements made in the gamut of the “new” iPad screen (as compared to the “old” iPad 2) are more evident in the CIE 1976 colour space than in the CIE 1931 colour space, particularly in the blues from aqua to deep blue.
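
    The 1976 diagram is just a projective remapping of the 1931 xy coordinates; a quick Python sketch of the standard xy → u′v′ transform:

    # CIE 1931 xy -> CIE 1976 u'v' (the more perceptually uniform diagram)
    def xy_to_uv(x, y):
        d = -2.0 * x + 12.0 * y + 3.0
        return 4.0 * x / d, 9.0 * y / d

    print(xy_to_uv(0.3127, 0.3290))  # D65 -> approx (0.1978, 0.4683)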

     

     

    https://dot-color.com/2012/08/14/color-space-confusion/

    Despite its age, CIE 1931, named for the year of its adoption, remains a well-worn and familiar shorthand throughout the display industry. CIE 1931 is the primary language of customers. When a customer says that their current display “can do 72% of NTSC,” they implicitly mean 72% of NTSC 1953 color gamut as mapped against CIE 1931.

    Read more: What is a Gamut or Color Space and why do I need to know about CIE
  • Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

    https://jo.dreggn.org/home/2018_manuka.pdf

     

    http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/

     

    The Manuka rendering architecture was designed in the spirit of the classic Reyes rendering architecture. At its core, Reyes is based on stochastic rasterisation of micropolygons, facilitating depth of field, motion blur, high geometric complexity, and programmable shading.

     

    This is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade-on-hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure, which is then used by the path sampling logic to evaluate contributions and to inform what further rays to cast through the scene.

     

    Over the years, however, the expectations have risen substantially when it comes to image quality. Computing pictures which are indistinguishable from real footage requires accurate simulation of light transport, which is most often performed using some variant of Monte Carlo path tracing. Unfortunately this paradigm requires random memory accesses to the whole scene and does not lend itself well to a rasterisation approach at all.

     

    Manuka is both a uni-directional and bidirectional path tracer and encompasses multiple importance sampling (MIS). Interestingly, and importantly for production character skin work, it is the first major production renderer to incorporate spectral MIS, in the form of a new ‘Hero Spectral Sampling’ technique published at the Eurographics Symposium on Rendering 2014.

     

    Manuka proposes a shade-before-hit paradigm instead, minimising I/O strain (and some memory costs) on the system and leveraging locality of reference by running pattern-generation shaders before executing light transport simulation by path sampling, “compressing” any BVH structure as needed and as such also limiting duplication of source data.
    The difference from Reyes is that instead of baking colors into the geometry, Manuka bakes surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up front and baked into the geometry.

     

    The main drawback of this method is that geometry has to be tessellated to its highest, stable topology before shading can be evaluated properly; hence the high cost to first pixel. Even a basic four-vertex square becomes a much more complex model with this approach.

     

     

    Manuka uses the RenderMan Shading Language (RSL) for programmable shading [Pixar Animation Studios 2015], but does not invoke RSL shaders when intersecting a ray with a surface (often called shade-on-hit). Instead, it pre-tessellates and pre-shades all the input geometry in the front end of the renderer.
    This way, shading computations can be ordered efficiently to support near-optimal texture locality, vectorisation, and parallelism. This system avoids repeated evaluation of shaders at the same surface point and presents a minimal amount of memory to be accessed during light transport. An added benefit is that the acceleration structure for ray tracing (a bounding volume hierarchy, BVH) is built once, on the final tessellated geometry, which allows ray tracing more efficiently than multi-level BVHs and avoids costly caching of on-demand tessellated micropolygons and the associated scheduling issues.
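
    Below is a hypothetical, heavily simplified Python sketch (not Weta’s code; all class and function names are invented for illustration) contrasting shade-on-hit with shade-before-hit. The point is when shading runs relative to light transport, not the rendering math itself.

    import random

    def run_shader(u, v):
        # Stand-in for expensive pattern generation / texture lookups
        return (u, v, 0.5)

    class Quad:
        def __init__(self, res=4):
            # Dice to final topology up front: the "cost to first pixel"
            self.verts = [(i / res, j / res)
                          for i in range(res + 1) for j in range(res + 1)]
            self.closures = None

        def preshade(self):
            # Shade-before-hit: bake shader outputs (closures) onto the
            # vertices, in a coherent order, before any rays are traced
            self.closures = [run_shader(u, v) for (u, v) in self.verts]

    def shade_on_hit(prim, n_paths):
        # Classic paradigm: the shader runs at every ray hit
        return [run_shader(random.random(), random.random())
                for _ in range(n_paths)]

    def shade_before_hit(prim, n_paths):
        prim.preshade()
        # During light transport only cheap lookups/interpolation remain
        return [random.choice(prim.closures) for _ in range(n_paths)]

    quad = Quad()
    print(len(shade_on_hit(quad, 8)), len(shade_before_hit(quad, 8)))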

     

    For the shading reasons above, in terms of AOVs the studio’s approach is to combine complex shading with ray paths inside the render, rather than handing a multi-pass render over to compositing.

     

    As for the spectral rendering component: the light transport stage is fully spectral, using a continuously sampled wavelength which is traced with each path and used to apply the spectral sensitivity of the camera sensor. This faithfully supports any degree of observer metamerism in the camera footage the renders are intended to match, as well as complex materials which require wavelength-dependent phenomena such as diffraction, dispersion, interference, iridescence, or chromatic extinction and Rayleigh scattering in participating media.

     

    As opposed to the original Reyes paper, Manuka uses bilinear interpolation of these BSDF inputs later, when evaluating BSDFs per path vertex during light transport. This improves the temporal stability of geometry which moves very slowly with respect to the pixel raster.

     

    In terms of the pipeline, everything rendered at Weta was already completely interwoven with their deep data pipeline, and Manuka was very much written with deep data in mind. Manuka does not so much extend the deep capabilities as fully match the already extremely complex and powerful setup Weta Digital enjoys with RenderMan. For example, an ape in a scene can be selected, its ID is available, and a NUKE artist can then paint in 3D, say, a hand and part of the way up the neutral-posed ape.

     

    We called our system Manuka, as a respectful nod to Reyes: we had heard a story from a former ILM employee about how Reyes got its name from how fond the early Pixar people were of their lunches at Point Reyes, and decided to name our system after our surrounding natural environment, too. Manuka is a kind of tea tree very common in New Zealand which has very many very small leaves, in analogy to micropolygons in a tree structure for ray tracing. It also happens to be the case that Weta Digital’s main site is on Manuka Street.

     

     

    Read more: Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline
  • Tim Kang – calibrated white light values in sRGB color space

    https://www.linkedin.com/posts/timkang_colorimetry-cinematography-nerdalert-activity-7058330978007584769-9xln

     

    8-bit sRGB encoded (CCT → R G B)
    2000K 255 139 22
    2700K 255 172 89
    3000K 255 184 109
    3200K 255 190 122
    4000K 255 211 165
    4300K 255 219 178
    D50 255 235 205
    D55 255 243 224
    D5600 255 244 227
    D6000 255 249 240
    D65 255 255 255
    D10000 202 221 255
    D20000 166 196 255

    8-bit Rec.709 Gamma 2.4 (CCT → R G B)
    2000K 255 145 34
    2700K 255 177 97
    3000K 255 187 117
    3200K 255 193 129
    4000K 255 214 170
    4300K 255 221 182
    D50 255 236 208
    D55 255 243 226
    D5600 255 245 229
    D6000 255 250 241
    D65 255 255 255
    D10000 204 222 255
    D20000 170 199 255

    8-bit Display P3 encoded (CCT → R G B)
    2000K 255 154 63
    2700K 255 185 109
    3000K 255 195 127
    3200K 255 201 138
    4000K 255 219 176
    4300K 255 225 187
    D50 255 239 212
    D55 255 245 228
    D5600 255 246 231
    D6000 255 251 242
    D65 255 255 255
    D10000 208 223 255
    D20000 175 199 255

    10-bit Rec.2020 PQ, 100 nits (CCT → R G B)
    2000K 520 435 273
    2700K 520 466 358
    3000K 520 475 384
    3200K 520 480 399
    4000K 520 495 446
    4300K 520 500 458
    D50 520 510 482
    D55 520 514 497
    D5600 520 514 500
    D6000 520 517 509
    D65 520 520 520
    D10000 479 489 520
    D20000 448 464 520
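
    As a convenience, the 8-bit sRGB column above can be dropped into a lookup table; a small sketch (note these are display-encoded values, so they would still need linearising through the inverse sRGB transfer function before use as light colours in a linear workflow):

    # 8-bit sRGB-encoded white points from the table above, keyed by CCT
    SRGB_WHITE_8BIT = {
        "2000K": (255, 139, 22),  "2700K": (255, 172, 89),
        "3000K": (255, 184, 109), "3200K": (255, 190, 122),
        "4000K": (255, 211, 165), "4300K": (255, 219, 178),
        "D50": (255, 235, 205),   "D55": (255, 243, 224),
        "D5600": (255, 244, 227), "D6000": (255, 249, 240),
        "D65": (255, 255, 255),
        "D10000": (202, 221, 255), "D20000": (166, 196, 255),
    }

    def white_rgb(cct):
        # Normalise to 0-1 floats
        return tuple(v / 255.0 for v in SRGB_WHITE_8BIT[cct])

    print(white_rgb("3200K"))  # (1.0, 0.745..., 0.478...)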

     

    Read more: Tim Kang – calibrated white light values in sRGB color space
  • 3D Lighting Tutorial by Amaan Kram

    http://www.amaanakram.com/lightingT/part1.htm

    The goals of lighting in 3D computer graphics are more or less the same as those of real world lighting.

     

    Lighting serves the basic function of bringing out, or pushing back, the shapes of objects visible from the camera’s view.
    It gives a two-dimensional image on the monitor an illusion of the third dimension: depth.

    But it does not just stop there. It gives an image its personality, its character. A scene lit in different ways can give a feeling of happiness, of sorrow, of fear etc., and it can do so in dramatic or subtle ways. Along with personality and character, lighting fills a scene with emotion that is directly transmitted to the viewer.

     

    Trying to simulate a real environment in an artificial one can be a daunting task. But even if you make your 3D rendering look absolutely photo-realistic, it doesn’t guarantee that the image carries enough emotion to elicit a “wow” from the people viewing it.

     

    Making 3D renderings photo-realistic can be hard. Putting deep emotions in them can be even harder. However, if you plan out your lighting strategy for the mood and emotion that you want your rendering to express, you make the process easier for yourself.

     

    Each light source can be broken down into four distinct components and analyzed accordingly:

    · Intensity
    · Direction
    · Color
    · Size

     

    The overall thrust of this writing is to produce photo-realistic images by applying good lighting techniques.

    Read more: 3D Lighting Tutorial by Amaan Kram

LIGHTING

  • Debayer – A free command line tool to convert camera raw images into scene-linear exr

    https://github.com/jedypod/debayer

     

     

    The only required dependency is oiiotool. However, other “debayer engines” are also supported.

    • OpenImageIO – oiiotool is used for converting debayered tif images to exr.
    • Debayer Engines
      • RawTherapee – Powerful raw development software used to decode raw images. High quality, good selection of debayer algorithms, and more advanced raw processing like chromatic aberration removal.
      • LibRaw – dcraw_emu commandline utility included with LibRaw. Optional alternative for debayer. Simple, fast and effective.
      • Darktable – Uses darktable-cli plus an xmp config to process.
      • vkdt – Uses vkdt-cli to debayer. Still quite experimental. Uses Vulkan for image processing. Stupidly fast, but pretty limited.
    Read more: Debayer – A free command line tool to convert camera raw images into scene-linear exr
  • Photography basics: Exposure Value vs Photographic Exposure vs Il/Luminance vs Pixel luminance measurements

    Also see: https://www.pixelsham.com/2015/05/16/how-aperture-shutter-speed-and-iso-affect-your-photos/

     

    In photography, exposure value (EV) is a number that represents a combination of a camera’s shutter speed and f-number, such that all combinations that yield the same exposure have the same EV (for any fixed scene luminance).

     

     

    The EV concept was developed in an attempt to simplify choosing among combinations of equivalent camera settings. Although all camera settings with the same EV nominally give the same exposure, they do not necessarily give the same picture. EV is also used to indicate an interval on the photographic exposure scale, with 1 EV corresponding to a standard power-of-2 exposure step, commonly referred to as a stop.

     

    EV 0 corresponds to an exposure time of 1 sec and a relative aperture of f/1.0. If the EV is known, it can be used to select combinations of exposure time and f-number.

     

    https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

    Note that EV is not the same as photographic exposure. Photographic exposure is defined as how much light hits the camera’s sensor; it depends on the camera settings, mainly aperture and shutter speed. Exposure value (EV) is a number that represents the exposure setting of the camera.

     

    Thus, strictly, EV is not a measure of luminance (indirect or reflected exposure) or illuminance (incidental exposure); rather, an EV corresponds to a luminance (or illuminance) for which a camera with a given ISO speed would use the indicated EV to obtain the nominally correct exposure. Nonetheless, it is common practice among photographic equipment manufacturers to express luminance in EV for ISO 100 speed, as when specifying metering range or autofocus sensitivity.

     

    The exposure depends on two things: how much light gets through the lens to the camera’s sensor and for how long the sensor is exposed. The former is a function of the aperture value, while the latter is a function of the shutter speed. Exposure value is a number that represents this potential amount of light that could hit the sensor. It is important to understand that exposure value is a measure of how exposed the sensor is to light, not a measure of how much light actually hits the sensor. The exposure value is independent of how lit the scene is. For example, a given pair of aperture value and shutter speed represents the same exposure value whether the camera is used during a very bright day or during a dark night.

     

    Each exposure value number represents all the possible shutter and aperture settings that result in the same exposure. Although the exposure value is the same for different combinations of aperture values and shutter speeds the resulting photo can be very different (the aperture controls the depth of field while shutter speed controls how much motion is captured).

    EV 0.0 is defined as the exposure when setting the aperture to f-number 1.0 and the shutter speed to 1 second. All other exposure values are relative to that number. Exposure values are on a base two logarithmic scale. This means that every single step of EV – plus or minus 1 – represents the exposure (actual light that hits the sensor) being halved or doubled.

    https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

     

    Formula

    https://en.wikipedia.org/wiki/Exposure_value

     

    https://www.scantips.com/lights/math.html

     

    EV = log₂(N² / t), which means 2^EV = N² / t

    where

    • N is the relative aperture (f-number); note that f-stop values must first be squared in most calculations
    • t is the exposure time (shutter speed) in seconds

    EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

    Example: for f/16 and 1/4 second:

    N² / t = (16 × 16) ÷ (1/4) = 16 × 16 × 4 = 1024

    log₂(1024) = 10, so this is EV 10 (since 2^10 = 1024).
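
    As a quick sanity check, here is a minimal Python sketch of the formula above; only the standard library is assumed.

    import math

    # EV = log2(N^2 / t): N is the f-number, t the shutter time in seconds
    def exposure_value(f_number, shutter_seconds):
        return math.log2(f_number ** 2 / shutter_seconds)

    print(exposure_value(1.0, 1.0))   # f/1.0 at 1 sec -> EV 0, by definition
    print(exposure_value(16, 1 / 4))  # 16 x 16 x 4 = 1024 -> EV 10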

     

    Collecting photographic exposure using Light Meters

    https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

    The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in and makes a guess based on that. The camera will aim for 18% gray: if you take a photo of an entirely white surface and of an entirely black surface, you should get two identical gray images (at least in theory).

    https://en.wikipedia.org/wiki/Light_meter

    For reflected-light meters, camera settings are related to ISO speed and subject luminance by the reflected-light exposure equation:

    N² / t = L × S / K

    where

    • N is the relative aperture (f-number)
    • t is the exposure time (“shutter speed”) in seconds
    • L is the average scene luminance
    • S is the ISO arithmetic speed
    • K is the reflected-light meter calibration constant

     

    For incident-light meters, camera settings are related to ISO speed and subject illuminance by the incident-light exposure equation:

    N² / t = E × S / C

    where

    • E is the illuminance (in lux)
    • C is the incident-light meter calibration constant

     

    Two values for K are in common use: 12.5 (Canon, Nikon, and Sekonic) and 14 (Minolta, Kenko, and Pentax); the difference between the two values is approximately 1/6 EV.
    For C a value of 250 is commonly used.

     

    Nonetheless, it is common practice among photographic equipment manufacturers to also express luminance in EV for ISO 100 speed. Using K = 12.5, the relationship between EV at ISO 100 and luminance L is then:

    L = 2^(EV − 3)

     

    The situation with incident-light meters is more complicated than that for reflected-light meters, because the calibration constant C depends on the sensor type. Illuminance is measured with a flat sensor; a typical value for C is 250 with illuminance in lux. Using C = 250, the relationship between EV at ISO 100 and illuminance E is then:

     

    E = 2.5 × 2^EV
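
    A small Python sketch of the two ISO 100 relationships above (K = 12.5 and C = 250 assumed, as in the text):

    # L = 2^(EV - 3): average scene luminance in cd/m^2 at ISO 100 (K = 12.5)
    def luminance_from_ev(ev):
        return 2.0 ** (ev - 3)

    # E = 2.5 x 2^EV: illuminance in lux at ISO 100 (C = 250)
    def illuminance_from_ev(ev):
        return 2.5 * 2.0 ** ev

    print(luminance_from_ev(15.5))  # ~5793 cd/m^2, a sunlit scene
    print(illuminance_from_ev(15))  # 81920 lux, roughly direct sunlight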

     

    https://nofilmschool.com/2018/03/want-easier-and-faster-way-calculate-exposure-formula

    Three basic factors go into the exposure formula: aperture, shutter, and ISO, plus a light meter calibration constant.

    f-stop²/shutter (in seconds) = lux * ISO/C

     

    If you know four of those variables, you can calculate the missing one.

    So, say you want to figure out how much light you’re going to need in order to shoot at a certain f-stop. All you do is plug in your values (the f-stop, shutter speed, ISO, and your light meter calibration constant) into the formula below:

    lux = C × (f-stop² / shutter in seconds) / ISO
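
    For instance, a minimal sketch of that rearranged formula (C = 250 assumed for an incident meter, per the values quoted above):

    # lux = C * (f_stop^2 / shutter) / ISO
    def required_lux(f_stop, shutter_seconds, iso, C=250.0):
        return C * (f_stop ** 2 / shutter_seconds) / iso

    # Light needed to shoot f/4 at 1/50 s, ISO 800:
    print(required_lux(4.0, 1 / 50, 800))  # 250.0 lux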

     

    Exposure Value Calculator:

    https://snapheadshots.com/resources/exposure-and-light-calculator

     

    https://www.scantips.com/lights/exposurecalc.html

     

    https://www.pointsinfocus.com/tools/exposure-settings-ev-calculator/#google_vignette

     

    From that perspective, an exposure stop is a measurement of exposure and provides a universal linear scale for the increase or decrease in light reaching the image sensor due to changes in shutter speed, ISO and f-stop.
    ±1 stop is a doubling or halving of the amount of light let in when taking a photo.
    1 EV is just another way to say one stop of exposure change.

     

    One major use of EV (Exposure Value) is just to measure any change of exposure, where one EV implies a change of one stop of exposure. Like when we compensate our picture in the camera.

     

    If the picture comes out too dark, our manual exposure could correct the next one by directly adjusting one of the three exposure controls (f-stop, shutter speed, or ISO). Or, if using camera automation, the camera meter is controlling it, but we might apply +1 EV exposure compensation (or +1 EV flash compensation) to make the result brighter, as desired. This use of 1 EV is just another way to say one stop of exposure change.

     

    On a perfect day, the difference between sampling the sky and the sun exposure with diffusing spot meters is about 3.2 EV (a contrast ratio of roughly 2^3.2 ≈ 9×):

     ~15.4 EV for the sun
     ~12.2 EV for the sky
    

    That is a ballpark; it is still influenced by the surroundings, accuracy parameters, the field of view of the sensor…

     

     

     

    EV calculator

    https://www.scantips.com/lights/evchart.html#calc

    http://www.fredparker.com/ultexp1.htm

     

    Exposure value is basically used to indicate an interval on the photographic exposure scale, with a difference of 1 EV corresponding to a standard power-of-2 exposure step, also commonly referred to as a “stop”.

     

    https://contrastly.com/a-guide-to-understanding-exposure-value-ev/

     

    Retrieving photographic exposure from an image

    All you can hope to measure with your camera and some images is relative reflected luminance, even if you have the camera settings. https://en.wikipedia.org/wiki/Relative_luminance

     

    If you REALLY want to know the amount of light in absolute radiometric units, you’re going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see: Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf

     

    If you are still trying to gauge relative brightness: the level of the sun in Nuke can vary, but it should be in the tens of thousands, i.e. between 30,000 and 65,000 in RGB value, depending on the time of day, season and atmospherics.

     

    The value for a 12 o’clock sun, with the sun sampled at EV 15.5 (shutter 1/30, ISO 100, f/22), is about 32,000 RGB max (or 32,000 pixel luminance).
    The thing to keep an eye on is the level of contrast between the sunny side and the fill side. The terminator should be quite obvious; there can be up to 3 stops difference between fill and key on sunlit objects.

     

    Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f-stop is set to in the viewer settings (f/8 by default; change that to EV by following the instructions below).
    You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down gives you the respective next value on the aperture scale (f/8, f/11, f/16, etc.).
    One stop doubles or halves the amount of light that hits the film back/CCD, so everything works in powers of 2.
    So, starting with 0.18 in your Constant, raising it by a stop will give you 0.36 as a floating point number (in linear space), while your f-stop readout will be f/11, and so on.

    If you set your center stop to 0 (see below), you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.
    Note: make sure to set your Nuke Read node to ‘raw data’.

     

    In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

     

    To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and press S; this opens the viewer’s properties. Set the center f-stop to 0 in there, and the SpotMeter in the viewport will change from aperture and f-stops to EV.

     

    If you are trying to gauge the EV from the pixel luminance in the image:
    – Setting the center f-stop to 0 means that in a neutral plate, the middle 18% gray will equal exposure value 0.
    – So if EV 0 = 0.18 middle gray in Nuke, which equals a pixel luminance of 0.18, then doubling that value doubles the EV.

    .18 pixel luminance = 0 EV
    .36 pixel luminance = 1 EV
    .72 pixel luminance = 2 EV
    1.44 pixel luminance = 3 EV
    ...
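
    This doubling relationship is easy to express in code; a minimal Python sketch (assuming the viewer’s center f-stop is set to 0, so 0.18 pixel luminance maps to EV 0):

    import math

    MID_GRAY = 0.18  # 18% gray = EV 0 with the center f-stop at 0

    def ev_from_luminance(lum):
        return math.log2(lum / MID_GRAY)

    def luminance_from_ev(ev):
        return MID_GRAY * 2.0 ** ev

    for ev in range(4):
        print(ev, round(luminance_from_ev(ev), 2))  # 0.18, 0.36, 0.72, 1.44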
    

     

    This is a geometric progression function: x_n = a·r^(n-1)

    The most basic example of this function is 1,2,4,8,16,32,… The sequence starts at 1 and doubles each time, so

    • a=1 (the first term)
    • r=2 (the “common ratio” between terms is a doubling)

    And we get:

    {a, a·r, a·r^2, a·r^3, … }

    = {1, 1×2, 1×2^2, 1×2^3, … }

    = {1, 2, 4, 8, … }

    In this example the function translates to: x_n = 2^(n-1)
    You can graph this curve through the expression x = 2^(y-1):

    You can go back and forth between the two values through a geometric progression function and a log function:

    (Note: in a spreadsheet this is =POWER(2; cell# - 1) and =LOG(cell#; 2) + 1)

    x = 2^(y-1)   y = log2(x)+1

    x     y
    1     1
    2     2
    4     3
    8     4
    16    5
    32    6
    64    7
    128   8
    256   9
    512   10
    1024  11
    2048  12
    4096  13
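
    The same round trip in a few lines of Python, mirroring the spreadsheet formulas above:

    import math

    # x = 2^(y-1) and y = log2(x) + 1 are inverses of each other
    for y in range(1, 14):
        x = 2 ** (y - 1)
        assert math.log2(x) + 1 == y
        print(x, y)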

     

    Translating this into a geometric progression between an image pixel luminance and EV:

    (more…)

    Read more: Photography basics: Exposure Value vs Photographic Exposure vs Il/Luminance vs Pixel luminance measurements
