COLOR

  • Björn Ottosson – How software gets color wrong

    https://bottosson.github.io/posts/colorwrong/

     

    Most software around us today is decent at accurately displaying colors. Unfortunately, processing colors is another story, and it is often done badly.

     

    To understand what the problem is, let’s start with an example of three ways of blending green and magenta:

    • Perceptual blend – A smooth transition using a model designed to mimic human perception of color. The blending is done so that the perceived brightness and color vary smoothly and evenly.
    • Linear blend – A model for blending color based on how light behaves physically. This type of blending can occur in many ways naturally, for example when colors are blended together by focus blur in a camera or when viewing a pattern of two colors at a distance.
    • sRGB blend – This is how colors would normally be blended in computer software, using sRGB to represent the colors. 

     

    Let’s look at some more examples of color blending, to see how these problems surface in practice. The examples use strong colors, since the differences are more pronounced with them, and use the same three blending methods as the first example.

     

    Instead of making it as easy as possible to work with color, most software makes it unnecessarily hard by doing image processing with representations not designed for it. Approximating the physical behavior of light with linear RGB models is one easy improvement, but more work is needed to create image representations tailored for image processing and human perception.

     

    Also see:

    https://www.pixelsham.com/2022/04/05/bjorn-ottosson-okhsv-and-okhsl-two-new-color-spaces-for-color-picking/

    Read more: Björn Ottosson – How software gets color wrong
  • No one could see the colour blue until modern times

    https://www.businessinsider.com/what-is-blue-and-how-do-we-see-color-2015-2

     

    The way that humans see the world… until we have a way to describe something, even something as fundamental as a colour, we may not even notice that it’s there.

     

    Ancient languages didn’t have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew, not Icelandic. And without a word for the colour, there’s evidence that they may not have seen it at all.

    https://www.wnycstudios.org/story/211119-colors

     

    Every language first had a word for black and for white, or dark and light. The next word for a colour to come into existence — in every language studied around the world — was red, the colour of blood and wine.

    After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colours to appear in every language is blue.

     

    The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.

    https://mymodernmet.com/shades-of-blue-color-history/

     

    Considered to be the first ever synthetically produced color pigment, Egyptian blue (also known as cuprorivaite) was created around 2,200 B.C. It was made from ground limestone mixed with sand and a copper-containing mineral, such as azurite or malachite, which was then heated to between 1470 and 1650°F (roughly 800 to 900°C). The result was an opaque blue glass which then had to be crushed and combined with thickening agents such as egg whites to create a long-lasting paint or glaze.

     

     

    If you think about it, blue doesn’t appear much in nature — animals have almost no blue pigments (with the exception of one butterfly, the Obrina Olivewing, animals generate blue through light scattering), blue eyes are rare (also blue through light scattering), and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue?

     

     

    So before we had a word for it, did people not naturally see blue? Do you really see something if you don’t have a word for it?

     

    A researcher named Jules Davidoff traveled to Namibia to investigate this, where he conducted an experiment with the Himba tribe, whose language has no word for blue and no distinction between blue and green. When shown a circle with 11 green squares and one blue one, they couldn’t pick out which square was different from the others.

     

    When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?

     

    Davidoff says that without a word for a colour, without a way of identifying it as different, it’s much harder for us to notice what’s unique about it — even though our eyes are physically seeing the blocks in the same way.

     

    Further research led to wider discussions about color perception in humans. Everything that we make is based on the fact that humans are trichromatic: televisions mix only three primary colors, and color printing is likewise built around a small set of primary inks. But some people, in particular some women, seemed to be more sensitive to color differences, mainly because they are simply more aware of them, or because of the work they do.

    Eventually this led to the discovery of a small percentage of the population, referred to as tetrachromats, who have developed an extra cone sensitivity (toward yellow), likely due to a genetic variation.

    The interesting detail is that even among tetrachromats, only those who had a reason to develop, label and work with the extra color sensitivity actually learned to use their native ability.

     

    So before blue became a common concept, maybe humans saw it. But it seems they didn’t know they were seeing it.

    If you see something yet don’t know you are seeing it, does it exist? Did colours come into existence over time? Not technically, but our ability to notice them may have.

     

    Read more: No one could see the colour blue until modern times

LIGHTING

  • Outpost VFX lighting tips

    www.outpost-vfx.com/en/news/18-pro-tips-and-tricks-for-lighting

     

    Get as much information regarding your plate lighting as possible

    • Always use a reference
    • Replicate what is happening in real life
    • Invest in a solid HDRI
    • Start simple
    • Observe real-world lighting, photography and cinematography
    • Don’t neglect the theory
    • Learn the difference between realism and photo-realism
    • Keep your scenes organised

     

    Read more: Outpost VFX lighting tips
  • Convert between light exposure and intensity

    import math
    
    def Exposure2Intensity(exposure):
        # Each stop of exposure doubles the light: intensity = 2^exposure
        return math.pow(2.0, float(exposure))
    
    def Intensity2Exposure(intensity):
        # Inverse relationship: exposure = log2(intensity)
        intensity = float(intensity)
        if intensity <= 0.0:
            raise ValueError("Exposure is undefined for zero or negative intensity.")
        return math.log(intensity, 2)
    
    print(Exposure2Intensity(0))    # 1.0 (zero stops leaves the light unchanged)
    print(Intensity2Exposure(0.1))  # about -3.32 (roughly 3 1/3 stops down)
    
    
    
    
    Read more: Convert between light exposure and intensity
  • What is physically correct lighting all about?

    http://gamedev.stackexchange.com/questions/60638/what-is-physically-correct-lighting-all-about

     

    2012-08 Nathan Reed wrote:

    Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to “look good” subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.

     

    For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.

     

    In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover you would want to set the specular brightness to a realistic value for the material you’re trying to simulate, based on measurements.
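A toy sketch of that idea (my own simplification, with hypothetical names; not code from the answer): a single reflectance parameter feeds the specular lobe, and the diffuse term is scaled by whatever energy is left over, so the surface can never reflect more than it receives.

```python
# Toy energy-conserving shading sketch. One 'reflectance' knob would drive both
# the highlight and the environment reflection; diffuse gets the leftover energy.

def shade(n_dot_l, n_dot_h, reflectance, albedo, shininess):
    ks = reflectance                  # specular weight (highlight + env reflection)
    kd = 1.0 - ks                     # leftover energy goes to the diffuse term
    n_dot_l = max(n_dot_l, 0.0)
    diffuse = kd * albedo * n_dot_l
    specular = ks * max(n_dot_h, 0.0) ** shininess * n_dot_l
    return diffuse + specular

# With full incident light (n_dot_l = n_dot_h = 1) the surface returns at most
# ks + kd * albedo <= 1 of what it received:
print(shade(1.0, 1.0, 0.3, 0.8, 32))  # ≈ 0.86
```

A real physically-based model would use a microfacet BRDF with a proper normalization term rather than this ad-hoc split, but the accounting principle is the same: one parameter, and a hard ceiling on reflected energy.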

     

    Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).

     

    It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.

     

    On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Also, materials created in one lighting environment generally look fine in other lighting environments too. This is unlike more ad-hoc models, where a set of material parameters might look good during daytime, but it comes out ridiculously glowy at night, or something like that.

     

    Here are some resources to look at for physically-based lighting in games:

     

    SIGGRAPH 2013 Physically Based Shading Course, particularly the background talk by Naty Hoffman at the beginning. You can also check out the previous incarnations of this course for more resources.

     

    Sébastien Lagarde, Adopting a physically-based shading model and Feeding a physically-based shading model

     

    And of course, I would be remiss if I didn’t mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.

    Read more: What is physically correct lighting all about?
  • Composition – cinematography Cheat Sheet

    https://moodle.gllm.ac.uk/pluginfile.php/190622/mod_resource/content/1/Cinematography%20Cheat%20Sheet.pdf

    Where is our eye attracted first? Why?

    Size. Focus. Lighting. Color.

    Size. Mr. White (Harvey Keitel) on the right.
    Focus. He’s one of the two objects in focus.
    Lighting. Mr. White is large and in focus, and Mr. Pink (Steve Buscemi) is highlighted by a shaft of light.
    Color. Both are black and white, but the red on Mr. White’s shirt now really stands out.


    What type of lighting?

    -> High key lighting.
    Features bright, even illumination and few conspicuous shadows. This lighting key is often used in musicals and comedies.

    Low key lighting
    Features diffused shadows and atmospheric pools of light. This lighting key is often used in mysteries and thrillers.

    High contrast lighting
    Features harsh shafts of lights and dramatic streaks of blackness. This type of lighting is often used in tragedies and melodramas.

     

    What type of shot?

    Extreme long shot
    Taken from a great distance, showing much of the locale. If people are included in these shots, they usually appear as mere specks.

    -> Long shot
    Corresponds to the space between the audience and the stage in a live theater. The long shots show the characters and some of the locale.

    Full shot
    Range with just enough space to contain the human body in full. The full shot shows the character and a minimal amount of the locale.

    Medium shot
    Shows the human figure from the knees or waist up.

    Close-Up
    Concentrates on a relatively small object and shows very little, if any, locale.

    Extreme close-up
    Focuses on an unnaturally small portion of an object, giving that part great detail and symbolic significance.

     

    What angle?

    Bird’s-eye view.
    The shot is photographed directly from above. This type of shot can be disorienting, and the people photographed seem insignificant.

    High angle.
    This angle reduces the size of the objects photographed. A person photographed from this angle seems harmless and insignificant, but to a lesser extent than with the bird’s-eye view.

    -> Eye-level shot.
    The clearest view of an object, but seldom intrinsically dramatic, because it tends to be the norm.

    Low angle.
    This angle increases height and a sense of verticality, heightening the importance of the object photographed. A person shot from this angle is given a sense of power and respect.

    Oblique angle.
    For this angle, the camera is tilted laterally, giving the image a slanted appearance. Oblique angles suggest tension, transition, or impending movement. They are also called canted or Dutch angles.

     

    What is the dominant color?

    The use of color in this shot is symbolic. The scene is set in a warehouse. Both the set and the characters are blues, blacks and whites.

    This was intentional, allowing the scenes and shots with blood to have a greater level of contrast.

     

    What is the Lens/Filter/Stock?

    Telephoto lens.
    A lens that draws objects closer but also diminishes the illusion of depth.

    Wide-angle lens.
    A lens that takes in a broad area and increases the illusion of depth but sometimes distorts the edges of the image.

    Fast film stock.
    Highly sensitive to light, it can register an image with little illumination. However, the final product tends to be grainy.

    Slow film stock.
    Relatively insensitive to light, it requires a great deal of illumination. The final product tends to look polished.

    The lens is not wide-angle because there isn’t a great sense of depth, nor are several planes in focus. The lens is probably long but not necessarily a telephoto lens because the depth isn’t inordinately compressed.

    The stock is fast because of the grainy quality of the image.

     

    Subsidiary Contrast; where does the eye go next?

    The two guns.

     

    How much visual information is packed into the image? Is the texture stark, moderate, or highly detailed?

    Minimal clutter in the warehouse keeps the focus on a character-driven thriller.

     

    What is the Composition?

    Horizontal.
    Compositions based on horizontal lines seem visually at rest and suggest placidity or peacefulness.

    Vertical.
    Compositions based on vertical lines seem visually at rest and suggest strength.

    -> Diagonal.
    Compositions based on diagonal, or oblique, lines seem dynamic and suggest tension or anxiety.

    -> Binary. Binary structures emphasize parallelism.

    Triangle.
    Triadic compositions stress the dynamic interplay among three main elements.

    Circle.
    Circular compositions suggest security and enclosure.

     

    Is the form open or closed? Does the image suggest a window that arbitrarily isolates a fragment of the scene? Or a proscenium arch, in which the visual elements are carefully arranged and held in balance?

    The most nebulous of all the categories of mise en scene, the type of form is determined by how consciously structured the mise en scene is. Open forms stress apparently simple techniques, because with these unself-conscious methods the filmmaker is able to emphasize the immediate, the familiar, the intimate aspects of reality. In open-form images, the frame tends to be deemphasized. In closed form images, all the necessary information is carefully structured within the confines of the frame. Space seems enclosed and self-contained rather than continuous.

    One could argue this is a proscenium arch, because this is such a classic shot with parallels and juxtapositions.

     

    Is the framing tight or loose? Do the characters have no room to move around, or can they move freely without impediments?

    Shots where the characters are placed at the edges of the frame and have little room to move around within the frame are considered tight.

    Longer shots, in which characters have room to move around within the frame, are considered loose and tend to suggest freedom.

    Center-framed giving us the entire scene showing isolation, place and struggle.

     

    Depth of Field. On how many planes is the image composed (how many are in focus)? Does the background or foreground comment in any way on the mid-ground?

    Standard DOF, one background and clearly defined foreground.

     

    Which way do the characters look vis-a-vis the camera?

    An actor can be photographed in any of five basic positions, each conveying different psychological overtones.

    Full-front (facing the camera):
    the position with the most intimacy. The character is looking in our direction, inviting our complicity.

    Quarter Turn:
    the favored position of most filmmakers. This position offers a high degree of intimacy but with less emotional involvement than the full-front.

    -> Profile (looking off frame left or right):
    More remote than the quarter turn, the character in profile seems unaware of being observed, lost in his or her own thoughts.

    Three-quarter Turn:
    More anonymous than the profile, this position is useful for conveying a character’s unfriendly or antisocial feelings, for in effect, the character is partially turning his or her back on us, rejecting our interest.

    Back to Camera:
    The most anonymous of all positions, this position is often used to suggest a character’s alienation from the world. When a character has his or her back to the camera, we can only guess what’s taking place internally, conveying a sense of concealment, or mystery.

    How much space is there between the characters?

    Extremely close, for a gunfight.

     

    The way people use space can be divided into four proxemic patterns.

    Intimate distances.
    The intimate distance ranges from skin contact to about eighteen inches away. This is the distance of physical involvement–of love, comfort, and tenderness between individuals.

    -> Personal distances.
    The personal distance ranges roughly from eighteen inches away to about four feet away. These distances tend to be reserved for friends and acquaintances. Personal distances preserve the privacy between individuals, yet these ranges don’t necessarily suggest exclusion, as intimate distances often do.

    Social distances.
    The social distance ranges from four feet to about twelve feet. These distances are usually reserved for impersonal business and casual social gatherings. It’s a friendly range in most cases, yet somewhat more formal than the personal distance.

    Public distances.
    The public distance extends from twelve feet to twenty-five feet or more. This range tends to be formal and rather detached.

    Read more: Composition – cinematography Cheat Sheet
