COLOR

LIGHTING

  • HDRI shooting and editing by Xuan Prada and Greg Zaal

    www.xuanprada.com/blog/2014/11/3/hdri-shooting

     

    http://blog.gregzaal.com/2016/03/16/make-your-own-hdri/

     

    http://blog.hdrihaven.com/how-to-create-high-quality-hdri/

     

    Shooting checklist

    • Full coverage of the scene (fish-eye shots)
    • Backplates for look-development (including ground or floor)
    • Macbeth chart for white balance
    • Grey ball for lighting calibration
    • Chrome ball for lighting orientation
    • Basic scene measurements
    • Material samples
    • Individual HDR artificial lighting sources if required

    Methodology

    • Plant the tripod where the action happens, stabilise it and level it
    • Set manual focus
    • Set white balance
    • Set ISO
    • Set raw+jpg
    • Set aperture
    • Meter the exposure
    • Set neutral exposure
    • Read histogram and adjust neutral exposure if necessary
    • Shoot the slate (operator name, location, date, time, project code name, etc.)
    • Set auto bracketing
    • Shoot 5 to 7 exposures, 3 stops apart, covering the whole environment (see the bracket sketch after this list)
    • Place the grey/chrome ball kit where the tripod was placed and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
    • Place the Macbeth chart 1m away from tripod on the floor and take 3 exposures
    • Take backplates and ground/floor texture references
    • Shoot reference materials
    • Write down measurements of the scene, especially if you are shooting interiors.
    • If shooting artificial lights, take HDR samples of each individual light source.
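
    As a rough illustration of the auto-bracketing step above, here is a minimal Python sketch that lists the shutter speeds a bracket would use around a neutral exposure. The neutral shutter speed, shot count and stop spacing are assumed example values, not part of the checklist:

```python
# Minimal sketch: shutter speeds for an auto-bracket around a neutral exposure.
# The 1/60 s neutral shutter, 7 shots and 3-stop spacing are assumed example
# values; plug in your own metered exposure.

def bracket_shutter_speeds(neutral_shutter=1.0 / 60.0, shots=7, stops_per_step=3):
    """Return shutter speeds (in seconds) centred on the neutral exposure."""
    half = shots // 2
    # Each stop doubles (or halves) the exposure time.
    return [neutral_shutter * 2.0 ** (i * stops_per_step) for i in range(-half, half + 1)]

if __name__ == "__main__":
    for t in bracket_shutter_speeds():
        print(f"{t:.6f} s")
```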

    Exposures starting point

    • Daylight, sun visible: ISO 100, f/22
    • Daylight, sun hidden: ISO 100, f/16
    • Cloudy: ISO 320, f/16
    • Sunrise/sunset: ISO 100, f/11
    • Interior, well lit: ISO 320, f/16
    • Interior, ambient bright: ISO 320, f/10
    • Interior, poorly lit: ISO 640, f/10
    • Interior, ambient dark: ISO 640, f/8
    • Low-light situation: ISO 640, f/5
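
    These starting points only fix ISO and aperture; the standard exposure-value relation EV100 = log2(N^2 / t) - log2(ISO / 100) makes them comparable once a shutter speed is chosen. A quick Python illustration follows, assuming a 1/125 s shutter for every entry (that shutter speed is an assumption, not part of the table above):

```python
import math

# Relative exposure values for a few of the starting points above.
# EV100 = log2(N^2 / t) - log2(ISO / 100); the 1/125 s shutter is an assumed
# constant used only to make the entries comparable.

def ev100(aperture, shutter, iso):
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100.0)

starting_points = {
    "Daylight, sun visible": (22, 100),
    "Daylight, sun hidden": (16, 100),
    "Cloudy": (16, 320),
    "Sunrise/sunset": (11, 100),
    "Interior, well lit": (16, 320),
}

for name, (f_number, iso) in starting_points.items():
    print(f"{name:<24} EV100 = {ev100(f_number, 1 / 125, iso):5.1f}")
```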

     

    NOTE: The goal is to clean up the individual brackets as much as possible before, or at, merge time.
    This means:

    • keeping the original shooting metadata
    • de-fringing
    • removing chromatic aberration (through camera/lens data or automatically)
    • working at 32-bit float
    • working in ACEScg (or ACES) wherever possible
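
    As a minimal sketch of that merge step, assuming the brackets are available as LDR files and that OpenCV is installed (file names and exposure times below are placeholders), the Debevec merge produces a 32-bit float result; conversion to ACEScg would normally happen separately, for example through OpenColorIO:

```python
import cv2
import numpy as np

# Minimal sketch: merge bracketed exposures into a 32-bit float HDR image with
# OpenCV's Debevec merge.  File names and exposure times are placeholder
# assumptions; conversion to ACEScg (e.g. via OpenColorIO) is not shown.

files = ["bracket_-3ev.jpg", "bracket_0ev.jpg", "bracket_+3ev.jpg"]  # placeholders
times = np.array([1 / 500.0, 1 / 60.0, 1 / 8.0], dtype=np.float32)   # seconds

images = [cv2.imread(f) for f in files]            # 8-bit LDR brackets
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)        # recover the camera response curve
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)       # 32-bit float, linear radiance

cv2.imwrite("merged_hdri.hdr", hdr)                # Radiance .hdr keeps float data
```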

     

    Tips for using the chromatic ball in VFX projects, shared on LinkedIn:
    https://www.linkedin.com/posts/bellrodrigo_here-are-the-tips-for-using-the-chromatic-activity-7200950595438940160-AGBp

     

    Tips for Using the Chromatic Ball in VFX Projects

    The chromatic ball is an invaluable tool in VFX work, helping to capture lighting and reflection data crucial for integrating CGI elements seamlessly. Here are some tips to maximize its effectiveness:

     

    1. **Positioning**:
    – Place the chromatic ball in the same lighting conditions as the main subject. Ensure it is visible in the camera frame but not obstructing the main action.
    – Ideally, place the ball where the CGI elements will be integrated to match the lighting and reflections accurately.

     

    2. **Recording Reference Footage**:
    – Capture reference footage of the chromatic ball at the beginning and end of each scene or lighting setup. This ensures you have consistent lighting data for the entire shoot.

     

    3. **Consistent Angles**:
    – Use consistent camera angles and heights when recording the chromatic ball. This helps in comparing and matching lighting setups across different shots.

     

    4. **Combine with a Gray Ball**:
    – Use a gray ball alongside the chromatic ball. The gray ball provides a neutral reference for exposure and color balance, complementing the chromatic ball’s reflection data (see the grey-ball sampling sketch after this list).

     

    5. **Marking Positions**:
    – Mark the position of the chromatic ball on the set to ensure consistency when shooting multiple takes or different camera angles.

     

    6. **Lighting Analysis**:
    – Analyze the chromatic ball footage to understand the light sources, intensity, direction, and color temperature. This information is crucial for creating realistic CGI lighting and shadows.

     

    7. **Reflection Analysis**:
    – Use the chromatic ball to capture the environment’s reflections. This helps in accurately reflecting the CGI elements within the same scene, making them blend seamlessly.

     

    8. **Use HDRI**:
    – Capture High Dynamic Range Imagery (HDRI) of the chromatic ball. HDRI provides detailed lighting information and can be used to light CGI scenes with greater realism.

     

    9. **Communication with VFX Team**:
    – Ensure that the VFX team is aware of the chromatic ball’s data and how it was captured. Clear communication ensures that the data is used effectively in post-production.

     

    10. **Post-Production Adjustments**:
    – In post-production, use the chromatic ball data to adjust the CGI elements’ lighting and reflections. This ensures that the final output is visually cohesive and realistic.
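
    As a small illustration of tip 4, the sketch below samples the grey-ball region of a plate and derives rough per-channel white-balance gains from it. The plate path and the ball's pixel coordinates are placeholder assumptions:

```python
import cv2
import numpy as np

# Illustration of tip 4: estimate white-balance gains from the grey ball.
# The plate path and the ball's centre/radius in pixels are placeholder
# assumptions for this sketch.

plate = cv2.imread("plate_grey_ball.png").astype(np.float64)  # BGR plate
cx, cy, r = 812, 540, 40                                      # assumed ball position
ball = plate[cy - r:cy + r, cx - r:cx + r]

mean_bgr = ball.reshape(-1, 3).mean(axis=0)   # average colour of the grey ball
gains = mean_bgr.mean() / mean_bgr            # per-channel gains that neutralise it
print("grey-ball mean (BGR):", mean_bgr)
print("white-balance gains (BGR):", gains)
```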

  • What’s the Difference Between Ray Casting, Ray Tracing, Path Tracing and Rasterization? Physical light tracing…

    RASTERIZATION
    Rasterisation (or rasterization)
    is the task of taking information described in a vector graphics format, or the vertices of the triangles that make up 3D shapes, and converting it into a raster image (a grid of pixels or dots that, displayed together, recreate the image the shapes represented). In other words, it “rasterizes” vectors or 3D models onto a 2D plane for display on a computer screen.

    For each triangle of a 3D shape, you project the corners of the triangle onto the virtual screen with some math (projective geometry). You then have the positions of the 3 corners of the triangle on the pixel screen. Those 3 points have texture coordinates, so you know where the 3 corners sit in the texture. The cost is proportional to the number of triangles, and is only slightly affected by the screen resolution.
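
    A minimal sketch of that projection step, using a pinhole camera at the origin looking down -Z; the focal length, image size and triangle coordinates are assumed example values:

```python
import numpy as np

# Minimal sketch: project the three corners of a triangle onto a virtual screen
# with a simple pinhole model (camera at the origin, looking down -Z).
# The focal length, image size and triangle are assumed example values.

focal = 800.0              # focal length in pixels
width, height = 1920, 1080

triangle = np.array([      # triangle corners in camera space (x, y, z), z < 0
    [-1.0,  0.5, -5.0],
    [ 1.0,  0.8, -6.0],
    [ 0.2, -0.7, -4.5],
])

def project(p):
    """Perspective-project a camera-space point to pixel coordinates."""
    x, y, z = p
    u = focal * (x / -z) + width / 2.0    # divide by depth, shift to screen centre
    v = height / 2.0 - focal * (y / -z)   # flip y so +y in space is up on screen
    return u, v

for corner in triangle:
    print(project(corner))
```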

    In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium.

    With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

    Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

    Further pixel processing, or “shading” (changing the pixel color based on how lights in the scene hit it, and applying one or more textures), generates the final color applied to the pixel.
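
    A compact illustration of that per-pixel shading idea: a single Lambert (N·L) light term modulating a sampled texture colour. All values are assumed examples; real shading stacks are far more involved:

```python
import numpy as np

# Compact illustration of per-pixel shading: a Lambert (N.L) light term
# modulating a sampled texture colour.  All values are assumed examples.

normal = np.array([0.0, 1.0, 0.0])                  # interpolated surface normal
light_dir = np.array([0.3, 0.9, 0.3])
light_dir = light_dir / np.linalg.norm(light_dir)   # direction towards the light
light_color = np.array([1.0, 0.95, 0.9])
texture_sample = np.array([0.8, 0.4, 0.2])          # colour fetched from the texture

diffuse = max(np.dot(normal, light_dir), 0.0)       # how much the surface faces the light
pixel_color = texture_sample * light_color * diffuse
print(pixel_color)
```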

     

    The main advantage of rasterization is its speed. However, rasterization is simply the process of computing the mapping from scene geometry to pixels; it does not prescribe a particular way to compute the color of those pixels. On its own it does not account for shading based on physical light transport, so it cannot promise a photorealistic output. That is a big limitation of rasterization.

    There are also multiple problems:

    • If you have two triangles, one behind the other, you will compute all the pixels twice; you only keep the pixel from the triangle that is closer to you (the Z-buffer, sketched after this list), but you still do the work twice.

    • The borders of your triangles are jagged, because it is hard to know whether a pixel is inside or outside a triangle. You can smooth those edges: that is anti-aliasing.

    • You have to handle every triangle (including the ones behind you) before finding out that it does not touch the screen at all (there are techniques to mitigate this by only considering triangles inside the field of view).

    • Transparency is hard to handle: you cannot just average the colors of overlapping transparent triangles, you have to composite them in the right order.
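
    Here is the Z-buffer sketch referenced in the first point above: both overlapping fragments are computed, but per pixel only the one closest to the camera is kept. Sizes and colours are assumed example values:

```python
import numpy as np

# Minimal Z-buffer sketch: both overlapping fragments are computed, but only
# the one closest to the camera is kept for each pixel.

width, height = 4, 3
depth_buffer = np.full((height, width), np.inf)   # start infinitely far away
color_buffer = np.zeros((height, width, 3))

def write_fragment(x, y, depth, color):
    """Keep the fragment only if it is nearer than what is already stored."""
    if depth < depth_buffer[y, x]:
        depth_buffer[y, x] = depth
        color_buffer[y, x] = color

write_fragment(1, 1, depth=5.0, color=(1.0, 0.0, 0.0))  # far (red) triangle
write_fragment(1, 1, depth=2.0, color=(0.0, 1.0, 0.0))  # near (green) triangle wins
print(color_buffer[1, 1])   # -> [0. 1. 0.]
```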


     

     

     

    RAY CASTING
    It is almost the exact reverse of rasterization: you start from the virtual screen instead of the vector or 3D shapes, and you project a ray, starting from each pixel of the screen, until it intersects with a triangle.

    The cost is directly correlated to the number of pixels on the screen, and you need a really cheap way of finding the first triangle that intersects a ray. In the end, it is more expensive than rasterization, but it will, by design, ignore the triangles that are outside the field of view.

    You can use it to continue after the first triangle it hits, to take a little bit of the color of the next one, and so on. This is useful to handle triangle borders cleanly (less jagged) and to handle transparency correctly.
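
    A small ray-casting sketch along those lines, assuming a single sphere instead of triangles purely to keep the intersection test short; one primary ray is shot per pixel and the first hit decides the pixel value:

```python
import numpy as np

# Small ray-casting sketch: one primary ray per pixel, coloured by whether it
# hits a single sphere.  A sphere stands in for the triangles only to keep the
# intersection test short; the per-pixel structure is the same.

width, height, focal = 80, 60, 60.0
sphere_center = np.array([0.0, 0.0, -5.0])
sphere_radius = 1.0

def hit_sphere(origin, direction):
    """Return the nearest positive hit distance, or None."""
    oc = origin - sphere_center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

image = np.zeros((height, width))
for py in range(height):
    for px in range(width):
        # Build the ray from the eye through this pixel on the virtual screen.
        direction = np.array([px - width / 2.0, height / 2.0 - py, -focal])
        direction /= np.linalg.norm(direction)
        if hit_sphere(np.zeros(3), direction) is not None:
            image[py, px] = 1.0

print("pixels covered by the sphere:", int(image.sum()))
```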

     

    RAY TRACING


    Same idea as ray casting, except that once you hit a triangle you reflect off it and continue in a different direction. The number of reflections you allow is the “depth” of your ray tracing. The color of the pixel can then be calculated based on the light source and all the polygons the ray had to reflect off to reach that screen pixel.
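
    A sketch of that reflection depth in isolation: a recursive trace bounces a ray off perfectly mirror-like spheres until it either misses everything or exhausts the allowed depth. The scene, sky colour and depth limit are assumed example values:

```python
import numpy as np

# Sketch of ray tracing's recursion depth: a ray reflects off mirror-like
# spheres until it misses everything or the bounce budget runs out.  The
# scene, sky colour and depth limit are assumed example values.

SPHERES = [  # (centre, radius, tint)
    (np.array([0.0, 0.0, -5.0]), 1.0, np.array([0.9, 0.3, 0.3])),
    (np.array([2.0, 0.0, -6.0]), 1.0, np.array([0.3, 0.9, 0.3])),
]
SKY = np.array([0.6, 0.7, 0.9])

def nearest_hit(origin, direction):
    """Return (distance, centre, tint) of the closest sphere hit, or None."""
    best = None
    for center, radius, tint in SPHERES:
        oc = origin - center
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - np.sqrt(disc)) / 2.0
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, center, tint)
    return best

def trace(origin, direction, depth):
    hit = nearest_hit(origin, direction)
    if hit is None:
        return SKY                          # ray escaped the scene
    t, center, tint = hit
    if depth == 0:
        return tint                         # bounce budget used up: stop here
    point = origin + t * direction
    normal = (point - center) / np.linalg.norm(point - center)
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    return tint * trace(point, reflected, depth - 1)   # follow the reflection

print(trace(np.zeros(3), np.array([0.0, 0.0, -1.0]), depth=3))
```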

    The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

    Ray tracing is an eye-oriented process that walks through each pixel looking for the object that should be shown there. It can also be described as a technique that follows a beam of light (per pixel) from a set point and simulates how it reacts when it encounters objects.

    Compared with rasterization, ray tracing is hard to implement in real time: a single ray can be traced and processed without much trouble, but after one ray bounces off an object it can turn into 10 rays, those 10 can turn into 100, then 1,000… The increase is exponential, and the calculation for all these rays is time consuming.

    Historically, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline on render farms. Video games have only a fraction of a second. As a result, most real-time graphics rely on another technique: rasterization.

     

     

    PATH TRACING
    Path tracing can be used to solve more complex lighting situations.

    Path tracing is a type of ray tracing. When using path tracing for rendering, each ray only spawns a single new ray per bounce. The rays do not follow a defined line per bounce (towards a light, for example), but rather shoot off in a random direction. The path tracing algorithm then takes a random sampling of all of the rays to create the final image. This results in sampling a variety of different types of lighting.

    When a ray hits a surface it doesn’t trace a path to every light source, instead it bounces the ray off the surface and keeps bouncing it until it hits a light source or exhausts some bounce limit.
    It then calculates the amount of light transferred all the way to the pixel, including any color information gathered from surfaces along the way.
    It then averages out the values calculated from all the paths that were traced into the scene to get the final pixel color value.
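
    A stripped-down sketch of that loop: each pixel fires several sample paths, every bounce picks a random hemisphere direction, and all the samples are averaged. The scene here is reduced to a grey floor under a uniform sky light so the whole estimator fits in a few lines; every constant is an assumed example, and real path tracers add importance sampling, cosine weighting and much more:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stripped-down path tracing sketch: random hemisphere bounces, a hard bounce
# limit, and an average over all samples per pixel.  The scene (a grey floor
# under a uniform sky) and all constants are assumed example values.

SKY_RADIANCE = 1.0
FLOOR_ALBEDO = 0.5
MAX_BOUNCES = 4

def random_hemisphere(normal):
    """Pick a random direction in the hemisphere around the normal."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, normal) > 0.0 else -v

def trace_path(origin, direction):
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        if direction[1] >= 0.0:
            return throughput * SKY_RADIANCE      # ray escaped upward: hit the sky light
        # Intersect the floor plane y = 0, then bounce in a random direction.
        t = -origin[1] / direction[1]
        origin = origin + t * direction
        throughput *= FLOOR_ALBEDO
        direction = random_hemisphere(np.array([0.0, 1.0, 0.0]))
    return 0.0                                    # bounce limit exhausted: give up

samples = 256
pixel = sum(trace_path(np.array([0.0, 1.0, 0.0]),
                       np.array([0.0, -1.0, 0.0])) for _ in range(samples)) / samples
print("estimated pixel value:", pixel)
```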

    It requires a ton of computing power, and if you don’t send out enough rays per pixel or don’t trace the paths far enough into the scene, you end up with a very spotty image, as many pixels fail to find any light source from their rays. As you increase the samples per pixel, the image quality gets better and better.

    Ray tracing tends to be more efficient than path tracing. Basically, the render time of a ray tracer depends on the number of polygons in the scene: the more polygons you have, the longer it will take.
    Meanwhile, the render time of a path tracer can be indifferent to the number of polygons, but it is tied to the lighting situation: if you add a light, transparency, translucence, or other shader effects, the path tracer will slow down considerably.

     
     

     

    Sources:
    https://medium.com/@junyingw/future-of-gaming-rasterization-vs-ray-tracing-vs-path-tracing-32b334510f1f

     

     

    blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/

     

    https://en.wikipedia.org/wiki/Rasterisation

     

     

    https://www.quora.com/Whats-the-difference-between-ray-tracing-and-path-tracing

