COLOR

LIGHTING

  • Photography basics: Solid Angle measures

    http://www.calculator.org/property.aspx?name=solid+angle

    A measure of how large an object appears to an observer looking from a given point: in effect, a measure for objects in the sky. It is useful for describing the apparent size of the sun and moon and, by extension, how much they contribute to lighting. A solid angle can also be expressed as an ‘angular diameter’.

    http://en.wikipedia.org/wiki/Solid_angle

     

    http://www.mathsisfun.com/geometry/steradian.html

     

    A solid angle is expressed in a dimensionless unit called a steradian (symbol: sr). Expressed as fractions of the total celestial sphere, and before atmospheric scattering, the Sun and the Moon subtend fractional areas of 0.000546% (Sun) and 0.000531% (Moon).

     

    http://en.wikipedia.org/wiki/Solid_angle#Sun_and_Moon

     

    On Earth the sun is likely closer to 0.00011 sr after atmospheric scattering. The sun as perceived from Earth has an angular diameter of about 0.53 degrees, which corresponds to a solid angle of roughly 0.000067 sr.

    http://www.numericana.com/answer/angles.htm

     

    The mean angular diameter of the full moon is 2θ = 0.52° (it varies with time around that average, by about 0.009°). This translates into a solid angle of 0.0000647 sr, which means that the whole night sky covers a solid angle roughly one hundred thousand times greater than the full moon.
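
    As a quick sanity check on these numbers, here is a minimal Python sketch (the function name is illustrative) that converts an angular diameter into a solid angle via the spherical-cap formula Ω = 2π(1 − cos(θ/2)), and expresses it as a fraction of the 4π sr celestial sphere:

      import math

      def solid_angle_sr(angular_diameter_deg):
          # Spherical-cap formula: omega = 2*pi*(1 - cos(theta/2)),
          # with theta the full angular diameter of the disc.
          half_angle = math.radians(angular_diameter_deg) / 2.0
          return 2.0 * math.pi * (1.0 - math.cos(half_angle))

      FULL_SPHERE_SR = 4.0 * math.pi  # the total celestial sphere

      for name, diameter_deg in (("Sun", 0.53), ("Moon", 0.52)):
          omega = solid_angle_sr(diameter_deg)
          print(f"{name}: {omega:.7f} sr, {100.0 * omega / FULL_SPHERE_SR:.6f}% of the sphere")

      # Sun:  0.0000672 sr, 0.000535% of the sphere
      # Moon: 0.0000647 sr, 0.000515% of the sphere
      # The night sky (a 2*pi hemisphere) is ~2*pi / 0.0000647, i.e. roughly
      # one hundred thousand times the solid angle of the full moon.

    The small differences from the Wikipedia percentages quoted earlier come down to the exact mean diameters used.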

     

    More info

     

    http://lcogt.net/spacebook/using-angles-describe-positions-and-apparent-sizes-objects

    http://amazing-space.stsci.edu/glossary/def.php?s=topic_astronomy

     

    Angular Size

    The apparent size of an object as seen by an observer; expressed in units of degrees (of arc), arc minutes, or arc seconds. The moon, as viewed from the Earth, has an angular diameter of one-half a degree.

     

    The angle covered by the diameter of the full moon is about 31 arcmin or 1/2°, so astronomers would say the Moon’s angular diameter is 31 arcmin, or the Moon subtends an angle of 31 arcmin.

  • What’s the Difference Between Ray Casting, Ray Tracing, Path Tracing and Rasterization? Physical light tracing…

    RASTERIZATION
    Rasterisation (or rasterization)
    is the task of taking the information described in a vector graphics format, or the vertices of the triangles making up 3D shapes, and converting it into a raster image (a series of pixels, dots or lines which, when displayed together, recreate the image that was represented via shapes): in other words, “rasterizing” vectors or 3D models onto a 2D plane for display on a computer screen.

    For each triangle of a 3D shape, you project the corners of the triangle onto the virtual screen with some math (projective geometry). You then have the position of the 3 corners of the triangle on the pixel screen. Those 3 points have texture coordinates, so you know where in the texture the 3 corners sit. The cost is proportional to the number of triangles, and is only slightly affected by the screen resolution.
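
    As a rough sketch of that projection step, here is a minimal pinhole-camera model in Python (the 90° field of view, the convention of looking down −z, and the function name are assumptions for illustration, not something from the article):

      import math

      def project_vertex(v, screen_w, screen_h, fov_deg=90.0):
          # Pinhole projection of a camera-space point (z < 0 is in front
          # of the camera) onto pixel coordinates.
          x, y, z = v
          f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length from FOV
          ndc_x = (f * x) / -z        # perspective divide: farther points
          ndc_y = (f * y) / -z        # land closer to the screen center
          px = (ndc_x + 1.0) * 0.5 * screen_w   # map [-1, 1] to pixel x
          py = (1.0 - ndc_y) * 0.5 * screen_h   # flip y: origin is top-left
          return px, py

      # The 3 corners of one triangle, projected onto a 640x480 screen:
      triangle = [(-1.0, 0.0, -3.0), (1.0, 0.0, -3.0), (0.0, 1.0, -5.0)]
      print([project_vertex(v, 640, 480) for v in triangle])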

    In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium.

    With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

    Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

    Further pixel processing, or “shading” (changing the pixel color based on how lights in the scene hit it, and applying one or more textures), generates the final color applied to the pixel.

     

    The main advantage of rasterization is its speed. However, rasterization is simply the process of computing the mapping from scene geometry to pixels; it does not prescribe a particular way to compute the color of those pixels. On its own it cannot take shading, especially physically accurate light transport, into account, and it cannot promise a photorealistic output. That’s a big limitation of rasterization.

    There are also multiple problems:

    • If you have two triangles, one behind the other, you will draw all of their pixels twice. You only keep the pixel from the triangle that is closer to you (Z-buffer; see the sketch after this list), but you still do the work twice.

    • The borders of your triangles are jagged, as it is hard to know whether a pixel is inside the triangle or outside. You can do some smoothing on those: that is anti-aliasing.

    • You have to handle every triangle (including the ones behind you) just to find out that they do not touch the screen at all. (There are techniques to mitigate this, where we only look at triangles that are in the field of view.)

    • Transparency is hard to handle: you can’t just average the colors of overlapping transparent triangles, you have to blend them in the right order.


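    To make the Z-buffer point above concrete, here is a toy Python sketch of the depth test at a single pixel (names are illustrative; a real rasterizer keeps one depth value per pixel in a screen-sized buffer):

      # One pixel, several candidate fragments: each triangle covering the
      # pixel yields a (depth, color) pair; the Z-test keeps the nearest.
      def shade_pixel(fragments, background=(0, 0, 0)):
          best_depth, best_color = float("inf"), background
          for depth, color in fragments:   # every fragment is still computed...
              if depth < best_depth:       # ...but only the closest survives
                  best_depth, best_color = depth, color
          return best_color

      # Two overlapping triangles: the nearer one (depth 2.0) wins, yet the
      # farther one (depth 5.0) was shaded anyway: work done twice.
      print(shade_pixel([(5.0, (255, 0, 0)), (2.0, (0, 255, 0))]))  # (0, 255, 0)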
     

     

     

    RAY CASTING
    It is almost the exact reverse of rasterization: you start from the virtual screen instead of the vector or 3D shapes, and you project a ray, starting from each pixel of the screen, until it intersects with a triangle.

    The cost is directly correlated to the number of pixels on the screen, and you need a really cheap way of finding the first triangle that intersects a ray. In the end it is more expensive than rasterization, but it will, by design, ignore the triangles that are out of the field of view.

    You can continue past the first triangle the ray hits, to take a little bit of the color of the next one, etc. This is useful for handling the borders of triangles cleanly (less jagged) and for handling transparency correctly.
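
    A standard way to implement the “first triangle that intersects the ray” test is the Möller–Trumbore algorithm. A minimal, self-contained Python sketch (the vector helpers are inlined; the example ray and triangle are made up):

      def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
          # Moller-Trumbore ray/triangle test: returns the distance t along
          # the ray to the hit point, or None if the ray misses the triangle.
          sub   = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
          dot   = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
          cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                                a[2]*b[0] - a[0]*b[2],
                                a[0]*b[1] - a[1]*b[0])
          e1, e2 = sub(v1, v0), sub(v2, v0)
          h = cross(direction, e2)
          a = dot(e1, h)
          if abs(a) < eps:               # ray is parallel to the triangle plane
              return None
          f = 1.0 / a
          s = sub(origin, v0)
          u = f * dot(s, h)
          if u < 0.0 or u > 1.0:         # hit point outside the triangle
              return None
          q = cross(s, e1)
          v = f * dot(direction, q)
          if v < 0.0 or u + v > 1.0:     # hit point outside the triangle
              return None
          t = f * dot(e2, q)
          return t if t > eps else None  # only count hits in front of the origin

      # One ray, cast from the eye through a pixel, against one triangle:
      eye, ray_dir = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
      tri = ((-1.0, -1.0, -4.0), (1.0, -1.0, -4.0), (0.0, 1.0, -4.0))
      print(ray_triangle(eye, ray_dir, *tri))  # 4.0: nearest hit, 4 units away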

     

    RAY TRACING
    Same idea as ray casting, except once you hit a triangle you reflect off it and continue in a different direction. The number of reflections you allow is the “depth” of your ray tracing. The color of the pixel can then be calculated based on the light source and all the polygons the ray had to reflect off of to reach that screen pixel.

    The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

    Ray tracing is an eye-oriented process that walks through each pixel looking for which object should be shown there. It can also be described as a technique that follows a beam of light (per pixel) from a set point and simulates how it reacts when it encounters objects.

    Compared with rasterization, ray tracing is hard to implement in real time: a single ray can be traced and processed without much trouble, but after one ray bounces off an object it can turn into 10 rays, and those 10 can turn into 100, then 1,000… The growth is exponential, and the calculation for all of these rays is time consuming.
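
    A quick way to see that growth: if every hit spawns several secondary rays (reflection, refraction, shadow rays toward each light…), the total ray count grows geometrically with the depth. A small Python illustration, using the branching factor of 10 from the example above:

      # If every hit spawns `branch` secondary rays, the total ray count
      # after `depth` bounces is 1 + branch + branch^2 + ... + branch^depth.
      def rays_traced(branch, depth):
          return sum(branch ** d for d in range(depth + 1))

      for depth in range(6):
          print(depth, rays_traced(10, depth))
      # depth 0: 1 ray, depth 1: 11, depth 2: 111, ... depth 5: 111,111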

    Historically, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second per frame. As a result, most real-time graphics rely on another technique, rasterization.

     

     

    PATH TRACING
    Path tracing can be used to solve more complex lighting situations.

    Path tracing is a type of ray tracing. When using path tracing for rendering, each ray only produces a single new ray per bounce. The rays do not follow a defined line per bounce (to a light, for example), but rather shoot off in a random direction. The path tracing algorithm then takes a random sampling of all of the rays to create the final image. This results in sampling a variety of different types of lighting.

    When a ray hits a surface it doesn’t trace a path to every light source; instead, it bounces the ray off the surface and keeps bouncing it until it hits a light source or exhausts some bounce limit.
    It then calculates the amount of light transferred all the way to the pixel, including any color information gathered from surfaces along the way.
    It then averages out the values calculated from all the paths that were traced into the scene to get the final pixel color value.

    It requires a ton of computing power, and if you don’t send out enough rays per pixel, or don’t trace the paths far enough into the scene, you end up with a very spotty image, as many pixels fail to find any light sources from their rays. As you increase the samples per pixel, the image quality gets better and better.
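
    A deliberately abstract Monte Carlo sketch of that loop, in Python. This is a toy model, not a renderer: real geometry and materials are replaced by a fixed chance of reaching a light at each bounce, and hit_light_prob and surface_albedo are made-up parameters:

      import random

      MAX_BOUNCES = 8

      def sample_path(hit_light_prob=0.3, surface_albedo=0.7):
          # One random path for one pixel: bounce until a light is found
          # or the bounce limit is exhausted, attenuating along the way.
          throughput = 1.0
          for _ in range(MAX_BOUNCES):
              if random.random() < hit_light_prob:
                  return throughput            # path found a light: contribute
              throughput *= surface_albedo     # keep bouncing, a bit dimmer
          return 0.0                           # bounce limit hit: no light found

      def render_pixel(samples):
          # The pixel value is the average over all sampled paths.
          return sum(sample_path() for _ in range(samples)) / samples

      for spp in (4, 64, 1024):
          print(spp, round(render_pixel(spp), 3))  # noise shrinks as spp grows

    Running it shows the estimates settling toward a stable value as the sample count grows, which is exactly why low-sample path tracing looks spotty.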

    Ray tracing tends to be more efficient than path tracing. Basically, the render time of a ray tracer depends on the number of polygons in the scene: the more polygons you have, the longer it will take.
    Meanwhile, the render time of a path tracer can be indifferent to the number of polygons, but it is tied to the lighting situation: if you add a light, transparency, translucence, or other shader effects, the path tracer will slow down considerably.

    Sources:
    https://medium.com/@junyingw/future-of-gaming-rasterization-vs-ray-tracing-vs-path-tracing-32b334510f1f

    https://blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/

    https://en.wikipedia.org/wiki/Rasterisation

    https://www.quora.com/Whats-the-difference-between-ray-tracing-and-path-tracing

  • Eye retina’s Bipolar Cells, Horizontal Cells, and Photoreceptors

    In the retina, photoreceptors, bipolar cells, and horizontal cells work together to process visual information before it reaches the brain. Here’s how each cell type contributes to vision:

     

    1. Photoreceptors

    • Types: There are two main types of photoreceptors: rods and cones.
      • Rods: Specialized for low-light and peripheral vision; they help us see in dim lighting and detect motion.
      • Cones: Specialized for color and detail; they function best in bright light and are concentrated in the central retina (the fovea), allowing for high-resolution vision.
    • Function: Photoreceptors convert light into electrical signals. When light hits the retina, photoreceptors undergo a chemical change, triggering an electrical response that initiates the visual process. Rods and cones detect different intensities and colors, providing the foundation for brightness and color perception.

     

    2. Bipolar Cells

    • Function: Bipolar cells act as intermediaries, connecting photoreceptors to ganglion cells, which send signals to the brain. They receive input from photoreceptors and relay it to the retinal ganglion cells.
    • On and Off Bipolar Cells: Some bipolar cells are ON cells, responding when light is detected (depolarizing in light), and others are OFF cells, responding in darkness (depolarizing in the absence of light). This division allows for more precise contrast detection and the ability to distinguish light from dark areas in the visual field.

     

    3. Horizontal Cells

    • Function: Horizontal cells connect photoreceptors to each other and create lateral interactions between them. They integrate signals from multiple photoreceptors, allowing them to adjust the sensitivity of neighboring photoreceptors in response to varying light conditions.
    • Lateral Inhibition: This process improves visual contrast and sharpness by making the borders between light and dark areas more distinct, enhancing our ability to perceive edges and fine detail.
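
    A toy 1-D sketch of lateral inhibition in Python (the kernel weights and the signal are made up for illustration; real receptive fields are 2-D and graded):

      # A dark region next to a bright region, as raw photoreceptor outputs:
      signal = [1.0] * 8 + [4.0] * 8

      # Each output is reduced by half the average of its two neighbours,
      # a crude centre-surround (lateral inhibition) response.
      inhibited = []
      for i in range(1, len(signal) - 1):
          surround = (signal[i - 1] + signal[i + 1]) / 2.0
          inhibited.append(signal[i] - 0.5 * surround)

      print([round(v, 2) for v in inhibited])
      # Flat within each region, but it undershoots on the dark side of the
      # border and overshoots on the bright side, sharpening the edge
      # (the classic Mach-band effect).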

     

    These three cell types work together to let the retina preprocess visual information, emphasizing contrast and adapting to different lighting conditions, before signals are sent to the brain for further processing and interpretation.

     

     
