https://www.freecodecamp.org/news/python-code-examples-simple-python-program-example/
If the text does not load well, please download the PDF locally to your machine.
The PDF plugin may not work well under Linux.
https://www.freecodecamp.org/news/how-to-set-up-vs-code-for-web-development
TIPS
Loop cut a selected face
1- Select the face.
2- Shift+H to hide all but the selection. Loop cut doesn’t work on hidden faces.
3- Loop cut.
4- Alt+H to unhide everything when you’re done (see the Python sketch below).
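For those scripting Blender, steps 2 and 4 map directly to bpy operators; the loop cut itself (Ctrl+R) is interactive. A minimal sketch, assuming you are already in Edit Mode with the target face selected:

import bpy

# Step 2: hide everything but the current selection
bpy.ops.mesh.hide(unselected=True)

# ... perform the loop cut interactively (Ctrl+R); it needs visible geometry ...

# Step 4: unhide everything when done
bpy.ops.mesh.reveal()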
www.andreageremia.it/tutorial_python_tcl.html
https://www.gatimedia.co.uk/list-of-knobs-2
http://thoughtvfx.blogspot.com/2012/12/nuke-tcl-tips.html
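As a quick taste of bridging the two languages, here is a minimal sketch using Nuke’s built-in TCL/Python bridges (the expressions are illustrative):

import nuke

# Run a TCL expression from Python
print(nuke.tcl('value root.name'))

# Conversely, inside a TCL expression or script, Python can be invoked with:
# python {nuke.root().name()}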
Check final image quality
https://www.compositingpro.com/tech-check-compositing-shot-in-nuke/
Local copy:
http://pixelsham.com/wp-content/uploads/2023/03/compositing_pro_tech_check_nuke_script.nk
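Once downloaded, the tech-check script can be brought into the current session from Python; a minimal sketch, assuming a local path:

import nuke

# Paste the downloaded tech-check script into the open Nuke session
nuke.nodePaste('/path/to/compositing_pro_tech_check_nuke_script.nk')  # adjust to your local copy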
Nuke tcl procedures
https://www.gatimedia.co.uk/nuke-tcl-procedures
Knobs
https://learn.foundry.com/nuke/developers/63/ndkdevguide/knobs-and-handles/knobtypes.html
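The knob types documented above are also reachable from Python; a minimal sketch, with an illustrative custom knob name:

import nuke

node = nuke.createNode('Blur')

# Read and write an existing knob
node['size'].setValue(10)
print(node['size'].value())

# Add a custom knob (name and label are illustrative)
node.addKnob(nuke.Double_Knob('my_mix', 'My Mix'))
node['my_mix'].setValue(0.5)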
Projecting 3D depth through a plane mapping.
https://vimeo.com/236300648
mandyxmq.github.io/research/wavefiber.html
This figure shows a spiderweb iridescence example. The left image is a photograph of this effect by Marianna Armata. The middle image is rendered using the new wave-based BCSDF and the image on the right is rendered using a previous ray-based BCSDF.
www.hdrlabs.com/picturenaut/plugins.html
Note. The Median Cut algorithm is typically used for color quantization: reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. To find the brightest areas, look instead into methods like thresholding, histogram analysis, or edge detection, for example through OpenCV.
Here is an OpenCV example:
# bottom left coordinates = 0,0
import os
os.environ['OPENCV_IO_ENABLE_OPENEXR'] = '1'  # recent OpenCV builds require this before importing cv2 to read EXRs

import numpy as np
import cv2

# Load the HDR or EXR image as-is, without modification
image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

# Calculate the luminance from the HDR channels.
# cv2.imread returns channels in BGR order, so the Rec. 601 weights
# (0.299 R, 0.587 G, 0.114 B) are applied in reverse.
luminance = np.dot(image[..., :3], [0.114, 0.587, 0.299])

# Set a threshold value based on estimated EV
threshold_value = 2.4  # estimated threshold value based on 4.8 EV

# Apply the threshold to identify bright areas.
# The luminance array holds the calculated luminance value of each pixel;
# threshold_value is a user-defined cutoff separating "bright" from "dark"
# areas in terms of perceived luminance.
thresholded = (luminance > threshold_value) * 255

# Convert the thresholded image to uint8 for contour detection
thresholded = thresholded.astype(np.uint8)

# Find contours of the bright areas
contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Collect the bounding boxes of bright areas
bright_areas = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    # Adjust the y-coordinate for a bottom-left origin
    y_bottom_left_origin = image.shape[0] - (y + h)
    bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))  # (x1, y1, x2, y2)

# Print the identified bright areas
print("Bright Areas (x1, y1, x2, y2):")
for area in bright_areas:
    print(area)
More details
Luminance and Exposure in an EXR Image:
The luminance line calculates the luminance of each pixel in the image as a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used in this calculation.
These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.
Here’s the breakdown of the float values:
0.299: weight applied to the red channel
0.587: weight applied to the green channel, to which the eye is most sensitive
0.114: weight applied to the blue channel
The weighted sum of these channels creates a grayscale image whose pixel values represent perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.
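A quick sanity check of those weights, just to make the arithmetic concrete:

import numpy as np

weights = np.array([0.299, 0.587, 0.114])
print(weights.sum())                     # 1.0, so pure white stays at 1.0
print(np.dot([0.0, 1.0, 0.0], weights))  # pure green -> 0.587, the largest single contribution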
For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.
To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:
threshold_value = base_value * (2 ** EV)
Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV converts the logarithmic EV to a linear intensity value.
The base_value factor should be determined from the dynamic range of your EXR image and the specific luminance values you are dealing with; tune base_value to achieve the desired separation of bright areas from the rest of the image.
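As a small illustration of that relationship (taking middle gray, 0.18, as an assumed base_value), each additional EV doubles the linear threshold:

base_value = 0.18  # assumed middle gray
for ev in range(5):
    print(ev, base_value * (2 ** ev))  # 0.18, 0.36, 0.72, 1.44, 2.88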
Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).
Here’s an example of how you might determine a threshold_value this way:

# Dynamic range of the image in EV, for context
dynamic_range = 12

# Desired number of EV above middle gray for thresholding
desired_ev_above_middle_gray = 2

# Threshold per the relationship above, threshold_value = base_value * (2 ** EV),
# using middle gray (0.18) as the base value
threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

print("Threshold Value:", threshold_value)  # 0.72
3D scanning is becoming more accessible with the LiDAR scanners in the new iPhone 12 Pro and iPad Pro.
Polycam’s integration lets users log in to their Sketchfab account and publish directly to Sketchfab.
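For pipelines publishing outside the Polycam app, Sketchfab also exposes a public upload API. A minimal sketch against its v3 Data API (the token, filename, and metadata here are placeholders; check Sketchfab’s current documentation):

import requests

API_TOKEN = 'your-sketchfab-api-token'  # placeholder

with open('scan.glb', 'rb') as f:
    response = requests.post(
        'https://api.sketchfab.com/v3/models',
        headers={'Authorization': f'Token {API_TOKEN}'},
        files={'modelFile': f},
        data={'name': 'LiDAR scan', 'isPublished': False},
    )

print(response.status_code, response.json())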
www.blender.org/press/facebook-joins-the-blender-development-fund/
Facebook will join the Blender Foundation’s Development Fund as a Corporate Patron as of Q4, 2020.
www.awn.com/news/autodesk-shotgun-taps-new-tech-future-production-management
With Autodesk’s acquisition of technology known as Consilium, machine learning-driven generative scheduling is coming to Shotgun Software, which will enable more accurate bidding, scheduling, and resource planning decisions.
Currently in early testing, generative scheduling brings machine learning to production management in Shotgun. For producers and production managers, it makes the manual, complex challenge of optimized scheduling and resource planning more dynamic, controllable, and predictive, helping teams plan faster and with greater accuracy and agility.