A blog of links related to computer animation and production technology. Sponsored by ReelMatters.com.
Divesh Naidoo: The video below was made with a live in-camera preview and auto-exposure matching, with no camera solve, no HDRI capture, and no manual compositing setup, using the new Simulon phone app.
LDR to HDR through ML
https://simulon.typeform.com/betatest
Process example
https://80.lv/articles/unity-presents-new-fees-based-on-game-installs-and-revenue/
The new program is called the Unity Runtime Fee and the main principle is based on how often users install games. Unity thinks “an initial install-based fee allows creators to keep the ongoing financial gains from player engagement, unlike a revenue share”.
This is bound to kill developers who count on free downloads combined with profitable avenues of income like in-app purchases, a model that accounts for a vast majority of the 30% of the market that Unity holds.
The extra bill will be estimated by Unity based on non-specific data.
Unity does not have a ‘known’ way to track installs, likely due to privacy laws, so it will need to ‘estimate’ installs and bill clients based on that. Data aggregated with no identifying features isn’t really restricted by those laws, and Unity’s claim that it can’t distinguish between an install and a reinstall, or even a paid versus pirated copy, actually reinforces the idea that it isn’t using any identifying information, which would keep it compliant with privacy laws. The assumption is that Unity will get some data from distributors like the AppStore, GooglePlay, Valve, Sony, Microsoft, etc., and estimate from there.
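As a rough back-of-the-envelope illustration of why an install-based fee and a revenue share diverge for free-to-download games, here is a small Python sketch; the per-install rate and free threshold below are purely hypothetical placeholders, not Unity's published terms.

# Purely illustrative numbers; not Unity's actual pricing.
def runtime_fee(installs, fee_per_install=0.20, free_threshold=200_000):
    # An install-based fee charges per install past a threshold,
    # regardless of how much revenue each install generates.
    return max(0, installs - free_threshold) * fee_per_install

def revenue_share(revenue, share=0.05):
    # A revenue share scales with income rather than install count.
    return revenue * share

# A free-to-download game with 5M installs but $1M of in-app revenue:
print(runtime_fee(5_000_000))    # 4,800,000 billable installs -> $960,000.0
print(revenue_share(1_000_000))  # 5% of $1,000,000 -> $50,000.0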
“It hurts because we didn’t agree to this. We used the engine because you pay up front and then ship your product. We weren’t told this was going to happen. We weren’t warned. We weren’t consulted,” explained the Facepunch Studios founder. “We have spent 10 years making Rust on Unity’s engine. We’ve paid them every year. And now they changed the rules.”
“It’s our fault. All of our faults. We sleepwalked into it. We had a ton of warnings,” they added. “We should have been pressing the eject button when Unity IPO’d in 2020. Every single thing they’ve done since then has been the exact opposite of what was good for the engine.”
Bounding volume hierarchies are used to support several operations on sets of geometric objects efficiently, such as in collision detection and ray tracing. A bounding volume hierarchy (BVH) is a tree structure on a set of geometric objects. All geometric objects, which form the leaf nodes of the tree, are wrapped in bounding volumes.
BVHs are often used in ray tracing to eliminate potential intersection candidates within a scene by omitting geometric objects located in bounding volumes which are not intersected by the current ray. BVH is a crucial component in ray tracing rendering engines like Arnold, as it helps accelerate ray intersection tests and reduce resource costs.
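To make the pruning idea concrete, here is a minimal Python sketch of the technique, not Arnold's implementation: a top-down median-split build over axis-aligned bounding boxes, and a stack-based traversal that discards entire subtrees whose bounds the ray misses.

import math
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) min corner
    hi: tuple  # (x, y, z) max corner

def union(a, b):
    return AABB(tuple(map(min, a.lo, b.lo)), tuple(map(max, a.hi, b.hi)))

def centroid(box):
    return tuple((l + h) / 2 for l, h in zip(box.lo, box.hi))

@dataclass
class Node:
    box: AABB
    left: "Node" = None
    right: "Node" = None
    leaf: int = None  # primitive index for leaf nodes

def build(boxes, indices=None):
    # Top-down median split: sort by centroid along the longest axis.
    if indices is None:
        indices = list(range(len(boxes)))
    box = boxes[indices[0]]
    for i in indices[1:]:
        box = union(box, boxes[i])
    if len(indices) == 1:
        return Node(box, leaf=indices[0])
    axis = max(range(3), key=lambda a: box.hi[a] - box.lo[a])
    indices.sort(key=lambda i: centroid(boxes[i])[axis])
    mid = len(indices) // 2
    return Node(box, build(boxes, indices[:mid]), build(boxes, indices[mid:]))

def hit_box(box, origin, inv_dir):
    # Slab test; zero direction components rely on IEEE infinities
    # (origins lying exactly on a slab plane are not handled here).
    tmin, tmax = 0.0, math.inf
    for o, inv, lo, hi in zip(origin, inv_dir, box.lo, box.hi):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def candidates(node, origin, direction):
    # Yield primitive indices whose boxes the ray touches;
    # whole subtrees are skipped when their bounds are missed.
    inv_dir = tuple(1.0 / d if d else math.inf for d in direction)
    stack = [node]
    while stack:
        n = stack.pop()
        if not hit_box(n.box, origin, inv_dir):
            continue
        if n.leaf is not None:
            yield n.leaf
        else:
            stack.extend((n.left, n.right))

# Three unit boxes along x; a ray marching down +x touches all of them.
boxes = [AABB((i, 0.0, 0.0), (i + 1, 1.0, 1.0)) for i in range(3)]
print(sorted(candidates(build(boxes), (-1.0, 0.5, 0.5), (1.0, 0.0, 0.0))))  # [0, 1, 2]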
Users do not have direct control over the BVH’s RAM consumption. Here are some tips for optimizing Arnold renders when the BVH is the bottleneck:
Optimize Your Scene Geometry. Simplify or optimize your 3D models and scene geometry. Complex geometry can lead to larger BVH structures and longer BVH build times. Consider using LODs (Level of Detail) or proxy objects for distant geometry to reduce the BVH complexity.
Use Arnold Stand-ins and Proxies. Arnold Stand-ins and proxies allow you to load complex geometry only when needed, reducing the BVH complexity during the initial BVH build. This can be particularly useful for scenes with a lot of high-poly assets.
Denoising. Applying denoising to your final render can help reduce the number of rays required and, consequently, the number of BVH intersection tests.
Render in Layers. If your scene has many elements, consider rendering it in layers. This allows you to optimize each layer individually, potentially reducing BVH build times.
Distribute Rendering. If you have access to a render farm or multiple machines, distribute the rendering workload. This can significantly reduce rendering time as each machine can handle a portion of the BVH calculations.
https://arxiv.org/pdf/2308.10902.pdf
“…any errors in the camera position solve can dramatically reduce the quality of a reconstruction. “
“Introducing CamP — a method to precondition camera optimization for NeRFs to significantly improve quality. With CamP we’re able to create high quality reconstructions even when input poses are bad.
CamP is a general-purpose trick that you can apply in many places. Apply CamP to prior SOTA NeRF models like Zip-NeRF and you improve quality and establish a new SOTA. Applying CamP to existing camera optimization models, like SCNeRF, also improves their performance.” – Keunhong Park
https://www.world-creator.com/
World Creator lets you create terrains of any size; there are absolutely no limits. You can create terrains a few meters across or thousands of kilometers wide. On top of that, our new terrain system allows you to create terrains of any detail, from meter precision down to centimeter precision; such high detail can only be achieved with World Creator.
Internally, Mixbox treats colors as real-life pigments using the Kubelka & Munk theory to predict realistic color behavior.
https://scrtwpns.com/mixbox/painter/
https://scrtwpns.com/mixbox.pdf
https://github.com/scrtwpns/mixbox
https://scrtwpns.com/mixbox/docs/
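A minimal usage sketch, assuming the pip-installable mixbox Python package from the repository above: mixbox.lerp() blends two RGB colors as pigments rather than averaging channel values.

import mixbox  # pip install mixbox

rgb1 = (0, 33, 133)   # a deep ultramarine-like blue
rgb2 = (252, 211, 0)  # a cadmium-like yellow
t = 0.5               # mixing ratio, 0..1

# Kubelka-Munk pigment mixing yields the green a painter would expect,
# where a naive per-channel lerp would give a desaturated gray.
print(mixbox.lerp(rgb1, rgb2, t))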
https://realpython.com/python-shebang
In short, a shebang is a special kind of comment that you may include in your source code to tell the operating system’s shell where to find the interpreter for the rest of the file:
#!/usr/bin/python3
print("Hello, World!")
https://customersuccess.autodesk.com/learning/course/introduction-to-shotgrid
Learn about ShotGrid’s basic capabilities and functionality in this introductory course. Set up your account, gain an understanding of the structure of data within ShotGrid, learn to navigate ShotGrid, determine your role, including what you can and cannot do, and customize the view of on-screen data.
Steve Wright
https://www.linkedin.com/pulse/why-oh-premultiply-steve-wright/
James Pratt
https://jamesprattvfx.wordpress.com/2018/11/08/premult-unpremult/
The simple definition of premult is to multiply the alpha and the RGB of the input together.
Un-premult does the opposite operation to the premult node: instead of multiplying the RGB values by the alpha, it divides them by it.
Alan Martinez
“Unpremult” and “premult” are terms used in digital compositing that are relevant for both those working with computer-generated graphics (CG) and those working with live-action plates.
“Unpremult” is short for “unpremultiply” and refers to the action of undoing the multiplication of a pixel by its alpha value. It is commonly used to avoid halos or unwanted edges when combining images, by making sure that edits to a layer are applied independently of the edges’ opacity levels.
“Premult” is short for “premultiply” and is the opposite process of “unpremult.” In this case, each pixel in an image is multiplied by its alpha value.
In simple terms, premult crops the RGB by its alpha, while unpremult does the opposite.
It’s important to perform color corrections on CG renders with a sandwich approach. First, unpremultiply (divide) the image so the RGB channels extend fully past the antialiased edges. Then, apply the necessary color corrections. Finally, premultiply the image again to avoid artifacts on the edges.
Typically, most 3D rendered images are premultiplied. As a rule of thumb, if the background is black or even just very dark, the image may be premultiplied. Additionally, most of the time the 3D render has antialiasing along the edges.
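A minimal NumPy sketch of that unpremult, grade, premult sandwich, assuming a float RGBA image with premultiplied alpha; the function and parameter names are illustrative.

import numpy as np

def grade_premultiplied(img, gain=1.2):
    rgb, a = img[..., :3], img[..., 3:4]
    # Unpremult: divide RGB by alpha so edge pixels carry full-strength
    # color; guard against division by zero where alpha == 0.
    safe_a = np.where(a > 0, a, 1.0)
    straight = np.where(a > 0, rgb / safe_a, rgb)
    # Apply the color correction on straight (unpremultiplied) color.
    straight = straight * gain
    # Premult again so antialiased edges composite without halos.
    return np.concatenate([straight * a, a], axis=-1)

# A single premultiplied, half-transparent red pixel:
px = np.array([[[0.5, 0.0, 0.0, 0.5]]])
print(grade_premultiplied(px))  # [[[0.6, 0.0, 0.0, 0.5]]]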
Aaron Strasbourg
https://www.aaronstrasbourgvfx.com/post/2017/06/23/002-unpremult-and-premult
https://keentools.io/products/geotracker-for-blender
https://color-lab-eilat.github.io/Spectral-sensitivity-estimation-web/
A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known. As consumer cameras are not designed for high-precision visual tasks, manufacturers do not disclose spectral sensitivities. Their estimation requires a costly optical setup, which triggered researchers to come up with numerous indirect methods that aim to lower cost and complexity by using color targets. However, the use of color targets gives rise to new complications that make the estimation more difficult, and consequently, there currently exists no simple, low-cost, robust go-to method for spectral sensitivity estimation that non-specialized research labs can adopt. Furthermore, even if not limited by hardware or cost, researchers frequently work with imagery from multiple cameras that they do not have in their possession.
To provide a practical solution to this problem, we propose a framework for spectral sensitivity estimation that not only does not require any hardware (including a color target), but also does not require physical access to the camera itself. Similar to other work, we formulate an optimization problem that minimizes a two-term objective function: a camera-specific term from a system of equations, and a universal term that bounds the solution space.
Different from other work, we utilize publicly available high-quality calibration data to construct both terms. We use the colorimetric mapping matrices provided by the Adobe DNG Converter to formulate the camera-specific system of equations, and constrain the solutions using an autoencoder trained on a database of ground-truth curves. On average, we achieve reconstruction errors as low as those that can arise due to manufacturing imperfections between two copies of the same camera. We provide predicted sensitivities for more than 1,000 cameras that the Adobe DNG Converter currently supports, and discuss which tasks can become trivial when camera responses are available.
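A schematic sketch of the two-term objective described in the abstract; this is not the authors' code, and A, b, encode, and decode are placeholders for the colorimetric mapping equations and the trained autoencoder.

import numpy as np

def objective(s, A, b, encode, decode, lam=0.1):
    # Camera-specific term: a system of equations built from the
    # Adobe DNG Converter's colorimetric mapping matrices.
    data_term = np.sum((A @ s - b) ** 2)
    # Universal term: keep the curve near the manifold of ground-truth
    # sensitivities learned by the autoencoder.
    prior_term = np.sum((s - decode(encode(s))) ** 2)
    return data_term + lam * prior_term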