https://www.reshot.ai/3d-gaussian-splatting
What are 3D Gaussians? They are a generalization of 1D Gaussians (the bell curve) to 3D: essentially ellipsoids in 3D space, each with a center, a scale, a rotation, and "softened edges".
Each 3D Gaussian is optimized along with a (view-dependent) color and opacity. When blended together, they form a full model that can be rendered from any angle. 3D Gaussian Splatting captures the fuzzy, soft nature of the plush toy extremely well, something that photogrammetry-based methods struggle to do.
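The parameterization above is compact enough to sketch in code. The following is a minimal, illustrative Python snippet (not Reshot's or Inria's implementation); the names are assumptions, but the anisotropic covariance Σ = R S Sᵀ Rᵀ built from scale and rotation follows the original Inria formulation, and it is what gives each ellipsoid its "softened edges":

```python
# Minimal sketch of one 3D Gaussian as used in Gaussian Splatting.
# Parameter names are illustrative; the covariance construction
# Sigma = R * S * S^T * R^T follows the original Inria formulation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center of the ellipsoid
    scale: np.ndarray     # (3,) per-axis standard deviations
    quat: np.ndarray      # (4,) rotation as a unit quaternion (w, x, y, z)
    opacity: float        # blending weight in [0, 1]
    color: np.ndarray     # (3,) RGB; real models use spherical harmonics for view dependence

    def rotation_matrix(self) -> np.ndarray:
        w, x, y, z = self.quat / np.linalg.norm(self.quat)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def covariance(self) -> np.ndarray:
        # Anisotropic covariance: rotate a diagonal scale matrix.
        R = self.rotation_matrix()
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T

    def density(self, p: np.ndarray) -> float:
        # Unnormalized Gaussian falloff -- the "softened edge" of the ellipsoid.
        d = p - self.mean
        return float(np.exp(-0.5 * d @ np.linalg.inv(self.covariance()) @ d))
```

In a full renderer, millions of these are projected to the screen, sorted by depth, and alpha-blended front to back, with spherical-harmonic coefficients standing in for the flat RGB color to capture view-dependent appearance.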
https://towardsdatascience.com/a-comprehensive-overview-of-gaussian-splatting-e7d570081362
https://neuralradiancefields.io/luma-interactive-scenes-announced/
"…these are in fact Gaussian Splats that are being run, and it's a proprietary iteration of the original Inria paper. They hybridize the real-time rendering performance gains of Gaussian Splatting with the robust cloud-based rendering that's already widely used in commercial applications. This has been in the works for a while over at Luma, and I had the opportunity to try out some of my datasets on their new method." – Michael Rubloff
Divesh Naidoo: The video below was made with a live in-camera preview and auto-exposure matching, with no camera solve, no HDRI capture, and no manual compositing setup, using the new Simulon phone app.
LDR to HDR through ML
https://simulon.typeform.com/betatest
Process example
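Simulon has not published how its lighting estimation works; purely as an illustration of what "LDR to HDR through ML" can mean, here is a toy Python/PyTorch sketch in which a small per-pixel network learns to expand clipped, gamma-encoded LDR values back toward linear radiance. The network, the synthetic training data, and the camera curve are all assumptions for illustration, not Simulon's pipeline.

```python
# Toy illustration only: Simulon's actual method is not public.
# "LDR to HDR" here means predicting linear-light radiance (unbounded)
# from tone-mapped, clipped LDR pixels. A real system would use a larger
# network, multi-exposure training data, and spatial context.
import torch
import torch.nn as nn

class ToyLdrToHdr(nn.Module):
    def __init__(self):
        super().__init__()
        # Small per-pixel MLP acting on RGB values.
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Softplus(),  # radiance is non-negative and unbounded
        )

    def forward(self, ldr: torch.Tensor) -> torch.Tensor:
        # ldr: (..., 3) in [0, 1]; returns linear radiance estimates.
        return self.net(ldr)

# Training sketch against synthetic data from an assumed gamma-2.2 camera curve,
# standing in for real paired LDR/HDR captures.
model = ToyLdrToHdr()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    hdr = torch.rand(1024, 3) * 8.0                  # fake linear radiance
    ldr = torch.clamp(hdr, 0, 1) ** (1 / 2.2)        # clipped, gamma-encoded "camera" image
    loss = torch.nn.functional.l1_loss(torch.log1p(model(ldr)), torch.log1p(hdr))
    opt.zero_grad(); loss.backward(); opt.step()
```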
https://arxiv.org/pdf/2308.10902.pdf
"…any errors in the camera position solve can dramatically reduce the quality of a reconstruction."
“Introducing CamP — a method to precondition camera optimization for NeRFs to significantly improve quality. With CamP we’re able to create high quality reconstructions even when input poses are bad.
CamP is a general-purpose trick that you can apply in many places. Apply CamP to prior SOTA NeRF models like Zip-NeRF, and you improve quality and establish a new SOTA. Apply CamP to existing camera optimization models, like SCNeRF, and you improve their performance." – Keunhong Park
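As a rough, illustrative sketch of the idea behind preconditioning camera optimization (not the CamP implementation from the paper linked above): probe how each camera parameter moves the projections of a proxy point cloud, then whiten the parameters so that a unit gradient step perturbs the image roughly isotropically. The proxy projection, the finite-difference Jacobian, and all names below are assumptions.

```python
# Illustrative sketch in the spirit of CamP (https://arxiv.org/abs/2308.10902);
# not the paper's implementation. A raw gradient step on camera parameters
# (rotation, translation, focal length) moves projected points by wildly
# different amounts per parameter. Whitening the parameters with the Jacobian
# of a proxy projection makes each step roughly isotropic in pixel space,
# which stabilizes joint NeRF + pose optimization.
import numpy as np

def project(points, cam):
    # cam = [rx, ry, rz, tx, ty, tz, f]: small-angle rotation, translation, focal length.
    rx, ry, rz, tx, ty, tz, f = cam
    # Linearized rotation (valid for small angles), enough for a Jacobian probe.
    R = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])
    p = points @ R.T + np.array([tx, ty, tz])
    return f * p[:, :2] / p[:, 2:3]          # pinhole projection to pixels

def preconditioner(points, cam, eps=1e-6):
    # Finite-difference Jacobian of all projected pixels w.r.t. camera parameters.
    base = project(points, cam).ravel()
    J = np.zeros((base.size, cam.size))
    for i in range(cam.size):
        d = np.zeros_like(cam); d[i] = 1e-4
        J[:, i] = (project(points, cam + d).ravel() - base) / 1e-4
    # P = (J^T J + eps I)^(-1/2): steps on the whitened variable z become
    # roughly isotropic perturbations of the projected points.
    M = J.T @ J + eps * np.eye(cam.size)
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# Usage: optimize the whitened variable z instead of the raw camera parameters.
points = np.random.randn(100, 3) + np.array([0, 0, 5.0])   # proxy point cloud in front of the camera
cam0 = np.array([0, 0, 0, 0, 0, 0, 500.0])
P = preconditioner(points, cam0)
z = np.zeros(7)                 # optimize z with your favorite optimizer...
cam = cam0 + P @ z              # ...and decode back to camera parameters each step.
```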