CupixVista is a new AI tool that can convert 360° video footage into a 3D map and virtual tour.
Skybox AI is a free 360° image generator. Use the power of AI to imagine stunning worlds in seconds and fine-tune them for use in immersive VR, XR, or games.
https://skybox.blockadelabs.com/
https://theta360.com/en/about/theta/z1.html
Theta Z1 is Ricoh’s flagship 360° camera. It features 1-inch sensors, the largest available for dual-lens 360° cameras, and has been highly regarded among 360° photographers for its excellent image quality, color accuracy, and ability to shoot raw DNG photos with exceptional exposure latitude.
Bracketing mode (2022)
Requirements: basic app iOS ver. 2.20.0, Android ver. 2.5.0; camera firmware ver. 2.10.3
https://community.theta360.guide/t/new-feature-ae-bracket-added-in-the-shooting-mode-z1-only/8247
HDRI for VFX
https://community.theta360.guide/t/create-high-quality-hdri-for-vfx-using-ricoh-theta-z1/4789/4
ND filtering
https://community.theta360.guide/t/neutral-density-solution-for-most-theta-cameras/7331
https://community.theta360.guide/t/long-exposure-nd-filter-for-ricoh-theta/1100
https://blogs.nvidia.com/blog/2022/08/09/neural-graphics-sdk-metaverse-content/
Unfortunately, it outputs PNG only at the moment:
http://imaginaire.cc/gaugan360/
https://www.indiatimes.com/technology/news/360-degree-panorama-of-mars-nasa-perseverance-rover-535052.html
When collecting HDRIs, make sure the data carries basic capture metadata, such as exposure time, aperture, and ISO, since merging tools depend on it.
In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminance between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods. (Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction; it describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle.) This wider dynamic range allows HDR images to represent more accurately the wide range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight and the deepest shadows.
The two main sources of HDR imagery are computer renderings and merging of multiple photographs, which in turn are known as low dynamic range (LDR) or standard dynamic range (SDR) images. Tone-mapping (look-up) techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.
Photography
In photography, dynamic range is measured in exposure value (EV) differences, or stops, between the brightest and darkest parts of the image that show detail. (Exposure value denotes all combinations of camera shutter speed and relative aperture that give the same exposure; the concept was developed in Germany in the 1950s.) An increase of one EV, or one stop, is a doubling of the amount of light.
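To put numbers on that, here is a minimal sketch (the contrast ratios are illustrative, not from any particular camera): the dynamic range in stops is just the base-2 logarithm of the contrast ratio.

    import math

    def dynamic_range_stops(brightest, darkest):
        """Dynamic range in stops (EV) between two luminance values."""
        return math.log2(brightest / darkest)

    # A scene with a 100,000:1 contrast ratio spans ~16.6 stops;
    # one extra stop means twice the light, so each +1 EV doubles the ratio.
    print(dynamic_range_stops(100_000, 1))   # ~16.6
    print(dynamic_range_stops(1_000, 1))     # ~10, roughly what LDR formats hold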
The human response to brightness is well approximated by Stevens’ power law, which over a reasonable range is close to logarithmic, as described by the Weber–Fechner law; this is one reason that logarithmic measures of light intensity are often used as well.
HDR is short for High Dynamic Range. It’s a term used to describe an image which contains a greater exposure range than the “black” to “white” that 8- or 16-bit integer formats (JPEG, TIFF, PNG) can describe. Whereas these low dynamic range (LDR) images can hold perhaps 8 to 10 f-stops of image information, HDR images can describe beyond 30 stops and are stored in 32-bit images.
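To make the bracketing-to-HDR step concrete, here is a minimal sketch using OpenCV’s Debevec calibration and merge. The file names and exposure times are placeholders; any bracketed set, such as one from the Z1’s AE bracket mode, would work the same way.

    import cv2
    import numpy as np

    # Bracketed LDR exposures of the same scene (placeholder file names).
    files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
    images = [cv2.imread(f) for f in files]

    # Exposure times in seconds for each frame (placeholder values that
    # would normally come from the capture metadata mentioned above).
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)

    # Recover the camera response curve, then merge to a 32-bit float image.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Radiance .hdr preserves the full float range for later tone mapping.
    cv2.imwrite("merged.hdr", hdr)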
https://boostlog.io/@mohammedalsayedomar/create-cardboard-apps-in-unity-5ac8f81e47018500491f38c8
https://www.sitepoint.com/building-a-google-cardboard-vr-app-in-unity/
VR Actions for Playmaker
https://assetstore.unity.com/packages/tools/vr-actions-for-playmaker-52109
100 Best Unity3d VR Assets
http://meta-guide.com/embodiment/100-best-unity3d-vr-assets
…find more tutorials/references under this blog page
elevr.com/elevrant-panoramic-twist/
Today we discuss panoramic 3d video capture and how understanding its geometry leads to some new potential focus techniques.
With ordinary 2-camera stereoscopy, like you see at a 3d movie, each camera captures its own partial panorama of video, so the two partial circles of video are part of two side-by-side panoramas, each centering on a different point (where the cameras are).
This is great if you want to stare straight ahead from a fixed position. The eyes can measure the depth of any object in the middle of this Venn diagram of overlap. I think of the line of sight as being vectors shooting out of your eyeballs, and when those vectors hit an object from different angles, you get 3d information. When something’s closer, the vectors hit at a wider angle, and when an object is really far away, the vectors approach being parallel.
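A back-of-the-envelope sketch of that vergence geometry (assuming a typical interpupillary distance of about 6.4 cm; the distances are illustrative):

    import math

    IPD = 0.064  # interpupillary distance in metres (assumed typical value)

    def vergence_angle_deg(distance_m):
        """Angle between the two lines of sight to a point straight ahead."""
        return math.degrees(2 * math.atan((IPD / 2) / distance_m))

    for d in [0.5, 2.0, 10.0, 100.0]:
        print(f"{d:6.1f} m -> {vergence_angle_deg(d):6.3f} degrees")
    # 0.5 m gives ~7.3 degrees; 100 m gives ~0.04: the vectors approach parallel.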
But even if both these cameras captured spherically, you’d have problems once you turn your head. Your ability to measure depth lessens and lessens, with generally smaller vector angles, until when you’re staring directly to the right they overlap entirely, zero angle no matter how close or far something is. And when you turn to face behind you, the panoramas are backwards, in a way that makes it impossible to focus your eyes on anything.
So a setup with two separate 360° panoramas captured an eye-width apart is no good for an actual stereo panorama.
But you can stitch together a panorama using pairs of cameras an eye-width apart, where the center of the panorama is not on any one camera but at the center of a ball of cameras. Depending on the field of view that gets captured and how it’s stitched together, a four-cameras-per-eye setup might produce something with more or less twist, and more or less twist-reduction between cameras. Ideally, you’d have a many camera setup that lets you get a fully symmetric twist around each panorama. Or, for a circle of lots of cameras facing directly outward, you could crop the footage for each camera: stitch together the right parts of each camera’s capture for the left eye, and the left parts of each camera’s capture for the right eye.
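A rough sketch of that per-camera cropping idea for an outward-facing ring (the camera count and quarter-slice offsets are illustrative assumptions; a real stitcher derives offsets from the rig geometry and blends the overlaps):

    # Each eye's panorama is assembled from a different horizontal slice of
    # each camera's capture: the left eye uses footage from toward each
    # camera's right side, and the right eye from toward its left side.
    N = 8                    # number of cameras in the ring (assumed)
    slice_width = 360.0 / N  # degrees of panorama each camera contributes

    for i in range(N):
        center = i * slice_width  # yaw this camera faces
        left_eye_yaw = (center + slice_width / 4) % 360
        right_eye_yaw = (center - slice_width / 4) % 360
        print(f"cam {i}: left-eye slice @ {left_eye_yaw:5.1f} deg, "
              f"right-eye slice @ {right_eye_yaw:5.1f} deg")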
Agisoft PhotoScan is one of the most common tools used, but you will need the professional version to work with panos.
These do not support 360 cameras:
– Autodesk Recap
– Reality Capture
– MeshLab
medium.com/@smitty/spherical-and-panoramic-photogrammetry-resources-2edbaeac13ac
www.nctechimaging.com//downloads-files/PhotoScan_Application_Note_v1.1.pdf
360rumors.com/2017/11/software-institut-pascal-converts-360-video-3d-model-vr.html
WalkAboutWorlds
https://sketchfab.com/models/9bc44ba457104b57943c29a79e4103bd
The Spatial Media Metadata Injector adds metadata to a video file indicating that the file contains 360 video. Use the metadata injector to prepare 360 videos for upload to YouTube.
github.com/google/spatial-media/releases/tag/v2.1
The Windows release requires a 64-bit version of Windows. If you’re using a 32-bit version of Windows, you can still run the metadata injector from the Python source code as follows:
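Assuming a local clone of the github.com/google/spatial-media repository (the file names here are placeholders), the project README documents an invocation along these lines:

    git clone https://github.com/google/spatial-media.git
    cd spatial-media
    python spatialmedia -i input_360.mp4 output_360_injected.mp4

The -i flag injects the spherical metadata; running the tool without it just examines and prints a file’s existing metadata.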
360.Video.Metadata.Tool.mac.zip
360.Video.Metadata.Tool.win.zip