www.plutorules.com/page-111-space-rocks.html
This helps us understand the composition of materials in and on solar system bodies.
Dips in the observed light spectrum, also known as absorption lines, occur as gases absorb energy from light at specific points along the spectrum.
These dips or darkened zones (absorption lines) leave a fingerprint that identifies elements and compounds.
In this image the dark absorption bands appear instead as emission lines, which result from emitted rather than reflected light.
Lines of absorption
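As a sketch of how these fingerprints are used, the snippet below matches an observed spectral dip against a small table of well-known line wavelengths (the hydrogen Balmer lines and the sodium D doublet); the table and tolerance are illustrative, not from the source:

```python
# Well-known absorption-line wavelengths, in nanometres (approximate values)
FINGERPRINTS = {
    "hydrogen (H-alpha)": 656.3,
    "hydrogen (H-beta)": 486.1,
    "sodium (D lines)": 589.3,   # unresolved D1/D2 doublet
}

def identify(dip_nm, tolerance_nm=1.0):
    """Return the known lines within tolerance of an observed dip."""
    return [name for name, line_nm in FINGERPRINTS.items()
            if abs(dip_nm - line_nm) <= tolerance_nm]

print(identify(656.0))  # -> ['hydrogen (H-alpha)']
print(identify(500.0))  # -> [] (no match in this tiny table)
```

Real spectroscopy compares hundreds of lines at once, but the principle is the same: each element's set of absorption wavelengths is unique.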
www.palagems.com/gem-lighting2
Artificial light sources, not unlike the diverse phases of natural light, vary considerably in their properties. As a result, some lamps render an object’s color better than others do.
The most important criterion for assessing the color-rendering ability of any lamp is its spectral power distribution curve.
Natural daylight varies too much in strength and spectral composition to be taken seriously as a lighting standard for grading and dealing colored stones. For anything to be a standard, it must be constant in its properties, which natural light is not.
For dealers in particular to make the transition from natural light to an artificial light source, that source must offer:
1- A degree of illuminance at least as strong as the common phases of natural daylight.
2- Spectral properties identical or comparable to a phase of natural daylight.
A source combining these two things makes gems appear much the same as when viewed under a given phase of natural light. From the viewpoint of many dealers, this corresponds to a natural appearance.
The 6000 K xenon short-arc lamp appears closest to meeting the criteria for a standard light source. Besides the strong illuminance this lamp affords, its spectrum is very similar to CIE standard illuminants of similar color temperature.
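For intuition about what a 6000 K color temperature means, Wien's displacement law gives the peak wavelength of an ideal blackbody at that temperature (the xenon lamp is not a perfect blackbody, so this is only a rough guide):

```python
# Wien's displacement law: lambda_peak = b / T
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temp_k):
    """Peak emission wavelength (nm) of a blackbody at temp_k kelvins."""
    return WIEN_B / temp_k * 1e9

print(peak_wavelength_nm(6000))  # ~483 nm, in the blue-green
```

This is close to the Sun's own peak (the Sun's surface is about 5800 K), which is why ~6000 K sources approximate daylight.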
https://www.discovery.com/science/mexapixels-in-human-eye
About 576 megapixels for the entire field of view.
Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be:
90 degrees × 60 arc-minutes/degree × (1/0.3) × 90 × 60 × (1/0.3) = 324,000,000 pixels (324 megapixels), assuming 0.3 arc-minutes per pixel.
At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let’s be conservative and use 120 degrees for the field of view. Then we would see:
120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.
Or:
7 megapixels for the 2 degree focus arc… + 1 megapixel for the rest.
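The arithmetic above can be sketched in a few lines; the 0.3 arc-minute figure is the per-pixel visual acuity assumed in the source's estimate:

```python
def eye_pixels(fov_degrees, acuity_arcmin=0.3):
    """Pixel estimate for a square field of view at a given acuity."""
    pixels_per_axis = fov_degrees * 60 / acuity_arcmin  # arcmin / acuity
    return pixels_per_axis ** 2

print(round(eye_pixels(90) / 1e6))   # 324 (megapixels)
print(round(eye_pixels(120) / 1e6))  # 576 (megapixels)
```

Note how the estimate scales with the square of the field of view, which is why widening from 90° to 120° nearly doubles the pixel count.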
https://clarkvision.com/articles/eye-resolution.html
Details in the post
phoenixnap.com/kb/single-vs-dual-processors-server
The backbone of any server is the number of CPUs that will power it, as well as the actual model and the type of the CPU. From that point, you add the needed amount of RAM, storage and other options that your use case requires.
A CPU (Central Processing Unit) is a piece of hardware responsible for executing tasks from other parts of a computer.
A Core is a physical part of a CPU. Cores act like processors within a single CPU chip. The more cores a CPU has, the more tasks it can perform simultaneously. Virtually all modern CPUs contain multiple cores now. This enables the execution of multiple tasks at the same time.
Threads are like paths your computer can take to process information.
If a CPU has six cores with two threads per core, there are twelve paths along which information can be processed. The main difference between threads and physical cores is that two threads on the same core cannot truly operate in parallel. While two physical cores can perform two tasks simultaneously, a single core alternates between its threads. This happens so fast that it appears true multitasking is taking place. Threads essentially help the cores process information more efficiently. That said, CPU threads bring a real, visible performance gain only in specific workloads, so a hyper-threaded CPU will not always help you achieve better results.
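The cores-versus-threads arithmetic is simple to state in code; note that `os.cpu_count()` reports logical processors (cores × threads per core), not physical cores:

```python
import os

# Logical processors = physical cores × hardware threads per core.
# Using the six-core, two-thread example from the text:
cores, threads_per_core = 6, 2
logical_processors = cores * threads_per_core
print(logical_processors)  # 12

# On the machine running this script (logical count, not physical cores):
print(os.cpu_count())
```

This distinction matters when sizing worker pools: for CPU-bound work, the physical core count is usually the better guide than the logical count.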
Single processor servers run on a motherboard with one socket for a CPU. This means that the highest core count CPU available on the market determines the maximum core count per server. RAM capacity constraints with single CPU configurations remain one of their biggest drawbacks.
The most apparent distinction between single and dual-processor servers is that the motherboard has two CPU sockets instead of one. This brings additional benefits such as a massive number of PCI lanes, two separate sets of cache memory and two sets of RAM slots. If a motherboard has 24 memory slots, 12 belong to the first CPU and the other 12 to the second. If only one CPU socket is occupied, that CPU cannot use the other socket's set of RAM sticks. This rarely happens, since dual processor servers almost always have both sockets occupied. Dual processor servers, and multiprocessor systems in general, are the best options for space-restricted environments.
While dual CPU setups pack enormous core counts and outshine single processor servers by a large margin, some tests have shown only a marginal performance increase over single CPU configurations with similar core count and clock speeds per chip. This refers to the circumstances where two CPUs worked on the same data at the same time.
On the other hand, we see immense performance boosts in dual processor servers when the workload is optimized for setups like these. This is especially true when CPUs carry out intensive multi-threaded tasks.
www.techsiting.com/cores-vs-threads/
MaterialX is an open standard for transfer of rich material and look-development content between applications and renderers.
MaterialX originated at Lucasfilm in 2012 and has been used by Industrial Light & Magic in feature films such as Star Wars: The Force Awakens and Rogue One: A Star Wars Story, and by ILMxLAB in real-time experiences such as Trials On Tatooine.
MaterialX addresses the need for a common, open standard to represent the data values and relationships required to transfer the complete look of a computer graphics model from one application or rendering platform to another, including shading networks, patterns and texturing, complex nested materials and geometric assignments.
To further encourage interchangeable CG look setups, MaterialX also defines a complete set of data creation and processing nodes with a precise mechanism for functional extensibility.
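For a sense of what MaterialX content looks like on disk, the fragment below sketches a simple metal material using the standard library's `standard_surface` shading node; the names and parameter values are illustrative, not taken from the source:

```xml
<?xml version="1.0"?>
<materialx version="1.38">
  <!-- Shading node: a physically based surface shader -->
  <standard_surface name="SR_brass" type="surfaceshader">
    <input name="base_color" type="color3" value="0.89, 0.76, 0.39" />
    <input name="metalness" type="float" value="1.0" />
    <input name="specular_roughness" type="float" value="0.25" />
  </standard_surface>
  <!-- Material: binds the shader so it can be assigned to geometry -->
  <surfacematerial name="M_brass" type="material">
    <input name="surfaceshader" type="surfaceshader" nodename="SR_brass" />
  </surfacematerial>
</materialx>
```

Because the format is declarative XML rather than renderer-specific shader code, any application that understands the node definitions can reproduce the same look.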
blogs.nvidia.com/blog/2019/03/18/omniverse-collaboration-platform/
developer.nvidia.com/nvidia-omniverse
An open, interactive 3D design collaboration platform for multi-tool workflows, intended to simplify studio workflows for real-time graphics.
It supports Pixar’s Universal Scene Description technology for exchanging information about modeling, shading, animation, lighting, visual effects and rendering across multiple applications.
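USD layers can be authored as human-readable `.usda` text files; a minimal, hypothetical layer might look like this (prim and property names are illustrative):

```usda
#usda 1.0
(
    upAxis = "Y"
)

def Xform "Set"
{
    def Sphere "Ball"
    {
        double radius = 2
        color3f[] primvars:displayColor = [(0.8, 0.1, 0.1)]
    }
}
```

USD's layering and composition system is what lets multiple artists contribute non-destructively to the same scene, which is the mechanism Omniverse builds its live collaboration on.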
It also supports NVIDIA’s Material Definition Language, which allows artists to exchange information about surface materials across multiple tools.
With Omniverse, artists can see live updates made by other artists working in different applications. They can also see changes reflected in multiple tools at the same time.
For example, an artist using Maya with a portal to Omniverse can collaborate with another artist using UE4, and both will see live updates of each other's changes in their own application.
DNEG said last month it was looking to raise £150m from a float on the LSE’s Main Market. This valued the firm at more than £600m.
But yesterday it said it has decided to postpone the listing due to ‘ongoing market uncertainty’.
The London-based group added that it had received ‘a strong level of interest from investors’ and still intends to go public once market conditions improve.
Having burned through $2.6bn – that’s billion – on its way to producing an AR headset that has so many limitations it seemingly has zero chance of becoming a consumer device, the upstart has announced it is now part way through series E funding.
That’s just a Silicon Valley way of saying it’s on a fifth formal round of begging to banks and venture capitalists to help it keep going before the biz finally starts making money. In reality, it is the manufacturer’s eighth funding round, with the most recent being a cash influx of $280m in April this year. That money appears to be running out, or just simply not enough, just six months later.
www.theregister.co.uk/2019/11/14/magic_leap_imoney/
Magic Leap loses CFO Scott Henry and effects wizard John Gaeta following news of funding woes
www.nvidia.com/en-us/design-visualization/technologies/material-definition-language/
The NVIDIA Material Definition Language (MDL) gives you the freedom to share physically based materials and lights between supporting applications.
For example, create an MDL material in an application like Allegorithmic Substance Designer, save it to your library, then use it in NVIDIA® Iray® or Chaos Group’s V-Ray, or any other supporting application.
Unlike a shading language that produces programs for a particular renderer, MDL materials define the behavior of light at a high level. Different renderers and tools interpret the light behavior and create the best possible image.
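For flavor, a minimal MDL material might look like the sketch below: a diffuse surface whose tint is an exposed parameter, described declaratively rather than as renderer-specific shader code (the material name and default value are illustrative):

```mdl
mdl 1.6;

import ::df::*;   // distribution functions (BSDFs)

// A simple diffuse material with one user-editable parameter.
export material simple_diffuse(
    color tint = color(0.7, 0.7, 0.7)
) = material(
    surface: material_surface(
        scattering: df::diffuse_reflection_bsdf(tint: tint)
    )
);
```

Because the file only describes how light should behave at the surface, each renderer is free to evaluate it with its own sampling and shading machinery.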