More below…
phoenixnap.com/kb/single-vs-dual-processors-server
The backbone of any server is the number of CPUs that power it, along with the actual CPU model and type. From there, you add the amount of RAM, storage, and other options your use case requires.
A CPU (Central Processing Unit) is the piece of hardware responsible for executing the instructions handed to it by the other parts of a computer.
A core is a physical part of a CPU. Cores act like independent processors within a single CPU chip: the more cores a CPU has, the more tasks it can perform simultaneously. Virtually all modern CPUs contain multiple cores.
Threads are like paths your computer can take to process information.
If a CPU has six cores with two threads per core, there are twelve paths for information to be processed. The main difference between threads and physical cores is that two threads on the same core cannot operate in true parallel. While two physical cores can perform two tasks simultaneously, a single core alternates between its threads. This switching happens so fast that it appears as true multitasking. Threads essentially help the cores process information more efficiently. That said, threads deliver visible performance gains only in specific workloads, so a hyper-threaded CPU will not always help you achieve better results.
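The core/thread arithmetic above can be sketched in a few lines (the six-core, two-thread figures are just the example from the text, not a recommendation):

```python
import os

def logical_processors(physical_cores: int, threads_per_core: int) -> int:
    """Logical processors exposed to the OS = physical cores x threads per core.
    With SMT/hyper-threading, the two threads on a core share its execution
    units, so they interleave rather than run in true parallel."""
    return physical_cores * threads_per_core

# The example from the text: six cores with two threads per core.
print(logical_processors(6, 2))  # 12 "paths" for information to be processed

# What the current machine reports (logical CPUs, not physical cores):
print(os.cpu_count())
```

Note that `os.cpu_count()` reports logical processors, so on a hyper-threaded machine it returns the core count multiplied by threads per core.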
Single processor servers run on a motherboard with one socket for a CPU. This means that the highest core count CPU available on the market determines the maximum core count per server. RAM capacity constraints with single CPU configurations remain one of their biggest drawbacks.
The most apparent distinction between single and dual-processor servers is that the motherboard has two CPU sockets instead of one. This brings additional benefits such as a far greater number of PCIe lanes, two separate sets of cache memory, and two sets of RAM slots. If a motherboard has 24 memory slots, 12 belong to the first CPU and the other 12 to the second. In cases where only one CPU socket is occupied, that CPU cannot use the other set of RAM slots; this rarely happens in practice, since dual-processor servers almost always have both sockets populated. Dual-processor servers, and multiprocessor systems in general, are the best option for space-restricted environments.
While dual CPU setups pack enormous core counts and outshine single processor servers by a large margin, some tests have shown only a marginal performance increase over single CPU configurations with similar core counts and clock speeds per chip. This was the case when both CPUs worked on the same data at the same time.
On the other hand, we see immense performance boosts in dual processor servers when the workload is optimized for setups like these. This is especially true when CPUs carry out intensive multi-threaded tasks.
www.techsiting.com/cores-vs-threads/
MaterialX is an open standard for transfer of rich material and look-development content between applications and renderers.
Originally developed at Lucasfilm in 2012, MaterialX has been used by Industrial Light & Magic in feature films such as Star Wars: The Force Awakens and Rogue One: A Star Wars Story, and by ILMxLAB in real-time experiences such as Trials On Tatooine.
MaterialX addresses the need for a common, open standard to represent the data values and relationships required to transfer the complete look of a computer graphics model from one application or rendering platform to another, including shading networks, patterns and texturing, complex nested materials and geometric assignments.
To further encourage interchangeable CG look setups, MaterialX also defines a complete set of data creation and processing nodes with a precise mechanism for functional extensibility.
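As an illustration of what such a transferable look description can look like on disk, here is a minimal hand-written material in MaterialX's XML syntax (the node and material names are made up for the example; check the MaterialX specification for the exact schema of your library version):

```xml
<?xml version="1.0"?>
<materialx version="1.38">
  <!-- A shading node: a standard_surface shader with a red base color -->
  <standard_surface name="SR_red" type="surfaceshader">
    <input name="base_color" type="color3" value="0.8, 0.1, 0.1" />
  </standard_surface>
  <!-- A material that binds the shader, ready for geometric assignment -->
  <surfacematerial name="M_red" type="material">
    <input name="surfaceshader" type="surfaceshader" nodename="SR_red" />
  </surfacematerial>
</materialx>
```

Because the whole network is plain declarative XML, any supporting application or renderer can read the same file and reconstruct the look.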
blogs.nvidia.com/blog/2019/03/18/omniverse-collaboration-platform/
developer.nvidia.com/nvidia-omniverse
An open, interactive 3D design collaboration platform for multi-tool workflows, built to simplify studio pipelines for real-time graphics.
It supports Pixar’s Universal Scene Description technology for exchanging information about modeling, shading, animation, lighting, visual effects and rendering across multiple applications.
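For context, USD scenes can be authored as plain-text .usda files that any USD-aware tool can open; a minimal sketch (the prim names are arbitrary):

```
#usda 1.0
def Xform "Hello"
{
    def Sphere "World"
    {
        double radius = 2
    }
}
```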
It also supports NVIDIA’s Material Definition Language, which allows artists to exchange information about surface materials across multiple tools.
With Omniverse, artists can see live updates made by other artists working in different applications. They can also see changes reflected in multiple tools at the same time.
For example, an artist using Maya with a portal to Omniverse can collaborate with another artist using UE4, and both will see live updates of each other's changes in their own application.
DNEG said last month it was looking to raise £150m from a float on the LSE’s Main Market. This valued the firm at more than £600m.
But yesterday it said it had decided to postpone the listing due to ‘ongoing market uncertainty’.
The London-based group added that it had received ‘a strong level of interest from investors’ and still intends to go public once market conditions improve.
Having burned through $2.6bn – that’s billion – on its way to producing an AR headset that has so many limitations it seemingly has zero chance of becoming a consumer device, the upstart has announced it is now part way through series E funding.
That’s just a Silicon Valley way of saying it’s on a fifth formal round of asking banks and venture capitalists to help it keep going before the biz finally starts making money. In reality, it is the manufacturer’s eighth funding round, the most recent being a cash influx of $280m in April this year. That money appears to be running out, or to simply not be enough, just six months later.
www.theregister.co.uk/2019/11/14/magic_leap_imoney/
Magic Leap loses CFO Scott Henry and effects wizard John Gaeta following news of funding woes
www.nvidia.com/en-us/design-visualization/technologies/material-definition-language/
The NVIDIA Material Definition Language (MDL) gives you the freedom to share physically based materials and lights between supporting applications.
For example, create an MDL material in an application like Allegorithmic Substance Designer, save it to your library, then use it in NVIDIA® Iray® or Chaos Group’s V-Ray, or any other supporting application.
Unlike a shading language that produces programs for a particular renderer, MDL materials define the behavior of light at a high level. Different renderers and tools interpret the light behavior and create the best possible image.
Also see:
https://www.pixelsham.com/2018/11/22/exposure-value-measurements/
https://www.pixelsham.com/2016/03/03/f-stop-vs-t-stop/
An exposure stop is a unit of measurement of exposure; as such, it provides a universal linear scale for measuring the increase or decrease in the light reaching the image sensor due to changes in shutter speed, ISO, and f-stop.
±1 stop corresponds to a doubling or halving of the amount of light let in when taking a photo.
1 EV (exposure value) is just another way to say one stop of exposure change.
https://www.photographymad.com/pages/view/what-is-a-stop-of-exposure-in-photography
The same applies to shutter speed, ISO, and aperture.
Doubling or halving your shutter speed produces an increase or decrease of 1 stop of exposure.
Doubling or halving your ISO speed produces an increase or decrease of 1 stop of exposure.
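The stop arithmetic above can be checked numerically. A small sketch using the common EV convention EV = log2(N²/t), normalized to ISO 100 (the specific camera settings are just examples):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV relative to ISO 100: log2(N^2 / t) - log2(ISO / 100).
    One EV = one stop = a doubling or halving of the light."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

base = exposure_value(8.0, 1 / 125)  # f/8, 1/125 s, ISO 100

# Halving the shutter speed lets in half the light: a change of 1 stop.
print(exposure_value(8.0, 1 / 250) - base)  # ≈ 1.0

# Doubling the ISO doubles sensitivity: the scene needs 1 stop less light.
print(base - exposure_value(8.0, 1 / 125, iso=200.0))  # ≈ 1.0
```

Either change shifts the result by exactly one EV, which is just another name for one stop.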
Details in the post
nerdist.com/article/joe-letteri-avatar-alita-battle-angel-james-cameron-martin-scorsese/
[Any] story [has to be] complete in itself. If there are gaps that you’re hoping will be filled in with visual effects, you’re likely to be disappointed. We can add ideas, we can help in whatever way that we can, but you want to make sure that when you read it, it reads well.
[Our responsibility as VFX artists] I think first and foremost [is] to engage the audience. Everything that we do has to be part of the audience wanting to sit there and watch that movie and see what happens next. And it’s a combination of things. It’s the drama of the characters. It’s maybe what you can do to a scene to make it compelling to look at, the realism that you might need to get people drawn into that moment. It could be any number of things, but it’s really about just making sure that you’re always in mind of how the audience is experiencing what they’re seeing.