a blog of links related to computer animation and production technology Sponsored by ReelMatters.com


  • Ashton Kutcher Says Soon ‘You’ll Be Able to Render a Whole Movie’ Using AI

    https://variety.com/2024/film/news/ashton-kutcher-ai-movies-sora-hollywood-1236027196/

    “I have a beta version of it and it’s pretty amazing,” Kutcher said of the platform in a recent conversation with former Google CEO Eric Schmidt at the Berggruen Salon in Los Angeles.

    “Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100? To go out and shoot it would cost you thousands of dollars,” Kutcher said. “Action scenes of me jumping off of this building, you don’t have to have a stunt person go do it, you could just go do it [with AI].

  • Microsoft Working on ‘Far Larger’ In-House AI Model

    The new model, MAI-1, is expected to have about 500 billion parameters, Seeking Alpha reported Monday (May 6), citing a paywalled article by The Information.

     

    https://www.pymnts.com/artificial-intelligence-2/2024/report-microsoft-working-on-far-larger-in-house-ai-model/

     

  • Why The New York Times might win its copyright lawsuit against OpenAI

    https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/

     

    Daniel Jeffries wrote:

    “Trying to get everyone to license training data is not going to work because that’s not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

     

    The AI community is full of people who understand how models work and what they’re capable of, and who are working to improve their systems so that the outputs aren’t full of regurgitated inputs. Google won the Google Books case because it could explain both of these persuasively to judges. But the history of technology law is littered with the remains of companies that were less successful in getting judges to see things their way.

  • Generative AI Glossary

    https://education.civitai.com/generative-ai-glossary/

     

  • This Vox article is OpenAI training data – How AI is holding the internet hostage and media is no exception

    https://www.vox.com/technology/352849/openai-chatgpt-google-meta-artificial-intelligence-vox-media-chatbots

     

     

    • Pros of AI Chatbots:
      • Efficiency and Accessibility: AI chatbots can handle a large volume of queries simultaneously, providing quick and consistent responses, which is beneficial for customer service and support.
      • 24/7 Availability: Unlike human operators, chatbots are available around the clock, ensuring that users can get assistance at any time.
      • Cost-Effective: Implementing chatbots can reduce operational costs for businesses by minimizing the need for large customer service teams.
      • Personalization and Scalability: Advanced AI can offer personalized responses and scale effortlessly to meet demand.

     

    • Cons of AI Chatbots:
      • Bias and Misinformation: AI chatbots can inadvertently propagate biases present in their training data, leading to inaccurate or offensive responses.
      • Lack of Human Touch: Chatbots may struggle to understand and respond appropriately to nuanced human emotions and complex queries, potentially frustrating users.
      • Technical Limitations: While AI is advancing, chatbots can still make errors and provide incorrect information, which can undermine user trust.
      • Environmental Impact: Training and running large AI models consume significant energy, contributing to carbon emissions and environmental concerns.

     

     

    The deployment of AI chatbots raises significant ethical concerns. Biases in training data can lead to the generation of skewed or harmful content, posing risks to users and undermining trust in AI systems. Additionally, the potential misuse of AI chatbots for spreading misinformation and the environmental impact of training large AI models are critical issues that require attention.

     

    The trajectory of AI chatbot development points towards increasingly sophisticated and generalized AI capabilities. As research progresses towards Artificial General Intelligence (AGI), the potential applications of AI chatbots are expected to expand further, encompassing more complex and nuanced tasks. However, achieving AGI will require addressing current ethical and technical challenges to ensure the responsible development and deployment of AI technologies.

  • DiffusionLight: HDRI Light Probes for Free by Painting a Chrome Ball

    https://diffusionlight.github.io/

     

     

    https://github.com/DiffusionLight/DiffusionLight

     

    https://github.com/DiffusionLight/DiffusionLight?tab=MIT-1-ov-file#readme

     

    https://colab.research.google.com/drive/15pC4qb9mEtRYsW3utXkk-jnaeVxUy-0S

     

“a simple yet effective technique to estimate lighting in a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks to regress an input with limited field-of-view to a full environment map. However, these approaches often struggle with real-world, uncontrolled settings due to the limited diversity and size of their datasets. To address this problem, we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity, this task remains challenging: the diffusion models often insert incorrect or inconsistent objects and cannot readily generate images in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map, which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios.”

     

  • How OpenAI so royally screwed up the Sam Altman firing and joining Microsoft

    https://edition.cnn.com/2023/11/19/tech/sam-altman-open-ai-firing-board/index.html

     

    https://www.cnn.com/2023/11/18/tech/openai-sam-altman-shakeup-what-happened/index.html

     

    https://edition.cnn.com/2023/11/20/tech/sam-altman-joins-microsoft/index.html

     

     

    A company’s board of directors has an obligation, first and foremost, to its shareholders. OpenAI’s most important shareholder is Microsoft, the company that gave Altman & Co. $13 billion to help Bing, Office, Windows and Azure leapfrog Google and stay ahead of Amazon, IBM and other AI wannabes.

     

    So a day later, the board reportedly asked for a mulligan and tried to woo Altman back. It was a shocking turn of events and an embarrassing self-own by a company that is widely regarded as the most promising producer of the most exciting new technology.

     

    The board angered a powerful ally and could be forever changed because of the way it handled Altman’s ouster. It could end up with Altman back at the helm, a for-profit company on its nonprofit board – and a massive culture shift at OpenAI.

     

    https://www.bbc.com/news/technology-67474879

     

    But Microsoft, OpenAI’s biggest investor, has decided not to take a chance on Mr Altman taking this tech elsewhere. He will be joining the Seattle-based tech giant, it has been announced, to lead a yet-to-be-created AI research team. His co-founder Greg Brockman goes with him, and judging from the number of staff members posting on X today, it looks like he’ll be taking some of OpenAI’s top talent too.

     

    Many OpenAI staff members are sharing the same post on X. It reads: “OpenAI is nothing without its people”.

     

    Is that a warning to Mr Shear that he might have some hiring to do? A BBC colleague outside OpenAI’s headquarters just told me that at 09:30 in San Francisco there were no signs of people arriving for work.

     

    https://edition.cnn.com/2023/11/20/tech/openai-employees-quit-mira-murati-sam-altman/index.html

     

    “Your actions have made it obvious that you are incapable of overseeing OpenAI,” wrote the employees. “We are unable to work for or with people that lack competence, judgement and care for our mission and employees.”

     

    The employees also warned that they would “imminently” follow Altman to Microsoft unless the board resigns and reinstates Altman and Greg Brockman, the former OpenAI president who was also removed by the board on Friday.

  • Unity Presents New “Runtime Fees” Based on Game Installs and Revenue

    https://80.lv/articles/unity-presents-new-fees-based-on-game-installs-and-revenue/

     

    The new program is called the Unity Runtime Fee and the main principle is based on how often users install games. Unity thinks “an initial install-based fee allows creators to keep the ongoing financial gains from player engagement, unlike a revenue share”.

     

    This is bound to hurt developers who rely on free downloads paired with profitable revenue streams like in-app purchases – games which account for a vast majority of the roughly 30% of the market that Unity holds.

     

    The extra bill will be estimated by Unity based on non-specific data.

    Unity does not have a known way to track installs, likely due to privacy laws, so it will need to estimate installs and bill clients based on those estimates. Data that is aggregated with no identifying features isn’t really restricted by those laws. Unity’s claim that it can’t distinguish between an install and a reinstall, or even a paid versus pirated copy, actually reinforces the idea that it isn’t using any identifying information, so it would be compliant with privacy laws. The assumption is that it will get some data from distributors like the App Store, Google Play, Valve, Sony, Microsoft, etc., and estimate from there.

     

    https://www.gamedeveloper.com/business/rust-creator-tells-unity-to-get-fucked-as-developers-left-seething-by-new-fee

     

    “It hurts because we didn’t agree to this. We used the engine because you pay up front and then ship your product. We weren’t told this was going to happen. We weren’t warned. We weren’t consulted,” explained the Facepunch Studios founder. “We have spent 10 years making Rust on Unity’s engine. We’ve paid them every year. And now they changed the rules.”

     

    “It’s our fault. All of our faults. We sleepwalked into it. We had a ton of warnings,” they added. “We should have been pressing the eject button when Unity IPO’d in 2020. Every single thing they’ve done since then has been the exact opposite of what was good for the engine.”

     

     

  • Laurence Van Elegem – The era of gigantic AI models like GPT-4 is coming to an end

    https://www.linkedin.com/feed/update/urn:li:activity:7061987804548870144

     

    Sam Altman, CEO of OpenAI, dropped a 💣 at a recent MIT event, declaring that the era of gigantic AI models like GPT-4 is coming to an end. He believes that future progress in AI needs new ideas, not just bigger models.

    So why is that revolutionary? Well, this is how OpenAI’s LLMs (the models that ‘feed’ chatbots like ChatGPT & Google Bard) grew exponentially over the years:
    ➡️GPT-2 (2019): 1.5 billion parameters
    ➡️GPT-3 (2020): 175 billion parameters
    ➡️GPT-4 (2023): parameter count undisclosed – but likely trillions

    That kind of parameter growth is no longer tenable, feels Altman.

    Why?:
    ➡️RETURNS: scaling up model size comes with diminishing returns.
    ➡️PHYSICAL LIMITS: there’s a limit to how many & how quickly data centers can be built.
    ➡️COST: ChatGPT cost over 100 million dollars to develop.

    What is he NOT saying? That access to data is becoming damned hard & expensive. So if you have a model that keeps needing more data to become better, that’s a problem.

    Why is it becoming harder and more expensive to access data?

    🎨Copyright conundrums: Getty Images and individual artists like Sarah Andersen, Kelly McKernan & Karla Ortiz are suing AI companies over unauthorized use of their content. Universal Music asked Spotify & Apple Music to stop AI companies from accessing their songs for training.

    🔐Privacy matters & regulation: Italy banned ChatGPT over privacy concerns (now back after changes). Germany, France, Ireland, Canada, and Spain remain suspicious. Samsung even warned employees not to use AI tools like ChatGPT for security reasons.

    💸Data monetization: Twitter, Reddit, Stack Overflow & others want AI companies to pay up for training on their data. Contrary to most artists, Grimes is allowing anyone to use her voice for AI-generated songs … for a 50% profit share.

    🕸️Web3’s impact: If Web3 fulfills its promise, users could store data in personal vaults or cryptocurrency wallets, making it harder for LLMs to access the data they crave.

    🌎Geopolitics: it’s increasingly difficult for data to cross country borders. Just think about China and TikTok.

    😷Data contamination: We have this huge amount of ‘new’ – and sometimes hallucinated – data that is being generated by generative AI chatbots. What will happen if we feed that data back into their LLMs?

    No wonder that people like Sam Altman are looking for ways to make the models better without having to use more data. If you want to know more, check our brand new Radar podcast episode (link in the comments), where I talked about this & more with Steven Van Belleghem, Peter Hinssen, Pascal Coppens & Julie Vens – De Vos. We also discussed Twitter, TikTok, Walmart, Amazon, Schmidt Futures, our Never Normal Tour with Mediafin in New York (link in the comments), the human energy crisis, Apple’s new high-yield savings account, the return of China, BYD, AI investment strategies, the power of proximity, the end of Buzzfeed news & much more.

  • A short 170 year history of Neural Radiance Fields (NeRF), Holograms, and Light Fields

    https://neuralradiancefields.io/history-of-neural-radiance-fields/

     

    “Lightfield and hologram capture started with a big theoretical idea 115 years ago and we have struggled to make them viable ever since. Neural Radiance fields aka NeRF along with gaming computers now for the first time provide a promising easy and low cost way for everybody to capture and display lightfields.”

    “Neural Radiance fields (NeRF) recently had its third birthday but the technology is just the latest answer to a question people have been chasing since the 1860s: How do you capture and recreate space (from images)?”

     

    “The plenoptic function measures physical light properties at every point in space and it describes how light transport occurs throughout a 3D volume.”

     

    Google’s Project Starline is the latest in real-time capture, compression, and image-to-3D technology.

  • What is the Light Field?

    http://lightfield-forum.com/what-is-the-lightfield/

     

    The light field consists of the total of all light rays in 3D space, flowing through every point and in every direction.

    How to Record a Light Field

     

  • Remote working pros and cons

    www.leforttalentgroup.com/business-blog/is-the-genie-out-forever

    Cons of remote working:

    • 1-Prefer 2 distinct locations in life — 1 for work, 1 for everything else
    • 2-Being able to manage the group of employees in one location is preferable — Meetings, training, management of teams and personalities has been easier.
    • 3-Confidentiality and Security — depending on the nature of the business, being able to lessen liabilities by containing the work location
    • 4-Social community — Many fully enjoy the traditional work community and build lifelong connections
    • 5-Love — A quick Google search shows various sources that cite anywhere from 20-33 percent of people met their spouse through work. What will those stats look like in a year or two from now?
    • 6-Road Warriors with great sound systems in their cars — Some enjoy the commute to unwind after work, cranking tunes or catching up with friends and family while waiting for the gridlock to ease. Others use it to continue working from the car.

    Pros of remote working:

    • 1-The overhead costs — No need to keep large commercial real estate holdings and pay related maintenance costs
    • 2-Killer commutes — 5-20 hours/week per employee in lost time now potentially used for other purposes
    • 3-Daily Daycare Scramble — Racing to drop them off or pick them up each day
    • 4-Environmentally, a lower carbon footprint — Less traffic, less pollution
    • 5-Quality Family time — Many parents are spending more time with their growing children

    Some useful tips about working online:

    • Clarify and focus on priorities.
    • Define and manage expectations more explicitly than normal (give context to everything)
    • Log all your working hours.
    • Learn about and respect people’s boundaries.
    • Pay attention to people’s verbal and physical cues.
    • Pay attention to people’s emotional, hidden and factual cues.
    • Be wary about anticipating, judging, rationalizing, competing, defending, rebutting…
  • Photography basics: Exposure Value vs Photographic Exposure vs Il/Luminance vs Pixel luminance measurements

    Also see: https://www.pixelsham.com/2015/05/16/how-aperture-shutter-speed-and-iso-affect-your-photos/

     

    In photography, exposure value (EV) is a number that represents a combination of a camera’s shutter speed and f-number, such that all combinations that yield the same exposure have the same EV (for any fixed scene luminance).

     

     

    The EV concept was developed in an attempt to simplify choosing among combinations of equivalent camera settings. Although all camera settings with the same EV nominally give the same exposure, they do not necessarily give the same picture. EV is also used to indicate an interval on the photographic exposure scale, with 1 EV corresponding to a standard power-of-2 exposure step, commonly referred to as a stop.

     

    EV 0 corresponds to an exposure time of 1 sec and a relative aperture of f/1.0. If the EV is known, it can be used to select combinations of exposure time and f-number.

     

    https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

    Note that EV does not equal photographic exposure. Photographic exposure is defined as how much light hits the camera’s sensor. It depends on the camera settings, mainly aperture and shutter speed. Exposure value (known as EV) is a number that represents the exposure setting of the camera.

     

    Thus, strictly, EV is not a measure of luminance (indirect or reflected exposure) or illuminance (incidental exposure); rather, an EV corresponds to a luminance (or illuminance) for which a camera with a given ISO speed would use the indicated EV to obtain the nominally correct exposure. Nonetheless, it is common practice among photographic equipment manufacturers to express luminance in EV for ISO 100 speed, as when specifying metering range or autofocus sensitivity.

     

    The exposure depends on two things: how much light gets through the lenses to the camera’s sensor and for how long the sensor is exposed. The former is a function of the aperture value while the latter is a function of the shutter speed. Exposure value is a number that represents this potential amount of light that could hit the sensor. It is important to understand that exposure value is a measure of how exposed the sensor is to light and not a measure of how much light actually hits the sensor. The exposure value is independent of how lit the scene is. For example a pair of aperture value and shutter speed represents the same exposure value both if the camera is used during a very bright day or during a dark night.

     

    Each exposure value number represents all the possible shutter and aperture settings that result in the same exposure. Although the exposure value is the same for different combinations of aperture values and shutter speeds the resulting photo can be very different (the aperture controls the depth of field while shutter speed controls how much motion is captured).

    EV 0.0 is defined as the exposure when setting the aperture to f-number 1.0 and the shutter speed to 1 second. All other exposure values are relative to that number. Exposure values are on a base two logarithmic scale. This means that every single step of EV – plus or minus 1 – represents the exposure (actual light that hits the sensor) being halved or doubled.

    https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

     

    Formula

    https://en.wikipedia.org/wiki/Exposure_value

     

    https://www.scantips.com/lights/math.html

     

    which means   2^EV = N² / t

    where

    • N is the relative aperture (f-number). Important: f-stop values must first be squared in most calculations
    • t is the exposure time (shutter speed) in seconds

    EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

    Example: If f/16 and 1/4 second, then this is:

    (N² / t) = (16 × 16 ÷ 1/4) = (16 × 16 × 4) = 1024.

    Log₂(1024) is EV 10. Meaning, 2^10 = 1024.
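The worked example above can be verified with a couple of lines of Python (the function name is my own):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t), where N is the f-number and t the shutter time in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

# The article's example: f/16 at 1/4 second -> N^2 / t = 1024 -> EV 10.
print(exposure_value(16, 1 / 4))  # 10.0
```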

     

    Collecting photographic exposure using Light Meters

    https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

    The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in, and makes a guess based on that. The camera will aim for 18% gray, meaning if you take a photo of an entirely white surface and an entirely black surface, you should get two identical images which are both gray (at least in theory).

    https://en.wikipedia.org/wiki/Light_meter

    For reflected-light meters, camera settings are related to ISO speed and subject luminance by the reflected-light exposure equation:

    N² / t = L × S / K

    where

    • N is the relative aperture (f-number)
    • t is the exposure time (“shutter speed”) in seconds
    • L is the average scene luminance
    • S is the ISO arithmetic speed
    • K is the reflected-light meter calibration constant

     

    For incident-light meters, camera settings are related to ISO speed and subject illuminance by the incident-light exposure equation:

    N² / t = E × S / C

    where

    • E is the illuminance (in lux)
    • C is the incident-light meter calibration constant

     

    Two values for K are in common use: 12.5 (Canon, Nikon, and Sekonic) and 14 (Minolta, Kenko, and Pentax); the difference between the two values is approximately 1/6 EV.
    For C a value of 250 is commonly used.

     

    Nonetheless, it is common practice among photographic equipment manufacturers to also express luminance in EV for ISO 100 speed. Using K = 12.5, the relationship between EV at ISO 100 and luminance L is then :

    L = 2^(EV-3)

     

    The situation with incident-light meters is more complicated than that for reflected-light meters, because the calibration constant C depends on the sensor type. Illuminance is measured with a flat sensor; a typical value for C is 250 with illuminance in lux. Using C = 250, the relationship between EV at ISO 100 and illuminance E is then :

     

    E = 2.5 × 2^EV
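The two ISO 100 relationships above fall out of the meter equations directly; a minimal Python sketch (function names are my own) makes the arithmetic explicit:

```python
def luminance_from_ev(ev: float, iso: float = 100.0, k: float = 12.5) -> float:
    """Average scene luminance (cd/m^2) implied by an EV reading: L = 2^EV * K / S."""
    return 2.0 ** ev * k / iso

def illuminance_from_ev(ev: float, iso: float = 100.0, c: float = 250.0) -> float:
    """Illuminance (lux) implied by an EV reading: E = 2^EV * C / S."""
    return 2.0 ** ev * c / iso

# At ISO 100 these reduce to L = 2^(EV-3) and E = 2.5 * 2^EV:
print(luminance_from_ev(3))    # 1.0
print(illuminance_from_ev(0))  # 2.5
```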

     

    https://nofilmschool.com/2018/03/want-easier-and-faster-way-calculate-exposure-formula

    Three basic factors go into the exposure formula itself instead: aperture, shutter, and ISO. Plus a light meter calibration constant.

    f-stop²/shutter (in seconds) = lux * ISO/C

     

    If you know at least four of those variables, you’ll be able to calculate the missing value.

    So, say you want to figure out how much light you’re going to need in order to shoot at a certain f-stop. Well, all you do is plug in your values (you should know the f-stop, shutter speed, ISO, and your light meter calibration constant) into the formula below:

    lux = C (f-stop²/shutter (in seconds))/ISO
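As a sanity check, that rearranged formula can be evaluated in a few lines of Python (the function name and the sample settings are my own illustrative choices, using C = 250 as quoted earlier):

```python
def required_lux(f_number: float, shutter_s: float, iso: float, c: float = 250.0) -> float:
    """Solve the incident-light equation for illuminance: lux = C * (N^2 / t) / ISO."""
    return c * (f_number ** 2 / shutter_s) / iso

# Hypothetical example: f/8 at 1/50 s and ISO 400.
print(round(required_lux(8, 1 / 50, 400)))  # 2000
```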

     

    Exposure Value Calculator:

    https://snapheadshots.com/resources/exposure-and-light-calculator

     

    https://www.scantips.com/lights/exposurecalc.html

     

    https://www.pointsinfocus.com/tools/exposure-settings-ev-calculator/#google_vignette

     

    From that perspective, an exposure stop is a measurement of exposure and provides a universal linear scale to measure the increase and decrease in light exposed to the image sensor due to changes in shutter speed, ISO and f-stop.
    ±1 stop is a doubling or halving of the amount of light let in when taking a photo.
    1 EV is just another way to say one stop of exposure change.

     

    One major use of EV (Exposure Value) is just to measure any change of exposure, where one EV implies a change of one stop of exposure. Like when we compensate our picture in the camera.

     

    If the picture comes out too dark, our manual exposure could correct the next one by directly adjusting one of the three exposure controls (f/stop, shutter speed, or ISO). Or if using camera automation, the camera meter is controlling it, but we might apply +1 EV exposure compensation (or +1 EV flash compensation) to make the result brighter, as desired. This use of 1 EV is just another way to say one stop of exposure change.

     

    On a perfect day, the difference between sampling the sky and the sun exposure with diffusing spot meters is about 3.2 EV.

     ~15.4 EV for the sun
     ~12.2 EV for the sky
    

    That is a ballpark, still influenced by surroundings, accuracy parameters, FOV of the sensor…

     

     

     

    EV calculator

    https://www.scantips.com/lights/evchart.html#calc

    http://www.fredparker.com/ultexp1.htm

     

    Exposure value is basically used to indicate an interval on the photographic exposure scale, with a difference of 1 EV corresponding to a standard power-of-2 exposure step, also commonly referred to as a “stop”.

     

    https://contrastly.com/a-guide-to-understanding-exposure-value-ev/

     

    Retrieving photographic exposure from an image

    All you can hope to measure with your camera and some images is the relative reflected luminance, even if you have the camera settings. https://en.wikipedia.org/wiki/Relative_luminance

     

    If you REALLY want to know the amount of light in absolute radiometric units, you’re going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see: Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf

     

    If you are still trying to gauge relative brightness, the level of the sun in Nuke can vary, but it should be in the thousands, i.e. between 30,000 and 65,000 RGB value depending on time of day, season and atmospherics.

     

    The value for a 12 o’clock sun, with the sun sampled at EV 15.5 (shutter 1/30, ISO 100, f/22), is about 32,000 RGB max value (or 32,000 pixel luminance).
    The thing to keep an eye on is the level of contrast between sunny side and fill side. The terminator should be quite obvious; there can be up to 3 stops difference between fill and key on sunlit objects.

     

    Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
    You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down will give you the respective next value on the aperture scale (f8, f11, f16 etc.).
    One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
    So starting with 0.18 in your constant, you will see that raising it by a stop will give you .36 as a floating point number (in linear space), while your f/stop will be f/11 and so on.

    If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.
    Note: make sure to set your Nuke read node to ‘raw data’

     

    In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

     

    To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and press s; this opens the viewer’s properties. Set the center f-stop to 0 there, and the SpotMeter in the viewport will change from aperture and f-stops to EV.

     

    If you are trying to gauge the EV from the pixel luminance in the image:
    – Setting the center f-stop to 0 means that in a neutral plate, the middle 18% gray will equal exposure value 0.
    – So if EV 0 = 0.18 middle gray in Nuke, which equals a pixel luminance of 0.18, doubling that pixel value adds one EV.

    .18 pixel luminance = 0 EV
    .36 pixel luminance = 1 EV
    .72 pixel luminance = 2 EV
    1.44 pixel luminance = 3 EV
    ...
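That progression is easy to sanity-check in Python (the function names are my own; 0.18 is the 18% middle-gray anchor described above):

```python
import math

MIDDLE_GRAY = 0.18  # 18% gray pinned to EV 0 in the viewer

def ev_from_pixel_luminance(lum: float) -> float:
    """Relative EV of a linear pixel value when 0.18 maps to EV 0."""
    return math.log2(lum / MIDDLE_GRAY)

def pixel_luminance_from_ev(ev: float) -> float:
    """Inverse mapping: each +1 EV doubles the linear pixel value."""
    return MIDDLE_GRAY * 2.0 ** ev

print(ev_from_pixel_luminance(0.36))         # 1.0
print(round(pixel_luminance_from_ev(2), 2))  # 0.72
```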
    

     

    This is a geometric progression function: x_n = a·r^(n-1)

    The most basic example of this function is 1,2,4,8,16,32,… The sequence starts at 1 and doubles each time, so

    • a=1 (the first term)
    • r=2 (the “common ratio” between terms is a doubling)

    And we get:

    {a, ar, ar², ar³, … }

    = {1, 1×2, 1×2², 1×2³, … }

    = {1, 2, 4, 8, … }

    In this example the function translates to: x_n = 2^(n-1)
    You can graph this curve through the expression: x = 2^(y-1) :

    You can go back and forth between the two values through a geometric progression function and a log function:

    (Note: in a spreadsheet this is =POWER(2, cell#-1) and =LOG(cell#, 2)+1)

    x = 2^(y-1)    y = log2(x)+1
       x               y
       1               1
       2               2
       4               3
       8               4
       16              5
       32              6
       64              7
       128             8
       256             9
       512             10
       1024            11
       2048            12
       4096            13
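The table can be regenerated, and the round trip between the two expressions verified, with a short Python snippet:

```python
import math

# Rebuild the x/y table above: x = 2**(y - 1) and y = log2(x) + 1 are inverse mappings.
table = [(2 ** (y - 1), y) for y in range(1, 14)]

for x, y in table:
    # Round-trip: log2(x) + 1 recovers y exactly for powers of two.
    assert math.log2(x) + 1 == y

print(table[0], table[-1])  # (1, 1) (4096, 13)
```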

     

    Translating this into a geometric progression between an image pixel luminance and EV:


  • Big tech snags Hollywood talent to pursue enhanced reality

    www.livemint.com/companies/news/big-tech-snags-hollywood-talent-to-pursue-enhanced-reality-11604470420726.html

    https://www.cultofmac.com/726927/apple-among-the-tech-giants-snapping-up-vfx-experts-to-work-on-ar/

    Silicon Valley’s biggest companies are snapping up the people and technology behind some of Hollywood’s blockbusters in an effort to improve their augmented- and virtual-reality offerings.

    “It’s harder to make as much money working in visual effects,” said Paul Debevec, a veteran of the visual-effects industry who is now a professor at the University of Southern California. About 4½ years ago Google hired Mr. Debevec, an award-winning pioneer in the creation of convincing digital humans, to help the company advance extended reality.

    Working in visual effects in film and TV can mean long, unpredictable hours, limited compensation, poor job security and paltry benefits—many call it the “cool tax” one pays for the pleasure of working in Hollywood.

  • Unity3D is finally going public

    www.pcgamesinsider.biz/news/71461/unity-is-finally-going-public/

    techcrunch.com/2020/08/24/unitys-ipo-numbers-look-pretty-unreal/

    Between calendar 2018 and 2019, Unity’s revenue rose by 42 per cent year-on-year to $541.8m. Meanwhile, the firm’s revenue for the six months ending June 30th, 2020 was $351.3m, an increase of 39 per cent. The company has clocked up net losses of $163.2m for the year ending December 31st 2019 and $54.1m for the six months concluding June 30th, 2020.

    Unity reckons the market it addresses around the world is worth around $29bn, across both video games and other creative industries it works in.

    Unlike Epic Games, Unity has long worked with the major platforms and gaming companies to get their engine in front of as many developers and gamers as possible. In fact, the company estimates that 53% of the top 1,000 mobile games on the Apple App Store and Google Play Store and over 50% of mobile, personal computer and console games were made with Unity.

