https://www.filmsourcing.com/film-industry-cheat-sheet
cgsupervisor.blogspot.com/p/pipeline-articles.html
A VFX pipeline belongs to one of three classes:
A VFX pipeline is composed of four structural dimensions:
A VFX pipeline utilizes technology but is not the technology
A VFX pipeline divides a workflow into separate, meaningful tasks assigned to two or more people
A VFX pipeline's task divisions are determined using a tree of specialization across the three dimensions
The VFX pipeline structural form is dictated by
A VFX pipeline is malleable
http://www.clockandflame.com/downloads.html
The Production Pipeline, Webinar
As part of ongoing promotions for the book, The Digital Visual Effects Studio: The Artists and Their Work Revealed, Mayur hosted webinars on December 6 and 7, 2009. In the sessions, Mayur made the case for artist-centric pipelines, as opposed to data-centric pipelines and he shared some insight as to why production pipelines fail to be fast, light and powerful. We are pleased to make available a recording of one of the sessions, with a run-time of 56m30s.
http://www.clockandflame.com/media/dvfxs_webinar_pipeline.mp4
http://www.clockandflame.com/media/dvfxs_webinar_pipeline.flv
https://x.com/AkiyoshiKitaoka/status/1798705648001327209
The left face appears whitish and the right one blackish, but they are made up of the same luminance.
https://community.wolfram.com/groups/-/m/t/3191015
Illusory staircase Gelb effect
https://www.psy.ritsumei.ac.jp/akitaoka/illgelbe.html
Sources familiar with details of the production pegged the cost of the first nine 40-minute episodes at north of $80 million; the second batch of nine about to air has a price tag approaching $100 million. What drove the cost far beyond typical animation expenses, insiders say, were both a labor-intensive approach and frequent cost overruns triggered by delayed script deliveries after the second season was put into production with only a fraction of the season written.
But even more eyebrow-raising than the production cost was that Riot spent $60 million of its own money to promote the first season of “Arcane,” exponentially more than a studio would typically spend for a show it isn’t distributing — and far more than Netflix itself spent ($4 million per episode). Reps for the streaming service declined to comment for this article.
Mochi 1 AI operates on a pay-as-you-go model, meaning you only pay for the services you utilize without any hidden fees.
Bella works in spectral space, allowing effects such as BSDF wavelength dependency, diffraction, or atmosphere to be modeled far more accurately than in color space.
https://superrendersfarm.com/blog/uncategorized/bella-a-new-spectral-physically-based-renderer/
https://www.hasielhassan.com/PlanCraft/#about
It helps you create an Open Schedule Format (OSF) JSON file for your projects.
https://kartaverse.github.io/Reactor-Docs/#/com.AndrewHazelden.KartaVR
Kartaverse is a free open source post-production pipeline that is optimized for the immersive media sector. If you can imagine it, Kartaverse can help you create it in XR!
“Karta” is the Swedish word for map. With KartaVR you can stitch, composite, retouch, and remap any kind of panoramic video: from any projection to any projection. This provides the essential tools for 360VR, panoramic video stitching, depthmap, lightfield, and fulldome image editing workflows.
Kartaverse makes it a breeze to accessibly and affordably create content for use with virtual reality HMDs (head mounted displays) and fulldome theatres by providing ready to go scripts, templates, plugins, and command-line tools that allow you to work efficiently with XR media. The toolset works inside of Blackmagic Design’s powerful node based Fusion Studio and DaVinci Resolve Studio software.
https://education.civitai.com/generative-ai-glossary/
Term | Tags | Description |
---|---|---|
.ckpt | Model | “Checkpoint”, a file format created by PyTorch Lightning, a PyTorch research framework. It contains a PyTorch Lightning machine learning model used (by Stable Diffusion) to generate images. |
.pt | Software | A machine learning model file created using PyTorch, containing algorithms used to automatically perform a task. |
.Safetensors | Model | A file format for Checkpoint models, less susceptible to embedded malicious code. See “Pickle” |
ADetailer | Software, Extension | A popular Automatic1111 Extension, mostly used to enhance fine face and eye detail, but can be used to re-draw hands and full characters. |
AGI | Concept | Artificial General Intelligence (AGI), the point at which AI matches or exceeds the intelligence of humans. |
Algorithm | Concept, Software | A series of instructions that allows a computer to analyze data, learn from it, and use that learning to interpret and accomplish future tasks on its own. |
AnimateDiff | Software, Extension | Technique which involves injecting motion into txt2img (or img2img) generations. https://animatediff.github.io/ |
API | Software | Application Programmer Interface – a set of functions and tools which allow interaction with, or between, pieces of software. |
Auto-GPT | Software, LLM | An experimental open-source agent that chains GPT-4/GPT-3.5 calls together to pursue a user-defined goal autonomously. |
Automatic1111 | Developer, SD User Interface | Creator of the popular Automatic1111 WebUI graphical user interface for SD. |
Bard | Software, LLM | Google’s Chatbot, based on their LaMDA model. |
Batch | Concept | A subset of the training data used in one iteration of model training. In inference, a group of images. |
Bias | Concept, LLM | In Large Language Models, errors resulting from training data; stereotypes, attributing certain characteristics to races or groups of people, etc. Bias can cause models to generate offensive and harmful content. |
Bing | Software, LLM | Microsoft’s ChatGPT-powered Chatbot. |
CFG | Setting | Classifier Free Guidance, sometimes “Guidance Scale”. Controls how closely the image generation process follows the text prompt. |
Checkpoint | Model | The product of training on millions of captioned images scraped from multiple sources on the Web. This file drives Stable Diffusion’s txt2img, img2img, txt2video |
Civitai (Civitai.com) | Community Resource | Popular hosting site for all types of Generative AI resources. |
Civitai Generator | Software, Tool | Free Stable Diffusion Image Generation Interface, available on Civitai.com. |
Civitai Trainer | Software, Tool | LoRA Training interface, available on Civitai.com, for SDXL and 1.5 based LoRA. |
CLIP | Software | An open source model created by OpenAI. Trained on millions of images and captions, it determines how well a particular caption describes an image. |
Cmdr2 | Developer, SD User Interface | Creator of the popular EasyDiffusion, simple one-click install graphical user interface for SD. |
CodeFormer | Face/Image Restoration, Model | A facial image restoration model, for fixing blurry, grainy, or disfigured faces. |
Colab | Tool | Colaboratory, a product from Google Research, allowing execution of Python code through the browser. Particularly geared towards machine learning applications. https://colab.research.google.com/ |
ComfyUI | SD User Interface, Software | A popular powerful modular UI for Stable Diffusion with a “workflow” type workspace. Somewhat more complex than Auto1111 WebUI https://github.com/comfyanonymous/ComfyUI |
CompVis | Organization | Computer Vision & Learning research group at Ludwig Maximilian University of Munich. They host Stable Diffusion models on Hugging Face. |
Conda | Application, Software | An open source package manager for many programming languages, including Python. |
ControlNet | UI Extension | An Extension to Auto1111 WebUI allowing images to be manipulated in a number of ways. https://github.com/Mikubill/sd-webui-controlnet |
Convergence | Concept | The point in image generation where the image no longer changes as the steps increase. |
CUDA | Hardware, Software | Compute Unified Device Architecture, Nvidia’s parallel processing architecture. |
DALL-E / DALL-E 2 | Model | Deep learning image models created by OpenAI, available as a commercial image generation service. |
Danbooru | Community Resource | English-based image board website specializing in erotic manga fan art, NSFW. |
Danbooru Tag | Community Resource | System of keywords applied to Danbooru images describing the content within. When using Checkpoint models trained on Danbooru images, keywords can be referenced in Prompts. |
DDIM (Sampler) | Sampler | Denoising Diffusion Implicit Models. See Samplers. |
Deep Learning | Concept | A type of Machine Learning, where neural networks attempt to mimic the behavior of the human brain to perform tasks. |
Deforum | UI Extension, Community Resource | A community of AI image synthesis developers, enthusiasts, and artists, producing Generative AI tools. Most commonly known for a Stable Diffusion WebUI video extension of the same name. |
Denoising/Diffusion | Concept | The process by which random noise (see Seed) is iteratively reduced into the final image. |
depth2img | Concept | Infers the depth of an input image (using an existing model), and then generates new images using both the text and depth information. |
Diffusion Model (DM) | Model | A generative model, used to generate data similar to the data on which it was trained. |
DPM adaptive (Sampler) | Sampler | Diffusion Probabilistic Model (Adaptive). See Samplers. Ignores Step Count. |
DPM Fast (Sampler) | Sampler | Diffusion Probabilistic Model (Fast). See Samplers. |
DPM++ 2M (Sampler) | Sampler | Diffusion Probabilistic Model – Multi-step. Produces good quality results within 15-20 Steps. |
DPM++ 2M Karras (Sampler) | Sampler | Diffusion Probabilistic Model – Multi-step. Produces good quality results within 15-20 Steps. |
DPM++ 2S a Karras (Sampler) | Sampler | Diffusion Probabilistic Model – Single-step. Produces good quality results within 15-20 Steps. |
DPM++ 2Sa (Sampler) | Sampler | Diffusion Probabilistic Model – Single-step. Produces good quality results within 15-20 Steps. |
DPM++ SDE (Sampler) | Sampler | |
DPM++ SDE Karras (Sampler) | Sampler | |
DPM2 (Sampler) | Sampler | |
DPM2 a (Sampler) | Sampler | |
DPM2 a Karras (Sampler) | Sampler | |
DPM2 Karras (Sampler) | Sampler | |
DreamArtist | UI Extension, Software | An extension to WebUI allowing users to create trained embeddings to direct an image towards a particular style, or figure. A PyTorch implementation of the research paper DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Contrastive Prompt-Tuning, Ziyi Dong, Pengxu Wei, Liang Lin. |
DreamBooth | Software, Community Resource | Developed by Google Researchers, DreamBooth is a deep learning image generation model designed to fine-tune existing models (checkpoints). Can be used to create custom models based on a set of images. |
DreamStudio | Organization, SD User Interface | A commercial web-based image generation service created by Stability AI using Stable Diffusion models. |
Dropout (training) | Concept | A technique to prevent overfitting by randomly ignoring some images/tokens, etc. during training. |
DyLoRA C3Lier | ||
DyLoRA LierLa | ||
DyLoRA Lycoris | ||
EMA | Model | Exponential Moving Average. A full EMA Checkpoint model contains extra training data which is not required for inference (generating images). Full EMA models can be used to further train a Checkpoint. |
Emad | Organization, Developer | Emad Mostaque, CEO and co-founder of Stability AI, one of the companies behind Stable Diffusion. |
Embedding | Model, UI Extension | Additional file inputs to help guide the diffusion model to produce images that match the prompt. Can be a graphical style, representation of a person, or object. See Textual Inversion and Aesthetic Gradient. |
Emergent Behavior | Concept, LLM | Unintended abilities exhibited by an AI model. |
Entropy | Concept | A measure of randomness, or disorder. |
Epoch | Concept | The number of times a model training process looked through a full data set of images. E.g. the 5th Epoch of a Checkpoint model looked five times through the same data set of images. |
ESRGAN | Upscaler, Model | Enhanced Super-Resolution Generative Adversarial Networks. A technique to reconstruct a higher-resolution image from a lower-resolution image. E.g. upscaling of a 720p image into 1080p. Implemented as a tool within many Stable Diffusion interfaces. |
Euler (Sampler) | Sampler | Named after Leonhard Euler, a numerical procedure for solving ordinary differential equations, See Samplers. |
Euler a (Sampler) | Sampler | Ancestral version of the Euler sampler. Named after Leonhard Euler, a numerical procedure for solving ordinary differential equations, See Samplers. |
Finetune | Concept | Continuing the training of an existing model (Checkpoint) on a smaller, more specific data set to steer it towards a particular style or subject. |
float16 | Setting, Model, Concept | Half-Precision floating point number. |
float32 | Setting, Model, Concept | Full-Precision floating point number. |
Generative Adversarial Networks (GANs) | Model | A pair of AI models: one generates new data, and the other evaluates its quality. |
Generative AI | Concept | AI systems that create new content (images, text, audio, video, code) based on patterns learned from their training data. |
GFPGAN | Face/Image Restoration, Model | Generative Facial Prior, a facial restoration model for fixing blurry, grainy, or disfigured faces. |
Git (GitHub) | Application, Software | Hosting service for software development, version control, bug tracking, documentation. |
GPT-3 | Model, LLM | Generative Pre-trained Transformer 3, a language model, using machine learning to produce human-like text, based on an initial prompt. |
GPT-4 | Model, LLM | Generative Pre-trained Transformer 4, a language model, using machine learning to produce human-like text, based on an initial prompt. A huge leap in performance and reasoning capability over GPT 3/3.5. |
GPU | Hardware | A Graphics Processing Unit, a type of processor designed to perform quick mathematical calculations, allowing it to render images and video for display. |
Gradio | Software | A web-browser based interface framework, specifically for Machine Learning applications. Auto1111 WebUI runs in a Gradio interface. |
Hallucinations (LLM) | LLM, Concept | Sometimes LLM models like ChatGPT produce information that sounds plausible but is nonsensical or entirely false. This is called a Hallucination. |
Hash (Checkpoint model) | Model, Concept | An algorithm for verifying the integrity of a file, by generating an alphanumeric string unique to the file in question. Checkpoint models are hashed, and the resulting string can be used to identify that model. |
Heun (Sampler) | Sampler | Named after Karl Heun, a numerical procedure for solving ordinary differential equations. See Samplers. |
Hugging Face | Organization | A community/data science platform providing tools to build, train, and deploy machine learning models. |
Hypernetwork (Hypernet) | Model | A method to guide a Checkpoint model towards a specific theme, object, or character based on its own content (no external data required). |
img2img | Concept | Process to generate new images based on an input image, and txt2img prompt. |
Inpainting | Concept | The practice of removing or replacing objects in an image based on a painted mask. |
Kohya | Software | Can refer to Kohya-ss scripts for LoRA/finetuning (https://github.com/kohya-ss/sd-scripts) or the Windows GUI implementation of those scripts (https://github.com/bmaltais/kohya_ss) |
LAION | Organization | A non-profit organization, providing data sets, tools, and models, for machine learning research. |
LAION-5B | Model | A large-scale dataset for research purposes consisting of 5.85 billion CLIP-filtered image-text pairs. |
Lanczos | Upscaler, Model | An interpolation method used to compute new values for sampled data. In this case, used to upscale images. Named after creator, Cornelius Lanczos. |
Large Language Model (LLM) | LLM, Model | A type of Neural Network that learns to write and converse with users. Trained on billions of pieces of text, LLMs excel at producing coherent sentences and replying to prompts in the correct context. They can perform tasks such as re-writing and summarizing text, chatting about various topics, and performing research. |
Latent Diffusion | Model | A type of diffusion model that contains compressed image representations instead of the actual images. This type of model allows the storage of a large amount of data that can be used by encoders to reconstruct images from textual or image inputs. |
Latent Mirroring | Concept, UI Extension | Applies mirroring to the latent images mid-generation to produce anything from subtly balanced compositions to perfect reflections. |
Latent Space | Concept | The information-dense space where the diffusion model’s image representation, attention, and transformation are merged and form the initial noise for the diffusion process. |
LDSR | Upscaler | Latent Diffusion Super Resolution upscaling. A method to increase the dimensions/quality of images. |
Lexica | Community Resource | Lexica.art, a search engine for stable diffusion art and prompts. |
LlamaIndex (GPT Index) | Software, LLM | https://github.com/jerryjliu/llama_index – Allows the connection of text data to an LLM via a generated “index”. |
LLM | LLM, Model | A type of Neural Network that learns to write and converse with users. Trained on billions of pieces of text, LLMs excel at producing coherent sentences and replying to prompts in the correct context. They can perform tasks such as re-writing and summarizing text, chatting about various topics, and performing research. |
LMS (Sampler) | Sampler | |
LMS Karras (Sampler) | Sampler | |
LoCON | ||
LoHa | ||
LoKR | ||
LoRA | Model, Concept | Low-Rank Adaptation, a method of training for SD, much like Textual Inversion. Can capture styles and subjects, producing better results in a shorter time, with smaller output files, than traditional finetuning. |
LoRA C3Lier | ||
LoRA LierLa | ||
Loss (function) | Concept | A measure of how well an AI model’s outputs match the desired outputs. |
Merge (Checkpoint) | Model | A process by which Checkpoint models are combined (merged) to form new models. Depending on the merge method (see Weighted Sum, Sigmoid) and multiplier, the merged model will retain varying characteristics of its constituent models. |
Metadata | Concept, Software | Metadata is data that describes data. In the context of Stable Diffusion, metadata is often used to describe the Prompt, Sampler settings, CFG, steps, etc. which are used to define an image, and stored in a .png header. |
MidJourney | Organization, SD User Interface | A commercial web-based image generation service, similar to DALL-E, or the free, open source, Stable Diffusion. |
Model | Model | Alternative term for Checkpoint |
Motion Module | Software | Used by AnimateDiff to inject motion into txt2img (or img2img) generations. |
Multimodal AI | Concept | AI that can process multiple types of inputs, including text, images, video or speech. |
Negative Prompt | Setting, Concept | Keywords which tell a Stable Diffusion prompt what we don’t want to see, in the generated image. |
Neural Network | Concept, Software | Mathematical systems that act like a human brain, with layers of artificial “neurons” helping find connections between data. |
Notebook | Community Resource, Software | See Colab. A Jupyter notebook service providing access, free of charge, to computing resources including GPUs. |
NovelAI (NAI) | Organization | A paid, subscription based AI-assisted story (text) writing service. Also has a txt2img model, which was leaked and is now incorporated into many Stable Diffusion models. |
Olivio (Sarikas) | Community Resource | Olivio produces wonderful SD content on YouTube (https://www.youtube.com/@OlivioSarikas) – one of the best SD news YouTubers out there! |
OpenAI | Organization | AI research laboratory consisting of the for-profit corporation OpenAI LP and the non-profit OpenAI Inc. |
OpenPose | Model, Software | A method for extracting a “skeleton” from an image of a person, allowing poses to be transferred from one image to another. Used by ControlNet. |
Outpainting | Concept | The practice of extending the outer border of an image, into blank canvas space, while maintaining the style and content of the image. |
Overfitting | Concept | When an AI model learns the training data too well and performs poorly on unseen data. |
Parameters (LLMs) | Concept, Software, LLM | Numerical points across a Large Language Model’s training data. Parameters dictate how proficient the model is at its tasks. E.g. a 6B (Billion) Parameter model will likely perform less well than a 13B Parameter model. |
Pickle | Concept, Software | Community slang term for potentially malicious code hidden within models and embeddings. To be “pickled” is to have unwanted code execute on your machine (be hacked). |
PLMS (Sampler) | Sampler | Pseudo Linear Multi-Step sampler. See Samplers. |
Prompt | Concept | Text input to Stable Diffusion describing the particulars of the image you would like output. |
Pruned/Pruning | Model | A method of optimizing a Checkpoint model to increase inference (prompt generation) speed and reduce file size and VRAM cost. |
Python | Application, Software | A popular, high-level, general purpose coding language. |
PyTorch | Application, Software | An open source machine learning library, created by Meta. |
Real-ESRGAN | Upscaler | An image restoration method. |
Refiner | Model | Part of SDXL’s two-stage pipeline – the Refiner further enhances detail from the base model. |
SadTalker | UI Extension | https://github.com/OpenTalker/SadTalker A framework for facial animation/lip synching based upon an audio input. |
Samplers | Sampler | Mathematical functions providing different ways of solving differential equations. Each will produce a slightly (or significantly) different image result from the random latent noise generation. |
Sampling Steps | Sampler, Concept | The number of denoising steps spent generating (diffusing) your image. |
SD 1.4 | Model | A latent txt2img model, the default model for SD at release. Fine-tuned on 225k steps at resolution 512×512 on laion-aesthetics v2 data set. |
SD 1.5 | Model | A latent txt2img model, updated version of 1.4, fine-tuned on 595k steps at resolution 512×512 on laion-aesthetics v2 data set. |
SD UI | Application, Software | Colloquial term for Cmdr2’s popular graphical interface for Stable Diffusion prompting. |
SD.Next | Software | See Vlad, Vladmandic Fork of Auto1111 WebUI. |
SDXL 0.9 | Model | Stability AI’s latest (June 2023) Stable Diffusion model. Will become SDXL 1.0 and be released ~July 2023. |
Seed | Concept | A pseudo-random number used to initialize the generation of random noise, from which the final image is built. Seeds can be saved and used along with other settings to recreate a particular image. |
Shoggoth Tongue | Concept, LLM | A humorous allusion to the language of the fictional monsters in the Cthulhu Mythos, “Shoggoth Tongue” is the name given to advanced ChatGPT commands which are particularly arcane and difficult to understand, but allow ChatGPT to perform advanced actions outside of the intended operation of the system. |
Sigmoid (Interpolation Method) | Model, Concept | A method for merging Checkpoint Models based on a Sigmoid function – a mathematical function producing an “S” shaped curve. |
Stability AI | Organization | AI technology company co-founded by Emad Mostaque. One of the companies behind Stable Diffusion. |
Stable Diffusion (SD) | Application, Software | A deep learning, text-to-image model released in 2022. It is primarily used to generate detailed images based on provided text descriptions. |
SwinIR | Face/Image Restoration, Model | An image restoration transform, aiming to restore high quality images from low quality images. |
Tensor | Software | A container, in which multi-dimensional data can be stored. |
Tensor Core | Hardware | Processing unit technology developed by Nvidia, designed to carry out matrix multiplication, an arithmetic operation. |
Textual Inversion | Model, Concept, UI Extension | A technique for capturing concepts from a small number of sample images in a way that can influence txt2img results towards a particular face, or object. |
Token | Concept | A token is roughly a word, a punctuation, or a Unicode character in a prompt. |
Tokenizer | Concept, Model | The process/model through which text prompts are turned into tokens, for processing. |
Torch 2.0 | Software | The latest (March 2023) PyTorch release. |
Training | Concept | The process of teaching an AI model by feeding it data and adjusting its parameters. |
Training Data | Model | A set of many images used to “train” a Stable Diffusion model, or embedding. |
Training Data | Concept, LLM, Model | The data sets used to help AI models learn; can be text, images, code, or other data, depending on the type of model to be trained. |
Turing Test | Concept | Named after mathematician Alan Turing, a test of a machine’s ability to behave like a human. The machine passes if a human can’t distinguish the machine’s response from another human. |
txt2img | Concept, Model | Model/method of image generation via entry of text input. |
txt2video | Concept, Model | Model/method of video generation via entry of text input. |
Underfitting | Concept | When an AI model cannot capture the underlying pattern of the data due to incomplete training. |
UniPC (Sampler) | Sampler | A recently released (3/2023) sampler based upon https://huggingface.co/docs/diffusers/api/schedulers/unipc |
Upscale | Upscaler, Concept | The process of converting low resolution media (images or video) into higher resolution media. |
VAE | Model | Variational Autoencoder. A .vae.pt file which accompanies a Checkpoint model and provides additional detail improvements. Not all Checkpoints have an associated vae file, and some vae files are generic and can be used to improve any Checkpoint model. |
Vector (Prompt Word) | Concept | An attempt to mathematically represent the meaning of a word, for processing in Stable Diffusion. |
Venv | Software | A Python “Virtual Environment” which allows multiple instances of python packages to run, independently, on the same PC. |
Vicuna | LLM, Software, Model | https://vicuna.lmsys.org/ An open-source Chatbot model developed by students and faculty from UC Berkeley in collaboration with UCSD and CMU. |
Vladmandic | Software, SD User Interface | A popular “Fork” of Auto1111 WebUI, with its own feature-set. https://github.com/vladmandic/automatic |
VRAM | Hardware | Video random access memory. Dedicated Graphics Card (GPU) memory used to store pixels, and other graphical processing data, for display. |
Waifu Diffusion | Model | A popular text-to-image model, trained on high quality anime images, which produces anime style image outputs. Originally produced for SD 1.4, now has an SDXL version. |
WebUI | Application, Software, SD User Interface | Colloquial term for Automatic1111’s WebUI – a popular graphical interface for Stable Diffusion prompting. |
Weighted Sum (Interpolation Method) | Concept | A method of Checkpoint merging using the formula Result = ( A * (1 – M) ) + ( B * M ); a minimal code sketch follows this table. |
Weights | Model | Alternative term for Checkpoint |
Wildcards | Concept | Text files containing terms (clothing types, cities, weather conditions, etc.) which can be automatically input into image prompts, for a huge variety of dynamic images. |
xformers | UI Extension, Software | Optional library to speed up image generation. Superseded somewhat by new options implemented by Torch 2.0 |
yaml | Software, UI Extension, Model | A human-readable data-serialization language commonly used for configuration files. Yaml files can accompany Checkpoint models and provide Stable Diffusion with additional information about the Checkpoint. |
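As a concrete illustration of the Weighted Sum entry above, here is a minimal, hypothetical Python sketch that applies Result = ( A * (1 – M) ) + ( B * M ) tensor by tensor to two already-loaded checkpoint state dicts. The function name and loading step are assumptions for illustration, not the actual WebUI implementation.

import torch  # assumes the two checkpoints are already loaded as state dicts

def weighted_sum_merge(state_a, state_b, m):
    """Blend two checkpoint state dicts: m=0 keeps model A, m=1 keeps model B."""
    merged = {}
    for key, tensor_a in state_a.items():
        if key in state_b and torch.is_tensor(tensor_a):
            # Result = ( A * (1 - M) ) + ( B * M ), applied per tensor
            merged[key] = tensor_a * (1.0 - m) + state_b[key] * m
        else:
            # keys missing from B are carried over from A unchanged
            merged[key] = tensor_a
    return merged

The state dicts would typically come from torch.load() on .ckpt files (or safetensors.torch.load_file() for .safetensors), and the merged dict can be saved back out the same way.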
https://www.russian3dscanner.com/wrap4d/
R3DS Wrap4D is an extended version of Wrap designed specifically for 4D processing. It takes a sequence of textured 3D scans as an input and produces a sequence of meshes with a consistent topology as an output.
The solution includes 12 new nodes. At the heart of the pipeline is the FacialWrapping node which combines the power of the BlendWrapping node with the results from the lip and eyelid detector. The idea behind the node is to provide a robust result that doesn’t require cleanup.
The USD ALab data set is the Animal Logic interpretation and implementation of real world production and pipeline concepts. There are two downloads available, including guiding documents and two sets of textures.
The first download contains the ALab scene assets themselves, derived from our production assets and conformed for compatibility to allow them to load in any tool which supports USD.
The second download is an optional extra, a “production rendering Texture Pack,” which delivers 4K OpenEXR textures with udims for production style rendering.
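As a quick way to sanity-check the download, a minimal sketch using the standard pxr USD Python bindings is shown below; the entry-layer path is a placeholder, so point it at whichever root .usda ships in the package.

from pxr import Usd, UsdGeom

# Placeholder path: substitute the root layer that ships with the ALab download
stage = Usd.Stage.Open("/path/to/alab/entry.usda")

# Report the stage up axis and walk the prim hierarchy
print("Up axis:", UsdGeom.GetStageUpAxis(stage))
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())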
https://www.awn.com/blog/blame-canada-and-covid
deadline.com/2021/08/walt-disney-animation-studios-vancouver-studio-what-if-1234809175/
Effective next year, Walt Disney Animation Studios is throwing the doors open to a new facility in Vancouver, BC that will focus on long-form series and special projects for Disney+. The first in the pipeline is the anticipated, feature-quality musical series Moana.
https://www.freecodecamp.org/news/advanced-computer-vision-with-python/
https://www.freecodecamp.org/news/how-to-use-opencv-and-python-for-computer-vision-and-ai/
Working for a VFX (Visual Effects) studio provides numerous opportunities to leverage the power of Python and OpenCV for various tasks. OpenCV is a versatile computer vision library that can be applied to many aspects of the VFX pipeline. Here’s a detailed list of opportunities to take advantage of Python and OpenCV in a VFX studio:
Interpolating frames from an EXR sequence using OpenCV can be useful when you have only every second frame of a final render and you want to create smoother motion by generating intermediate frames. However, keep in mind that interpolating frames might not always yield perfect results, especially if there are complex changes between frames. Here’s a basic example of how you might use OpenCV to achieve this:
import cv2
import numpy as np
import os

# Replace with the path to your EXR frames
exr_folder = "path_to_exr_frames"

# Replace with the appropriate frame extension and naming convention
frame_template = "frame_{:04d}.exr"

# Define the range of frame numbers you have (only every second frame was rendered)
start_frame = 1
end_frame = 100
step = 2

# Define the output folder for interpolated frames
output_folder = "output_interpolated_frames"
os.makedirs(output_folder, exist_ok=True)

# Loop through the rendered frames and generate the missing in-between frames
for frame_num in range(start_frame, end_frame + 1, step):
    frame_path = os.path.join(exr_folder, frame_template.format(frame_num))
    next_frame_path = os.path.join(exr_folder, frame_template.format(frame_num + step))

    if os.path.exists(frame_path) and os.path.exists(next_frame_path):
        # Read the float data as-is (requires OpenCV built with OpenEXR support)
        frame = cv2.imread(frame_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
        next_frame = cv2.imread(next_frame_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)

        # Interpolate by simple averaging of the two neighbouring frames
        interpolated_frame = (frame + next_frame) / 2

        # Save the result under the missing in-between frame number
        output_path = os.path.join(output_folder, frame_template.format(frame_num + 1))
        cv2.imwrite(output_path, interpolated_frame)

        print(f"Interpolated frame {frame_num + 1}")
Please note the following points: simple per-pixel averaging will ghost anything that moves between frames, OpenCV must be built with OpenEXR support to read and write .exr files, and the averaged frame is only a rough stand-in for proper retiming. A motion-aware variant is sketched below.
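If the averaging approach proves too soft, one rough motion-aware alternative is to warp one frame halfway toward the next along dense optical flow. This sketch is added for illustration only (the function name and Farneback parameters are assumptions, not from the original article); dedicated retiming tools handle occlusions and large motion far better.

import cv2
import numpy as np

def midframe_via_flow(frame_a, frame_b):
    """Approximate the frame halfway between frame_a and frame_b (float BGR images)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Farneback flow expects 8-bit single-channel input, so normalize the float data
    gray_a = cv2.normalize(gray_a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    gray_b = cv2.normalize(gray_b, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Dense optical flow from frame_b back to frame_a
    flow = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Sample frame_a along half of the backward flow: a crude single-warp approximation
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

# Drop-in replacement for the averaging line in the loop above:
# interpolated_frame = midframe_via_flow(frame, next_frame)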
You’ve been in the VFX Industry for over a decade. Tell us about your journey.
It all started with my older brother giving me a Commodore 64 personal computer as a gift back in the late '80s. I realised then that I could create something directly from my imagination using this new digital medium. And, eventually, make a living in the process.
That led me to start my professional career in 1990. From live TV to games to animation, all the way to live-action VFX in recent years.
I have never stopped craving to create art since those early days. And I have been incredibly fortunate to work with really great talent along the way, which made my journey so much more effective.
What inspired you to pursue VFX as a career?
An incredible combination of opportunities, really. The opportunity to express myself as an artist and earn money in the process. The opportunity to learn about how the world around us works and how best to solve problems. The opportunity to share my time with other talented people with similar passions. The opportunity to grow and adapt to new challenges. The opportunity to develop something that was never done before. A perfect storm of creativity that fed my continuous curiosity about life and genuinely drove my inspiration.
Tell us about the projects you’ve particularly enjoyed working on in your career
I quite enjoyed working on live TV projects, as the combination of tight deadlines and high quality was an incredible learning platform for a professional artist. But working on large, high-end live-action feature projects was really where I learnt most of my trade, and what gave me the most satisfaction.
Every film I worked on had some memorable experiences, from Avatar to Iron Man 3 to The Jungle Book, the Planet of the Apes films and The Hobbit trilogy, to name a few.
But above all the technical challenges and the high quality we reached in each and every project I worked on, the best memories come from working with amazing and skilled artists from a variety of disciplines, as those were my true mentors and became my best friends.
Post Production, Animation, VFX, Motion Graphics, Video Editing …
What are some technologies and trends that you think are emerging in the VFX Industry?
In the last few years there has definitely been a push from some major studios to turn VFX into a commodity, in the more negative sense of the word. When a product reaches a level of quality that attracts a mass of consumers and its opportunities plateau, large corporations tend to respond by maximising sales value through marketing schemes and deliverables rather than through the core values of the product itself. This commoditisation approach tends to empower agents who are not necessarily knowledgeable about a product's cycles and, in the process, lowers the quality of the product itself for the sake of profit. It is a pretty common pattern in modern society and it applies to any brand, not just VFX.
One challenge with VFX technology and artistry is that, for the most part, it relies on the effectiveness of artists and visionaries. Limiting the authority, ownership and perspective of that crowd has directly impacted the overall quality of the last decade of productions, both technically and artistically. The creative forces able to deliver projects one could identify as truly creative breakthroughs have been few and far between, while the majority of productions seem to have suffered from some of these commoditisation patterns.
The other, bigger challenge with this trend is that VFX, due to various historical business arrangements, often relies on unbalanced resources as well as very small and fragile economic cycles and margins. This makes the entire industry extremely susceptible to marketing failures and unstable leadership, as a few recent bankruptcies have demonstrated.
It is taking a reasonable amount of time for the VFX crowd to acknowledge these trends and learn to be profitable, as the majority have never been educated in fair business practices.
Thankfully, the VFX circle is also a crowd of extremely adaptable and talented individuals who are quite capable of resolving issues, finding alternatives and leveraging their passion. That, I believe, is one of the drives behind the current evolution in the use of artificial intelligence, virtual reality, virtual production, real-time rendering, and so on.
There is still a long path ahead of us, but I hope we are all learning ways to make our passion speak profitably for everyone.
It is also highly likely that, in the near future, larger software and hardware corporations, thanks to their more profitable business practices, large development teams and better understanding of marketing, will take over a lot of the cycles that production houses currently run, and in the process allow creative studios to focus back on VFX artistry.
What effect has the pandemic-induced lockdown had on the industry?
It is still too early to say. I fear that if live-action production does not restart soon, we may see some of the economic challenges I mentioned above, at both studio and artist scale. There is definitely a push from production houses to make large distribution clients understand the fragility of the moment, especially in relation to payment cycles and economic support. Thus, there is still a fair risk that the few studios which adopted a more commoditised view of production will make their artists pay some price for those choices.
But any challenge brings opportunities. For example, there is finally some recognition of, and momentum behind, work-from-home as a feasible solution to a lot of the current office-based production limitations and general artistry restrictions. While there is no win-win in this pandemic, that could be a silver lining.
What would you say to the budding artists who wish to become CG artists or VFX professionals?
Follow your passion but treat this career as any other business.
Learn to be adaptable. Find a true balance between professional and family life. Carefully plan your future. And watch our channel to learn more about all of this.
What inspired you to create a channel for aspiring artists?
Like many fellow and respected artists, I love this industry, but I had to learn a lot of business practices at my own expense.
You can learn tools, cycles and software from books and schools. But production life tends to drive its own rhythms and there are fewer opportunities to absorb those.
Throughout my career I had some challenges finding professionals willing to share their time and invest in me. But I was still extremely fortunate to find mentors who helped me to be economically and professionally successful in this business. I owe a lot to these people. I promised myself I would return that favour by helping other artists myself.
What can students expect to learn from your channel?
I am excited to have the opportunity to fill some of the voids that the current education system and industry may have, by helping new artists with true life stories from some of the most accomplished and successful talents I met during my career. We will talk about technology trends as much as our life experiences as artists, discuss career advice, try to look into the future of the industry, and offer professional tips. The aim of this mentorship is to inspire new generations to focus on what is most important for the VFX industry: taking responsibility for their art and passions as much as for their families.
And, in the process, to feel empowered to materialise from their imagination more and more of those creative, awe-inspiring moments that this art form has gifted us with so far.
www.shootonline.com/news/pandemic-production-prospects-possibilities-concerns
“For many, production has stopped in its tracks due to the coronavirus pandemic. ”
“Others have stepped up their in-house activity, tapping into their homegrown production and post capabilities.” [Or working from home]
“While losing the physical proximity and communal nature of collaboration, creatives and artists have managed to stay connected through technology”
“While some projects have “completely died,” said Gavin Wellsman [a creative director at The Mill in New York], others are still in the pipeline and have adapted to a world where social distancing is imperative and live-action production as we’ve known it is no longer feasible at the moment. Clients are turning to visual effects, CG and other options.”
“Still, much work has fallen by the wayside. And many projects don’t translate properly from live action to another [full CG] discipline.”
“London-based independent production house MindsEye launched HomeStudio…. HomeStudio brings together a lineup of directors who have their own equipment, DPs with studio space, and stop-frame animators who can turn out content in this period of imposed self-isolation. This isn’t a roster of talent that a company has signed in the traditional sense; rather it’s a collection of talent that’s being made available to agencies and brands.”
“However, ingenuity, imagination and improvisation can only go so far when production and post companies are suffering from poor cash flow, a situation which is exacerbated by the COVID-19 crisis. …many companies would settle for–or at least welcome with open arms–getting paid in a timely fashion by marketers and ad agencies for services already rendered. ”
“In a live poll of over 500 AICP member participants during a Zoom Town Hall last month, the issue of outstanding receivables was the most immediate concern. It was found that 28% of companies reported that they are owed in excess of $1 million, while 23% are owed between $500,000-$1 million and 34% are owed between $100,000-$500,000. The members were also polled on how late these payments are: 29% reported that payments are 45 or more days late (per their contracted terms), and one-third are 30-45 days late. Extrapolating across the industry, conservatively, this is well in excess of $200 million.”
“Matt Miller, president and CEO of AICP: A healthy production and post community is integral to the overall economy’s recovery once we are clear of the pandemic. Production and post talent will be needed to help brands connect with the consumer marketplace and bring it back to life. It’s thus in the interest of [all] marketers and agencies to do what they can–and should do–to contribute to keeping the production and post sectors whole. “
Ethan Roffler
I recently had the honor of interviewing this VFX genius and gained great insight into what it takes to work in the entertainment industry. Keep in mind, these questions are coming from an artist’s perspective but can be applied to any creative individual looking for some wisdom from a professional. So grab a drink, sit back, and enjoy this fun and insightful conversation.
Ethan
To start, I just wanted to say thank you so much for taking the time for this interview!
Daniele
My pleasure.
When I started my career I struggled to find help. Even people in the industry at the time were not that helpful. Because of that, I decided very early on that I was going to do exactly the opposite. I spend most of my weekends talking to or helping students. ;)
Ethan
That’s awesome! I have also come across the same struggle! Just a heads up, this will probably be the most informal interview you’ll ever have haha! Okay, so let’s start with a small introduction!
Daniele
Short introduction: I worked very hard and got lucky enough to work on great shows with great people. ;) Slightly longer version: I started working for a TV channel, very early on, while I was learning about CG. I slowly made my way across the world, working alongside great people on amazing shows. I learned that to be successful in this business, you have to really love what you do as much as you respect the people around you. What you do will contribute to the final product; the way you work with people will make a difference in your life.
Ethan
How long have you been an artist?
Daniele
Loaded question. I believe I am still trying and craving to be one. After each production I finish, I realize how much I still do not know, and how many things I would like to try. I guess in my CG Sup and generalist world, being an artist is about learning as much about the latest technologies and production cycles as I can, then putting that into practice. Having said that, I do consider myself a cinematographer first, as I have been doing that for about 25 years now.
Ethan
Words of true wisdom, the more I know the less I know:) How did you get your start in the industry?
How did you break into such a competitive field?
Daniele
There were not many schools when I started. It was all about a few magazines, some books, and pushing software around trying to learn how to make pretty images. Opportunities opened because of that knowledge! The true break was learning to work hard to achieve a Suspension of Disbelief in my work that people would recognize as such. It's not something everyone can do, but I was fortunate not to be scared of hard work, to be a quick learner, and to have very good supervisors and colleagues to learn from.
Ethan
Which do you think is better, having a solid art degree or a strong portfolio?
Daniele
Very good question. A strong portfolio will get you a job now. A solid degree will likely get you jobs for a longer period. Let me digress here: working as an artist is not about being an artist, it's about making money as an artist. Most people fail to make that distinction and have either a poor career or lack the understanding to build a stable one. One should never mix art with working as an artist. You can do both only if you understand business and are fair to yourself.
Ethan
That’s probably the most helpful answer to that question I have ever heard.
What’s some advice you can offer to someone just starting out who wants to break into the industry?
Daniele
Breaking into the industry is not just about knowing your art. It's about knowing good business practices. Prepare a good demo reel based on the skill you are applying for; research all the places where you want to apply and why; send out as many reels as you can; follow up each reel with a phone call. Business is all about the right time and the right place.
Ethan
A follow-up question to that is: Would you consider it a bad practice to send your demo reels out in mass quantity rather than focusing on a handful of companies to research and apply for?
Daniele
Depends how desperate you are… I would say research is a must. To improve your options, you need to know which company is working on what and what skills they are after. If you were selling vacuum cleaners you probably would not want to waste energy contacting shoemakers or cattle farmers.
Ethan
What do you think the biggest killer of creativity and productivity is for you?
Daniele
Money…If you were thinking as an artist. ;) If you were thinking about making money as an artist… then I would say “thinking that you work alone”.
Ethan
Best. Answer. Ever.
What are ways you fight complacency and maintain fresh ideas, outlooks, and perspectives?
Daniele
Two things: Challenge yourself to go outside your comfort zone. And think outside of the box.
Ethan
What are the ways/habits you have that challenge yourself to get out of your comfort zone and think outside the box?
Daniele
If you think you are a good character painter, pick up a camera and go take pictures of amazing landscapes. If you think you are only good at painting or sketching, learn how to code in Python. If you cannot solve a problem, be it a project or a person, learn to ask for help, or learn to look at the problem from various perspectives. If you are an introvert, learn to be an extrovert, and vice versa. And so on…
Ethan
How do you avoid burnout?
Daniele
Oh… I wish I had learned about this earlier. I think anyone who has a passion for something is at risk of burning out. Artists more than most, because we see the world differently and our passion goes deep. You avoid burnout by thinking in terms of a long-term plan and remembering that you have an obligation to repay your talent by supporting and cherishing yourself and your family, not your paycheck. You do this by treating your art as a business, using business skills when dealing with your career and artistic skills only when dealing with the project itself.
Ethan
Looking back, what was a big defining moment for you?
Daniele
Recognizing that people around you, those being colleagues, friends or family, come first.
It changed my career overnight.
Ethan
Who are some of your personal heroes?
Daniele
Too many to list. Most recently… James Cameron; Joe Letteri; Lawrence Krauss; Richard Dawkins. Because they all mix science, art, and poetry in their own way.
Ethan
Last question:
What’s your dream job? ;)
Daniele
Teaching artists to be better business people… as it will help us all improve our lives and the careers we have chosen…
Being a VFX artist is fundamentally based on mistrust.
This is because schedules, pipelines, technology, creative calls… all have a native instability to them that causes everyone to develop a genuine but beneficial lack of trust in the status quo. This is a fine balancing act to build into your character. The VFX motto "Love everyone but trust no one" was born of that.
Interview for the magazine InCG, Taiwan, Issue 28, 201609
————————————————————-
– First of all, can you introduce yourself to our audience: who you are and how you joined this part of the industry? Can you talk about your past experience as a VFX artist?
My career started on a late Christmas night in the middle of the 1980s. I remember waking up to the soundtrack of Ghostbusters playing from a new Commodore 64 console. My older brother, Claudio, had left the console in my room as a gift. And I was hooked.
From that moment I spent any free time available playing with computer technology, and in particular computer graphics. Eventually this evolved into a passion that pushed me to learn the basic techniques and the art of everything related to computer graphics, at a time when consumer-level computer graphics was still in its infancy.
My place was filled with every computer graphics magazine I could get my hands on, as well as the first few books; a collection that at some point grew to around 300 titles, from movie making-of books to reference books to animation books and so on. My first girlfriends were not too thrilled about sharing the space in that room.
This passion, as well as a few initial side jobs creating small animated videos and logos for local companies, eventually gave me enough confidence in my abilities and led me to my first professional job: as a computer graphics technician, driving lead and credit titles for one of the first private national TV stations in Italy. Not necessarily a striking job, but a well-paid one.
The fact that I could make money doing what I loved the most was an eye-opener in my young life. It gave me fuel to invest even more of my time in the art, and it set the foundations for a very long career that has spanned over 20 years, across TV productions, commercials, video games and, more recently, feature films.
————————————————————-
– Can you tell us about your current company?
After leaving Italy I started working for some of the most recognized studios around the world, eventually at facilities such as Disney Features, Sony Imageworks and Moving Picture Company. During that period I had the fortune to serve alongside world-class talents and supervisors, who helped me refine both my technical and artistic skills, while also investing my time in learning about management and training cycles.
I started sharing some of this personal knowledge and production experience throughout the world with ReelMatters Ltd.
But eventually those extra skills allowed me to reach my dream in 2008, when I joined the team at Weta Digital in Wellington, New Zealand, to help on James Cameron’s Avatar.
Weta has since been my family and the source of my pride. The level of expertise, passion and vision among the crew at Weta is inspirational and clearly visible in every project we work on. We all tend to thrive on perfection here, continuously pushing quality well beyond standards, which is one of the reasons why Weta is still at the forefront of the VFX industry today.
————————————————————-
– What sort of movies have you worked on before? Of all of them, which was the most challenging?
Due to my early, self-taught home training, it was easier for me to get involved with CG animation productions first. On that front, my best memories are working on Sony Imageworks’ “Surf’s Up” as well as on Steven Spielberg and Peter Jackson’s “The Adventures of Tintin”, movies which both raised the bar for CG environments and character animation.
More recently I have been more involved with live-action features, such as “Avatar”, “Rise of the Planet of the Apes” and “Dawn of the Planet of the Apes”, “The Hobbit: An Unexpected Journey” and “The Hobbit: The Battle of the Five Armies”, and “Iron Man Three”, all the way to Jon Favreau’s Walt Disney production “The Jungle Book”.
Each production has its own level of complexity and it is hard to make comparisons. Having some basic training has been fundamental for me in seeing these features through to delivery, while remaining flexible enough to sort out the unique daily trials.
Feature production overall is a unique challenge in itself. You need a solid understanding of both technology and human nature to find solutions applicable to a constantly moving target across the life of a project, often under commercially driven delivery pressure, and while working alongside a multitude of different, unique talents.
It is quite a life-changing experience, worth the pages of a best-selling book, where each chapter has its own plot.
————————————————————-
– How do you co-operate with other visual effects artists in order to create realistic effects?
While there is an incredible amount of high-class talent in the feature production business, no production is ever done by just one individual. It is always the product of a constant collaboration that flows from the brains of visionary directors to the hands of skillful visual artists, and back.
Providing the perfect backdrop for this collaboration is what usually makes some productions more successful than others.
In that context, creativity is the true fusion of the best ideas shared by this pool of minds, independent of which level of production you are at.
Management’s job is to feed and support this fusion, not to drive it.
And the working environment is one that allows trust and respect between all parties, while avoiding mechanical routines.
In other words, no piece of hardware or software will make a visually pleasing picture by itself unless someone infuses it with a soul. As George Sand once said, “The artist’s vocation is to send light into the human heart.”
And to paraphrase Arthur C. Clarke, I believe that a true collaboration between visionaries and artists is what makes “any sufficiently advanced (CG) technology indistinguishable from magic”.
————————————————————-
– What does it mean to you to create a good quality effect?
Any good CG effect worthy of the name is an effect that lives for its purpose, which most of the time is to support the action or the plot at hand.
In a live-action feature, I tend to be in awe when an effect helps you experience that perfect Suspension of Disbelief: the willingness to suspend logic and criticism for the sake of enjoying the unbelievable.
As soon as an effect strays from its purpose or is not up to the task at hand, your brain will tend to over-analyze the visuals and, as a result, take you out of the overall experience.
It is interesting to see that movies such as Jurassic Park still hold their ground today, while more modern VFX productions tend to look dated very quickly. From that point of view, it appears to me that quite a common mistake today is to overcompensate with camera work, digital grading and computer-generated imagery for the sake of the effect, rather than to serve the story and the truth of the moment.
————————————————————-
– Could you share some tips about creating good quality effects?
1- The generalist at heart.
One question that I get quite often during my seminars is what new VFX artists should focus on. Is it specializing in a tool? Learning a discipline? Or mastering a specific skill?
It is a fact that higher-level studios tend to hire people with well-defined talents that fit specific operational labels. This makes it easier for them to fill recruitment numbers and satisfy production's immediate needs.
What happens afterwards, when you start working as a VFX artist, is not always as well defined. The flexible nature of feature production cycles and delivery deadlines is often a catalyst for a multitude of variations in an artist's work life, especially on the post-production side of a digital pipeline. For that reason, I notice that people with more generic skills, an ability to adapt to new processes and a genuinely open nature tend to fit in better and last longer throughout various projects.
The exception here being artists with dedicated PhDs and/or mastery of a very specific domain, which makes them highly specialized within the VFX crowd and able to carve out a niche of their own.
Looking at the software and hardware side of things, technology is still progressing on a daily basis, and it will continue doing so. Many facilities also rely on proprietary technology. Specializing in a single tool without learning the basics of CG art is therefore a dangerous game to play: you may end up obsolete along with the program you have learned or, in the best case, with a very limited number of facilities you can apply to.
What I suggest as a general rule to young VFX artists is to focus their energies on learning everything that constitutes the basis of a successful career in computer graphics, along with improving their natural talent: from modeling to lighting and color, from rigging to animation, from procedural cycles to FX mechanics.
Doing so builds the knowledge necessary not only to satisfy a possible recruitment position, but also to interact with people of different talents in a large facility, and thus to have enough confidence to quickly help out and fit into the bigger picture that these complex production pipelines often form.
On that note, competition for very few spots in a large studio is also a challenge when combined with trying to win the attention of a busy HR office or of a busy VFX Supervisor.
When applying for a VFX position, it is quite beneficial to have a very clear introduction letter which states, in one line, the discipline you are applying for: for example modeling, animation, texturing, shading, and so on, but never more than one discipline at a time. Then, in the body of the letter, explain that, if the need arises, you could also help cover other positions which fit your skills.
Finally, support your application with a very short demo reel (one minute tops, possibly less) that shows and clearly labels your very best work in the main discipline you are applying for, and that clarifies your side skills wherever applicable. If you are interested in multiple disciplines, it is highly recommended to prepare separate introduction letters and demo reels for each application.
2- What constitutes the best production pipeline.
There is always a lot of pride in winning accolades in the VFX industry, and deservedly so. The amount of energy, investment, time and talent required to achieve such a task is, to say the least, overwhelming. Very few studios and individuals have the sensibility, experience and organization to pull off that feat.
In support of these cycles, there is also a lot of new technology and specialized tooling which continuously pushes the boundaries of what is achievable in computer graphics, to the point that I am confident the majority of senior VFX people in the industry would agree that, in many ways, we are still at the beginning of this exploration.
Where a painter looks for intimate inspiration to fill a lonely blank canvas, with a brush and a small collection of colors at his disposal, CG is often the product of a delicate balance between a crowd of ambitions, thousands of frames, a multitude of digital gadgets and a variety of complex mediums.
The combination of new visions and new science is also what makes organizing these complex VFX tasks an expensive challenge in itself, worthy of the efforts of the most influential CTOs and producers around the world.
It is a challenge well described in a white paper on the status of visual effects written by Renee Dunlop, Paul Malcolm and Eric Roth for the Visual Effects Society in July 2008.
In its pages, the writers detail a few of the biggest obstacles affecting production:
– The difficulty of determining who is in charge of certain creative decisions.
– Directors and Producers’ mixed approach to pre and post visualization.
– The lack of consistency and resources between pre, mid and post production.
– A lack of consistency throughout pipelines, mainly due to the impact of new technologies.
Most of the time, this translates into a very costly, “brute-force” workflow which, on its own, destabilizes any reasonable software production scheme that studios are willing to invest in.
While a collection of good, stable software is a fair base for any visual effects venture, I firmly believe that to meet these challenges the core of any VFX pipeline should be software agnostic.
All CG elements should translate effortlessly across tools, independently of the unique requirements of their original disciplines.
And, rather than the compartmentalized organization used in other markets, the key structure of this pipeline should focus on the flow of data and the quality of the inventory.
The rest is important, but not essential.
By achieving such a system, the work environment would prove to:
. Be flexible enough to maintain integrity across platforms and departments.
. Allow modifications to the software infrastructure without affecting deliverables.
. Accept various in-house and external content.
. And deliver quality without jeopardizing speed.
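As a minimal illustration of what such a software-agnostic interchange layer could look like, here is a small Python sketch; the class names, the neutral JSON format and the stub exporter are assumptions of mine for illustration, not a description of any specific studio's pipeline.

```python
# Minimal sketch of a software-agnostic interchange layer (hypothetical names).
# Each DCC-specific exporter converts its native scene into the same neutral
# payload, so downstream departments never depend on the originating package.
import json
from abc import ABC, abstractmethod
from pathlib import Path


class SceneExporter(ABC):
    """Adapter contract every package-specific exporter implements."""

    @abstractmethod
    def extract(self, scene_path: str) -> dict:
        """Return a neutral dict: nodes, transforms, material bindings."""


class NeutralJsonWriter:
    """Writes the neutral payload; the only format downstream tools read."""

    def write(self, payload: dict, out_path: Path) -> Path:
        out_path.write_text(json.dumps(payload, indent=2, sort_keys=True))
        return out_path


class StubExporter(SceneExporter):
    """Placeholder standing in for a Maya/Houdini/Blender-specific adapter."""

    def extract(self, scene_path: str) -> dict:
        return {"source": scene_path, "nodes": [], "materials": []}


if __name__ == "__main__":
    payload = StubExporter().extract("shots/sq010_sh020_layout.scene")
    NeutralJsonWriter().write(payload, Path("sq010_sh020_layout.neutral.json"))
```

Any package-specific exporter only has to honor the same small adapter contract, which is what keeps the rest of the pipeline independent from the tool that produced the data.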
Overall, and independently of the approach, supporting the flow of data and the quality of the inventory is for me the critical element that helps any production survive the majority of modern commercial delivery pressures.
This framework would help keep productivity stable even through continuous changes in a feature's vision and objectives.
Finally, it would help train the modern VFX artist not to rely on those unique tools or solutions which are software-centric and bound to expire when new technology arrives, thus keeping skills and talent always applicable to the task at hand, to the long-lasting benefit of the production studio.
To support such a mechanism, facilities should consider researching and investing in:
. A stable, software independent, browser based, asset and shot manager.
. A solid look development structure.
. A software independent, script based, rendering management solution (a minimal sketch follows this list).
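By way of example, and purely as a sketch (the job schema and command templates below are hypothetical, not any particular farm's API), a script-based, renderer-agnostic submission layer could be as thin as:

```python
# Sketch of a renderer-agnostic, script-based submission layer (hypothetical
# schema): the manager only sees a neutral job description, and a per-renderer
# command template turns it into the actual farm command line.
import shlex
from dataclasses import dataclass, field


# Illustrative templates; a facility would register its real renderers here.
COMMAND_TEMPLATES = {
    "renderer_a": "rendercli -scene {scene} -start {start} -end {end} -out {output}",
    "renderer_b": "rb_submit --input {scene} --frames {start}-{end} --target {output}",
}


@dataclass
class RenderJob:
    scene: str
    start: int
    end: int
    output: str
    renderer: str = "renderer_a"
    extra: dict = field(default_factory=dict)  # room for per-show overrides

    def to_command(self) -> list:
        """Expand the template for the chosen renderer into argv form."""
        template = COMMAND_TEMPLATES[self.renderer]
        return shlex.split(template.format(scene=self.scene, start=self.start,
                                           end=self.end, output=self.output))


if __name__ == "__main__":
    job = RenderJob("sq010_sh020_lighting.neutral.json", 1001, 1096,
                    "renders/sq010_sh020/")
    print(job.to_command())  # hand this argv to whichever scheduler is in use
```

Because the neutral job description never references a specific renderer's API, swapping render engines only means registering a new command template.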
And an asset living in this environment should sport basic qualities such as the following (a small sketch after the list illustrates them):
. being version-able
. being hash-able
. being track-able
. being verbose
. being software and hierarchic relation agnostic
. being self-contained
. supporting expandable qualities
. supporting temporally and shading stable procedural decimation
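To make those qualities concrete, here is a small, hedged sketch of an asset record; the field names are illustrative and do not reflect any specific studio schema.

```python
# Small sketch of an asset record carrying the qualities listed above
# (field names are illustrative, not a specific studio schema).
import hashlib
import json
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class AssetRecord:
    name: str                     # track-able: stable identifier reused across shots
    version: int                  # version-able: monotonically increasing
    payload_path: str             # self-contained: one neutral file per version
    description: str = ""         # verbose: human-readable intent and history
    lod_levels: list = field(default_factory=list)  # procedural decimation variants
    extra: dict = field(default_factory=dict)       # expandable qualities

    def content_hash(self) -> str:
        """Hash-able: fingerprint of the payload, independent of any DCC."""
        return hashlib.sha256(Path(self.payload_path).read_bytes()).hexdigest()

    def manifest(self) -> str:
        """Software- and hierarchy-agnostic manifest any tool can parse."""
        data = dict(self.__dict__, sha256=self.content_hash())
        return json.dumps(data, indent=2, sort_keys=True)
```

Nothing in the record points back at the package that authored the payload, which is what keeps the asset software and hierarchy agnostic.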
————————————————————-
– Can you give a word of inspiration to those who wish to participate as VFX artists?
Whether anyone is willing to notice it or not, the vast majority of top-grossing movies coming out every year are now filled with special effects created by a new wave of craftsmen who share their talent all around the world.
We are living in a period where the new da Vincis, Botticellis and Galileos live their lives comfortably seated in front of a computer, creating a new art form which converts ones and zeros into a visually pleasing virtual reality, all while offering their artistry free of language, race and belief barriers.
The knowledge required to achieve such a task is still a mix of an incredible amount of disciplines.
From biology and zoology, to physics and mathematics. From sculpting to painting. From astronomy to molecular chemistry.
It is an incredible opportunity to have a working career, learning about all aspects of life, while creating a new Suspension Of Disbelief.
http://www.reddit.com/r/IAmA/comments/1r5h9b/im_phil_tippett_stopmotion_animator_director
LA Animator:
What, in your opinion, is broken in the current VFX workflow? What things would you like to most see change?
Endless revisions?
Disconnect between Director and artists/VFX facility?
Bidding process? etc?
Phil Tippett:
In the olden days, producers knew what visual effects were. Now they’ve gotten into this methodology where they’ll hire a middleman – a visual effects supervisor, and this person works for the producing studio. They’re middle managers.
And when you go into a review with one of them, there’s this weird sort of competition that happens. It’s a game called ‘Find What’s Wrong With This Shot’. And there’s always going to be something wrong, because everything’s subjective. And you can micromanage it down to a pixel, and that happens all the time.
We’re doing it digitally, so there’s no pressure to save on film costs or whatever, so it’s not unusual to go through 500 revisions of the same shot, moving pixels around and scrutinizing this or that.
That’s not how you manage artists. You encourage artists, and then you’ll get – you know – art. If your idea of managing artists is just pointing out what’s wrong and making them fix it over and over again, you end up with artists who just stand around asking “OK lady, where do you want this sofa? You want it over there? No? Fine. You want it over there? I don’t give a fuck. I’ll put it wherever you want it.”
It’s creative mismanagement, it’s part of the whole corporate modality. The fish stinks from the head on down. Back on Star Wars, Robocop, we never thought about what was wrong with a shot. We just thought about how to make it better.
This resonates with the VES Status of Visual Effects white paper written by Renee Dunlop, Paul Malcolm and Eric Roth for the Visual Effects Society in July 2008, a critical effort in educating the production world on the current use of visual effects:
“While digital visual effects have opened the door wide to tremendous opportunity, those working in the pipeline are still jostling to claim their place in the process.
Though it is understandable that different crafts want to protect their own territory, safeguarding the old ways of doing things often gets in the way of creating partnerships.
· An oft-heard concern raised by Art Directors and Production Designers has been a lack of consistency throughout the pipeline and the impact new technologies used by VFX artists will have on their work. However, the fact is that part of a VFX Supervisor’s responsibility is to maintain communication with DPs and Art Directors to ensure that the final product reflects what those parties wanted.
· Often it is difficult to determine who is in charge of certain creative decisions that overlap craft boundaries.
· There are cultural and generational factors to consider. A lot of Directors don’t want to commit to what the shot’s going to be until they’re in post. However, many younger Directors are very open to manipulating images early on.
· On many, if not most, projects, Art Directors and DPs are often on another project and no longer accessible to VFX artists after the shoot. Unfortunately, this is when VFX Supervisors are finalizing their work in post, which impacts the work done earlier by other craftspeople.
Yet another factor is time. As studios continue to push for shorter production periods, putting together the perfect pipeline becomes increasingly difficult. As Alex Funke, Visual Effects Supervisor and DP says:
“In a perfect world, the live action would be shot first, the miniatures would shoot second and then the digital effects would assemble and enhance the whole mix. But it doesn’t usually happen that way. It’s more likely that the miniatures will be shooting along with the live action, and the digital artists will be trying to get rough assemblies of shots done so they can find out just how much work they will have to do.””
…
“Some of these cultural changes need to begin within the visual effects community itself. As VFX Supervisor and former Visual Effects Society Board Chair Jeffrey A. Okun noted:
“In our earnest desire to get a seat at the ‘adults table’ of the business, we have been engaged in a 30-year publicity campaign to make everyone aware of what we do and how we do it. In this process we have focused on the success stories and hidden away the terrible truth of how we got to those successes. As far as the public, most directors and producers are concerned, they are not at all aware of how time and hands-on intensive the process really is. What we need to do is begin telling the painful truth.””
…
“Visual effects artists need to bring other practitioners into the VFX production process as part of the education process. Unfortunately, this often proves difficult because of long-standing territorial issues. As productions are shared throughout the international production community, consistency and quality can be difficult for Department Heads to monitor.”
…
“As anyone with even a modicum of experience in the entertainment industry knows, nothing ever changes unless producers can be convinced that it’s in their best economic interest. That said, visual effects professionals have a very convincing argument to make with producers about the need to re-evaluate the impact of VFX on the bottom line.
Roughly 20 out of the top 25 highest grossing films of all time are visual effects films. Visual effects typically take 25-50% or more of an entire show’s budget, an estimate that is often admitted as low. In animated films, that number is far higher. Clearly, the work, the creative product, and the practitioners of the visual effects industry all have an enormous impact on the producers’ bottom line and the show’s eventual success in the marketplace.
Essentially, if visual effects artists are included in decision making up front, productions costs will drop because the VFX artists can help streamline the production path in telling the Director’s story. They can be instrumental in offering more choices that help guide and generate better creative and production decisions and, therefore, a more efficient production schedule.”
The owner of ReelMatters is a Technical Director with over 25 years of production experience in various fields of the entertainment business.
During this period he has contributed to Productions for some of the most distinguished and internationally recognized studios and facilities including Walt Disney Pictures, Sony Pictures Imageworks, Universal Pictures, Weta Digital, C.O.R.E Feature Animation, Framestore CFC, Moving Picture Company, UPN, Atari, Seneca Colleges, Hasbro, DKP Studios, Artech Studios and FOX.
In 2001 he was named one of the fifty most recognized Maya developers by the French magazine Pixelcreation. He has been named in various technical contributions published online for organizations such as VFX World, Autodesk Alias and Highend3D.
Covering various production roles, such as CG Supervisor, Art Director, Lighting Supervisor and Lead Technical Director, he has led and supervised teams of up to 40 people to successful deadlines across various production types, from video games to broadcast television and theatrically released feature films.
His expertise ranges from modeling and look development to lighting, rendering, VFX and pipeline implementation.
PixelSham.com is now the direct portal for ReelMatters‘ blog and industry news.
http://vfxsoldier.wordpress.com/2013/02/24/an-open-letter-to-ang-lee/
Dear Mr. Lee, When asked about the bankruptcy of Rhythm + Hues, the visual effects house largely responsible for making your film “Life of Pi” as incredible as it was, you said: “I would like it to be cheaper and not a tough business [for VFX vendors].
vfxsoldier.wordpress.com An article is making the rounds promoting the 40-hour week and explains how long work hours have proven to cost workers and their employers more of their time, their money, and their health.
http://www.visualeffectssociety.com/node/2425
As an Honorary Society, VES has led the way in promoting the incredible work of VFX artists but so far no one has stood up to lead the way on the business side of our business. No one has been able to speak out for unrepresented artists and facilities – or the craft as a whole – in any meaningful way.