The role of a VFX Supervisor in filmmaking is multifaceted, encompassing pre-production planning, budgeting, team management, on-set supervision, and post-production oversight. They collaborate with directors to understand the creative vision, plan VFX sequences, and ensure seamless integration of digital elements. Their responsibilities include guiding actors, capturing on-set references, maintaining quality control, and overseeing the final VFX integration during post-production. Effective documentation and reporting throughout the process are crucial for successful project completion.
Full breakdown here: https://squarezeroone.wixsite.com/home/post/the-vital-role-of-the-vfx-supervisor-in-filmmaking
https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch
https://www.anthropic.com/claude
https://time.com/6990386/anthropic-dario-amodei-interview/
https://github.com/anthropics/anthropic-quickstarts
Dario Amodei, CEO of Anthropic, envisions a future where AI systems are not only powerful but also aligned with human values. After leaving OpenAI, Amodei co-founded Anthropic to tackle the safety challenges of AI, aiming to create systems that are both intelligent and ethical. One of the key methods Anthropic employs is “Constitutional AI,” a training approach that instills AI models with a set of core principles drawn in part from widely accepted documents such as the UN’s Universal Declaration of Human Rights.
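To make the idea concrete, here is a minimal sketch of the critique-and-revise loop behind Constitutional AI, run at inference time with the Anthropic Python SDK. It is an illustration only, not Anthropic’s training pipeline: the principle text is a paraphrase and the prompts are placeholders.

```python
# Minimal sketch of the Constitutional AI critique-and-revise idea, shown at
# inference time for illustration (Anthropic applies it during training).
# Assumes the Anthropic Python SDK; principle text and prompts are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRINCIPLE = ("Choose the response that is most supportive of life, liberty, "
             "and personal security.")  # example principle, paraphrased

def ask(prompt: str) -> str:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

draft = ask("Explain how to respond to an angry customer email.")

# Critique the draft against the principle, then revise it accordingly.
critique = ask(f"Principle: {PRINCIPLE}\n\nResponse:\n{draft}\n\n"
               "Identify any way this response conflicts with the principle.")
revision = ask(f"Principle: {PRINCIPLE}\n\nResponse:\n{draft}\n\n"
               f"Critique:\n{critique}\n\n"
               "Rewrite the response so it fully complies with the principle.")
print(revision)
```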
https://apps.apple.com/us/app/claude-by-anthropic/id6473753684
https://github.com/GaiaNet-AI/gaianet-node
GaiaNet is a decentralized computing infrastructure that lets individuals and businesses create, deploy, scale, and monetize their own AI agents reflecting their styles, values, knowledge, and expertise. Each GaiaNet node provides a web-based chatbot UI and an OpenAI-compatible API backed by the node’s own model and knowledge base.
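Because a node exposes an OpenAI-compatible API, existing OpenAI client code can point at it by swapping the base URL. A quick sketch, with the node address, API key, and model name all as placeholders you would replace with your own node’s values:

```python
# Sketch of talking to a GaiaNet node through its OpenAI-compatible API.
# Node URL, API key, and model name below are placeholders, not real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-NODE-ADDRESS/v1",  # placeholder node endpoint
    api_key="YOUR-KEY",                       # whatever key your node expects
)

reply = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder; use the model your node serves
    messages=[{"role": "user", "content": "What knowledge base does this node use?"}],
)
print(reply.choices[0].message.content)
```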
https://github.com/abgulati/LARS
Grounding an LLM’s responses in the user’s own documents helps increase accuracy and reduces the common issue of AI-generated inaccuracies, or “hallucinations.” This technique is commonly known as Retrieval-Augmented Generation, or RAG.
LARS aims to be the ultimate open-source, RAG-centric LLM application. To that end, LARS takes the concept of RAG much further by adding detailed citations to every response: specific document names, page numbers, text highlighting, and images relevant to your question, plus a document reader right within the response window. Not every kind of citation appears with every response, but the goal is for at least some combination of citations to accompany each RAG response, and that is generally the case.
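For readers new to RAG, here is a minimal retrieve-then-generate sketch, not LARS’s implementation: document chunks are embedded, the chunks most similar to the question are retrieved, and they are prepended to the prompt as grounding context. The embedding model choice and the sample chunks are illustrative.

```python
# Minimal retrieve-then-generate (RAG) sketch: embed document chunks, retrieve
# the chunks closest to the question, and pass them to the LLM as context.
# Not LARS's code; the embedding model and sample chunks are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Page 12: The warranty covers manufacturing defects for 24 months.",
    "Page 30: Water damage is explicitly excluded from coverage.",
    "Page 45: Claims must be filed within 30 days of discovering the defect.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

question = "Does the warranty cover water damage?"
context = "\n".join(retrieve(question))
prompt = (f"Answer using only this context, and cite the page numbers:\n"
          f"{context}\n\nQ: {question}")
# The prompt would now be sent to whichever local or hosted LLM is in use.
print(prompt)
```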
DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. It is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. This continued pre-training substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, it demonstrates significant advances across code-related tasks as well as reasoning and general capabilities. It also expands programming-language support from 86 to 338 languages and extends the context length from 16K to 128K tokens.
https://github.com/deepseek-ai/DeepSeek-Coder-V2
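A hedged sketch of trying the smaller instruct variant with Hugging Face transformers follows. The checkpoint name is an assumption based on the repo’s naming, and the generation settings are arbitrary, so check the model card for the recommended setup; the full MoE model needs far more hardware.

```python
# Sketch of loading a DeepSeek-Coder-V2 checkpoint with Hugging Face transformers.
# The "-Lite-Instruct" checkpoint name is assumed from the repo's naming; verify
# it on the model card before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user",
             "content": "Write a Python function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```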
The new material provides an energy density (the amount of energy that can be stored in a given volume) of 1,000 watt-hours per liter, roughly 100 times that of TDK’s current mass-produced battery.
TDK has 50 to 60 percent global market share in the small-capacity batteries that power smartphones and is targeting leadership in the medium-capacity market, which includes energy storage devices and larger electronics such as drones.
https://blender-addons.gumroad.com/l/denoiser_comp_addon
Blender 3 updated Intel® Open Image Denoise to version 1.4.2, which fixed many render artifacts, even when denoising separate passes, but it still loses a lot of definition when used in the standard way. DENOISER COMP splits the render into passes, applies the denoiser only to the selected passes, and then rebuilds the final beauty pass, preserving much more definition, as can be seen in the videos.
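For a sense of how per-pass denoising is wired up, here is a rough bpy sketch, not the add-on’s actual node graph: one render pass goes through the Denoise node while another is recombined untouched. Node and socket names assume Blender 3.x with Cycles and the relevant passes enabled, and the recombination is deliberately simplified.

```python
# Rough sketch of per-pass denoising in Blender's compositor (bpy). Not the
# DENOISER COMP add-on's node graph; socket names assume Blender 3.x / Cycles
# with the diffuse passes and denoising data passes enabled on the view layer.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")        # render passes source
denoise = tree.nodes.new("CompositorNodeDenoise")   # OIDN denoise node
mix = tree.nodes.new("CompositorNodeMixRGB")        # recombine passes
mix.blend_type = "ADD"
comp = tree.nodes.new("CompositorNodeComposite")    # final output

# Denoise only the indirect diffuse pass, guided by normal/albedo data.
tree.links.new(rl.outputs["DiffInd"], denoise.inputs["Image"])
tree.links.new(rl.outputs["Denoising Normal"], denoise.inputs["Normal"])
tree.links.new(rl.outputs["Denoising Albedo"], denoise.inputs["Albedo"])

# Add the untouched direct diffuse pass back on top of the denoised one
# (simplified; a full beauty rebuild also factors in color and other passes).
tree.links.new(rl.outputs["DiffDir"], mix.inputs[1])
tree.links.new(denoise.outputs["Image"], mix.inputs[2])
tree.links.new(mix.outputs["Image"], comp.inputs["Image"])
```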
https://gapszju.github.io/RTG-SLAM/
https://github.com/MisEty/RTG-SLAM
https://gapszju.github.io/RTG-SLAM/static/pdfs/RTG-SLAM_arxiv.pdf
RTG-SLAM (Real-time Gaussian SLAM) is a real-time 3D reconstruction system for large-scale environments that uses an RGBD camera and Gaussian splatting.
https://runwayml.com/blog/introducing-gen-3-alpha/
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.
Immersity AI (formerly LeiaPix) turns 2D illustrations into 3D animation, ideal for bringing a sketch, painting, or scene to life.
It converts the video into an animated depth map and uses that depth to drive the 3D effect in the final output.
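As a rough illustration of how a depth map can drive that kind of effect (this is only the general parallax idea, not Immersity AI’s pipeline), the sketch below shifts each pixel of a frame horizontally in proportion to its depth value:

```python
# Minimal sketch of depth-driven parallax: shift each pixel horizontally in
# proportion to its depth. General idea only; not Immersity AI's pipeline, and
# it ignores occlusion handling and hole filling.
import numpy as np

def parallax_shift(frame: np.ndarray, depth: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """frame: (H, W, 3) uint8 image; depth: (H, W) floats in [0, 1], 1 = nearest."""
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    xs = np.arange(w)
    for y in range(h):
        # Nearer pixels (higher depth) move further, creating the parallax cue.
        shifted = np.clip(xs + (depth[y] * max_shift).astype(int), 0, w - 1)
        out[y, shifted] = frame[y, xs]
    return out

# Example: a flat gray frame with a left-to-right depth ramp.
frame = np.full((4, 8, 3), 128, dtype=np.uint8)
depth = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
print(parallax_shift(frame, depth).shape)  # (4, 8, 3)
```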