Andreas Horn – The 9 algorithms
The illustration below highlights the algorithms most frequently used in our everyday activities: they play a key role in almost everything we do, from online shopping recommendations and navigation apps to social media feeds, email spam filters, and even smart home devices.
🔹 𝗦𝗼𝗿𝘁𝗶𝗻𝗴 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺
– Organize data for efficiency.
➜ Example: Sorting email threads or search results.
🔹 𝗗𝗶𝗷𝗸𝘀𝘁𝗿𝗮’𝘀 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺
– Finds the shortest path in networks.
➜ Example: Google Maps driving routes (see the sketch after this list).
🔹 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗲𝗿𝘀
– AI models that understand context and meaning.
➜ Example: ChatGPT, Claude and other LLMs.
🔹 𝗟𝗶𝗻𝗸 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀
– Ranks pages and builds connections.
➜ Example: Google’s PageRank, LinkedIn connection recommendations.
🔹 𝗥𝗦𝗔 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺
– Encrypts and secures data communication.
➜ Example: WhatsApp encryption or online banking.
🔹 𝗜𝗻𝘁𝗲𝗴𝗲𝗿 𝗙𝗮𝗰𝘁𝗼𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻
– Secures cryptographic systems.
➜ Example: Protecting sensitive data in blockchain.
🔹 𝗖𝗼𝗻𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗮𝗹 𝗡𝗲𝘂𝗿𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀 (𝗖𝗡𝗡𝘀)
– Recognizes patterns in images and videos.
➜ Example: Facial recognition, object detection in self-driving cars.
🔹 𝗛𝘂𝗳𝗳𝗺𝗮𝗻 𝗖𝗼𝗱𝗶𝗻𝗴
– Compresses data efficiently.
➜ Example: JPEG and MP3 file compression.
🔹 𝗦𝗲𝗰𝘂𝗿𝗲 𝗛𝗮𝘀𝗵 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺 (𝗦𝗛𝗔)
– Ensures data integrity.
➜ Example: Password hashing, digital signatures.
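Of the nine, Dijkstra’s algorithm is the easiest to show end to end. Below is a minimal sketch in Python using the standard-library heapq module; the graph, node names, and edge weights are invented for illustration.
```python
import heapq

def dijkstra(graph, start):
    """Return the shortest known distance from start to every reachable node.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {start: 0}
    heap = [(0, start)]                     # (distance so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                        # stale heap entry, already improved
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical road network: edge weights are travel minutes.
roads = {
    "home":   [("mall", 10), ("office", 25)],
    "mall":   [("office", 12), ("airport", 30)],
    "office": [("airport", 20)],
}
print(dijkstra(roads, "home"))  # {'home': 0, 'mall': 10, 'office': 22, 'airport': 40}
```
This is the same relax-the-cheapest-frontier idea a navigation app runs at much larger scale, with road segments as edges and travel time as weights.
-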
OpenAI 4.5 model arrives to mixed reviews
The verdict is in: OpenAI’s newest and most capable traditional (non-reasoning) model, GPT-4.5, is big, expensive, and slow, delivering marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model lends weight to longstanding reports of diminishing returns from scaling up unsupervised LLM training, suggesting that the “scaling laws” cited by many for years may have met their natural end.
-
Walter Murch – “In the Blink of an Eye” – A perspective on cut editing
https://www.amazon.ca/Blink-Eye-Revised-2nd/dp/1879505622
Celebrated film editor Walter Murch’s vivid, multifaceted, thought-provoking essay on film editing. Starting with the most basic editing question — why do cuts work? — Murch takes the reader on a wonderful ride through the aesthetics and practical concerns of cutting film. Along the way, he offers unique insights on such subjects as continuity and discontinuity in editing, dreaming, and reality; criteria for a good cut; the blink of the eye as an emotional cue; digital editing; and much more. In this second edition, Murch revises his popular first edition’s lengthy meditation on digital editing in light of technological changes. Francis Ford Coppola says of the book: “Nothing is as fascinating as spending hours listening to Walter’s theories of life, cinema and the countless tidbits of wisdom that he leaves behind like Hansel and Gretel’s trail of breadcrumbs…”
-
PixVerse – Prompt, lipsync, and extended video generation
https://app.pixverse.ai/onboard
PixVerse now has 3 main features:
– Text to video ➡️ How To Generate Videos With Text Prompts
– Image to video ➡️ How To Animate Your Images And Bring Them To Life
– Upscale ➡️ How To Upscale Your Video
Enhanced Capabilities
– Improved Prompt Understanding: Achieve more accurate prompt interpretation and stunning video dynamics.
– Supports Various Video Ratios: Choose from 16:9, 9:16, 3:4, 4:3, and 1:1 ratios.
– Upgraded Styles: Style functionality returns with options like Anime, Realistic, Clay, and 3D. It supports both text-to-video and image-to-video stylization.
New Features
– Lipsync: The new Lipsync feature enables users to add text or upload audio, and PixVerse will automatically sync the characters’ lip movements in the generated video based on the text or audio.
– Effect: Offers 8 creative effects, including Zombie Transformation, Wizard Hat, Monster Invasion, and other Halloween-themed effects, enabling one-click creativity.
– Extend: Extend the generated video by an additional 5-8 seconds, with control over the content of the extended segment.
-
Alibaba Group Tongyi Lab Wan2.1 (WanxAI) – open-source video generation model
👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
🚀 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
🎉 Multiple tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
🔮 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
💪 Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/example%20workflows_Wan2.1
https://huggingface.co/Wan-AI/Wan2.1-T2V-14B
https://huggingface.co/Kijai/WanVideo_comfy/tree/main
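For anyone who prefers scripting over ComfyUI, recent diffusers releases ship a WanPipeline class. The sketch below is a minimal text-to-video example under two assumptions: that your diffusers version includes Wan support, and that the Diffusers-format checkpoint name (Wan-AI/Wan2.1-T2V-1.3B-Diffusers) matches what Wan-AI publishes; check the model cards linked above before relying on it.
```python
# Minimal Wan2.1 text-to-video sketch via Hugging Face diffusers.
# Assumes a diffusers release with Wan support and the Diffusers-format
# checkpoint "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"; verify both on the Hub.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

frames = pipe(
    prompt="A cat walking through a neon-lit alley, cinematic lighting",
    height=480, width=832,   # 480P output, the 1.3B model's comfort zone
    num_frames=81,           # roughly 5 seconds at 16 fps
).frames[0]

export_to_video(frames, "wan_t2v.mp4", fps=16)
```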
-
VES Cinematic Color – Motion-Picture Color Management
This paper presents an introduction to the color pipelines behind modern feature-film visual effects and animation.
Authored by Jeremy Selan, and reviewed by the members of the VES Technology Committee including Rob Bredow, Dan Candela, Nick Cannon, Paul Debevec, Ray Feeney, Andy Hendrickson, Gautham Krishnamurti, Sam Richards, Jordan Soles, and Sebastian Sylwan.
-
The VFX Reference Platform
The VFX Reference Platform is a set of tool and library versions to be used as a common target platform for building software for the VFX industry. Its purpose is to minimise incompatibilities between different software packages, ease the support burden for integrated pipelines and encourage further adoption of Linux by both studios and software vendors. The Reference Platform is updated annually by a group of software vendors in collaboration with the Visual Effects Society Technology Committee.
Each annual reference platform is designated by the calendar year in which major product releases are expected to target that particular reference.
-
Deep Compositing in Nuke – a walkthrough
Depth Map: A depth map is a representation of the distance information for each pixel in a scene. It is typically a two-dimensional array where each pixel stores the distance from the camera to the corresponding point in the scene, usually in metric units such as meters. A depth map provides a continuous representation of the scene’s depth.
For example, in Arnold this is achieved through a Z AOV, which collects the depth of the shading points as seen from the camera.
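As a quick illustration of what those metric values look like in practice, here is a small NumPy sketch that normalizes a raw Z channel into the 0-1 range for viewing — the same kind of remap a grade or expression node performs on a depth pass in Nuke. The array values are invented for the example.
```python
import numpy as np

# Hypothetical 3x3 crop of a Z AOV: metric distance (meters) from camera.
z = np.array([
    [ 2.1,  2.3,  2.2],
    [ 5.0,  5.4,  5.1],
    [40.0, 38.5, 39.2],   # far background
])

# Normalize to 0-1 so near points read dark and far points read bright,
# the usual remap applied before viewing a depth pass.
z_view = (z - z.min()) / (z.max() - z.min())
print(np.round(z_view, 3))
```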
https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_output_aovs_ac_aovs_html
https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_3ds_max_ax_aov_tutorials_ax_zdepth_aov_html
-
VFX Giant MPC and Parent Company Technicolor Shut Down Amid ‘Severe Financial Challenges’
https://variety.com/2025/film/global/technicolor-vfx-mpc-shutter-severe-challenges-1236316354
Shaun Severi, Head of Creative Production at the Mill, claimed in a LinkedIn post that 4,500 people had lost their jobs in 24 hours: “The problem wasn’t talent or execution — it was mismanagement at the highest levels… the incompetence at the top was nothing short of disastrous.”
According to Severi, successive company presidents “buried the company under massive debt by acquiring VFX Studios… the second president, after a disastrous merger of the post houses, took us public, artificially inflating the company’s value — only for it to come crashing down when the real numbers were revealed… and the third and final president, who came from a car rental company, had no vision of what she was building, selling or managing.”
-
Moondream Gaze Detection – Open source code
Gaze detection is useful for captioning videos, understanding social dynamics, and specific cases such as sports analytics or detecting when drivers and operators are distracted.
https://huggingface.co/spaces/moondream/gaze-demo
https://moondream.ai/blog/announcing-gaze-detection
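A rough sketch of how this could be scripted, based on the moondream2 Hugging Face checkpoint loaded with trust_remote_code. The detect and detect_gaze method names and their argument shapes are assumptions taken from the project’s public demo; confirm them against the links above before use.
```python
# Hedged sketch: gaze detection with the moondream2 checkpoint.
# The detect()/detect_gaze() methods and their return shapes are
# assumptions based on the public demo; verify against the links above.
from PIL import Image
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2", trust_remote_code=True
)

image = Image.open("frame.jpg")  # hypothetical video frame

# 1) Find faces, 2) ask where each face is looking.
faces = model.detect(image, "face")["objects"]
for face in faces:
    center = ((face["x_min"] + face["x_max"]) / 2,
              (face["y_min"] + face["y_max"]) / 2)
    gaze = model.detect_gaze(image, eye=center)
    print("face at", center, "-> gaze target:", gaze)
```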
-
X-Dyna – Expressive Dynamic Human Image Animation
https://x-dyna.github.io/xdyna.github.io
A novel zero-shot, diffusion-based pipeline for animating a single human image using facial expressions and body movements derived from a driving video. It generates realistic, context-aware dynamics for both the subject and the surrounding environment.