https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/tree/main/split_files/diffusion_models
https://github.com/Tencent/HunyuanVideo
https://www.patreon.com/posts/create-from-with-115147229
Note: the image below is not from the workflow
Nodes:
Install missing nodes in the workflow through the manager.
Models:
Make sure not to mix SD1.5 and SDXL models.
Follow the details in the PDF below.
General suggestions:
– Comfy Org / Flux.1 [dev] Checkpoint model (fp8)
The manager downloads it to models/checkpoints, which will not work.
Move it to the models/unet folder so the Load Diffusion Model node can find it.
– The same applies to realvisxlV50_v50LightningBakedvae.safetensors:
it should go under models/vae.
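The moves above can be scripted. A minimal sketch, assuming a local ComfyUI checkout and illustrative model filenames (adjust the paths to whatever the manager actually downloaded):

```shell
# Illustrative paths/filenames -- adjust to your install and downloads.
COMFY=./ComfyUI

# Load Diffusion Model reads from models/unet; VAEs load from models/vae.
mkdir -p "$COMFY/models/unet" "$COMFY/models/vae"

# If the manager saved the Flux fp8 checkpoint under models/checkpoints,
# relocate it so the Load Diffusion Model node can see it.
mv "$COMFY/models/checkpoints/flux1-dev-fp8.safetensors" \
   "$COMFY/models/unet/" 2>/dev/null || true

# Same idea for the baked VAE file mentioned above.
mv "$COMFY/models/checkpoints/realvisxlV50_v50LightningBakedvae.safetensors" \
   "$COMFY/models/vae/" 2>/dev/null || true
```

Restart ComfyUI (or refresh the node) after moving files so the model pickers rescan the folders.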
https://github.com/Genesis-Embodied-AI/Genesis
https://genesis-world.readthedocs.io/en/latest
Genesis is a physics platform designed for general-purpose Robotics/Embodied AI/Physical AI applications. It is simultaneously a physics engine, a robotics simulation platform, and a rendering system.
https://github.com/comfyanonymous/ComfyUI
https://comfyui-wiki.com/en/install
https://stable-diffusion-art.com/comfyui
https://github.com/LykosAI/StabilityMatrix
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/comfyanonymous/ComfyUI
https://github.com/ltdrdata/ComfyUI-Manager
https://www.thinkdiffusion.com
Videos, shortcuts and details in the post!
https://substack.com/inbox/post/153106976
https://techcrunch.com/2024/12/14/what-are-ai-world-models-and-why-do-they-matter/
A world model generates the next frame of a 3D scene from the previous frame(s) and user input; it is trained on video data and runs in real time.
World models enable AI systems to simulate and reason about their environments, pushing forward autonomous decision-making and real-world problem-solving.
The key insight is that by training on video data, these models learn not just how to generate images, but also how scenes evolve over time in response to user actions.
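The next-frame loop described above can be sketched in a few lines. This is a toy stand-in, not any real model: `predict_next_frame` is a hypothetical linear update in place of a learned neural network, and scalar values stand in for image frames, so only the autoregressive structure is shown.

```python
# Toy sketch of an autoregressive world-model loop.
# In a real world model the predictor is a large neural network trained
# on video; here it is a hypothetical linear update so the loop runs.

def predict_next_frame(prev_frames, user_input):
    """Next 'frame' from recent frames plus a user action (toy dynamics)."""
    base = sum(prev_frames) / len(prev_frames)
    return base + user_input

def rollout(initial_frames, actions, context=2):
    """Autoregressive generation: each new frame is fed back as input."""
    frames = list(initial_frames)
    for action in actions:
        frames.append(predict_next_frame(frames[-context:], action))
    return frames

# Two seed "frames" (scalars standing in for images), two user actions.
frames = rollout([0.0, 0.0], actions=[1.0, -0.5])
print(frames)  # [0.0, 0.0, 1.0, 0.0]
```

The point of the loop is that generated frames become inputs to later steps, which is why small per-frame errors can compound over long rollouts.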
Some companies, like World Labs, are taking a hybrid approach: using World Models to generate static 3D representations that can then be rendered using traditional 3D engines (in this case, Gaussian Splatting). This gives you the best of both worlds: the creative power of AI generation with the multiview consistency and performance of traditional rendering.
https://aivideo.hunyuan.tencent.com
https://github.com/Tencent/HunyuanVideo
Unlike models such as Sora, Pika 2, or Veo 2, HunyuanVideo’s neural-network weights are uncensored and openly distributed, which means the model can be run locally under the right circumstances (for example, on a consumer GPU with 24 GB of VRAM) and can be fine-tuned or used with LoRAs to teach it new concepts.