https://xraispotlight.substack.com/p/capture-and-relieve-your-memories
https://kartaverse.github.io/Reactor-Docs/#/com.AndrewHazelden.KartaVR
Kartaverse is a free open source post-production pipeline that is optimized for the immersive media sector. If you can imagine it, Kartaverse can help you create it in XR!
“Karta” is the Swedish word for map. With KartaVR you can stitch, composite, retouch, and remap any kind of panoramic video: from any projection to any projection. This provides the essential tools for 360VR, panoramic video stitching, depthmap, lightfield, and fulldome image editing workflows.
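The "any projection to any projection" remapping that tools like KartaVR perform boils down to mapping each output pixel to a 3D view direction and then back into the source projection. A minimal sketch of that core math for the equirectangular (lat-long) case, with hypothetical function names chosen for illustration:

```python
import math

def equirect_to_dir(u, v):
    """Map normalized equirectangular coords (u, v) in [0, 1) to a unit
    3D view direction. Longitude spans -pi..pi, latitude -pi/2..pi/2."""
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def dir_to_equirect(x, y, z):
    """Inverse mapping: unit view direction back to (u, v)."""
    lon = math.atan2(x, z)
    lat = math.asin(max(-1.0, min(1.0, y)))
    u = lon / (2.0 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return (u, v)
```

A remap from projection A to projection B then samples, for every output pixel, `A_dir_to_uv(B_uv_to_dir(u, v))`; swapping in fisheye, cubemap, or fulldome variants of these two functions gives the other conversions.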
Kartaverse makes it easy and affordable to create content for virtual reality HMDs (head-mounted displays) and fulldome theatres, providing ready-to-go scripts, templates, plugins, and command-line tools for working efficiently with XR media. The toolset runs inside Blackmagic Design's powerful node-based Fusion Studio and DaVinci Resolve Studio software.
https://github.com/tin2tin/Pallaidium/
Feature highlights:
- Generation: text to video, text to audio, text to speech, text to image, image to image, image to video, video to video, image to text
- Conditioning: ControlNet, OpenPose, Canny, Illusion, ADetailer, IP Adapter Face/Style, multiple LoRAs, Segmind distilled SDXL
- Controls: seed, quality steps, frames, word power, strip power, style selector
- Batch tools: batch conversion, batch refinement of images, batch upscale & refinement of movies
- Workflow: model card selector, render-to-path selector, render-finished notification, one-click install and uninstall of dependencies, user-defined file path for generated files, seed and prompt added to strip name
https://curiousrefuge.com/blog/ai-filmmaking-tools-for-filmmakers
It’s a new Magnific spell that allows you to change the entire lighting of a scene and, optionally, the background with just:
1/ A prompt OR
2/ A reference image OR
3/ A light map (drawing your own lights)
https://x.com/javilopen/status/1805274155065176489
https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch
https://www.anthropic.com/claude
https://time.com/6990386/anthropic-dario-amodei-interview/
https://github.com/anthropics/anthropic-quickstarts
Dario Amodei, CEO of Anthropic, envisions a future where AI systems are not only powerful but also aligned with human values. After leaving OpenAI, Amodei co-founded Anthropic to tackle the safety challenges of AI, aiming to create systems that are both intelligent and ethical. One of the key methods Anthropic employs is “Constitutional AI,” a training approach that instills AI models with a set of core principles derived from universally accepted documents like the United Nations Declaration of Human Rights.
https://apps.apple.com/us/app/claude-by-anthropic/id6473753684
https://github.com/GaiaNet-AI/gaianet-node
GaiaNet is a decentralized computing infrastructure that lets individuals and businesses create, deploy, scale, and monetize their own AI agents reflecting their styles, values, knowledge, and expertise.
https://github.com/abgulati/LARS
This grounding helps increase accuracy and reduce the common issue of AI-generated inaccuracies or “hallucinations.” This technique is commonly known as “Retrieval Augmented Generation”, or RAG.
LARS aims to be the ultimate open-source RAG-centric LLM application. To that end, LARS takes the concept of RAG further by adding detailed citations to every response: specific document names, page numbers, text highlighting, and images relevant to your question, along with a document reader embedded right in the response window. Not every citation type appears in every response, but the goal is to surface at least some combination of citations for every RAG answer, and in practice that is generally the case.
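The retrieval half of RAG can be sketched in a few lines. This is a deliberately naive toy (keyword-overlap ranking, not the embedding-based retrieval LARS or production systems actually use), intended only to show how retrieved passages and their source names end up in the prompt so the model can ground and cite its answer:

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query.
    corpus: dict mapping document name -> text. Returns top-k names."""
    q = set(query.lower().split())
    scored = []
    for name, text in corpus.items():
        overlap = len(q & set(text.lower().split()))
        scored.append((overlap, name))
    scored.sort(reverse=True)
    return [name for score, name in scored[:k] if score > 0]

def build_prompt(query, corpus):
    """Assemble a grounded prompt: retrieved passages are inlined with
    their document names kept, so the answer can cite its sources."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{name}] {corpus[name]}" for name in sources)
    prompt = f"Answer using only these sources:\n{context}\n\nQ: {query}"
    return prompt, sources
```

The prompt plus source list is then handed to the LLM; because the document names travel with the passages, the response can point back at exactly which file (and, in a fuller system, which page) supported each claim.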
DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. It is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. This continued pre-training substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advances across code-related tasks as well as reasoning and general capabilities. It also expands support from 86 to 338 programming languages and extends the context length from 16K to 128K.
https://github.com/deepseek-ai/DeepSeek-Coder-V2
https://blender-addons.gumroad.com/l/denoiser_comp_addon
Blender 3 updated Intel® Open Image Denoise to version 1.4.2, which fixed many render artifacts and added per-pass separation, but the standard mode still loses a lot of definition. DENOISER COMP separates the render passes, applies the denoiser only to the passes you select, and recomposites the final beauty pass, preserving much more detail, as shown in the videos.
https://gapszju.github.io/RTG-SLAM/
https://github.com/MisEty/RTG-SLAM
https://gapszju.github.io/RTG-SLAM/static/pdfs/RTG-SLAM_arxiv.pdf
Real-time Gaussian SLAM (RTG-SLAM) is a real-time 3D reconstruction system that uses an RGBD camera and Gaussian splatting to map large-scale environments.
https://runwayml.com/blog/introducing-gen-3-alpha/
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.
Immersity AI (formerly LeiaPix) turns 2D illustrations into 3D animations, ideal for bringing a sketch, painting, or scene to life. It converts the footage into an animated depth video and uses that to drive the depth in the final output.