https://hila-chefer.github.io/videojam-paper.github.io
https://omnihuman-lab.github.io
The authors propose an end-to-end multimodality-conditioned human video generation framework named OmniHuman, which generates human videos from a single human image plus motion signals (e.g., audio only, video only, or a combination of the two). OmniHuman introduces a mixed training strategy over multiple motion-conditioning modalities, letting the model benefit from scaled-up mixed-conditioning data and overcoming the scarcity of high-quality data that limited previous end-to-end approaches. OmniHuman significantly outperforms existing methods, generating extremely realistic human videos from weak signal inputs, especially audio. It supports image inputs of any aspect ratio, whether portraits, half-body, or full-body shots, delivering lifelike, high-quality results across a wide range of scenarios.
In the Golden Age of Hollywood (1930-1959), a 10:1 shooting ratio was the norm—a 90-minute film meant about 15 hours of footage. Directors like Alfred Hitchcock famously kept it tight with a 3:1 ratio, giving studios little wiggle room in the edit.
Fast forward to today: the digital era has sent shooting ratios skyrocketing. Affordable cameras roll endlessly, capturing multiple takes, resets, and everything in between. Gone are the disciplined “Action to Cut” days of film.
https://en.wikipedia.org/wiki/Shooting_ratio
https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf
GOT-OCR2 works on a wide range of tasks, including plain document OCR, scene text OCR, formatted document OCR, and even OCR for tables, charts, mathematical formulas, geometric shapes, molecular formulas and sheet music.
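A minimal usage sketch, assuming the Transformers image-text-to-text integration described on the model card (class names, processor inputs, and generation arguments may differ between library versions; the image path is a placeholder):

```python
# Minimal sketch of running GOT-OCR-2.0 through Hugging Face Transformers.
# Assumes the AutoProcessor / AutoModelForImageTextToText integration from the
# model card; check the card for the exact API in your transformers version.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForImageTextToText.from_pretrained(
    "stepfun-ai/GOT-OCR-2.0-hf", torch_dtype=torch.bfloat16
).to(device)
processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")

# Plain OCR on a local document image (placeholder path).
inputs = processor("sample_page.png", return_tensors="pt").to(device)
generated = model.generate(
    **inputs,
    do_sample=False,
    tokenizer=processor.tokenizer,
    stop_strings="<|im_end|>",
    max_new_tokens=4096,
)
print(processor.decode(generated[0, inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```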
Arminas created this using the Juggernaut XL model and the QR Code Monster SDXL ControlNet (a scripted diffusers sketch of the static-image step follows the download links below).
His pipeline:
Static Images – Forge UI.
Upscaled with Leonardo AI universal upscaler.
Animated with Runway ML and Minimax.
Video upscale – Topaz Video AI.
Composited in Adobe Premiere.
Juggernaut XL download here:
https://civitai.com/models/133005/juggernaut-xl
QR Code Monster SDXL:
https://civitai.com/models/197247?modelVersionId=221829
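For those who prefer a scripted route over Forge UI for the static-image step, here is a rough diffusers sketch of the same combination. The checkpoint filename and control image are placeholders, and this is an approximation of the setup, not Arminas' actual workflow:

```python
# Rough sketch: SDXL checkpoint (Juggernaut XL) + QR Code Monster SDXL ControlNet
# via diffusers. File names and prompts are placeholders; the Civitai downloads
# above can be loaded with from_single_file().
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sdxl_qrcode_monster",  # QR Code Monster SDXL
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_single_file(
    "juggernautXL.safetensors",        # local Juggernaut XL checkpoint from Civitai
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image drives the hidden pattern / composition (placeholder path).
control = load_image("pattern.png")

image = pipe(
    prompt="cinematic product shot, volumetric light, highly detailed",
    image=control,
    controlnet_conditioning_scale=0.9,  # lower = subtler pattern influence
    num_inference_steps=30,
).images[0]
image.save("frame.png")
```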
https://openai.com/index/openai-o3-mini
OpenAI o3-mini is our first small reasoning model that supports highly requested developer features including function calling, Structured Outputs, and developer messages, making it production-ready out of the gate.
o3-mini does not support vision capabilities, so developers should continue using OpenAI o1 for visual reasoning tasks.
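A quick sketch of calling o3-mini with Structured Outputs and a developer message through the OpenAI Python SDK; the schema and prompt are made-up examples, and parameter names may evolve, so check the current API docs:

```python
# Sketch: o3-mini with Structured Outputs via the OpenAI Python SDK.
# The schema and prompt are illustrative, not from the announcement.
from pydantic import BaseModel
from openai import OpenAI

class Step(BaseModel):
    explanation: str
    output: str

class MathSolution(BaseModel):
    steps: list[Step]
    final_answer: str

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.beta.chat.completions.parse(
    model="o3-mini",
    reasoning_effort="medium",  # low / medium / high
    messages=[
        {"role": "developer", "content": "Solve step by step."},
        {"role": "user", "content": "What is 23 * 17?"},
    ],
    response_format=MathSolution,
)
print(completion.choices[0].message.parsed.final_answer)
```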
ChatGPT Plus, Team, and Pro users can access OpenAI o3-mini starting today, with Enterprise access coming in February. o3-mini will replace OpenAI o1-mini in the model picker, offering higher rate limits and lower latency, making it a compelling choice for coding, STEM, and logical problem-solving tasks.
As part of this upgrade, we’re tripling the rate limit for Plus and Team users from 50 messages per day with o1-mini to 150 messages per day with o3-mini.
Starting today, free plan users can also try OpenAI o3-mini by selecting ‘Reason’ in the message composer or by regenerating a response. This marks the first time a reasoning model has been made available to free users in ChatGPT.
DeepSeek Gets an ‘F’ in Safety From Researchers https://gizmodo.com/deepseek-gets-an-f-in-safety-from-researchers-2000558645
🔹 Google DeepMind Veo 2
🔹 OpenAI Sora
🔹 Hunyuan Video
🔹 Pika 2.1
🔹 Alibaba Cloud Wanx 2.1
🔹 Runway Gen-3
🔹 Kling AI 1.6
🔹 Luma AI Ray2
🔹 Hailuo T2V-01
Uncompressed video under the post
Benchmarks don’t capture real-world complexity like latency, domain-specific tasks, or edge cases. Enterprises often need more than raw performance: reliability, ease of integration, and robust vendor support. Enterprise money will go to the vendors that provide these.
… it is also reasonable to assume that anything you put into the app or their website will end up with the Chinese government, so factor that in.
https://byliutao.github.io/1Prompt1Story.github.io
Text-to-image generation models can create high-quality images from input prompts. However, they struggle to support consistent, identity-preserving generation for storytelling.
Our approach 1Prompt1Story concatenates all prompts into a single input for T2I diffusion models, initially preserving character identities.
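A toy illustration of the single-prompt idea only; this is not the paper's full method, which additionally reweights the prompt embeddings and cross-attention so each frame is "expressed" in turn, and the model and prompts below are placeholders:

```python
# Toy sketch of the "one prompt, one story" idea: concatenate an identity prompt
# with every frame prompt into a single input. NOT the paper's full method, which
# also reweights prompt embeddings per frame to express one scene at a time.
import torch
from diffusers import StableDiffusionXLPipeline

identity = "a watercolor portrait of a red-haired girl in a green coat"
frames = [
    "walking through a rainy market",
    "reading a book by candlelight",
    "standing on a cliff at sunrise",
]

# Single concatenated prompt shared across the whole story.
story_prompt = identity + ", " + ", ".join(frames)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Naive use: the same combined prompt for every frame. Without the paper's
# per-frame reweighting, identity consistency is not guaranteed.
images = [pipe(story_prompt, num_inference_steps=30).images[0] for _ in frames]
```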
https://www.seangoedecke.com/deepseek-r1
The Chinese AI lab DeepSeek recently released their new reasoning model R1, which is supposedly (a) better than the current best reasoning models (OpenAI’s o1 series), and (b) trained on a GPU cluster a fraction of the size used by any of the big Western AI labs.
DeepSeek uses a reinforcement learning approach, not a fine-tuning approach. There’s no need to generate a huge body of chain-of-thought data ahead of time, and there’s no need to run an expensive answer-checking model. Instead, the model generates its own chains-of-thought as it goes.
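A toy sketch of what rule-based scoring of self-generated chains-of-thought can look like; this is an illustration of the concept, not DeepSeek's training code, and the tags and scoring weights are made up:

```python
# Toy illustration of rule-based rewards for reasoning RL (concept only):
# the model's own sampled chain-of-thought is scored with simple checks
# instead of an expensive learned answer-checking model.
import re

def reward(completion: str, reference_answer: str) -> float:
    score = 0.0
    # Format reward: chain-of-thought and final answer are clearly delimited.
    if "<think>" in completion and "</think>" in completion:
        score += 0.2
    # Accuracy reward: the boxed final answer matches the known reference.
    match = re.search(r"\\boxed\{(.+?)\}", completion)
    if match and match.group(1).strip() == reference_answer.strip():
        score += 1.0
    return score

# Completions sampled from the current policy would be scored like this, and
# the policy nudged (e.g. with GRPO) toward higher-reward samples.
print(reward("<think>17*23=391</think> \\boxed{391}", "391"))  # 1.2
```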
The secret behind their success? A bold move to train their models in FP8 (8-bit floating-point precision) instead of the higher-precision 16- and 32-bit formats typically used for training.
…
By using a clever system that applies high precision only when absolutely necessary, they achieved incredible efficiency without losing accuracy.
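A toy sketch of that mixed-precision idea: keep the bulk of the values in a coarse, low-precision representation, but hold the per-block scale factors and the accumulation in higher precision. This is concept-only NumPy, not DeepSeek's GPU kernels:

```python
# Toy illustration of block-scaled low precision: values are rounded to a small
# dynamic range (standing in for FP8), while per-block scales and reconstruction
# stay in float32. Concept only -- real FP8 lives in the GPU math libraries.
import numpy as np

def quantize_blockwise(x: np.ndarray, block: int = 128, max_q: float = 448.0):
    """Quantize 1D x in blocks; return coarse values plus float32 scales."""
    x = x.astype(np.float32)
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)
    scales = np.abs(xb).max(axis=1, keepdims=True) / max_q + 1e-12  # fp32 per block
    q = np.round(xb / scales)                                       # "fp8-ish" values
    return q, scales, pad

def dequantize(q, scales, pad):
    x = (q * scales).reshape(-1)  # reconstruct / accumulate in float32
    return x[: len(x) - pad] if pad else x

x = np.random.randn(1000).astype(np.float32) * 3
q, s, pad = quantize_blockwise(x)
print("max abs reconstruction error:", np.abs(x - dequantize(q, s, pad)).max())
```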
…
The impressive part? These multi-token predictions are about 85–90% accurate, meaning DeepSeek R1 can deliver high-quality answers at double the speed of its competitors.
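As a back-of-the-envelope check on that "double the speed" claim: if the model drafts one extra token per step and that draft is accepted roughly 85-90% of the time, each step yields about 1.85-1.9 tokens instead of 1, so decoding throughput roughly doubles. A tiny sketch of the arithmetic:

```python
# Expected tokens emitted per decoding step when one extra speculative token
# is proposed and accepted with probability p.
for p in (0.85, 0.90):
    tokens_per_step = 1 + p  # the guaranteed token plus the accepted draft
    print(f"acceptance {p:.0%} -> ~{tokens_per_step:.2f}x tokens per step")
```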
Chinese AI firm DeepSeek has 50,000 NVIDIA H100 AI GPUs