Bloomberg – Sam Altman on ChatGPT’s First Two Years, Elon Musk and AI Under Trump
https://www.bloomberg.com/features/2025-sam-altman-interview
One of the strengths of that original OpenAI group was recruiting. Somehow you managed to corner the market on a ton of the top AI research talent, often with much less money to offer than your competitors. What was the pitch?
The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. If you’re doing the same thing everybody else is doing, if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent.
OpenAI senior executives at the company’s headquarters in San Francisco on March 13, 2023, from left: Sam Altman, chief executive officer; Mira Murati, chief technology officer; Greg Brockman, president; and Ilya Sutskever, chief scientist. Photographer: Jim Wilson/The New York Times
-
LG 45GX990A – The world’s first bendable gaming monitor
The monitor resembles a typical thin flat screen when in its home position, but it can flex its 45-inch body to 900R curvature in the blink of an eye.
-
DMesh++ – An Efficient Differentiable Mesh for Complex Shapes
https://sonsang.github.io/dmesh2-project
An efficient differentiable mesh-based method that can effectively handle complex 2D and 3D shapes. For instance, it can be used for reconstructing complex shapes from point clouds and multi-view images.
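DMesh++'s actual machinery is far richer, but the underlying idea of differentiable geometry fitting (optimize shape parameters by gradient descent on a reconstruction loss) can be illustrated with a toy example that is not from the paper: fitting a circle to a 2D point cloud using hand-derived gradients.

```python
import math

def fit_circle(points, lr=0.1, steps=200):
    """Fit a circle (cx, cy, r) to 2D points by gradient descent on the
    mean squared radial error: a toy stand-in for differentiable
    shape reconstruction from a point cloud."""
    n = len(points)
    cx = sum(x for x, _ in points) / n   # initialize center at centroid
    cy = sum(y for _, y in points) / n
    r = 1.0                              # deliberately poor radius guess
    for _ in range(steps):
        gcx = gcy = gr = 0.0
        for x, y in points:
            d = math.hypot(x - cx, y - cy)        # distance to current center
            gr += -2.0 * (d - r) / n              # dLoss/dr
            gcx += 2.0 * (d - r) * (cx - x) / (d * n)  # dLoss/dcx
            gcy += 2.0 * (d - r) * (cy - y) / (d * n)  # dLoss/dcy
        cx, cy, r = cx - lr * gcx, cy - lr * gcy, r - lr * gr
    return cx, cy, r

# Sample a clean "point cloud" on a circle: center (1, 2), radius 3
pts = [(1 + 3 * math.cos(a), 2 + 3 * math.sin(a))
       for a in (2 * math.pi * k / 12 for k in range(12))]
cx, cy, r = fit_circle(pts)
```

The method in the paper replaces the three circle parameters with per-point mesh attributes and the radial error with point-cloud and multi-view image losses, but the optimize-by-gradient loop is the same shape.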
-
Nvidia unveils $3,000 desktop AI computer for home LLM researchers
https://arstechnica.com/ai/2025/01/nvidias-first-desktop-pc-can-run-local-ai-models-for-3000
https://www.nvidia.com/en-us/project-digits
Some smaller open-weights AI language models (such as Llama 3.1 70B, with 70 billion parameters) and various AI image-synthesis models like Flux.1 dev (12 billion parameters) could probably run comfortably on Project DIGITS, but larger open models like Llama 3.1 405B, with 405 billion parameters, may not. Given the recent explosion of smaller AI models, a creative developer could likely run quite a few interesting models on the unit.
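As a back-of-envelope check on these claims, weight memory scales as parameter count times bytes per parameter. This rough sketch covers the weights only; activations and KV cache need extra headroom:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """GB needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

models = [("Llama 3.1 70B", 70), ("Flux.1 dev", 12), ("Llama 3.1 405B", 405)]
for name, p in models:
    # fp16 = 2 bytes/param; 4-bit quantization = 0.5 bytes/param
    print(f"{name}: ~{weight_memory_gb(p, 2):.0f} GB fp16, "
          f"~{weight_memory_gb(p, 0.5):.0f} GB 4-bit")
```

By this arithmetic, Llama 3.1 70B needs roughly 140 GB at fp16 but only about 35 GB once 4-bit quantized, so it fits in 128GB, while 405B needs about 203 GB even at 4-bit and does not.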
DIGITS’ 128GB of unified RAM is notable because even a high-end consumer GPU like the RTX 4090 has only 24GB of VRAM. Memory is a hard limit on model parameter size: the weights must fit, and more memory leaves room for larger local AI models.
-
Gaussian Splatting OFX plugin for Nuke
https://radiancefields.com/gaussian-splatting-in-nuke
https://aescripts.com/gaussian-splatting-for-nuke
Features
- Import .ply files into Nuke.
- Support for compressed .ply files from SuperSplat.
- Crop with a spherical or box shape.
- Crop with a Y plane.
- Combine up to 10 models in a scene.
- Colorize with a ramp using a spherical or box shape.
- Reveal a model with an opacity ramp.
- Animate splat scale with a spherical or box shape.
- Distort each model with noise.
- Render a depth pass for 3D compositing.
- Color correction per model.
- Real-time GPU rendering.
- Scene export.
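For context on the import step: a Gaussian-splat .ply stores each splat as a "vertex" whose properties hold position, opacity, scale, rotation, and color coefficients, all declared in a plain-text header. A minimal header parser (an illustrative stdlib-only sketch, not the plugin's code) could look like:

```python
def parse_ply_header(header_text: str) -> dict:
    """Parse the ASCII header of a .ply file, returning the storage
    format, vertex (splat) count, and per-vertex property names."""
    info = {"format": None, "vertex_count": 0, "properties": []}
    in_vertex = False
    for line in header_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "format":            # e.g. binary_little_endian 1.0
            info["format"] = parts[1]
        elif parts[0] == "element":         # element <name> <count>
            in_vertex = parts[1] == "vertex"
            if in_vertex:
                info["vertex_count"] = int(parts[2])
        elif parts[0] == "property" and in_vertex:
            info["properties"].append(parts[-1])   # property <type> <name>
        elif parts[0] == "end_header":
            break
    return info

# A small hypothetical header for demonstration
header = """ply
format binary_little_endian 1.0
element vertex 1500000
property float x
property float y
property float z
property float opacity
end_header"""
print(parse_ply_header(header))
```

Real splat files add many more properties (scale, rotation quaternion, spherical-harmonic color terms), and SuperSplat's compressed variant packs them differently, but the header structure is the same.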
-
ComfyUI + InstantID SDXL – Face and body swap tutorials
https://github.com/cubiq/ComfyUI_InstantID
https://github.com/cubiq/ComfyUI_InstantID/tree/main/examples
https://github.com/deepinsight/insightface
Unofficial version https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
Installation details under the post
-
ComfyUI Tutorial Series Ep 25 – LTX Video – Fast AI Video Generator Model
https://comfyanonymous.github.io/ComfyUI_examples/ltxv
LTX-Video 2B v0.9.1 Checkpoint model
https://huggingface.co/Lightricks/LTX-Video/tree/main
More details under the post
-
The AI-Copyright Trap – paper by Carys Craig
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4905118
“There are many good reasons to be concerned about the rise of generative AI (…). Unfortunately, there are also many good reasons to be concerned about copyright’s growing prevalence in the policy discourse around AI’s regulation. Insisting that copyright protects an exclusive right to use materials for text and data mining practices (whether for informational analysis or machine learning to train generative AI models) is likely to do more harm than good. As many others have explained, imposing copyright constraints will certainly limit competition in the AI industry, creating cost-prohibitive barriers to quality data and ensuring that only the most powerful players have the means to build the best AI tools (provoking all of the usual monopoly concerns that accompany this kind of market reality but arguably on a greater scale than ever before). It will not, however, prevent the continued development and widespread use of generative AI.”
…
“(…) As Michal Shur-Ofry has explained, the technical traits of generative AI already mean that its outputs will tend towards the dominant, likely reflecting ‘a relatively narrow, mainstream view, prioritizing the popular and conventional over diverse contents and narratives.’ Perhaps, then, if the political goal is to push for equality, participation, and representation in the AI age, critics’ demands should focus not on exclusivity but inclusivity. If we want to encourage the development of ethical and responsible AI, maybe we should be asking what kind of material and training data must be included in the inputs and outputs of AI to advance that goal. Certainly, relying on copyright and the market to dictate what is in and what is out is unlikely to advance a public interest or equality-oriented agenda.”
…
“If copyright is not the solution, however, it might reasonably be asked: what is? The first step to answering that question—to producing a purposively sound prescription and evidence-based prognosis—is to correctly diagnose the problem. If, as I have argued, the problem is not that AI models are being trained on copyright works without their owners’ consent, then requiring copyright owners’ consent and/or compensation for the use of their work in AI-training datasets is not the appropriate solution. (…) If the only real copyright problem is that the outputs of generative AI may be substantially similar to specific human-authored and copyright-protected works, then copyright law as we know it already provides the solution.”