Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it. So, will this large spend ever pay off? MIT’s Daron Acemoglu and GS’ Jim Covello are skeptical, with Acemoglu seeing only limited US economic upside from AI over the next decade and Covello arguing that the technology isn’t designed to solve the complex problems that would justify the costs, which may not decline as many expect. But GS’ Joseph Briggs, Kash Rangan, and Eric Sheridan remain more optimistic about AI’s economic potential and its ability to ultimately generate returns beyond the current “picks and shovels” phase, even if AI’s “killer application” has yet to emerge. And even if it does, we explore whether the current chips shortage (with GS’ Toshiya Hari) and looming power shortage (with Cloverleaf Infrastructure’s Brian Janous) will constrain AI growth. But despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst.
Local copy below
https://github.com/tin2tin/Pallaidium/
Text to video | Text to audio
Text to speech | Text to image
Image to image | Image to video
Video to video | Image to text
ControlNet | OpenPose
ADetailer | IP Adapter Face/Style
Canny | Illusion
Multiple LoRAs | Segmind distilled SDXL
Seed | Quality steps
Frames | Word power
Style selector | Strip power
Batch conversion | Batch refinement of images
Batch upscale & refinement of movies | Model card selector
Render-to-path selector | Render-finished notification
Model cards | One-click install and uninstall of dependencies
User-defined file path for generated files | Seed and prompt added to strip name
https://curiousrefuge.com/blog/ai-filmmaking-tools-for-filmmakers
https://spectrum.ieee.org/chatgpt-for-coding
ChatGPT has not yet been exposed to new problems and their solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered, which may explain why it is so much better at solving older coding problems than newer ones.
“ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems, thus, simple error feedback information is not enough,”
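The quote above suggests that a bare pass/fail signal is too thin for a model to repair its own code, and that richer feedback, such as a full traceback, is needed. A minimal sketch of such an execution-feedback loop, where `model` stands in for any hypothetical prompt-in/code-out callable (not a real API):

```python
import traceback

def generate_with_feedback(model, problem, max_attempts=3):
    """Ask the model for code, run it, and feed any traceback back
    as feedback richer than a bare "wrong" signal."""
    feedback = ""
    for _ in range(max_attempts):
        code = model(problem, feedback)
        try:
            namespace = {}
            exec(code, namespace)  # run the candidate (trusted input only!)
            return code            # it executed cleanly
        except Exception:
            # Hand back the full traceback, not just pass/fail.
            feedback = traceback.format_exc()
    return None
```

A real system would also run the candidate against test cases, but the loop structure is the same: each retry sees the concrete error, not just that one occurred.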
It’s a new Magnific spell that allows you to change the entire lighting of a scene and, optionally, the background with just:
1/ A prompt OR
2/ A reference image OR
3/ A light map (drawing your own lights)
https://x.com/javilopen/status/1805274155065176489
https://stableprojectorz.com/lessons-and-videos/
https://lumalabs.ai/genie Text to 3D
https://www.tripo3d.ai/app Image to 3D
https://www.vizcom.ai/ Web sketching and design app
https://github.com/lllyasviel/Fooocus
https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch
https://www.anthropic.com/claude
https://time.com/6990386/anthropic-dario-amodei-interview/
https://github.com/anthropics/anthropic-quickstarts
Dario Amodei, CEO of Anthropic, envisions a future where AI systems are not only powerful but also aligned with human values. After leaving OpenAI, Amodei co-founded Anthropic to tackle the safety challenges of AI, aiming to create systems that are both intelligent and ethical. One of the key methods Anthropic employs is “Constitutional AI,” a training approach that instills AI models with a set of core principles derived from universally accepted documents like the United Nations Declaration of Human Rights.
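Constitutional AI works by having the model critique and revise its own outputs against written principles. A toy sketch of that critique-and-revise loop, with `generate` standing in for any text-in/text-out model call; the prompts and function name are illustrative, not Anthropic's actual training code:

```python
def constitutional_revision(generate, principles, prompt):
    """Critique-and-revise loop in the spirit of Constitutional AI:
    draft a response, critique it against each principle in turn,
    then revise it to address the critique."""
    response = generate(prompt)
    for principle in principles:
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{response}")
        response = generate(
            f"Revise the response to address this critique.\n"
            f"Critique: {critique}\nOriginal response: {response}")
    return response
```

In the actual training pipeline, these self-revisions are used as preference data to fine-tune the model, rather than being run at inference time as sketched here.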
https://apps.apple.com/us/app/claude-by-anthropic/id6473753684
https://github.com/GaiaNet-AI/gaianet-node
GaiaNet is a decentralized computing infrastructure that enables individuals and businesses to create, deploy, scale, and monetize AI agents that reflect their own styles, values, knowledge, and expertise. Each GaiaNet node provides an OpenAI-compatible API backed by the node's own model and knowledge base.
https://github.com/abgulati/LARS
LARS lets you load your own documents and have the LLM ground its responses in them. This grounding helps increase accuracy and reduces AI-generated inaccuracies, or "hallucinations", a technique commonly known as Retrieval-Augmented Generation, or RAG.
LARS aims to be the ultimate open-source RAG-centric LLM application. To that end, LARS takes the concept of RAG much further by adding detailed citations to every response: specific document names, page numbers, text highlighting, and images relevant to your question, and even a document reader presented right within the response window. Not every citation type appears in every response, but the idea is that at least some combination of citations accompanies every RAG response, and that is generally the case.
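The citation idea can be sketched in miniature: retrieve the best-matching document chunks, answer from that context, and return the source document and page for each chunk used. The word-overlap retriever and `generate` callable below are toy stand-ins, not LARS's actual implementation:

```python
def retrieve(query, chunks, k=2):
    """Rank document chunks by naive word overlap with the query.
    Real systems use embedding similarity instead."""
    q = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c["text"].lower().split())),
        reverse=True)
    return scored[:k]

def answer_with_citations(generate, query, chunks):
    """RAG in miniature: retrieve relevant chunks, pass them to the
    model as context, and return the answer together with per-chunk
    citations (document name and page number)."""
    hits = retrieve(query, chunks)
    context = "\n".join(h["text"] for h in hits)
    answer = generate(f"Context:\n{context}\n\nQuestion: {query}")
    citations = [(h["doc"], h["page"]) for h in hits]
    return answer, citations
```

Because each retrieved chunk carries its source metadata, the citations fall out of the retrieval step for free; the hard part in practice is chunking documents so that page numbers and highlights stay accurate.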
DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. It is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. This continued pre-training substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advances across code-related tasks as well as reasoning and general capabilities. It also expands support from 86 to 338 programming languages and extends the context length from 16K to 128K tokens.
https://github.com/deepseek-ai/DeepSeek-Coder-V2
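A Mixture-of-Experts layer routes each token through only a few "experts", which is how MoE models grow total parameter count without growing per-token compute. A toy NumPy sketch of top-k gating, illustrative only; real MoE layers batch tokens and add load-balancing losses:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """One MoE layer for a single token: the gating network scores
    every expert, only the top-k experts actually run, and their
    outputs are combined weighted by renormalized gate scores."""
    logits = x @ gate_w                    # one score per expert
    top = np.argsort(logits)[-k:]          # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

With, say, 16 experts and k=2, only 2/16 of the expert parameters touch each token, yet all 16 experts' parameters count toward model capacity.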