The new model, MAI-1, is expected to have about 500 billion parameters, Seeking Alpha reported Monday (May 6), citing a paywalled article by The Information.
https://www.theverge.com/2024/5/7/24151109/apple-final-cut-camera-app-support-multicam-ipad
Apple has released Final Cut Camera for iPhone and iPad, allowing filmmakers to capture video and stream it live back to an iPad for a multicam shoot. The updated Final Cut 2 app lets users control each connected device running Final Cut Camera from a multiscreen view. Users can switch between production and editing at any time to live-cut their projects in the new version.
It’s becoming clear that deterministic physics cannot easily answer all aspects of nature at the astronomical and biological levels.
Is this a limitation of modern mathematics and/or our tools, or an actual barrier?
https://www.instagram.com/gerdegotit/reel/C6s-2r2RgSu/
After spending a lot of time recently with SDXL, I’ve made my way back to SD 1.5.
While the models overall have less fidelity, there is just no comparing to the current motion models we have available for AnimateDiff with 1.5 models.
To date, this is one of my favorite pieces, not because I think it’s even the best it can be, but because the workflow adjustments unlocked some very important ideas I can’t wait to try out.
Performance by @silkenkelly and @itxtheballerina on IG
Jonathan Bronfman, CEO at MARZ, tells us: “I don’t think the industry will ever be the same. It will recover slowly in 2024. The streaming wars cost studios too much money and now they are all reevaluating their strategies.”
He notes that AI will play a big role in how things shake out. “Technology is pushing out the traditional approach, something which is long overdue. Studios in Hollywood have been operating the same way for decades, and now AI will move them off their pedestal.
“The entire industry is in for a reckoning. I think studios would have come to this realisation eventually, so it was inevitable, but I think the pressure from the strikes accelerated this.”
https://www.vfxwire.com/how-the-vfx-industry-is-recovering-from-last-years-strikes/
Meta is the only Big Tech company committed to developing AI, particularly large language models, with an open-source approach.
There are 3 ways you can use Llama 3 for your business:
1- Llama 3 as a Service
Use Llama 3 from any cloud provider as a service. You pay per use, and pricing is typically much cheaper than for proprietary models like GPT-4 or Claude.
→ Use Llama 3 on Azure AI catalog:
https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-meta-llama-3-models-on-azure-ai-model-catalog/ba-p/4117144
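A minimal sketch of what calling a hosted Llama 3 deployment can look like, assuming an Azure AI serverless endpoint with an OpenAI-style chat-completions route. The endpoint URL, route, and API key below are placeholders; check the deployment page in the Azure AI model catalog for the exact values your deployment exposes.

```python
# Sketch: call a Llama 3 serverless deployment on Azure AI (pay-per-use).
# LLAMA3_ENDPOINT and LLAMA3_API_KEY are placeholders for the values shown
# on your deployment page in Azure AI Studio.
import os
import requests

ENDPOINT = os.environ["LLAMA3_ENDPOINT"]   # serverless endpoint URL
API_KEY = os.environ["LLAMA3_API_KEY"]     # key issued with the deployment

payload = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize why open-weight models matter for businesses."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

resp = requests.post(
    f"{ENDPOINT}/v1/chat/completions",      # assumed chat-completions route; verify for your deployment
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```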
2- Self-Hosting
If you have GPU infrastructure (on-premises or cloud), you can run Llama 3 internally at your desired scale.
→ Deploy Llama 3 on Amazon SageMaker:
https://www.philschmid.de/sagemaker-llama3
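A rough sketch of the self-hosting route, in the spirit of the linked tutorial: deploy the 8B Instruct model behind a Hugging Face TGI container on SageMaker. The model ID, instance type, and generation parameters are illustrative, and it assumes an AWS role with SageMaker permissions plus a Hugging Face token (the Llama 3 weights are gated).

```python
# Sketch: self-host Llama 3 8B Instruct on Amazon SageMaker behind the
# Hugging Face TGI inference container.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()                      # assumes an existing SageMaker execution role
image_uri = get_huggingface_llm_image_uri("huggingface")   # TGI inference image

model = HuggingFaceModel(
    role=role,
    image_uri=image_uri,
    env={
        "HF_MODEL_ID": "meta-llama/Meta-Llama-3-8B-Instruct",
        "HUGGING_FACE_HUB_TOKEN": "<your-hf-token>",       # Llama 3 weights are gated on the Hub
        "SM_NUM_GPUS": "1",
        "MAX_INPUT_LENGTH": "4096",
        "MAX_TOTAL_TOKENS": "8192",
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",                         # illustrative; pick enough GPU memory for the model
    container_startup_health_check_timeout=900,
)

response = predictor.predict({
    "inputs": "Explain self-hosting vs. model-as-a-service in two sentences.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
})
print(response)
```

Because you own the endpoint, you control scaling, data residency, and model versioning, but you also pay for the instance whether or not it is busy.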
3- Desktop (Offline)
Tools like Ollama allow you to run the smaller Llama 3 models offline on consumer hardware like current MacBooks.
→ Tutorial for Mac:
https://ollama.com/blog/llama3
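For the offline option, a small sketch of querying a locally served model through Ollama's REST API. It assumes the model has already been pulled (`ollama pull llama3`) and the Ollama server is running on its default port, 11434.

```python
# Sketch: query a Llama 3 model served locally by Ollama.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give me three bullet points on running LLMs offline.",
        "stream": False,    # return a single JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```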
Panelists include Dr. Andrew Glassner, Author and Distinguished Research Scientist in DL/ML & CG at Wētā FX; Susan O’Neal, VFX, Post & Technology Recruiter and VES 1st Vice Chair; Michele Sciolette, CTO at Cinesite Group and VES Technology Committee member; and Stephen Strauss, Shareholder & Co-Chair of Buchalter’s Entertainment Industry Group and Adjunct Professor at Southwestern Law School. The panel is moderated by Barbara Ford Grant, VES Technology Committee member and Media & Entertainment Executive, CTO & Industry Advisor.
https://www.patreon.com/posts/102660674
https://www.linkedin.com/posts/stephenwestland_here-is-a-post-about-the-dark-yellow-problem-activity-7187131643764092929-7uCL