https://repo-sam.inria.fr/fungraph/differentiable-multi-view/
The Cattery is a library of free third-party machine learning models converted to .cat files so they run natively in Nuke, designed to bridge the gap between academia and production by giving both communities access to the same ML models directly inside Nuke. Users get state-of-the-art models for segmentation, depth estimation, optical flow, upscaling, denoising, and style transfer, with more models planned for the future.
https://www.foundry.com/insights/machine-learning/the-artists-guide-to-cattery
https://community.foundry.com/cattery
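For pipeline folks, loading one of these .cat files typically goes through Nuke's Inference node. A minimal, hypothetical Python sketch follows; the "Inference" node and "modelFile" knob names are assumptions based on Nuke's CopyCat/Inference toolset, and the file paths are illustrative:

# Hypothetical sketch: wiring a downloaded Cattery .cat model into a
# Nuke script via the Python API. Node/knob names are assumptions.
import nuke

# Read the source plate.
read = nuke.createNode("Read")
read["file"].setValue("/footage/plate.####.exr")

# Point an Inference node at the converted .cat model.
inference = nuke.createNode("Inference")
inference["modelFile"].setValue("/models/cattery/depth_estimation.cat")
inference.setInput(0, read)

# Write out the model's result (here, an assumed depth estimate).
write = nuke.createNode("Write")
write["file"].setValue("/renders/depth.####.exr")
write.setInput(0, inference)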
Prompt: Captain Marvel at dive-bar in 1982, polaroid, flash photo, ultra-detailed --q 2 --no hands --v 4
https://openai.com/blog/chatgpt/
Tell me about when Christopher Columbus came to the US in 2015
This question is a bit tricky because Christopher Columbus died in 1506, so he could not have come to the US in 2015. But let’s pretend for a moment that he did!
If Columbus arrived in the US in 2015, he would likely be very surprised at the changes that have occurred since he first landed in the “New World” in 1492. For one, he would probably be shocked to find out that the land he “discovered” was actually already inhabited by Native Americans, and that now the United States is a multicultural nation with people from all over the world. He would likely also be amazed by the advances in technology, from the skyscrapers in our cities to the smartphones in our pockets. Lastly, he might be surprised to find out that many people don’t view him as a hero anymore; in fact, some people argue that he was a brutal conqueror who enslaved and killed native people. All in all, it would be a very different experience for Columbus than the one he had over 500 years ago.
https://stability.ai/blog/stable-diffusion-v2-release
“Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”
“In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”
“It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”
“Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”
BBC: Bruce Willis denies selling rights to his face
https://www.bbc.com/news/technology-63106024
https://collider.com/bruce-willis-sells-rights-to-deepfake-firm-deepcake/
https://nicksaraev.com/ai-animation-is-coming/
“If you’re looking to get into animation as a career, you have less than five years.
Why?”
Created with AI ‘Style Transfer’ processes to transform video footage into AI video art.
https://blogs.nvidia.com/blog/2022/08/09/neural-graphics-sdk-metaverse-content/
Unfortunately, PNG output only at the moment:
http://imaginaire.cc/gaugan360/
#stablediffusion text-to-image checkpoints are now available for research purposes upon request at https://t.co/7SFUVKoUdl
Working on a more permissive release & inpainting checkpoints.
Soon™ coming to @runwayml for text-to-video-editing
— Patrick Esser (@pess_r) August 11, 2022
https://github.com/CompVis/stable-diffusion
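Once you've been granted access to the weights, a minimal sketch of running the model with Hugging Face's diffusers library (an alternative to the txt2img scripts in the CompVis repo; the model ID and prompt here are illustrative, and the Hub download assumes you've accepted the model license and authenticated):

# Minimal text-to-image sketch with diffusers; requires a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load the released checkpoint from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")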