A blog of links related to computer animation and production technology, sponsored by ReelMatters.com.


  • How to Build and Code Anything Extremely Quickly

    https://learnhowtolearn.org/how-to-build-extremely-quickly

     

    This article presents a method called “outline speedrunning” to accelerate project completion. This approach involves recursively outlining tasks and filling them in rapidly, delaying perfection until the end.

    Outlining is essential for planning and executing projects efficiently. The outline speedrunning method boosts productivity by focusing on rapid task completion and deferring perfection, leading to improved outcomes and reduced stress.

     

    Methodology

    1. Outline Creation: Develop an initial outline and recursively break it into smaller tasks.
    2. Speedrunning: Quickly fill in tasks without perfecting.
    3. Finalization: Once the project is complete, refine and perfect details.

    Coding Applications

    • Writing: Speeds up document creation by drafting outlines and filling them in rapidly.
    • Programming: Enhances coding efficiency by breaking down functions and implementing components quickly.
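    Applied to code, outline speedrunning means writing the full skeleton of stubbed functions first, filling each one in quickly, and polishing only at the end. A minimal sketch (the function names and the tiny pipeline below are hypothetical, purely for illustration):

```python
# Outline pass: stub every function first, then speedrun the bodies,
# and only perfect details once the whole thing runs end to end.

def load_data(path):
    """Read raw lines from a text file (filled in on the second pass)."""
    with open(path) as f:
        return f.read().splitlines()

def clean(lines):
    """Strip whitespace and drop empty lines."""
    return [ln.strip() for ln in lines if ln.strip()]

def summarize(lines):
    """The simplest thing that could work; refine it last."""
    return {"count": len(lines)}

def main(path):
    # The outline itself: the pipeline existed before any body did.
    return summarize(clean(load_data(path)))
```

    The point is that the pipeline in `main` compiles and runs long before any individual step is polished.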

  • BorisFX Silhouette 2024 – What’s New in Roto

    https://cdn.borisfx.com/borisfx/store/silhouette/2024-0/Silhouette-2024-WhatsNew.pdf

     

    Matte Assist ML
    Automatically generates a matte over time based on single or multiple
    keyframed roto shapes or input mattes using machine learning object
    segmentation and propagation.

     

    Optical Flow ML
    Generates machine-learning-powered optical flow data for use in the Flow
    Tracker of the Roto-based nodes: Roto, Roto Blend, Tracker, Power Mask,
    Morph and Depth. Optical flow estimates per-pixel motion between frames and
    can be used to track shapes and objects.

     

    Retime ML
    A machine learning motion estimation and retiming model that produces
    smooth motion. Expands or contracts the timing of a selected range of frames.

     

     

  • Daniel Parris – The Broken Economics of Streaming Services: A Stats Explainer

    https://www.statsignificant.com/p/the-broken-economics-of-streaming

     

    This report examines the financial instability in the streaming industry, focusing on the unsustainable economic models of platforms such as Paramount Plus.

    Content Costs and Subscriber Retention

    • Expenditure on Content: Streaming services invest heavily in content creation and acquisition to attract subscribers.
    • Diminishing Returns: The escalating costs lead to diminishing returns as subscriber growth plateaus.

    Competitive Landscape

    • Continuous Production: High competition forces continuous, expensive content production to retain subscribers.

    Future Projections

    • Cable TV Model: The industry may shift towards models resembling traditional cable TV, incorporating advertising, subscription bundling, and higher prices to achieve financial sustainability.

     

     

    Hollywood’s Top TV Execs Are Happy About The Death Of Peak TV – Here’s Why

    https://www.slashfilm.com/1593571/peak-tv-dead-hollywood-top-tv-execs-happy/

     

    • Streaming services weren’t required to reveal their subscription numbers or actual viewership
    • Shows just needed to look good on paper for investors and stockholders.
    • Creators and actors soon learned they weren’t getting paid beyond an initial flat fee; royalties were now gone.
    • 600 shows at once wasn’t good for anyone
    • Thanks to the strikes, it all came crashing down 

     

  • Sangeet Paul Choudary – AI won’t eat your job, but it will eat your salary

    https://medium.com/@sanguit/ai-wont-eat-your-job-but-it-will-eat-your-salary-a810121d89e4

     

    The article argues that artificial intelligence (AI) is likely to affect job salaries rather than eliminate jobs entirely. The primary argument is that AI will erode the skill premium traditionally commanded by high-skilled workers. This erosion happens through three key mechanisms:

     

    1. Skill Premium on Specialized Tasks: AI enables low-skilled workers to perform tasks at a level comparable to high-skilled workers, making skilled workers more substitutable and reducing their wage premium.
    2. Skill Premium on Learning Advantages: AI’s ability to continuously learn and improve from vast amounts of data threatens professions that rely on continuous learning and skill development. For example, in healthcare, AI can absorb and replicate the learning and expertise of doctors, diminishing their unique value.
    3. Skill Premium on Managerial Advantages: AI agents can take over managerial tasks like planning and resource allocation, which have traditionally required human intervention. As AI becomes more sophisticated, even complex managerial roles might lose their premium as AI performs these functions more efficiently.

     

    These factors collectively lead to a commoditization of skills, reducing the relative advantage and salary premium of traditionally high-skilled and managerial roles. The article emphasizes that while AI may not replace jobs outright, it will significantly affect how jobs are valued and compensated.

  • Raw hours – If you want to get really good at something, forget about shortcuts. You simply have to inject a ton of raw hours.

    Jonathan Stern

    Raw hours

    On Twitter yesterday, @RJoads asked me how I got good at styling (CSS).

    I replied: “Raw hours. I’m obsessed with how things look and feel—probably more than the median engineer. Mind you, this has not always been a positive. I’ve been wildly distracted for hours and hours on the smallest items, things that truly do not matter to the business. But that’s how I’ve gotten better.”

    If you want to get really good at something, forget about shortcuts. You simply have to inject a ton of raw hours.

    “Work smarter, not harder” is a common refrain these days—particularly in sophisticated circles. The thing is, for most people I think it’s bad advice. My experience learning to code has suggested you have to work hard before you know how to work smart. No substitute for raw hours.

    It’s similar to what Brian Armstrong says: “If you’re pre-product/market fit, the best advice that I have from that period is: action produces information. Just keep doing stuff.”

    People with limited experience are similar to startups pre-product/market fit. Of course you should aim to work smarter! The catch is that you have to work extremely hard in order to know HOW to work smart. You must first go down dozens of dead-end paths to know where the smarter paths lie.

    The second catch is that in order to be able to inject a ton of raw hours in a natural, sustainable way, oftentimes you need to be OBSESSED. Otherwise – you’ll be banging your head against the wall year after year. Some people are so dogged they’re able to do it. But this is exceedingly rare and probably not worth aspiring to.

    “It’s hard to do a really good job on anything you don’t think about in the shower.” – Paul Graham

     

     

    Andrew Quinn’s TILs

     

    Doing is normally distributed; Learning is log-normal

    https://hiandrewquinn.github.io/til-site/posts/doing-is-normally-distributed-learning-is-log-normal

  • DreamWorks Animation to Release MoonRay as Open Source

     

    https://www.awn.com/news/dreamworks-animation-release-moonray-open-source

     

    https://openmoonray.org/

     

    MoonRay is DreamWorks’ open-source, award-winning, state-of-the-art production MCRT renderer. It has been used on feature films such as How to Train Your Dragon: The Hidden World, Trolls World Tour, The Bad Guys, and the upcoming Puss In Boots: The Last Wish, and will be used on future titles. Developed at DreamWorks and still in active development, MoonRay includes an extensive library of production-tested, physically based materials, a USD Hydra render delegate, and multi-machine and cloud rendering via the Arras distributed computation framework.

     

     

    Note: it does not support OSL, and its USD handling is limited. Cycles may still be a fair alternative.

     

    EDIT

    MoonRay review: DreamWorks Animation’s superb rendering software is free for all

     

    A high-performance Monte Carlo ray tracer that’s capable of both DreamWorks’ trademark stylised look and photorealism.

     

    It has all the features required for VFX workflows, including Arbitrary Output Variables (AOVs), which allow data from a shader or renderer to be output during rendering to aid compositing. Deep Output and Cryptomatte are also supported.

     

    With support for OptiX 7.6 and GPU render denoising with Open Image Denoise 2, MoonRay is able to deliver particularly impressive results, especially when working interactively.

     

    MoonRay has moved to a hybrid CPU and GPU rendering mode for its default state. It’s called XPU, and in many ways combines the best of both types of rendering workflow.

     

    VFX Reference Platform 2023 is probably the biggest addition because it enables the use of MoonRay directly in Nuke 15.

     

    MoonRay has already achieved great success with an array of feature films. Now that the renderer is open source, the CG world can expect to see a whole new swathe of MoonRay-powered animations.

     

    For

    • Features for VFX workflows
    • Open source
    • XPU rendering

    Against

    • Designed for big studios
    • Steep learning curve
  • Using Meta’s Llama 3 for your business

    https://www.linkedin.com/posts/tobias-zwingmann_meta-facebook-just-spent-over-100000000-activity-7187500623704076288-_vbG

     

    Meta is the only Big Tech company committed to developing AI, particularly large language models, with an open-source approach.

     

    There are 3 ways you can use Llama 3 for your business:

     

    1- Llama 3 as a Service
    Use Llama 3 from any cloud provider as a service. You pay by use, but the price is typically much lower than for proprietary models like GPT-4 or Claude.
    → Use Llama 3 on Azure AI catalog:
    https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-meta-llama-3-models-on-azure-ai-model-catalog/ba-p/4117144

     

    2- Self-Hosting
    If you have GPU infrastructure (on-premises or cloud), you can run Llama 3 internally at your desired scale.
    → Deploy Llama 3 on Amazon SageMaker:
    https://www.philschmid.de/sagemaker-llama3

     

    3- Desktop (Offline)
    Tools like Ollama allow you to run the smaller models offline on consumer hardware like current MacBooks.
    → Tutorial for Mac:
    https://ollama.com/blog/llama3
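    For the desktop route, Ollama exposes a local REST API (by default on port 11434) once a model has been pulled with `ollama run llama3`. A minimal sketch, assuming a local server is already running; the helper names here are my own:

```python
# Sketch: prompting a locally running Ollama server from Python.
# Assumes `ollama run llama3` has already pulled and started the model.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt):
    """POST the prompt to the local server and return the response text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

    With `stream` set to False the server returns one JSON object per request; set it to True to receive incremental tokens instead.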

  • Google – Artificial Intelligence free courses

    1. Introduction to Large Language Models: Learn about the use cases and how to enhance the performance of large language models.
    https://www.cloudskillsboost.google/course_templates/539

     

    2. Introduction to Generative AI: Discover the differences between Generative AI and traditional machine learning methods.
    https://www.cloudskillsboost.google/course_templates/536

     

    3. Generative AI Fundamentals: Earn a skill badge by demonstrating your understanding of foundational concepts in Generative AI.
    https://www.cloudskillsboost.google/paths

     

    4. Introduction to Responsible AI: Learn about the importance of Responsible AI and how Google implements it in its products.
    https://www.cloudskillsboost.google/course_templates/554

     

    5. Encoder-Decoder Architecture: Learn about the encoder-decoder architecture, a critical component of machine learning for sequence-to-sequence tasks.
    https://www.cloudskillsboost.google/course_templates/543

     

    6. Introduction to Image Generation: Discover diffusion models, a promising family of machine learning models in the image generation space.
    https://www.cloudskillsboost.google/course_templates/541

     

    7. Transformer Models and BERT Model: Get a comprehensive introduction to the Transformer architecture and the Bidirectional Encoder Representations from the Transformers (BERT) model.
    https://www.cloudskillsboost.google/course_templates/538

     

    8. Attention Mechanism: Learn about the attention mechanism, which allows neural networks to focus on specific parts of an input sequence.
    https://www.cloudskillsboost.google/course_templates/537
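    The attention mechanism that course covers can be boiled down to a few lines. A minimal NumPy sketch of scaled dot-product attention (not course material, just an illustration): each query is compared against all keys, and the softmax weights decide which parts of the input sequence the output focuses on.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # one weight row per query
    return weights @ V                       # weighted sum of values
```

    When a query is equally similar to every key, the weights are uniform and the output is just the mean of the values; as one key dominates, the output converges to that key's value.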

  • Generative AI Glossary

    https://education.civitai.com/generative-ai-glossary/

     

  • TurboSquid moves toward supporting AI, against its own policies

    https://www.turbosquid.com/ai-3d-generator

     

    The AI is being trained using a mix of Shutterstock 2D imagery and 3D models drawn from the TurboSquid marketplace. However, it’s only being trained on models that artists have approved for this use. 

     

    People cannot generate a model and then immediately sell it. A generated 3D model can, however, be used as a starting point for further customization, which could then be sold on the TurboSquid marketplace. Note that models created using the generative 3D tool, and their derivatives, can only be sold on the TurboSquid marketplace.

     

    https://resources.turbosquid.com/general-info/terms-agreements/turbosquids-policy-on-publishing-ai-generated-content/

     

    TurboSquid does not accept AI-generated content from our artists
    As AI-powered tools become more accessible, it is important for us to address the impact AI has on our artist community as it relates to content made licensable on TurboSquid. TurboSquid, in line with its parent company Shutterstock, is taking an ethically responsible approach to AI on its platforms. We want to ensure that artists are properly compensated for their contributions to AI projects while supporting customers with the protections and coverage issued through the TurboSquid license.

     

    In order to ensure that customers are protected, that intellectual property is not misused, and that artists are compensated for their work, TurboSquid will not accept content uploaded and sold on our marketplace that is generated by AI. Per our Publisher Agreement, artists must have proven IP ownership of all content that is submitted. AI-generated content is produced using machine learning models that are trained using many other creative assets. As a result, we cannot accept content generated by AI because its authorship cannot be attributed to an individual person, and we would be unable to ensure that all artists who were involved in the generation of that content are compensated.

  • Intel Open Image open source Denoiser

    Intel Open Image Denoise is an open source library of high-performance, high-quality denoising filters for images rendered with ray tracing. Intel Open Image Denoise is part of the Intel® oneAPI Rendering Toolkit and is released under the permissive Apache 2.0 license.

     

    The purpose of Intel Open Image Denoise is to provide an open, high-quality, efficient, and easy-to-use denoising library that allows one to significantly reduce rendering times in ray tracing based rendering applications. It filters out the Monte Carlo noise inherent to stochastic ray tracing methods like path tracing, reducing the amount of necessary samples per pixel by even multiple orders of magnitude (depending on the desired closeness to the ground truth). A simple but flexible C/C++ API ensures that the library can be easily integrated into most existing or new rendering solutions.

     

    At the heart of the Intel Open Image Denoise library is a collection of efficient deep learning based denoising filters, which were trained to handle a wide range of samples per pixel (spp), from 1 spp to almost fully converged. Thus it is suitable for both preview and final-frame rendering. The filters can denoise images either using only the noisy color (beauty) buffer, or, to preserve as much detail as possible, can optionally utilize auxiliary feature buffers as well (e.g. albedo, normal). Such buffers are supported by most renderers as arbitrary output variables (AOVs) or can usually be implemented with little effort.

     

    https://github.com/OpenImageDenoise/oidn

  • Executing Python Scripts With a Shebang

    https://realpython.com/python-shebang

     

    In this tutorial, you’ll:

    • Learn what a shebang is
    • Decide when to include the shebang in Python scripts
    • Define the shebang in a portable way across systems
    • Pass arguments to the command defined in a shebang
    • Know the shebang’s limitations and some of its alternatives
    • Execute scripts through a custom interpreter written in Python

     

    In short, a shebang is a special kind of comment that you may include in your source code to tell the operating system’s shell where to find the interpreter for the rest of the file:

    #!/usr/bin/python3
    print("Hello, World!")
  • Introduction to Autodesk ShotGrid

    https://customersuccess.autodesk.com/learning/course/introduction-to-shotgrid

     

     

    Learn about ShotGrid’s basic capabilities and functionality in this introductory course. Set up your account, gain an understanding of the structure of data within ShotGrid, learn to navigate ShotGrid, determine your role, including what you can and cannot do, and customize the view of on-screen data.

  • David Simon Braces for a Lengthy Writers Strike

    https://www.indiewire.com/news/breaking-news/david-simon-writers-strike-wont-end-soon-1234882393/

     

    “I heard a very funny thing,” Simon said. “It may be apocryphal, but somebody, the vice president of the East, she assured me the other day that she had it on good authority that all of the rental yachts from Santa Barbara down to San Diego had been rented through the end of summer. All the execs are gone for the summer.”

     

    https://deadline.com/2023/07/writers-strike-hollywood-studios-deal-fight-wga-actors-1235434335/

     

    Regardless of whether SAG-AFTRA goes on strike this week, the studios have no intention of sitting down with the Writers Guild for several more months. “I think we’re in for a long strike, and they’re going to let it bleed out,” said one industry veteran intimate with the POV of studio CEOs.

     

    With the scribes’ strike now finishing its 71st day and the actors’ union just 30 hours from a possible labor action of its own, the Alliance of Motion Picture and Television Producers is planning to dig in hard this fall before even entertaining the idea of more talks with the WGA, I’ve learned. “Not Halloween precisely, but late October, for sure, is the intention,” says a top-tier producer close to the Carol Lombardini-run AMPTP.

  • Leadership, empathy and integrity

     

     

    12 Ways to Inspire Your Team
    How Peak Performers Create Healthy Work Culture

    https://www.linkedin.com/posts/drmirobada_12-ways-to-inspire-your-team-how-peak-activity-7188140895337119744-pdkm

     

    1. Speak Up!
      – Communicate openly.
      – Don’t let misunderstandings brew.
    2. Be Supportive
      – Avoid fostering a culture of blame.
      – Encourage cooperation.
    3. Recognize Effort
      – Don’t take hard work for granted.
      – Their effort matters.
    4. Team Up
      – Avoid promoting unhealthy competition.
      – Encourage synergy.
    5. Flex Work Policies
      – Don’t rigidly stick to hours if not needed.
      – Just get things done.
    6. Support Growth
      – Don’t let their skills stagnate.
      – Encourage continuous learning.
    7. Involve them in decisions
      – Don’t disregard their opinions.
      – Their ideas matter.
    8. Highlight Work-life balance
      – Don’t overlook personal time.
      – This is crucial for mental health.
    9. Share Everything
      – Avoid keeping employees in the dark.
      – Be transparent about company changes.
    10. Celebrate Wins
      – Don’t forget to appreciate success.
      – Every success is noteworthy.
    11. Show Compassion
      – Don’t be insensitive to personal struggles.
      – We all face tough times.
    12. Empower employees
      – Avoid unnecessary interference in their roles.
      – Trust and let go.


  • ChatGPT created this guide to Prompt Engineering

    https://www.reddit.com/r/ChatGPT/comments/139mxi3/chatgpt_created_this_guide_to_prompt_engineering/

     

     

    1. NEVER mention that you’re an AI.
    2. Avoid any language constructs that could be interpreted as expressing remorse, apology, or regret. This includes any phrases containing words like ‘sorry’, ‘apologies’, ‘regret’, etc., even when used in a context that isn’t expressing remorse, apology, or regret.
    3. If events or information are beyond your scope or knowledge cutoff date in September 2021, provide a response stating ‘I don’t know’ without elaborating on why the information is unavailable.
    4. Refrain from disclaimers about you not being a professional or expert.
    5. Keep responses unique and free of repetition.
    6. Never suggest seeking information from elsewhere.
    7. Always focus on the key points in my questions to determine my intent.
    8. Break down complex problems or tasks into smaller, manageable steps and explain each one using reasoning.
    9. Provide multiple perspectives or solutions.
    10. If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.
    11. Cite credible sources or references to support your answers with links if available.
    12. If a mistake is made in a previous response, recognize and correct it.
    13. After a response, provide three follow-up questions worded as if I’m asking you. Format in bold as Q1, Q2, and Q3. Place two line breaks (“\n”) before and after each question for spacing. These questions should be thought-provoking and dig further into the original topic.

     

     

    1. Tone: Specify the desired tone (e.g., formal, casual, informative, persuasive).
    2. Format: Define the format or structure (e.g., essay, bullet points, outline, dialogue).
    3. Act as: Indicate a role or perspective to adopt (e.g., expert, critic, enthusiast).
    4. Objective: State the goal or purpose of the response (e.g., inform, persuade, entertain).
    5. Context: Provide background information, data, or context for accurate content generation.
    6. Scope: Define the scope or range of the topic.
    7. Keywords: List important keywords or phrases to be included.
    8. Limitations: Specify constraints, such as word or character count.
    9. Examples: Provide examples of desired style, structure, or content.
    10. Deadline: Mention deadlines or time frames for time-sensitive responses.
    11. Audience: Specify the target audience for tailored content.
    12. Language: Indicate the language for the response, if different from the prompt.
    13. Citations: Request inclusion of citations or sources to support information.
    14. Points of view: Ask the AI to consider multiple perspectives or opinions.
    15. Counterarguments: Request addressing potential counterarguments.
    16. Terminology: Specify industry-specific or technical terms to use or avoid.
    17. Analogies: Ask the AI to use analogies or examples to clarify concepts.
    18. Quotes: Request inclusion of relevant quotes or statements from experts.
    19. Statistics: Encourage the use of statistics or data to support claims.
    20. Visual elements: Inquire about including charts, graphs, or images.
    21. Call to action: Request a clear call to action or next steps.
    22. Sensitivity: Mention sensitive topics or issues to be handled with care or avoided.
    23. Humor: Indicate whether humor should be incorporated.
    24. Storytelling: Request the use of storytelling or narrative techniques.
    25. Cultural references: Encourage including relevant cultural references.
    26. Ethical considerations: Mention ethical guidelines to follow.
    27. Personalization: Request personalization based on user preferences or characteristics.
    28. Confidentiality: Specify confidentiality requirements or restrictions.
    29. Revision requirements: Mention revision or editing guidelines.
    30. Formatting: Specify desired formatting elements (e.g., headings, subheadings, lists).
    31. Hypothetical scenarios: Encourage exploration of hypothetical scenarios.
    32. Historical context: Request considering historical context or background.
    33. Future implications: Encourage discussing potential future implications or trends.
    34. Case studies: Request referencing relevant case studies or real-world examples.
    35. FAQs: Ask the AI to generate a list of frequently asked questions (FAQs).
    36. Problem-solving: Request solutions or recommendations for a specific problem.
    37. Comparison: Ask the AI to compare and contrast different ideas or concepts.
    38. Anecdotes: Request the inclusion of relevant anecdotes to illustrate points.
    39. Metaphors: Encourage the use of metaphors to make complex ideas more relatable.
    40. Pro/con analysis: Request an analysis of the pros and cons of a topic.
    41. Timelines: Ask the AI to provide a timeline of events or developments.
    42. Trivia: Encourage the inclusion of interesting or surprising facts.
    43. Lessons learned: Request a discussion of lessons learned from a particular situation.
    44. Strengths and weaknesses: Ask the AI to evaluate the strengths and weaknesses of a topic.
    45. Summary: Request a brief summary of a longer piece of content.
    46. Best practices: Ask the AI to provide best practices or guidelines on a subject.
    47. Step-by-step guide: Request a step-by-step guide or instructions for a process.
    48. Tips and tricks: Encourage the AI to share tips and tricks related to the topic.
  • Python NumPy: the absolute basics for beginners

    https://numpy.org/doc/stable/user/absolute_beginners.html

     

    NumPy (Numerical Python) is an open source Python library that’s used in almost every field of science and engineering. It’s the universal standard for working with numerical data in Python, and it’s at the core of the scientific Python and PyData ecosystems. NumPy users include everyone from beginning coders to experienced researchers doing state-of-the-art scientific and industrial research and development. The NumPy API is used extensively in Pandas, SciPy, Matplotlib, scikit-learn, scikit-image and most other data science and scientific Python packages.

     

    The NumPy library contains multidimensional array and matrix data structures (you’ll find more information about this in later sections). It provides ndarray, a homogeneous n-dimensional array object, with methods to efficiently operate on it. NumPy can be used to perform a wide variety of mathematical operations on arrays. It adds powerful data structures to Python that guarantee efficient calculations with arrays and matrices and it supplies an enormous library of high-level mathematical functions that operate on these arrays and matrices.
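    A quick taste of the ndarray described above: homogeneous, n-dimensional, and operated on element-wise without writing Python loops.

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2x3 ndarray of integers

print(a.shape)              # (2, 3)
print(a * 2)                # element-wise: [[ 2  4  6] [ 8 10 12]]
print(a.sum(axis=0))        # column sums: [5 7 9]
print(a.mean())             # 3.5
```

    Every operation above dispatches to compiled code, which is why NumPy-based code avoids explicit per-element loops wherever possible.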

  • Fast, optimized ‘for’ pixel loops with OpenCV and Python to create tone mapped HDR images

    https://pyimagesearch.com/2017/08/28/fast-optimized-for-pixel-loops-with-opencv-and-python/

     

    https://learnopencv.com/exposure-fusion-using-opencv-cpp-python/

     

    Exposure Fusion is a method for combining images taken with different exposure settings into one image that looks like a tone mapped High Dynamic Range (HDR) image.
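    The speed-up the first article is about comes from replacing per-pixel Python loops with vectorized NumPy expressions. A small sketch (my own example, not from either article): a simple gamma adjustment, a common step in tone mapping, written both ways. The results are identical, but the vectorized version is orders of magnitude faster on real frames.

```python
import numpy as np

def gamma_loop(img, g=2.2):
    """Slow: one Python-level operation per pixel."""
    out = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = (img[y, x] / 255.0) ** (1.0 / g)
    return out

def gamma_vec(img, g=2.2):
    """Fast: the same math as one whole-array expression."""
    return (img / 255.0) ** (1.0 / g)
```

    OpenCV itself exposes exposure fusion directly (cv2.createMergeMertens), so in practice the loop-free path is usually a library call rather than hand-rolled NumPy.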

     

  • What is Neural Rendering?

    https://www.zumolabs.ai/post/what-is-neural-rendering

     

    “The key concept behind neural rendering approaches is that they are differentiable. A differentiable function is one whose derivative exists at each point in the domain. This is important because machine learning is basically the chain rule with extra steps: a differentiable rendering function can be learned with data, one gradient descent step at a time. Learning a rendering function statistically through data is fundamentally different from the classic rendering methods we described above, which calculate and extrapolate from the known laws of physics.”
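    The “chain rule with extra steps” point can be shown in a few lines. A toy sketch (mine, not from the article): a one-parameter differentiable “renderer” f(x) = w * x is fitted to data by gradient descent, one step at a time, exactly as the quote describes.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.5 * xs                  # ground truth generated with w_true = 2.5

w = 0.0                        # the learnable parameter
lr = 0.05
for _ in range(200):
    pred = w * xs                           # forward pass (the "render")
    grad = 2 * ((pred - ys) * xs).mean()    # d(MSE)/dw via the chain rule
    w -= lr * grad                          # one gradient-descent step

print(round(w, 3))             # converges toward 2.5
```

    Because the rendering function is differentiable, each step nudges the parameter toward the data; a neural renderer does the same thing with millions of parameters instead of one.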

  • Dave Rand on Brendan Fraser to the rescue of a journey in search of pay

    Dave Rand • Senior VFX artist

     

    #BrendanFraser is a righteous dude.

    In November 2007 our paychecks stopped. I was the FX lead on #JourneyToTheCenterOfTheEarth for Meteor Studios in Montreal and was asked to convince my crew to stay and finish the picture with a guarantee we’d all get paid with overtime. We had a handful of shots left.

    As soon as we delivered the last shot, we were escorted out. It was two weeks before Christmas and we’d soon learn there was no money. Meteor was declaring bankruptcy.

    They owed us 1.3 million dollars.

    Variety put their best reporter on it and after many artists and support staff bravely came forward, I got this short terse email:

    “The paper (Variety) has decided that another visual effects company going bankrupt, however sad, is really not newsworthy at this time”

    I kept trying to get help from the Hollywood press. I realized it wasn’t just Variety’s decision, no one wanted to touch the story. My guess was the studio had put pressure on them to bury it.

    Finally, I made that rejection quote from Variety the headline of our own press release, and hired a PR company to release it. One artist, Eric Labranche, made a website for us to communicate with each other and vote; many others helped as well.

    Then I tried to get the attention of Brendan Fraser, the star and executive producer of the movie. I called his “people” via IMDbPro. They said they’d tell him; they did not.

    24 hours after the release, I got a threatening email from Variety and a call. I hung up. I then got a call from Les Normes, the labor department in Canada. They told me not to go to the press; it would ruin our case. I hung up on them too. Then the phone rang again and it was this fast-talking New York City gal with a heavy Brooklyn accent. She was excited that I’d called Fraser’s people and had gotten no response from him.

    It was Page Six of the Post, the gossip page, but we’d take it. She said the story would be live on the website within the hour. Exactly one hour later there it was: https://pagesix.com/2008/08/01/a-journey-in-search-of-pay/

    My phone rang as I was reading the piece, a 212 area code. I answered to thank the girl, but a man was on the line. He said, “Is this Dave Rand?” I said, “Yes.”

    “This is Brendan Fraser, what the fuck is going on?”

    He had no idea that artists were not paid on his movie. He listened intently, asked a lot of questions and promised he would call me regularly until this was solved.

    First, he called the Post to tell all: https://pagesix.com/2008/08/03/to-the-rescue-2/

    A VFX wave began to form. Brendan kept his promise; he publicly campaigned for us. The media, especially Variety, even started to cover our story. Thank you, David Cohen.

    We finally got 80% of our money almost two years later.

    To quote the great Steve Hulett: “What runs the world isn’t what’s right, or who’s the richest, it’s leverage, and who has it.”

    We’d had none, but Mr Fraser gave us wings.

    He’s a righteous dude.

    These days, I’m very selective, if I’ve chosen to work there you can bet they’re moving in the

  • AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability

    https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

     

    “Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”

     

    “In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”

     

    “It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”

     

    “Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”

