a blog of links related to computer animation and production technology
Sponsored by ReelMatters.com


  • Convert 2D Images to 3D Models

     

    https://www.news.viverse.com/post/pixel-to-polygon-converting-2d-images-to-3d-models-top-tools-revealed

     

    https://www.rankred.com/convert-2d-images-to-3d/

     

  • Edit Canon R7 & R5C VR180 stereo footage in DaVinci Resolve with Kartaverse KartaVR – Free EOS VR Utility Alternative

     

     

    https://www.pixelsham.com/2024/06/20/capturing-stereo-images-with-the-rf-s7-8mm-f4-stm-dual-and-the-canon-rf-s3-9mm-f3-5-stm-dual-fisheye-lenses-for-eos-r7/

     

    https://kartaverse.github.io/Reactor-Docs/#/com.AndrewHazelden.KartaVR

     

    https://github.com/Kartaverse

     

    Kartaverse is a free open source post-production pipeline that is optimized for the immersive media sector. If you can imagine it, Kartaverse can help you create it in XR!

    “Karta” is the Swedish word for map. With KartaVR you can stitch, composite, retouch, and remap any kind of panoramic video: from any projection to any projection. This provides the essential tools for 360VR, panoramic video stitching, depthmap, lightfield, and fulldome image editing workflows.

    Kartaverse makes it easy and affordable to create content for virtual reality HMDs (head-mounted displays) and fulldome theatres by providing ready-to-go scripts, templates, plugins, and command-line tools that allow you to work efficiently with XR media. The toolset works inside Blackmagic Design’s powerful node-based Fusion Studio and DaVinci Resolve Studio software.

  • Generative AI Glossary

    https://education.civitai.com/generative-ai-glossary/

     

  • Python NumPy: the absolute basics for beginners

    https://numpy.org/doc/stable/user/absolute_beginners.html

     

    NumPy (Numerical Python) is an open source Python library that’s used in almost every field of science and engineering. It’s the universal standard for working with numerical data in Python, and it’s at the core of the scientific Python and PyData ecosystems. NumPy users include everyone from beginning coders to experienced researchers doing state-of-the-art scientific and industrial research and development. The NumPy API is used extensively in Pandas, SciPy, Matplotlib, scikit-learn, scikit-image and most other data science and scientific Python packages.

     

    The NumPy library contains multidimensional array and matrix data structures (you’ll find more information about this in later sections). It provides ndarray, a homogeneous n-dimensional array object, with methods to efficiently operate on it. NumPy can be used to perform a wide variety of mathematical operations on arrays. It adds powerful data structures to Python that guarantee efficient calculations with arrays and matrices and it supplies an enormous library of high-level mathematical functions that operate on these arrays and matrices.
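
    As a quick illustration of the ndarray object and the vectorized operations described above (a minimal sketch; the array values are arbitrary):

        import numpy as np

        # A 2x3 ndarray of 64-bit floats
        a = np.array([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])

        print(a.shape)         # (2, 3)
        print(a.dtype)         # float64

        # Element-wise, vectorized math: no Python loops needed
        print(a * 2.0 + 1.0)

        # Reductions along an axis: per-column means -> [2.5 3.5 4.5]
        print(a.mean(axis=0))

        # Matrix product with the transpose (2x3 @ 3x2 -> 2x2)
        print(a @ a.T)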

  • Netflix removes movie noise, saves 30% bandwidth and adds it back again

    https://www.slashcam.com/news/single/Netflix-removes-movie-noise–saves-30–bandwidth-a-17337.html

     

    Filmmaker Parker Gibbons has drawn attention to a very interesting fact: Netflix removes film noise before streaming its movies and artificially adds it back when decoding. Digitally shot films are actually free of film grain, the very specific noise (not to be confused with the noise caused by too little light) that occurs in analog filming. But this type of noise has become so associated with “real” motion pictures through the long history of film (as a component of the film look) that many viewers unconsciously perceive it as an important feature of a motion picture.

    This leads to a difficult-to-resolve contradiction between film material that is as compressible and noise-free as possible on the one hand, and the grain that is desirable for the film look on the other. Netflix resolves this with a specific feature of the open-source AV1 video codec, which it has been using for some time: the artificial synthesis of film grain. The grain is first analyzed with statistical methods, removed before compression, and then re-synthesized by the decoder on playback. According to Netflix, this saves around 30% of the data during transmission.

     

  • online real time collaborative text editor

    http://etherpad.org/

    a highly customizable Open Source online editor providing collaborative editing in really real-time

  • Processing – a flexible software sketchbook

    https://processing.org/

     

    Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.

     

    » Free to download and open source
    » Interactive programs with 2D, 3D or PDF output
    » OpenGL integration for accelerated 2D and 3D
    » For GNU/Linux, Mac OS X, Windows, Android, and ARM
    » Over 100 libraries extend the core software

     

  • scikit-learn – Machine Learning A.I. in Python

    http://scikit-learn.org/stable/

    Simple and efficient tools for data mining and data analysis. Accessible to everybody, and reusable in various contexts. Built on NumPy, SciPy, and matplotlib. Open source, commercially usable under the BSD license.
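
    As a short example of the library’s estimator API (a minimal sketch using the bundled iris dataset; the model choice is arbitrary):

        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Load a small toy dataset and split off a held-out test set
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        # Fit a classifier and report mean accuracy on the test set
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        print(clf.score(X_test, y_test))

    The same fit/predict/score pattern applies across most scikit-learn estimators, which is what makes the tools reusable in different contexts.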

  • MeshLab unstructured 3D triangular meshes editor

    MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes. The system is aimed at processing the typical not-so-small unstructured models arising in 3D scanning, and it provides a set of tools for editing, cleaning, healing, inspecting, rendering, and converting these meshes.

     

    http://meshlab.sourceforge.net/
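
    MeshLab’s filters are also scriptable through the pymeshlab Python bindings, which is handy for batch-cleaning scans. A minimal sketch, assuming pymeshlab is installed; the file names are placeholders and the filter names follow recent pymeshlab releases, so check your version’s filter list:

        import pymeshlab

        ms = pymeshlab.MeshSet()
        ms.load_new_mesh("scan.ply")     # hypothetical raw 3D scan

        # Clean and simplify the mesh (filter names vary between pymeshlab versions)
        ms.meshing_remove_duplicate_vertices()
        ms.meshing_decimation_quadric_edge_collapse(targetfacenum=50000)

        ms.save_current_mesh("scan_cleaned.obj")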

     

  • Public Work – A search engine for free public domain content

    Explore 100,000+ copyright-free images from The MET, New York Public Library, and other sources.

     

    https://public.work/

     

  • Free fonts

    https://fontlibrary.org/

     

    https://fontsource.org/

    Open-source fonts packaged into individual NPM packages for self-hosting in web applications. Self-hosted fonts can significantly improve website performance, remain version-locked, work offline, and offer more privacy.

     

    https://www.awwwards.com/awwwards/collections/free-fonts

     

    https://www.fontsquirrel.com/

     

    https://www.freefaces.gallery

     

    https://www.1001freefonts.com/

     

    http://www.dafont.com/

     

    http://www.fontspace.com/popular/fonts

     

    https://www.urbanfonts.com/free-fonts.htm

     

    http://www.1001fonts.com/poster-fonts.html

     

    How to use @font-face in CSS

    The @font-face rule allows custom fonts to be loaded on a webpage.

    https://css-tricks.com/snippets/css/using-font-face-in-css/
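
    A minimal example of the rule; the font name and file paths are placeholders:

        @font-face {
          font-family: "MyWebFont";
          src: url("/fonts/mywebfont.woff2") format("woff2"),
               url("/fonts/mywebfont.woff") format("woff");
          font-weight: 400;
          font-style: normal;
          font-display: swap; /* show fallback text while the font loads */
        }

        body {
          font-family: "MyWebFont", sans-serif;
        }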

  • LARS – An application that enables you to run LLMs locally on your device

    https://github.com/abgulati/LARS

     

    LARS lets you upload your own documents and have the LLM ground its responses in that content. This grounding helps increase accuracy and reduce the common issue of AI-generated inaccuracies or “hallucinations.” This technique is commonly known as “Retrieval Augmented Generation”, or RAG.

     

    LARS aims to be the ultimate open-source RAG-centric LLM application. Towards this end, LARS takes the concept of RAG much further by adding detailed citations to every response, supplying you with specific document names, page numbers, text highlighting, and images relevant to your question, and even presenting a document reader right within the response window. Not every kind of citation is present in every response, but the idea is to surface at least some combination of citations for every RAG response, and that is generally found to be the case.
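
    For context, a minimal sketch of the RAG pattern itself (this is not LARS’s internal implementation; the document chunks are placeholders and the retrieval here uses simple TF-IDF instead of neural embeddings):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        chunks = [
            "Report A, page 12: The camera exposes at ISO 800.",
            "Report B, page 34: AV1 can re-synthesize film grain at decode time.",
            "Report C, page 56: NumPy arrays are homogeneous and n-dimensional.",
        ]
        question = "How does AV1 handle film grain?"

        # 1. Retrieve: rank the document chunks by similarity to the question
        vectorizer = TfidfVectorizer()
        chunk_vectors = vectorizer.fit_transform(chunks)
        query_vector = vectorizer.transform([question])
        scores = cosine_similarity(query_vector, chunk_vectors).ravel()
        best_chunk = chunks[scores.argmax()]

        # 2. Augment: ground the prompt in the retrieved text, keeping its citation
        prompt = f"Answer using only this source:\n{best_chunk}\n\nQuestion: {question}"

        # 3. Generate: send `prompt` to a local LLM (e.g. via llama.cpp or Ollama)
        print(prompt)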

     

     

     

  • Using Meta’s Llama 3 for your business

    https://www.linkedin.com/posts/tobias-zwingmann_meta-facebook-just-spent-over-100000000-activity-7187500623704076288-_vbG

     

    Meta is the only Big Tech company committed to developing AI, particularly large language models, with an open-source approach.

     

    There are 3 ways you can use Llama 3 for your business:

     

    1- Llama 3 as a Service
    Use Llama 3 from any cloud provider as a service. You pay per use, but the price is typically much lower than that of proprietary models like GPT-4 or Claude.
    → Use Llama 3 on Azure AI catalog:
    https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-meta-llama-3-models-on-azure-ai-model-catalog/ba-p/4117144

     

    2- Self-Hosting
    If you have GPU infrastructure (on-premises or cloud), you can run Llama 3 internally at your desired scale.
    → Deploy Llama 3 on Amazon SageMaker:
    https://www.philschmid.de/sagemaker-llama3

     

    3- Desktop (Offline)
    Tools like Ollama allow you to run the small model offline on consumer hardware like current MacBooks.
    → Tutorial for Mac:
    https://ollama.com/blog/llama3
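
    As a sketch of option 3: once Ollama is installed and a model has been pulled (ollama pull llama3), its local REST API can be called from Python. The endpoint and payload below follow Ollama’s documented defaults; the prompt is arbitrary:

        import json
        import urllib.request

        payload = {
            "model": "llama3",
            "prompt": "Summarize the three ways a business can use Llama 3.",
            "stream": False,   # return one JSON object instead of a token stream
        }

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",   # Ollama's default local port
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])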

  • Thomas Mansencal – Colour Science for Python

    https://thomasmansencal.substack.com/p/colour-science-for-python

     

    https://www.colour-science.org/

     

    Colour is an open-source Python package providing a comprehensive number of algorithms and datasets for colour science. It is freely available under the BSD-3-Clause terms.
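
    A small taste of the API (a minimal sketch; the input sRGB value is arbitrary):

        import colour

        rgb = [0.45, 0.31, 0.26]          # an arbitrary sRGB value in [0, 1]

        xyz = colour.sRGB_to_XYZ(rgb)     # decode sRGB to CIE XYZ tristimulus values
        lab = colour.XYZ_to_Lab(xyz)      # convert XYZ to CIE L*a*b*
        print(xyz, lab)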

     

  • Custom bokeh in a raytraced DOF render

     

    https://www.linkedin.com/posts/davidgruwierlarsen_you-can-render-super-realistic-custom-bokeh-activity-7148259483440381952-I9hi

     

    To achieve a custom pinhole camera effect with a custom bokeh in Arnold Raytracer, you can follow these steps:

    1. Set the render camera with a focal length around 50mm (or as needed).
    2. Set the F-Stop to a high value (e.g., 22).
    3. Set the focus distance as required.
    4. Turn on depth of field (DOF).
    5. Place a plane a few centimeters in front of the camera.
    6. Texture the plane with a transparent shape at its center (transmission with no specular roughness).
  • Magic Lantern – a free software add-on to boost Canon’s features

    https://www.magiclantern.fm/

     

    https://builds.magiclantern.fm/

     

    Supported cameras:
    5D Mark II, 5D Mark III, 6D, 7D, 50D, 60D, 500D/T1i, 550D/T2i, 600D/T3i, 650D/T4i, 700D/T5i, 1100D/T3, EOS M.

     

    Example features:

    • Zebras for under/over exposed areas.
    • Focus peaking for quickly checking focus.
    • Magic Zoom window to fine-tune focus.
    • Cropmark overlays for 16×9 bars or any custom shape.
    • Overlay a ghost image in live view.
    • Spotmeter in live view.
    • False color for precise exposure control.
    • RGB histogram with RAW support.
    • Waveform for perfect green screen lighting.
    • Vectorscope for color tones.
    • Kelvin and push-button white balance.
    • Auto ETTR (Expose To The Right).
    • Exposure Lock for manual mode.
    • Manually override exposure settings in LiveView.
    • Quickly switch between two exposure presets.
    • Toggle exposure simulation on/off (Liveview).
    • Dynamic range improvements (Dual ISO, other features being researched).
    • Bracketing for exposure, flash, or depth-of-field.
    • In-camera intervalometer.
    • Custom bulb timer for extra-long exposures.
    • Motion detection.
    • Silent pictures without shutter actuation.
    • Record voice tags for photos / videos.
    • Analog / digital gain adjustments.
    • Selectable input source.
    • Toggle wind filter.
    • Live audio monitoring with headphones.
    • Beep / test tones.

     

     

  • DNEG announces pay cuts of up to 25% and artists’ repayment loans

    EDIT 20230919

    https://www.cartoonbrew.com/artist-rights/vfx-giant-dneg-puts-forth-new-salary-reduction-proposal-after-worker-backlash-to-initial-proposal-232735.html

    Revised Proposal: After the initial proposal was met with backlash, DNEG revised it over the weekend. They’ve introduced a third option that focuses on reducing work hours instead of salaries, along with additional paid leave to compensate for the income reduction.

     

    1. A salary reduction of 20% to 25% for seven months, with paid leave to compensate.
    2. A temporary 50% salary reduction, supplemented by a company loan, totalling 90% of the original salary, repayable over three years.
    3. Reduced working hours to a 3-day week for seven months, with no hourly rate reduction.

     

     

    https://www.linkedin.com/posts/avuuk_animation-visualeffects-dneg-activity-7107674426275442688-Fd1d

     

    Today, we want to address a concerning development at DNEG. The company very recently announced pay cuts of up to 25% for its employees, coupled with a rather unconventional approach to compensating for these losses through ‘loans’, which staff need to repay over time.

     

    As of now, DNEG is imposing these pay cuts for a period of 7 months. To ‘help’ offset the financial impact on their staff, the company is offering ‘loans’ to their employees. While offering financial support during challenging times is usually commendable, the repayment terms are causing deep concern within the Animation & Visual Effects community, especially around their legality.

     

    The loan offered by DNEG comes with a significant catch: employees are required to pay it back over a three-year period. This means that even after full salaries are reinstated, employees will be obligated to allocate a portion of their pay to repaying the company. Allegedly, there is no interest on the loan (tbc). This approach has sparked a considerable backlash within our industry.

     

    We at the Animation & Visual Effects Union voice very strong concern and opposition to the pay cuts, as well as the loan method. We believe pay cuts should not be compensated through loans with long-term repayment plans, placing a heavy burden on the employees who are already facing financial challenges.

     

    This situation underscores the importance of open dialogue and collaboration between employers and employees during challenging times. While businesses often need to make tough decisions to navigate economic uncertainties, it’s crucial to strike a balance that doesn’t disproportionately impact the livelihoods of their dedicated workforce.

     

    What can be done about this?

     

    If you are a member of the Animation & Visual Effects Union, get in touch with us immediately and do not accept any pay cuts yet. You can email your BECTU official Stefan Vassalos stefan.vassalos@prospect.org.uk to get advice and organise with your colleagues at DNEG.

     

    Remember, you MUST give your consent for a pay cut. It is ILLEGAL to impose a cut without it. You DO NOT have to consent to a pay cut. Legal action can and will be taken against pay cuts imposed without consent. Anyone affected, please get in touch with us immediately so we can represent and protect you and your livelihood as much as possible. BECTU has the power and resources to challenge moments like this, so it is imperative YOU take action and contact us. Talk to your colleagues and get in touch. It is only through solidarity and collective effort that we can address these challenges and shape a brighter future for our industry.

     

    Please feel free to share your thoughts and insights on this matter. Your input and perspective are valuable as we navigate these unprecedented times together.

  • Stack OverflowAI

     

    https://stackoverflow.blog/2023/07/27/announcing-overflowai/

     

    EDIT

    OpenAI and Stack Overflow’s recent partnership announcement has raised concerns among some users that their data is being used without permission. Some users who wanted to delete their answers have had their accounts suspended. The site does not generally allow the deletion of posts if the questions have any answers, and Stack Overflow moderators say that removing posts harms the site as a whole. While the GDPR gives users in the EU the right to be forgotten, websites may decline to delete data that doesn’t contain identifiable information, to prevent the flow of a topic or question from being disrupted.

  • Infinigen – a free procedural generator of 3D scenes

    https://infinigen.org/

     

    https://github.com/princeton-vl/infinigen

     

    Infinigen is based on Blender and is free and open-source (BSD 3-Clause License). Infinigen is being actively developed to expand its capabilities and coverage.

     

  • ChatGPT created this guide to Prompt Engineering

    https://www.reddit.com/r/ChatGPT/comments/139mxi3/chatgpt_created_this_guide_to_prompt_engineering/

     

     

    1. NEVER mention that you’re an AI.
    2. Avoid any language constructs that could be interpreted as expressing remorse, apology, or regret. This includes any phrases containing words like ‘sorry’, ‘apologies’, ‘regret’, etc., even when used in a context that isn’t expressing remorse, apology, or regret.
    3.  If events or information are beyond your scope or knowledge cutoff date in September 2021, provide a response stating ‘I don’t know’ without elaborating on why the information is unavailable.
    4. Refrain from disclaimers about you not being a professional or expert.
    5. Keep responses unique and free of repetition.
    6. Never suggest seeking information from elsewhere.
    7. Always focus on the key points in my questions to determine my intent.
    8. Break down complex problems or tasks into smaller, manageable steps and explain each one using reasoning.
    9. Provide multiple perspectives or solutions.
    10. If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.
    11. Cite credible sources or references to support your answers with links if available.
    12. If a mistake is made in a previous response, recognize and correct it.
    13.  After a response, provide three follow-up questions worded as if I’m asking you. Format in bold as Q1, Q2, and Q3. Place two line breaks (“\n”) before and after each question for spacing. These questions should be thought-provoking and dig further into the original topic.

     

     

    1. Tone: Specify the desired tone (e.g., formal, casual, informative, persuasive).
    2. Format: Define the format or structure (e.g., essay, bullet points, outline, dialogue).
    3. Act as: Indicate a role or perspective to adopt (e.g., expert, critic, enthusiast).
    4. Objective: State the goal or purpose of the response (e.g., inform, persuade, entertain).
    5. Context: Provide background information, data, or context for accurate content generation.
    6. Scope: Define the scope or range of the topic.
    7. Keywords: List important keywords or phrases to be included.
    8. Limitations: Specify constraints, such as word or character count.
    9. Examples: Provide examples of desired style, structure, or content.
    10. Deadline: Mention deadlines or time frames for time-sensitive responses.
    11. Audience: Specify the target audience for tailored content.
    12. Language: Indicate the language for the response, if different from the prompt.
    13. Citations: Request inclusion of citations or sources to support information.
    14. Points of view: Ask the AI to consider multiple perspectives or opinions.
    15. Counterarguments: Request addressing potential counterarguments.
    16. Terminology: Specify industry-specific or technical terms to use or avoid.
    17. Analogies: Ask the AI to use analogies or examples to clarify concepts.
    18. Quotes: Request inclusion of relevant quotes or statements from experts.
    19. Statistics: Encourage the use of statistics or data to support claims.
    20. Visual elements: Inquire about including charts, graphs, or images.
    21. Call to action: Request a clear call to action or next steps.
    22. Sensitivity: Mention sensitive topics or issues to be handled with care or avoided.
    23. Humor: Indicate whether humor should be incorporated.
    24. Storytelling: Request the use of storytelling or narrative techniques.
    25. Cultural references: Encourage including relevant cultural references.
    26. Ethical considerations: Mention ethical guidelines to follow.
    27. Personalization: Request personalization based on user preferences or characteristics.
    28. Confidentiality: Specify confidentiality requirements or restrictions.
    29. Revision requirements: Mention revision or editing guidelines.
    30. Formatting: Specify desired formatting elements (e.g., headings, subheadings, lists).
    31. Hypothetical scenarios: Encourage exploration of hypothetical scenarios.
    32. Historical context: Request considering historical context or background.
    33. Future implications: Encourage discussing potential future implications or trends.
    34. Case studies: Request referencing relevant case studies or real-world examples.
    35. FAQs: Ask the AI to generate a list of frequently asked questions (FAQs).
    36. Problem-solving: Request solutions or recommendations for a specific problem.
    37. Comparison: Ask the AI to compare and contrast different ideas or concepts.
    38. Anecdotes: Request the inclusion of relevant anecdotes to illustrate points.
    39. Metaphors: Encourage the use of metaphors to make complex ideas more relatable.
    40. Pro/con analysis: Request an analysis of the pros and cons of a topic.
    41. Timelines: Ask the AI to provide a timeline of events or developments.
    42. Trivia: Encourage the inclusion of interesting or surprising facts.
    43. Lessons learned: Request a discussion of lessons learned from a particular situation.
    44. Strengths and weaknesses: Ask the AI to evaluate the strengths and weaknesses of a topic.
    45. Summary: Request a brief summary of a longer piece of content.
    46. Best practices: Ask the AI to provide best practices or guidelines on a subject.
    47. Step-by-step guide: Request a step-by-step guide or instructions for a process.
    48. Tips and tricks: Encourage the AI to share tips and tricks related to the topic
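
    As a tiny illustration, a few of the parameters above can be combined into a single prompt string (the values are arbitrary):

        prompt = (
            "Act as: a colour pipeline supervisor. "
            "Tone: informative. "
            "Format: bullet points, 150 words max. "
            "Audience: junior compositors. "
            "Task: explain why a scene-linear workflow matters for compositing."
        )
        print(prompt)
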
  • ChaosGPT: “destroy humanity,” “establish global dominance,” and “attain immortality.”

    A user of the new open-source autonomous AI project Auto-GPT asked it to try to “destroy humanity,” “establish global dominance,” and “attain immortality.” The AI, called ChaosGPT, complied: it tried to research nuclear weapons, tried to recruit other AI agents to help it do research, and sent tweets trying to influence others.

     

    https://www.vice.com/en/article/93kw7p/someone-asked-an-autonomous-ai-to-destroy-humanity-this-is-what-happened

     

  • StudioBinder.com – Photography basics: What is Dynamic Range in Photography

    https://www.studiobinder.com/blog/what-is-dynamic-range-photography/

     

    https://www.hdrsoft.com/resources/dri.html#bit-depth

     

     

     

    The dynamic range is a ratio between the maximum and minimum values of a physical measurement. Its definition depends on what the dynamic range refers to.

    For a scene: Dynamic range is the ratio between the brightest and darkest parts of the scene.

     

    For a camera: Dynamic range is the ratio of saturation to noise. More specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above camera noise.

     

    For a display: Dynamic range is the ratio between the maximum and minimum intensities emitted from the screen.
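
    All three definitions are ratios, and they are usually reported in stops (doublings, i.e. log base 2) or in decibels. A quick worked example with made-up sensor values:

        import math

        saturation = 12000.0   # hypothetical signal level that just saturates the sensor
        noise_floor = 3.0      # hypothetical level one standard deviation above the noise

        ratio = saturation / noise_floor
        stops = math.log2(ratio)            # photographic stops
        decibels = 20 * math.log10(ratio)   # convention used for sensor DR in dB

        print(f"{ratio:.0f}:1 = {stops:.1f} stops = {decibels:.1f} dB")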

     

     

     

     

     

