• The AI-Copyright Trap – a paper by Carys Craig


    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4905118

“There are many good reasons to be concerned about the rise of generative AI (…). Unfortunately, there are also many good reasons to be concerned about copyright’s growing prevalence in the policy discourse around AI’s regulation. Insisting that copyright protects an exclusive right to use materials for text and data mining practices (whether for informational analysis or machine learning to train generative AI models) is likely to do more harm than good. As many others have explained, imposing copyright constraints will certainly limit competition in the AI industry, creating cost-prohibitive barriers to quality data and ensuring that only the most powerful players have the means to build the best AI tools (provoking all of the usual monopoly concerns that accompany this kind of market reality, but arguably on a greater scale than ever before). It will not, however, prevent the continued development and widespread use of generative AI.”



“(…) As Michal Shur-Ofry has explained, the technical traits of generative AI already mean that its outputs will tend towards the dominant, likely reflecting ‘a relatively narrow, mainstream view, prioritizing the popular and conventional over diverse contents and narratives.’ Perhaps, then, if the political goal is to push for equality, participation, and representation in the AI age, critics’ demands should focus not on exclusivity but inclusivity. If we want to encourage the development of ethical and responsible AI, maybe we should be asking what kind of material and training data must be included in the inputs and outputs of AI to advance that goal. Certainly, relying on copyright and the market to dictate what is in and what is out is unlikely to advance a public interest or equality-oriented agenda.”



“If copyright is not the solution, however, it might reasonably be asked: what is? The first step to answering that question, to producing a purposively sound prescription and evidence-based prognosis, is to correctly diagnose the problem. If, as I have argued, the problem is not that AI models are being trained on copyright works without their owners’ consent, then requiring copyright owners’ consent and/or compensation for the use of their work in AI-training datasets is not the appropriate solution. (…) If the only real copyright problem is that the outputs of generative AI may be substantially similar to specific human-authored and copyright-protected works, then copyright law as we know it already provides the solution.”

• Newton’s Cradle – An AI Film By Jeff Synthesized


Narrative voice via Artlistai, news reporter voice via PlayAI; all other voices are V2V in ElevenLabs.
Powered by (in order of amount) ‘HailuoAI’, ‘KlingAI’ and of course some of our special sauce. Performance capture by ‘Runway’s Act-One’.
Edited and color graded in ‘DaVinci Resolve’. Composited with ‘After Effects’.

In this film, the ‘Newton’s Cradle’ isn’t just a symbolic object; it represents the fragile balance between control and freedom in a world where time itself is being manipulated. The oscillation of the cradle reflects the constant push and pull of power in this dystopian society. By the end of the film, we discover that this seemingly innocuous object holds the potential to disrupt the system, offering a glimmer of hope that time can be reset and balance restored.

  • xinsir – controlnet-union-sdxl-1.0 examples


    https://huggingface.co/xinsir/controlnet-union-sdxl-1.0

    deblur

    inpainting

    outpainting

    upscale

    openpose

    depthmap

    canny

    lineart

    anime lineart

    mlsd

    scribble

    hed

    softedge

    ted

    segmentation

    normals

    openpose + canny
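
    As a concrete (and hedged) sketch of wiring one of these modes up, the snippet below drives the canny mode through the standard diffusers SDXL ControlNet pipeline. Loading the union checkpoint with the generic ControlNetModel class is an assumption for single-mode use (the model card also ships its own pipeline that takes an explicit control-type index), and the file names and prompt are placeholders.

        # Sketch: single-mode (canny) use of the union ControlNet with the
        # standard diffusers SDXL pipeline. Assumes the checkpoint loads via
        # the generic ControlNetModel class; paths and prompt are placeholders.
        import cv2
        import numpy as np
        import torch
        from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
        from PIL import Image

        controlnet = ControlNetModel.from_pretrained(
            "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
        )
        pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        # Build a canny edge map from an input photo to act as the control image.
        source = np.array(Image.open("input.jpg").convert("RGB"))
        edges = cv2.Canny(source, 100, 200)
        control = Image.fromarray(np.stack([edges] * 3, axis=-1))

        result = pipe(
            prompt="a cinematic photo of a city street at dusk",
            image=control,
            controlnet_conditioning_scale=0.8,
        ).images[0]
        result.save("output.jpg")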

  • What is deepfake GAN (Generative Adversarial Network) technology?

    https://www.techtarget.com/whatis/definition/deepfake

    Deepfake technology is a type of artificial intelligence used to create convincing fake images, videos and audio recordings. The term describes both the technology and the resulting bogus content and is a portmanteau of deep learning and fake.

    Deepfakes often transform existing source content where one person is swapped for another. They also create entirely original content where someone is represented doing or saying something they didn’t do or say.

    Deepfakes aren’t edited or photoshopped videos or images. In fact, they’re created using specialized algorithms that blend existing and new footage. For example, subtle facial features of people in images are analyzed through machine learning (ML) to manipulate them within the context of other videos.

Deepfakes use two algorithms — a generator and a discriminator — to create and refine fake content. The generator builds a training data set based on the desired output, creating the initial fake digital content, while the discriminator analyzes how realistic or fake the initial version of the content is. This process is repeated, enabling the generator to improve at creating realistic content and the discriminator to become more skilled at spotting flaws for the generator to correct.

    The combination of the generator and discriminator algorithms creates a generative adversarial network.

    A GAN uses deep learning to recognize patterns in real images and then uses those patterns to create the fakes.

    When creating a deepfake photograph, a GAN system views photographs of the target from an array of angles to capture all the details and perspectives.
    When creating a deepfake video, the GAN views the video from various angles and analyzes behavior, movement and speech patterns.
    This information is then run through the discriminator multiple times to fine-tune the realism of the final image or video.
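
    As a rough illustration of that generator/discriminator loop, here is a minimal GAN training sketch in PyTorch. The tiny MLP networks and the 2-D toy data are hypothetical stand-ins (a real deepfake system trains on face imagery), but the adversarial structure is the one described above.

        # Minimal GAN training loop: a generator learns to fool a
        # discriminator. Architectures and data are toy placeholders.
        import torch
        import torch.nn as nn

        latent_dim, data_dim = 8, 2
        G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
        D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        def real_batch(n=64):
            # Toy "real" distribution: points on a unit circle. A deepfake
            # pipeline would feed face crops here instead.
            angle = torch.rand(n, 1) * 6.2832
            return torch.cat([angle.cos(), angle.sin()], dim=1)

        for step in range(2000):
            # 1) Discriminator step: score real data as 1, generated data as 0.
            real = real_batch()
            fake = G(torch.randn(real.size(0), latent_dim)).detach()
            loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
                     bce(D(fake), torch.zeros(fake.size(0), 1))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # 2) Generator step: update G so D scores its fakes as real.
            fake = G(torch.randn(64, latent_dim))
            loss_g = bce(D(fake), torch.ones(64, 1))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()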

  • The History, Evolution and Rise of AI

    https://medium.com/@lmpo/a-brief-history-of-ai-with-deep-learning-26f7948bc87b

🔹 1943: McCulloch & Pitts create the first artificial neuron.
🔹 1950: Alan Turing introduces the Turing Test, forever changing the way we view intelligence.
🔹 1956: John McCarthy coins the term “Artificial Intelligence,” marking the official birth of the field.
🔹 1957: Frank Rosenblatt invents the Perceptron, one of the first neural networks.
🔹 1959: Bernard Widrow and Ted Hoff create ADALINE, a model that would shape neural networks.
🔹 1969: Minsky & Papert show that single-layer perceptrons cannot solve the XOR problem, marking the beginning of the “first AI winter.”
🔹 1980: Kunihiko Fukushima introduces the Neocognitron, laying the groundwork for deep learning.
🔹 1986: Geoffrey Hinton and David Rumelhart introduce backpropagation, making neural networks viable again.
🔹 1989: Judea Pearl advances probabilistic reasoning with Bayesian networks, building a foundation for AI’s logical abilities.
🔹 1995: Vladimir Vapnik and Corinna Cortes develop Support Vector Machines (SVMs), a breakthrough in machine learning.
🔹 1998: Yann LeCun popularizes Convolutional Neural Networks (CNNs), revolutionizing image recognition.
🔹 2006: Geoffrey Hinton and Ruslan Salakhutdinov introduce deep belief networks, reigniting interest in deep learning.
🔹 2012: Alex Krizhevsky and Geoffrey Hinton launch AlexNet, sparking the modern AI revolution in deep learning.
🔹 2014: Ian Goodfellow introduces Generative Adversarial Networks (GANs), opening new doors for AI creativity.
🔹 2017: Ashish Vaswani and team introduce Transformers, redefining natural language processing (NLP).
🔹 2020: OpenAI unveils GPT-3, setting a new standard for language models and AI’s capabilities.
🔹 2022: OpenAI releases ChatGPT, democratizing conversational AI and bringing it to the masses.


  • Eddie Yoon – There’s a big misconception about AI creative


    You’re being tricked into believing that AI can produce Hollywood-level videos…

We’re far from it.

Yes, we’ve made huge progress.

A video sample like this, created using Kling 1.6, is light-years ahead of what was possible a year ago. But there’s still a significant limitation: visual continuity beyond 5 seconds.

Right now, no AI model can maintain consistency beyond a few seconds. That’s why most AI-generated concepts you’re seeing on your feed rely on 2–5 second cuts – it’s all the tech can handle before things start to fall apart.

This isn’t necessarily a problem for creating movie trailers or spec ads. Trailers, for instance, are designed for quick, attention-grabbing rapid cuts, and AI excels at this style of visual storytelling.

But making a popular, full-length movie with nothing but 5-second shots? That’s absurd.

    There are very few exceptions to this rule in modern cinema (e.g., the Bourne franchise).

    To bridge the gap between trailers and full-length cinema, AI creative needs to reach 2 key milestones:
– 5–12 sec average: the average shot length (ASL) for slower, non-action scenes in contemporary films – think conversations, emotional moments, or establishing shots
    – 30+ sec sequences: Longer, uninterrupted takes are essential for genres that require immersion – drama, romance, thrillers, or any scene that builds tension or atmosphere

    Mastering longer cuts is crucial.

    30-second continuous shots require a higher level of craftsmanship and visual consistency – you need that 20-30 seconds of breathing room to piece together a variety of scenes and create a compelling movie.

    So, where does AI creative stand now?

AI is already transforming industries like auto, fashion, and CPG. These brands can use AI today because short, 2–5 second cuts work perfectly in their visual language. Consumers are accustomed to it, and it simply works. This psychological dynamic is unlikely to change anytime soon.

But for AI to produce true cinema (not just flashy concepts), it needs to extend its visual consistency. And every GenAI company is racing to get there.

    The timeline?

    Next year, expect breakthroughs in AI-generated content holding consistency for 10+ seconds. By then, full-length commercials, shows, and movies (in that order) will start to feel more crafted, immersive, and intentional, not just stitched together.

If you’re following AI’s impact on creativity, this is the development to watch. The companies that solve continuity will redefine what’s possible in film.


    https://www.linkedin.com/posts/eddieyoon_ai-artificialintelligence-technology-ugcPost-7278552351467061248-Ts5v

  • Andreas Horn – Want cutting edge AI?

    ๐—ง๐—ต๐—ฒ ๐—ฏ๐˜‚๐—ถ๐—น๐—ฑ๐—ถ๐—ป๐—ด ๐—ฏ๐—น๐—ผ๐—ฐ๐—ธ๐˜€ ๐—ผ๐—ณ ๐—”๐—œ ๐—ฎ๐—ป๐—ฑ ๐—ฒ๐˜€๐˜€๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น ๐—ฝ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€๐—ฒ๐˜€:

    – Collect: Data from sensors, logs, and user input.
    – Move/Store: Build infrastructure, pipelines, and reliable data flow.
    – Explore/Transform: Clean, prep, and detect anomalies to make the data usable.
    – Aggregate/Label: Add analytics, metrics, and labels to create training data.
    – Learn/Optimize: Experiment, test, and train AI models.

    ๐—ง๐—ต๐—ฒ ๐—น๐—ฎ๐˜†๐—ฒ๐—ฟ๐˜€ ๐—ผ๐—ณ ๐—ฑ๐—ฎ๐˜๐—ฎ ๐—ฎ๐—ป๐—ฑ ๐—ต๐—ผ๐˜„ ๐˜๐—ต๐—ฒ๐˜† ๐—ฏ๐—ฒ๐—ฐ๐—ผ๐—บ๐—ฒ ๐—ถ๐—ป๐˜๐—ฒ๐—น๐—น๐—ถ๐—ด๐—ฒ๐—ป๐˜:

    – Instrumentation and logging: Sensors, logs, and external data capture the raw inputs.
    – Data flow and storage: Pipelines and infrastructure ensure smooth movement and reliable storage.
    – Exploration and transformation: Data is cleaned, prepped, and anomalies are detected.
    – Aggregation and labeling: Analytics, metrics, and labels create structured, usable datasets.
    – Experimenting/AI/ML: Models are trained and optimized using the prepared data.
    – AI insights and actions: Advanced AI generates predictions, insights, and decisions at the top.

    ๐—ช๐—ต๐—ผ ๐—บ๐—ฎ๐—ธ๐—ฒ๐˜€ ๐—ถ๐˜ ๐—ต๐—ฎ๐—ฝ๐—ฝ๐—ฒ๐—ป ๐—ฎ๐—ป๐—ฑ ๐—ธ๐—ฒ๐˜† ๐—ฟ๐—ผ๐—น๐—ฒ๐˜€:

– Data Infrastructure Engineers: Build the foundation – collect, move, and store data.
    – Data Engineers: Prep and transform the data into usable formats.
    – Data Analysts & Scientists: Aggregate, label, and generate insights.
    – Machine Learning Engineers: Optimize and deploy AI models.

    ๐—ง๐—ต๐—ฒ ๐—บ๐—ฎ๐—ด๐—ถ๐—ฐ ๐—ผ๐—ณ ๐—”๐—œ ๐—ถ๐˜€ ๐—ถ๐—ป ๐—ต๐—ผ๐˜„ ๐˜๐—ต๐—ฒ๐˜€๐—ฒ ๐—น๐—ฎ๐˜†๐—ฒ๐—ฟ๐˜€ ๐—ฎ๐—ป๐—ฑ ๐—ฟ๐—ผ๐—น๐—ฒ๐˜€ ๐˜„๐—ผ๐—ฟ๐—ธ ๐˜๐—ผ๐—ด๐—ฒ๐˜๐—ต๐—ฒ๐—ฟ. ๐—ง๐—ต๐—ฒ ๐˜€๐˜๐—ฟ๐—ผ๐—ป๐—ด๐—ฒ๐—ฟ ๐˜†๐—ผ๐˜‚๐—ฟ ๐—ณ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐˜๐—ต๐—ฒ ๐˜€๐—บ๐—ฎ๐—ฟ๐˜๐—ฒ๐—ฟ ๐˜†๐—ผ๐˜‚๐—ฟ ๐—”๐—œ.

    https://www.linkedin.com/posts/andreashorn1_%F0%9D%97%AA%F0%9D%97%AE%F0%9D%97%BB%F0%9D%98%81-%F0%9D%97%B0%F0%9D%98%82%F0%9D%98%81%F0%9D%98%81%F0%9D%97%B6%F0%9D%97%BB%F0%9D%97%B4-%F0%9D%97%B2%F0%9D%97%B1%F0%9D%97%B4%F0%9D%97%B2-%F0%9D%97%94%F0%9D%97%9C-%F0%9D%97%A7-activity-7276861752477184000-KvUy

