Hollywood’s pre-pandemic “normal” wasn’t sustainable or healthy, particularly for workers who faced long hours, poor work-life balance, and limited diversity. This article calls for the industry to use this post-pandemic period as a chance to reform and prioritize the well-being, creativity, and inclusivity of its workforce, rather than simply returning to old, harmful practices.
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
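For readers who want to poke at the database directly, a minimal pandas sketch of the kind of query the two taxonomies make possible is below; the file name and the column names ("Domain", "Subdomain", "Quote", "Page") are illustrative assumptions, not the repository's actual schema.

```python
# Hypothetical exploration of a CSV export of the AI Risk Database.
# File name and column names are illustrative assumptions, not the real schema.
import pandas as pd

risks = pd.read_csv("ai_risk_database.csv")  # assumed export of the 700+ risk entries

# Filter to one domain/subdomain pair from the Domain Taxonomy.
misinformation = risks[
    (risks["Domain"] == "Misinformation")
    & (risks["Subdomain"] == "False or misleading information")
]

# Each entry keeps the supporting quote and page number from its source framework.
print(misinformation[["Quote", "Page"]].head())
```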
The court declined to dismiss copyright infringement claims against the AI companies. The order could implicate other firms that used Stable Diffusion, the AI model at issue in the case. The case will move forward to discovery, where the artists could uncover information about how the AI firms harvested the copyrighted materials that were then used to train their generative AI models.
Nearly 140 statues at the booth from licenses including DC Comics, Lord of the Rings, Uncharted, The Last of Us, Bloodborne, Demon's Souls, God of War, Jurassic Park, Godzilla, Predator, Aliens, Transformers, Berserk, Evangelion, My Hero Academia, Chainsaw Man, Attack on Titan, the DC movie universe, X-Men, Spider-Man, and much more.
The Curse of Knowledge is a cognitive bias that occurs when someone incorrectly assumes that others have the background knowledge needed to understand them.
You can avoid the negative effects of the curse of knowledge by continually questioning your assumptions about how much your audience actually knows.
A paper by computer scientists Matyas Bohacek and Hany Farid with the catchy title ‘Nepotistically Trained Generative-AI Models Collapse’ shows that training AI image generators on AI images quite quickly leads to a deterioration in the quality of output. Farid likened the phenomenon to inbreeding. “If a species inbreeds with their own offspring and doesn’t diversify their gene pool, it can lead to a collapse of the species,” he said.
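The effect is easy to reproduce in miniature. The sketch below is a statistical analogy, not the paper's image-generation experiment: a Gaussian is repeatedly refit on samples drawn from its own previous fit, so estimation error compounds from one generation to the next.

```python
# Toy analogue of "nepotistic" training: each generation is fit only on samples
# produced by the previous generation's model, so estimation error compounds
# and, over many generations, the fitted spread tends to drift toward zero.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=20)  # small "real" dataset

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 61):
    synthetic = rng.normal(mu, sigma, size=20)     # sample from the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data only
    if generation % 10 == 0:
        print(f"generation {generation:2d}: fitted std = {sigma:.3f} (true value was 1.0)")
```

Real generative models are vastly more complex, but the compounding of sampling error in a self-consuming training loop is the same basic mechanism the authors liken to inbreeding.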
How a manually misconfigured, untested server that kept running old test code during a live trading session led a trading bot to make $8.65 billion in unintended stock trades in just 28 minutes.
“We combine these two optical systems in a single camera by splitting the aperture: one half applies application-specific modulation using a diffractive optical element, and the other captures a conventional image. This co-design with a dual-pixel sensor allows simultaneous capture of coded and uncoded images — without increasing physical or computational footprint.”
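As a rough way to picture the idea, the toy simulation below models the two half-apertures as two point-spread functions applied to the same scene; the placeholder kernels and the treatment of the dual-pixel sensor as simply returning both sub-images from one exposure are simplifying assumptions, not the authors' actual optical model.

```python
# Toy split-aperture model: one half-aperture applies a "coded" PSF standing in
# for the diffractive optical element, the other a conventional blur. A single
# dual-pixel exposure is modelled as returning both sub-images at once.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
scene = rng.random((128, 128))  # stand-in for incoming scene radiance

def normalized(psf):
    return psf / psf.sum()

conventional_psf = normalized(np.ones((3, 3)))  # plain half-aperture blur
coded_psf = normalized(rng.random((9, 9)))      # placeholder DOE modulation

def dual_pixel_capture(scene):
    """One simulated exposure yields both the coded and the conventional image."""
    coded = fftconvolve(scene, coded_psf, mode="same")
    uncoded = fftconvolve(scene, conventional_psf, mode="same")
    return coded, uncoded

coded_img, uncoded_img = dual_pixel_capture(scene)
print(coded_img.shape, uncoded_img.shape)
```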
The EU Artificial Intelligence (AI) Act went into effect on August 1, 2024.
It implements a risk-based approach to AI regulation, categorizing AI systems according to the level of risk they pose. High-risk systems, such as those used in healthcare, transport, and law enforcement, face stringent requirements, including risk management, transparency, and human oversight.
Key provisions of the AI Act include:
Transparency and Safety Requirements: AI systems must be designed to be safe, transparent, and easily understandable to users. This includes labeling requirements for AI-generated content, such as deepfakes (Engadget).
Risk Management and Compliance: Companies must establish comprehensive governance frameworks to assess and manage the risks associated with their AI systems. This includes compliance programs that cover data privacy, ethical use, and geographical considerations (Faegre Drinker Biddle & Reath LLP) (Passle).
Copyright and Data Mining: Companies must adhere to copyright laws when training AI models, obtaining proper authorization from rights holders for text and data mining unless it is for research purposes (Engadget).
Prohibitions and Restrictions: AI systems that manipulate behavior, exploit vulnerabilities, or perform social scoring are prohibited. The act also sets out specific rules for high-risk AI applications and imposes fines for non-compliance (Passle).
For US tech firms, compliance with the EU AI Act is critical because of the EU's significant market size.
FLUX (or FLUX.1) is a suite of text-to-image models from Black Forest Labs, a new company set up by some of the AI researchers behind innovations and models like VQGAN, Stable Diffusion, Latent Diffusion, and Adversarial Diffusion Distillation.
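If you want to try the models yourself, a minimal sketch using the Hugging Face diffusers integration is below; the FluxPipeline class, the model ID, and the schnell-specific settings (very few steps, no guidance) are assumptions based on the public release and may differ from your setup.

```python
# Minimal FLUX.1 [schnell] text-to-image sketch via Hugging Face diffusers.
# Assumes the diffusers FluxPipeline integration and a CUDA-capable GPU.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dusk",
    num_inference_steps=4,   # the distilled schnell variant targets very few steps
    guidance_scale=0.0,      # schnell is typically run without classifier-free guidance
).images[0]
image.save("flux_schnell_output.png")
```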