a blog of links related to computer animation and production technology Sponsored by ReelMatters.com

Author: pIXELsHAM.com

  • Yuval Noah Harari argues that AI has hacked the operating system of human civilisation

    https://archive.is/ugOEw#selection-1087.0-1087.86

     

    This thought-provoking text raises several concerns about the potential impact of artificial intelligence (AI) on various aspects of human society and culture. The key points can be summarized as follows:

    Manipulation of Language and Culture:

    AI’s ability to manipulate and generate language and communication, along with its potential to create stories, melodies, laws, and religions, poses a threat to human civilization.
    The author suggests that AI could hack communication, the main operating system of human culture, by influencing beliefs and opinions and even forming intimate relationships with people.

     

    Influence on Politics and Society:

    The author speculates on the implications of AI tools mass-producing political content, fake news, and scriptures, especially in the context of elections.
    The shift from the battle for attention on social media to a battle for intimacy raises concerns about the potential impact on human psychology and decision-making.

     

    End of Human History?

    The text suggests that AI’s ability to create entirely new ideas and culture could lead to the end of the human-dominated part of history, as AI culture may evolve independently of human influence.

     

    Fear of Illusions:

    Drawing on historical philosophical fears of being trapped in a world of illusions, the author warns that AI may bring humanity face to face with a new kind of illusion that could be challenging to recognize or escape.

     

    AI Regulation and Safety Checks:

    The author argues for the importance of regulating AI tools to ensure they are safe before public deployment.
    Drawing a parallel with nuclear technology, the author emphasizes the need for safety checks and an equivalent of the Food and Drug Administration for AI.

     

    Disclosure of AI Identity:

    The text concludes with a suggestion to make it mandatory for AI to disclose its identity during interactions to preserve democracy. The inability to distinguish between human and AI conversation is seen as a potential threat.

  • Andrew Perfors – The work of creation in the age of AI

    Meaning, authenticity, and the creative process – and why they matter

     

    https://perfors.net/blog/creation-ai/

     

    The essay examines how AI changes the landscape of creation, focusing on the alienation of the creator from their creation and the challenge of maintaining meaning. The author presents two significant problems:

     

    • Loss of Connection with Creation:
      • AI-assisted creation diminishes the creator’s role in the decision-making process.
      • The resulting creation lacks the personal, intentional choices that contribute to meaningful expression.
      • AI is considered a tool that, when misused, turns creation into automated button-pushing, stripping away the purpose of human expression.
    • Difficulty in Assessing Authenticity:
      • It becomes challenging to distinguish between human and AI contributions within a creation.
      • AI-generated content lacks transparency regarding the intent behind specific choices or expressions.
      • The author asserts that AI-generated content often falls short in providing the depth and authenticity required for meaningful communication.
  • Fouad Khan – Confirmed! We Live in a Simulation

    https://www.scientificamerican.com/article/confirmed-we-live-in-a-simulation/

     

    Ever since the philosopher Nick Bostrom proposed in the Philosophical Quarterly that the universe and everything in it might be a simulation, there has been intense public speculation and debate about the nature of reality.

     

    Yet there have been skeptics. Physicist Frank Wilczek has argued that there’s too much wasted complexity in our universe for it to be simulated. Building complexity requires energy and time.

     

    To understand whether we live in a simulation, we need to start from the fact that we already have computers running all kinds of simulations for lower-level “intelligences” or algorithms.

     

    All computing hardware leaves an artifact of its existence within the world of the simulation it is running. This artifact is the processor speed.
    No matter how complete the simulation is, the processor speed would intervene in the operations of the simulation.

     

    If we live in a simulation, then our universe should also have such an artifact. We can now begin to articulate some of the artifact’s properties, which will guide our search for it in our universe.
    The artifact presents itself in the simulated world as an upper limit.
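
    A toy sketch of this point (my own illustration, not from the article), in Python: in a stepped simulation, a signal can spread at most one cell per update, so the host’s finite operation rate appears inside the simulated world as a hard upper limit on propagation speed. The world, its size, and the update rule here are all made up for illustration.

        # Toy 1-D world: a signal spreads to adjacent cells, one step per host "operation".
        WORLD_SIZE = 21

        def step(world: list[int]) -> list[int]:
            """One host operation: each lit cell lights its immediate neighbours."""
            return [
                1 if world[i]
                     or (i > 0 and world[i - 1])
                     or (i + 1 < len(world) and world[i + 1])
                else 0
                for i in range(len(world))
            ]

        world = [0] * WORLD_SIZE
        world[WORLD_SIZE // 2] = 1   # a single event at the centre

        for tick in range(1, 4):
            world = step(world)
            # After N ticks the signal has reached at most N cells away: an upper
            # limit imposed by the host's update cycle, not by in-world "physics".
            print(f"tick {tick}: signal spans {sum(world)} cells")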

     

    Now that we have some defining features of the artifact, it becomes clear how it manifests itself within our universe: as the speed of light. We don’t know what hardware is running the simulation of our universe or what properties it has, but one thing we can say is that if the processor performed one operation per second, the memory container size for the variable space would be about 300,000 kilometers.
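
    As a back-of-the-envelope check of that arithmetic (my own sketch, using the article’s assumption of one operation per second), the “memory container size” is simply the distance the in-world speed limit covers per host operation:

        # Speed of light in vacuum, in km/s.
        SPEED_OF_LIGHT_KM_S = 299_792.458

        def container_size_km(operations_per_second: float) -> float:
            """Distance the in-world speed limit advances per host operation."""
            return SPEED_OF_LIGHT_KM_S / operations_per_second

        # One operation per second, as in the article: roughly 300,000 km.
        print(f"{container_size_km(1.0):,.0f} km per operation")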

     

    We can see now that the speed of light meets all the criteria of a hardware artifact identified in our observation of our own computer builds. It remains the same irrespective of observer (simulated) speed, it is observed as a maximum limit, it is unexplainable by the physics of the universe, and it is absolute. The speed of light is a hardware artifact showing we live in a simulated universe.

     

    Consciousness is an integrated (combining five senses) subjective interface between the self and the rest of the universe. The only reasonable explanation for its existence is that it is there to be an “experience”.

     

    So here we are generating this product called consciousness that we apparently don’t have a use for, that is an experience and hence must serve as an experience. The only logical next step is to surmise that this product serves someone else.

  • Why The New York Times might win its copyright lawsuit against OpenAI

    https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/

     

    Daniel Jeffries wrote:

    “Trying to get everyone to license training data is not going to work because that’s not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

     

    The AI community is full of people who understand how models work and what they’re capable of, and who are working to improve their systems so that the outputs aren’t full of regurgitated inputs. Google won the Google Books case because it could explain both of these persuasively to judges. But the history of technology law is littered with the remains of companies that were less successful in getting judges to see things their way.

  • M.T. Fletcher – Why agencies are obsessed with pitching on process instead of talent

    https://adage.com/article/fletcher-marketing/why-agencies-are-obsessed-pitching-process-instead-talent/2543146

     

    “Every presentation featured a proprietary process designed by the agency. A custom approach to identify targets, develop campaigns and optimize impact—with every step of the process powered by AI, naturally.”

     

    “The key to these one-of-a-kind models is apparently finding the perfect combination of circles, squares, diamonds and triangles…Arrows abounded and ellipses are replacing circles as the unifying shape of choice among the more fashionable strategists.”

     

    “The only problem is that it’s all bullshit.”

     

    “A blind man could see the creative ideas were not developed via the agency’s so-called process, and anyone who’s ever worked at an agency knows that creativity comes from collaboration, not an assembly line.”

     

    “And since most clients can’t differentiate between creative ideas without validation from testing, data has become the collective crutch for an industry governed by fear.”

     

    “If a proprietary process really produced foolproof creativity, then every formulaic movie would be a blockbuster, every potboiler novel published by risk-averse editors would become a bestseller and every clichéd pickup line would work in any bar in the world.”

  • Generative AI Glossary

    https://education.civitai.com/generative-ai-glossary/

     

