OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series

A new research paper laid out ways in which AI developers should try to avoid revealing that their LLMs have been trained on copyrighted material.

  • BURN@lemmy.world · 10 months ago

    But they don’t purchase the data. That’s the whole problem.

    And copyright is absolutely violated by training on it. The material is being used to make money, which no longer falls under even the widest interpretation of fair use.

      • BURN@lemmy.world · 10 months ago

        It may be freely available for non-commercial use, e.g. photos on Photobucket, the Internet Archive’s free book collections, etc.

        Almost everything is on the internet these days, copyrighted or not. I’m sure if I googled enough I could find the entire text of Harry Potter for free. That still wouldn’t mean I’d purchased it, and it wouldn’t be legally available for free. But in training these models I guarantee they didn’t care where the data came from, just that it was data.

        I’m against piracy as well for the record, but pretty much everything is available through torrenting and pirate sites at this point, copyright be damned.

        • Touching_Grass@lemmy.world · 10 months ago

          Don’t care; it’s not my problem or these LLMs’ problem that they don’t secure their copyrighted material. They shouldn’t come asking others to pay for their own failure to secure their data. I see it as a double-edged sword.

          I really hope this is a wake-up call to all creative types to pack up and stop using the internet like a street corner to busk on.

          If they want to come online to contribute like everybody else, just have fun and post stuff, that’s great. But all of them are no different than any other greedy corporation. They all want more toll roads. When they do make it, earn millions, and get our attention, they exploit it with more ads. That swallows all the good free content. Sites gear themselves toward these rich creators. They lawyer up and sue everybody and everything that looks or sounds like them. We lose all our good spaces to them.

          I hope LLMs finally let regular people shitpost in peace.

          • adrian783@lemmy.world · 10 months ago

            Creative types are greedy for wanting compensation for their creations? Is a car mechanic greedy for wanting money for fixing your car?

    • GroggyGuava@lemmy.world · 10 months ago

      You need to expand on how learning from something in order to make money is somehow using the original material to make money. Considering that’s how art works in general, I’m having a hard time taking the side of "learning from media to make your own is against copyright". As long as they don’t reproduce the same thing as the original, I don’t see any issue with it. If they learned from The Lord of the Rings and then made "The Lord of the Rings", then yes, that would be infringement. But if they use that data to make a new IP with original ideas, then how is that bad for the world or for artists?

      • BURN@lemmy.world · 10 months ago

        Creating an AI model is a commercial endeavor. These models are made to make money, and they depend on other artists’ data to train on. The models would be useless if they weren’t able to train on anything.

        I hold the stance that using copyrighted data as part of a training set is a violation of copyright. That still hasn’t been fully challenged in court, so there’s no specific legal definition yet.

        Because copyrighted materials are required to make the model function, I feel that they are using copyrighted works to build a commercial product.

        Also, AI doesn’t learn. LLMs build statistical models based on the sentence structure of what they’ve seen before. There’s no understanding or inherent knowledge, and nothing new is being added.
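
        As a rough illustration of the "statistical model" idea, here is a toy bigram counter in Python. It is a deliberately simplified sketch with a made-up corpus, not how GPT-style transformers actually work; it only shows prediction from counted word patterns, with no understanding involved.

```python
# Toy sketch of the "statistical model of sentence structure" idea: a bigram
# model that predicts the next word purely from how often word pairs appeared
# in its (made-up) training text. This is not how GPT-style models work
# internally; it only illustrates prediction from counted patterns.
from collections import Counter, defaultdict

corpus = "the boy who lived . the boy waved his wand . the wand chose the boy".split()

# Count how often each word is followed by each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict_next("the"))  # -> "boy", because "the boy" appears most often above
```

        predict_next("the") just returns whichever word most often followed "the" in the toy corpus; everything it can ever output was already present in the training text.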