OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • rosenjcb@lemmy.world · 1 year ago

    The powers that be have done a great job convincing the layperson that copyright is about protecting artists rather than publishers. That’s historically inaccurate: copyright law was pushed by publishers, who did not want authors keeping secondhand manuscripts of works they had sold to publishing companies.

    Additional reading: https://en.m.wikipedia.org/wiki/Statute_of_Anne

  • Technoguyfication@lemmy.ml · 1 year ago

    People are acting like ChatGPT is storing the entire Harry Potter series in its neural net somewhere. It’s not storing or reproducing text in a 1:1 manner from the original material. Certain material, like very popular books, has likely been ingested tens of thousands of times, given how often it was reposted online (and therefore how many times it appeared in the training data).

    Just because it can recite certain passages almost perfectly doesn’t mean it’s redistributing copyrighted books. How many quotes do you know perfectly from books you’ve read before? I would guess quite a few. LLMs are doing the same thing, but on mega steroids with a nearly limitless capacity for information retention.

    • Teritz@feddit.de · 1 year ago

      Using copyrighted work, such as art, still influences the AI that they make profit from.

      If they use my works, then they need to pay. That’s it.

      • coheedcollapse@lemmy.world · 1 year ago

        Still kinda blows my mind how the most socialist people I know (fellow artists) turned super capitalist the second a tool showed so much as an inkling of potential to impact their bottom line.

        Personally, I’m happy to have my work scraped and permuted by systems that are open to the public. My biggest enemy isn’t the existence of software scraping an open internet; it’s the huge companies who see it as a way to cut us out of the picture.

        If we go all copyright crazy on the models for looking at stuff we’ve already posted openly on the internet, the only companies with access to the tools will be those who already control huge amounts of data.

        I mean, for real, it’s just mind-blowing seeing the entire artistic community pretty much go full-blown “Metallica with the RIAA” after decades of making the “you wouldn’t download a car” joke.

        • angstylittlecatboy@reddthat.com · 1 year ago

          I feel like a lot of internet people (not even just socialists) go from seeing copyright as at best a compromise that allows the arts to have value under capitalism to treating it like a holy doctrine when the subject of LLMs comes up.

          Like, people who will say “piracy is always okay” will also say “ban AI, period” (and misrepresent organizations that want regulations on its use as wanting a full ban).

          Like, growing up with an internet full of technically illegal content (or grey area at best) like fangames and YouTube Poops made me a lifelong copyright skeptic. It’s outright confusing to me when people take copyright as seriously as this.

        • Sir_Kevin@lemmy.dbzer0.com · 1 year ago

          Fuckin preach! I feel like I’m surrounded by children that didn’t live through the many other technologies that have come along and changed things. People lost their shit when Photoshop became mainstream, when music started using samples, etc. AI is here to stay. These same people are probably listening to autotuned music all day while they complain on the internet about AI looking at their art.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

    Training AI on copyrighted material is no more illegal or unethical than training human beings on copyrighted material (from library books or borrowed books, no less!). And trying to challenge the legitimacy of generative AI systems on the notion that they were trained on copyrighted material only raises the specter that IP law has lost its validity as a public good.

    The only valid concern about generative AI is that it could displace human workers (or swap out skilled jobs for menial ones) which is a problem because our society recognizes the value of human beings only in their capacity to provide a compensation-worthy service to people with money.

    The problem is that this is a shitty, unethical way to determine who gets to survive and who doesn’t. All the current controversy about generative AI does is kick that can down the road a bit. But we’re soon going to have to face the fact that our monied elites will be glad to dispose of the rest of us as soon as they can.

    Also, amateur creators are as good as professionals, given the same resources. Maybe we should look at creating content by means other than for-profit companies.

  • RadialMonster@lemmy.world · 1 year ago

    What if they scraped a whole lot of the internet, and those excerpts were in random blogs and posts and quotes and memes all over the place? They didn’t ingest the material directly, or knowingly.

  • Blapoo@lemmy.ml · 1 year ago

    We have to distinguish between LLMs

    • Trained on copyrighted material and
    • Outputting copyrighted material

    They are not one and the same.

    • TwilightVulpine@lemmy.world · 1 year ago

      Should we distinguish them, though? Why shouldn’t (and didn’t) artists have a say in whether their art is used to train LLMs? Just as publicly displaying art doesn’t grant permission to copy it and use it for other unspecified purposes, it would be reasonable for the same to apply to AI training.

      • Blapoo@lemmy.ml · 1 year ago

        Ah, but that’s the thing. Training isn’t copying. It’s pattern recognition. If you train a model “The dog says woof” and then ask a model “What does the dog say”, it’s not guaranteed to say “woof”.

        Similarly, just because a model was trained on Harry Potter, all that means is it has a good corpus of how the sentences in that book go.

        Thus the distinction. Can I train on a comment section discussing the book?
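        The “training isn’t copying” point can be sketched with a toy example (purely illustrative; real LLMs are vastly more complex, and the phrases here are made up): the model below stores next-word statistics learned from its training text rather than the text itself, and because generation samples from those statistics, the same question can yield different answers.

```python
import random

# Toy next-word model (nothing like a real LLM): it keeps only
# counts of which word follows which, not the original sentences.
counts = {}

def train(text):
    tokens = text.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1

# Hypothetical training data containing competing continuations.
train("the dog says woof")
train("the dog says hello to the cat")

def sample_next(token, rng):
    # Generation is probabilistic: weighted sampling over learned counts.
    options = counts.get(token, {})
    words = list(options)
    if not words:
        return None
    return rng.choices(words, weights=[options[w] for w in words], k=1)[0]

rng = random.Random()
# "What does the dog say?": "woof" is likely, but not guaranteed.
answers = [sample_next("says", rng) for _ in range(5)]
print(answers)
```

        Sampling can return either “woof” or “hello”; the training sentences themselves are never stored or replayed verbatim.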

    • Tetsuo@jlai.lu · 1 year ago

      Output from an AI has just been recently considered as not copyrightable.

      I think it stemmed from the recent actors’ strikes.

      It was stated that only work originating from a human can be copyrighted.

      • Anders429@lemmy.world · 1 year ago

        Output from an AI has just been recently considered as not copyrightable.

        Where can I read more about this? I’ve seen it mentioned a few times, but never with any links.

        • Even_Adder@lemmy.dbzer0.com · 1 year ago

          They clearly only read the headline. If they’re talking about the ruling that came out this week, that whole thing was about trying to give an AI authorship of a work generated solely by a machine, and having the copyright go to the owner of the machine through the work-for-hire doctrine. So an AI itself can’t be an author or hold a copyright, but humans using them can still be copyright holders of any qualifying works.

  • TropicalDingdong@lemmy.world · 1 year ago

    It’s a bit pedantic, but I’m not really sure I support this kind of extremist view of copyright, or the scale of what’s being interpreted as ‘possessed’ under the idea of copyright. Once an idea is communicated, it becomes part of the collective consciousness. Different people interpret and build upon that idea in various ways, making it a dynamic entity that evolves beyond the original creator’s intention. It’s like the issues with sampling beats or records in the early days of hip-hop. The very principle of an idea goes against this vision: once you put something out into the commons, it’s irretrievable. It’s not really yours anymore once it’s been communicated. If you want to keep an idea truly yours, then you should keep it to yourself. Otherwise you are participating in a shared vision of the idea. You don’t control how the idea is interpreted, so it’s not really yours anymore.

    Whether that’s ChatGPT or Public Enemy is neither here nor there to me. The idea that a work like Peter Pan is still ‘possessed’ is a very real but very silly malady of this weirdly accepted yet extreme view of the ability to possess an idea.

    • Laticauda@lemmy.ca · 1 year ago

      AI isn’t interpreting anything. This isn’t the sci-fi style of AI that people think of; that’s general AI. This is narrow AI, which is really just an advanced algorithm. It can’t create new things with intent and design; it can only regurgitate a mix of pre-existing stuff based on narrow guidelines programmed into it to try to keep it coherent, with no actual thought or interpretation involved in the result. The issue isn’t that it’s derivative; the issue is that it can only ever be inherently derivative, without any intentional interpretation or creativity.

      Even collage art has to qualify as fair use to avoid copyright infringement if it’s being done for profit, and fair use requires it to provide commentary, criticism, or parody of the original work used (which requires intent). Even if it’s transformative enough to make the original unrecognizable, if the majority of the work is not your own art, then you need permission to use it; otherwise you aren’t automatically safe from getting in trouble over copyright. Even using images in Photoshop involves Creative Commons and commercial-use licenses.

      Fanart and fanfic are also considered a grey area, and the only reason more of a stink isn’t kicked up over them regarding copyright is that they’re generally beneficial to the original creators, and credit is naturally provided by the nature of fan works, so long as no one tries to claim the characters or IP as their own. So most creators turn a blind eye to the copyright aspect of the genre, but any of them could kick up a stink if they wanted to, and some have in the past, like Anne Rice. As a result, most fanfiction sites do not allow writers to profit off of fanfics or advertise fanfic commissions.

      And those are cases with actual humans producing the works based on something that inspired them or that they are interpreting. So even human-made derivative works have rules and laws applied to them. AI isn’t a creative force with thoughts and ideas and intent; it’s just a pattern recognition and replication tool, and it doesn’t benefit creators when it’s used to replace them entirely, as Hollywood is attempting to do (among other corporate entities). Viewing AI at least as critically as actual human beings is the very least we can do, as well as establishing protections for human creators so that they can’t be taken advantage of because of AI.

      I’m not inherently against AI as a concept and as a tool for creators to use, but I am against AI works with no human input being used to replace creators entirely, and I am against using works to train it without the permission of the original creators. Even in the artist/writer/etc communities it’s considered to be a common courtesy to credit other people/works that you based a work on or took inspiration from, even if what you made would be safe under copyright law regardless. Sure, humans get some leeway in this because we are imperfect meat creatures with imperfect memories and may not be aware of all our influences, but a coded algorithm doesn’t have that excuse.

      If the current AIs in circulation can’t function without being fed stolen works without credit or permission, then they’re simply not ready for commercial use yet as far as I’m concerned. If it’s never going to be possible, which I just simply don’t believe, then it should never be used commercially period. And it should be used by creators to assist in their work, not used to replace them entirely. If it takes longer to develop, fine. If it takes more effort and manpower, fine. That’s the price I’m willing to pay for it to be ethical. If it can’t be done ethically, then imo it shouldn’t be done at all.

      • Kogasa@programming.dev · 1 year ago

        Your broader point would be stronger if it weren’t framed around what seems like a misunderstanding of modern AI. To be clear, you don’t need to believe that AI is “just” a “coded algorithm” to believe it’s wrong for humans to exploit other humans with it. But to say that modern AI is “just an advanced algorithm” is technically correct in exactly the same way that a blender is “just a deterministic shuffling algorithm.” We understand that the blender chops up food by spinning a blade, and we understand that it turns solid food into liquid. The precise way in which it rearranges the matter of the food is both incomprehensible and irrelevant. In the same way, we understand the basic algorithms of model training and evaluation, and we understand the basic domain task that a model performs. The “rules” governing this behavior at a fine level are incomprehensible and irrelevant, and certainly not dictated by humans. They are an emergent property of a simple algorithm applied to billions to trillions of numerical parameters, in which all the interesting behavior is encoded in some incomprehensible way.
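        A hypothetical minimal example of that “simple algorithm, emergent behavior” framing (plain gradient descent on a two-parameter linear model, nothing like production training): the loop is a few auditable lines, yet the rule it ends up implementing lives only in the learned numbers.

```python
# Minimal sketch: plain gradient descent on squared error.
# The *algorithm* is a few understandable lines; the *behavior*
# (here, the rule y = 2x + 1) ends up encoded in learned numbers
# that appear nowhere in the code itself.
def train_linear(data, lr=0.1, steps=500):
    w, b = 0.0, 0.0  # the model's entire "knowledge"
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y  # prediction error
            w -= lr * err * x      # nudge parameters downhill
            b -= lr * err
    return w, b

# Examples of y = 2x + 1; the rule itself is never written down.
w, b = train_linear([(0, 1), (1, 3), (2, 5)])
print(w, b)  # both converge near 2 and 1
```

        The code never contains the rule y = 2x + 1; it emerges in w and b. Scale the same idea to billions of parameters and the learned encoding becomes the incomprehensible part.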

  • fubo@lemmy.world · 1 year ago

    If I memorize the text of Harry Potter, my brain does not thereby become a copyright infringement.

    A copyright infringement only occurs if I then reproduce that text, e.g. by writing it down or reciting it in a public performance.

    Training an LLM from a corpus that includes a piece of copyrighted material does not necessarily produce a work that is legally a derivative work of that copyrighted material. The copyright status of that LLM’s “brain” has not yet been adjudicated by any court anywhere.

    If the developers have taken steps to ensure that the LLM cannot recite copyrighted material, that should count in their favor, not against them. Calling it “hiding” is backwards.

    • Gyoza Power@discuss.tchncs.de · 1 year ago

      Let’s not pretend that LLMs are like people where you’d read a bunch of books and draw inspiration from them. An LLM does not think nor does it have an actual creative process like we do. It should still be a breach of copyright.

      • efstajas@lemmy.world · 1 year ago

        … you’re getting into philosophical territory here. The plain fact is that LLMs generate cohesive text that is original and doesn’t occur in their training sets, and it’s very hard if not impossible to get them to quote back copyrighted source material to you verbatim. Whether you want to call that “creativity” or not is up to you, but it certainly seems to disqualify the notion that LLMs commit copyright infringement.

    • Eccitaze@yiffit.net · 1 year ago

      If Google took samples from millions of different songs that were under copyright and created a website that allowed users to mix them together into new songs, they would be sued into oblivion before you could say “unauthorized reproduction.”

      You simply cannot compare one single person memorizing a book to corporations feeding literally millions of pieces of copyrighted material into a blender and acting like the resulting sausage is fine because “only a few rats fell into the vat, what’s the big deal”

          • player2@lemmy.dbzer0.com · 1 year ago

            The analogy talks about mixing samples of music together to make new music, but that’s not what is happening in real life.

            The computers learn human language from the source material, but they are not referencing the source material when creating responses. They create new, original responses which do not appear in any of the source material.

  • Jat620DH27@lemmy.world · 1 year ago

    I thought everyone knew that OpenAI has the same access to books and knowledge that human beings have.

    • Redditiscancer789@lemmy.world · 1 year ago

      Yes, but what it’s doing with it is the murky grey area. Anyone can read a book, but you can’t use those books for your own commercial stuff. Rowling and other writers are making the case that their works are being used in an inappropriate way commercially. Whether they have a case, I dunno (IANAL), but I can see the argument at least.

      • Touching_Grass@lemmy.world · 1 year ago

        Harry Potter uses so many tropes and inspirations from works that came before. How is that different? Wizards of the Coast should sue her into the ground.

  • scarabic@lemmy.world · 1 year ago

    One of the first things I ever did with ChatGPT was ask it to write some Harry Potter fan fiction. It wrote a short story about Ron and Harry getting into trouble. I never said the name McGonagall, and yet she appeared in the story.

    So yeah, case closed. They are full of shit.

    • PraiseTheSoup@lemm.ee · 1 year ago

      There is enough non-copyrighted Harry Potter fan fiction out there that it would not need to be trained on the actual books to know all the characters. While I agree they are full of shit, your anecdote proves nothing.

      • Cosmic Cleric@lemmy.world · 1 year ago

        While I agree they are full of shit, your anecdote proves nothing.

        Why? Because you say so?

        He brings up a valid point, it seems transformative.

        • LittleLordLimerick@lemm.ee · 1 year ago

          The anecdote proves nothing because the model could have known of the McGonagall character without ever being trained on the books, since that character appears in a lot of fan fiction. So their point is invalid and their anecdote proves nothing.

  • paraphrand@lemmy.world · 1 year ago

    Why are people defending a massive corporation that admits it is attempting to create something that will give them unparalleled power if they are successful?

    • bamboo@lemm.ee · 1 year ago

      Mostly because fuck corporations trying to milk their copyright. I have no particular love for OpenAI (though I do like their product), but I do have great disdain for already-successful corporations that would hold back the progress of humanity because they didn’t get paid (again).

  • Thorny_Thicket@sopuli.xyz · 1 year ago

    I don’t get why this is an issue. Assuming they purchased a legal copy of what it was trained on, what’s the problem? Like, really. What does it matter that it knows a certain book from cover to cover or is able to imitate art styles, etc.? That’s exactly what people do too. We’re just not quite as good at it.

    • Hildegarde@lemmy.world · 1 year ago

      A copyright holder has the right to control who has the right to create derivative works based on their copyright. If you want to take someone’s copyright and use it to create something else, you need permission from the copyright holder.

      The one major exception is Fair Use. It is unlikely that AI training is a fair use. However this point has not been adjudicated in a court as far as I am aware.

      • LordShrek@lemmy.world · 1 year ago

        This is so fucking stupid, though. Almost everyone reads books and/or watches movies, and their speech is developed from that. The way we speak is modeled after characters and dialogue in books; the way we think often comes from books. Do we track down what percentage of each sentence comes from which book every time we think or talk?

      • FatCat@lemmy.world · 1 year ago

        It is not derivative; it is transformative work. Just like human artists “synthesise” the art they see around them and make new art, so do LLMs.

  • Tetsuo@jlai.lu · 1 year ago

    If I’m not mistaken, AI work was just recently ruled NOT copyrightable.

    So I find it interesting that an AI learning from copyrighted work is an issue, even though what it generates will NOT be copyrightable.

    So even if you generated some copy of Harry Potter, you would not be able to copyright it. So in no way could you really compete with the original art.

    I’m not saying that makes it okay to train AIs on copyrighted art, but I think it’s still an interesting aspect of this topic.

    As others have probably stated, the AI may be creating content that is transformative and therefore fair use. But even if that work is transformative, it cannot be copyrighted because it wasn’t created by a human.

    • habanhero@lemmy.ca · 1 year ago

      How do you tell if a piece of work contains AI generated content or not?

      It’s not hard to generate a piece of AI content, put in some hours to smooth out the AI’s signatures and common mistakes, and pass it off as your own. So in practice it’s still easy to benefit from AI systems by passing off generated content as largely your own.

    • Even_Adder@lemmy.dbzer0.com · 1 year ago

      If you’re talking about the ruling that came out this week, that whole thing was about trying to give an AI authorship of a work generated solely by a machine, and having the copyright go to the owner of the machine through the work-for-hire doctrine. So an AI itself can’t be an author or hold a copyright, but humans using them can still be copyright holders of any qualifying works.