OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • TropicalDingdong@lemmy.world · +176/-31 · 10 months ago

    It’s a bit pedantic, but I’m not really sure I support this kind of extremist view of copyright and the scale of what’s being interpreted as ‘possessed’ under the idea of copyright. Once an idea is communicated, it becomes a part of the collective consciousness. Different people interpret and build upon that idea in various ways, making it a dynamic entity that evolves beyond the original creator’s intention. It’s like the issues with sampling beats or records in the early days of hip-hop. The very principle of an idea goes against this vision; more than that, once you put something out into the commons, it’s irretrievable. It’s not really yours anymore once it’s been communicated. I think if you want to keep an idea truly yours, then you should keep it to yourself. Otherwise you are participating in a shared vision of the idea. You don’t control how the idea is interpreted, so it’s not really yours anymore.

    Whether that’s ChatGPT or Public Enemy is neither here nor there to me. The idea that a work like Peter Pan is still ‘possessed’ is a very real but very silly malady of this weirdly accepted yet extreme view of the ability to possess an idea.

    • Laticauda@lemmy.ca · +52/-18 · edited · 10 months ago

      AI isn’t interpreting anything. This isn’t the sci-fi style of AI that people think of; that’s general AI. This is narrow AI, which is really just an advanced algorithm. It can’t create new things with intent and design; it can only regurgitate a mix of pre-existing stuff based on narrow guidelines programmed into it to try to keep it coherent, with no actual thought or interpretation involved in the result. The issue isn’t that it’s derivative; the issue is that it can only ever be inherently derivative, without any intentional interpretation or creativity, and nothing else.

      Even collage art has to qualify as fair use to avoid copyright infringement if it’s being done for profit, and fair use requires it to provide commentary, criticism, or parody of the original work used (which requires intent). Even if it’s transformative enough to make the original unrecognizable, if the majority of the work is not your own art, then you need to get permission to use it; otherwise you aren’t automatically safe from getting in trouble over copyright. Even using images in Photoshop involves Creative Commons and commercial-use licenses.

      Fanart and fanfic are also considered a grey area, and the only reason more of a stink isn’t kicked up over them regarding copyright is that they’re generally beneficial to the original creators, and credit is naturally provided by the nature of fan works, so long as someone doesn’t try to claim the characters or IP as their own. So most creators turn a blind eye to the copyright aspect of the genre, but if any ever did want to kick up a stink, they could, and some have in the past, like Anne Rice. As a result, most fanfiction sites do not allow writers to profit off of fanfics or advertise fanfic commissions. And those are cases with actual humans being the ones to produce the works based on something that inspired them or that they are interpreting. So even human-made derivative works have rules and laws applied to them.

      AI isn’t a creative force with thoughts and ideas and intent; it’s just a pattern recognition and replication tool, and it doesn’t benefit creators when it’s used to replace them entirely, like Hollywood is attempting to do (among other corporate entities). Viewing AI at least as critically as actual human beings is the very least we can do, as well as establishing protections for human creators so that they can’t be taken advantage of because of AI.

      I’m not inherently against AI as a concept and as a tool for creators to use, but I am against AI works with no human input being used to replace creators entirely, and I am against using works to train it without the permission of the original creators. Even in the artist/writer/etc communities it’s considered to be a common courtesy to credit other people/works that you based a work on or took inspiration from, even if what you made would be safe under copyright law regardless. Sure, humans get some leeway in this because we are imperfect meat creatures with imperfect memories and may not be aware of all our influences, but a coded algorithm doesn’t have that excuse. If the current AIs in circulation can’t function without being fed stolen works without credit or permission, then they’re simply not ready for commercial use yet as far as I’m concerned. If it’s never going to be possible, which I just simply don’t believe, then it should never be used commercially period. And it should be used by creators to assist in their work, not used to replace them entirely. If it takes longer to develop, fine. If it takes more effort and manpower, fine. That’s the price I’m willing to pay for it to be ethical. If it can’t be done ethically, then imo it shouldn’t be done at all.

      • Kogasa@programming.dev · +11/-8 · 10 months ago

        Your broader point would be stronger if it weren’t framed around what seems like a misunderstanding of modern AI. To be clear, you don’t need to believe that AI is “just” a “coded algorithm” to believe it’s wrong for humans to exploit other humans with it. But to say that modern AI is “just an advanced algorithm” is technically correct in exactly the same way that a blender is “just a deterministic shuffling algorithm.” We understand that the blender chops up food by spinning a blade, and we understand that it turns solid food into liquid. The precise way in which it rearranges the matter of the food is both incomprehensible and irrelevant. In the same way, we understand the basic algorithms of model training and evaluation, and we understand the basic domain task that a model performs. The “rules” governing this behavior at a fine level are incomprehensible and irrelevant, and certainly not dictated by humans. They are an emergent property of a simple algorithm applied to billions to trillions of numerical parameters, in which all the interesting behavior is encoded in some incomprehensible way.
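        To make “simple algorithm, emergent behavior” concrete, here is a toy sketch (purely illustrative Python; no real framework, and the function and variable names are made up) of the kind of training loop being described. The loop itself is trivial; the “knowledge” ends up encoded only in the learned parameter values.

```python
import random

random.seed(0)  # reproducible toy run

def train(params, data, lr=0.01, steps=1000):
    """Stochastic gradient descent on a linear model: the whole 'algorithm'."""
    for _ in range(steps):
        x, target = random.choice(data)
        pred = sum(p * xi for p, xi in zip(params, x))   # model output
        err = pred - target
        # Nudge each parameter slightly to reduce the error.
        params = [p - lr * err * xi for p, xi in zip(params, x)]
    return params

# Learn y = 2a + 3b purely from examples; the rule is never written down,
# it emerges in the parameter values.
data = [((a, b), 2 * a + 3 * b) for a in range(5) for b in range(5)]
learned = train([0.0, 0.0], data)
```

        The same loop, scaled from two parameters to billions, is very roughly what model training does: the loop stays comprehensible while the learned parameters do not.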

        • Laticauda@lemmy.ca · +4/-20 · edited · 10 months ago

          Bro, I don’t think you have any idea what you’re talking about. These AIs aren’t blenders; they are designed to recognize and replicate specific aspects of art and writing and whatever else, in a way that is coherent and recognizable. Unless there’s a blender that can sculpt Michelangelo’s David out of apple peels, AI isn’t like a blender in any way.

          But even if they were comparable, a blender is meant to produce chaos. It is meant to, you know, blend the food we put into it. So yes, the outcome is dictated by humans. We want the individual pieces to be indistinguishable, and deliberate design decisions get made by the humans building them to try to produce a blender that blends things sufficiently, and makes the right amount of chaos with as many ingredients as possible.

          And here’s the thing, if we wanted to determine what foods were put into a blender, even assuming we had blindfolds on while tossing random shit in, we could test the resulting mixture to determine what the ingredients were before they got mashed together. We also use blenders for our own personal use the majority of the time, not for profit, and we use our own fruits and vegetables rather than stuff we stole from a neighbor’s yard, which would be, you know, trespassing and theft. And even people who use blenders to make something that they sell or offer publicly almost always list the ingredients, like restaurants.

          So even if AI was like a blender, that wouldn’t be an excuse, nor would it contradict anything I’ve said.

      • primbin@lemmy.one · +8/-7 · 10 months ago

        I disagree with your interpretation of how an AI works, but I think the way that AI works is pretty much irrelevant to the discussion in the first place. I think your argument stands completely the same regardless. Even if AI worked much like a human mind and was very intelligent and creative, I would still say that usage of an idea by AI without the consent of the original artist is fundamentally exploitative.

        You can easily train an AI (with next to no human labor) to launder an artist’s works, by using the artist’s own works as reference. There’s no human input or hard work involved, which is a factor in what dictates whether a work is transformative. I’d argue that if you can put a work into a machine, type in a prompt, and get a new work out, then you still haven’t really transformed it. No matter how creative or novel the work is, the reality is that no human really put any effort into it, and it was built off the backs of unpaid and uncredited artists.

        You could probably make an argument for being able to sell works made by an AI trained only on the public domain, but it still should not be copyrightable IMO, cause it’s not a human creation.

        TL;DR - No matter how creative an AI is, its works should not be considered transformative in a copyright sense, as no human did the transformation.

      • Immersive_Matthew@sh.itjust.works · +7/-7 · 10 months ago

        I thought this way too, but after playing with ChatGPT and Midjourney near daily, I have seen many moments of creativity way beyond the source it was trained on. A good example I saw was in a YouTube video (sorry, I cannot recall which one to link) where the prompt was animals made of sushi, and wow, was it ever good and creative in how it made them, and it was photorealistic. This is just not something you can find anywhere on the Internet. I just did a search and found some hand-drawn Japanese-style sushi with eyes and such, but nothing like what I saw in that video.

        I have also experienced it suggesting ways to handle coding in my VR theme park app that are very unconventional and not something anyone has posted about, as near as I can tell. It seems to be able to put 2 and 2 together and get 8. Likely, since it sees so much of everything at once, it can connect the dots in ways we would struggle to. It is more than regurgitated data, and it surprises me near daily.

        • Laticauda@lemmy.ca · +4/-3 · 10 months ago

          Just because it seems creative to you due to your lack of experience with human creativity, that doesn’t mean it is uniquely creative. It’s not; it can’t be by its very nature. It can only regurgitate an amalgamation of stuff fed into it. What you think you see is the equivalent of pareidolia.

      • Even_Adder@lemmy.dbzer0.com · +5/-8 · 10 months ago

        if it’s being done for profit, and fair use requires it to provide commentary, criticism, or parody of the original work used. Even if it’s transformative enough to make the original unrecognizable

        I’m going to need a source for that. Fair use is flexible and context-specific. It depends on the situation and four factors: why, what, how much, and how it affects the market for the original work. No one factor is more important than the others, and it is possible to have a fair use defense even if you do not meet all the criteria of fair use.

        • Laticauda@lemmy.ca · +16/-5 · 10 months ago

          I’m a bit confused about what point you’re trying to make. There is not a single paragraph or example in the link you provided that doesn’t support what I’ve said, and none of the examples provided in that link are something that qualified as fair use despite not meeting any criteria. In fact one was the opposite, as something that met all the criteria but still didn’t qualify as fair use.

          The key aspect of how they define transformative is here:

          Has the material you have taken from the original work been transformed by adding new expression or meaning?

          These (narrow) AIs cannot add new expression or meaning, because they do not have intent. They are just replicating and rearranging learned patterns mindlessly.

          Was value added to the original by creating new information, new aesthetics, new insights, and understandings?

          These AIs can’t provide new information because they can’t create something new, they can only reconfigure previously provided info. They can’t provide new aesthetics for the same reason, they can only recreate pre-existing aesthetics from the works fed to them, and they definitely can’t provide new insights or understandings because again, there is no intent or interpretation going on, just regurgitation.

          The fact that it’s so strict that even stuff that meets all the criteria might still not qualify as fair use only supports what I said about how even derivative works made by humans are subject to a lot of laws and regulations, and if human works are under that much scrutiny then there’s no reason why AI works shouldn’t also be under at least as much scrutiny or more. The fact that so much of fair use defense is dependent on having intent, and providing new meaning, insights, and information, is just another reason why AI can’t hide behind fair use or be given a pass automatically because “humans make derivative works too”. Even derivative human works are subject to scrutiny, criticism, and regulation, and so should AI works.

          • Even_Adder@lemmy.dbzer0.com · +3/-11 · edited · 10 months ago

            I’m a bit confused about what point you’re trying to make. There is not a single paragraph or example in the link you provided that doesn’t support what I’ve said, and none of the examples provided in that link are something that qualified as fair use despite not meeting any criteria. In fact one was the opposite, as something that met all the criteria but still didn’t qualify as fair use.

            You said "…fair use requires it to provide commentary, criticism, or parody of the original work used." This isn’t true; if you look at the summaries of fair use cases I provided, you can see there are plenty of cases where no such purpose was stated.

            These (narrow) AIs cannot add new expression or meaning, because they do not have intent. They are just replicating and rearranging learned patterns mindlessly.

            You’re anthropomorphizing a machine here, the intent is that of the person using the tool, not the tool itself. These are tools made by humans for humans to use. It’s up to the artist to make all the content choices when it comes to the input and output and everything in between.

            These AIs can’t provide new information because they can’t create something new, they can only reconfigure previously provided info. They can’t provide new aesthetics for the same reason, they can only recreate pre-existing aesthetics from the works fed to them, and they definitely can’t provide new insights or understandings because again, there is no intent or interpretation going on, just regurgitation.

            I’m going to need a source on this too. This statement isn’t backed up with anything.

            The fact that it’s so strict that even stuff that meets all the criteria might still not qualify as fair use only supports what I said about how even derivative works made by humans are subject to a lot of laws and regulations, and if human works are under that much scrutiny then there’s no reason why AI works shouldn’t also be under at least as much scrutiny or more. The fact that so much of fair use defense is dependent on having intent, and providing new meaning, insights, and information, is just another reason why AI can’t hide behind fair use or be given a pass automatically because “humans make derivative works too”. Even derivative human works are subject to scrutiny, criticism, and regulation, and so should AI works.

            AI works are human works. AI can’t be authors or hold copyright.

      • Echoes in May@lemmy.ml · +2/-7 · 10 months ago

        Neural networks are based on the same principles as the human brain; they are literally learning in the exact same way humans are. Copyrighting the training of neural nets is essentially the same thing as copyrighting interpretation and learning by humans.
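        For a concrete sense of what that "learning" means mechanically, here is a toy sketch (purely illustrative) of Rosenblatt’s classic perceptron rule, a single brain-inspired artificial neuron adjusting its weights from labeled examples:

```python
def perceptron_train(examples, epochs=20, lr=0.1):
    """Rosenblatt's perceptron rule: threshold, compare to label, adjust."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from its four input/output examples.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

        Whether that mechanism counts as learning "the exact same way humans do" is exactly what’s in dispute here, but the mechanism itself is just arithmetic.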

        • Laticauda@lemmy.ca · +4/-1 · 10 months ago

          These AIs are not neural networks based on the human brain. They’re literally just algorithms designed to perform a single task.

    • Bogasse@lemmy.world · +17/-3 · 10 months ago

      Well, I’d consider agreeing if the LLMs were considered a generic knowledge database. However, I had the impression that the whole response from OpenAI & co. to this copyright issue is “they build original content”, both for LLMs and stable diffusion models. Now that they’ve started this line of defence, I think they are stuck with proving that their “original content” is not derived from copyrighted content 🤷

      • TropicalDingdong@lemmy.world · +2/-1 · 10 months ago

        Well, I’d consider agreeing if the LLMs were considered a generic knowledge database. However, I had the impression that the whole response from OpenAI & co. to this copyright issue is “they build original content”, both for LLMs and stable diffusion models. Now that they’ve started this line of defence, I think they are stuck with proving that their “original content” is not derived from copyrighted content 🤷

        Yeah I suppose that’s on them.

    • Toasteh@lemmy.world · +9/-2 · 10 months ago

      Copyright definitely needs to be stripped back severely. Artists need time to use their own work, but after a certain time everything needs to enter the public space for the sake of creativity.

    • treefrog@lemm.ee · +34/-30 · 10 months ago

      If you sample someone else’s music and turn around and try to sell it, without first asking permission from the original artist, that’s copyright infringement.

      So, if the same rules apply, as your post suggests, OpenAI is also infringing on copyright.

      • TropicalDingdong@lemmy.world · +49/-17 · 10 months ago

        If you sample someone else’s music and turn around and try to sell it, without first asking permission from the original artist, that’s copyright infringement.

        I think you completely and thoroughly do not understand what I’m saying or why I’m saying it. Nowhere did I suggest that I do not understand modern copyright. I’m saying I’m questioning my belief in this extreme interpretation of copyright, which is represented by exactly what you just parroted. This interpretation is not only functionally and materially unworkable, but also antithetical to a reasonable understanding of how ideas and communication work.

        • TwilightVulpine@lemmy.world · +16/-11 · 10 months ago

          Because in practical terms, writers’ and artists’ livelihoods are being threatened by AIs that were trained on their work without their consent or compensation. Ultimately, the only valid justification for copyright is to enable the careers of professional creators who contribute to our culture. We knew how ideas and communication worked when copyright was first created. That is why it’s a limited-time protection, a compromise.

          All the philosophical arguments about the nature of ideas and learning, and how much a machine may be like a person, don’t change the fact that dedicating years of effort to developing your skills, only to be undercut by an AI that was trained on your own works, is an incredibly shitty position to be in.

          • poweruser@lemmy.sdf.org · +12/-8 · 10 months ago

            That’s actually not what copyright is for. Copyright was made to enhance the public culture by promoting the creation of art.

            If these record-label types impede public culture, then they are antithetical to copyright.

          • TropicalDingdong@lemmy.world · +12/-9 · 10 months ago

            All the philosophical arguments about the nature of ideas and learning, and how much a machine may be like a person, don’t change the fact that dedicating years of effort to developing your skills, only to be undercut by an AI that was trained on your own works, is an incredibly shitty position to be in.

            So should Dread Zeppelin be hauled off to jail because they created derivative works without permission? I mean, maybe they should, but not for copyright, imo. How about the fan-made Star Wars movies getting their balls sued off by Disney?

            writers and artists’ livelihoods are being threatened by AIs who were trained on their work without their consent or compensation

            Guess what? The actual copyright owners of the world, those who own tens of thousands or millions of copyrighted works, will be the precise parties paying for and developing that kind of automation, and under the current legal interpretation of copyright, it’s their property to do so with. The outrage masturbation the internet is engaged in ignores the current reality of copyright: it’s not small makers and artists benefiting from it but billion-dollar multinational corporations.

            This is a philosophical argument, and an important one: should we legally constrain the flow and use of ideas because of an individual’s right to extract profit from an idea?

            I don’t think so.

            • Sheik@lemmy.world · +7/-1 · 10 months ago

              Dread Zeppelin could have been sued. They were just lucky to be liked by Robert Plant.

              As for the Star Wars fan movie, the copyright claim about the music was dropped because it was frivolous. The video creator made a deal with Lucasfilm to use Star Wars copyrighted material, he didn’t just go yolo.

            • TwilightVulpine@lemmy.world · +11/-10 · 10 months ago

              You are tying the rights of artists making derivative works to the rights of systems being used to take advantage of those artists without consent or compensation. Not only are those two different situations, but supporting the former doesn’t mean supporting the latter.

              Like I said somewhere in this discussion, AIs are not people. People have rights that tools do not. If you want to argue in favor of parody and fan artists, do that. If you want to speak out against how the current state of copyright means corporations, rather than the actual artists, get the rights and profit over the works they create, do that. Leaping to the defense of AI is not it.

              • TropicalDingdong@lemmy.world · +5/-4 · 10 months ago

                I’m challenging the legal precedent of the barrier of creating derivative works in any media, including AI.

            • TwilightVulpine@lemmy.world · +12/-6 · 10 months ago

              AI is not a person. If you replace it with a person in an analogy, that’s a whole different discussion.

              We actually do restrict how tools can engage with artworks all the time. You know, “don’t take pictures”.

        • treefrog@lemm.ee · +12/-9 · 10 months ago

          That’s life under capitalism.

          I agree with you in essence (I’ve put a lot of time into a free software game).

          However, people are entitled to the fruits of their labor, and until we learn to leave capitalism behind artists have to protect their work to survive. To eat. To feed their kids. And pay their rent.

          Unless OpenAI is planning to pay out royalties to everyone they stole from, what they’re doing is illegal and immoral under our current, capitalist paradigm.

          • kmkz_ninja@lemmy.world · +6/-8 · 10 months ago

            Yeah, this is definitely leaning a little too “People shouldn’t pump their own gas because gas attendants need to eat, feed their kids, pay rent” for me.

      • NOT_RICK@lemmy.world · +6/-3 · edited · 10 months ago

        A sample is a fundamental part of a song’s output, not just its input. If LLMs change the input works to a high enough degree, is the output not protected as a transformative work?

        • treefrog@lemm.ee · +1/-4 · edited · 10 months ago

          It’s more like a collage of everyone’s words. It doesn’t make anything creative because it doesn’t have a body or life or real social inputs, you could say. Basically, it’s just rearranging other people’s words.

          A song that’s nothing but samples, but so many samples that it hides that fact. This is my view anyway.

          And only a handful of people are getting rich off the outputs.

          If we were in some kind of post-capitalism economy, or if we had UBI, it wouldn’t bother me really. It’s not the artist’s ego I’m sticking up for, but their livelihood.

    • AgentOrange@lemm.ee · +6/-7 · 10 months ago

      To add to that, Harry Potter is the worst example to use here. There is no extra billion that JK Rowling needs to allow her to spend time writing more books.

      Copyright was meant to encourage authors to invest in their work in the same way that patents do. If you were going to argue about the issue of lifting content from books, you should be using books that need the protection of copyright, not ones that don’t.

      • TropicalDingdong@lemmy.world · +7/-1 · 10 months ago

        Copyright was meant

        I just don’t know that I agree that this line of reasoning is useful. Who cares what it was meant for? What is it now, currently and functionally, doing?

    • BURN@lemmy.world · +3/-9 · 10 months ago

      I’m a huge proponent of expanding individual copyright to extreme amounts (an individual is entitled to own the rights and usage rights to anything they create and can revoke those rights from anyone), but not in favor of the same thing for corporations.

      I hold the exact opposite view as you. As long as it’s a truly creative work (a writing, music, artwork, etc) then you own that specific implementation of the idea. Someone can make something else based on it, but you still own the original content.

      LLMs and companies using them need to pay for the content in some way. This is already done through licensing in other parallels, and will likely come to AI quickly.

      • TropicalDingdong@lemmy.world · +6 · 10 months ago

        To be clear, I’m still working through my thinking in this but it’s been something cooking for quite a while. I may not have all the words to express my meaning. For example, I think there are two routes to take in making my argument, one moral, the other technical. I’m not building on the morality of copyright, but focusing on the technical aspects of the limits of ideas.

        I suppose I would ask you then about your views on authoritarianism. Because it seems to me that without an extremely authoritarian state, it would be basically impossible to enforce your view of copyright. Are you okay with that kind of pervasiveness?

        Also, from a technical perspective, how do you propose this view of copyright be applied? This goes toward the broader point I think I believe in: it’s not just that copyright laws are, on their face, ridiculous; they are also technically almost unenforceable in their modern extremist interpretation without an extremely pervasive form of surveillance.

        • BURN@lemmy.world · +3/-6 · edited · 10 months ago

          Easy. The same way we already do it. We enforce music licensing, video licensing, and other IP licensing. It’s been done. All this would do is extend those rights to the individual and remove them from corporations. Work product can be owned by companies, but not indefinitely. Individuals should always be in control of their creations.

          Restrictions would more or less be strictly commercial, to where hobbyists wouldn’t be impacted, but as soon as it’s used to make money the original creators are owed as part of it.

          It wouldn’t be any harder than it is now, as long as copyright is proved. (Hey look, this is the first time I’ve found an actual use for NFTs.) In general, anything being done for monetary gain is already monitored and surveilled, so this wouldn’t be a change there either.

          Edit: Also most of us already live in authoritarian states. This won’t really change anything. Big corps already enforce their copyright when it makes monetary sense and are actively trolling for unauthorized uses.

          • TropicalDingdong@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            ·
            10 months ago

            It wouldn’t be any harder than it is now, as long as copyright is proved. (Hey look, this is the first time I’ve found an actual use for NFTs.) In general, anything being done for monetary gain is already monitored and surveilled, so this wouldn’t be a change there either.

            Personally, I think you are describing a dystopian, authoritarian landscape devoid of any real creativity or interesting ideas. I’m a believer that all ideas are free to be stolen, copied, and improved upon; that imitation of ideas is a fundamental human right, and fundamental to what it means to be human. Likewise, I think our social and media landscape would be much poorer without this right. I don’t think anyone has the inherent right to profit off of an idea.

            • BURN@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              5
              ·
              10 months ago

              I feel the exact opposite. There’s no reason for me to create anything if someone else can come along and steal it. Eliminating copyright will bring your dystopian landscape where nobody shares any sort of art or creative work because someone else will steal it.

              What motivation is there for creatives if you’re just telling them their work has no implicit value and anyone else can come along and reappropriate it for whatever they’d like?

              • TropicalDingdong@lemmy.world
                link
                fedilink
                English
                arrow-up
                5
                arrow-down
                1
                ·
                10 months ago

                I feel the exact opposite. There’s no reason for me to create anything if someone else can come along and steal it. Eliminating copyright will bring your dystopian landscape where nobody shares any sort of art or creative work because someone else will steal it.

                This is great because I think you are totally correct in your sentiment that we believe oppositely. I see art created only for the purpose of profit as drivel; true art is an expression of the self. If the only reason you make art is for profit, you aren’t an artist, you are an employee.

                • BURN@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  2
                  ·
                  10 months ago

                  That’s a great theory and all, but it’s not even about money. I make no money from my photos, but I also refrain from posting any of them because I’d rather they not be used for AI training. Same with any music I create, and I’m getting there with my code.

                  The nobility of art has always been in question, and it’s consistently been proven that artists who aren’t compensated for their work also tend to create less.

                  This is also not explicitly about profit. If I write a song and then it’s used at a hate rally, I currently have no recourse. They’re not making money from that application (directly), but they are using my creation to promote something I don’t agree with.

                  I’m curious to know if you’re an artist yourself, as it’s very contrary to the opinions from other creatives I know.

              • kmkz_ninja@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                10 months ago

                I assume you’re against the communal and collective culture that modders for games enjoy?

                I assume you also believe no technological innovations are produced in America anymore since China would simply steal it.

                • BURN@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  10 months ago

                  Nowhere did I say derivative works are not ok. If a game maker explicitly forbids using modded versions of their game, I think that should be up to them. Games that have vibrant modding communities are almost always at least partially supported by the developer anyways.

                  My points are about individual copyright anyway, not corporate. Along with increasing individual protections, I also propose decreasing corporate copyright protection.

                  I believe that China makes 90% of the same product for 80% of the price after ripping off their American counterparts. But that’s also entirely off topic and really has nothing to do with this. Art/Creative Works are entirely different than physical goods.

      • SkyezOpen@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        3
        ·
        10 months ago

        I hold whatever view makes George Lucas stop digitally remastering the original trilogy.

  • fubo@lemmy.world
    link
    fedilink
    English
    arrow-up
    109
    arrow-down
    18
    ·
    edit-2
    10 months ago

    If I memorize the text of Harry Potter, my brain does not thereby become a copyright infringement.

    A copyright infringement only occurs if I then reproduce that text, e.g. by writing it down or reciting it in a public performance.

    Training an LLM from a corpus that includes a piece of copyrighted material does not necessarily produce a work that is legally a derivative work of that copyrighted material. The copyright status of that LLM’s “brain” has not yet been adjudicated by any court anywhere.

    If the developers have taken steps to ensure that the LLM cannot recite copyrighted material, that should count in their favor, not against them. Calling it “hiding” is backwards.

  • Blapoo@lemmy.ml
    link
    fedilink
    English
    arrow-up
    102
    arrow-down
    11
    ·
    10 months ago

    We have to distinguish between LLMs

    • Trained on copyrighted material and
    • Outputting copyrighted material

    They are not one and the same

    • Even_Adder@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      37
      arrow-down
      9
      ·
      10 months ago

      Yeah, this headline is trying to make it seem like training on copyrighted material is or should be wrong.

      • scv@discuss.online
        link
        fedilink
        English
        arrow-up
        28
        arrow-down
        3
        ·
        10 months ago

        Legally the output of the training could be considered a derived work. We treat brains differently here, that’s all.

        I think the current intellectual property system makes no sense and AI is revealing that fact.

      • TropicalDingdong@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        10
        ·
        10 months ago

        I think this brings up broader questions about the currently quite extreme interpretation of copyright. Personally, I don’t think it’s wrong to sample from or create derivative works from something that is accessible. If it’s not behind lock and key, it’s free to use. If you have a problem with that, then put it behind lock and key. No one is forcing you to share your art with the world.

        • Bogasse@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          2
          ·
          10 months ago

          Aren’t most books actually locked behind paywalls and not free to use? Or maybe I don’t understand what you meant?

        • Railcar8095@lemm.ee
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          1
          ·
          10 months ago

          Following that: if a sailor of the seas were to put a copy of a protected book on the internet and ChatGPT was trained on it, how would that argument go? The copyright owner didn’t place it there, so it wasn’t “their decision”. And savvy people can make sure it’s accessible if they want to.

          My belief is: if they can use all non-locked data for free, then the model should be shared for free too, and its outputs shouldn’t be subject to copyright. Just for context.

    • Tetsuo@jlai.lu
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      10 months ago

      Output from an AI has recently been ruled not copyrightable.

      I think it stemmed from the actors’ strikes recently.

      It was stated that only work originating from a human can be copyrighted.

      • Anders429@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        10 months ago

        Output from an AI has recently been ruled not copyrightable.

        Where can I read more about this? I’ve seen it mentioned a few times, but never with any links.

        • Even_Adder@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          10 months ago

          They clearly only read the headline. If they’re talking about the ruling that came out this week, that whole thing was about trying to give an AI authorship of a work generated solely by a machine and having the copyright go to the owner of the machine through the work-for-hire doctrine. So an AI itself can’t be an author or hold a copyright, but humans using AIs can still be copyright holders of any qualifying works.

  • Skanky@lemmy.world
    cake
    link
    fedilink
    English
    arrow-up
    68
    arrow-down
    1
    ·
    10 months ago

    Vanilla Ice had it right all along. Nobody gives a shit about copyright until big money is involved.

          • kmkz_ninja@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            3
            ·
            10 months ago

            His point is equally valid. Can an artist be compelled to show the methods of their art? Is it right to force an artist to give up their methods because another artist thinks they are using AI to derive copyrighted work? Haven’t we already seen that LLMs are really poor at evaluating whether or not something was created by an LLM? Wouldn’t making strong laws on such an already opaque and difficult-to-prove issue be more of a burden on smaller artists than on large studios with lawyers in tow?

      • Asuka@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        10 months ago

        If I read Harry Potter and wrote a novel of my own, no doubt ideas from it could consciously or subconsciously influence it and be incorporated into it. Hey, is that any different from what an LLM does?

      • TwilightVulpine@lemmy.world
        link
        fedilink
        English
        arrow-up
        19
        arrow-down
        6
        ·
        10 months ago

        You joke but AI advocates seem to forget that people have fundamentally different rights than tools and objects. A photocopier doesn’t get the right to “memorize” and “learn” from a text that a human being does. As much as people may argue that AIs work different, AIs are still not people.

        And if they ever become people, the situation will be much more complicated than whether they can imitate some writer. But we aren’t there yet, even their advocates just uses them as tools.

    • TropicalDingdong@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      10
      ·
      10 months ago

      Exactly. If I write some Looney Tunes fan fiction, Warner doesn’t own that. This ridiculous view of copyright (which isn’t being challenged in the public discourse) needs to be confronted.

      • wmassingham@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        2
        ·
        edit-2
        10 months ago

        They can own it, actually. If you use the characters of Bugs Bunny, etc., or the setting (do they have a canonical setting?) then Warner does own the rights to the material you’re using.

        For example, see how the original Winnie the Pooh material just entered public domain, but the subsequent Disney versions have not. You can use the original stuff (see the recent horror movie for an example of legal use) but not the later material like Tigger or Pooh in a red shirt.

        Now if your work is satire or parody, then you can argue that it’s fair use. But generally, most companies don’t care about fan fiction because it doesn’t compete with their sales. If you publish your Harry Potter fan fiction on Livejournal, it wouldn’t be worth the money to pay the lawyers to take it down. But if you publish your Larry Cotter and the Wizard’s Rock story on Amazon, they’ll take it down because now it’s a competing product.

          • Sethayy@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            10 months ago

            Can’t, but they’re pretty open about how they trained the model, so they’ve almost admitted guilt (though they weren’t hosting the pirated content; it’s still out there and would be trained on). Because unless they trained it on a paid Netflix account, there’s no way to get it legally.

            I don’t know where this lands legally, but I’d assume not in their favour.

    • CoderKat@lemm.ee
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      10 months ago

      It’s honestly a good question. It’s perfectly legal for you to memorize a copyrighted work. In some contexts, you can recite it too (particularly under the ever-perilous fair use). And even if you don’t recite a copyrighted work directly, you are most certainly allowed to learn to write from reading copyrighted books, then come up with your own writing based on what you’ve read. You’ll probably try your best to avoid copying anyone, but you might still make mistakes, simply by forgetting that some idea isn’t your own.

      But can AI? If we want to view AI as basically an artificial brain, then shouldn’t it be able to do what humans can do? Though at the same time, it’s not actually a brain nor is it a human. Humans are pretty limited in what they can remember, whereas an AI could be virtually boundless.

      If we’re looking at intent, the AI companies certainly aren’t trying to recreate copyrighted works. They’ve actively tried to stop it as we can see. And LLMs don’t directly store the copyrighted works, either. They’re basically just storing super hard to understand sets of weights, which are a challenge even for experienced researchers to explain. They’re not denying that they read copyrighted works (like all of us do), but arguably they aren’t trying to write copyrighted works.

    • SubArcticTundra@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      10 months ago

      No, because you paid for a single viewing of that content with your cinema ticket. And frankly, I think that the price of a cinema ticket (= a single viewing, which it was) should be what OpenAI should be made to pay.

  • rosenjcb@lemmy.world
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    3
    ·
    edit-2
    10 months ago

    The powers that be have done a great job convincing the layperson that copyright is about protecting artists and not publishers. It’s historically inaccurate and you can discover that copyright law was pushed by publishers who did not want authors keeping second hand manuscripts of works they sold to publishing companies.

    Additional reading: https://en.m.wikipedia.org/wiki/Statute_of_Anne

  • Sentau@lemmy.one
    link
    fedilink
    English
    arrow-up
    45
    arrow-down
    7
    ·
    edit-2
    10 months ago

    I think a lot of people are not getting it. AI/LLMs can train on whatever they want, but when these LLMs are then used for commercial reasons to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour. Similar to how using copyrighted clips in a monetized video can get you a strike against your channel, but if the video is not monetized, the chances of YouTube taking action against you are lower.

    Edit - If this were an open-source model available for use by the general public at no cost, I would be far less bothered by claims of copyright infringement by the model.

    • Tyler_Zoro@ttrpg.network
      link
      fedilink
      English
      arrow-up
      30
      arrow-down
      4
      ·
      10 months ago

      AI/LLMs can train on whatever they want, but when these LLMs are then used for commercial reasons to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.

      And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990, for training their neural networks on my work?

      Learning is not and has never been considered a financial transaction.

    • FMT99@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      3
      ·
      10 months ago

      But wouldn’t this training and the subsequent output be so transformative that being based on the copyrighted work makes no difference? If I read a Harry Potter book and then write a story about a boy wizard who becomes a great hero, anyone trying to copyright strike that would be laughed at.

    • 1ird@notyour.rodeo
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      3
      ·
      edit-2
      10 months ago

      How is it any different from someone reading the books, being influenced by them and writing their own book with that inspiration? Should the author of the original book be paid for sales of the second book?

      • Corkyskog@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        10
        ·
        10 months ago

        They used to be a non-profit that immediately turned into a for-profit once their product was refined. They took a bunch of people’s effort, whether it be training materials or training monkeys using the product, and then slapped a huge price tag on it.

        • Touching_Grass@lemmy.world
          cake
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          10 months ago

          I didn’t know they were a non-profit. I’m good as long as they keep the current model: release older models free to use while charging for extra or the latest features.

      • BURN@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        4
        ·
        10 months ago

        They’re stealing a ridiculous amount of copyrighted works to use to train their model without the consent of the copyright holders.

        This includes the single person operations creating art that’s being used to feed the models that will take their jobs.

        OpenAI should not be allowed to train on copyrighted material without paying a licensing fee at minimum.

        • uzay@infosec.pub
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          2
          ·
          10 months ago

          Also Sam Altman is a grifter who gives people in need small amounts of monopoly money to get their biometric data

          • LifeInMultipleChoice@lemmy.ml
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            10 months ago

            So hypothetical here. If Dreddit did launch a system that made it so users could trade Karma in for real currency or some alternative, does that mean that all fan fictions and all other fan boy account created material would become copyright infringement as they are now making money off the original works?

        • Touching_Grass@lemmy.world
          cake
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          5
          ·
          10 months ago

          If they purchased the data, or the data is free, it’s theirs to do what they want with, short of violating the copyright by reselling the original work as their own. Training off it should not violate any copyright if the work was available for free or purchased by at least one person involved. Capitalism should work both ways.

          • BURN@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            4
            ·
            10 months ago

            But they don’t purchase the data. That’s the whole problem.

            And copyright is absolutely violated by training off it. It’s being used to make money and no longer falls under even the widest interpretation of fair use.

              • BURN@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                3
                ·
                10 months ago

                It may be freely available for non-commercial works, e.g. photos on Photobucket, the Internet Archive’s free book archives, etc.

                Most everything is on the internet these days, copyrighted or not. I’m sure if I googled enough I could find the entire text of Harry Potter for free. I still haven’t purchased it, and technically it’s not legally freely available. But in training these models I guarantee they didn’t care where the data came from, just that it was data.

                I’m against piracy as well for the record, but pretty much everything is available through torrenting and pirate sites at this point, copyright be damned.

                • Touching_Grass@lemmy.world
                  cake
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  4
                  ·
                  edit-2
                  10 months ago

                  Don’t care. That’s not my problem, or these LLMs’ problem, that they don’t secure their copyright. They shouldn’t come asking for others to pay for them not securing their data. I see it as a double-edged sword.

                  I really hope this is a wake up call to all creative types to pack up and not use the internet like a street corner while they busk.

                  If they want to come online to contribute like everybody else, just have fun and post stuff, that’s great. But all of them are no different than any other greedy corporation. They all want more toll roads. When they do make it and earn millions and get our attention, they exploit it with more ads. It swallows all the free good content. Sites gear towards these rich creators. They lawyer up and sue everybody and everything that looks or sounds like them. We lose all our good spaces to them.

                  I hope the LLM allows regular people to shit post in peace finally.

            • GroggyGuava@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              edit-2
              10 months ago

              You need to expand on how learning from something to make money is somehow using the original material to make money. Considering that’s how art works in general, I’m having a hard time taking the side of “learning from media to make your own is against copyright”. As long as they don’t reproduce the same thing as the original, I don’t see any issues with it. If they learned from The Lord of the Rings and then made “The Lord of the Rings”, then yes, that’d be infringement. But if they use that data to make a new IP with original ideas, then how is that bad for the world/artists?

              • BURN@lemmy.world
                link
                fedilink
                English
                arrow-up
                3
                arrow-down
                1
                ·
                10 months ago

                Creating an AI model is a commercial work. They’re made to make money. These models are dependent on other artists’ data to train on; the models would be useless if they weren’t able to train on anything.

                I hold the stance that using copyrighted data as part of a training set is a violation of copyright. That still hasn’t been fully challenged in court, so there’s no specific legal definition yet.

                Due to the requirement of copyrighted materials to make the model function, I feel that they are using copyrighted works in order to build a commercial product.

                Also, AI doesn’t learn. LLMs build statistical models based on the sentence structure of what they’ve seen before. There’s no level of understanding or inherent knowledge, and there’s nothing new being added.
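                To make that concrete, here’s a toy sketch (my own illustration, nothing like a real transformer): a bigram model keeps only word-pair frequencies rather than a copy of the training text, yet when the statistics are narrow enough it can still regurgitate phrases from that text.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it stores word-pair counts,
# not the training text itself.
def train(text):
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n=5):
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # no statistics for this word; stop generating
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train("the boy wizard cast a spell and the boy wizard won")
print(generate(model, "the"))  # emits a phrase straight out of the training text
```

                The tension in this thread is visible even at this tiny scale: the model holds statistics rather than the text, but the statistics can still encode the text.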

  • paraphrand@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    12
    ·
    10 months ago

    Why are people defending a massive corporation that admits it is attempting to create something that will give them unparalleled power if they are successful?

    • bamboo@lemm.ee
      link
      fedilink
      English
      arrow-up
      28
      arrow-down
      5
      ·
      10 months ago

      Mostly because fuck corporations trying to milk their copyright. I have no particular love for OpenAI (though I do like their product), but I do have great disdain for already-successful corporations that would hold back the progress of humanity because they didn’t get paid (again).

        • bamboo@lemm.ee
          link
          fedilink
          English
          arrow-up
          5
          ·
          10 months ago

          Perhaps, and when that happens I would be equally disdainful towards them.

        • LifeInMultipleChoice@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          10 months ago

          In the United States there was a judgement made the other day saying that works created solely by AI are not copyrightable. So that would put a speed bump there.
          I may have misunderstood what you meant, though.

          • msage@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            10 months ago

            Yeah, they might not copyright it, but after it becomes the ‘one true AI’, it will be at the hands of Microsoft, so please do not act friendly towards them.

            It will turn on you just like every private company has.

            (don’t mean specifically you, but everyone generally)

          • uis@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            10 months ago

            Huh. Doesn’t this mean that, technically, AI cannot commit copyright infringement?

            • LifeInMultipleChoice@lemmy.ml
              link
              fedilink
              English
              arrow-up
              2
              ·
              10 months ago

              Nah, it would mean that you cannot copyright a work created by an AI, such as a piece of art.

              E.g. if you tell it to draw you a donkey carting avocados, the picture can be used by anyone from what I understand.

              • uis@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                10 months ago

                you cannot copyright a work created by an AI, such as a piece of art.

                That’s what I said. Copyright infringement is when there is another copyrightable object that is a copy of the first object. AI is not within the copyright area. You can’t copyright it, but you also can’t be sued for copyright infringement.

                if you tell it to draw you a donkey carting avocados, the picture can be used by anyone from what I understand.

                Yes. Same for Public Domain, but PD is another status. PD applies only to copyrightable work.

        • uis@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          10 months ago

          It’s like the argument “but new politicians will steal more” that I hear in Russia from people who defend Putin.

          • msage@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            10 months ago

            It’s literally not, wtf.

            Do not let any private entity get an overwhelming majority on anything, period.

            But do not kid yourself that Microsoft will let OpenAI do anything for public once it gets big enough.

            OpenAI is open only in name after they rolled back all the promises of being for everyone.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              10 months ago

              That’s my entire point. It’s not who, but how long.

              Also, Microsoft plays both sides here. “OpenAI vs copyright” is the wrong question. There’s more to it: both sides are the status quo. Both are for keeping corporate ownership of ideas.

      • assassin_aragorn@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        3
        ·
        10 months ago

        There’s a massive difference though between corporations milking copyright and authors/musicians/artists wanting their copyright respected. All I see here is a corporation milking copyrighted works by creative individuals.

    • Cosmic Cleric@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      10 months ago

      Because ultimately, it’s about the truth of things, and not what team is winning or losing.

    • Whimsical@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      10 months ago

      The dream would be that they manage to make their own glorious free & open source version, so that after a brief spike in corporate profit as they fire all their writers and artists, suddenly nobody needs those corps anymore because EVERYONE gets access to the same tools - if everyone has the ability to churn out massive content without hiring anyone, that theoretically favors those who never had the capital to hire people to begin with, far more than those who did the hiring.

      Of course, this stance doesn’t really have an answer for any of the other problems involved in the tech, not the least of which is that there’s bigger issues at play than just “content”.

      • otherbastard@lemm.ee
        link
        fedilink
        English
        arrow-up
        20
        arrow-down
        10
        ·
        10 months ago

        An LLM is not a person, it is a product. It doesn’t matter that it “learns” like a human - at the end of the day, it is a product created by a corporation that used other people’s work, with the capacity to disrupt the market that those folks’ work competes in.

        • Touching_Grass@lemmy.world
          cake
          link
          fedilink
          English
          arrow-up
          12
          arrow-down
          8
          ·
          edit-2
          10 months ago

          And it should be able to freely use anything that’s available to it. These massive corporations and entities have exploited all the free spaces to advertise and sell us their own products and are now sour.

          If they had their way, they would lock up much more of the net behind paywalls. Everybody should side with the LLMs on this.

          • otherbastard@lemm.ee
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            3
            ·
            10 months ago

            You are somehow conflating “massive corporation” with “independent creator,” while also not recognizing that successful LLM implementations are and will be run by massive corporations, and eventually plagued with ads and paywalls.

            People who make things should be paid for their time and the value they provide their customers.

            • Touching_Grass@lemmy.world
              cake
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              3
              ·
              edit-2
              10 months ago

              People are paid. But they’re greedy and expect far more compensation than they deserve. In this case they should not be compensated for having an LLM ingest their work, as long as that work was legally owned or obtained.

          • assassin_aragorn@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            10 months ago

            Except the massive corporations and entities are the ones getting rich on this. They’re seeking to exploit the work of authors and musicians and artists.

            Respecting the intellectual property of creative workers is the anti corporate position here.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              10 months ago

              Except corporations have infinitely more resources (money, lawyers) compared to the people who create. Take Jarek Duda (a mathematician from Poland) and Microsoft as an example. He created a new compression algorithm, and Microsoft came along a few years later and patented it, in Britain AFAIK. To contest the patent and file prior art he needs £100k.

              • assassin_aragorn@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                10 months ago

                I think there’s an important distinction to make here between patents and copyright. Patents are the issue with corporations, and I couldn’t care less if AI consumed all that.

                • uis@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  10 months ago

                  And for copyright there is no possible way to contest it. Also, when copyright expires there is no guarantee the work will be accessible to humanity. Patents are bad; copyright is even worse.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              10 months ago

              There is nothing anti-corporate about it if the result can be alienated.

            • Touching_Grass@lemmy.world
              cake
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              6
              ·
              10 months ago

              A large number of these artists, musicians and authors are corporate America today. And those authors, artists and musicians have exploited all our spaces for far too long. Most of the internet has been turned toxic due to their greed. I wish they would take their content and go find their own spaces instead of mooching off everybody else’s. These LLMs are only doing what they’ve done.

          • Cosmic Cleric@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            6
            ·
            10 months ago

            If they had their way they are going to lock up much more of the net behind paywalls.

            This!

            When the Internet was first a thing corpos tried to put everything behind paywalls, and we pushed back and won.

            Now, the next generation is advocating to put everything behind a paywall again?

            • Touching_Grass@lemmy.world
              cake
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              2
              ·
              10 months ago

              It’s always weird to me how the old values from the early internet days sort of vanished. Is it by design that there aren’t any more Richard Stallmans, or is it the natural progression of an internet that was taken over?

              • Cosmic Cleric@lemmy.world
                link
                fedilink
                English
                arrow-up
                5
                arrow-down
                1
                ·
                10 months ago

                Not to inject politics into this, but the Internet started off way more socialist than it is today.

                Capitalism is creeping in and taking over slowly. And it’s being done in a slow boiling-the-toad-in-a-pot sort of way.

          • scarabic@lemmy.world
            link
            fedilink
            English
            arrow-up
            12
            arrow-down
            7
            ·
            10 months ago

            First, we don’t have to make AI.

            Second, it’s not about it being unable to learn, it’s about the fact that they aren’t paying the people who are teaching it.

              • FatCrab@lemmy.one
                link
                fedilink
                English
                arrow-up
                7
                arrow-down
                3
                ·
                10 months ago

                By the reasoning that training a generative model infringes IP, a robot that went into a library with a library card and optically read every book there to create the same generative model would also be infringing IP.

              • AncientMariner@lemmy.world
                link
                fedilink
                English
                arrow-up
                3
                arrow-down
                1
                ·
                10 months ago

                Humans can judge information, make decisions on it, and adapt it. AI mostly just looks at what is statistically most likely based on training data. If only 1 piece of data exists, it will copy, not paraphrase. An example was, I think, from Copilot, where it just printed out the code and comments from an old game verbatim. I think Quake 2. It isn’t intelligence, it is statistical copying.
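                The “statistical copying” point can be illustrated with a toy sketch. This is only an assumption-laden illustration (a tiny bigram word model, nothing like a real transformer, and the corpus and function names are made up here): when the training data contains a single source, the “most likely” continuation at every step is the original one, so greedy generation reproduces the source verbatim.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length):
    """Greedily emit the statistically most likely next word each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation was ever seen in training
        # pick the most frequent continuation observed in the data
        out.append(max(followers, key=followers.get))
    return " ".join(out)

# With a single training document, every "most likely" next word is the
# original one, so the model copies rather than paraphrases.
corpus = "the boy who lived under the stairs"
model = train_bigrams(corpus)
print(generate(model, "the", 3))  # → the boy who lived
```

                A real LLM averages over vastly more data, which is why memorized verbatim output mostly shows up for passages that are rare or unique in the training set.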

                • uis@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  2
                  ·
                  10 months ago

                  Well, mathematics cannot be copyrighted. In most countries at least.

              • assassin_aragorn@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                4
                ·
                10 months ago

                because it might hurt authors and musicians and artists and other creative workers

                FTFY. Corporations shouldn’t be making a fucking dime from any of these works without fairly paying the creators.

    • SCB@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      3
      ·
      10 months ago

      Leftists hating on AI while dreaming of post-scarcity will never not be funny

    • Crozekiel@lemmy.zip
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      10
      ·
      10 months ago

      AI is the new fanboy following since it became official that NFTs are all fucking scams. They need a new technological god to push to feel superior to everyone else…

  • uriel238@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    30
    arrow-down
    8
    ·
    edit-2
    10 months ago

    Training AI on copyrighted material is no more illegal or unethical than training human beings on copyrighted material (from library books or borrowed books, nonetheless!). And trying to challenge the veracity of generative AI systems on the notion that it was trained on copyrighted material only raises the specter that IP law has lost its validity as a public good.

    The only valid concern about generative AI is that it could displace human workers (or swap out skilled jobs for menial ones) which is a problem because our society recognizes the value of human beings only in their capacity to provide a compensation-worthy service to people with money.

    The problem is this is a shitty, unethical way to determine who gets to survive and who doesn’t. All the current controversy about generative AI does is kick this can down the road a bit. But we’re going to have to address soon that our monied elites will be glad to dispose of the rest of us as soon as they can.

    Also, amateur creators are as good as professionals, given the same resources. Maybe we should look at creating content by other means than for-profit companies.

    • Draedron@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      10 months ago

      Also, this argument of replacing human workers has been made with every single industrial revolution.

        • Draedron@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          10 months ago

          The point is that fighting back against it is stupid. People will still have work. New technology opens up new ways to work, with new jobs.

  • ClamDrinker@lemmy.world
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    1
    ·
    edit-2
    10 months ago

    This is just OpenAI covering their ass by attempting to block the most egregious and obvious outputs in legal gray areas, something they’ve been doing for a while, hence why their AI models are known to be massively censored. I wouldn’t call that ‘hiding’. It’s kind of hard to hide it was trained on copyrighted material, since that’s common knowledge, really.

  • RadialMonster@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    3
    ·
    10 months ago

    What if they scraped a whole lot of the internet, and those excerpts were in random blogs and posts and quotes and memes etc. all over the place? They didn’t ingest the material directly, or knowingly.

    • beetus@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      10 months ago

      Not knowing something is a crime doesn’t stop you from being prosecuted for committing it.

      It doesn’t matter if someone else is sharing copyright works and you don’t know it and use it in ways that infringes on that copyright.

      “I didn’t know that was copyrighted” is not a valid defence.

      • stewsters@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        3
        ·
        10 months ago

        Is reading a passage from a book actually a crime though?

        Sure, you could try to regenerate the full text from quotes you read online, much like you could open a lot of video reviews and recreate larger portions of the original text, but you would not blame the video editing program for that, you would blame the one who did it and decided to post it online.

    • chemical_cutthroat@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      8
      ·
      10 months ago

      That’s why this whole argument is worthless, and why I think that, at its core, it is disingenuous. I would be willing to bet a steak dinner that a lot of these lawsuits are just fishing for money, and the rest are set up by competition trying to slow the market down because they are lagging behind. AI is an arms race, and it’s growing so fast that if you got in too late, you are just out of luck. So, companies that want in are trying to slow down the leaders at best, and at worst they are trying to make them publish their training material so they can just copy it. AI training models should be considered IP, and should be protected as such. It’s like trying to get the Colonel’s secret recipe by saying that all the spices that were used have been used in other recipes before, so it should be fair game.

      • Kujo@lemm.ee
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        10 months ago

        If training models are considered IP, then shouldn’t we allow other training models to view and learn from the competition? If learning from other IPs that are copyrighted is okay, why should training models be treated differently?

        • chemical_cutthroat@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          3
          ·
          10 months ago

          They are allegedly learning from copyrighted material; there is no actual proof that they have been trained on the actual material rather than just snippets that have been published online. And it would be illegal for them to be trained on full copyrighted materials, because those are protected by laws that prevent that.

  • Thorny_Thicket@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    6
    ·
    10 months ago

    I don’t get why this is an issue. Assuming they purchased a legal copy of whatever it was trained on, then what’s the problem? Like, really. What does it matter that it knows a certain book from cover to cover or is able to imitate art styles, etc.? That’s exactly what people do too. We’re just not quite as good at it.

    • Hildegarde@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      12
      ·
      10 months ago

      A copyright holder has the right to control who has the right to create derivative works based on their copyright. If you want to take someone’s copyright and use it to create something else, you need permission from the copyright holder.

      The one major exception is Fair Use. It is unlikely that AI training is a fair use. However this point has not been adjudicated in a court as far as I am aware.

      • FatCat@lemmy.world
        link
        fedilink
        English
        arrow-up
        26
        arrow-down
        8
        ·
        10 months ago

        It is not a derivative work, it is a transformative work. Just like human artists “synthesise” the art they see around them and make new art, so do LLMs.

        • BURN@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          10 months ago

          LLMs don’t create anything new. They are limited to what they were trained on, and all the assumptions they make are based on that data. They do not learn new things or present new ideas, only ideas that are already present in their training.

        • Hildegarde@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          6
          ·
          10 months ago

          Transformative works are not a thing.

          If you copy the copyrightable elements of another work, you have created a derivative work. That work needs to be transformative in order to be eligible for its own copyright, but being transformative alone is not enough to make it non-infringing.

          There are four fair use factors. Transformativeness is only considered by one of them. That is not enough to make a fair use.

          • Cosmic Cleric@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            10 months ago

            Transformativeness is only considered by one of them. That is not enough to make a fair use.

            Somebody better let YouTube content creators know that. /s

      • LordShrek@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        5
        ·
        10 months ago

        this is so fucking stupid though. almost everyone reads books and/or watches movies, and their speech is developed from that. the way we speak is modeled after characters and dialogue in books. the way we think is often from books. do we track down what percentage of each sentence comes from what book every time we think or talk?

        • SpiderShoeCult@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          10 months ago

          Aye, but I’m thinking the whole notion of copyright is banking on the fact that human beings are inherently lazy and not everyone will start churning out books in the same universe or style. And if they do, it takes quite some time to get the finished product and they just get sued for it. It’s easy, because there’s a single target.

          So there’s an extra deterrent to people writing and publishing a new harry potter novel, unaffiliated with the current owner of the copyright. Invest all that time and resources just to be sued? Nah…

          The issue with generating stuff with ’puters is that you invest way less time, so the same problem pops up for the copyright owner, but they’re DDoS-ed on their possible attack routes. Will they really sue thousands or hundreds of thousands of internet randos generating Harry Potter erotica using an LLM? Would you even know who they are? People can hide money away in Switzerland from entire governments; I’m sure there are ways to hide your identity from a book publisher.

          It was never about the content, it’s about the opportunities the technology provides to halt the gears of the system that works to enforce questionable laws. So they’re nipping it in the bud.

          • LordShrek@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            3
            ·
            10 months ago

            this brings up the question: what is a book? what is art? if an “AI” can now churn out the next harry potter sequel and people literally can’t tell that it’s not written by JK Rowling, then what does that mean for what people value in stories? what is a story? is this a sign that we humans should figure something new out, instead of reacting according to an outdated protocol?

            yes, authors made money in the past before AI. now that we have AI and most people can get satisfied by a book written by AI, what will differentiate human authors from AI? will it become a niche thing, where some people can tell the difference and they prefer human authors? or will there be some small number of exceptional authors who can produce something that is obviously different from AI?

            i see this as an opportunity for artists to compete with AI, rather than say “hey! no fair! he can think and write faster than me!”

            • SpiderShoeCult@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              3
              ·
              10 months ago

              Well, poor literature has always existed, which some might not even dignify with the name literature. Are writers of such things threatened by LLMs? Of course they are. Every new technology has brought with it the fear of upending somebody’s world. And to some extent, every new technology has indeed done just that.

              Personally, and… this will probably be highly unpopular, I honestly don’t care who or what created a piece of art. Is it pretty? Does it satisfy my need for just the right amount of weird, funny and disturbing to stir emotions or make me go ‘heh, interesting!’? Then it really doesn’t matter where it comes from. We put way too much emphasis on the pedigree of art and not on the content. Hell, one very nice short story I read was the greentext one about humans being AI and escaping from the simulation. Wonder how many would scoff at calling art something that came out of 4chan?

              Maybe this is the issue? Art is thought of as a purely human endeavour (also birds do it, and that one pufferfish that draws on the seabed, but they’re “dumb” animals so they don’t count, right? hell, there’s even a jumping spider that does some pretty rad dances). And if code in a machine can do it just as well (can it? let it - we’ll be all the better for it. can’t it? let it be then - no issue) then what would be the significance of being human?