In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

    • Big P@feddit.uk

      You wouldn’t be saying that if it was your content that was being ripped off

        • Kichae@kbin.social

          That’s, uh, exactly how they work? They need large amounts of training data, and that data isn’t being generated in house.

          It’s being stolen, scraped from the internet.

          • Chozo@kbin.social

            If it was publicly available on the internet, then it wasn’t stolen. OpenAI hasn’t been hacking into restricted content that isn’t meant for public consumption. You’re allowed to download anything you see online (technically, if you’re seeing it, you’ve already downloaded it). And you’re allowed to study anything you see online. Even for personal use. Even for profit. Taking inspiration from something isn’t a crime. That’s allowed. If it wasn’t, the internet wouldn’t function at a fundamental level.

            • HeartyBeast@kbin.social

              I don’t think you understand how copyright works. Something appearing on the internet doesn’t give you automatic full commercial rights to it.

        • Niello@kbin.social

          If you read copyrighted material without paying and then forget most of it a month later, left with only a vague recollection of what you read, the fact is you still accessed and used the copyrighted material without paying.

          Now let’s go a step further: you write something that is inspired by that copyrighted material, and what you wrote becomes successful to some degree, with eyes on it, but you refuse to admit that’s where you got the idea from because you only have a vague recollection. The fact is you got the idea from the copyrighted material.

            • nicetriangle@kbin.social

              Except that nobody has a superhuman ability to create endless amounts of content almost instantly based on said work.

              People throw this “artists/writers use inspiration to create X” argument around all the time, and it totally ignores the fact that we’re not talking about some person spending 10s/100s/1000s of hours of their time to copy someone’s style.

              It’s a piece of software churning it out in seconds.

              • exscape@kbin.social

                Do generative AI models typically focus on ONE person’s style? Don’t they mix together influences from thousands of artists?

                FWIW this is not an area I’ve read up on, so I don’t have a strong opinion one way or the other.

                • volkrom@kbin.social

                  For the image generating ones like Midjourney you could ask for an artist’s style by putting their name in the prompt.
                  It probably works the same with OpenAI’s image models.

              • Tarte@kbin.social

                If I created a very slow AI that takes 10 or 100 hours for each response, would that make it any better in your opinion? I do not think the calculation speed of a piece of software is a good basis for legislation.

                If analyzing a piece of art and replicating parts of it without permission is illegal, then it should be illegal regardless of the tools used. However, that would make every single piece of art illegal, so it’s not an option. If we make only the digital tools illegal then the question still remains where to draw the line. How much inefficiency is required for a tool to still be considered legal?

                Is Adobe Photoshop generative auto-fill legal?
                Is translating with deepl.com or the modern Google Translate equivalent legal?
                Are voice activated commands on your mobile phone legal (Cortana, Siri, Google)?

                All of these tools were trained in similar ways. All of these take away jobs (read: make work/life more efficient).

                It’s hard to draw a line and I don’t have any solution to offer.

            • Niello@kbin.social

              Except for the illegally-obtaining-the-copyrighted-material part, which is the main point. And definitely not on this scale.

            • BraveSirZaphod@kbin.social

              I think there can be said to be a meaningful difference, though, due to the sheer scale and speed at which AIs can do this.

              Ultimately, I think it’s less of a direct legal question and more a societal question of whether or not we think this is fair or not. I’d expect it to ultimately be resolved by legislative bodies, not the courts.

          • Chozo@kbin.social

            That’s still not how LLMs work. I can’t believe how many of the people who are upset with them don’t understand this.

            The LLM has no idea what it’s reading. None. It’s just doing a word association game, but at a scale we can’t comprehend. It knows which arrangements of words go together, but it’s not reproducing anything with any actual intent. To get it to output anything that actually resembles a single piece of material it was trained against would require incredibly specific prompts, and at that point it’s not really the LLM’s doing anymore.
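
            In code terms, that word association game looks roughly like this toy sketch (plain Python with a made-up probability table, nothing like OpenAI’s actual code):

            import random

            # Hypothetical word-association table: P(next word | current word).
            # A real LLM encodes something like this in billions of learned weights.
            table = {
                "the": {"cat": 0.5, "dog": 0.5},
                "cat": {"sat": 0.7, "ran": 0.3},
                "dog": {"sat": 0.4, "ran": 0.6},
                "sat": {"down": 1.0},
                "ran": {"away": 1.0},
            }

            word, sentence = "the", ["the"]
            while word in table:
                choices = table[word]
                # Weighted dice roll: no meaning, no intent, just probabilities.
                word = random.choices(list(choices), weights=list(choices.values()))[0]
                sentence.append(word)

            print(" ".join(sentence))  # e.g. "the cat sat down"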

            There’s plenty of reasons to be against AI. Such as the massive amounts of data scraping that happens to train models, the possible privacy invasions that come with that, academic cheating, etc. But to be mad at AI for copyright infringement only shows a lack of understanding of what these systems actually do.

            • magic_lobster_party@kbin.social

              The training process of LLMs is to copy the source material word for word. It’s instructed to plagiarize during the training process. The copyrighted material is possibly, in one way or another, embedded in the model itself.

              In machine learning, there’s always this concern whether the model is actually learning patterns, or if it’s just memorizing the training data. Same applies to LLMs.

              Can LLMs recite entire pieces of work? Who knows?

              Does it count as copyright infringement if it does so? Possibly.

              • ReCursing@kbin.social

                The training process of LLMs is to copy the source material word for word. It’s instructed to plagiarize during the training process. The copyrighted material is possibly, in one way or another, embedded in the model itself.

                No it isn’t. That’s not how neural networks work, like at all.

                In machine learning, there’s always this concern whether the model is actually learning patterns, or if it’s just memorizing the training data. Same applies to LLMs.

                It’s learning patterns. It’s not memorising training data. Again, not how the system works at all

                Can LLMs recite entire pieces of work? Who knows?

                No. No they can’t.

                Does it count as copyright infringement if it does so? Possibly.

                That’d be one for the lawyers were it to ever come up, but it won’t

                • magic_lobster_party@kbin.social

                  Here’s a basic description of how (a part of) LLMs work: https://huggingface.co/learn/nlp-course/chapter1/6

                  LLMs generate text word by word (or token by token, if you’re pedantic). This is why ChatGPT slowly produces its response word by word instead of giving you the entire response at once.

                  The same applies during the training phase. The model gets a piece of text and the word it’s supposed to predict, and it’s then tuned to improve its chances of predicting the right word based on the text it’s given.
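
                  As a rough sketch of that training step (illustrative PyTorch code with a tiny stand-in model, not OpenAI’s actual pipeline):

                  import torch
                  import torch.nn.functional as F

                  vocab_size = 100
                  # Stand-in for a real transformer; the objective is the same.
                  model = torch.nn.Sequential(
                      torch.nn.Embedding(vocab_size, 32),
                      torch.nn.Linear(32, vocab_size),
                  )
                  optimizer = torch.optim.Adam(model.parameters())

                  # Pretend these token ids came straight from a text corpus.
                  tokens = torch.randint(0, vocab_size, (1, 16))
                  inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token

                  optimizer.zero_grad()
                  logits = model(inputs)  # (1, 15, vocab_size): a score for every candidate next token
                  loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
                  loss.backward()
                  optimizer.step()  # nudge the weights toward the observed next tokens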

                  Ideally it’s supposed to make predictions by learning the patterns of the language. This is not always the case. Sometimes it can just memorize the answer instead of learning why (just like how a child can memorize the multiplication table without understanding multiplication). This is formally known as overfitting, which is a machine learning 101 concept.

                  There are ways to mitigate overfitting, but there’s no silver bullet solution. Sometimes the model can’t help but memorize the training data.
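
                  Here’s a tiny demonstration of what that memorization looks like, using a word-pair table instead of a neural network (vastly simplified, but the same failure mode): trained on a single sentence, generation can only replay that sentence.

                  from collections import defaultdict
                  import random

                  corpus = "the quick brown fox jumps over the lazy dog".split()

                  # "Training": record every observed continuation of every word.
                  next_words = defaultdict(list)
                  for prev, nxt in zip(corpus, corpus[1:]):
                      next_words[prev].append(nxt)

                  # "Generation": with a single training sentence, most words have
                  # exactly one continuation, so the output is the training data, verbatim.
                  word, output = "quick", ["quick"]
                  for _ in range(5):
                      candidates = next_words.get(word)
                      if not candidates:
                          break
                      word = random.choice(candidates)
                      output.append(word)

                  print(" ".join(output))  # -> "quick brown fox jumps over the"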

                  When GitHub Copilot was new people quickly figured out it could generate the fast inverse square root implementation from Quake. Word for word. Including the “what the fuck” comment. It had memorized it completely.

                  I’m not sure how much OpenAI has done to mitigate this issue. But it’s a thing that can happen. It’s not imaginary.

        • 00@kbin.social

          Exactly this. I hate copyright as much as the next person and find it funny when corporate meddling leads to them fighting each other, but both sides of this lead to shitty precedent. While copyright enforcement already is a shitty precedent, it’s something we can fight. AI companies laundering massive amounts of data without having to hold up copyright could possibly lead to them also not having to abide by privacy laws in the future, with similar arguments. Correct me if I’m wrong.

            • 00@kbin.social

              Not entirely. I do think that if copyright holders have an argument against AI data scraping, privacy watchdogs will have one as well. But if copyright holders don’t have one, the position of privacy watchdogs will be weaker as well. Mind you, I’m not arguing about legality; I’m fully aware that those are two very different things from a legal perspective. I’m arguing from a perspective of policy narratives.

        • nicetriangle@kbin.social

          yeah I’ll just wait here patiently until they share their source code and all the contents of their black box of data

    • Ferk@kbin.social

      Note that what the EU is requesting is for OpenAI to disclose information. Nobody says (yet?) that they can’t use copyrighted material; what they are asking is for OpenAI to be transparent about its training methods and about what material is being used.

      The problem seems to be that OpenAI doesn’t want to be “Open” anymore.

      In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

      Of course, openly disclosing what materials are being used for training might leave them open to lawsuits, but whether or not it’s legal to use copyrighted material for training is still up in the air, so it’s a risk either way, whether they disclose it or not.

      • 00@kbin.social

        and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

        Can’t have others copying stuff that you have painstakingly copied yourself.

      • nicetriangle@kbin.social

        They seem really intent on having their cake and eating it too.

        a) we’re not violating the letter or spirit of copyright laws

        b) disclosing our data could open us up to a ton of IP lawsuits

        hmm

    • PabloDiscobar@kbin.social

      Your first comment and it is to support OpenAI.

      edit:

      Haaaa, OpenAI, this famous hippie-led, non-profit firm.

      2015–2018: Non-profit beginnings

      2019: Transition from non-profit

      Funded by Musk and Amazon. The friends of humanity.

      Also:

      In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

      Yeah, he closed the source code because he was afraid he would get copied by other people.

      • Chozo@kbin.social

        With replies like this, it’s no wonder he was hesitant to post in the first place.

        There’s no need for the hostility and finger pointing.

      • nicetriangle@kbin.social

        keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

        I feel like the AI model is going to become self-aware before people like Sutskever do.

        • Oswald_Buzzbald@kbin.social

          Someone should just create an actual open-source LLM that can learn and replicate the innovations of all the others, and then use these companies’ arguments about copyright against them.

    • teolan@kbin.social

      But OpenAI’s models are proprietary. I side a bit with Stable Diffusion, since those models are open, but fuck OpenAI. OpenAI is not in favor of reduced copyright. They are in favor of not being negatively affected by copyright while still benefiting from it.

  • chemical_cutthroat@kbin.social

    If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright? Now, if the information that they have used is locked behind paywalls and was obtained illegally, then sure, fire ze missiles, but if it is readily accessible and not being reprinted wholesale by the AI, then it doesn’t seem that different from any of the other millions of ways we use data in everyday life. Just because a machine learned it instead of a human, I don’t believe that it makes it inherently wrong. I am very open to discussion on this, and if anyone has a counter-argument, I’d love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

    Edit: I asked GPT-4 what it thought about this, and here is what it said:

    As an AI developed by OpenAI, I don’t access any copyrighted databases, subscription sites, confidential information, or proprietary databases for my learning or response generation. I was trained on a diverse range of internet text, which includes publicly available data in multiple languages and formats. The training also involves learning from large-scale datasets that contain parts of many kinds of texts (books, websites, scientific articles, etc.). However, I don’t know specifics about which documents were in my training set or have access to any specific documents or sources.

    Your question seems to pertain to how copyright law applies to AI. This is a complex and evolving area of law and policy, largely because AI systems like me do not “read” or “use” material in the same way a human would. A human might read a book and then consciously recall information from it when writing or speaking. In contrast, AI models do not have consciousness or awareness, they analyze statistical patterns in the data they were trained on and use these patterns to generate responses.

    When a human uses copyrighted material to create something new, the law often considers this to be a derivative work. It is typically necessary to obtain permission from the copyright holder to create derivative works. However, AI doesn’t use or recall copyrighted material directly; instead, it learns patterns from a huge amount of text data to predict or generate text based on a given prompt.

    Dissemination of copyrighted material, such as through forum posts or Q&A platforms, without proper licensing or within the bounds of fair use, can be a violation of copyright law. AI doesn’t ‘disseminate’ copyrighted material; it generates responses based on patterns it has learned.

    That said, the legal interpretation of how copyright law applies to machine learning and AI is still evolving, and there can be a lot of complexities and nuances. Therefore, the information here should not be taken as legal advice.

    As of my last update in September 2021, this was the case. However, I recommend checking the most recent legal standpoints and opinions, as this area is still evolving rapidly.

    • Kichae@kbin.social

      LLMs are not book reports. They are not synthesizing information. They’re just pulling words based on probability distributions. Those probability distributions are based entirely on what training data has been fed into them.

      You can see what this really means in action when you call on them to spit out paragraphs on topics they haven’t ingested enough sources for. Their distributions are sparse, and they’ll spit out entire chunks of text that are pulled directly from those sources, without citation.

      If you wrote a book report that just reprinted significant swaths of the book, that would be plagiarism, and yes, it would 100% be called copyright infringement.

      Importantly, though, the copyright infringement for these models does not come at the point where it spits out passages from a copyrighted work. It occurs at the point where the work is copied and used for purposes that fall outside what the work is licensed for. And most people have not licensed their words for billion dollar companies to use them in for-profit products.

      • chemical_cutthroat@kbin.social

        @Kichae

        Those probability distributions are based entirely on what training data has been fed into them.

        The exact same thing a human does when writing a sentence. I’m starting to think that the backlash against AI is simply because it’s showing us what simple machines we humans are as far as thinking and creativity goes.

        You can see what this really means in action when you call on them to spit out paragraphs on topics they haven’t ingested enough sources for. Their distributions are sparse, and they’ll spit out entire chunks of text that are pulled directly from those sources, without citation.

        Do you have an example of this? I’ve used GPT extensively for a while now, and I’ve never had it do that. If it gives me a chunk of data directly from a source, it always lists the source for me. However, I may not be digging deep enough into things it doesn’t understand. If we have a repeatable case of this, I’d love to see it so I can better understand it.

        It occurs at the point where the work is copied and used for purposes that fall outside what the work is licensed for. And most people have not licensed their words for billion dollar companies to use them in for-profit products.

        This is the meat and potatoes of it. When a work is made public, be it a book, movie, song, physical or digital, it is put out in the open where it can be freely consumed by the public, and it then becomes part of our own particular data set. However, the public, up until a year ago, wasn’t capable of doing what an AI does on such a large scale and with such ease of use. The problem isn’t that it’s using copyrighted material to create. Humans do that all the time; we just call it an “homage” or “parody” or “style”. An AI can do it much better, much more accurately, and much more quickly, though. That’s the rub, and I’m fine with updating the laws based on evolving technology, but let’s call a spade a spade. AI isn’t doing anything that humans haven’t been doing for as long as there has been verbal storytelling. The difference is that AI is so much better at it than we are, and we need to decide if we should adjust what we allow our own works to be used for. If we do, though, it must affect the AI in the same way that it does the human, otherwise this debate will never end. If we hamstring the data that an AI can learn from, a human must have the same handicap.

    • cendawanita@kbin.social

      @chemical_cutthroat

      If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright?

      The first conceptual mistake in this analogy is assuming the LLM entity is “writing”. A person or a sentient being who writes is showing signs of intellectual work, which is why the example book report and movie review will not be accused of plagiarism. Plagiarism is, very basically, stealing someone’s output where that output isn’t anyone’s legal property; where it is, we move into copyright infringement territory.

      LLMs produce text based on statistical probability, meaning they are quite literally aping/replicating the aesthetic form of a known genre of textual output, which in these cases is given the legal status of intellectual property. So yes, an LLM-generated textual output in the form of a book report or movie review looks the way it does by copying, with no creative intent, previous works of the genre. It’s the same way YouTube video essays get taken down if they’re just a collection of movie clips strung together into something that sounds like a full dialogue. Of course, with that example clip, if you can argue it’s a creative output where an artist is forming a new piece out of a collage of previous media, the rights owner of those movie clips might lose their claim against the said video. You can’t make that defence with OpenAI.

      @stopthatgirl7

      • chemical_cutthroat@kbin.social

        If you can truly tell me how our form of writing is any different from how an AI writes, I’ll do a backflip. Humans are pattern seekers. We do everything based on one. We can’t handle chaos. Here’s an example.

        Normal sentence:

        Jane walked to the end of the road and turned around.

        Chaotic Sentence:

        The terminal boundary of the linear thoroughfare did Jane ambulate toward, then her orientation underwent a 180-degree about-face, confounding the conventional concept of destinational progression.

        On first pass, I bet you zoned out halfway through that second sentence because there was no pattern or rhythm to it; it was word salad. It still works as a sentence, but it’s chaotic and strange to read.

        The first sentence is a generic sentence. Subject, predicate, noun, verb, etc. It follows the pattern of English writing that we are all familiar with because it’s how we were taught. An AI will do the same thing. It will generate a pattern of speech the same way that it was taught. Now, if you were taught in a public school and didn’t read a book or watch a movie for your entire life, I would let you have your argument that

        @cendawanita

        an LLM-generated textual output in the form of a book report or movie review looks the way it does by copying, with no creative intent, previous works of the genre.

        However, you can’t say that a human does any different. We are the sum of our experience and our teachings. If you get truly granular with it, you can trace the genesis of every sentence a human writes or even every thought a human thinks back to a point of inception, where the human learned how to write and think in the first place, and it will always be based on some sensory experience that the human has had, whether through reading, listening to music, watching a movie, or any other way we consume the data around us. The second sentence is an example of this. I thought to myself, “how would a pedantic asshat write this sentence?” and I wrote it. It didn’t come from some grand creative well of sentience that every human can draw from when they need a sentence; it came from experience and learning, just like the first, and from the same well of knowledge that an AI draws from when it writes its sentences.

        • cendawanita@kbin.social

          @chemical_cutthroat
          Again, all of your analogical effort presumes that an LLM is synthesizing. When I say, specifically, they generate outputs based on statistical probability it’s not at all the same as a sentient process of reiterative learning based on their available knowledge.

          If you can’t get that distinction, then responding to you further will be too much to expect from me (personally; I wish the best to others who’d like to try). If you’re really sincere, though, it’s honestly been best elaborated by Timnit Gebru and Emily Bender in their writings about the “stochastic parrot”. Please do have a read. https://dl.acm.org/doi/10.1145/3442188.3445922
          @stopthatgirl7

    • PabloDiscobar@kbin.social

      I am very open to discussion on this, and if anyone has a counter-argument, I’d love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

      That’s very cool and all but while we have this debate there are artists getting ripped off.

      • kmkz_ninja@lemmy.world

        You aren’t having a debate. You’re blindly claiming that artists are getting ripped off, because maybe they are a bit, or maybe they’re latching onto any reason that lets them still have professional careers in 30 years.

        • PabloDiscobar@kbin.social

          I’m not making blind claims. And I won’t point you to the sources either; I’m not doing anyone’s homework for them today. Dig into the subject and post us some information if you are really into the debate thing.

      • chemical_cutthroat@kbin.social

        If you can provide some sources with real data from people that have proven a loss of income due to getting “ripped off” by AI, I’d love to look over it. Until then, it’s a witch hunt.

        • PabloDiscobar@kbin.social

          I can provide you with reddit posts from artists who are replaced by AI.

          Would you like it served with a cup of tea and some sandwiches?

          • chemical_cutthroat@kbin.social

            If you have some that have actual proof in them, sure. That’s exactly what I’m looking for. However, if it amounts to nothing more than hearsay, then no, I don’t think I want them.

    • mack123@kbin.social

      It is an area that will require us to think carefully about the ethics of the situation. Humans create works for humans. Has this really changed? Now consumption happens through a machine-learning interface. I agree with your reasoning, but we have an elephant in the room that this line of reasoning does not address.

      Things get very murky for me when we ask the AI system to generate content in someone else’s style, or when the AI distorts someone’s views in its responses. Can I get an AI to eventually write another book in Terry Pratchett’s style? Would his estate be entitled to some form of compensation? And that is an easier case compared to living authors or writers. We already see the way image-generating AI programs copy artists. Now we are getting the same for language and more.

      It will certainly be an interesting space to follow in the next few years as we develop new ethics around this.

      • chemical_cutthroat@kbin.social

        @mack123

        Can I get an AI to eventually write another book in Terry Pratchett’s style? Would his estate be entitled to some form of compensation?

        No, that’s fair use under parody. Weird Al isn’t compensating other artists for parody, so the creators of OpenAI shouldn’t either, just because their bot can make something that sounds like Pratchett or anyone else. I wrote a short story a while back that my friend said sounded like if Douglas Adams wrote dystopian fiction. Do I owe the Adams’ estate if I were to publish it? The same goes for photography and art. If I take a picture of a pastel wall that happens to have an awkward person standing in front of it, do I owe Wes Anderson compensation? This is how we have to look at it. What’s good for the goose must be good for the gander. I can’t justify punishing AI research and learning for doing the same things that humans already do.

        • mack123@kbin.social

          That is the current state of affairs, yes. But it’s something that I think we will need to resolve as AI becomes better, when it becomes impossible to say which work was created by the original human and which by the AI.

          I do think it would be ethically wrong for a company to profit by mimicking someone’s style exactly. What incentive remains for the original style or work to exist if you cannot earn a living from it?

          • chemical_cutthroat@kbin.social

            I do think it would be ethically wrong for a company to profit by mimicking someone’s style exactly. What incentive remains for the original style or work to exist if you cannot earn a living from it?

            That’s where we differ in opinion. I create art because it’s what I enjoy doing. It makes me happy. Would I like to profit from it? Sure, and I do, to some extent. However, you are conflating two ideas. Art created for profit is no longer art, it is a product. The definition fundamentally changes.

            I’m a writer, a photographer, and a cook. The first two I do for pleasure, the last for profit. If I write something that someone deems worthy to train an AI on, first, great, maybe I’m not as bad as I think I am. Second, though, it doesn’t matter, because when I wrote what I wrote, it was a reflection of something that I personally felt, and I used my own data set of experience to create that art.

            The same thing goes for photography, though slightly differently. When I’m walking around with my camera and taking shots, I do it because something has made me feel an emotion that I can capture in a camera lens. I have also done some model shoots, where I am compensated for my time and effort. In those shoots, I search for art in composition and theme because that’s what I’m paid for, but once I finish the shoot, and I give the photographs to the model, what they do with them is their own business. If they use them to train AI, then so be it. The AI might be able to make some 99% similar to what I’ve done, but it won’t have what I had in the moment. It won’t have the emotional connection to the art that I had.

            As far as the third, cooking, goes, I think it’s the most important. When I follow a recipe, I’m doing exactly what the AI does. I use my data set to create something that is a copy of something someone else has done before. Maybe I tweak it here and there, but so does AI. I do this for profit. I feed people, and they pay me. Do I owe the man who created the Caesar Salad every time I sell one? It’s his recipe. I make the dressing from scratch just like he did. I know that’s not a perfect example, but I’m sure you can see the point I’m making.

            So, when it comes to Art v. Product, there are two different sides to it, and both have different answers. If you are worried about AI copying “art”, then don’t. It can’t. Art is something that can only be created by the artist in the moment, and may be replicated, but can never truly be copied, in the same way that taking a photo of the Mona Lisa doesn’t make me da Vinci.

            However, if it’s a product, then we are talking about capitalism, and here we can see that there is no argument against AI, because it is only doing what we have been doing forever. McDonalds may as well be the AI of fast food burgers, Pizza Hut the AI of pizza, Taco Bell the AI of TexMex. Capitalism is about finding faster, cheaper ways of producing products that people want. Supply and demand.

            If someone is creating a product, and their product can be manufactured faster and cheaper by the competition, then the onus is on the original creator to find a way to stand out from the competition, or lose their market share to the competitor. We can’t hamper AI just because some busker is having a hard time selling his spray paint on bowl planet scape art. If you mass produce for the sake of profit, you can’t complain when someone out-mass-produces you, AI or human. That’s the way of the world.

  • nivenkos@lemmy.world

    You can read the actual proposal here - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

    The stuff in the article isn’t a problem IMO, but the main issue is the huge amount of bureaucracy for smaller companies and initiatives.

    Almost everything counts as “AI”:

    (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
    (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
    (c) Statistical approaches, Bayesian estimation, search and optimization methods.

  • bedrooms@kbin.social

    Read the whole thing. The reason OpenAI is opposing the law is not necessarily copyright infringement.

    One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”)

    This is the more likely problem.

    • jcrm@kbin.social

      Given their name is “OpenAI” and they were founded on the idea of being transparent about those exact things, I’m less impressed that that’s what they’re upset about. They keep saying they’re “protecting” us by not releasing these details, which just isn’t true. They’re protecting their profits and valuation.

      • bedrooms@kbin.social

        Maybe, but I believe that AI models of this level should not be shared with dictatorships like the CCP, at least for now.

  • bedrooms@kbin.social

    The EU’s stance is concerning. Their coming laws would benefit unlawful AI devs backed by dictatorships. (Edit: Those devs will do whatever they want to research and build more powerful AIs, while devs in the EU struggle under heavy restrictions.) Currently, big tech companies are still learning how to build strong AIs, and giving dictatorships a huge advantage like this is dangerous.

  • stravanasu@lemmy.ca

    I think it’s a basic requirement that the data upon which a large language model is trained be publicly disclosed. It’s the same as the requirement to list the ingredients on packaged food, or knowing where your lawyer got their degree. You want to know where what you’re using is coming from.

  • StarServal@kbin.social

    This is one of those cases where copyright law works opposite to how it was intended, in that it should drive innovation. Here we have an example of innovation, but copyright holders want to (justifiably) shut it down.

    • cmhe@lemmy.world

      I think this is actually a case where copyright works correctly. It is protecting individuals from having their work, which they provided for free in many cases, ‘stolen’ by a more powerful party to make money from it without paying the creators.

  • LegendOfZelda@kbin.social

    I disagree with the “they’re violating copyright by training on our stuff” argument, but I’ve turned against generative AI because now automation is taking art from us, and we’re still slaving away at work, when automation was supposed to free up time for us to pursue art.

  • stopthatgirl7@kbin.social (OP)

      My phone did because it’s in Japanese and it defaulted to that. I thought I had edited it to fix it, but I guess it didn’t actually do it.