• Moonrise2473 · 35 points · 1 year ago

    I can’t believe they nonchalantly resorted to piracy on such a massive scale for profit.

    If a normal person did that, they would be locked in a cell for decades.

    • Overzeetop@beehaw.org · 14 points · 1 year ago

      Well, lots of normal people do this not for profit, which is just as damning in the eyes of copyright law.

      But what if they had done this in a legitimate fashion: say they got a library account, ordered the books one by one, read them in, and then returned them. As I understand it (which is not very well, tbh), LLMs don’t keep a copy of the original reference. They use the works to determine paths and branches in what I assume is a quasi-statistical approach (i.e. Stable Diffusion associates characteristics with words, but once the characteristics are stored in the model, the original is effectively discarded and can’t actually be recreated, except in the way a child might reproduce a picture from memory).
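      As a toy illustration of that idea (a sketch only, nothing like a real LLM or diffusion model): a simple statistical model keeps aggregate counts derived from a text, and the exact original cannot be reconstructed from those counts alone.

```python
from collections import Counter

def train_bigram_counts(text: str) -> Counter:
    """Store only aggregate word-pair counts; the original text is discarded."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

source = "the cat sat on the mat and the cat slept"
model = train_bigram_counts(source)

# The model retains statistics, not the text itself:
# ("the", "cat") occurs twice, but the full word order is lost.
print(model[("the", "cat")])  # prints 2
```

      Whether that distinction (statistics vs. stored copies) matters legally is exactly the open question here.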

      If the dataset is not, in fact, stored, would the authors still have a case?

      • bedrooms@kbin.social · 4 points · 1 year ago

        I believe this should be allowed, honestly, because it’s dangerous to disallow it. I mean, there are dictatorships training their AIs, and they won’t care about copyright. That’s gonna be an advantage for them, so the West should feed its models the same information.

        We don’t need to allow Stephen King, but scientific and engineering articles, sure.

          • bedrooms@kbin.social · 1 point · 1 year ago

            Yes, but the problem is that the authors of closed articles did sign a copyright transfer agreement (because they basically had no other option). The government cannot, and should not, override that agreement against the will of the publishers. And this extends to the public.

            For these closed articles, it’s the authors’ burden to release the draft; doing so is almost always permitted by the signed agreement.

    • bedrooms@kbin.social · 5 points · 1 year ago

      OpenAI’s gonna redo the training.

      That said, it’s concerning that dictatorships can feed more data to their AIs because they don’t care about ethics. At some point their AIs might outperform western ones.

      Here comes an unpopular opinion, but for the greater good we might eventually be forced to allow those companies to feed in everything.

      • AAA@feddit.de · 8 points · 1 year ago

        Dictatorships (or any other ideology-driven entities) will have their very own problems training AI. They can’t feed the AI material that goes against their ideology, or it might not act in their best interest.

        • bedrooms@kbin.social · 3 points · 1 year ago

          You know, ChatGPT actually succeeded in controlling its ideological expression to a significant degree. That’s one advantage of this kind of model.

        • HumbertTetere@feddit.de · 1 point · 1 year ago

          There are approaches for deleting topics from a trained model, so I’m not sure this will keep them busy for that long.