• ArchRecord@lemm.ee
    1 day ago

    Here’s the key distinction:

    This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the wishes of the sites whose data they profit from, then there’s no issue.

    People want AI models to be reliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who explicitly opted their work out of training.

    • A_Random_Idiot@lemmy.world
      1 day ago

      I’m a person.

      I don’t want AI, period.

      We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.

      Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich as it is.

      • ArchRecord@lemm.ee
        1 day ago

        We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.

        That is simply not how “AI” models today are structured; that idea is a fabrication based on science-fiction media.

        An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It has no capability to be overworked, to know whether it’s been used before (outside of its context window, which is just previously stored tokens added to the math problem), to change itself, or to arbitrarily access system resources.
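        To make that concrete, here’s a toy NumPy sketch of the kind of math involved (a single made-up attention-style layer, not any real model’s architecture): the weights are fixed, the function is stateless, and the same input always produces the same output.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        d = 8  # toy embedding size

        # Fixed weights: the model never changes these at inference time.
        W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
        W_out = rng.normal(size=(d, d))

        def forward(tokens: np.ndarray) -> np.ndarray:
            """One toy attention-style layer: pure matrix math over the input.

            Stateless -- nothing persists between calls, so this "model"
            cannot remember prior queries, get tired, or modify itself.
            """
            q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
            scores = q @ k.T / np.sqrt(d)
            weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
            return (weights @ v) @ W_out

        context = rng.normal(size=(5, d))  # 5 "tokens" of context
        out1 = forward(context)
        out2 = forward(context)  # identical input -> identical output, no memory
        assert np.allclose(out1, out2)
        ```

        The only “memory” a real LLM has works the same way: previous tokens get appended to `context` and fed back through the same fixed math.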

          • ArchRecord@lemm.ee
            14 hours ago
            1. Say something blatantly uninformed on an online forum
            2. Get corrected on it
            3. Make reference to how someone is perceived at parties, an entirely different atmosphere from an online forum, and think you made a point

            Good job.

            • A_Random_Idiot@lemmy.world
              14 hours ago
              1. See someone make a comment about an AI going rogue after being forced to produce too much goblin tentacle porn
              2. Get way too serious over the factual capabilities of a goblin-tentacle-porn-generating AI.
              3. Act holier-than-thou about it while being completely oblivious to comedic hyperbole.

              Good job.

              What’s next? Call me a fool for thinking Olestra stains are capable of sentience and that’s not how Olestra works?