Totally not an AI asking this question.

  • Greyscale@lemmy.sdf.org
    10 months ago

    If the humans can’t see the flaws and correct them now, what do you think the AI would learn from the training data?

    • StijnVVL@lemmy.world
      10 months ago

      First of all, a lot of humans do see the flaws but are indeed unable to correct them. That would also show up in the training data. The AI OP is talking about would be far more powerful, actually able to act and change something.

      Don’t confuse Artificial Narrow Intelligence (ANI) with Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). Your statement suggests you understand ANI, which covers all the AI we know today. However powerful these systems seem, they can only reproduce what they have learned from their training data.

      AGI (or human-level AI) is closer to what OP means here: sentient, in the sense that it can make its own decisions, think on a human level, feel on a human level, and act on those feelings. If it decides that humans are unimportant or harmful to what it values, it may choose to remove humanity altogether. Give it the power to govern the world and it will most certainly not act in our favour.

      • Greyscale@lemmy.sdf.org
        10 months ago

        Until computers can be genuinely creative, rather than merely emulating creativity, it’s not gonna happen. And when that does happen, we’re either getting the Star Trek luxury space communism, or a boot smashing our heads into the kerb for eternity. No middle ground.

        • howrar@lemmy.ca
          10 months ago

          The entire premise of the OP is a hypothetical.

          In any case, there’s plenty of work on building agents that are “genuinely creative”. It might happen sooner than you think.