I find all this AI stuff fascinating.

  • Orion (awooo)@pawb.social
    1 year ago

    AGI really bothers me and bends my ethics; I really can’t think of any way to approach this problem.

    I don’t like authority even in limited contexts, but it’s sometimes needed and we can deal with it. At the end of the day we all have our brains, and that’s something nobody can take away from us or really replace.

    Even with dangerous technology like nukes, it sucks that anyone can have this power, but it’s purely destructive power: you could cause an outcome that’s good for nobody, and that’s about all.

    Biology and medicine are much closer to AI; that’s where you start getting into infohazards in the form of bioweapons, and the possibility of extreme inequality if not everyone is allowed equal access. The troubling part is that we are already there: capitalism can decide whether someone gets to live, or whether they have a healthy life, and that’s so deeply wrong I can’t even fucking begin to describe how I feel about it. If we ever start modifying ourselves without addressing this, it’s going to turn into a dystopia.

    And now we have AGI to worry about, which is all of that and more. We could have dictatorships which don’t require humans to enforce them; those with access to such technology would be like gods to others, and anyone with the ability to shape it could influence all of society. Our brains are what make us special, and if that becomes replaceable we’d be powerless. Sam Altman says “we” a lot, but why should WE give THEM such power? Or anyone, for that matter — I don’t think there’s anyone I would trust with this: not a government, not a collective decision-making system, not a company or individual. But at the same time you can’t give everyone access, because that would be catastrophic. It’s a paradox, and I don’t think there’s any solution that’s good for humanity.

    Another thing here is how exactly you prevent a leak from happening, considering we’re dealing with information. It’s another infohazard, and it only needs to happen once to cause a catastrophe.

    And on top of all that, if we could somehow overcome everything, there are so many ethical and philosophical questions. It seems what we want to make is a perfect slave, a machine to do the thinking for us and better than us (whoever “us” refers to here). There is this sort of assumption that it’s fine to do because it’s artificial — that an intelligence without consciousness can be created — despite the fact that consciousness already arose in at least one biological being, us, and it’s not like we were designed with some magical purpose to be that way. We have absolutely no idea what we’re getting ourselves into here, and it could be the worst thing we will ever have done.

    I don’t know how we could stop, considering computing power is ever increasing and it seems inevitable that someone will create these things. But we sure as hell shouldn’t be accelerating and dressing it up as a good thing, because what we’ve set out to create is a curse.

    Sorry for the long rant, I had a couple thoughts bottled up from the last few days…