• artaxadepressedhorse@lemmyngs.social · 49 points · edited · 8 months ago

    I am sort of curious, because I don’t know: of all the types of sexual abuse that happen to children (being molested by family or acquaintances, being kidnapped by the creep in the van, being trafficked for prostitution, abuse in church, etc.), how many cases deal exclusively with producing imagery?

    Next thing I’m curious about: if the internet becomes flooded with AI-generated CP images, could that potentially reduce the demand for real-life imagery? Wouldn’t the demand side be met? Is the concern normalization and induced demand? Do we know whether there’s any significant correlation between more people looking and more people actually abusing kids?

    Which leads to the next part: I’ve played violent video games and listened to violent, aggressive music for many years now, I enjoy it a lot, and I’ve never committed violence against anybody, nor would I want to. Is prosecuting someone for imagining or mentally roleplaying something cruel actually a form of social abuse in itself?

    Props to anybody who asks hard questions, btw, because guaranteed there will be a lot of bullying on this topic. I’m not saying “I’m right and they’re wrong”, but there’s a lot of nuance here, and people seem pretty quick to hand government and police incredible powers for… I dunno… how much gain, really? You’ll never get back rights that you throw away. Never. They don’t make ’em anymore these days.

      • artaxadepressedhorse@lemmyngs.social · 13 points · 8 months ago

        How often does tracking child abuse imagery lead to preventing actual child abuse? Out of all the children who are abused each year, what percentage of their abusers are tracked via online imagery? Aren’t a lot of these cases IRL/situationally based? That’s what I’m trying to determine here. Is this even a good use of public resources and/or focus?

        As for how you personally feel about the imagery: I believe a lot of things humans do are gross, but I don’t believe we should arbitrarily create laws to restrict things others do that I find appalling… unless there’s a very good reason to. It’s extremely dangerous to go flying too fast down that road; with anything related to “terror/security” or “for the children”, we need to be especially careful. We don’t need another case of “Well, in hindsight, that [war on whatever] was a terrible idea and hurt lots and lots of people.”

        And let’s be absolutely clear here: I 100% believe that people abusing children is fucked up, and the fact that I even need to add this disclaimer here should be a red flag about the dangers of how this issue is structured.

          • CorruptBuddha@lemmy.dbzer0.com · 5 points · 8 months ago

            Okay… So correct me if I’m wrong, but being abused as a child is like… one of the biggest predictors of becoming a pedophile. So like… Should we preemptively go after these people? You know… To protect the kids?

            How about single parents who expose their kids to strangers when dating? That’s a massive vector for kids to be exposed to abuse.

          • artaxadepressedhorse@lemmyngs.social · 5 points · 8 months ago

            I appreciate you posting the link to my question, but that’s an article written from the perspective of law enforcement. They’re an authority, so they’re incentivized to manipulate facts and deceive to gain more authority. Sorry if I don’t trust law enforcement, but they’ve proven themselves untrustworthy at this point.

          • PelicanPersuader@beehaw.org · 5 points · 8 months ago

            It already is outlawed in the US, which bans all such depictions precisely for this reason. The courts anticipated that a time would come when people could create images indistinguishable from reality, so permitting any such content to be produced wasn’t acceptable.

      • Nollij@sopuli.xyz · 8 points · 8 months ago

        Of all the problems and challenges with this idea, this is probably the easiest to solve technologically. If we assume AI-generated material is given the OK to be produced, the generators would need to (and easily could, and arguably already should) embed a watermark (visible or not) or a digital signature. This would prevent actual photos from being passed off as AI. It may be possible to remove these markers, but the reasons to do so are very limited in this scenario.
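        To make the signature idea concrete, here’s a minimal sketch. Assumptions for illustration: a symmetric HMAC key stands in for the asymmetric signing key a real generator would hold, and the “watermark” is a detached tag rather than data embedded in the pixels.

```python
import hashlib
import hmac

# Hypothetical key; a real scheme would use an asymmetric private key
# held by the image generator, with a public key for verifiers.
GENERATOR_KEY = b"hypothetical-generator-signing-key"

def sign_image(image_bytes: bytes) -> bytes:
    """Produce a provenance tag binding the image bytes to the generator."""
    return hmac.new(GENERATOR_KEY, image_bytes, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, tag: bytes) -> bool:
    """True only if the image is unmodified output carrying a valid tag."""
    expected = hmac.new(GENERATOR_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"...raw image bytes..."
tag = sign_image(image)
assert verify_image(image, tag)             # authentic, unmodified output
assert not verify_image(image + b"!", tag)  # any edit invalidates the tag
```

        The point of the sketch is that verification is cheap and unambiguous; the hard part, as the replies note, is that nothing forces a locally run model to produce the tag in the first place.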

          • Nollij@sopuli.xyz · 5 points · 8 months ago

            I was actually specifically avoiding all of those concerns in my reply. They’re valid, and others are discussing them elsewhere in the thread; they’re just not what my reply was about.

            I was exclusively talking about how to identify if an image was generated by AI or was a real photo.

            • abhibeckert@beehaw.org · 6 points · edited · 8 months ago

              I was exclusively talking about how to identify if an image was generated by AI or was a real photo.

              These images are being created with open source / free models. Whatever watermark feature the open source code has will simply be removed by the criminal.

              Watermarking is like a lock on a door. Keeps honest people honest… which is useful, but it’s not going to stop any real criminals.

              • evranch@lemmy.ca · 7 points · 8 months ago

                In this specific scenario, you wouldn’t want to remove the watermark.

                The watermark would be the only thing that defines the content as “harmless” AI-generated content, which for the sake of discussion is being presented as legal. Remove the watermark, and as far as the law knows, you’re in possession of real CSAM and you’re on the way to prison.

                The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented, but I would assume the goal in marketing the real thing could only be to sell it to the worst of the worst, people who get off on the fact that children were abused to create it. And in that case, if AI is indistinguishable from the real thing, how do you sell criminal content if everyone thinks it’s fake?

                Anyways, I agree with other commenters that this entire can of worms should be left tightly shut. We don’t need to encourage pedophilia in any way. “Regular” porn has experienced selection pressure to the point where taboo is now mainstream. We don’t need to create a new market for bored porn viewers looking for something shocking.

                • abhibeckert@beehaw.org · 4 points · edited · 8 months ago

                  The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented,

                  It wouldn’t be expensive; you could do it on a laptop in a few seconds.

                  Unless, of course, we decide only large corporations should be allowed to generate images and completely outlaw all of the open source / free image generation software, and that’s not going to happen.

                  Most images are created with a “diffusion” model, where you take an image and run an algorithm that slightly modifies it, over and over, until you get what you want. You don’t have to start with a blank image (and for the best results, you commonly don’t), and you can run just a single pass, with the output almost indistinguishable from the input.
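                  As a toy numeric illustration of that last point (this is not a real diffusion model; the blending function and the 0.05 “strength” value are made up purely to show why a single low-strength pass barely changes the input):

```python
import random

def toy_img2img(pixels, strength, rng):
    """One toy 'pass': nudge each pixel toward random noise by `strength`."""
    return [(1 - strength) * p + strength * rng.random() for p in pixels]

rng = random.Random(0)
image = [0.5] * 1000  # a flat grey 'image' of normalized pixel values
out = toy_img2img(image, strength=0.05, rng=rng)
max_change = max(abs(a - b) for a, b in zip(image, out))
assert max_change < 0.05  # one low-strength pass: output is nearly the input
```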

                  This is a hard problem to solve, and I think catching abuse after it happens is only going to get more difficult. Better to focus on stopping the abuse from happening in the first place, e.g. by flagging and investigating questionable behaviour by kids in schools. That approach is proven and works well.

                  • evranch@lemmy.ca · 2 points · 8 months ago

                    The image generation can be cheap, but I was imagining this sort of watermark wouldn’t be so much a visible part of the image as an embedded signature that hashes the image.

                    Require enough proof-of-work to generate the signature, and this would at least cut down the volume of images created, and possibly limit them to groups or businesses with clusters that could be monitored, without clamping down on image generation in general.
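                    That proof-of-work signature could look roughly like this (a sketch: SHA-256 and the 4-hex-digit difficulty are arbitrary choices for illustration, and a real deployment would tune the difficulty to the desired cost):

```python
import hashlib

DIFFICULTY = 4  # leading zero hex digits required; illustrative only

def pow_signature(image_bytes: bytes) -> int:
    """Search for a nonce whose hash with the image meets the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(image_bytes + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce  # costly to find...
        nonce += 1

def check_pow(image_bytes: bytes, nonce: int) -> bool:
    """...but nearly free to verify."""
    digest = hashlib.sha256(image_bytes + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = pow_signature(b"example image bytes")
assert check_pow(b"example image bytes", nonce)
```

                    Raising DIFFICULTY makes each image exponentially more expensive to sign, which is the throttling effect described above.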

                    A modified version of what you mentioned could work too, where just these specific images have to be vetted and signed by a central authority using a private key. Image generation software wouldn’t be restricted for general purposes, but no signature on suspicious content and it’s off to jail.

    • ConsciousCode@beehaw.org · 8 points · 8 months ago

      I respect your boldness in asking these questions, but I don’t feel like I can adequately answer them. I wrote a six-paragraph essay, but after using GPT-4 as a sensitivity reader, I don’t think I can post it without some kind of miscommunication or unintentional hurt. Instead, I’ll answer the questions directly by presenting non-authoritative alternate viewpoints.

      1. No idea; maybe someone else knows.
      2. That makes sense to me. I would think there would be strong pressure to present fake content as real to avoid getting caught, but they’re already in deep legal trouble anyway, and I’m sure they get off on it too. It’s hard to know for sure, because the topic is so stigmatized that the data are both biased and sparse. Good luck getting anyone to volunteer that information.
      3. I consider pedophilia (i.e. the attraction) to be amoral, but acting on it to be “evil”, à la noncon, gore, necrophilia, etc. That’s just consistent application of my principles, though; I haven’t humanized them enough to care whether pedophilia itself is illegal. I don’t think violent video games are quite comparable, because humans normally abhor violence, so there’s a degree of separation, whereas CP is inherently attractive to them. More research is needed, if we as a society care enough to do it.
      4. I don’t quite agree; rights are hard-won and easily lost, but we seem to gain them over time. Take trans rights to healthcare, for example: first the treatments weren’t available to anyone, then they were available to everyone (trans or not), now we have reactionary denials of those rights, and soon we’ll get those rights for real, like what happened with gay rights. Also, I don’t see what rights are lost in arguing for the status quo that pedophilia remain criminalized. If MAPs are any indication, I’m not sure we’re ready for that tightrope, and there are at least a dozen marginalized groups I’d rather see get rights first. Unlike with gay people, for instance, being “in the closet” is a net societal good here, because there’s no valid way to present this publicly without harming children or eroding their protections.
    • Zagaroth@beehaw.org · 4 points · 8 months ago

      The issue here is that it enables those who produce actual CP to hide their work more easily in a flood of generated content.

      Anime-style art is one thing; photorealistic art is another. Neither harms an underage person by existing, but photorealism lets actual abusers hide easily. So IMO, photorealistic “art” of this sort needs to be criminalized so that it can’t be used as a mask for actual CP.

    • jivandabeast@lemmy.browntown.dev · 3 points · 8 months ago

      Points about real material hiding in a sea of fake material aside: because these AIs would likely have been trained on images of real children, and potentially on real abuse material, each newly generated image could be considered a re-exploitation of that child.

      Of course, I don’t think that’s true in a legal sense, but it definitely is in an emotional and moral sense. Look at the damage deepfakes have done to the mental health of so many celebrities and other victims, then imagine a literal minor trying to move past one of the most traumatic things that could have happened to them.

      • Krauerking@lemy.lol · 1 point · 8 months ago

        I really don’t think it would have to be trained on that specific data to be able to create it. If it can figure out a blueberry dog, “naked child” seems pretty boring.