According to forum discussions seen by the IWF, offenders start with a basic source image-generation model trained on billions of tagged images, which gives them general-purpose image generation. This is then fine-tuned on CSAM images using low-rank adaptation to produce a small add-on model, which lowers the amount of compute needed for the fine-tuning.
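To make the compute claim concrete, here is a minimal, generic sketch of how low-rank adaptation (LoRA) works on a single layer: the pretrained weight matrix is frozen, and only two small low-rank matrices are trained and added on top. The hidden size, rank, and scaling factor below are illustrative assumptions, not values from the article.

    import numpy as np

    # One frozen projection matrix from a pretrained model (sizes are
    # illustrative; a real diffusion model has many such matrices).
    d, r = 768, 8                       # hidden size, LoRA rank (r << d)
    W = np.random.randn(d, d)           # pretrained weight, kept frozen

    # LoRA trains only two small matrices whose product forms a
    # low-rank update: W_eff = W + (alpha / r) * B @ A.
    A = np.random.randn(r, d) * 0.01    # trainable, r x d
    B = np.zeros((d, r))                # trainable, d x r (zero init)
    alpha = 16.0                        # scaling hyperparameter

    def forward(x):
        # Frozen base path plus the cheap low-rank adapter path.
        return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

    y = forward(np.random.randn(1, d))  # same output shape as the base layer

    full = W.size                       # 589,824 weights in full fine-tuning
    lora = A.size + B.size              # 12,288 trainable weights with LoRA
    print(f"trainable: {lora:,} vs {full:,} ({lora / full:.1%} of full)")

Only A and B receive gradients, so the trainable parameter count (and hence memory and compute for fine-tuning) drops to a few percent of updating the full weights, which is why the technique runs on consumer hardware.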
They’re talking about a Stable Diffusion LoRA trained on actual CSAM. What you described is possible too, but it’s not what the article is pointing out.
No, they aren’t. An AI trained on normal, everyday images of children and on sexual images of adults could easily synthesize these images.
Just like it can synthesize an image of a possum wearing a top hat without being trained on images of possums wearing top hats.