This may be more of an “out of the loop” thing, but I’m new to this site and I’m noticing that lemmy.world seems bereft of any substantial NSFW content. I’m surprised! Isn’t the adage that porn drives technological progress?

What’s even more surprising is that the NSFW instance seems brand spanking new.

Is there some code-of-conduct thing which has prevented NSFW community growth? Or is it just a demographic thing where there wasn’t much/any demand until the Reddit exodus?

  • j4k3@lemmy.world · 8 up / 13 down · 1 year ago

    https://en.m.wikipedia.org/wiki/Stable_Diffusion

    This is the most basic guide to what is happening, if you wish to crawl out from under that rock. It includes a built-in SFW text-to-image prompt:

    https://stable-diffusion-art.com/beginners-guide/

    All one has to do is look at the NSFW images marked as AI-generated and note the watermark to find where they were made. Once you make a few yourself, you’ll start seeing the same small problems across many other image categories. The main issues come down to excluding certain prompt keywords to make the output look real, and then stuff like genitalia is hard to get dialed in well unless you are running the software on your own hardware, which takes a powerful video card and a lot of storage space. Once you know this, a lot of images become obviously AI generated. Lighting, eyes, fingers and toes, the telltale look of easy lighting prompts, and other small details are hard to avoid in the output, and they start to stand out once you know what to look for.
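    If you want to see what “running it on your own hardware” actually looks like, here is a rough sketch using the Hugging Face diffusers library. This is my own illustration, not what the guide above uses (it walks you through a web UI instead), and the model ID, prompt, and settings are just examples:

    ```python
    # Minimal local text-to-image sketch with the "diffusers" library.
    # Assumes a CUDA GPU with several GB of VRAM; the checkpoint download
    # alone is a few GB, which is where the storage cost comes from.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example SD 1.5 checkpoint
        torch_dtype=torch.float16,         # half precision to fit consumer VRAM
    ).to("cuda")

    # The negative prompt is where you exclude keywords to push the output
    # toward "real" and away from the telltale artifacts mentioned above.
    image = pipe(
        prompt="studio portrait photo, soft natural lighting, detailed hands",
        negative_prompt="cartoon, deformed hands, extra fingers, watermark, text",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("output.png")
    ```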

    This tech is moving very fast right now. The next iteration of Stable Diffusion is set to release this month, and it will likely make it impossible to tell what is real and what is fake. Right now SD must start with a low-res image and then scale it up. SDXL will be able to start with a high-res image and modify details, which has not been possible until now. With a bit of effort, it will be possible to modify video frame by frame using a simple text prompt to alter details. I doubt people will do more than clips at first, but with some good scripting in Blender, I could see it working for larger projects.
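    To make the frame-by-frame idea concrete, here is a rough sketch of what that loop could look like with the current diffusers img2img pipeline. The paths, prompt, and strength value are made up for illustration, and in practice you would dump frames with something like ffmpeg first:

    ```python
    # Sketch: run every extracted video frame through an img2img pass that is
    # guided by a text prompt. A low "strength" keeps the original frame layout
    # and only alters details.
    import glob
    import os

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    os.makedirs("frames_out", exist_ok=True)
    for path in sorted(glob.glob("frames/*.png")):   # frames dumped beforehand
        frame = Image.open(path).convert("RGB")
        out = pipe(
            prompt="same scene, golden hour, warm film lighting",
            image=frame,
            strength=0.35,      # 0 = unchanged frame, 1 = fully regenerated
            guidance_scale=7.0,
        ).images[0]
        out.save(os.path.join("frames_out", os.path.basename(path)))
    ```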

    Follow the second posted link and read it. This is FOSS. Combine it with an open-source text-to-text LLM running on your own hardware and you have a real game-changing set of technology:

    https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e
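    That guide covers the actual privateGPT setup. As a bare-bones illustration of the “text model on your own hardware” half of the combo, something like this runs with the Hugging Face transformers library; the model here is only a small placeholder, swap in whatever local model you prefer:

    ```python
    # Tiny sketch of running a local text-generation model. "gpt2" is just a
    # placeholder small enough to run on CPU; a real local assistant would use
    # a larger instruction-tuned model, which is what the privateGPT guide sets up.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "Text-to-image diffusion models work by",
        max_new_tokens=60,
        do_sample=True,
    )
    print(result[0]["generated_text"])
    ```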

    • DevonCode@fedia.io · 2 up · 1 year ago

      They are talking about NSFW images in the fediverse. How does that relate to AI-generated NSFW images?

      • j4k3@lemmy.world · 1 up · edited · 1 year ago

        Originally, I mentioned how there is more complexity in the NSFW AI space than just the imagery, and that it is actually an interesting tool and an introduction to text-to-image AI. The mods and users in that space are more positive and supportive than average, too.

        My main interest in text2img/Stable Diffusion is as a product design tool in Blender. I was hesitant to upgrade a workstation to try out SD, but the NSFW stuff was novel enough to give it a try on an internet-hosted instance. Previously, I had tried publicly hosted text-to-image prompts for product design images, but I didn’t think I could make it work for me. Now I have a much better understanding of the tool and how it works. I don’t regularly visit NSFW stuff, but an article on Stable Diffusion mentioned that it is easier to learn by starting with images of humans, because we have an innate ability to spot subtle detail in faces and bodies. And hey, if I’m going to try, why not be a connoisseur.

        Like I said, I’m all for keeping NSFW separate. It is just worth mentioning that attaching a prudish stigma to anything related to human sexuality and writing it off as a useless vice is not very intelligent. There is real, useful utility crossing over from this space. I didn’t expect to discover that myself, given my prior assumptions and cultural stigma. I wanted to mention it here in case others are interested in looking deeper into the NSFW subject/coexistence situation on Lemmy via this post.

    • moog@lemmy.world · 1 up · 1 year ago

      “crawl out from under that rock” ya nah im good ill stay right here

    • Jenga@lemm.ee · 1 up / 1 down · 1 year ago

      No one understands how your ramblings are relevant to the OP, or even to the comment you originally replied to, is the thing

      • j4k3@lemmy.world · 1 up · 1 year ago

        A little less than half do understand, based on votes alone. This is like talking about the wonders of the coming internet as a programmer in a post office in the early 90s. It is a big deal, but most people can’t intuitively connect the dots yet. Text-to-image AI will be extremely disruptive in the coming years. Stable Diffusion is too new for most people to have heard about it passively; this is the bleeding edge of publicly available tech, and it will affect the digital lives of everyone. Follow the links provided or learn the hard way.

        I’m telling you about the internet while you can’t see past a world of postage stamps and newspaper classified ads. Humans are primarily visual, and text-generated imagery that is indistinguishable from the real thing is here now. It could greatly enrich, alter, influence, or degrade lives. The next version of Stable Diffusion will be out this month, and it is the real game changer because it can realistically edit high-resolution images. The output can be perfect, beyond anything you will be able to detect.

        Think about what all of this means for politics specifically. The broader implications of LLMs applied to political strategy are even worse. This should be blatantly obvious, and at the very least you should know not to trust images at surface value no matter how real they look. Not just “Photoshop edited” not-real; everything about the people, places, actions, and content can be faked now.