They also claim that it takes only about 8 seconds to generate a variety of good images.

  • etrotta@kbin.social · 11 months ago

    Might want to clarify: the “model” in this case is not a full model like Stable Diffusion, but rather something applied like a patch, more comparable to a LoRA.

    I don’t think that anyone would misunderstand anyway, but better safe than sorry
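
    For anyone unfamiliar, a rough sketch of the LoRA idea being referenced: the base model’s weights stay frozen, and only a small low-rank update is trained and shipped as the “patch”. Illustrative PyTorch, not NVIDIA’s actual method:

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A LoRA-style "patch": the frozen base layer is left untouched and a
        small trainable low-rank update B @ A is added on top (W' = W + alpha * B A)."""

        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # the full base model stays frozen
            # Only these two small matrices are trained and distributed
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.alpha = alpha

        def forward(self, x):
            return self.base(x) + self.alpha * (x @ self.A.T @ self.B.T)

    # Usage: wrap an existing layer; only A and B (a few KB) need to be saved.
    layer = LoRALinear(nn.Linear(768, 768), rank=4)
    out = layer(torch.randn(1, 768))
    ```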

    • astrsk@kbin.social · 11 months ago

      That’s the real meat of this. The future of models will be these smaller, focused “patches” that have some kind of traceable lineage, at least when it comes to marketing and selling them.

  • hoshikarakitaridia@sh.itjust.works · 11 months ago

    I’m always sceptical about those claims.

    Let them prove it, and then we can decide if it’s good or not, instead of getting our hopes up for empty promises.

    It’s not the first time people have made outlandish claims about AI, though of course you’d expect someone like Nvidia to be cognisant of this kind of marketing.

    • zalack@kbin.social · 11 months ago · edited

      NVIDIA’s marketing overhypes, but their technical papers tend to be very solid. Obviously it always pays to remain skeptical, but they have a good track record here.

  • ubermeisters@lemmy.world · 11 months ago

    Pretty neat. The training process for textual inversion, which I have enjoyed playing around with, takes a while. I hope Automatic1111 gets support for this method of training if it takes off!
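
    For context, the reason textual inversion is slow per concept is that the whole network stays frozen and only one new token embedding is optimized over thousands of steps. A toy illustration with hypothetical stand-ins, not the real Stable Diffusion pipeline:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for the frozen text encoder / U-Net: none of these weights train.
    frozen_model = nn.Linear(768, 768)
    for p in frozen_model.parameters():
        p.requires_grad = False

    # The only trainable parameters: a single new token embedding.
    new_token = nn.Parameter(torch.randn(768) * 0.02)
    target = torch.randn(768)  # stand-in for the denoising objective on your images
    opt = torch.optim.Adam([new_token], lr=1e-2)

    for step in range(2000):  # many small steps per concept -> "takes a while"
        opt.zero_grad()
        loss = nn.functional.mse_loss(frozen_model(new_token), target)
        loss.backward()
        opt.step()

    # The result is just this one small vector, mapped to a placeholder token.
    ```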

      • ubermeisters@lemmy.world · 11 months ago

        Great question, I wondered the same thing. I’ve got a decent knowledge base where Stable Diffusion (text to image, etc.) is concerned and understand the applications of this Nvidia process, but I’m not familiar enough with customization options for LLMs. I haven’t really seen references to hypernetwork/LoRA/Midjourney-type applications in LLMs, or anything that really “plugs into” your existing model to augment results the way Stable Diffusion is geared for customization. In my limited understanding, it seems that customization for LLMs requires customizing the training data and a completely new training process for the actual model, rather than a reference model like SD.