I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but it's not too slow either.

I'm thinking about removing the VM as I just don't use it. Are there any good uses or integrations with other apps that might convince me to keep it?
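
For context, this is roughly how I poke at it from other machines: a minimal sketch against Ollama's HTTP chat API, assuming the default port 11434 and a pulled model like llama3 (swap in whatever you actually run).

```python
import json
import urllib.request

# Minimal sketch: one-shot question to a local Ollama instance.
# Assumes the default port (11434) and that "llama3" has been pulled;
# the model name and host are placeholders for whatever you actually run.
def ask(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

print(ask("What can a homelab use a local LLM for?"))
```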

  • dwindling7373 · 1 year ago

    There are a huge number of vastly better solutions to get that…

    • umami_wasabi@lemmy.ml · 1 year ago

      IMO LLMs are OK for getting a head start on searching. Say you have a vague idea of something but don't know the exact keywords: an LLM can suggest them, and you can use its output in whatever search engine you like. This saves a lot of time tinkering with the right keywords.
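
      As a sketch of what I mean (assuming a local Ollama on the default port; the model name and prompt are just placeholders):

      ```python
      import json
      import urllib.request

      # Sketch: turn a vague idea into search-engine keywords with a local
      # LLM, then paste the output into whatever search engine you like.
      # Assumes Ollama's default endpoint; "llama3" is a placeholder model.
      def suggest_keywords(vague_idea: str, model: str = "llama3") -> str:
          payload = json.dumps({
              "model": model,
              "prompt": "Give five short search keywords for: " + vague_idea,
              "stream": False,
          }).encode("utf-8")
          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=payload,
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.loads(resp.read())["response"]

      print(suggest_keywords("that Linux thing that compresses RAM instead of swapping to disk"))
      ```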

      • dwindling7373 · 1 year ago

        Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!

        Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to then paste into a search engine? Yes.

        • kitnaht@lemmy.world · 1 year ago

          Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.
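
          Back-of-envelope, with numbers that are assumptions rather than measurements (a 300 W GPU, ~30 tokens/s, a 500-token answer):

          ```python
          # Rough arithmetic for one locally generated answer.
          # Every number below is an illustrative assumption, not a measurement.
          gpu_watts = 300         # assumed draw of a consumer GPU under load
          tokens_per_second = 30  # assumed generation speed
          answer_tokens = 500     # assumed answer length

          seconds = answer_tokens / tokens_per_second  # ~17 s
          watt_hours = gpu_watts * seconds / 3600      # ~1.4 Wh per answer
          print(f"~{watt_hours:.1f} Wh per answer")

          # For scale: a 400 W PC left idling for one hour uses 400 Wh,
          # the energy of a few hundred such answers.
          print(f"~{400 / watt_hours:.0f} answers per idle PC-hour")
          ```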

            • kitnaht@lemmy.world · edited · 1 year ago

              Those were statements. Statements of fact.

              Once the models are already trained, it takes almost no power to use them.

              Yes, TRAINING the models uses an immense amount of power, but running an already-trained model locally consumes almost nothing. I can run the Llama 7B model on a 15 W Raspberry Pi, for example, while just leaving my PC on uses 400 W. This is all local: nothing entering or leaving the Pi, no communication with an external server, nothing being done on anybody else's server or any AWS instance.
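
              If you want to sanity-check the Pi claim yourself, a sketch (the 15 W is my assumed draw under load; use a wall meter for real numbers, and the model name is a placeholder for whatever 7B build you pulled):

              ```python
              import json
              import time
              import urllib.request

              # Time one generation on a local Ollama instance and estimate energy,
              # assuming a flat ~15 W draw. The draw and model name are assumptions.
              PI_WATTS = 15

              payload = json.dumps({
                  "model": "llama2:7b",  # placeholder for whatever 7B model you pulled
                  "prompt": "Explain what a reverse proxy does in two sentences.",
                  "stream": False,
              }).encode("utf-8")
              req = urllib.request.Request(
                  "http://localhost:11434/api/generate",
                  data=payload,
                  headers={"Content-Type": "application/json"},
              )

              start = time.monotonic()
              with urllib.request.urlopen(req) as resp:
                  answer = json.loads(resp.read())["response"]
              elapsed = time.monotonic() - start

              print(answer)
              print(f"{elapsed:.1f} s at ~{PI_WATTS} W -> {PI_WATTS * elapsed / 3600:.3f} Wh")
              ```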

              • dwindling7373 · 1 year ago

                Notwithstanding that running an LLM is still more expensive than a search query, any reasoning about running an LLM must include the training and, most of all, the incentive you give as a consumer for further training.

                It's like arguing that cooking a steak has negligible environmental impact. The point is the whole industry that exists to provide you the steak in the first place.
