• daniskarma@lemmy.dbzer0.com
    4 points · 3 hours ago

    Public free chatbots go up and down in quality. Until recently I was finding Microsoft Copilot gave the best answers, but they downgraded it recently (probably too much cost to keep it at that level), and now I'm finding OpenAI's own website gives the best results… for now.

    I just hope a good enough self-hosted model becomes available before all the commercial ones lock up behind a paywall/adwall for good (which is definitely coming sooner rather than later).

    • postmateDumbass@lemmy.world
      2 points · 2 hours ago

      Wait until there is a nuclear power plant competition between AI companies.

      Time to go complete my transformation into Homer Simpson.

    • Hnery@feddit.org
      1 point · 2 hours ago

      Llama 3 is not bad, and you can easily run the smaller ones on an average desktop cornfuser
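
      For anyone wanting to try this, here's one common way to do it, assuming you use Ollama (the model tag and download size are Ollama's conventions, not something from this thread):

      ```shell
      # Pull and run a small Llama 3 model locally with Ollama.
      # llama3:8b is the 8B-parameter model, roughly a 4-5 GB quantized download,
      # which fits on a typical desktop GPU or even runs (slowly) on CPU.
      ollama pull llama3:8b
      ollama run llama3:8b "Summarize what the fediverse is in one sentence."
      ```

      The smaller the model, the less VRAM it needs; quantized 7B-8B models are usually the sweet spot for average desktop hardware.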

      • PolarisFx@lemmy.dbzer0.com
        1 point · 1 hour ago

        Slowly, I filled my home server with whatever CUDA-capable cards I had, and it's fine for SD, but I found llama way too slow. I rented a dual A2000 instance for a couple of weeks and it was bearable, but still not great.