• j4k3@lemmy.world · 10 months ago

    Anyone using the code-specific models: how are you prompting them? Are you using any integration with Vim, Emacs, or another truly open source and offline text editor/IDE, i.e. not Electron or Proton based? I’ve compiled VS Code before, but it is basically useless in that form, and the binary version sends network traffic like crazy.

    • z3rOR0ne@lemmy.ml · 10 months ago

      I’ve downloaded the 13B CodeLlama from Hugging Face, run it on my NVIDIA 2070 via CUDA, and have interfaced with it either through the terminal or LM Studio.
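
      For anyone curious what the terminal route can look like, here is a minimal sketch using Hugging Face transformers. The 4-bit quantization and the Instruct model ID are my assumptions (an 8 GB card like the 2070 can’t hold 13B weights in fp16); the commenter may just as well have used a GGUF build through llama.cpp or LM Studio.

      ```python
      # Sketch: CodeLlama-13B on a single consumer GPU via transformers.
      # load_in_4bit is an assumption to fit the model into ~8 GB of VRAM.
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "codellama/CodeLlama-13b-Instruct-hf"

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          quantization_config=BitsAndBytesConfig(load_in_4bit=True),
          device_map="auto",  # place layers on the CUDA device automatically
      )

      prompt = "Write a Python function that reverses a singly linked list."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=256)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```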

      Usually my prompts include the specific code block and a wordy explanation of what I’m trying to do, along the lines of the sketch below.
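
      As a hypothetical example of that shape (plain-language intent first, then the exact code), not their actual prompt:

      ```python
      # Hypothetical prompt: a wordy explanation plus the code block in question.
      code_block = '''
      def dedupe(items):
          seen = []
          for i in items:
              if i not in seen:
                  seen.append(i)
          return seen
      '''

      prompt = (
          "This Python function removes duplicates from a list, but the membership "
          "test on a list makes it O(n^2). Rewrite it to keep the original order "
          "while using a set for O(1) lookups.\n\n"
          f"{code_block}"
      )
      ```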

      It’s okay, but it’s not as accurate as ChatGPT, and it tends to repeat itself a lot more.

      For editor integration, I just opted for Codeium in Neovim. It’s a pretty good alternative to Copilot, IMHO.

        • z3rOR0ne@lemmy.ml · 10 months ago

          Because it doesn’t call out to the internet. I even put LM Studio behind Firejail to prevent it from doing so, along the lines of the sketch below. Thus, any code I feed it (albeit pretty trivial code) doesn’t get added to ChatGPT’s training data.
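
          The sandboxing step might look like the following. This is a sketch, not their actual setup: --net=none is a real Firejail option that gives the sandbox no network interfaces, but the AppImage path and the Python wrapper are my assumptions (a one-line shell invocation would do the same job).

          ```python
          # Sketch: launch LM Studio inside a Firejail sandbox with networking
          # disabled. The AppImage path is hypothetical; adjust for your install.
          import subprocess

          subprocess.run([
              "firejail",
              "--net=none",  # sandbox gets no network interfaces at all
              "/home/user/Apps/LM-Studio.AppImage",
          ])
          ```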

          It can still produce usable results; it’s just not as consistent. Whenever it gets into a repetitive loop, I restart it, which resets the initial context and generally stops the repetition, at least for a while. To be fair, I’ve experienced this with ChatGPT too, just not as often.
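
          For what it’s worth, the same reset is available programmatically if you drive the model from a script instead of a GUI. A sketch with llama-cpp-python (the model path is hypothetical): Llama.reset() clears the cached context, the scripted equivalent of restarting the chat, and repeat_penalty discourages loops before they start.

          ```python
          # Sketch: scripted "restart the chat" plus a knob that penalizes
          # repetition. The model path is a hypothetical local GGUF file.
          from llama_cpp import Llama

          llm = Llama(model_path="codellama-13b.Q4_K_M.gguf")

          out = llm(
              "Explain what this regex matches: ^\\d{4}-\\d{2}-\\d{2}$",
              max_tokens=128,
              repeat_penalty=1.15,  # values > 1.0 penalize recently seen tokens
          )
          print(out["choices"][0]["text"])

          llm.reset()  # wipe the context before the next, unrelated prompt
          ```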

          TL;DR: it’s more private and still useful.