I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.

My idea was to get a 3060, a PCIe riser, and a 500 W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it’s an SFF machine.

What’s making me wary of going through with it is the specs of the 7010 itself: it’s a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it’s going to slow down to a crawl. (Using koboldcpp, if that matters.)
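My rough back-of-envelope for whether offload even kicks in (all numbers are just my guesses for a 12 GB 3060 and a quantized ~13B model, not anything koboldcpp actually reports):

```python
# Back-of-envelope: how many transformer layers fit in VRAM?
# All figures are illustrative assumptions, not measured values.
def layers_that_fit(vram_gb, n_layers, model_size_gb, reserve_gb=1.5):
    """Estimate GPU-resident layers, reserving some VRAM for context/overhead."""
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - reserve_gb, 0)
    return min(n_layers, int(usable_gb // per_layer_gb))

# e.g. a ~7 GB Q4 13B-class model with 40 layers on a 12 GB 3060:
print(layers_that_fit(12, 40, 7))   # → 40, i.e. everything fits
```

If the whole model fits, the old CPU/DDR3 barely matters; it’s only the layers that spill over that get dragged down to system memory bandwidth.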

Do you think it’s even worth going through with?

Edit: I may have found a ThinkCentre that uses DDR4 and that I can buy if I manage to sell the 7010. Though I still don’t know if it will be good enough.

  • brokenlcdOP

    I’ll have to check out Mikupad. For the most part I’ve been using SillyTavern with a generic assistant card, because it looked like it would give me plenty of room to tweak stuff, even if it’s not technically meant for the more traditional assistant use case.

    Thanks for the cheat sheet, it will come in really handy once I manage to set everything up. Most likely I’ll use Podman to make a container for each engine.

    As for the hardware side: the ThinkCentre arrived today, but the card still has to arrive. Unfortunately I can’t really ask more questions until I can set it all up and see what goes wrong / get a sense of what I haven’t understood.

    I’ll keep you guys updated on the whole case-modding stuff. I think it will be pretty fun to see it come along.

    Thanks for everything.

    • brucethemoose@lemmy.world

      > Most likely i’ll use podman to make a container for each engine.

      IDK about Windows, but on Linux I find it easier to just make a Python venv for each engine. There’s less CPU/RAM(/GPU?) overhead that way anyway, and it’s best to pull bleeding-edge git versions of the engines. As an added benefit, the Python that ships with some OSes (like CachyOS) is more optimized than what Podman would pull.
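Something like this, hypothetically (paths and repo are just examples; the point is one isolated venv per engine):

```shell
# One venv per engine; all paths here are illustrative.
python3 -m venv ~/engines/llamacpp-venv
. ~/engines/llamacpp-venv/bin/activate

# then pull a bleeding-edge engine straight from git, e.g.:
#   git clone https://github.com/ggerganov/llama.cpp ~/engines/llama.cpp
#   pip install -r ~/engines/llama.cpp/requirements.txt

deactivate
```

Each engine gets its own dependency tree, so a broken update in one never touches the others.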

      Podman is great if security is a concern though, AKA if you don’t ‘trust’ the code of the engine runtimes.

      ST is good, though its sampling presets are kinda funky and I don’t use it personally.