I have experience running servers, but I'd like to know whether this is feasible. I just need a private LLM along the lines of GPT-3.5 running.

  • MasterNerd@lemm.ee · 5 months ago

    Look into Ollama. It shouldn't be an issue if you stick to 7B-parameter models.
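
    A minimal sketch of talking to a local Ollama server from Python, assuming `ollama serve` is running on its default port (11434) and a 7B model has already been pulled (the `mistral:7b` tag and the prompt are placeholders, not a recommendation):

    ```python
    # Query a local Ollama server over its REST API.
    # Assumes `ollama serve` is running and the model was pulled
    # beforehand with `ollama pull mistral:7b`.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral:7b",  # any 7B model tag you have pulled
            "prompt": "Explain what a reverse proxy does in one paragraph.",
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=300,  # CPU-only generation can be slow
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```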

  • [moved to hexbear]@lemmy.ml · 5 months ago

    They’re Ryzen processors with “AI” accelerators, so an LLM can definitely run on one of those. Other options are available, like lower-powered ARM chipsets (RK3588-based boards) with accelerators that might have half the performance but are far cheaper to run; that should be enough for a basic LLM.

  • StrawberryPigtails@lemmy.sdf.org · 5 months ago

    It’s doable. Stick to 7B models and it should work for the most part, but don’t expect anything remotely approaching reasonable performance. It’s going to be slow. But it can work (the rough memory math below shows why 7B is the practical ceiling).

    To get a somewhat usable experience, you kinda need an Nvidia graphics card or an AI accelerator.
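
    As a rough sanity check on the 7B advice, here's back-of-the-envelope math for how much memory just the weights need at common quantization levels. The bytes-per-weight figures are approximations, and KV cache plus runtime overhead come on top:

    ```python
    # Back-of-the-envelope memory estimate for LLM weights:
    # params * bytes_per_weight, ignoring KV cache and runtime overhead.
    BYTES_PER_WEIGHT = {
        "fp16": 2.0,  # full half-precision
        "q8_0": 1.0,  # ~8-bit quantization
        "q4_0": 0.5,  # ~4-bit quantization (common default for local use)
    }

    def weight_gib(params_billions: float, quant: str) -> float:
        """Approximate size of the weights in GiB for a given quantization."""
        return params_billions * 1e9 * BYTES_PER_WEIGHT[quant] / 2**30

    for quant in BYTES_PER_WEIGHT:
        print(f"7B @ {quant}: ~{weight_gib(7, quant):.1f} GiB")
    # 7B @ fp16: ~13.0 GiB, @ q8_0: ~6.5 GiB, @ q4_0: ~3.3 GiB
    ```

    At ~4-bit quantization a 7B model fits comfortably in 8 GB of RAM or VRAM, which is why it's the usual recommendation for modest hardware.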