
LLM Inference Rig

Author
Brad_Hawthorne
2025/01/11 14:00:33
I’m currently building up a compute rig for LLM experiments using a Supermicro X9DRG-QF, two E5-2697 v2s, 1TB of memory, and a rack of old Tesla M40 24GB cards. Two EVGA 1600 P2s for power and two EK AIO 280s for CPU cooling. A decade-old pile of parts-machine hardware put to work as CUDA compute. Got some serious deals on the hardware, so it’s been a cheap build so far.
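
A quick sanity check that all the M40s enumerate under CUDA; a minimal sketch in Python, assuming a CUDA-enabled PyTorch build that still ships Maxwell/sm_52 kernels (the M40 is compute capability 5.2):

    # Sanity-check GPU visibility: list every CUDA device PyTorch can see,
    # with its name, VRAM, and compute capability. Assumes a PyTorch build
    # that still supports Maxwell (sm_52) cards like the Tesla M40.
    import torch

    print(f"CUDA available: {torch.cuda.is_available()}")
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        vram_gib = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {vram_gib:.1f} GiB, sm_{props.major}{props.minor}")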
#1

3 Replies

    Brad_Hawthorne
    Re: LLM Inference Rig 2025/01/11 14:05:17

    #2
    Brad_Hawthorne
    Re: LLM Inference Rig 2025/01/11 14:07:03
    Currently set up for Ollama and Open WebUI use. Running Llama 3.3 70B FP16, QwQ 32B FP16, and Llama 3.2 Vision 90B Q8 models.
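
    For anyone wanting to poke at the same stack, a minimal sketch of querying one of the loaded models over Ollama's local REST API on its default port; the model tag below is my guess at the Ollama library name, so check "ollama list" for the exact tag:

        # Query a model through Ollama's local REST API (default port 11434).
        # With "stream": False the server returns one JSON object whose
        # "response" field holds the full completion.
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3.3:70b-instruct-fp16",  # assumed tag; verify with "ollama list"
                "prompt": "Summarize what a Supermicro X9DRG-QF is in one sentence.",
                "stream": False,
            },
            timeout=600,  # 70B FP16 on M40s won't be quick
        )
        resp.raise_for_status()
        print(resp.json()["response"])

    Open WebUI talks to the same endpoint, so if this works the web side should too.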
    post edited by Brad_Hawthorne - 2025/01/11 14:11:27
    #3
    Brad_Hawthorne
    Re: LLM Inference Rig 2025/01/11 14:29:54
    If anyone has an SR-X sitting around, I'd be interested in swapping one in for the mobo.
    #4