You are viewing a single comment's thread from:

RE: Adding external GPU to my AI box

in #ai • 16 hours ago

This is honestly a bit over my head.

By the way, six years ago I built a huge desktop machine at home.
It was capable of supporting multiple graphics cards, but I only installed one of the cheapest ones just for basic display output.
About three years ago, I even removed that cheap GPU, because as a network server, I could connect to it via SSH — no need for a monitor at all.

Around eight months ago, I deployed a home AI assistant on it (ollama + open-webui + DeepSeek R1 14B).
It just barely runs, and honestly I couldn't figure out what to use it for. After all, ChatGPT is already good and convenient enough for me.
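For anyone curious what talking to that stack looks like, here's a minimal sketch (mine, not from the setup above) that hits Ollama's REST API, assuming its default port 11434 and that the `deepseek-r1:14b` tag has already been pulled:

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes Ollama's default port (11434) and that the model was pulled
# beforehand with `ollama pull deepseek-r1:14b`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",
        "prompt": "What could a home AI assistant actually do for me?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,  # CPU-only inference on a 14B model can be slow
)
resp.raise_for_status()
print(resp.json()["response"])
```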



I was looking around yesterday, came across this, and got some ideas.


> About three years ago, I even removed that cheap GPU, because as a network server, I could connect to it via SSH — no need for a monitor at all.

I run headless whenever possible.

> It just barely runs, and honestly I couldn't figure out what to use it for. After all, ChatGPT is already good and convenient enough for me.

If you're into self-hosting, take a look at KaraKeep. It's basically Pocket, but open source and bring-your-own-AI. I use it 100 times a day.

Check out LM Studio if you haven't; it's so much better than Ollama.
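One nice thing is that LM Studio's local server speaks the OpenAI API, so existing client code points at it with just a URL change. A rough sketch, assuming the default port 1234 and with a placeholder model name (use whatever model you actually have loaded):

```python
# Minimal sketch: call LM Studio's local server, which exposes an
# OpenAI-compatible endpoint (default http://localhost:1234/v1).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        # Placeholder; replace with the identifier of the model loaded in LM Studio.
        "model": "local-model",
        "messages": [
            {"role": "user", "content": "Give me one good use for a headless home server."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```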

I don't use local AI much; I mostly use providers, since I need the speed, so local is mostly for fun. But I'm working on a project where I don't want my data in the cloud, and once I can prove it's profitable, I'll be getting some RTX 6000 Pros and Epyc CPUs to run Kimi K2 locally.
