GPU home server
How to build a minimalistic GPU server with 24GB VRAM for running inference and training using modern CUDA. My video on YouTube · GitHub repo
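Once the box is up, a quick sanity check is to confirm that CUDA actually sees the card and reports the expected 24GB. A minimal sketch, assuming a CUDA-enabled PyTorch install and the GPU sitting at device index 0:

```python
# Check that CUDA sees the GPU and report its total VRAM.
import torch

assert torch.cuda.is_available(), "no CUDA device visible"
props = torch.cuda.get_device_properties(0)  # assumes the GPU is device 0
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
```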
End result: an Ollama instance with open-source LLM weights running on a local machine (or any machine on the local network, for that matter), accessible via open-webui or the Python API on the same machine.
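Talking to that instance from anywhere on the network takes only a few lines with the official `ollama` Python package. A sketch, not the exact setup from the post: the host address and model name below are placeholders (11434 is Ollama's default port):

```python
# Query a remote Ollama instance over the LAN.
from ollama import Client

client = Client(host="http://192.168.1.50:11434")  # placeholder server address
response = client.chat(
    model="llama3",  # placeholder; use any model pulled on the server
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```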
The end result: 2 pools with 4+2 erasure coding, slow nearline storage on spinning rust and fast solid-state storage for model training. Each pool has its own CephFS (could be done with a single one).
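For reference, setting up one such pool from the Ceph CLI looks roughly like the sketch below. Pool and filesystem names are placeholders, and it assumes default CRUSH rules. CephFS metadata must live on a replicated pool, and an erasure-coded data pool needs overwrites enabled:

```bash
# 4+2 erasure-code profile: 4 data chunks, 2 coding chunks
ceph osd erasure-code-profile set ec42 k=4 m=2

# Erasure-coded data pool plus a replicated metadata pool
ceph osd pool create slow_data erasure ec42
ceph osd pool create slow_meta

# CephFS on an EC pool requires overwrite support
ceph osd pool set slow_data allow_ec_overwrites true

# --force is needed to accept an EC pool as the default data pool
ceph fs new slowfs slow_meta slow_data --force
```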
Reworked the website from React/Vite to Jekyll with the Chirpy theme, mainly following this tutorial: https://technotim.live/posts/jekyll-docs-site/. The old website was a quick fix with React/Vite from...