Comment by clusterhacks

12 hours ago

I ran ollama first because it was easy, but now I download the source and build llama.cpp on the machine. I don't bother saving a filesystem between runs on the rented machine; I build llama.cpp every time I start up.

I am usually just running gpt-oss-120b or one of the qwen models. Sometimes gemma? These are mostly "medium" sized in terms of memory requirements - I'm usually trying unquantized models that will easily run on a single 80-ish GB GPU because those are cheap.

I tend to spend $10-$20 a week. But I am almost always prototyping or testing an idea for a specific project that doesn't require me to run 8 hrs/day. I don't use the paid APIs for several reasons, but cost-effectiveness is not one of them.

I know you say you don't use the paid APIs, but renting a GPU is something I've been thinking about, and I'd be really interested in knowing how this compares with paying by the token. I think gpt-oss-120b is $0.10/input and $0.60/output per million tokens on Azure. In my head this could go a long way, but I haven't used gpt-oss agentically long enough to really understand usage. Just wondering if you know / would be willing to share your typical usage/token spend on that dedicated hardware?
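
For what it's worth, here's the back-of-envelope I've been doing in my head, in case my arithmetic is off (the $15 is just the midpoint of your weekly range, and the 95/5 input/output token split is a pure guess on my part):

    # rough sketch: the Azure-ish rates above, a $15/week budget, guessed 95/5 in/out split
    awk 'BEGIN {
      budget   = 15.0           # USD per week
      in_rate  = 0.10 / 1e6     # USD per input token
      out_rate = 0.60 / 1e6     # USD per output token
      split    = 0.95           # fraction of tokens that are input
      per_tok  = split * in_rate + (1 - split) * out_rate
      printf "~%.0f million tokens per week for $%.0f\n", budget / per_tok / 1e6, budget
    }'

That lands around 120 million tokens a week under those guesses, which is why I have a feeling the per-token route could go a long way.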

  • For comparison, here's my own usage with various cloud models for development:

      * Claude in December: 91 million tokens in, 750k out
      * Codex in December: 43 million tokens in, 351k out
      * Cerebras in December: 41 million tokens in, 301k out
      * (obviously the December figures above are partial, since the month isn't over yet)
      * Claude in November: 196 million tokens in, 1.8 million out
      * Codex in November: 214 million tokens in, 4 million out
      * Cerebras in November: 131 million tokens in, 1.6 million out
      * Claude in October: 5 million tokens in, 79k out
      * Codex in October: 119 million tokens in, 3.1 million out
    

    As for Cerebras in October, I don't have the data because they no longer show the deprecated Qwen3 Coder model, but it was way more: https://blog.kronis.dev/blog/i-blew-through-24-million-token...

    In general, I'd say that for the stuff I do, my workloads are extremely read-heavy (referencing existing code, patterns, tests, build and check script output, implementation plans, docs, etc.), but it goes about like this:

      * most fixed cloud subscriptions will run out really quickly and will be insufficient (Cerebras being an exception)
      * if paying per token, you *really* want the provider to support proper caching, otherwise you'll go broke
      * if you have local hardware, that is great, but it will *never* compete with the cloud models; your best bet is to run something good enough to cover all of your autocomplete needs, and with tools like KiloCode you can have an advanced cloud model do the planning, a simpler local model do the implementation, and the cloud model validate the output

I don't suppose you have (or would be interested in writing) a blog post about how you set that up? Or maybe a list of links/resources/prompts you used to learn how to get there?

  • No, I don't blog. But I just followed the docs for starting an instance on lambda.ai and the llama.cpp build instructions. Both are pretty good resources. I had already set up an SSH key with Lambda, and the Lambda OS images are Linux with CUDA libraries pre-loaded on startup.

    Here are my lazy notes + a snippet of the history file from the remote instance for a recent setup where I used the web chat interface built into llama.cpp.

    I created a gpu_1x_gh200 instance (96 GB, ARM) at lambda.ai.

    Connected from a terminal on my box at home and set up the SSH tunnel:

    ssh -L 22434:127.0.0.1:11434 ubuntu@<ip address of rented machine - can see it on lambda.ai console or dashboard>
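
    (An aside if anyone copies this: I just leave that interactive session open, but the same tunnel can also be kept in the background with -f/-N so it doesn't tie up a terminal.)

      ssh -f -N -L 22434:127.0.0.1:11434 ubuntu@<ip address of rented machine - can see it on lambda.ai console or dashboard>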

      Started building llama.cpp from source, history:    
         21  git clone   https://github.com/ggml-org/llama.cpp
         22  cd llama.cpp
         23  which cmake
         24  sudo apt list | grep libcurl
         25  sudo apt-get install libcurl4-openssl-dev
         26  cmake -B build -DGGML_CUDA=ON
         27  cmake --build build --config Release 
    

    MISTAKE on 27: single-threaded and slow to build; see the -j 16 on 28 below for a faster build.

         28  cmake --build build --config Release -j 16
         29  ls
         30  ls build
         31  find . -name "llama.server"
         32  find . -name "llama"
         33  ls build/bin/
         34  cd build/bin/
         35  ls
         36  ./llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 --jinja
    

    MISTAKE: didn't specify the port number for llama-server, so it wasn't listening on the 11434 end of the tunnel.

         37  clear;history
         38  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking -c 0 --jinja --port 11434
         39  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking.gguf -c 0 --jinja --port 11434
         40  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF -c 0 --jinja --port 11434
         41  clear;history
    

    I switched to Qwen3 VL because I needed a multimodal model for that day's experiment. Lines 38 and 39 show me not using the right name for the model. I like how llama.cpp can download and run models directly off of Hugging Face.
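
    If it helps anyone reading along, here is the same sequence with both of those mistakes folded out (nothing new, just the corrected commands from the history above, with -j "$(nproc)" instead of hard-coding 16):

      # build llama.cpp with CUDA and serve a GGUF pulled straight from Hugging Face
      sudo apt-get install -y libcurl4-openssl-dev
      git clone https://github.com/ggml-org/llama.cpp
      cd llama.cpp
      cmake -B build -DGGML_CUDA=ON
      cmake --build build --config Release -j "$(nproc)"
      ./build/bin/llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF -c 0 --jinja --port 11434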

    Then I pointed my browser at http://localhost:22434 on my local box and got the normal llama.cpp chat page, where I could upload files and use the chat interface with the model. That also gives you an OpenAI API-compatible endpoint. It was all I needed for what I was doing that day. I spent a grand total of $4 that day doing the setup and running some NLP-oriented prompts for a few hours.
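
    If you would rather script against that endpoint than use the browser, the same tunnel serves the usual chat completions route, so something like this should work from the local box (the "model" value is just a label here; llama-server answers with whatever model it has loaded):

      curl http://localhost:22434/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
              "model": "gpt-oss-120b",
              "messages": [{"role": "user", "content": "Summarize this setup in one sentence."}]
            }'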