Comment by BloondAndDoom

11 hours ago

This is pretty cool and useful, but I only wish this was a website. I don’t like the idea of running an executable for something that can perfectly well be done as a website. (Other than some minor features; tbh you can even enable CORS and still check the installed models from a web browser.)

Sounds like a fun personal project though.

>I only wish this was a website. I don’t like the idea of running an executable for something that can perfectly be done as a website.

The tool depends on hardware detection. From https://github.com/AlexsJones/llmfit?tab=readme-ov-file#how-... :

  How it works
  Hardware detection -- Reads total/available RAM via sysinfo, counts CPU cores, and probes for GPUs:

  NVIDIA -- Multi-GPU support via nvidia-smi. Aggregates VRAM across all detected GPUs. Falls back to VRAM estimation from GPU model name if reporting fails.
  AMD -- Detected via rocm-smi.
  Intel Arc -- Discrete VRAM via sysfs, integrated via lspci.
  Apple Silicon -- Unified memory via system_profiler. VRAM = system RAM.
  Ascend -- Detected via npu-smi.
  Backend detection -- Automatically identifies the acceleration backend (CUDA, Metal, ROCm, SYCL, CPU ARM, CPU x86, Ascend) for speed estimation.
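The detection flow described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual code (llmfit uses Rust's sysinfo crate); the `nvidia-smi` query flags are real, but everything else here is an assumption:

```python
import os
import subprocess

def total_ram_bytes(meminfo_text: str) -> int:
    """Parse total RAM from /proc/meminfo-style text (value is in kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) * 1024
    raise ValueError("MemTotal not found")

def nvidia_vram_mib() -> list[int]:
    """Total VRAM per NVIDIA GPU in MiB, or [] if nvidia-smi is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no NVIDIA driver; a real tool would probe rocm-smi etc. next
    return [int(v) for v in out.split()]

# CPU core count comes straight from the standard library.
cores = os.cpu_count()
```

Aggregating VRAM across GPUs is then just `sum(nvidia_vram_mib())`; none of this is reachable from a browser sandbox.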

Therefore, a website running JavaScript is restricted by the browser sandbox, so it can't see the same low-level details, such as total system RAM or the exact count of GPUs.

To implement your idea so it's only a website, and also to work around the JavaScript limitations, a different kind of workflow would be needed. E.g. run the macOS system report to generate a .spx file, or run inxi on Linux to generate a hardware report... and then upload those to the website for analysis to derive an "LLM best fit". But those OS report files may still be missing some details that the GitHub tool gathers.

Another way is to have the website present a bunch of hardware options that the user manually selects from. Less convenient, but then again it has the advantage of supporting "what-if" scenarios for hardware the user doesn't actually have and is thinking of buying.
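Such a what-if check can be done with a rough rule of thumb: quantized weights take roughly params × bits/8 bytes, plus overhead for KV cache and activations. The 20% overhead factor below is an illustrative assumption, not a figure from the tool:

```python
def model_fits(params_b: float, quant_bits: int, vram_gib: float,
               overhead: float = 1.2) -> bool:
    """Rough what-if check: do the quantized weights (plus ~20% overhead
    for KV cache/activations) fit in the given VRAM? Illustrative only."""
    weights_gib = params_b * quant_bits / 8  # 1B params at 8-bit ~ 1 GiB
    return weights_gib * overhead <= vram_gib

# What-if: a 70B model at 4-bit on a 24 GiB card? 70*4/8*1.2 = 42 GiB.
print(model_fits(70, 4, 24))  # False
print(model_fits(8, 4, 24))   # True: 8*4/8*1.2 = 4.8 GiB
```

The point of the manual-selection UI is exactly this: the formula needs only numbers the user can type in, no hardware probing required.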

(To be clear, I'm not endorsing this particular GitHub tool. Just pointing out that an llmfit website would have technical limitations.)

I just discovered the other day that Hugging Face allows you to do exactly this.

With the caveat that you enter your hardware manually. But are we really at the point where people are running local models without knowing what they're running them on...?

Huggingface has it built in.

  • Where?

    • In your preferences there is a "Local Apps and Hardware" section. I guess it's a little different because I just open a model's page and it shows me, for the hardware I've configured, which quants fit.