Comment by jasode

5 hours ago

>I only wish this was a website. I don’t like the idea of running an executable for something that can perfectly be done as a website.

The tool depends on hardware detection. From https://github.com/AlexsJones/llmfit?tab=readme-ov-file#how-... :

  How it works
  Hardware detection -- Reads total/available RAM via sysinfo, counts CPU cores, and probes for GPUs:

  NVIDIA -- Multi-GPU support via nvidia-smi. Aggregates VRAM across all detected GPUs. Falls back to VRAM estimation from GPU model name if reporting fails.
  AMD -- Detected via rocm-smi.
  Intel Arc -- Discrete VRAM via sysfs, integrated via lspci.
  Apple Silicon -- Unified memory via system_profiler. VRAM = system RAM.
  Ascend -- Detected via npu-smi.
  Backend detection -- Automatically identifies the acceleration backend (CUDA, Metal, ROCm, SYCL, CPU ARM, CPU x86, Ascend) for speed estimation.
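As a rough sketch of what that kind of local probing looks like (Python standing in for whatever the tool actually does internally; the only real external dependency is `nvidia-smi`, which is skipped if it isn't on PATH):

```python
import os
import shutil
import subprocess

def detect_hardware() -> dict:
    """Best-effort local hardware probe; a browser sandbox permits none of this."""
    info = {"cpu_cores": os.cpu_count(), "gpu_count": 0}
    # Probe NVIDIA GPUs via nvidia-smi, as the README describes; skip if absent.
    if shutil.which("nvidia-smi"):
        try:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=memory.total",
                 "--format=csv,noheader,nounits"],
                text=True,
            )
            # One line of output per GPU; aggregate VRAM (MiB) across all of them.
            vram = [int(line) for line in out.splitlines() if line.strip()]
            info["gpu_count"] = len(vram)
            info["total_vram_mib"] = sum(vram)
        except (subprocess.CalledProcessError, ValueError):
            pass  # reporting failed; the real tool falls back to model-name lookup
    return info
```

None of these calls (reading core counts, spawning `nvidia-smi`) have any equivalent in sandboxed browser JavaScript.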

Therefore, a website running JavaScript is restricted by the browser sandbox, so it can't see the same low-level details, such as total system RAM or the exact count of GPUs.

To implement your idea as a website while working around those JavaScript limitations, a different kind of workflow would be needed. E.g., run the macOS system report to generate a .spx file, or run inxi on Linux to generate a hardware devices report... and then upload those to the website for analysis to derive an "LLM best fit". But those OS report files may still be missing some details that the GitHub tool gathers.
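Server-side, that upload workflow reduces to parsing a text report. A minimal sketch (the field layout below is illustrative, not inxi's exact output format):

```python
import re

def parse_report(text: str) -> dict:
    """Extract RAM and GPU entries from an uploaded hardware report.
    The regexes assume an inxi-like layout; real output varies by version."""
    info = {}
    # e.g. "Memory: RAM: total: 31.25 GiB"
    m = re.search(r"total:\s*([\d.]+)\s*GiB", text)
    if m:
        info["ram_gib"] = float(m.group(1))
    # e.g. "Graphics: Device-1: NVIDIA GA104 [GeForce RTX 3070]"
    info["gpus"] = re.findall(r"Device-\d+:\s*(.+)", text)
    return info

sample = """Memory: RAM: total: 31.25 GiB
Graphics: Device-1: NVIDIA GA104 [GeForce RTX 3070]"""
print(parse_report(sample))
```

Even a careful parser like this only sees what the report chose to include, which is the gap mentioned above.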

Another way is a website with a set of hardware options where the user manually selects their combination. Less convenient, but then again it has the advantage of enabling "what-if" scenarios for hardware the user doesn't actually have and is thinking of buying.
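With manually selected specs, the fit check itself is trivial to run client-side. A back-of-the-envelope sketch (the 1.2x overhead factor for KV cache/activations is my assumption, not anything from the tool):

```python
def fits(model_params_b: float, bits_per_weight: int, vram_gib: float,
         overhead: float = 1.2) -> bool:
    """Rough what-if check: does a quantized model fit in a VRAM budget?
    Treats billions of params * bytes/weight as GiB; overhead factor assumed."""
    weight_gib = model_params_b * bits_per_weight / 8
    return weight_gib * overhead <= vram_gib

# What-if: would a 7B model at 4-bit fit on an 8 GiB card I'm considering?
print(fits(7, 4, 8))  # 7 * 4/8 = 3.5 GiB, * 1.2 = 4.2 GiB -> True
```

This is exactly the kind of calculation a static website could do with zero hardware access, at the cost of the user typing in the numbers.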

(To be clear, I'm not endorsing this particular GitHub tool. Just pointing out that an LLMfit website has technical limitations.)