
Comment by diggan

2 days ago

> More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1

Was that document almost exclusively written with LLMs? I looked at it last night (~8 hours ago) and it was riddled with mistakes, the most egregious being that the "Run with Ollama" section had instructions for how to install Ollama, but then the shell commands were actually running llama.cpp, a mistake probably no human would make.

Do you have any plans on disclosing how much of these docs are written by humans vs not?

Regardless, thanks for the continued release of quants and weights :)

Oh hey, sorry, the docs are still under construction! Are you referring to merging GGUFs for Ollama? It should work fine, ie:

```
./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
  merged_file.gguf
```

Ollama only accepts merged GGUFs (not split ones), hence the command.
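If it helps, a minimal sketch of pointing Ollama at the merged file via a Modelfile (this assumes the `merged_file.gguf` produced by the merge command above, in the current directory):

```shell
# Write a Modelfile whose FROM directive points at the merged GGUF,
# so Ollama can register it as a local model
echo 'FROM ./merged_file.gguf' > Modelfile
```

Then `ollama create deepseek-v3.1-local -f Modelfile` followed by `ollama run deepseek-v3.1-local` should load it (the model name here is just an example).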

All docs are written by humans (primarily my brother and me); there might just be the occasional typo (sorry in advance)

I'm also uploading Ollama-compatible versions directly so `ollama run` will work (it'll take a few more hours)

> but then the shell commands were actually running llama.cpp, a mistake probably no human would make.

But in the docs I see things like

    cp llama.cpp/build/bin/llama-* llama.cpp

Wouldn't this explain that? (Didn't look too deep)

  • Yes, it's probably the ordering of the docs that's the issue :) Ie https://docs.unsloth.ai/basics/deepseek-v3.1#run-in-llama.cp... does:

    ```

    apt-get update

    apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y

    git clone https://github.com/ggerganov/llama.cpp

    cmake llama.cpp -B llama.cpp/build \
      -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON

    cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server

    cp llama.cpp/build/bin/llama-* llama.cpp

    ```

    but then Ollama is above it:

    ```

    ./llama.cpp/llama-gguf-split --merge \
      DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
      merged_file.gguf

    ```

    I'll edit that section to say you first have to install llama.cpp