Comment by antirez

22 days ago

The refusal on the grounds that AI often writes suboptimal GGML kernels looks very odd to me. It means that whoever usually writes GGML kernels manually could very easily steer the model into writing excellent kernels, and a document for the agents could even be compiled with instructions on how to do great work. If they continue this way, a llama.cpp fork will soon emerge that is developed much faster and potentially even better: it is unavoidable.
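As a purely hypothetical illustration of the kind of agent-instructions document antirez describes (the filename and every rule below are assumptions for the sake of example, not actual llama.cpp policy):

```markdown
# AGENTS.md — hypothetical guidance for AI-written GGML kernels
<!-- Illustrative sketch only; not actual llama.cpp project policy. -->

- Start from the existing reference kernel; do not restructure working
  code without a benchmark showing a measurable win.
- Keep PRs small: one kernel or one backend per PR.
- Include before/after numbers for each target backend.
- Avoid needless abstraction; match the style of neighboring kernels.
- State clearly which parts were AI-generated and which were human-reviewed.
```

The point is that domain experts could encode their kernel-writing know-how once and reuse it to steer the model, rather than reviewing unguided output after the fact.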

The refusal is probably because OP said "100% written by AI" and didn't indicate an interest in actually reviewing or maintaining the code. In fact, a later PR comment suggests that the AI's approach was needlessly complicated.

  • Also because it's a large PR. And because the maintainer has better things to do than spend more time and energy reviewing it than the author spent writing it, only to find that multiple optimisations will be requested which the author may not be able to take on.

    The creator of llama.cpp can hardly be suspected of being reluctant about, or biased against, GenAI.

    • Absolutely -- it's perfectly understandable. I wanted to be completely upfront about AI usage, and while I was willing to break the PR down into parts (and did start to), it's totally OK for the maintainers to reject that too.

      I wanted to see if Claude Code could port the HF / MLX implementation to llama.cpp and it was successful -- in my mind that's wild!

      I also learned a ton about GPU programming and how omni models work, and refined my approach to planning large projects with automated end-to-end integration tests.

      The PR was mostly to let people know about the code and weights, since there are quite a few comments requesting support:

      https://github.com/ggml-org/llama.cpp/issues/16186


[flagged]

  • Complete non-issue in my experience.

    Using it daily since GPT-4, I have not once encountered output that was both complex enough and close enough to a verbatim copy to warrant such concerns.

    Generally, it seems statistically unlikely that a model would reconstruct a copyrighted work; the output should instead be a probabilistic average. Snippets are typically too short and too common to be protected by copyright, so copyright challenges are likely to fail the "substantial similarity" test.

    I understand plaintiffs would need to show that the code is virtually identical, not just similar, and that these parts represent a "substantial" portion of the original work's creative value.