Comment by adefa

6 hours ago

I ran a similar experiment last month and ported Qwen3 Omni to llama.cpp. I was able to get GGUF conversion, quantization, and all input and output modalities working in less than a week. I submitted the work as a PR and, understandably, it was rejected.

https://github.com/ggml-org/llama.cpp/pull/18404

https://huggingface.co/TrevorJS/Qwen3-Omni-30B-A3B-GGUF
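
For anyone curious, the standard llama.cpp convert-then-quantize flow looks roughly like the sketch below. The paths and quant type are illustrative assumptions, and converting Qwen3 Omni specifically needs the architecture support from the PR above, so treat this as a sketch rather than the exact process used.

    # Minimal sketch of the usual llama.cpp convert-then-quantize flow.
    # Paths and quant type are illustrative; converting Qwen3 Omni itself
    # requires the architecture support added in the PR above.
    import subprocess

    model_dir = "Qwen3-Omni-30B-A3B"      # local Hugging Face checkout (hypothetical path)
    f16_gguf = "qwen3-omni-f16.gguf"
    q4_gguf = "qwen3-omni-q4_k_m.gguf"

    # convert_hf_to_gguf.py ships with llama.cpp and writes a GGUF file.
    subprocess.run(
        ["python", "convert_hf_to_gguf.py", model_dir,
         "--outfile", f16_gguf, "--outtype", "f16"],
        check=True,
    )

    # llama-quantize (built alongside llama.cpp) shrinks the f16 file to Q4_K_M.
    subprocess.run(["./llama-quantize", f16_gguf, q4_gguf, "Q4_K_M"], check=True)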

Refusing on the grounds that AI often writes suboptimal GGML kernels looks very odd to me. It implies that whoever usually writes GGML kernels by hand could very easily steer the model into writing excellent kernels, and an instruction document for the agents could even be compiled describing how to do the work well. If they continue this way, a llama.cpp fork will soon emerge that is developed much faster and potentially even better: it is unavoidable.

  • The refusal is probably because OP said "100% written by AI" and didn't indicate an interest in actually reviewing or maintaining the code. In fact, a later PR comment suggests that the AI's approach was needlessly complicated.

    • Also because it's a large PR, and the maintainer has better things to do than spend more time and energy reviewing it than the author spent writing it, only to find that multiple optimisations will be requested which the author may not be able to take on.

      The creator of llama.cpp can hardly be suspected of being reluctant about or biased against GenAI.


  • Some projects refuse for copyright reasons. Back when GPT-4 was new, I dug into the pretraining reports for nearly all models.

    Every one (IIRC) was breaking copyright by sharing third-party works in data sets without permission. Some were trained on patent filings, which makes patent infringement highly likely. Many broke EULAs (contract law) by scraping sites whose terms forbid it. Some outputs were verbatim reproductions of copyrighted works, too, which could get someone sued if they published them.

    So, I warned people to stay away from AI until (a) training on copyrighted/patented works was legal in all those circumstances, (b) the outputs carried no liability, and (c) users of a model could verify this by looking at the pretraining data. There are no GPT-3- or Claude-level models produced that way.

    On a personal level, I follow Jesus Christ, who paid for my sins with His life. We're to be obedient to God's law. One command is to submit to authority (aka don't break man's law). I don't know that I can use AI outputs if the models were illegally trained; it would be like fencing stolen goods. That's another reason I want the pretraining to be legal, either by mandate or by using only permissible works.

    Note: If your country is in the Berne Convention, it might apply to you, too.