Comment by adefa
7 hours ago
Absolutely -- it's perfectly understandable. I wanted to be completely upfront about AI usage, and while I was willing to (and did start to) break the PR down into parts, it's totally OK for the maintainers to reject that too.
I wanted to see if Claude Code could port the HF / MLX implementation to llama.cpp and it was successful -- in my mind that's wild!
I also learned a ton about GPU programming and how omni models work, and refined my approach to planning large projects with automated end-to-end integration tests.
The PR was mostly to let people know about the code and weights, since there are quite a few comments requesting support:
Consider a fork while optimizing. If Claude can optimize it, then you could prove someone wrong and get it merged.
Nice work getting multimodal in there already.