Comment by cap11235
6 days ago
https://gist.github.com/cpsquonk/e9a6134e78a2c832161ca973803...
I tried Qwen3-256B (an open-weight model, though you'd probably need a hosted provider for something that large; I used Kagi) and Claude Code.
Curious how these look to you.
It actually wrote out the code for all the hard stuff.
I prefer the Python version, which outsourced the hard stuff to existing libraries. The odds of that working correctly are higher.
Can you tell it to use the "glam" crate for the vectors, instead of writing out things like vector length the long way?
(We now need standardized low-level types more than ever, so the LLMs will use them.)
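To make the request concrete, here's a minimal sketch of the difference. The hand-rolled version is runnable as-is; the `glam` equivalent is shown in comments, since `glam` is an external crate you'd add to `Cargo.toml` (its `Vec3::new` and `length` API is what the comment above refers to):

```rust
// The "long way": hand-rolled 3D vector math of the kind an LLM tends
// to emit when no standardized low-level type is in scope.
fn length_long_way(v: [f32; 3]) -> f32 {
    (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt()
}

fn main() {
    let v = [3.0_f32, 4.0, 0.0];
    assert_eq!(length_long_way(v), 5.0);

    // With the glam crate (e.g. `glam = "0.29"` in Cargo.toml), the same
    // computation becomes a method call on a shared, SIMD-friendly type:
    //
    //     use glam::Vec3;
    //     let v = Vec3::new(3.0, 4.0, 0.0);
    //     assert_eq!(v.length(), 5.0);
}
```

The point of the standardized type isn't just brevity: when every codebase (and every LLM) uses the same `Vec3`, generated code composes with existing libraries instead of reinventing them.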
I reopened Claude and asked: "Can you use the 'glam' crate for the vectors, instead of writing out things like vector length the long way?"
https://gist.github.com/cpsquonk/348009eb7c83a7d499ff5ae70d7...
That's pretty good. Thanks.