Comment by chrismustcode
10 days ago
I’d be stunned if a 270m model could code with any proficiency.
If you have an iPhone, the semi-annoying keyboard autocomplete is a 34m-parameter transformer.
I can’t imagine a model (even with a good team behind it) doing real coding with only 8x the parameters of a next-3/4-word autocomplete.
Someone should try this on that model: https://www.oxen.ai/blog/training-a-rust-1-5b-coder-lm-with-...