Comment by hodgehog11

20 hours ago

> Programming is more multimodal than math

I have no idea how you come to this conclusion, when the evidence on the ground for those training models suggests it is precisely the opposite.

We are much further along the path of writing code than writing new maths, since the latter often requires some degree of representational fluency about the world we live in to be relevant. For example, proving something about braid groups can require representation by grid diagrams, and we know from ARC-AGI that LLMs don't do great with this.

Programming does not have this issue to the same extent; arguably, it involves the subset of maths that is exclusively problem solving using standard representations. The issues with programming lie primarily in the difficulty of handling large volumes of text reliably.

Nah, LLMs are solving unique problems in maths, whereas they're basically just overfitting to the vast amounts of training data when writing code. Every single piece of code AI writes is essentially just a distillation of the vast amounts of code it's seen in its training — it's not producing anything unique, and its utility quickly decays as soon as you even move towards the edge of the distribution of its training data. Even doing stuff as simple as building native desktop UIs causes it massive issues.