Comment by icsa
4 months ago
I think that there are a few critical issues that are not being considered:
* LLMs don't understand the syntax of q (or any other programming language).
* LLMs don't understand the semantics of q (or any other programming language).
* Limited training data, as compared to languages like Python or JavaScript.
All of the above contribute to the failure modes when applying LLMs to the generation or "understanding" of source code in any programming language.
> Limited training data, as compared to languages like Python or JavaScript.
I use my own APL dialect to build neural networks. This is probably the correct answer, and in line with my experience as well.
I changed the semantics and definitions of a bunch of functions, and none of the coding LLMs out there can even approach writing semi-decent APL.