Comment by eigenvalue
1 year ago
I suppose in our new world of LLMs, using as few tokens as possible means you can cram more in a small context window, which could be helpful in various ways.
Maybe someone could produce an LLM which takes a line of vector language code and expands it into the dozen(s) of lines of equivalent pseudo-algol?
(I mean, you could skip the whole hallucination thing and write an exact converter, but that'd be a lot of effort for code that'd probably get used about as much as M-expression to S-expression converters do in the lisp world?)
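To illustrate the size difference such a converter would bridge: the classic APL average idiom `(+/÷≢)` fits in a few tokens, while a hand-expanded "pseudo-algol" equivalent (sketched here in Python, purely as an illustration) spells out each primitive as its own loop:

```python
# Expansion of the APL train (+/÷≢), "sum divided by tally",
# with each primitive written out as explicit scalar code.

def average(xs):
    # +/ xs : sum-reduction over the vector
    total = 0
    for x in xs:
        total += x
    # ≢ xs : tally (count the elements)
    count = 0
    for _ in xs:
        count += 1
    # ÷ : divide the sum by the tally
    return total / count

print(average([1, 2, 3, 4]))  # 2.5
```

One terse line versus a dozen, which is roughly the expansion ratio the comment has in mind.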
There are a few articles out there about using J or APL for ANNs and CNNs. Here's one: