Comment by yobbo
1 day ago
LLMs produce human-readable output because they learn from human-readable input. It's a feature: it allows the output to be much less precise than, say, bytecode, which wouldn't help at all.