Comment by yobbo
9 hours ago
LLMs produce human-readable output because they learn from human-readable input. It's a feature: it allows the output to be much less precise than, say, bytecode, which wouldn't help at all.