Comment by yobbo
11 hours ago
LLMs produce human-readable output because they learn from human-readable input. That's a feature: it allows the output to be much less precise than, say, bytecode, which wouldn't help at all.