Comment by jadelcastillo
2 days ago
I think this is a good, pragmatic way to approach the use of LLM systems: translate to an intermediate language, then process it further symbolically. But you can probably still be prompt-injected if you expose sensitive "tools" to the LLM.
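The idea in the comment could be sketched roughly like this: the LLM emits a statement in a tiny intermediate language, and a symbolic parser with an allowlist decides what actually runs, so injected instructions cannot invoke arbitrary tools. This is a minimal illustration under assumed names (`ALLOWED_OPS`, `parse_command` are invented for the sketch, not any real library's API):

```python
# Sketch: instead of letting an LLM call tools directly, have it emit
# a tiny intermediate language, then validate that output symbolically.
# ALLOWED_OPS and parse_command are illustrative assumptions.

ALLOWED_OPS = {"lookup", "summarize"}  # allowlist of safe operations

def parse_command(llm_output: str):
    """Parse 'op(arg)' strings; reject anything outside the allowlist."""
    llm_output = llm_output.strip()
    if "(" not in llm_output or not llm_output.endswith(")"):
        raise ValueError(f"malformed command: {llm_output!r}")
    op, arg = llm_output.split("(", 1)
    arg = arg[:-1]  # drop trailing ')'
    if op not in ALLOWED_OPS:
        # An injected instruction like 'delete_files(...)' fails here,
        # before any tool is ever invoked.
        raise ValueError(f"operation not allowed: {op!r}")
    return op, arg

print(parse_command("lookup(weather in Paris)"))  # accepted
try:
    parse_command("delete_files(/home)")          # injection attempt, rejected
except ValueError as e:
    print("rejected:", e)
```

The symbolic layer narrows the attack surface, but as the comment notes, the protection only holds if every exposed operation in the allowlist is itself safe to call with attacker-influenced arguments.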