Comment by verdverm
1 month ago
> zero-hallucination guarantees
This is impossible; either your testing is wrong or incomplete, or you are the one hallucinating.
[flagged]
Did the AI tell you this is all legit?
I'm not going to waste time verifying some random on the internet's idea that they solved P=NP or hallucinations in LLMs.
If you had actually solved it, you'd be able to get the results published in a peer-reviewed forum.
Start there instead of "I'm right, prove me wrong"
Have you built the thing to know it actually works, or is this all theory without practice?
Show us you are right with an implementation and evaluation.
lul, you are one of those P=NP people too...
https://news.ycombinator.com/item?id=46457428
[flagged]
[flagged]
> Don't be a troll. Prove me wrong. Run the code.
There is no code in the repo you linked to; what code am I supposed to run?
This just looks like stateful agents and context engineering. Explain how it is different.