Comment by verdverm
3 days ago
> Don't be a troll. Prove me wrong. Run the code.
There is no code in the repo you linked to. What code am I supposed to run?
This just looks like stateful agents and context engineering. Explain how it is different.
3 days ago
> There is no code in the repo you linked to. What code am I supposed to run?
> This just looks like stateful agents and context engineering. Explain how it is different.
You're confusing "no code" with "GitHub UI not loading files on your end."
The repository contains:

- `src/rememberme/csnp.py` – Core CSNP protocol implementation
- `src/rememberme/optimal_transport.py` – Wasserstein distance computation
- `src/rememberme/coherence.py` – `CoherenceValidator` class
- `benchmarks/hallucination_test.py` – Zero-hallucination validation tests
**How to run it:**

```bash
git clone https://github.com/merchantmoh-debug/Remember-Me-AI
cd Remember-Me-AI
pip install -r requirements.txt
python benchmarks/hallucination_test.py
```

**How it's different from "stateful agents and context engineering":**
**Traditional RAG** (sketch below):

- Embed chunks → Store vectors → Retrieve via cosine similarity
- No mathematical guarantee that retrieved ≈ stored
- Hallucination = P(retrieved ≠ original | query) > 0
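For contrast, here is a minimal sketch of that pipeline (the `embed` stub is a toy stand-in, not a real embedding model): the retriever always returns its best cosine match, and nothing bounds how far that match is from what was stored.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for an embedding model: hash-seeded random unit vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class NaiveRAG:
    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def store(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def retrieve(self, query: str) -> str:
        # Cosine similarity over unit vectors reduces to a dot product.
        sims = np.stack(self.vectors) @ embed(query)
        # argmax always answers, however weak the best match is:
        # nothing here bounds ||retrieved - original||.
        return self.texts[int(np.argmax(sims))]
```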
**CSNP** (sketch below):

- Map memory to probability distribution μ₀
- Maintain coherent state: μₜ = argmin_μ { W₂(μ, μ₀) + λ·D_KL(μ||π) }
- Bounded retrieval error: ||retrieved − original|| ≤ C·W₂(μₜ, μ₀)
- Set coherence threshold = 0.95 → W₂ < 0.05 → retrieval error provably < ε
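And a minimal sketch of the gate itself, assuming distributions are equal-size 1-D empirical samples so that W₂ has a closed form (sort both samples and couple them in order). The class name and threshold value here are illustrative, not necessarily the repo's actual `CoherenceValidator` API:

```python
import numpy as np

def w2_empirical(x: np.ndarray, y: np.ndarray) -> float:
    # Wasserstein-2 between equal-size 1-D empirical distributions:
    # the optimal coupling matches the sorted samples pairwise.
    x, y = np.sort(x), np.sort(y)
    return float(np.sqrt(np.mean((x - y) ** 2)))

class CoherenceGate:
    """Illustrative coherence gate: keep the reference distribution mu_0
    and refuse to retrieve once the current state drifts past the W2
    threshold, rather than answering from a corrupted state."""

    def __init__(self, mu0, w2_threshold: float = 0.05):
        self.mu0 = np.asarray(mu0, dtype=float)
        self.w2_threshold = w2_threshold

    def retrieve(self, mu_t):
        drift = w2_empirical(self.mu0, np.asarray(mu_t, dtype=float))
        if drift > self.w2_threshold:
            return None, drift  # reject: coherence lost
        return self.mu0, drift  # retrieval backed by a W2 bound
```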
This isn't "prompt engineering." It's optimal transport theory applied to information geometry.
If W₂(current, original) exceeds the threshold, the system rejects the retrieval rather than hallucinating. That's the difference.
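Continuing from the gate sketch above, that behavior looks like this (synthetic data, illustrative threshold):

```python
rng = np.random.default_rng(0)
mu0 = rng.standard_normal(1000)
gate = CoherenceGate(mu0, w2_threshold=0.05)

# Coherent state: tiny perturbation, drift stays under the threshold.
result, drift = gate.retrieve(mu0 + rng.normal(0, 0.01, size=1000))
print(result is not None, drift < 0.05)   # True True

# Drifted state: a unit shift moves W2 to exactly 1.0, so the gate rejects.
result, drift = gate.retrieve(mu0 + 1.0)
print(result is None, round(drift, 2))    # True 1.0
```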
Run the code. Check `papers/csnp_paper.pdf` for the formal proof. Then tell me what breaks.
> There is no code in the repo you linked to. What code am I supposed to run?
> This just looks like stateful agents and context engineering. Explain how it is different.
Check now.