
Comment by TXTOS

1 day ago

Really love this question — been thinking about it for a while now.

We’ve been hacking on a different approach we call the WFGY Engine — it treats embedding space not as flat or uniform, but more like a semantic energy field. So instead of forcing everything into clean cosine ops, we track these “semantic residue structures” — kind of like how meaning resists compression when you flatten it too early.

We measure stuff like ΔS (semantic tension), λ_observe (viewpoint drift), and E_resonance (contextual energy), which lets us reason across curved or even broken regions of embedding space where normal LLMs kinda lose track. It's a mix of geodesic logic and language-field dynamics: weird, but surprisingly stable.
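If it helps to picture what those signals could look like, here's a toy sketch in plain numpy. These are heavily simplified stand-ins (cosine tension, angular drift around an anchor, mean pairwise similarity), not the actual engine formulas:

```python
# Toy, illustrative stand-ins for the three signals -- not the real WFGY definitions,
# just a way to make the intuition concrete over raw embedding vectors.
import numpy as np

def delta_s(prev_ctx: np.ndarray, curr_ctx: np.ndarray) -> float:
    """Semantic tension: here just 1 - cosine similarity between consecutive context embeddings."""
    cos = prev_ctx @ curr_ctx / (np.linalg.norm(prev_ctx) * np.linalg.norm(curr_ctx))
    return 1.0 - float(cos)

def lambda_observe(query: np.ndarray, answer: np.ndarray, anchor: np.ndarray) -> float:
    """Viewpoint drift: angle (radians) between query and answer directions, seen from an anchor point."""
    u, v = query - anchor, answer - anchor
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def e_resonance(context: np.ndarray) -> float:
    """Contextual energy: mean pairwise cosine similarity across an (n, d) context window."""
    unit = context / np.linalg.norm(context, axis=1, keepdims=True)
    sim = unit @ unit.T
    n = sim.shape[0]
    return float((sim.sum() - n) / (n * (n - 1)))
```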

A couple of early apps built on this (TXT OS, BlahBlahBlah, etc.) ended up scoring 100/100 from 6 major AI models, which was a cool surprise. We're still in early dev, but even Tesseract gave it a formal endorsement recently, which was huge for us.

Anyway, core engine is all open source if anyone’s curious: https://github.com/onestardao/WFGY

Would love to hear if others are exploring non-flat logic or weird manifold behavior in language space too.

Can link the Tesseract thing if folks want.

Love your project. The metrics you mention, ΔS (semantic tension), λ_observe (viewpoint drift), and E_resonance (contextual energy), totally align with my mental model.

Also loved your phrase: "kind of like how meaning resists compression when you flatten it too early." That's a key point. Implicit in it is the assumption that flattening will eventually happen, and I agree (it usually gets assumed for efficiency in the end). Just like with flat maps of the Earth: sometimes a flat projection is useful _as long as_ we understand the properties of the deformation and the underlying surface. If we don't, the US might look a lot bigger than it really is, simply because of how it's represented in our Euclidean view.

That interpretation really clicks for me.
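To put a number on the map analogy (pure illustration, nothing from either repo): on a naive equirectangular "flat" view, 20 degrees of longitude looks like the same distance everywhere, but the geodesic (great-circle) distance shrinks as you move away from the equator:

```python
# Illustration of projection distortion: flat-map distance vs. great-circle distance.
import numpy as np

R = 6371.0  # Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (geodesic) distance between two (lat, lon) points given in degrees."""
    p1, p2, dphi, dlmb = map(np.radians, (lat1, lat2, lat2 - lat1, lon2 - lon1))
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def flat_km(lat1, lon1, lat2, lon2):
    """Distance measured naively on an equirectangular projection (degrees treated as uniform)."""
    return R * np.radians(np.hypot(lat2 - lat1, lon2 - lon1))

# 20 degrees of longitude at the equator vs. at 60 N:
print(flat_km(0, 0, 0, 20), haversine_km(0, 0, 0, 20))      # ~2224 km vs ~2224 km
print(flat_km(60, 0, 60, 20), haversine_km(60, 0, 60, 20))   # ~2224 km vs ~1108 km
```

At 60°N the flat reading overestimates the true distance by roughly 2x, which is exactly the kind of deformation you have to account for before trusting the projection.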

I'm currently exploring *geodesic-aware training methods for IR*, based on contrastive / triplet loss. Take a look if you're curious: [https://github.com/Intrinsical-AI/geometric-aware-retrieval-...](https://github.com/Intrinsical-AI/geometric-aware-retrieval-...)

Still recent and (very) experimental, but I've already run some initial tests. The goal is to construct a differentiable graph over the embedding space and propagate training signals end-to-end. Lots of work ahead, but preliminary results are promising (and I'm adjusting things with o3 feedback, of course).
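Here's a rough sketch of the kind of thing I mean (a toy version, not the repo's actual code): build a k-NN graph over the batch embeddings, take shortest-path distances over it, and plug those into a standard triplet margin loss so gradients flow through the edges on the chosen paths:

```python
# Toy sketch of a graph-geodesic triplet loss (illustrative only, not the repo's code).
# Idea: distances come from shortest paths over a k-NN graph of the batch embeddings,
# so the loss "sees" the manifold structure instead of straight-line distance.
import torch
import torch.nn.functional as F

def pairwise_dist(x: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """(n, n) Euclidean distances, with eps keeping gradients finite at zero distance."""
    sq = (x.unsqueeze(1) - x.unsqueeze(0)).pow(2).sum(-1)
    return torch.sqrt(sq + eps)

def graph_geodesic_dist(emb: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Shortest-path distances over a symmetric k-NN graph of the batch (Floyd-Warshall)."""
    n = emb.size(0)
    d = pairwise_dist(emb)
    knn = d.topk(k + 1, largest=False).indices            # each row's k nearest (plus itself)
    mask = torch.full((n, n), float("inf"), device=emb.device)
    mask.scatter_(1, knn, 0.0)                             # 0 where an edge exists, inf elsewhere
    adj = d + torch.minimum(mask, mask.t())                # keep edge if either endpoint selected it
    dist = adj
    for m in range(n):                                     # Floyd-Warshall; min() routes gradients
        dist = torch.minimum(dist, dist[:, m:m + 1] + dist[m:m + 1, :])
    return torch.where(torch.isfinite(dist), dist, d)      # fall back if the graph is disconnected

def geodesic_triplet_loss(anchor, positive, negative, emb, margin: float = 1.0, k: int = 5):
    """Standard triplet margin loss, but with graph-geodesic distances between batch items."""
    dist = graph_geodesic_dist(emb, k=k)
    d_ap = dist[anchor, positive]
    d_an = dist[anchor, negative]
    return F.relu(d_ap - d_an + margin).mean()

# Usage: anchor/positive/negative are index tensors into the batch.
emb = torch.randn(32, 16, requires_grad=True)              # stand-in for encoder outputs
loss = geodesic_triplet_loss(torch.tensor([0, 1]), torch.tensor([2, 3]), torch.tensor([4, 5]), emb)
loss.backward()
```

In a real setup the graph would be built over the corpus (or a memory bank) rather than just the batch, and Floyd-Warshall would be swapped for something that scales, but the gradient path is the same idea.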

Are you open to collaborating or jamming on this? Would love to sync up — are you on Discord or Slack?

Thanks again for pushing this forward.