Comment by yanovskishai
4 days ago
Played with Bayesian nets a bit in grad school (Pearl's causality stuff is still mind-blowing), but I've almost never bumped into a PGM in production. A few things kept biting us:

- Inference pain. Exact inference is NP-hard, and the usual hacks (loopy BP, variational methods, MCMC) need a ton of hand-tuning before they run fast enough. (Toy sketch of the cost below.)
- The data never fits the graph. Real-world tables are messy and full of hidden junk, so you either spend weeks arguing over structure or give up the nice causal story.
- DL stole the mind-share. A transformer is a one-liner with a mature tooling stack; hard to argue with that when deadlines loom.
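To make the inference point concrete, here's a minimal sketch of exact inference by enumeration on the classic toy rain/sprinkler net (the variable names and CPT numbers are made up for illustration, not from any real system). Fine at three nodes; the sum over hidden assignments is what blows up exponentially, which is why you end up reaching for loopy BP/variational/MCMC and then hand-tuning them:

```python
from itertools import product

# Toy Bayesian net: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
# CPT values are illustrative only.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {  # P(Sprinkler | Rain)
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_wet = {  # P(WetGrass | Rain, Sprinkler)
    (True, True):   {True: 0.99, False: 0.01},
    (True, False):  {True: 0.80, False: 0.20},
    (False, True):  {True: 0.90, False: 0.10},
    (False, False): {True: 0.00, False: 1.00},
}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass) via the chain rule over the graph."""
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * P_wet[(rain, sprinkler)][wet]

def posterior_rain_given_wet():
    """P(Rain=T | WetGrass=T) by summing out every hidden assignment.
    Enumeration is O(2^n) in the number of variables -- the NP-hardness
    complained about above."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(f"P(Rain | WetGrass) = {posterior_rain_given_wet():.3f}")  # ~0.358
```

Variable elimination and junction trees avoid the naive enumeration, but the worst case is still exponential in treewidth, so dense real-world graphs push you to the approximate methods anyway.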
That said, they're not completely dead: there's reportedly Microsoft's TrueSkill (Xbox ranking), a bunch of Google ops/diagnosis pipelines, and some IBM Watson healthcare diagnosis tools built on Infer.NET.
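TrueSkill is a nice existence proof, since each rating update is approximate message passing (expectation propagation) on a factor graph. If you want to poke at the model, there's a third-party `trueskill` package on PyPI that implements the published math (package and API from memory, so double-check):

```python
# pip install trueskill  -- third-party implementation of the published model
import trueskill

alice = trueskill.Rating()  # default prior: mu=25, sigma=25/3
bob = trueskill.Rating()

# One game's worth of evidence: alice beat bob. Under the hood this runs
# message passing on the match's factor graph.
alice, bob = trueskill.rate_1vs1(alice, bob)

print(alice)  # mu up, sigma shrinks as evidence accumulates
print(bob)    # mu down
print(trueskill.quality_1vs1(alice, bob))  # match quality for pairing
```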
Anyone here actually shipped a PGM that beat a neural baseline? Would really love to hear your war stories.
Me neither. I've heard stories of it happening, but never personally seen one live. It's really a tooling issue. I think the causal story is super important and will only become more so in the future, but it would be basically impossible to implement and maintain long-term with today's software.
Kind of like flow-based programming. I don't think there's any fundamental reason it can't work; it just hasn't yet.
> Pearl’s causality stuff is still mind-blowing
Could you link me to where I could learn more about this?
From recollection, I believe "The Book of Why" (https://a.co/d/aYehsnO) is the general-audience introduction to Pearl's approach to causality.
"Causality: Models, Reasoning and Inference" (https://a.co/d/6b3TKhQ) is the technical book for a researcher audience.