Comment by fooker

8 hours ago

> Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees

I like this train of thought. Research suggests that decision trees are equivalent to a larger model with 1-bit weights.
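As a toy illustration of the quoted idea (my own hypothetical example, names and thresholds invented): a "model" written as an ordinary decision tree in plain code, where every prediction path can be read and audited directly, unlike opaque learned weights:

```python
# Hypothetical sketch: a hand-written decision tree as traditional software.
# Each branch is an explicit, inspectable rule rather than a learned weight.
def classify_loan(income: float, debt_ratio: float, late_payments: int) -> str:
    if late_payments > 2:
        return "deny"          # hard rule: repeated late payments
    if debt_ratio > 0.45:
        # high leverage: outcome depends on income
        return "deny" if income < 40_000 else "review"
    return "approve"           # low-risk default path

print(classify_loan(55_000, 0.30, 0))  # prints "approve"
```

The tree here is written by hand, but the same transparency holds if the rules are generated by an AI coding tool and then reviewed like any other code.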

But critically, we only know of a few classes of problems that this approach solves effectively.

So, I guess we are stuck waiting for new science to see what works here. I suspect we will see a lot more work on these topics after we hit some hard LLM scalability limits.