Comment by jononor

3 days ago

If you started with a deep neural network, you can't really use pruning to go all the way down to a parameter count that is directly interpretable (say, under 100). You would at least have to try some techniques to get more disentangled representations. But local surrogate models are popular for explainability; see SHAP and LIME. For interpretable time series, I would encourage constructing features and transformations the old-fashioned way, and then learning it all end to end as a differentiable program. Then you can get the best of both worlds.
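To make the "differentiable program" idea concrete, here is a minimal sketch: a hand-designed, interpretable feature (an exponential moving average with a learnable smoothing coefficient) feeding a linear readout, with all three parameters fit end to end. Everything here is illustrative; finite-difference gradients stand in for the autodiff a real framework would provide.

```python
import numpy as np

def ema(x, alpha):
    # Exponential moving average: a classic hand-designed time series
    # feature, made learnable by treating alpha as a trainable parameter.
    out = np.zeros_like(x)
    s = x[0]
    for i, v in enumerate(x):
        s = alpha * v + (1 - alpha) * s
        out[i] = s
    return out

def model(x, params):
    alpha, w, b = params
    f = ema(x, alpha)
    return w * f[-1] + b  # linear readout on the last smoothed value

def loss(params, xs, ys):
    preds = np.array([model(x, params) for x in xs])
    return np.mean((preds - ys) ** 2)

def grad_fd(params, xs, ys, eps=1e-5):
    # Finite-difference gradient; a real differentiable program would
    # use autodiff (e.g. PyTorch or JAX) instead.
    g = np.zeros_like(params)
    for i in range(len(params)):
        p1, p2 = params.copy(), params.copy()
        p1[i] += eps
        p2[i] -= eps
        g[i] = (loss(p1, xs, ys) - loss(p2, xs, ys)) / (2 * eps)
    return g

# Toy task: predict the mean level of each noisy series.
rng = np.random.default_rng(0)
xs = [rng.normal(loc=m, scale=0.1, size=50) for m in rng.uniform(-1, 1, 32)]
ys = np.array([x.mean() for x in xs])

params = np.array([0.5, 1.0, 0.0])  # alpha, w, b: three readable numbers
for _ in range(200):
    params -= 0.1 * grad_fd(params, xs, ys)
    params[0] = np.clip(params[0], 0.01, 1.0)  # keep alpha a valid EMA weight
```

After training, the whole model is still three named parameters you can read off directly, which is the "best of both worlds" point: the structure is hand-designed, but the values are fit with gradients like a neural network.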