Comment by orasis
6 months ago
Check out improve.ai if you want to see this taken to the next level. We combined Thompson Sampling with XGBoost to build a multi-armed bandit that learns to choose the best arm given context. MIT license.
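To make the idea concrete, here is a minimal, context-free sketch of Thompson Sampling with a Beta-Bernoulli posterior per arm. This is not improve.ai's implementation; their approach (per the comment) additionally models context with XGBoost, whereas this sketch just shows the core sampling loop. All names here are illustrative.

```python
import random

class ThompsonSamplingBandit:
    """Beta-Bernoulli Thompson Sampling over a fixed set of arms.

    Minimal, context-free sketch: a contextual variant (as the comment
    describes) would replace the per-arm Beta posteriors with a model
    such as XGBoost that predicts reward from (context, arm) features.
    """

    def __init__(self, n_arms):
        # Beta(1, 1) uniform prior on each arm's success probability.
        self.successes = [1] * n_arms
        self.failures = [1] * n_arms

    def choose(self):
        # Sample a plausible success rate from each arm's posterior
        # and pick the arm whose draw is highest.
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # A binary reward updates the chosen arm's Beta posterior.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

if __name__ == "__main__":
    random.seed(0)
    true_rates = [0.2, 0.5, 0.8]  # hidden per-arm reward probabilities
    bandit = ThompsonSamplingBandit(len(true_rates))
    for _ in range(2000):
        arm = bandit.choose()
        bandit.update(arm, random.random() < true_rates[arm])
    # Pull counts concentrate on the best arm as the posteriors sharpen.
    pulls = [s + f - 2 for s, f in zip(bandit.successes, bandit.failures)]
    print(pulls)
```

Because the posterior draw is random, the policy keeps exploring weak arms occasionally while exploiting the current best, which is what makes Thompson Sampling a natural fit for bandit problems.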