Comment by orasis
18 days ago
Check out improve.ai if you want to see this taken to the next level. We combined Thompson Sampling with XGBoost to build a multi-armed bandit that learns to choose the best arm across contexts. MIT license.
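The idea in the comment can be illustrated with a minimal sketch. This is not improve.ai's implementation: it replaces the XGBoost reward model with per-(context, arm) Beta-Bernoulli posteriors, which captures the Thompson Sampling part but not the generalization across contexts that a gradient-boosted model provides. All names here are hypothetical.

```python
import random

class ContextualThompson:
    """Beta-Bernoulli Thompson Sampling, one posterior per (context, arm).

    A toy stand-in for the comment's described approach: improve.ai
    reportedly uses an XGBoost model to share reward estimates across
    contexts, which this sketch deliberately does not attempt.
    """

    def __init__(self, arms):
        self.arms = list(arms)
        self.alpha = {}  # (context, arm) -> 1 + observed successes
        self.beta = {}   # (context, arm) -> 1 + observed failures

    def choose(self, context):
        # Sample a plausible reward rate from each arm's posterior
        # and play the arm whose sample is highest (Thompson Sampling).
        def draw(arm):
            key = (context, arm)
            return random.betavariate(self.alpha.get(key, 1),
                                      self.beta.get(key, 1))
        return max(self.arms, key=draw)

    def update(self, context, arm, reward):
        # Binary reward updates the Beta posterior for this context/arm.
        key = (context, arm)
        if reward:
            self.alpha[key] = self.alpha.get(key, 1) + 1
        else:
            self.beta[key] = self.beta.get(key, 1) + 1

random.seed(0)
bandit = ContextualThompson(["a", "b"])
# Simulated environment: arm "b" pays off on mobile, arm "a" on desktop.
for _ in range(500):
    for ctx, good in [("mobile", "b"), ("desktop", "a")]:
        arm = bandit.choose(ctx)
        bandit.update(ctx, arm, 1 if arm == good else 0)
```

After a few hundred rounds the posteriors for the paying arm dominate in each context, so `choose` almost always returns the right arm while still exploring occasionally.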