
Comment by andrewljohnson

12 hours ago

The Shakespeare code, tuned a little with different training data, does a good job of generating Magic: The Gathering Commander decks.

Somewhat related: I wrote up an MTG card generator based on nanoGPT a while ago that I think produces pretty good results for a 1M-parameter model.

The really neat thing about this is that WotC prints a few thousand new cards each year, so my training data set just grows over time and the model gets better with no effort on my part.

https://github.com/jlwitthuhn/TCGGPT
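For anyone curious how "different training data" plugs into the Shakespeare example: nanoGPT's char-level setup just encodes a plain-text corpus into integer token arrays before training. The sketch below is a minimal, hypothetical version of that preparation step (TCGGPT's actual pipeline may differ); the function name and split fraction are my own, not from the repo.

```python
import numpy as np

def prepare_char_dataset(text, val_fraction=0.1):
    """Encode a text corpus as uint16 token ids, nanoGPT char-level style.

    Returns (train_ids, val_ids, stoi). In nanoGPT these arrays would be
    written to train.bin / val.bin and the vocab saved alongside them.
    """
    # Build a character-level vocabulary from the corpus itself.
    chars = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(chars)}

    # Encode every character to its integer id; uint16 matches nanoGPT's bins.
    ids = np.array([stoi[ch] for ch in text], dtype=np.uint16)

    # Simple contiguous train/val split.
    n_train = int(len(ids) * (1 - val_fraction))
    return ids[:n_train], ids[n_train:], stoi

# Example: a tiny stand-in corpus of card-like text.
corpus = "Flying. When this creature enters, draw a card.\n" * 50
train_ids, val_ids, stoi = prepare_char_dataset(corpus)
```

Because new cards are just more lines appended to the corpus, retraining on a grown dataset is the same one-step preparation followed by the usual training run.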

  • It would be interesting to come up with a use case that requires a freshly trained model and isn't just something generic models can already do, especially with a 1M-token context window.

Would love more details on this; it's exactly the type of project I'd like to dabble in to get more up to speed.

I like the idea of special-purpose toy models. How did you tune the code, and what dataset did you use?