Comment by mg

2 days ago

Cool. Are you running the model on your own server?

Thanks! Not yet. To launch quickly and validate the idea, I'm currently using a cloud API to handle inference.

However, my plan is to eventually deploy the model on my own server. When I do, I'll document the entire process, from setup to optimization, and share it as a detailed guide on the site for anyone interested!