Comment by wolfcola

1 day ago

And so the solution is… paying Anthropic $200/mo…

All of the projects OP mentioned could be vibe coded for <$10 worth of GLM-4.7

  • The inference is cheap, but the context-window costs of iteratively debugging architecture issues add up fast. Things like state management or migrations usually require feeding the whole stack back in multiple times, which blows past that budget pretty quickly in my experience.
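A quick back-of-envelope sketch of the point above: if every debugging round re-feeds the whole codebase as input context, cost scales with codebase size times the number of rounds. The per-token prices below are placeholder assumptions, not published GLM pricing.

```python
# Rough cost estimate for iterative debugging where each round
# re-sends the full codebase as context. Prices are illustrative
# assumptions (dollars per million tokens), not real GLM rates.
def debug_cost(codebase_tokens, rounds, output_tokens_per_round,
               in_price_per_m=0.60, out_price_per_m=2.20):
    """Total dollar cost for `rounds` iterations of whole-stack debugging."""
    input_cost = codebase_tokens * rounds * in_price_per_m / 1_000_000
    output_cost = output_tokens_per_round * rounds * out_price_per_m / 1_000_000
    return input_cost + output_cost

# A 150k-token stack with ~2k tokens of model output per round:
print(f"40 rounds:  ${debug_cost(150_000, 40, 2_000):.2f}")
print(f"120 rounds: ${debug_cost(150_000, 120, 2_000):.2f}")
```

Under these assumed prices a long debugging session (the 120-round case) lands north of $10 on input tokens alone, which is the "adds up fast" effect: the bill is dominated by repeatedly re-sending context, not by the output.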

"Man, it kind of sucks that the LLM only does one thing and that my compiled applications stop working after I turn off my LLM service"