Comment by steveBK123
7 hours ago
> More important is whether the productivity improvement is worth a subscription price. Nothing that I have seen until now convinces me about this. On the other hand, I believe that running locally a good open-weights coding assistant, so that you do not have to worry about token price or about exceeding subscription limits in a critical moment, is very worthwhile.
This is one thing I also wonder about. If it's a really good programming helper that makes 20% of your job 5x faster, you can compute the value: a 5x speedup on 20% of the work frees up 16% of your time, which for a $250K SWE is roughly $40K/year. You don't want to hand 100% of that value to the LLM providers or you've just broken even, so maybe it's worth $200/mo.
Such a reckoning only works when the cost of a subscription is truly predictable, and for now the future cost of AI is highly unpredictable whenever you do not host it yourself.
If you pay per token, it is extremely hard to predict how many tokens you will need. If you have an apparently fixed subscription, it is very hard to predict whether you will hit its limits at the most inconvenient moment, after which you may have to wait a day or so for them to reset.
Recently, there have been a lot of stories of AI providers seemingly reducing, again and again, the limits a subscription allows. There is also a lot of uncertainty about future increases in subscription prices, as the biggest providers currently appear to be pricing below cost.
Therefore, while I agree with you that when something provides definite benefits you should be able to assess whether paying for it yields a net gain, I do not believe that externally-hosted AI coding assistants lend themselves to such an assessment, at least not for now.
EDIT:
Shortly after I wrote the above about the unpredictable future cost of externally-hosted AI coding assistants, it was confirmed by an OpenAI announcement that existing Codex users will be migrated to token-based pricing over the coming weeks.
Such events will not affect you if you use an open-weights assistant running on your own hardware, where you do not have to worry about token usage.
You do have to care about token usage when choosing how to scale your hardware, though. If you do a negligible amount of AI inference for occasional simple Q&A (which is what most people do), you can get away with a very lean and cheap setup even when running very large, sophisticated models. Agentic use, with function calls and their responses repeatedly fed back into the context, increases your token throughput by at least an order of magnitude.