
Comment by hansmayer

9 hours ago

Well, clunkiness and inherent imprecision, for one? We don't need to turn every deterministic application into an LLM wrapper that produces correct outputs x% of the time, where x < 100. I can only imagine the negative impact if such tools become widespread; we already see the damage AI slop is creating in hyperscaler infrastructure, software, content, etc. Translating this into the world of physical machines and structures bears even greater risk. Plus, your subscription model is (a) super-intransparent, being based on token usage, and (b) risky, as your pricing clearly depends on the AI-model providers' billing, which, as we see from the recent GH Copilot episode, is set up for significant hikes across the board.

Intransparent in what sense?

  • Not just intransparent, but also unfair. Why? I think you know this very well, but let's say I need to reprompt your LLM wrapper x times because, as is usually the case with these slot machines, it did not give me what I asked for. So now I burn additional tokens that you will happily bill me for, without any power on my part to influence how well the underlying LLM works. It's neither under my control, nor do I have a clue how to keep token usage in check. So your pricing model is neither transparent nor fair to me. And as is taught in every Business 101, for a business to successfully acquire customers, customers must see the price as transparent AND perceive it as fair.