Comment by sidkshatriya

5 hours ago

> "After 16 minutes and 41 seconds, it came back" ... "further 47 minutes and 39 seconds" ... "After 13 minutes and 33 seconds" ... "After 9 minutes and 12 seconds" ... "After 31 minutes and 40 seconds" ... plus other computations

Anyone spotting the issue here? What did that really cost?

Whatever the Joules (convert to $ using your preferred benchmark price), it is a fraction of what it might cost to feed and sustain a human Ph.D. over the weeks of working on the same problem. The economics of LLMs are simply unbeatable (sadly) when compared to us humans.
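As a back-of-envelope sketch of that conversion: the snippet below sums the reasoning times quoted above and turns them into an electricity cost. The power draw of the serving hardware and the electricity price are pure assumptions for illustration (the thread gives neither), so treat the dollar figure as a placeholder, not a measurement.

```python
# Durations quoted in the thread, converted to seconds.
durations_s = [
    16 * 60 + 41,  # 16 min 41 s
    47 * 60 + 39,  # 47 min 39 s
    13 * 60 + 33,  # 13 min 33 s
    9 * 60 + 12,   #  9 min 12 s
    31 * 60 + 40,  # 31 min 40 s
]
total_s = sum(durations_s)  # ~2 hours of wall-clock reasoning

# Hypothetical figures, NOT from the source:
ASSUMED_POWER_KW = 10.0  # assumed sustained draw of the serving hardware
PRICE_PER_KWH = 0.15     # assumed benchmark electricity price, USD

energy_kwh = ASSUMED_POWER_KW * total_s / 3600
cost_usd = energy_kwh * PRICE_PER_KWH
print(f"{total_s} s, {energy_kwh:.1f} kWh, ${cost_usd:.2f}")
```

Even with a generous power assumption, the marginal electricity cost lands in the single-digit dollars, which is the contrast with weeks of human labor that the comment is drawing. It deliberately ignores hardware amortization and training costs, which is part of the dispute below.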

Compute in science was already subsidized by public funding or by donations. Most supercomputers are financed this way. And that's a good thing. If you have a good science problem that can be computed, apply for compute time. There is nothing wrong with applying that to LLMs as well, as I wrote in my initial post. The human is still required to identify problems that are worth computing, to create prompts that the LLM can act on, and to verify the results.

But OpenAI providing compute for basically free is still tied to a different incentive: to fuel the hype and to capture the market, while distorting and obfuscating the real costs. That is also why we cannot claim that the economics of LLMs are simply unbeatable. It depends on the problem and on the reason for a prompt.