Really? I feel like the article pointedly skirted my biggest complaint.
> ## but the code is shitty, like that of a junior developer
> Does an intern cost $20/month? Because that’s what Cursor.ai costs.
> Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic. Using agents well is both a skill and an engineering project all its own, of prompts, indices, and (especially) tooling. LLMs only produce shitty code if you let them.
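To make the "tooling" half of that quote concrete, here's a minimal sketch (Python, with a hypothetical `run_agent` callable standing in for whatever agent API is in use) of the kind of feedback loop it alludes to: don't accept the agent's first draft, run its edits through the project's own checks, and feed failures back.

```python
# A sketch, not a real agent integration. `run_agent` is hypothetical;
# the pytest subprocess call is ordinary.
import subprocess

def checked_edit(run_agent, prompt: str, max_rounds: int = 3) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        run_agent(prompt + feedback)  # agent edits the working tree (hypothetical API)
        checks = subprocess.run(
            ["pytest", "-q"],  # or ruff/mypy/cargo test: whatever gates your repo
            capture_output=True,
            text=True,
        )
        if checks.returncode == 0:
            return True  # checks pass; now it's worth a human review
        feedback = "\n\nThe tests failed:\n" + checks.stdout[-2000:]
    return False  # agent couldn't converge; do it yourself
```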
I hate pair-programming with junior devs. I hate it. I want to take the keyboard away from them and do it all myself, but I can't, or they'll never learn.
Why would I want a tool that replicates that experience without the benefit of actually helping anyone?
You are helping the companies train better LLMs... both by paying their expenses and by supplying them with training data. That may or may not be something one considers a worthwhile contribution. Certainly it is less valuable than helping a person grow their intellectual capacity.
> Does an intern cost $20/month? Because that’s what Cursor.ai costs.
This stuck out to me. How long will it continue to be so cheap? I would assume some of the low cost is subsidized by VC money which will dry up eventually. Am I wrong here?
Prices have been dropping like a stone over the past two years, due to a combination of new serving efficiencies and competition from many vendors: https://simonwillison.net/2024/Dec/31/llms-in-2024/#llm-pric...
I'm not seeing any evidence yet of that trend stopping or reversing.
Training frontier models is expensive. Running inference on them is pretty cheap, and for the same level of problem it will continue to get cheaper. The more use a model gets, the less training-cost overhead each inference call has to carry.
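A back-of-envelope way to see that amortization (every number here is hypothetical): spread a fixed training cost over the tokens a model serves in its lifetime, and the per-token overhead shrinks as usage grows.

```python
# Illustrative amortization only; all figures are assumed, not real pricing.
training_cost = 100_000_000        # dollars to train a frontier model (assumed)
inference_cost_per_mtok = 2.00     # marginal serving cost per million tokens (assumed)

for tokens_served in (1e12, 1e13, 1e14):  # lifetime tokens the model serves
    overhead_per_mtok = training_cost / (tokens_served / 1e6)
    total = inference_cost_per_mtok + overhead_per_mtok
    print(f"{tokens_served:.0e} tokens: ${overhead_per_mtok:6.2f} training overhead "
          f"+ ${inference_cost_per_mtok:.2f} inference = ${total:6.2f}/Mtok")
```

At a trillion tokens the assumed training cost dominates; at a hundred trillion it's a rounding error on the serving cost, which is the point being made above.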
At the rate they're going it'll just get cheaper. The cost per token continues to drop while the models get better. Hardware is also getting more specialized.
Maybe the current batch of startups will run out of money but the technology itself should only get cheaper.
The article provides no solid evidence that "AI is working" for the author.
At the end of the day, this article is nothing but another piece of conjecture on Hacker News.
Actually assessing the usefulness of AI would require measurements and controls. Nothing has been proven or disproven here.
The irony with AI sceptics is that their opinions usually sound like they've been stolen from someone else.