You mean it's naive to assume technology improves, when it has improved for centuries and there is considerable incentive for it to keep improving? Continued improvement seems inevitable.
Do you think humans have achieved peak intelligence? If so, why? And if not, why shouldn't you expect artificial forms of intelligence to improve up to and even surpass human abilities at some point?
Edit: to clarify, I'm not necessarily assuming unbounded acceleration. Tools always start out middling; improvements accelerate as we figure out what works and what doesn't, and then they taper off. We're just starting on the acceleration curve for AI.
Performance on these scores frequently plateaus because the underlying type of technology is simply unfit for the task.
We are quite far into the development cycle of LLMs. Literally billions of dollars have been poured into them. The rate of improvement over the last 6-12 months has slowed, not accelerated.

There hasn't been any hint of an AGI breakthrough, so we're dealing with tools to help herd stochastic parrots (i.e. agents) for the foreseeable future. And those tools just mitigate how much LLMs hallucinate; they don't make the models more creative in a way that would improve these scores.
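For what it's worth, the "herding" those agent tools do amounts to a generate-verify-retry loop around the model. A minimal sketch, where call_llm and run_tests are hypothetical placeholders for a real model API and a real external verifier:

```python
# Minimal sketch of an agent-style "herding" loop: generate, verify
# against an external check, retry on failure. call_llm and run_tests
# are hypothetical placeholders, not any real library's API.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in an actual model call here.
    return "stub response"

def run_tests(candidate: str) -> bool:
    # Placeholder: swap in tests, a linter, a schema check, etc.
    return candidate != "stub response"

def herd(prompt: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = call_llm(prompt + feedback)
        if run_tests(candidate):
            return candidate  # passed the external check
        feedback = "\n\nThe previous attempt failed verification; try again."
    return None  # every attempt failed the check
```

The key design point is that the verifier sits outside the model: the loop rejects hallucinated output after the fact; it doesn't stop the model from producing it, and it doesn't make the model any smarter.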
> We are quite far into the development cycle of LLMs.
No, we've barely scratched the surface. Billions of dollars have been poured into the stupidest possible thing that could work + scaling, and we're only now trying more clever things. Fine-tuning on specific tasks will yield considerable productivity benefits in those domains.
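To make the fine-tuning point concrete, here's a minimal sketch of task-specific fine-tuning with LoRA adapters on the Hugging Face transformers/peft/datasets stack. The base model, the "my-org/support-tickets" dataset, and all hyperparameters are illustrative placeholders, not a recommendation:

```python
# Minimal sketch of task-specific fine-tuning with LoRA adapters.
# Assumes the Hugging Face transformers, peft, and datasets libraries;
# "my-org/support-tickets" is a hypothetical dataset with a "text" column.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "gpt2"  # stand-in for whatever causal LM you'd actually tune
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Train only small low-rank adapter matrices, not the full model.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

data = load_dataset("my-org/support-tickets")  # hypothetical

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # standard causal-LM objective
    return out

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments("lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=train,
).train()
```

The point isn't this particular recipe; it's that adapting a general model to a narrow domain is cheap relative to pretraining, which is why the "we've barely started" view has legs.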
Not only am I skeptical of your claim about the "rate of improvements over the last 6-12 months"; that's also too short a time horizon to infer any kind of trend at this stage.