Comment by literalAardvark

8 hours ago

Since we're not experts, we treat it as a black box. What are the results? Is the quality of the results improving? Is the improvement accelerating or decelerating?

And the answer appears to be that the improvement is accelerating. So how could it be stopping?

https://metr.org/time-horizons/

I don't think improvement is accelerating. We went from "computers can't do these things at all" to "now they can" within a few years of the introduction of transformers, and now every few months we get "it can do the same things, incrementally better, at a drastically higher cost."

I don't think the current AI paradigm has infinite headroom for improvement; every other AI approach before it eventually hit a limit.

  • Incrementally better, at higher cost? A model I'm running on a 10-year-old entry-level computer is better at programming than GPT-4. That's multiple orders of magnitude of improvement in a few years.

    And the link I posted shows the amount of work a query can do increasing non-linearly. You can explore the site for more detail, including a graph of error rates halving every couple of months (a rough sketch of what that compounding looks like is below).

    No one said anything about infinite. That doesn't mean there isn't headroom to spare.

    Software itself took 80-120 years to get where it is today, depending on how you count. Time is on AI's side here.
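
    For intuition, here is a minimal Python sketch of what "error rates halving every couple of months" compounds to. The 4-month halving period and the starting error rate are illustrative assumptions, not figures taken from the METR page:

      # Compounding of a hypothetical "halves every HALVING_MONTHS" trend.
      # Both constants are assumptions for illustration, not values from
      # https://metr.org/time-horizons/.
      HALVING_MONTHS = 4
      error_rate = 0.50  # assumed starting error rate

      for month in range(0, 25, HALVING_MONTHS):
          print(f"month {month:2d}: error rate {error_rate:.4f}")
          error_rate /= 2

    Under those assumptions, the error rate falls by a factor of 64 over two years; the point is only that a steady halving compounds quickly, not that these particular numbers hold.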