Comment by coliveira

21 hours ago

> It's very funny that for pretty much any use case of LLMs, they're either too expensive or too incapable or both! There may be a few uses that make sense, but it seems to be incredibly hard to find the balance.

It blows my mind how many computing professionals truly think this is the case. It doesn't take a tech blogger to draw a trend line through the advancements of the past 2.5 years and see where we're headed. The fact that grifters abound on the edges of the industry is a sign of the radical importance of this unexpected breakthrough, not an indication that it's all a grift.

To engage in some armchair psychology, I think this is in large part due to a natural human tendency toward stability (which is all the stronger for those of us in relatively powerful positions, like SWEs). Knowing that believing A would put your mortgage in jeopardy, upend your retirement plan, and obscure your entire career beyond a figurative singularity point makes believing ~A a very appealing option...

  • > It doesn't take a tech blogger to draw a trend line through the advancements of the past 2.5 years and see where we're headed.

    People did this with airplanes in the 60s, and based on that trajectory we should be exploring the outer edges of our solar system by now. Turns out the market for supersonic jets was unsustainably small and the cost/risk of space exploration is still very high.

    Every sigmoid curve looks exponential in its early phase, before it enters the linear regime (a rough numerical sketch of this is below). But eventually the curve turns over, whether due to limits of the technology, its marginal cost, or no clear way to commercialize it further.

    I don't know that we've reached that point with AI, but I do know that extrapolating from a trend line is fraught with peril.
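
A rough sketch of the sigmoid point, for the curious. The logistic parameters below (L, k, x0) and the sampling window are arbitrary illustrations, not estimates of any real AI-progress curve: sample only the early portion of a sigmoid, fit a pure exponential to it, and the two are nearly indistinguishable.

```python
# A rough sketch of "every sigmoid looks exponential early on".
# All numbers here (L, k, x0, the sampling window) are arbitrary
# illustrations, not measurements of any real trend.
import numpy as np

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """Standard logistic (sigmoid) curve: ceiling L, steepness k, midpoint x0."""
    return L / (1.0 + np.exp(-k * (x - x0)))

# Sample only the early part of the curve, well before its midpoint at x0 = 10.
x_early = np.linspace(0.0, 4.0, 50)
y_early = logistic(x_early, L=100.0, k=1.0, x0=10.0)

# Fit a pure exponential a * exp(b * x) to that early segment (log-linear fit).
b, log_a = np.polyfit(x_early, np.log(y_early), 1)
y_fit = np.exp(log_a + b * x_early)

# From inside the early regime, the bounded sigmoid and the unbounded
# exponential are practically indistinguishable.
print("max relative error of exponential fit:", np.max(np.abs(y_fit - y_early) / y_early))
```

The point isn't that AI progress follows a logistic curve, only that from inside the steep part you can't tell a bounded curve from an unbounded one; the fit only starts to fail once you can sample past the knee.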