Comment by tharant
6 months ago
> I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem-solving, and it isn't.
How is this sentiment any different from my grandfather’s sentiment that calculators and computers (and probably his grandfather’s view of industrialization) are a shortcut to avoid work? From my perspective, most tools are used as a shortcut to avoid work; that’s kinda the whole point—to give us room to think about/work on other stuff.
Because calculators aren't confidently wrong the majority of the time.
In my experience, and for use-cases that are carefully considered, language models are not confidently wrong a majority of the time. The trick is understanding the tool and using it appropriately—thus the “carefully considered” approach to identifying use-cases that can provide value.
In the very narrow fields where I have a deep understanding, LLM output is mostly garbage. It sounds plausible but doesn't stand up to scrutiny. The basics it can regurgitate from Wikipedia sound mostly fine, but they become subtly wrong as soon as they depart from stating very basic facts.
Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless; it is actively misleading. I try not to even read much of what LLMs produce. I might give one some text and riff on it if I need ideas, but LLMs are categorically the wrong tool for factual content.
A use-case that can be carefully considered requires more knowledge about the use-case than the LLM has; it requires you to understand the specific model's training and happy paths; and it takes more time to coax the output you want than just doing the work yourself. If you don't know enough about the subject or the model, you will get confident garbage.
Did your grandpa think that calculators made engineers worse at their jobs?
I don’t know for certain (he’s no longer around), but I suspect he did. The prevalence of folks nowadays who believe that Gen-AI makes everything worse suggests to me that not much has changed since his time.
I get it; I’m not an AI evangelist, and I get frustrated with the slop too. Gen-AI (like many of the tools we’ve enjoyed over the past few millennia) was/is lauded as “the” singular tool that makes everything better; no tool can fulfill that role, yet we always try to shoehorn our problems into a shape that fits the tool. We just need to use the correct tool for the job. In my mind, the only problem right now is that we have a really capable tool and have identified some really valuable use-cases for it, yet we also keep trying to apply it to use-cases that (I believe, given current capabilities) don’t fit the tool.
We’ll figure it out. In the meantime, while I don’t like to generalize that a tech or its use-cases are objectively good or bad, I do tend to have an optimistic outlook for most tech—Gen-AI included.