I've read the thread, and I think you're missing that LLMs increase the surface area of visibility of a thing. It's a probe: it adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as if it does); it's really just topography expansion. We're getting into the weeds of creativity and idea synthesis, but if something is net-new to your topography map right now, what's so bad about attributing relative synthesis to the AI?
Because if that's all it is, we've made a ludicrously expensive I Ching.
If there is one thing LLMs are good at, it's knowing some obscure fact that only 10 other people on this planet know.
They're also very good at almost knowing an obscure fact that only 10 people know, but getting a detail catastrophically wrong about it.
No, this is the kind of thing LLMs are very good at: knowing the specifics, details, and minutiae of technologies, programming languages, etc.
Oh Lord, no. Not at all. That's what they're terrible at. They are ok-ish at superficial overviews and catastrophically bad at specific minutiae.
Honest, non-confrontational, non-passive-aggressive question: have you used any of the latest models in the last 6 months to do coding? Or, frankly, in the last year?
Oftentimes it is, though: good enough for my purposes.
If you're not familiar with the problem space, by definition you don't know whether or not that's the case. In the problem spaces I do know well, I know the LLM isn't good, so why would I assume it's better in spaces I don't know?
I said familiar enough, not familiar. For example, say I'm building an app that I know needs caching. The LLM is very good at telling me what types of caching to use, which libraries to use for each type, and so on. I can do more research if I really want to know which library is the best of the lot, but oftentimes its top suggestion is, like I said, good enough for my purpose of, e.g., caching.
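To make "good enough" concrete: for in-process caching, the kind of top suggestion I mean looks like the sketch below. It's a minimal Python example using only the standard library's functools.lru_cache; fetch_user_profile and the numbers are made up for illustration.

    import functools
    import time

    @functools.lru_cache(maxsize=256)
    def fetch_user_profile(user_id: int) -> dict:
        # Stand-in for an expensive lookup (DB query, HTTP call, ...).
        time.sleep(0.1)
        return {"id": user_id, "name": f"user-{user_id}"}

    fetch_user_profile(42)            # slow path: does the real work
    fetch_user_profile(42)            # fast path: served from the in-process cache
    fetch_user_profile.cache_clear()  # invalidate when the underlying data changes

That's not the best caching strategy for every app (no TTL, nothing shared across processes), but as a first answer it's exactly the "good enough" level I mean, and I can go research TTL caches or Redis later if the app actually needs them.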