Comment by rglullis
2 hours ago
> I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
From a science standpoint, I'd say whatever "results" you got are completely worthless.
> I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct
And how do you know whether your understanding is correct, if you only take what the LLM gives you and are not able to verify it independently?
> Science is what happens when you expect something, test something, and get a result.
Right, but has any LLM come up with a hypothesis on its own? Has any AI said, "given all this literature that I read, I'd expect <insert something completely out of the training data space>"?
Asking all of these questions after (allegedly) reading my entire comment means one of two things: either you didn't pay attention, in which case I'm not going to spend any more effort responding; or you've completely missed the point, in which case I can probably save myself the effort anyway. If you're genuinely interested in answers to your questions rather than merely posturing, I suggest you re-read the comment carefully and then make a better-faith attempt at engaging with it.
I'll leave these direct quotes from the comment as a hint:
> But that only matters if I take its output at face value. […] If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.