Comment by jacobgkau
6 months ago
> Saying something will 100% happen, and saying something will sometimes happen are diametrically opposed statements and contrary to each other.
No. Saying something will 100% happen and saying something will 100% not happen are diametrically opposed. You can't call every pair of differing statements "diametrically opposed" simply because they aren't identical. That ignores the "diametrically" part.
If you wanted to say "I use words that mean what I intend to convey, not words that mean something similar," that would've been fair. Instead, you brought the word "opposite" in, misrepresenting what had been said and suggesting you'll stretch the truth to make your point. That's where the sense of bias came from. (You also pointlessly left "what I intend to convey" in to try to make your argument appear softer, when the entire point you're making is that "what you intend" isn't good enough and one apparently needs to be exact instead.)
> This word soup doesn't get to redefine the word opposite, but you're free to keep trying.

> Cute that you've now written at least 200 words trying to divert the conversation though, and not a single word to actually address your demonstration of the opposite of understanding how the tools you use work.
The entire premise of my first reply to you was that your hyperbole invalidated your position. If either of us diverted the conversation, it was you.
One of your replies to me included the statement "the LLM probably won't succeed for the API in question no matter how many iterations you try and it's best to move on" (i.e. don't do the work, or don't use AI to do it). Yet you continue to repeat that it's my (and everyone else's) lack of understanding that's somehow the problem, without conceding that AI's inability to perform certain tasks is a valid point of skepticism.
> This word soup doesn't get to redefine the word opposite,
You're the one trying to redefine the word "opposite" to mean "any two things that aren't identical."