Comment by mrbungie
6 days ago
You know what's nuts? How so many articles supporting LLMs and attacking skeptics are full of fallacies and logical inconsistencies like straw men, false dichotomies, and appeals to emotion and authority, when their authors supposedly have near-AGI machines assisting their writing. If they trust these tools so much, they could at least run a "please take a look at my article and see if I'm committing any logical fallacies" prompt iteration session.
These kinds of articles that heavily push LLM usage in programming seem designed to FOMO you, or at least weakly suggest that "you are using it wrong" just to push contrary or conservative opinions out of the discussion. It's pure rhetoric wrapped around empty discourse.
I use these tools every day, every hour, in strange loops (between at least Cursor, ChatGPT and now Gemini) because I do see some value in them, even if only as a simulated peer or rubber duck to discuss ideas with. They are extremely useful to me because of my ADHD: they actually help me through my executive dysfunction and analysis paralysis, even if they produce shitty code.
Yet I'm still an AI skeptic, because I've seen enough failure modes in my daily usage. I don't know how to feel when faced with these ideas, because I fall outside the false dichotomy (I pay for these tools and use them every day, but I don't value them as highly as the average AI bro does). What's funny is that I have yet to see an article that seriously lays out LLMs' strengths and weaknesses with actual examples. If you are going to defend a position, do it seriously ffs.
Just for fun, I asked ChatGPT and it came up with 30+ fallacies, with examples. I'm sure some of them are hallucinated, but woof:
https://chatgpt.com/share/683e62ed-e118-800f-a404-bd49bec799...
As I said in another post, the article is pure rhetoric. It provides no actual numbers, measurements, or examples.
It's just "AI did stuff really good for me" offered as proof that AI works.