Comment by hotsauceror
6 days ago
I agree with this sentiment.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.
You can have the same problem with Googling things. LLMs usually reach conclusions I align with when I do the independent research. Google isn't anywhere near as good as it was five years ago. All the years of crippling their search ranking system and suppressing results have caught up with them, to the point where most LLMs are Google replacements.
In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different on HN and other message boards, though; it's not really used to hedge here. If people don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
> can actually be pretty useful. It indicates somebody already minimally investigated a thing
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario number 1.
Yeah, if the person doing it is smart, I would trust that they used a reasonable prompt and ruled out flagrantly BS answers. Sometimes the key thing is just learning the name of the thing you're looking for. It's equally as good/annoying as reporting what a Google search gives for the answer. I guess I assume most people will do the AI query/search and then decide whether to share the answer based on how good or useful it seems.
These days, most people who try googling for answers end up reading an article that was generated by AI anyway. At least if you go straight to the bot, you know what you're getting.
> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
I am amused by the defeatism in your response that expecting anyone to actually try anymore is a lost cause.
If someone is asking a technical question along the lines of “how does this work” or “can I do this,” then I’d expect them to Google it first. Nowadays I’d also expect them to ask ChatGPT. So I’d appreciate their preamble explaining that they already did that, and giving me the chance to say “yep, ChatGPT is basically right, but there’s some nuance about X, Y, and Z…”
Expecting people to stop asking casual questions to LLMs is definitely a lost cause. This tech isn't going anywhere, no matter how much you dislike it.
> expecting anyone to actually try anymore is a lost cause
Well now you're putting words in my mouth.
If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.
"lets ask the dipshit" is how my colleague phrases it
I disagree. It's not just a potential avenue for further investigation; IMO, AI should always be consulted.
But I'm not interested in the AI's point of view. I have done that myself.
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experiences in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
You're just disregarding data points because "AI bad".
What if I agree with what AI wrote? Should I try to hide that it was generated?
If I wanted to consult an AI, I'd consult an AI. "I consulted an AI and pasted in its answer" is worse than worthless. "I consulted an AI and carefully checked the result" might have value.