Comment by MarkusQ

6 days ago

This begs the question. You're assuming they wanted an LLM-generated response but were too lazy to generate one. Isn't it more likely that they didn't use an LLM because they didn't want an LLM response, so giving them one is...sort of clueless?

If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?

It's more like someone asks whether there are any McDonald's in San Francisco, and someone else searches "mcdonald's san francisco" on Google Maps and replies with the result. It would have been faster for the asker to type the query themselves and get the answer back immediately, instead of waiting for someone else to do it for them.

  • Right. If someone asks "What does ChatGPT think about ...", I'd fully agree that they're being lazy. But if that's _not_ what they ask, we shouldn't assume it's what they meant.

    We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but as a rule we should give them the benefit of the doubt instead of assuming we're smarter than they are.