sjsdaiuasgdia 14 days ago
Maybe "idea evaluation" is just a bad use case for LLMs?

ggus 14 days ago
Most times the idea is implied. I'm trying to solve a problem with some tools, and there are better tools or even better approaches.
ChatGPT (and Copilot and Gemini) instead all tell me "Love the intent here — this will definitely help. Let's flesh out your implementation"...

sjsdaiuasgdia 14 days ago
Qualitative judgment in general is probably not a great thing to request from LLMs. They don't really have a concept of "better" or "worse", or the means to evaluate alternate solutions to a problem.