Comment by Terretta

2 months ago

The claims are not contradictory.

They are bimodal.

The bottom 20% of users can't prompt because they don't understand what they're looking for, or couldn't describe it well if they did. This model handles their brief asks, then breaks the request down, surfaces implications, and prompts itself. OpenAI's How to Prompt is for them.

The top 20% of users understand what they're looking for and how to frame and contextualize it well. The article is for them.

For the middle 60%, YMMV. (In practice, they're probably closer to the bottom 20% in not knowing how to get the most from LLMs, so the bottom-20 guide saves typing.)

I'm not saying it won't work. I'm just asking for evidence. Don't you think it's strange that none of the authors or promoters of this idea provided any evals? Not even a small sample of prompt/response pairs demonstrating the benefit of this method?