Comment by waynerisner

2 days ago

I’ve had similar friction experiences, especially when reasoning-heavy modes take longer or get retried. That puts me off too.

On the search engine comparison: do you feel LLMs reduce cognitive load because they maintain context, whereas search requires more manual synthesis?

Also curious — do you think the frustration is mostly with the model itself, or with the serving/infrastructure layer (Cloudflare, routing, batching, etc.) around it? Both comments seem to point at that layer in different ways.