Comment by ddxv

2 days ago

I recently deleted my ChatGPT account and was looking around at the other options. I kinda like fast responses, but Grok / Perplexity are behind a perpetual Cloudflare check for me. I'm looking for something that doesn't spend forever reasoning out the answer to a basic quick question.

It's interesting: as I type that out, it makes me wonder why I don't just go back to the search engine, since its AI summaries have been getting better.

Finally, I do also like the longer reasoning when I have a tough question, and I usually copy-paste the question around to various models and compare their responses.

I've had similar friction, especially when reasoning-heavy modes take longer or get retried. That puts me off too.

On the search engine comparison: do you feel LLMs reduce cognitive load because they maintain context, whereas search requires more manual synthesis?

Also curious: do you think the frustration is mostly with the model itself, or with the serving/infrastructure layer around it (Cloudflare, routing, batching, etc.)? Both comments seem to point at that layer in different ways.