Comment by addaon
6 days ago
> But this seems like the easiest one to implement.
Even easier: Just fail. In my experience the ChatGPT web page fails to display (request? generate?) a response between 5% and 10% of the time, depending on time of day. Too busy? Just ignore your customers. They’ll probably come back and try again, and if not, well, you’re billing them monthly regardless.
Is this a common experience for others? In several years of fairly regular ChatGPT use I have only seen that kind of failure a couple of times.
I don't usually see responses fail either. What I did see shortly after the GPT-5 release (when the servers were likely overloaded) was the model "thinking" for over 8 minutes. It seems that if you manually select the model, you simply get throttled (or put in a queue) instead.
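Purely as illustration, here is a toy sketch in Python (standard threading module, made-up capacity numbers; nothing here reflects OpenAI's actual infrastructure) of the two behaviors under discussion: failing fast when capacity is exhausted versus queueing the request until a slot frees up.

```python
import threading
import time

MAX_CONCURRENT = 2                        # pretend serving capacity
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_fail_fast(prompt: str) -> str:
    """Reject immediately if no capacity is free ("just fail")."""
    if not slots.acquire(blocking=False):
        return "503: too busy, try again later"
    try:
        time.sleep(0.1)                   # stand-in for model inference
        return f"response to {prompt!r}"
    finally:
        slots.release()

def handle_queued(prompt: str, timeout: float = 600.0) -> str:
    """Wait for capacity instead; the caller just sees a long 'thinking' spinner."""
    if not slots.acquire(timeout=timeout):
        return "504: gave up waiting"
    try:
        time.sleep(0.1)
        return f"response to {prompt!r}"
    finally:
        slots.release()

if __name__ == "__main__":
    print(handle_fail_fast("hello"))
    print(handle_queued("hello", timeout=5.0))
```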