Comment by beering

4 hours ago

People have been complaining about this since GPT-4 and have never been able to provide any evidence (even though they have all their old conversations in their chat history). I think it’s simply new-model shininess turning into raised expectations after some amount of time.

I would have thought so too. But my n=1 has CC solving pretty much the same task about two weeks ago and again today, with drastically degraded results the second time.

The background being that we scrapped working on a feature and then started again a sprint later.

In my cynicism I find it more likely that a massively unprofitable LLM company is cutting costs at any price than that everyone else is suffering from a collective delusion.

I agree with you. I complain about this same phenomenon with my colleagues, and we always arrive at the same conclusion: it’s probably just us expecting more and more over time.