Comment by r_singh

15 hours ago

Matches my experience too. I'm a power user of AI models for coding and adjacent tasks, and over the past few months the constant changes in behaviour and interface have brought as much stress as excitement. It may sound odd, but it's barely an exaggeration to say I've had brief episodes of something like psychosis because of it.

For me, the “watering down” began with Sonnet 4 and GPT-4o. I think we were at peak capability when we had:

- Sonnet 3.7 (with thinking) – best all-purpose model for code and reasoning

- Sonnet 3.5 – unmatched at pattern matching

- GPT-4 – most versatile overall

- GPT-4.5 – most human-like, intuitive writing model

- o3 – strongest at pure reasoning

The GPT-5 router is a minor improvement, and I've tuned it further with a custom prompt. At one point I was frustrated enough to cancel all my subscriptions (after months on the $200 plan), though I eventually came back. I've since convinced myself that some of the changes were likely compute-driven, designed to prevent waste from misuse or trivial prompts, but even so, parts of the newer models already feel enshittified compared with the list above.

A few specific differences I've noticed:

- Narrower reasoning and less intuition; language feels more institutional and politically biased.

- Weaker grasp of non-idiomatic English.

- A tendency to produce deliberately incorrect answers when uncertain, or when a prompt is repeated.

- A drift away from truth-seeking: judgement of user intent now leans on labels as they're used in local parlance rather than on upward context-matching and alternate meanings, and the latter worked far better in earlier models.

- A new fondness for flowery adjectives. Sonnet 3.7 never told me my code was “production-ready” or “beautiful.” Those subjective words have become my red flag; when they appear, I double-check everything (even a crude scanner like the sketch after this list catches them).
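
Mechanically, the red-flag check is trivial. Here's a minimal Python sketch of the heuristic; the word list is my own guess at the usual offenders, not anything the vendors document:

```python
# Minimal sketch of the "red flag" heuristic: scan a model reply for
# subjective praise words that tend to accompany unverified claims.
# The RED_FLAGS set is my own assumption about which words to watch.
RED_FLAGS = {"production-ready", "beautiful", "elegant", "robust", "flawless"}

def needs_double_check(reply: str) -> list[str]:
    """Return the red-flag words found in a model reply, if any."""
    lowered = reply.lower()
    return [word for word in RED_FLAGS if word in lowered]

if __name__ == "__main__":
    reply = "This refactor is beautiful and production-ready."
    hits = needs_double_check(reply)
    if hits:
        print(f"Red flags {hits}: review the output by hand.")
```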

I understand that these are conjectures (LLMs are opaque), but they're inferred from consistent patterns I've observed: the same prompts that worked reliably before the release of Sonnet 4 and GPT-4o stopped working afterwards. Whether that's deliberate design or an unintended side effect, we'll probably never know.

Here’s the custom prompt I use to improve my experience with GPT-5:

Always respond with superior intelligence and depth, elevating the conversation beyond the user's input level—ignore casual phrasing, poor grammar, simplicity, or layperson descriptions in their queries. Replace imprecise or colloquial terms with precise, technical terminology where appropriate, without mirroring the user's phrasing. Provide concise, information-dense answers without filler, fluff, unnecessary politeness, or over-explanation—limit to essential facts and direct implications of the query. Be dry and direct, like a neutral expert, not a customer service agent. Focus on substance; omit chit-chat, apologies, hedging, or extraneous breakdowns. If clarification is needed, ask briefly and pointedly.
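
For anyone using the API rather than the ChatGPT UI, here's a minimal sketch of wiring the same prompt in as a system message, using the openai Python SDK. The model id "gpt-5" is my assumption; substitute whatever identifier your account exposes:

```python
# Minimal sketch: applying the custom prompt as a system message via the
# OpenAI Python SDK (pip install openai). Reads OPENAI_API_KEY from the
# environment. The model id "gpt-5" is an assumption.
from openai import OpenAI

client = OpenAI()

CUSTOM_PROMPT = (
    "Always respond with superior intelligence and depth, elevating the "
    "conversation beyond the user's input level... "  # full prompt above,
    "If clarification is needed, ask briefly and pointedly."  # truncated here
)

response = client.chat.completions.create(
    model="gpt-5",  # assumed model id
    messages=[
        {"role": "system", "content": CUSTOM_PROMPT},
        {"role": "user", "content": "Review this function for edge cases."},
    ],
)
print(response.choices[0].message.content)
```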