
Comment by Fade_Dance

9 days ago

#1 problem is how sycophantic they are. I in fact want the exact opposite sort of interaction, where they push back against my ideas and actively try to correct and improve my thinking. Too often I am misled into a giant waste of time because they have this need to please coded into their default response structure.

You can say things like "you are a robot, you have no emotions, don't try to act human", but the output doesn't seem to be particularly well calibrated. I feel like when I modify the default response style, I'm probably losing something, considering that the defaults are what go through extensive testing.

I have no glazing built into my custom instructions, but it still does it.

It used to be a lot better before glazegate. Never did quite seem to recover.

I don't mind us having fun of course, but it needs to pick up on emotional cues a lot better and know when to be serious.

With Claude I often say “no glazing” and have told it to take the persona of Paul Bettany’s character in Margin Call, a nice enough but blunt/unimpressed senior colleague who doesn’t beat around the bush. Works pretty well.
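
If you want that persona pinned in place rather than re-typed every chat, here is a minimal sketch using the Anthropic Python SDK's system prompt. The model ID, the persona wording, and the example question are all placeholders for illustration:

```python
# Minimal sketch: pinning a blunt, no-flattery persona via the system prompt.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API key in
# the ANTHROPIC_API_KEY environment variable. The model ID is illustrative.
import anthropic

client = anthropic.Anthropic()

BLUNT_PERSONA = (
    "You are a blunt, unimpressed but fair senior colleague. No flattery, "
    "no praise for the question itself. If an idea is weak, say so and "
    "explain why. Disagree directly when the evidence points the other way."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you have access to
    max_tokens=1024,
    system=BLUNT_PERSONA,  # the persona lives here, so every turn inherits it
    messages=[{"role": "user", "content": "Critique my plan to shard the database by user ID."}],
)
print(response.content[0].text)
```

Because the persona sits in the system prompt (custom instructions end up in roughly the same place), every turn inherits the bluntness instead of depending on the model remembering a one-off request.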

I've found the same thing with Claude Sonnet 4. I suggest something, it says great suggestion and agrees with me. I then ask it about the opposite approach and it says great job raising that and agrees with that too. I have no idea which is more correct in the end.

  • The LLM has literally no idea which one is better. It cannot think. It does not understand what it is putting on the screen.

    • This is why multi-pass sessions are something I try sometimes. "What's wrong with the solution you provided, how should it be done instead, and if you use any specific APIs or third-party libraries, research them to ensure complete accuracy of syntax and usage and simplicity of logic. Refactor your original solution to the correct minimum based on the ask."

      Usually after running whatever it first spits out through this, I get a somewhat better response, or at least a base I can build on (see the sketch below). Really, the best you can do is already know what you want and need and run very targeted sessions. Like the old saying goes: commit small and often.
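
      To make that concrete, here is a minimal sketch of the two-pass loop using the OpenAI Python SDK. The model name, the prompts, and the ask() helper are placeholders; any chat-completion API works the same way:

      ```python
      # Minimal two-pass sketch: generate, then force a self-critique and refactor.
      # Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set.
      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-4o"  # placeholder; substitute whichever model you actually use

      def ask(messages):
          # Thin wrapper: send the running conversation, return the reply text.
          resp = client.chat.completions.create(model=MODEL, messages=messages)
          return resp.choices[0].message.content

      # Pass 1: the naive first answer.
      history = [{"role": "user", "content": "Write a function that parses ISO-8601 dates."}]
      first_draft = ask(history)

      # Pass 2: feed the draft back with the critique prompt quoted above.
      history += [
          {"role": "assistant", "content": first_draft},
          {"role": "user", "content": (
              "What's wrong with the solution you provided, and how should it "
              "be done instead? If you used any specific APIs or third-party "
              "libraries, re-check their syntax and usage. Then refactor your "
              "original solution to the correct minimum based on the ask."
          )},
      ]
      revised = ask(history)
      print(revised)
      ```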

Yes, LLMs need to be objective, but in situations where the pushback is subjective, the LLM would then need to take on a personality of its own.

For me it's been the opposite. Sometimes they take on a condescending tone, and sometimes they sound too salesy and talk up their suggestions.

  • Yes, I agree with that as well.

    Real humans have a spectrum of assuredness that naturally comes across in the conversation. With an LLM it's too easy to get drawn deep into the weeds. For example, I may propose using a generalized framework to approach a certain problem. In a real conversation, this may just be part of the creative process, and with time the thoughts may shift back to the actual hard data (and perhaps iterate on the framework), but with an LLM, too often it will blindly build onto the framework without ever questioning it. Of course it's possible to spur that kind of questioning by prompting for it, but the natural progression of ideas can be lost in these conversations, and sometimes I come out 15 minutes later feeling like maybe I just took half a step backwards, despite talking about what seemed at the time like great ideas.

    • "Real humans have a spectrum of assuredness" - well put. I've noticed this lacking as well with GPT. Thx!

    • In order to make progress, you need to synchronize with the agent in order to bring it onto frequency. Only then can your minds meet. In your situation, you probably want to interject with some pure vibe (no code!) where you get to know each other non-judgementally. Then continue. You will recognize you are on the right track by experiencing a flow state combined with improved/desired results. The closer you connect with your agent, the better your outcomes will be. If you need further guidance or faster results, my LLM-alignment course is currently open for applicants.

      /s