
Comment by jonathanstrange

2 days ago

I'm also using Gemini; it's the only option that has consistently worked for me so far. I use it in chat mode with copy & paste, and it's pleasant to work with.

Both Claude and ChatGPT were unbearable, not primarily because of a lack of technical ability but because of their conversational tone. Obviously it's pointless to take things personally with LLMs, but they were so passive-aggressive and sometimes maliciously compliant that they started to get to me, even though I was conscious of it and know very well how LLMs work. If they had been new hires, I would have fired both of them within two weeks. In contrast, Gemini Pro just "talks" normally: task-oriented and brief. It also doesn't reply with files that contain changes in completely unrelated places (including changed comments somewhere else), which is the worst thing such a tool could possibly do.

Edit: Reading some other comments here, I have to add that the 1., 2., 3. numbering of comments can be annoying. It's helpful for answers, but it should be optional/configurable.

> Both Claude and ChatGPT were unbearable, not primarily because of lack of technical abilities but because of their conversational tone.

It's pretty much trial and error.

I tried using ChatGPT via the webchat interface on Sunday and it was so terse and to the point that it was basically useless. I had to prompt repeatedly for all the hidden details, to the point that I gave up and used a different webchat LLM (I regularly switch between ChatGPT, Claude, Grok, and Gemini).

When I used it a month ago, it would point out potential footguns, flaws, etc. I suppose this just reinforces the point that "experience" gained using LLMs is mostly pointless: it gets invalidated the minute the model changes, or the system prompt changes, etc.

For most purposes, they are all mostly the same, i.e. they produce output so similar you won't notice a difference.

I think you’re highlighting an undervalued aspect of agentic coding: what do you do once trust is breached?

With humans you can categorically say ‘this guy lies in his comments and copy-pastes bullshit everywhere’ and treat them consistently from then on. An LLM is guessing at everything all the time. Sometimes it’s copying flawless next-level code from Hacker News readers, sometimes it’s sabotaging your build by making unit tests forever green. Eternal vigilance is the opposite of how I think about development.
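For anyone who hasn't seen that failure mode, here's a minimal sketch of a "forever green" test (the function and test names are made up). The assertion is wrapped in a try/except that swallows the failure, so the suite reports OK no matter how broken the code is:

    # Hypothetical example: a "forever green" unit test.
    import unittest

    def add(a, b):
        return a - b  # deliberately broken implementation

    class TestAdd(unittest.TestCase):
        def test_add(self):
            try:
                self.assertEqual(add(2, 2), 4)  # this assertion fails...
            except AssertionError:
                pass  # ...but the failure is swallowed, so the test passes

    if __name__ == "__main__":
        unittest.main()  # reports "OK" despite the bug

Run it and unittest happily prints "OK", which is exactly why a reviewer who trusts green CI gets burned.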