Comment by HarHarVeryFunny

10 hours ago

I thought this article was going to be about something else ...

It is really about prompting and writing specs - the "soft" (but really "hard") skill of giving detailed specs to an LLM so it does what you want.

I think the more important, truly soft, skill in the age of AI is going to be communicating with humans and demonstrating your value by communicating both vertically up and down and horizontally within your team. LLMs are becoming quite capable at the "autistic" skill of coding, but they are still horrible communicators, and don't communicate at all unless spoken to. This is where humans are currently, and maybe for a long time to come, irreplaceable: using our soft skills to interact with other humans and, as always, to translate and refine fuzzy business requirements into the unforgiving language of the machine, whether that is carefully engineered LLM contexts or machine code.

As far as communication goes, I have to say that Gemini 3.0, great as it is, is starting to grate on me with its sycophantic style and failure to just respond as requested rather than blabber on about "next steps" that it is constantly trying to second-guess from its history. You can tell it to focus and just answer the question, but that only lasts for one or two conversational turns.

One of Gemini's most annoying traits is to cheerfully and authoritatively give design advice, then when questioned admit (or rather announce, as if it were its own insight) that this advice is catastrophically bad and will lead to a bad outcome, and then without pause tell you what you really should do instead, as if that is going to be any better.

"You're absolutely right! You've just realized the inevitable hard truth that all designers come to! If you do [what I just told you to do], program performance will be terrible! Here is how you avoid that ... (gives more advice pulled out of ass, without any analysis of consequences)"

It's getting old.