Comment by xandrius

6 hours ago

Exactly this. I'm not sure what code other people who post here are writing, but it can't all be bleeding-edge, fringe, incredible code. They don't seem able to get modern LLMs to produce decent or good code in Go or Rust, while I can prototype on an ESP32, a board I'd never touched before, entirely in Rust, and the model manages to solve even edge cases I can't find answers for on dedicated forums.

I have a sneaking suspicion that AI use isn't as easy as it's made out to be. A lot of people clearly fail to use it effectively, while others have great success. That points to either a luck factor or a skill factor, and the latter seems more likely.

What are your secrets? Teach me the dark arts!

There are wide gaps in:

1) the models people are using (the default model in Copilot vs. Opus 4.5 or Codex xhigh)

2) the tools people are using (ChatGPT vs. Copilot vs. Codex vs. Claude Code)

3) when people last tried these tools (e.g., December saw a substantial capability increase, but some people tried AI exactly once, back in March)

4) how much effort people put into writing prompts (e.g., one vague sentence vs. a couple of paragraphs of specific constraints and instructions)

Especially with all the hype, it makes sense to me that people arrive at such different estimates of how useful AI actually is.