Comment by devmor

16 days ago

I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

That's the point of these text documents, and it's also why they don't actually produce an efficiency gain the majority of the time.

A programmer who expects the LLM to solve an engineering problem is rolling the dice and hoping. A programmer who has already solved an engineering problem and only expects the LLM to produce the implementation will usually get something close to what they want. Will it be faster than doing it yourself? Maybe. Is it worth the cost of the LLM? Probably not.

The wild estimates and hype about AI-assisted programming paradigms come from people winning the dice roll in the former case and assuming that result is not only repeatable, but also carries over to the latter case.

> I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

I politely need to disagree with this.

Quick example. I'm wrapping up a project where I built an options back-tester from scratch.

The thing is, before starting this, I had zero experience with, or knowledge of:

1. Python (knew it was a language, but that's it)

2. Financial microstructure (couldn't have told you what an option was - let alone puts/calls/greeks/etc)

3. Docker, PostgreSQL, git, etc.

4. Cursor/IDE/CLIs

5. SWE principles/practices

This project used or touched every single one of these.
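None of the project's actual code appears in this thread, so as a purely hypothetical illustration of the puts/calls vocabulary mentioned above, here is a minimal sketch of what an options back-tester ultimately has to compute: option payoffs at expiry, netted against the premium paid. All names and numbers here are made up for illustration.

```python
# Hypothetical sketch only - not code from the project described above.
# A European call is the right to buy at the strike; a put, the right to sell.

def call_payoff(spot: float, strike: float) -> float:
    """Value of a call at expiry."""
    return max(spot - strike, 0.0)

def put_payoff(spot: float, strike: float) -> float:
    """Value of a put at expiry."""
    return max(strike - spot, 0.0)

# A back-tester replays historical prices and tallies payoffs against
# the premium paid. Here, a single toy trade on invented numbers:
premium = 2.50
pnl = call_payoff(spot=105.0, strike=100.0) - premium
print(pnl)  # 2.5
```

A real back-tester layers position sizing, transaction costs, and historical option-chain data on top of this, but the payoff arithmetic is the core.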

There were countless situations (the majority?) where I didn't know how to define the problem or how to articulate the solution.

It came down to interrogating AI at multiple levels (using multiple models at times).

  • I should have specified that I am referring to their usage for experienced developers working on established projects.

    I think that they have much more use for someone with no/little experience just trying to get proof of concepts/quick projects done because accuracy and adherence to standards don't really matter there.

    (That being said, if Google were still as useful a tool as it was in its prime, I think you'd have just as much success searching for your questions and finding the answers on forums, stackexchange, etc.)

  • ok, but if you don't have a lot of prior experience in this domain, how do you know your solution is good?

    • *zero experience

      Short answer: No idea. Because I don't trust my existing sources of feedback.

      Longer answer:

      I've only gotten feedback from two sources...

      AI (multiple models) and a friend that's a SWE.

      Despite my best efforts to counteract the AI's bias toward positive feedback, it keeps saying the work is ridiculously good and thinks I need to seriously consider a career change.

      My friend - who knows my lack of experience - had a hard time believing I did the work. But he's not a believable source - since friends won't give you cold, hard feedback.

      I'm thinking about sharing it on here when it's done.