Comment by quantdev1

16 days ago

> I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

I have to politely disagree with this.

Quick example. I'm wrapping up a project where I built an options back-tester from scratch.

The thing is, before starting this, I had zero experience with or knowledge of:

1. Python (knew it was a language, but that's it)

2. Financial microstructure (couldn't have told you what an option was - let alone puts/calls/greeks/etc)

3. Docker, PostgreSQL, git, etc.

4. Cursor/IDE/CLIs

5. SWE principles/practices

This project used or touched every single one of these.

There were countless situations (the majority?) where I didn't know how to define the problem or articulate the solution.

It came down to interrogating AI at multiple levels (using multiple models at times).

I should have specified that I am referring to their usage for experienced developers working on established projects.

I think they have much more use for someone with little or no experience who's just trying to get proofs of concept or quick projects done, because accuracy and adherence to standards don't really matter there.

(That being said, if Google were still as useful a tool as it was in its prime, I think you'd have just as much success searching for your questions and finding the answers on forums, stackexchange, etc.)

  • Thanks for clarifying, and great points.

    I could see how it would be dangerous in large-scale production environments.

    • Not just dangerous, but much less useful in general! Once you're making changes to a large piece of software, the context of your problem grows exponentially, and you can provide less and less of it as a summary to the LLM. Of course, "prompt engineering" is the art of distilling that context down as accurately as possible, but it yields diminishing returns in all but the most perfectly architected, functional solutions, where problems are well encapsulated.

ok, but if you don't have a lot of prior experience in this domain, how do you know your solution is good?

  • *zero experience

    Short answer: No idea. Because I don't trust my existing sources of feedback.

    Longer answer:

    I've only gotten feedback from two sources...

    AI (multiple models) and a friend who's a SWE.

    Despite my best efforts to shut down the AI's bias toward positive feedback, it keeps saying the work is ridiculously good and that I need to seriously consider a career change.

    My friend, who knows my lack of experience, had a hard time believing I did the work. But he's not a believable source, since friends won't give you cold, hard feedback.

    I'm thinking about sharing it on here when it's done.