Comment by dnautics

2 days ago

Think about Fitts's law: the fastest target to click is the one already under the cursor. For an LLM, the least context-expensive feedback is no feedback at all.

I think strongly typed codebases sometimes contain bad habits that you can "get away with" because of the typing and its feedback loops, and the LLM has learned those habits.

https://x.com/neogoose_btw/status/2023902379440304452?s=61

This is well put. If the LLM gets the type wrong, then we're already in a failure scenario: a feedback loop of back-and-forth changes.
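To make the feedback loop concrete, here is a minimal, hypothetical sketch (the `greet`/`User` names are invented for illustration): the compiler only speaks up after the model has already produced a wrong call, and the correction round-trip is exactly the back-and-forth described above.

```typescript
// Hypothetical strongly typed API surface.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// If a model emits `greet("alice")`, tsc rejects it:
//   Argument of type 'string' is not assignable to parameter of type 'User'.
// Fixing that requires another model turn -- the feedback loop in question.
console.log(greet({ id: 1, name: "alice" }));
```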

LLMs are not really good at this. The idea that LLMs benefit from TypeScript is a case of people anthropomorphizing AI.

The kinds of mistakes AI makes are very different. It's WAY better than humans at copying stuff verbatim accurately and nailing the 'form' of the logic. What it struggles with is 'substance', because it doesn't have a complete worldview, so it doesn't fully understand what we mean or what we want.
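A contrived illustration of the form/substance split (the `daysBetween` function is invented for this sketch): both versions below type-check identically, so the type system gives a model no signal about which one matches the intent.

```typescript
// Intent: "days from `start` until `end`".
function daysBetween(start: Date, end: Date): number {
  // A substance error -- operands swapped -- would still type-check:
  //   return (start.getTime() - end.getTime()) / 86_400_000;
  // The compiler sees the same types either way; only the meaning differs.
  return (end.getTime() - start.getTime()) / 86_400_000;
}

console.log(daysBetween(new Date("2024-01-01"), new Date("2024-01-03"))); // 2
```

This is the sense in which TypeScript helps with form but not substance.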

LLMs struggle more with requirements engineering and architecture, since architecture ties into anticipating requirements changes.

  • > The kinds of mistakes AI makes are very different.

    I think that's a bit extreme. If a programming language has good ergonomics for a short-attention-span human, it will likely be better for an LLM too.

    However, to make good predictions about what an LLM will or will not be good at, you should have a good "theory of mind" for LLMs, one that will in some ways differ from a human's.