Comment by 12345hn6789

2 months ago

LLMs cannot generate coherent sentences

LLM-written prose is too robotic

LLM output is too dependent on prompts to be interesting

LLMs take too much RAM to run effectively

LLMs take too much electricity to run locally

LLMs work locally but are a bit too slow for my taste

LLMs output mostly correct code, but it isn't applicable to my codebase

LLMs make tool calls to pull in additional context

LLM-outputted code works for most developers but not for my codebase <---- you are currently here

Isn't this template supposed to mean that all the previous objections are now obsolete?

  • I guess you could argue that the standard LLM sentence structure is too robotic, but prompting mostly fixes that.

    The rest is indeed no longer true.