Comment by overgard
7 days ago
LLMs can't really reason, in my opinion (and in the opinion of a lot of researchers). So, being a little harsh here, but given that these things are trained on vast swaths of open source software, I generally feel like what tools like Cursor are doing is best described as "fancy automated plagiarism". If what you're doing can be plagiarized from another source and adapted to your own context, then LLMs are pretty useful (and that does describe a LOT of work), although it feels like a bit of an ethical grey area to me. The good thing about using a library or a plain old Google search is that you can give credit, or at least know the author is fine with you not giving credit. Whereas with whatever Claude or ChatGPT spits out, I'm sure you won't get in trouble for it, but part of me feels like it's in a really weird area ethically (especially if it's being used to replace jobs).
Anyway, in terms of "interesting" work, if you can't copy it from somewhere else then I don't think LLMs are that helpful, personally. They can still give you small building blocks, but you can't really prompt them to build the whole thing.
What I find a bit annoying is that if you live inside the LLM you never develop an intuition for the docs, because you're always asking the LLM instead. Which is nice in some cases, but it prevents discovery in others. There are plenty of moments where I'm reading docs and learn something new about what a library does, or am surprised it lacks a certain feature. Although the same is true of talking to an LLM about it. The truth is that I don't think we really have a good idea yet of the best kind of human interface for LLMs as a computer-access tool.
FWIW, I've had ChatGPT suggest things I wasn't aware of. For example, I asked for the cleanest implementation of an ordered task list using SQLAlchemy entities. It gave me an implementation, but then suggested I use a feature SQLAlchemy already has built in for this exact use case.
SQLAlchemy's docs are vast and detailed, so it's not surprising I didn't know about the feature, even though I've spent plenty of time in them.
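For anyone curious, the built-in feature here is most likely SQLAlchemy's `ordering_list` collection from the `sqlalchemy.ext.orderinglist` extension, which keeps an integer position column in sync as you reorder items in the Python list. A minimal sketch, assuming hypothetical TaskList/Task models (the model names and `position` column are my illustration, not from the original comment):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class TaskList(Base):
    __tablename__ = "task_list"
    id = Column(Integer, primary_key=True)
    # ordering_list keeps each Task's `position` column in sync with
    # its index in this Python list -- no manual renumbering needed.
    tasks = relationship(
        "Task",
        order_by="Task.position",
        collection_class=ordering_list("position"),
    )

class Task(Base):
    __tablename__ = "task"
    id = Column(Integer, primary_key=True)
    list_id = Column(Integer, ForeignKey("task_list.id"))
    position = Column(Integer)
    title = Column(String)

# Reordering is plain list manipulation; positions update automatically:
#   todo.tasks.insert(0, Task(title="urgent"))  # shifts the others down
#   todo.tasks.pop(2)                           # closes the gap
```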