Comment by 59nadir

2 days ago

I'm personally somewhere in between these two positions. I've used ChatGPT to get unstuck a few times this past week because I was at the end of my rope with some GPU crashes I couldn't make heads or tails of. I've since used it for less headache-inducing things, and overall it's been an interesting experience.

For research I'm enjoying asking ChatGPT to annotate its responses with sources and reading those; in some cases I've found SIGGRAPH papers I wouldn't have stumbled upon otherwise, and it's nice to get them all in one response.

ChatGPT (4o, if it's of any interest) is very knowledgeable about DirectX12 (which we switched to just this week), and I've gained tons of peripheral knowledge about the things I've been battling, but only one time out of four has it actually diagnosed the issue directly; the other three times the real cause was something it never brought up or flagged in any meaningful way. What helped was really just the act of writing about the problem and thinking through everything around it, and for that it's been very useful.
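To give a sense of what diagnosing these crashes involves, here's a minimal sketch (assuming the common device-removed kind of crash, which the comment doesn't specify) that enables the standard D3D12 debug layer and DRED before device creation. This is the usual first step for this class of bug, not something ChatGPT came up with:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Enable the D3D12 debug layer and DRED (Device Removed Extended Data)
// before creating the device, so a device-removed crash leaves a trail.
void EnableD3D12Diagnostics()
{
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();

    ComPtr<ID3D12DeviceRemovedExtendedDataSettings> dred;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&dred))))
    {
        // Breadcrumbs record which command-list operations completed before
        // the fault; page-fault data reports the faulting GPU virtual address.
        dred->SetAutoBreadcrumbsEnablement(D3D12_DRED_ENABLEMENT_FORCED_ON);
        dred->SetPageFaultEnablement(D3D12_DRED_ENABLEMENT_FORCED_ON);
    }
}
```

After a removal, you can query the device for ID3D12DeviceRemovedExtendedData to read the breadcrumb and page-fault output; of course, none of that helps with the other problem mentioned below, that the crash only reproduces on specific hardware.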

Realistically, if someone let an agent built on these models loose on our code base, it would likely waste days of time and still not fix the issue. Even worse, its attempts would have to be tested on a specific GPU to even trigger the issue in the first place.

It seems to me that fancy auto-complete is still likely the best these tools can do, and I actually like them for that. I don't use LLM-assisted auto-complete anymore, but I did use GitHub Copilot back in 2022, and it was more productive than my brief tests of agents have been.

If I were to use LLMs regularly for actual programming, it would most likely be just for tab-completion of the rest of an expression, or one line at a time, and probably with local LLMs.