Comment by AIPedant

3 months ago

I realize there were bigger problems, but this makes me very sad:

  Learning - Over the past year my workflow has changed immensely, and I regularly use AI to learn new technologies, discuss methods and techniques, review code, etc. The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance. While Bevy and Rust evolve rapidly - which is exciting and motivating - the pace means AI knowledge lags behind, reducing the efficiency gains I have come to expect from AI assisted development. This could change with the introduction of more modern tool-enabled models, but I found it to be a distraction and an unexpected additional cost.

In 2023 I wondered whether LLM code generation would throttle progress in programming language design. I was thinking particularly of Idris and other dependently typed languages, which can do deterministically correct code generation. But it applies to any form of language innovation: why spend time learning a new programming language that abstracts boilerplate away with 100% reliability when an LLM can slop out the boilerplate with 95% reliability? Some people (me) will say this is unacceptably lazy and that programmers should spend time reading things; others will point to the expected value of dev costs or whatever. Very depressing.
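
To make the contrast concrete, here is a minimal sketch (my own illustration, not from the comment) of what "the language abstracts boilerplate away with 100% reliability" looks like in practice. It uses Rust's built-in derive macros as a stand-in; dependent types as in Idris go much further, but the principle is the same: the compiler generates the boilerplate and guarantees it stays correct, whereas LLM-generated equivalents can silently drift when the type changes.

  // The compiler generates these impls itself; they are correct by
  // construction and regenerate automatically if fields are added.
  #[derive(Debug, Clone, PartialEq)]
  struct Player {
      name: String,
      score: u32,
  }

  fn main() {
      let a = Player { name: "ada".into(), score: 3 };
      let b = a.clone();

      // Equality and debug formatting come from the derived impls,
      // not from hand-written (or LLM-generated) code that could fall
      // out of sync with the struct definition.
      assert_eq!(a, b);
      println!("{:?}", a);
  }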