Comment by shahzaibmushtaq

1 year ago

2 lessons to learn from this blog:

> these LLMs won’t replace software engineers anytime soon, because it requires a strong engineering background to recognize what is actually a good idea, along with other constraints that are domain specific.

> One issue with my experiments is that I’m benchmarking code improvement using Python, which isn’t the coding language developers consider when hyperoptimizing performance.

TBH I'm not sure how he arrived at "won’t replace software engineers anytime soon"

The LLM solved his task. With his "improved prompt," the code is good. The LLM in his setup was never given a chance to actually debug its own code. It took only five "improve this code" prompts to reach the final optimized result, which means the whole thing was solved, counting LLM execution time alone, in under a minute.
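For concreteness, the loop being described is roughly the following. This is a minimal sketch assuming the OpenAI Python SDK; the blog's actual harness, model, and prompts differ, and names like N_ROUNDS are illustrative, not from the post:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    N_ROUNDS = 5       # five "improve this code" rounds, per the comment above

    messages = [{"role": "user", "content": "Write Python code that solves the task."}]
    versions = []

    for _ in range(N_ROUNDS + 1):  # initial attempt plus five improvement rounds
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)
        code = resp.choices[0].message.content
        versions.append(code)
        # The model sees its own previous attempt but gets no error output
        # or benchmark feedback; it is never "given a chance to debug".
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": "write better code"})

Each entry in versions can then be benchmarked separately, which is roughly how the per-iteration speedups in the blog were compared.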

  • A non-engineer would not be able to interpret ANY of what he did here, or fix any of the bugs.

    • A non-engineer by definition would not be able to fix bugs.

But why does it matter that they won't be able to interpret anything? Just as with real engineers, you can ask the AI to provide an explanation digestible by an Eloi.

  • Did you read the two paragraphs quoted above and the one where he made that statement?

    My response to what you are "not sure" about is that Max is a software engineer (a good one, I am sure), and he kept iterating on the code until it was close to 100x faster, because he knew what "write better code" should look like.

    Now ask yourself: is there any chance a no-code/low-code developer would arrive at the conclusion Max deduced (and he is not the only one), the very conclusion you are not sure about?

    An experienced software engineer is capable of improving LLM-written code into better code with the LLM's help.