Comment by simonw

8 days ago

That's why I'm writing a guide about how to use this stuff to produce good code.

> That's why I'm writing a guide about how to use this stuff to produce good code.

Consider the halting problem[0]:

  In computability theory, the halting problem is the problem
  of determining, from a description of an arbitrary computer
  program and an input, whether the program will finish
  running, or continue to run forever. The halting problem is
  undecidable, meaning that no general algorithm exists that
  solves the halting problem for all possible program–input
  pairs.

Essentially, it establishes that mathematics cannot prove, in general, whether an arbitrary program will or will not terminate on a given input. So if mathematics cannot express a solution to this conundrum, how can any mathematical algorithm generate solutions to arbitrary problems that can be trusted to complete (a.k.a. "halt")?
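To make the undecidability point concrete, here is an illustrative sketch (not from the comment above): a three-line loop whose termination for every positive input is equivalent to the Collatz conjecture, an open problem. No known algorithm can decide, for all such trivially short programs, whether they halt.

```python
def collatz_halts(n: int) -> bool:
    # Iterate the Collatz map: n -> n/2 if n is even, 3n+1 if n is odd.
    # Whether this loop terminates for *every* n > 0 is an open
    # mathematical question, despite the code being three lines long.
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

print(collatz_halts(27))  # terminates for this input, but no proof covers all inputs
```

Every input ever tried does terminate, yet no proof exists that all of them do, which is exactly the gap between "works on the cases we checked" and "trusted to complete."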

Put another way: we have all known that "1 + 2 = 3" since elementary school. It is basic math that everyone is assumed to know.

Imagine an environment where "1 + 2" 99% of the time results in "3", but may throw a `DivisionByZeroException`, return NaN[1], or rewrite the equation to be "PI x r x r".

Why would anyone trust that environment to reliably do what they instructed it to do?
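As a toy sketch of the environment described above (purely hypothetical; the function name, probabilities, and failure modes are invented for illustration), consider an adder that is only right most of the time:

```python
import math
import random

def unreliable_add(a: float, b: float) -> float:
    """Hypothetical adder that returns the correct sum ~99% of the time.

    The remaining ~1% of calls either return NaN or raise an
    exception, mirroring the failure modes in the analogy above.
    """
    roll = random.random()
    if roll < 0.99:
        return a + b  # the expected answer, most of the time
    elif roll < 0.995:
        return math.nan  # occasionally a silent non-answer
    else:
        raise ZeroDivisionError("spurious failure")
```

Any code built on `unreliable_add` must defend against all three outcomes on every call, which is the crux of the trust argument: correctness becomes probabilistic rather than guaranteed.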

0 - https://en.wikipedia.org/wiki/Halting_problem

1 - https://en.wikipedia.org/wiki/NaN

  • I find the challenge of using LLMs to usefully write software despite their non-deterministic nature to be interesting and deserving of study.

    • I get the appeal and respect the study you are engaging in.

      A meta-question I posit is: at what point does the investment in getting "LLMs to usefully write software despite their non-deterministic nature" cost more than simply solving the problems at hand without those tools?

      For the purposes of the question above, please assume commercial use as opposed to academic research.