Comment by rjpower9000

3 days ago

I won't argue with the idea that you can test whether a particular program produces a result (we aren't much interested in programs that run forever in any case). Your observation about the origins of TDD makes sense in this regard: there may be domains where the TDD approach is more useful, but it's not a generic panacea, and assuming it is can lead to sadness.

My point was more about arbitrary programs and tasks. In general, I don't think there's any particular process you can follow that, in effect, reduces programming or math to a checklist. I'm sure there are techniques that can help structure many problems, but they're necessarily fairly general, e.g. "think about the problem, take a nap, think about it some more".

What I was hoping to highlight was more the danger of trying to hill-climb without having the right set of skills, i.e. of assuming you can solve a problem by following a particular pattern.

To give an example, I remember trying to derive the quadratic formula when I was young, and no amount of algebraic rearranging was going to help me until I knew how to complete the square! But once that type of trick is in your toolbox, you can begin to solve all sorts of other problems.
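
For reference, completing the square is exactly the trick that turns the general quadratic into the familiar formula:

```latex
ax^2 + bx + c = 0
\implies x^2 + \frac{b}{a}x = -\frac{c}{a}
\implies x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2
  = \left(\frac{b}{2a}\right)^2 - \frac{c}{a}
\implies \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
\implies x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

No amount of rearranging the first line gets you to the last without the middle step of adding $(b/2a)^2$ to both sides.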

Even the bowling calculator can become hard if you don't know what you're doing, not because TDD is bad or the wrong way to do things, but because you don't know how to reason about your program. Even bowling has 12^13 outcomes, with a bunch of silly special cases if you approach it the wrong way. Thus a truly naive approach might lead you down a weird direction.
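
To illustrate how small the problem is once you see the right decomposition, here is a minimal sketch of a ten-pin bowling scorer (the function name and list-of-rolls representation are my own illustrative choices, not anything from the thread; it assumes a complete, valid game):

```python
def bowling_score(rolls):
    """Score a complete game of ten-pin bowling from a flat list of pin counts."""
    score = 0
    i = 0  # index of the first roll of the current frame
    for frame in range(10):
        if rolls[i] == 10:  # strike: 10 plus the next two rolls
            score += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 plus the next roll
            score += 10 + rolls[i + 2]
            i += 2
        else:  # open frame: just the pins knocked down
            score += rolls[i] + rolls[i + 1]
            i += 2
    return score
```

The "silly special cases" (strikes, spares, tenth-frame bonus rolls) all collapse into the look-ahead reads above; approached roll-by-roll with per-frame state machines instead, the same rules sprawl.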

> reduces programming or math to a checklist

I mean, you can. "Have an initial idea. Define a utility function. Apply gradient descent".

It's just that all three steps are really really hard.

TDD's insight is that gradient descent is relatively easy if the utility function is one-dimensional and monotonic. (Bonus points: it still works when the set of initial ideas is empty.)
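
To make the analogy concrete, here is a hypothetical sketch (the names and test cases are invented for illustration) where the utility function is simply the count of passing tests, which is one-dimensional and only ever moves upward as you add code:

```python
def utility(impl, tests):
    """One-dimensional, monotonic 'utility': how many test cases pass."""
    passed = 0
    for arg, expected in tests:
        try:
            if impl(arg) == expected:
                passed += 1
        except Exception:
            pass  # a crashing implementation simply fails that case
    return passed

# A tiny suite for "square a number".
tests = [(0, 0), (1, 1), (2, 4), (3, 9)]

# Successive TDD "steps": each revision passes at least as many tests as the last.
step1 = lambda x: 0      # hard-coded first pass: satisfies one test
step2 = lambda x: x * x  # generalized: satisfies all four
```

Each red/green cycle is one gradient step: write a failing test, then make the smallest change that raises the pass count.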

The "tricks in your toolbox" are ultimately all about simplifying the utility function from "an exhaustive mapping of the problem domain to the solution domain proves valid" to something simpler. In your example, you mapped "solve a degree-two polynomial" to "complete the square, take the square root, solve a degree-one polynomial".

You _could_ apply these tricks mechanically (hey, that's what symbolic algebraic systems do in your example), but it would require an initial formal specification of the problem - that's ultimately what Norvig does for his Sudoku approach - and, for a general approach, a way to reason over formal specifications.

It always boils down to "how well do you understand the problem, and how well can you describe it formally". TDD works best for "not at all, not at all".

  • > I mean, you can. "Have an initial idea. Define a utility function. Apply gradient descent".

    > It's just that all three steps are really really hard.

    Haha, I was hoping for something a little easier than that! I'm not sure gradient descent would apply for all problem spaces, but I get the gist of what you're saying.

    > TDD's insight is that gradient descent is relatively easy if the utility function is one-dimensional and monotonic. (Bonus points: it still works when the set of initial ideas is empty.)

    That's sort of what I was driving at. I certainly won't argue you can't, with time and patience, at least exhaustively enumerate a solution space.

    My impression of the TDD literature e.g. things like https://en.wikipedia.org/wiki/Transformation_Priority_Premis... is that they're pushing an idea that you can systematically walk through a set of transformations and get a program, that we can thus avoid the "really really hard" steps you mention.

    This hill-climbing style matches closely with the monotonic utility function you mention. And if there are lots of interesting problems where this works for people, then that's great. I certainly won't object to having a system for approaching problems, and the general idea of trying to avoid adding complexity too early.

    My original motivation was really just observing what appears to happen when you apply these techniques _outside of their scope_. The failure mode becomes this sort of fascinating circling around a local minimum, with local changes that don't really make progress towards the ultimate goal. This is exactly what you'd see in an ML domain, so it's kind of interesting to see it in the real world.

    Mea culpa: that was the part I found really interesting. I likely tried to stretch the point too broadly. Ultimately, what I wanted to convey was that there's no general way to avoid the hard part.