Comment by groby_b
3 days ago
> reduces programming or math to a checklist
I mean, you can. "Have an initial idea. Define a utility function. Apply gradient descent".
It's just that all three steps are really really hard.
TDD's insight is that gradient descent is relatively easy if the utility function is one-dimensional and monotonic. (Bonus points: it still works even when the set of initial ideas is empty.)
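To make that concrete as a toy sketch (the predicate tests and the neighbour function below are invented for illustration, not anyone's actual workflow): treat the number of failing tests as the utility, and greedily accept any local change that strictly reduces it.

```python
def failing(tests, program):
    """The one-dimensional, monotonic utility: how many tests fail."""
    return sum(1 for t in tests if not t(program))

def tdd_descent(tests, neighbours, program):
    """Greedily accept any local change that strictly reduces failures."""
    while failing(tests, program) > 0:
        best = min(neighbours(program), key=lambda p: failing(tests, p))
        if failing(tests, best) >= failing(tests, program):
            break  # no local change helps: we're at a local minimum
        program = best
    return program

# Toy "program": a single constant c, with the spec f(x) = x + 2.
# The tests are staged so each one is reachable by a small step --
# that's the TDD trick: one test at a time, in a passable order.
tests = [lambda c: c > 0, lambda c: c > 1, lambda c: c == 2]
steps = lambda c: [c - 1, c + 1]
print(tdd_descent(tests, steps, 0))  # prints 2
```

Note that if all three tests had been written up front as `c == 2`-style checks, the utility would be flat everywhere except the answer and the descent would stall immediately; ordering the tests is what keeps each step easy.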
The "tricks in your toolbox" are ultimately all about simplifying the utility function from "an exhaustive mapping of the problem domain to the solution domain proves valid" to something simpler. In your example, you mapped the problem from "solve an order-two polynomial" to "complete the square, take the square root, solve an order-one polynomial".
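That reduction can be written out directly as a minimal sketch (the function name and step comments are mine):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0, by the reduction above."""
    # Step 1: complete the square. This isolates the squared term:
    #   a(x + b/2a)^2 = (b^2 - 4ac) / 4a
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    # Step 2: take the square root of both sides.
    root = math.sqrt(disc)
    # Step 3: solve the two remaining order-one equations.
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

print(solve_quadratic(1, -3, 2))  # x^2 - 3x + 2 = 0 -> [1.0, 2.0]
```

Each step is only solving an easier sub-problem, which is the whole trick: the hard utility function ("is this a valid root?") got traded for three simple ones.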
You _could_ apply these tricks mechanically (hey, that's what symbolic algebra systems do in your example), but it would require a formal specification of the problem up front - that's ultimately what Norvig does for his Sudoku approach - and, for a general approach, a way to reason over formal specifications.
It always boils down to "how well do you understand the problem, and how well can you describe it formally". TDD works best for "not at all, not at all".
> I mean, you can. "Have an initial idea. Define a utility function. Apply gradient descent".
> It's just that all three steps are really really hard.
Haha, I was hoping for something a little easier than that! I'm not sure gradient descent would apply to all problem spaces, but I get the gist of what you're saying.
> TDDs insight is that gradient descent is relatively easy if the utility function is one-dimensional and monotonic. (Bonus point, it still works with the set of initial ideas being empty)
That's sort of what I was driving at. I certainly won't argue you can't, with time and patience, at least exhaustively enumerate a solution space.
My impression of the TDD literature e.g. things like https://en.wikipedia.org/wiki/Transformation_Priority_Premis... is that they're pushing an idea that you can systematically walk through a set of transformations and get a program, that we can thus avoid the "really really hard" steps you mention.
This hill-climbing style matches closely with the monotonic utility function you mention. And if there are lots of interesting problems where this works for people, then that's great. I certainly won't object to having a system for approaching problems, and the general idea of trying to avoid adding complexity too early.
My original motivation was really just observing what appears to happen when you apply these techniques _outside of their scope_. The failure mode is this sort of fascinating circling around a local minimum: local changes that don't really make progress toward the ultimate goal. That's exactly what you'd see in an ML setting, so it's kind of interesting to watch it play out in the real world.
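That circling is easy to reproduce on a toy landscape (the cost function and starting point below are made up purely to show the effect): a greedy stepper parks itself in the nearest basin, and from there every local change looks like no progress.

```python
def greedy_min(f, x, step=1):
    """Move to a strictly better neighbour until none exists."""
    while True:
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            return x  # stuck: every local change fails to improve
        x = best

# A landscape with two basins; the deeper one is near x = -2.
landscape = lambda x: (x * x - 4) ** 2 + 3 * x
print(greedy_min(landscape, 3))  # prints 2: the shallow local basin
```

Starting at 3, it slides into the basin at x = 2 and stops, even though landscape(-2) is much lower; you'd see exactly this churn of small non-improving edits when the technique is applied outside its scope.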
Mea culpa: that was the part I found really interesting, and I likely stretched the point too broadly. Ultimately what I wanted to convey was that there's no general way to avoid the hard part.