Comment by rjpower9000

2 days ago

> I mean, you can. "Have an initial idea. Define a utility function. Apply gradient descent".

> It's just that all three steps are really really hard.

Haha, I was hoping for something a little easier than that! I'm not sure gradient descent would apply to all problem spaces, but I get the gist of what you're saying.

> TDD's insight is that gradient descent is relatively easy if the utility function is one-dimensional and monotonic. (Bonus point: it still works with the set of initial ideas being empty)

That's sort of what I was driving at. I certainly won't argue that you can't, with enough time and patience, at least exhaustively enumerate a solution space.

My impression of the TDD literature, e.g. things like https://en.wikipedia.org/wiki/Transformation_Priority_Premis..., is that it pushes the idea that you can systematically walk through a set of transformations and arrive at a program, and that we can thus avoid the "really really hard" steps you mention.

This hill-climbing style closely matches the monotonic utility function you mention. And if there are lots of interesting problems where this works for people, then that's great. I certainly won't object to having a system for approaching problems, or to the general idea of avoiding adding complexity too early.

My original motivation was really just observing what appears to happen when you apply these techniques _outside of their scope_. The failure mode becomes this sort of fascinating circling around a local minimum, with local changes that don't really make progress towards the ultimate goal. This is exactly what you'd see in an ML domain, so it's kind of interesting to see it play out in the real world.
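To make the local-minimum picture concrete, here's a toy sketch (the utility function, step size, and starting points are all invented for illustration; this isn't anyone's actual TDD workflow). Greedy hill climbing converges fine when the landscape has a single peak, and just circles a nearby bump when it doesn't:

```python
# Toy illustration: greedy hill climbing on a non-monotonic "utility" function.
# All numbers here are made up purely for the example.

def utility(x):
    # Two peaks: a worse local one near x = -1 and a better one near x = +1.
    return -(x**2 - 1) ** 2 + 0.5 * x

def hill_climb(x, step=0.1, iterations=200):
    for _ in range(iterations):
        # Try a small move in each direction; keep whichever scores best.
        x = max([x - step, x, x + step], key=utility)
    return x

print(round(hill_climb(-2.0), 2))  # ~ -0.9: stuck on the lesser local peak
print(round(hill_climb(0.5), 2))   # ~ 1.1: reaches the better peak
```

Same procedure both times; whether the local steps lead anywhere useful depends entirely on where you start and what the landscape looks like.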

Mea culpa: that was the part I found really interesting, and I probably stretched the point too broadly. Ultimately what I wanted to convey was that there's no general way to avoid the hard part.