
Comment by schoen

4 days ago

Isn't there a less-conceptual (but still conceptual) problem that correctness of software is commonly abrupt rather than continuous? You don't get a series of almost-right programs gradually approximating the right program, you have a correct program and variations on it may fail completely.

Of course, whether this is literally true depends on what sort of algorithmic problem you're approaching.

But there must be many real-world problems in which the very-nearly-correct program produces completely wrong behavior that doesn't resemble the correct behavior at all. In those circumstances, you couldn't expect to find the correct program empirically through iterative improvements.

Edit: Maybe the incremental test-driven approach would work in cases where you have an externally given specification or design that already breaks the algorithmic part up into smaller, easier, and more verifiable pieces.

Obvious counters aside (like syntax issues or whatever), I have almost the opposite intuition. Most of my programs start out as partial solutions to a problem I don't fully understand, and it is only through interaction with the environment (users, or sometimes other machines) that the edge cases and incorrect assumptions become clear. These programs have a lifecycle of refinement to deployment to analysis to refinement, and repeat. At each step they are workable or almost so, and over time the solutions start to map the domain more correctly. Sometimes (as in business) the domain is evolving simultaneously!

(can give examples if anyone's interested but this is getting long already)

I imagine this wouldn't work so well for hard algorithmic stuff where there are mathematical properties you need to be aware of and maintain. But I find most problems I solve are more organic - people are quite resilient to fuzzy boundaries, so people-facing stuff tends to have that property too. There's a large fuzzy space of "workable solutions", so to speak, and navigating that space is kind of inevitable if you want a quality solution.

Perhaps I'm just not intelligent enough to one-shot that kind of stuff =P

  • >I imagine this wouldn't work so well for hard algorithmic stuff where there are mathematical properties you need to be aware of and maintain

    Mathematical properties are often even more ideal candidates for being encoded into either types or property tests.

    Business-oriented code is usually where most people see TDD (as it is normally taught) fall down: where you need "some kind of dashboard with x, y and z" but the exact details aren't precisely nailed down. However, if you do it right, these scenarios work extremely well with snapshot-test-driven development. Most people just don't do it or don't know how.
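
    To make "encoding mathematical properties as tests" concrete, here's a minimal sketch in the spirit of property-based testing, using plain random data rather than a library like Hypothesis; `my_sort` is a hypothetical stand-in for the code under test:

```python
import random

def my_sort(xs):
    # Hypothetical implementation under test; sorted() stands in here.
    return sorted(xs)

def check_sort_properties(trials=100):
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = my_sort(xs)
        # Property 1: the output is in non-decreasing order.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(out) == sorted(xs)

check_sort_properties()
```

    The point is that you state invariants the implementation must preserve, rather than enumerating example inputs and outputs by hand.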

    • Okay interesting, I've never heard of snapshot testing. I'll have to play with it some time.

      I agree that mathematical problems are much easier to test, but I think only once you know the mathematics. Like I think it's possible that TDD fell flat for the sudoku solver because the dude just didn't know what properties he wanted. In that situation writing tests is like casting bones.

      But I'm not convinced one way or the other... for me tests have always been most useful for regression and basic quality checks. Which is very useful! Means you never (hopefully anyway) take a step backwards as you evolve a program.

    • Aren't snapshot tests regression tests? "Snapshot test driven development" to me implies that you would generate the snapshot you want (somehow) and write code until the output matched the snapshot.
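
      As a hedged sketch of that reading (the names here are made up, not from any particular framework): you write down the snapshot you want first, then iterate on the code until its output matches byte for byte:

```python
# The "snapshot" is written down before the implementation exists.
EXPECTED_SNAPSHOT = """\
Dashboard
  users: 3
  errors: 0
"""

def render_dashboard(users, errors):
    # Hypothetical code under development; iterate until the test passes.
    lines = ["Dashboard", f"  users: {users}", f"  errors: {errors}"]
    return "\n".join(lines) + "\n"

def test_dashboard_snapshot():
    assert render_dashboard(3, 0) == EXPECTED_SNAPSHOT

test_dashboard_snapshot()
```

      In practice, most snapshot frameworks record the first output automatically and flag later divergence, which does make them regression tests once the snapshot is accepted.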


I always use the following analogy:

If your customer orders a feature and you implement all the code, but the button to invoke the feature is missing, then you have delivered nothing.

If you just add the button but implement nothing behind it, you have delivered the feature. It's just still buggy.

> Isn't there a less-conceptual (but still conceptual) problem that correctness of software is commonly abrupt rather than continuous? You don't get a series of almost-right programs gradually approximating the right program, you have a correct program and variations on it may fail completely.

I consider it plausible that such a topology could exist (at least in many situations). The problem, rather, is that such a topology would likely behave very differently from users' expectations.

Yes. The difference between "A program that does what you want" and "A program that crashes on startup" can be one character.
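
A contrived sketch of that (not from the thread): changing `<` to `<=` in a loop bound is a one-character edit that turns a working function into one that crashes:

```python
def total(xs):
    t = 0
    i = 0
    while i < len(xs):   # correct bound
        t += xs[i]
        i += 1
    return t

def total_broken(xs):
    t = 0
    i = 0
    while i <= len(xs):  # one character more: reads past the end
        t += xs[i]       # raises IndexError on the final iteration
        i += 1
    return t

assert total([1, 2, 3]) == 6
# total_broken([1, 2, 3]) raises IndexError rather than returning a
# slightly-wrong answer: the failure is abrupt, not gradual.
```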

Working large systems overwhelmingly started out as working small systems, with working systems all in-between.

This is not an endorsement of TDD, but it does show that there is a correctness path from small to large, usually without large leaps in between, and taking such a path tends to be the most successful strategy.