Comment by jaredcwhite
19 hours ago
I'm sad programmers lacking a lot of experience will read this and think it's a solid run-down of good ideas.
I’m more afraid that some manager will read this and impose rules on their team. On the surface, having more test coverage looks universally good, and a manager won’t consider the trade-offs. I have a gut feeling that Goodhart’s Law accelerated with AI is a dangerous mix.
Goodhart's Law works on steroids with AI. If you tell a human dev "we need 100% coverage," they might write a few dummy tests, but they'll feel shame. AI feels no shame - it has a loss function. If the metric is "lines covered" rather than "invariants checked," the agent will flood the project with meaningless tests faster than a manager can blink. We'll end up with a perfectly green CI/CD dashboard and a completely broken production, because the tests will verify tautologies, not business logic.
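The gap between "lines covered" and "invariants checked" is easy to demonstrate. Both tests below execute every line of a (hypothetical) `apply_discount` function, so a coverage tool scores them identically - but only the second one would catch a broken implementation:

```python
def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - pct / 100)

# Tautological test: 100% line coverage, asserts nothing meaningful.
def test_coverage_only():
    result = apply_discount(100.0, 20.0)
    assert result == result  # always true, catches no bugs

# Invariant test: pins down the actual business rules.
def test_invariants():
    assert apply_discount(100.0, 20.0) == 80.0          # known case
    assert apply_discount(100.0, 0.0) == 100.0          # zero discount is identity
    assert 0.0 <= apply_discount(50.0, 30.0) <= 50.0    # never exceeds original price

test_coverage_only()
test_invariants()
```

Swap the function body for `return price` and the first test still passes green, which is exactly the failure mode being described.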
"fast, ephemeral, concurrent dev environments" seems like a superb idea to me. I wish more projects would do it, it lowers the barrier to contributions immensely.
> "fast, ephemeral, concurrent dev environments" seems like a superb idea to me.
I've worked at one (1) place that, whilst not quite fully that, did have a spare dev environment you could claim temporarily for deploying changes, doing integration tests, etc. Super handy when people are working on (often wildly) divergent projects and you need at least one stable dev environment + integration testing.
Been trying to push this at $CURRENT without much success, but that's largely down to lack of cloudops resources (although we do have a sandbox environment, it's sufficiently different to dev that it's essentially worthless).
Yeah, this is something I'd like more of outside of Agentic environments; in particular for working in parallel on multiple topics when there are long-running tasks to deal with (eg. running slow tests or a bisect against a checked out branch -- leaving that in worktree 1 while writing new code in worktree 2).
I use devenv.sh to give me quick setup of individual environments, but I'm spending a bit of my break trying to extend that (and its processes) to easily run inside containers that I can attach Zed/VSCode remoting to.
It strikes me that (as the article points out) this would also be useful for using Agents a bit more safely, but as a regular old human it'd also be useful.
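For the parallel-worktree workflow described above, git's built-in `git worktree` is enough to park a slow task in one checkout while editing in another. A minimal sketch in a throwaway repo (paths and branch names are illustrative):

```shell
set -e
# Demo in a throwaway repo so the commands are self-contained
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git branch fix/flaky-test

# Add a second working tree: run the slow bisect/tests there
# while normal work continues in the main checkout
git worktree add -q ../demo-bisect fix/flaky-test
git worktree list

# Clean up once the long-running task is done
git worktree remove ../demo-bisect
```

Each worktree is a full checkout sharing one object store, so this is cheap compared with a second clone - the missing piece, as noted, is wiring each worktree into its own container or devenv.sh environment.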
What’s bad about them? We make things baby-safe and easy to grasp and discover for LLMs. Understandability and modularity will improve.
I have almost 30 years of experience as a programmer and all of this rings true to me. It precisely matches how I've been working with AI this year and it's extremely effective.
Could you be more specific in your feedback, please?
100% test coverage, for most projects of modest size, is extremely bad advice.
Pre-agents, 100% agree. Now it's not a bad idea; the cost to do it isn't terrible, though there are diminishing returns as you get >90-95%.
7 replies →
laziness? unprofessionalism? both? or something else?
3 replies →