Comment by marcus_holmes
3 days ago
Back in the '90s I ran a dev team building Windows applications in VB, and I had a rule that the dev machines had to be lower-specced than the user machines we were building for.
It was unpopular, because devs love the shiny. But it worked: we had nice, quick applications, which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing, ofc), or because I didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine, it never got any faster than that. If users complained that a process was slow but it worked fine on the dev's machine, it proved almost impossible to make that process faster. But if the dev's first experience of a process while coding it up was slow, we'd work at making it faster as we built it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
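To make the idea concrete, here is a minimal sketch of the "random delays for dev machines" part, assuming a Node/Express dev server; the framework, route, and delay numbers are purely illustrative, not anything from the setup described above:

    // Inject 200-800 ms of artificial latency into every request, but only
    // outside production, so slow endpoints feel slow on dev machines too.
    import express from "express";

    const app = express();

    if (process.env.NODE_ENV !== "production") {
      app.use((req, res, next) => {
        const delayMs = 200 + Math.random() * 600;
        setTimeout(next, delayMs);
      });
    }

    // Hypothetical endpoint standing in for "something that appears to be
    // quite simple" but can hide expensive work behind it.
    app.get("/report", (_req, res) => {
      res.send("report");
    });

    app.listen(3000);

The same effect can also be had at the OS level on the dev box (e.g. Linux tc netem, something like `tc qdisc add dev eth0 root netem delay 200ms 100ms`) without touching application code at all.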
> Like maybe we should have dev servers that are deliberately throttled back
This is a good point. Datasets are often smaller in dev. If a reasonable copy of live data were used, devs would get an intuition for what is making things slow. That doesn't work for live data that's too big to replicate on a developer's setup, though.
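One middle ground when a full copy is too big is to seed the dev database with a random sample of live rows, so the data at least keeps a production-like shape. A rough sketch assuming Postgres and the node-postgres ("pg") client; the table, columns, and connection URLs are made up for illustration:

    import { Pool } from "pg";

    const prod = new Pool({ connectionString: process.env.PROD_READ_REPLICA_URL });
    const dev = new Pool({ connectionString: process.env.DEV_DB_URL });

    async function seedOrdersSample() {
      // TABLESAMPLE keeps roughly 1% of rows, preserving the skew and row
      // width of live data without copying the whole table.
      const { rows } = await prod.query(
        "SELECT id, customer_id, total, created_at FROM orders TABLESAMPLE SYSTEM (1)"
      );
      for (const r of rows) {
        await dev.query(
          "INSERT INTO orders (id, customer_id, total, created_at) VALUES ($1, $2, $3, $4) ON CONFLICT (id) DO NOTHING",
          [r.id, r.customer_id, r.total, r.created_at]
        );
      }
    }

    seedOrdersSample().then(() => Promise.all([prod.end(), dev.end()]));

Any real pipeline like this would of course also need to scrub or anonymize sensitive fields before the sample lands on a dev machine.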