
Comment by rqtwteye

2 days ago

I have been in the workforce for almost 30 years now and I believe that everybody is getting more squeezed, so they don’t have the time or energy to do a proper job. The expectation is to get it done as quickly as possible and not do more unless told to.

In SW development in the 90s I had much more time for experimentation to figure things out. In recent years you often have a manager to whom you basically have to justify everything you do, and there is always a huge pile of work that never gets smaller. So you just hurry through your tasks.

I think Google had it right for a while with their 20% time, where people could do what they wanted to do. As far as I know, that’s over.

People need some slack if you want to see good work. They aren’t machines that can run constantly at 100% utilization.

> In recent years you often have a manager to whom you basically have to justify everything you do, and there is always a huge pile of work that never gets smaller. So you just hurry through your tasks.

This has been my exact experience. Absolutely everything is tracked as a work item with estimates. Anything you think should be done needs to be justified and tracked the same way. If anything ever takes longer than the estimate that was invariably just pulled out of someone's ass (because it's impossible to accurately estimate development unless you're already ~75% of the way through doing it, and even then it's a crapshoot), you need to justify that in a morning standup too.

The end result of all of this is every project getting bogged down, stuck on the first version of whatever architecture was thought up right at the beginning, and piles of tech debt that never get fixed because nobody who actually understands what needs to be done has the political capital to get past the aforementioned justification filter.

  • Also this push to measure everything means that anything that can’t be measured isn’t valued.

    One of your teammates consistently helps unblock everyone on the team when they get stuck? They aren’t closing as many tickets as others so they get overlooked on promotions or canned.

    One of your teammates takes a bit longer to complete work, but it’s always rock solid and produces fewer outages? Totally invisible. Plus they never get to look like a hero, the way people do when they save the company from the consequences of their own shoddy work.

    • The biggest mistake those employees make on their way to getting overlooked is assuming their boss knows.

      Everyone needs to advocate for themselves.

      A good boss will be getting feedback from everyone and staying on top of things. A mediocre boss will merely see "obvious" things like "who closed the most tickets." A bad boss may just play favorites and game the system on their own.

      If you've got a bad boss who doesn't like you, you're likely screwed regardless. But most bosses are mediocre, not actively bad.

      And in that case, the person who consistently helps unblock everyone needs to be advertising that to their manager. The person whose work doesn't need revisiting, who doesn't cause incidents, needs to be hammering that home to their manager. You can do that without throwing your teammates under the bus, but you can't assume omnipotence or omniscience. And you can't wait until the performance review cycle to do it; you have to demonstrate it as an ongoing thing.

      6 replies →

    • What you're describing was precisely our culture at the last startup.

      One group plans ahead and overall does a solid job, so they're rarely swamped and never pull all-nighters. People there are never promoted; they're thought of as slacking and un-startup-like. Top performers leave regularly because of that.

      The other group is behind on even the "blocker"-level issues, people are stressed and overworked, weekends are barely a thing. But — they get praised for hard work. The heroes. (And then leave after burning out completely.)

      (The company was eventually acquired, but employees got pennies. So it worked out well for the founders, while summarily ratfucking everyone else involved. I'm afraid this is very common.)

      1 reply →

    • > Also this push to measure everything means that anything that can’t be measured isn’t valued.

      Never thought I'd see an intelligent point made on hackernews, but there it is. You are absolutely correct. This really hit home for me.

      1 reply →

  • It's fascinating that you end up sort of doing the work twice: you build an Excel (or Jira) model of the work alongside the actual work to be done.

    Often this extends to the entire organization, where you have like this parallel dimension of spreadsheets and planning existing on top of everything.

    Eats resources like crazy to uphold.

    • Jira is already almost like "productivity theater" where engineers chart the work for the benefit of managers, and managers of managers only. Many programmers already really resent having to deal with it. Soon it will be a total farce, as engineers using MCP Jira servers have LLMs chart the "work" and manage the tickets for them, as managers do the same in reverse, instructing LLMs to summarize the work being done in Jira.

      It'll be nothing but LLMs talking to other LLMs under the guise of organizational productivity, in which the only ones deriving any value from this effort are the companies charging for the input and output tokens. Except they are likely operating at a loss...

      15 replies →

    • Yes but metrics! How can the CEO look like they know what's happening without understanding anything if they don't have everyone producing numbers?

    • This compounds with each _team_ modeling the work in jira/excel too!

  • > Absolutely everything is tracked as a work item with estimates. Anything you think should be done needs to be justified and tracked the same way.

    My grandpa once said something that seemed ridiculous but makes a lot of sense: that every workplace should have a “heavy” who steals a new worker’s lunch on the first day, just to see if he asserts himself. Why? Not to haze or bully but to filter out the non-fighters so that when management wants to impose quotas or tracking, they remember that they’d be enforcing this on a whole team of fighters… and suddenly they realize that squeezing the workers isn’t worth it.

    The reason 1950s workplaces were more humane is that any boss who tried to impose this shit on workers would have first been laughed at, and then if he tried to actually enforce it by firing people, it would’ve been a 6:00 in the parking lot kinda thing.

    • > steals a new worker’s lunch on the first day, just to see if he asserts himself

      > to filter out the non-fighters

      This is bullying and hazing.

    • Many of the workers in the 1950s were combat veterans who had lived through some shit and weren't as easy to push around. Contrast that to today when a lot of people tend to panic over minor hazards like a respiratory virus with a >99% survival rate. That cowardice puzzled me until I realized that a lot of younger people have led such sheltered lives that they have never experienced any real hardship or serious physical danger so they lack the mental resilience to cope with it. They just want to be coddled and aren't willing to fight for anything.

    • That generation had it more together as citizens, and they held on to power for a long time. Postwar all of the institutions in the US grew quickly, and the WW2 generation moved up quickly as a result. The boomer types sat in the shadows and learned how to be toxic turds, and inflicted that on everyone.

      10 replies →

    • What if the workers decide the work is imposing on them? Maybe that's a good thing but it could go too far.

    • > The reason 1950s workplaces were more humane is that any boss who tried to impose this shit on workers would have first been laughed at, and then if he tried to actually enforce it by firing people, it would’ve been a 6:00 in the parking lot kinda thing.

      That era also had militant labor organization and real socialist and communist parties in the US. Anticommunism killed all that and brought us to the current state of affairs where employers that respect their employees even a little bit are unicorns.

      2 replies →

This is my experience as well. In the late 90s/early 2000s I had the luxury of a lot of time to deeply learn Unix, Perl, Java, web development, etc., and it was all self-directed. Now with Agile, literally every hour is accounted for, though we of course have other ways of wasting time, like overestimating tasks and creating unnecessary do-nothing stories to inflate metrics and justify dead space in the sprint.

  • >> literally every hour is accounted for

    I saw one company where early-career BA/PMs (often offshore) would sit alongside developers and "keep them company" almost all day via Zoom.

    • Everyone's complaining about that as a developer, and rightly so. But that can't be easy for the PMs, either, trying to find a way to "add value" when they have no idea what's going on.

      I'd expect there to be some "unexpected network outages" regularly in that kind of situation...

    • This is kind of cool as an alternative process to develop apps with. Literally product in a Zoom window telling you what to build as you go along. No standups, no refinement, no retros, etc. Just a PM who really knows what the customer needs and a developer building it as they go.

      2 replies →

  • If you're creating do-nothing stories to justify work-life balance and avoid burnout, your organization has a problem. Look into Extreme Programming and Sustainable Pace.

    • I think that's the observation being made. Most people respond to the organizational problem with the only tools they have, which manifests as that.

      Usually management knows about the problem and doesn't care.

      1 reply →

  • And yet well over half of professional developers have productivity so low that if they get laid off, the team gets the same amount done...

> People ... aren’t machines that can run constantly at 100% utilization.

You also can't run machines at 100% utilisation & expect quality results. That's when you see tail latencies blow out, hash maps lose their performance, physical machines wear supra-linearly... The list goes on.
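
The queueing math backs this up. A minimal sketch (my own illustration, using the textbook M/M/1 formula with a hypothetical 10 ms service time, not numbers from this thread): mean response time grows as service_time / (1 - utilization), so it blows up hyperbolically as you approach 100%. Hash maps behave the same way, with probe counts growing on the order of 1/(1 - load factor).

  # M/M/1 queue: mean response time W = S / (1 - rho),
  # where S is the mean service time and rho the utilization.
  service_time = 0.010  # hypothetical 10 ms per request

  for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
      w = service_time / (1 - rho)
      print(f"utilization {rho:.0%}: mean response {w * 1000:.0f} ms")

  # utilization 50%: mean response 20 ms
  # utilization 80%: mean response 50 ms
  # utilization 90%: mean response 100 ms
  # utilization 95%: mean response 200 ms
  # utilization 99%: mean response 1000 ms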

  • The standard rule for CPU-bound RPC server utilization is 80%. Any less and you could use fewer machines; any more and latency starts to take a hit. This is when you're optimizing for latency. Throughput is different.

    • Doesn't this depend on the number of servers, crash rates, and recovery times? I wouldn't feel confident running 3 servers at 80% capacity in ultra-low-latency scenarios. A single crash would overwhelm the other 2 servers in no time.
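
      Rough arithmetic on that (my own sketch with hypothetical numbers): if one of n servers crashes and its load redistributes evenly, the survivors jump to rho * n / (n - 1), so headroom for failure depends heavily on fleet size.

        # Utilization of the survivors after one of n servers crashes,
        # assuming the load redistributes evenly across the rest.
        def after_one_failure(n: int, rho: float) -> float:
            return rho * n / (n - 1)

        for n in (3, 5, 10, 20):
            print(f"{n} servers at 80%: survivors at {after_one_failure(n, 0.8):.0%}")

        # 3 servers at 80%: survivors at 120%  (overloaded)
        # 5 servers at 80%: survivors at 100%  (saturated)
        # 10 servers at 80%: survivors at 89%
        # 20 servers at 80%: survivors at 84%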

      1 reply →

  • Difference is machines break and that costs lots of money.

    People just quit; some businesses consider that a better outcome.

> I have been in the workforce for almost 30 years now and I believe that everybody is getting more squeezed, so they don’t have the time or energy to do a proper job. The expectation is to get it done as quickly as possible and not do more unless told to.

That's my impression as well, but I'd stress that this push is not implicit or driven by metrics or Jira. This push is sold as the main trait of software projects, and what differentiates software engineering from any other engineering field.

Software projects are considered adaptable, and all projects value minimizing time to market. This means that, on paper, there is no requirement to design upfront so as to eliminate the need to redesign or reimplement whole systems or features later. Therefore, if you can live with an MVP that does 70% of your requirements list but can be hacked together in a few weeks, most would not opt to spend more man-months only to get minor increments. You'd be even less inclined to pay all those extra man-months upfront if you can quickly get that 70% in a few weeks and from that point onward gradually build up features.

You can’t brute-force insight.

I'm often reminded of that Futurama episode “A Pharaoh to Remember” (S04E07), where Bender is whipping the architects/engineers in an attempt to make them solve problems faster.

Definitely squeezed.

They say AI, but AI isn't eliminating programming. I've written a few applications with AI assistance. It probably would've been faster if I had written them myself. The problem is that it doesn't have context, wildly assumes what your intentions are, and cheats outcomes.

It will replace juniors for that one-liner; it won't replace a senior developer who knows how to write code.

  • I felt this way with Github Copilot but I started using Cursor this week and it genuinely feels like a competent pair programmer.

    • What work have you been doing the last few days? My experience is that for a very narrow range of tasks, like getting the basics of a common but new-to-me API working, they are moderately useful. But the overwhelming majority of the time they are useless.

    • This has been my experience as well.

      Cursor Chat and autocomplete are near useless, and generate all sorts of errors, which on the whole cost more time.

      However, using composer, passing in the related files explicitly in the context, and prompting small changes incrementally has been a game changer for me. It also helps if you describe the intended behaviour in excruciating detail, including how you want all the edge cases/errors handled.

    • I recently tried Cursor for about a week and I was disappointed. It was useful for generating code that someone else has definitely written before (boilerplate etc), but any time I tried to do something nontrivial, it failed no matter how much poking, prodding, and thoughtful prompting I tried.

      Even when I tried to ask it for stuff like refactoring a relatively simple rust file to be more idiomatic or organized, it consistently generated code that did not compile and was unable to fix the compile errors on 5 or 6 repromptings.

      For what it's worth, a lot of SWE work is technically trivial -- it makes this much quicker, so there's obviously some value there, but if we're comparing it to a pair programmer, I would definitely fire a dev who had this sort of extremely limited complexity ceiling.

      It really feels to me (just vibes, obviously not scientific) like it is good at interpolating between things in its training set, but is not really able to do anything more than that. Presumably this will get better over time.

      7 replies →

I was about to post largely the same thing. There is a saying in design: "Good, fast, cheap --- pick two." The default choice always seems to be fast and cheap nowadays. I find myself telling other people to take their time, but I too have worked jobs where the workloads were far too great to do a decent job. So this is what we get.

The article addresses the fact that the "job" the software company provides as an extension of its services isn't really a "job" a la "SW development in the 90s".

It's the aftereffect of companies not being penalized for using a dragnet approach to exploit people in desperate situations to generate more profits while providing nothing in return.

Have we learnt nothing? 100% utilisation of practically any resource will result in problems with either quality or schedules.

What, as an industry, do we need to do to learn this lesson?

  • It needs to be reflected faster in quarterly results. When the effect takes a year or two, nobody notices and there are too many other variables/externalities to place blame.

People have to care about outcomes in order to get good outcomes. It's pretty difficult to get someone to work extra time, or care about the small stuff, if there is a good chance they will be gone in 6 months.

Alternatively, if leadership is going to cycle over in 6 months - then no one will remember the details.

One time during a 1:1 with who I consider the best manager I ever had, in the context of asking how urgent something needed to get done, I said something along the lines of how I tend to throttle to around 60% of my "maximum power" to avoid burnout, but that I could push a bit harder if the task we were discussing was essential enough to warrant it. He said that it wasn't necessary, but also stressed that any time in the future that I did push myself further, I should always return to 60% power as soon as I could (even if the "turbo boost" wasn't enough to finish whatever I was working on). To this day, I'm equally amazed at how his main concern with the idea of me only working at 60% most of the time was that I didn't let myself get pressured into doing more than that, and at the fact that there are probably very few managers out there who would react well to my stating the obvious truth that this is necessary.

I totally agree. It was a stark contrast between PhD life and pure SW engineer life, in terms of doing things the way I wanted.

I've always thought if I gave better estimates about how long things would take, my schedule would support a decent job.

But black swans seem to be more common than anticipated.

(I also wonder - over your career, do you naturally move up to jobs with higher salaries and higher expectations?)

It's almost as if people don't understand what the word "productivity" means. That's all it is: if you hear "x increase in productivity" and it sounds great, it really means you, the worker, work harder after we fire other people, and are thus "more productive" because you did the same output that 2 people did. Sucker. And we all eat this shit up.

> People need some slack

Definitely. If you tighten a bearing up to 100%, to zero "play", it will stop rotating easily... and start wearing. Which, in people terms, is called burnout.

Or, as the article below says, (too much) Efficiency is the Enemy:

https://fs.blog/slack/

I think devs should get 2 hours a day for personal projects, whether internal or otherwise, that they can flex - so if they wanna use it all on Fridays, that's fine. Just think of all the random tech debt that could be resolved if devs had 2 hours a day to code anything, including new projects that benefit everyone. Most people can only squeeze out about 6 hours' worth of real work a day anyway. You burn up by the end of the day.

  • > Just think of all the random tech debt that could be resolved if devs had 2 hours a day to code anything, including new projects that benefit everyone.

    regardless of the potential benefits of this plan, zero tech debt would get erased.

    imho net tech debt would increase, per the 80/20 rule: you're not going to get more than 80% of the side projects fully wrapped in the 20% of the time that you've allotted to them.

    • I guess tech debt could even be increased in some cases. Some people shouldn't have too much time available :-)

I've seen this as well, and it seems to have accelerated in the last 10 years or so. I'm seeing roles be combined, deadlines get tighter, and quality go down. Documentation has also gotten worse. This all seems pretty odd when you consider the tools to develop, test, and even document have mostly gotten more powerful/better/faster.

There are fields of study that agree with you. There is evidence that treating your workers well (reasonable quotas, realistic expectations for work-life balance, good wages, reinforcement for effort, and so on) creates conditions where workers perform more efficiently and last longer.

But many organizations reject this. Why wouldn't they? There is a surplus of workers, and consumers accept substandard products. Skimp on training, put out crap. Throw workers into the fire, demand everything from them, get furious if they don't prioritize the company above everything in their life, burn them out, cut them loose, pick another from the stack of resumes.

I was talking to someone who works for a startup recently. A colleague died and it was announced on a Friday. They were expected to finish the day. On Monday their replacement started and the team was told to bring this person up to speed asap. No space to grieve, no time to process. Soulless and inhuman. Disgusting and sociopathic behavior

Capitalism eventually ends up in those with capital making those without capital work until they drop. We are in that eventuality right now.

Same. What's crazier now is that nobody in management seems to want to take a risk, even though the risks are so much lower. We have better information, blogs, posts on how others solved issues, yet managers are still like "we can't risk changing our backend from dog shit to Postgres"... when in the 90s you would literally be figuring it all out yourself, making a gut call, and you'd be supported to venture into the unknown.

Now it's all RSUs, stock prices, FAANG ego stroking, and mad dashes for the acquihire exit, pushing out as much garbage as possible while managers shine it up like AI goodness.

> In SW development in the 90s I had much more time for experimentation to figure things out. In recent years you often have a manager to whom you basically have to justify everything you do, and there is always a huge pile of work that never gets smaller.

Software development for a long time had the benefit that managers didn't get tech. They had no chance of verifying if what the nerds told them actually made sense.

Nowadays there's not just Agile, "business dashboards" (Power BI and the likes) and other forms of making tech "accountable" to clueless managers, but an awful lot of developers got bought off to C-level and turned into class traitors, forgetting where they came from.

  • I commend you for having an opinion so bad I can't tell if you're satirizing Marxists or not.

    Let me ask you this, would you rather be managed by a hierarchy made up of people who don't understand what you do? Because I assure you it is far worse than being managed by "class traitors".

    • well, not the original poster, but I have been managed by both kinds, and the best manager I ever had was not a former techie and the worst was a former programmer.

      The worst manager did often say things that were sort of valuable and correct in a general way, like "well, you don't actually know that because it hasn't been tested", which was of course true, but he also seemed to think he could tell people what the correct way to do something was without knowing the technology and the codebase. This often meant that I had to go to junior developers later, after a meeting, and say "concerning ticket X, T. didn't consider these things (listing the things), so while it is true that we should in principle do what T. said, it will not be adequate; you will also need to do this - look at the code for this function here, it should probably be abstracted out in some way; this was my crappy way of handling the problem in desperation Y months ago."

      Trying to explain to him why he was wrong was impossible in itself (he was a tech genius, evidently), so you just had to give it up after a bit and figure that at some time in the future the decisions would be reversed after "we learned" something.

      on edit: in the example I give, the manager was correct in what he wanted done, but it was inadequate on its own; the bug would keep recurring if only that was done, so more things had to be done that were not as pretty or as pure as what he wanted.

    • I want my manager to help get the business out of my way- managing requirements, keeping external dependencies on track, fussy paperwork and such.

      I don't need my manager second-guessing my every decision or weighing in on my PRs with superficial complaints about style while also bemoaning our velocity.

      Hands down, the best managers I've had have all been clueless about the languages and types of work I do, and the worst managers have (or think they have) some understanding of what I do.

    • > Let me ask you this, would you rather be managed by a hierarchy made up of people who don't understand what you do? Because I assure you it is far worse than being managed by "class traitors".

      One's direct manager should be a developer, yes. The problem is the level above that - most organisations don't have a SWE career track, so if you want a pay rise you need a promotion and that's only available for managerial roles.

      The problem there is that a lot of developers make very bad managers and a lot of organisations don't give a fuck about giving their managers the proper skills training. The result is then usually a "tech director" who hasn't touched code in years but just loves to micromanage based on knowledge from half a decade ago or more. That's bad enough in Java, but in NodeJS, Go, Rust or other hipster reinvent-the-wheel stacks it's dangerous.

      They come in and blather completely irrelevant, way outdated, or completely wrong "advice", and plan projects with way fewer resources than the project would actually need - despite knowing themselves what "crunch time" entails for their staff.

      1 reply →