Comment by bitwize

1 month ago

To understand how business views developers, reread Tim Bryce's Theory P: The Philosophy of Managing Programmers (which is old enough to drink in the USA today): https://web.archive.org/web/20160407111718fw_/http://phmains...

Tim Bryce was kind of the anti-Scott Adams: he felt that programmers were people of mediocre intelligence at best who thought they were so damn smart, when really, if they were so smart, they'd move into management or business analysis where they could have a real impact, rather than being content with the scutwork of translating business requirements into machine-executable code. As it is, they lack the people skills and big-picture systems thinking to really pull it off, and that, combined with their snobbery, makes them a burden to an organization unless they're effectively managed—such as with his methodology PRIDE, which you could buy direct from his web site.

Oddly enough, in a weird horseshoe-theory instance of convergent psychological evolution, Adams and Bryce both ended up Trump supporters.

Ultimately, however, "the Bryce was right": the true value in software development lies not in the lines of code but in articulating what needs to be automated and how it can benefit the business. The more precisely you nail this down, the more programming becomes a mechanical task. Your job as a developer is to deliver the most value to the customer at the least possible cost. (Even John Carmack agrees with this.) This requires thinking like a business, in terms of dollars and cents (and people), not bits and bytes. And as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking much less so. Programmers as a professional class will be drastically reduced or eliminated, replaced by business analysts with some technical understanding but real strength on the business/people side, where the real value gets added. LLMs meaningfully allow people to issue commands to computers in people language, for the very first time. As they evolve they will become more capable of implementing business requirements expressed directly in business language, without an intermediary to translate those requirements into code (i.e., the programmer). This was always the goal, and it's within reach.

Thanks for the information about Tim Bryce and the relationship with Adams's obsessions.

Regarding your assertion:

> as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking, much less so.

That remains to be seen. This is the story that AI evangelists are peddling and that employers are salivating over, for sure.

  • Geez, man. Even Eric "Cathedral and the Bazaar" Raymond is mindblown that he can basically specify software into existence. The technology is here today, it's real, and it works.

In my experience, translating requirements into a formal language (a programming language) is where a lot of the important details actually get worked out. The process of taking "squishy" thoughts/ideas and translating them into code is a forcing function for clarifying and correcting those ideas.

  • In PRIDE, the important details are worked out before the first line of code is written, in the form of flowcharts and other technical documentation. The earlier in the design and development cycle this is done, the less work you have to do over the entire SDLC and the more time/effort/money you'll save.

    Bryce: "Mental laziness can also be found in planning and documenting software. Instead of carefully thinking through the logic of a program using graphics and text, most programmers prefer to dive into source code without much thinking."

    • My point is that you can plan as much as you want in advance, but until the rubber meets the road you will have details that you don't realize are wrong. Those might be small things, or fundamental problems you didn't realize you had.

      IMO writing code isn't really more laborious than writing flowcharts and docs. I typically write code to explore the problem, and iterate until I have a good design.

      What you're describing is more or less the waterfall model, which has its advantages, but also drawbacks. I don't see any reason to treat code as only a final implementation step. It can also be a useful tool to aid in thinking and design.

      > The earlier in the design and development cycle this is done, the less work you have to do over the entire SDLC and the more time/effort/money you'll save.

      I believe this is only true if you treat the first code you write as the final implementation. Of course that's going to cause problems.