Comment by brendanmc6

10 hours ago

Author here, if you don't want to read all that, I'll post one excerpt that I think sums it up nicely:

> My point is, the spec must live somewhere, even if you don’t write it down. The spec is what you want the software to be. It often exists only in your head or in conversations. You and your team and your business will always care what the spec says, and that’s never going to change. So you’re better off writing it down now! And I think that a plain old list of acceptance criteria is a good place to start. (That’s really all that `feature.yaml` is.)
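
For a concrete sense of the shape, here is a rough sketch of such a file (illustrative only; the exact field names are not the point):

```yaml
# feature.yaml (sketch): a plain list of acceptance criteria for one feature
feature: password-reset
summary: Users can reset a forgotten password via an emailed link
acceptance_criteria:
  - A "Forgot password?" link is shown on the login form
  - Submitting a registered email sends a single-use reset link
  - The reset link expires after 30 minutes
  - Submitting an unknown email shows the same confirmation message
  - After a successful reset, all existing sessions are invalidated
```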

You have rediscovered the job of Software Analyst, which was a thing until the early 90s. Then that all got upended and we ended up with a mix of product owners, project managers and developers/devops, but I think that ignores the fact that Analyst is a different set of skills.

  • There is a lot of room to reevaluate the lessons of software development pre-web in the context of the current environment.

    Like, if the waterfall for a project can be done in 2 weeks, is it agile now?

    • > Like, if the waterfall for a project can be done in 2 weeks, is it agile now?

      Sure. The thing is, the waterfall guys would tell you it's impossible to do it in 2 weeks because you need to have written down everything first. "Thousands of pages" was the term they used.

      Agile guys would point you to the Agile manifesto which would lead you to "working code over documentation" and "people over process".

      A 2 week period to go from initial spec to product in a user's hands to capture feedback and make changes from there is much closer to agile than to waterfall. In fact it's more or less exactly how some older versions of Scrum worked (they didn't permit deviating from the planned sprint user stories midway through the sprint; instead, changes influenced the subsequent sprint).

  • I came up as a software requirements analyst before the weird transition from business analyst to product owner to product manager to technical product manager. But living in requirements for 15+ years really gave me a leg up on these “let’s go back to requirements!” efforts.

    • It always amazes me how bad the software world is at keeping lessons learned as learned, especially when compared to, say, engineering. It's as if every 20 years or so we throw away the books and reinvent it all from first principles, hopefully this time with fewer mistakes overall, but usually we end up both finding new ones and re-doing some old ones.

The traditional name for this spec is ‘source code’ — a canonical source of truth for the behaviour of a system that is as human-readable as we know how to make it, that will be processed by automated tools into a less-readable derived artefact for a computer to execute.

Checking the compiled artefact into the codebase without checking in its source code has always been a risky move!

  • A specification, whether formal or less formal, is very different from the source code.

  • > The traditional name for this spec is ‘source code’

    Specs are the end goal, not how the software looks at a moment in time.

  • Technology evolves and traditions change. What persists is the role, not the filename and its extension. Weddings are still weddings even after things went from painted portraits to film cameras to camcorders to smartphones to livestreams. Same with birthdays. Cards became phone calls, Facebook wall posts, group chats, shared albums, or generated videos (Sora, RIP).

    The tradition of having a deck of punch cards evolved into assembly, then Pascal, Fortran, C, BASIC. The important thing is a human-auditable directive, not an opaque, generated artifact.

    • > The important thing is a human-auditable directive, not an opaque, generated artifact.

      Your arguments create a false dichotomy. You look at it from the consumer's perspective, while coding and its artifacts are usually done by suppliers. If you change camcorder to TV advertisement, the requirements shift. The human-auditable directive and the outcome both matter. Coca Cola probably has very high standards for their IP (the directive) and doesn't care about the outcome (AI slop ads). The result is disgruntled consumers.

      If you don't care about the "opaque" generated artifact, then you are Coca Cola.

I independently converged on something similar. I use two to three specification docs for my C++ work: a firmware manual (describes features and interfaces), an implementation plan (order of implementation, mechanisms where specified; new features go in here), and a product manual (user story, external effects). I start with a user story, build an implementation plan, write the code, write the firmware manual, then check the three documents plus the code for consistency and coherence. Either the code or the documentation changes to reflect a coherent, unified truth. (The implementation plan gradually becomes as-built.) I also have the code comprehensively commented so that it is difficult to misinterpret. “Correct, coherent, consistent, commented.”

We iterate feature by feature through this process, and occasionally circle back on the original product manual to identify drift.

After the original documentation is drafted, I have the agent write up placeholder files and define all of the interfaces we expect to need (we will end up adding a lot later, but that's ok). Every file should reflect a clear separation of concerns, and can only be reached into through its defined interface; all else is private. I end up with more individual files than I would by hand, but by constraining scope at file granularity, and defining an inviolate interface per file, I avoid the LLM tendency to take shortcuts that create unmaintainable code.

I also open each new context with an onboarding process that briefly describes the logos and the ethos of the project, why the agent should be deeply invested in the success of the project, as well as learnings.md which the agent writes as it comes across notable gotchas or strong preferences of mine.

Needless to say, I use the one-million-token context, and it's a token fire… but the results are solid and my productivity is 5-10x.

This ultimately converges on what source code is though.

The most common form of what you'd call a "spec" is the acceptance criteria on a work ticket, which is an accretive spec, i.e. a description of desired change -- "given what already exists, change it as follows". I.e. if you somehow layered and summarized and condensed all the tickets that have been made since the product started, you'd have your "spec".

But it's the devs who were doing that condensing, by understanding each desired spec addition vs the reality of the existing codebase.

So the gap between what people are currently calling "specs" and what the code was already doing is not big and will not stay big, but for the fact that you're effectively adding another (quasi) compile step underneath - and in this case it's a non-deterministic one.

I wrote something similar recently about how agent-generated code lacks the institutional memory that human-written code has. There's nobody to ask why a decision was made (1).

“Specsmaxxing” is basically the right response to this. When you can't rely on authorial memory, you have to put the intent somewhere durable. Specs become the source of truth by default if we continue down the road of AI generated code.

1: https://ossature.dev/blog/ai-generated-code-has-no-author/

  • I've been attaching to my commit messages a Git Trailer [1] of the Session UUID from the Claude Code conversation that created that commit.

    It allows Claude to look back into the session where a change was made and see the decisions made, tradeoffs discussed, and other history not captured by the code or tests.

    [1] https://git-scm.com/docs/git-interpret-trailers
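
    As a rough sketch of what this looks like in practice (the trailer key name and UUID below are illustrative, not a standard):

    ```sh
    # Attach the Claude Code session UUID to a commit as a Git trailer
    git commit -m "Refactor order cancellation flow" \
      --trailer "Claude-Session: 9f0c2a7e-0000-4bcd-9e8f-abcdef012345"

    # Later, list commits together with the session that produced them
    git log --format='%h %(trailers:key=Claude-Session,valueonly)'
    ```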

  • I had a similar experience refactoring a large codebase. The only thing that made it possible was that each commit message had a JIRA ticket number tying it to a requirement or task. I could find the people behind the business logic and ask them about it.

Why not just record the conversation? It contains all that is needed: the initial broad scoping, the failed attempts at doing x and not y, how that specific line of code solves that specific edge case, etc.

When it’s time to review, review both the code and the conversation. 200 “user-written messages asking why and what”? Likely a good PR. 15 messages of “yes, yeah, ok, whatever”? Well, you might want to give that PR some love.

It feels to me that when we commit, we throw away half, if not most, of the work done by not recording it.

> will always care what the spec says, and that’s never going to change

Did I miss something or is everyone back in 1970s, working in waterfall processes now?

  • All through the agile era I wrote detailed specs for projects and then followed an agile process. The most successful parts of every project were the ones that we were able to spec best even when they diverged significantly from the original spec.

    You don't plan to follow the plan. You plan in order to understand the whole problem space. Obviously no plan survives contact with reality.

    • > You plan in order to understand the whole problem space.

      I like to do spikes to understand problem spaces before planning. The planning is then usually effortless and just to get in sync with stakeholders.

      But in that regard AI coding is really backwards. We don't necessarily need a hard separation of planning and coding, but we need a deliberate separation of experimental/explorative coding from the code that is supposed to make it into prod. AI coding does all of that in the same place; I don't even want to know how hard it is to "fix" AI code that started from a completely wrong premise. AIs certainly don't have a good sense of when to refactor something completely messed up.

    • Agree!

      Another point of view is that LLMs perform, to an extent, on the same level as outsourcing does. This interface requires a bit more contract mass than doing everything within a single team.

  • We never left waterfall in the end. Having worked with and for dozens of software companies, and collaborated with probably a hundred, across different scales, every single one said:

    We do agile

    Guess what? Every single one of them was doing waterfall.

    Their agile included preplanning and pre-specifying the full spec and each task before the project kicked off. We'd have meetings where we'd drill down into tasks, and folks would write them down in such detail that there would be no other way than doing exactly that. Agile would be claimed, but the start date, end date, end spec and number of developers were always concrete.

    Sometimes, the end date was too late, so a panic would ensue. Most of the time, the date was too late because developers had "unknowns" which then had to be "drilled down and specced so they wouldn't be unknowns". Sometimes, nearly 50% of the workweek was spent on meetings.

    A few times, a project was running late - so to make sure we were _really_ doing it agile, we'd have morning standups, evening standups, weekly plannings, retrospectives, and backlog refinement. It would waste time, and the "unknowns" aka "tickets to refine" were again, as always, dependent upon the PM/PO/CEO's wishes, which wouldn't get crystallized until it was _really last minute_.

    One customer wanted us to do a 2 year agile plan on building their product. We had gigantic calls with 20+ people in them, out of which at least half had some kind of "Agile SCRUM Level 3 Black belt Jirajitsu" certificates.

    To them, Agile was just a thing you say before you plan things. Agile was just an excuse to deal with a project being late by pinning it on Agile. Agile was just a cop-out for "the PM didn't know what to do here so he didn't write anything down". Agile was a "we are modern and cool" sticker for a company.

    And unfortunately, to most of them, agile was just a thing you say for the job, as their minds worked in waterfall mode, their obligations worked in waterfall mode, their companies worked in waterfall mode, and if they failed their obligation to the waterfall, their job would be on the line.

    So while we were doing the Agile ceremonies, prancing around with our Scrum master hats, using the right words to fit into the Agile™ worldview - we were doing waterfall all along.

    And after 15 years, I'm not even sure - did agile really ever exist?

    • Continuous integration and demos to stakeholders (devs, designers, product managers etc) every 2 weeks - these practices are now ingrained :-) It's common to then make corrections after these demos, and that really helps ensure the product manager is getting what their customers need.

      Easy to forget that waterfall in the 1970s/80s really meant teams working on their own for months and then realizing there is no way to assemble the whole product from the parts. Or that the industry has moved on and the product is obsolete.

      Agile as "devs can do what they want" never really existed ;-) Managers always have to plan / T-Shirt size resources (time, devs) to some degree. For stuff that's really hard to break into tasks, the magic word is "the plan is to do a POC first".

      Coming from someone who also doesn't like teams being asked to break their unknowns into 30 known tasks: it's a compromise... I agree with all your points on how Agile is abused / misunderstood. Yet I believe the progress toward continuous integration and regular demos to stakeholders is a sign we did change something...

  • Sort of, but the downside of waterfall was that you'd build the wrong thing and waste a shitload of time rewriting it.

    When rewriting the entire codebase is very quick and cheap, why bother iterating on small components?

    • > When rewriting the entire codebase is very quick and cheap, why bother iterating on small components?

      We are nowhere near this scenario tbh. Token cost is very high and is currently heavily subsidized by VC money to gain market share. Also, this realistically only applies to small projects, small codebases and mostly greenfield ones. There is no way you can rewrite the whole codebase quickly and cheaply on any mid-sized+ project.

      But even assuming token cost plummets, any non-trivial piece of software that is valuable enough to generate income for the company is also big, complex and interconnected enough that it cannot be rewritten quickly even by AI, and for business reasons too. If a piece of code works, is stable and is tested, then rewriting it will always bring a high degree of risk and uncertainty that in a lot of business-critical applications is just not worth it. A stable system can stay untouched for years besides minor dependency updates.

  • waterfall is not the sole purveyor of written docs

    distributed teams do well when proposals, decisions, etc, are written down, and can be easily found and referenced

    it doesn't mean docs are frozen in time and can't be patched like code

  • I read that as "the business caring about what the spec says will never change" rather than "the spec will never change".

What's the difference between this and Jira? Your specs already live somewhere: wherever you defined them. That's why it's nice to put the Jira ticket number in your code / commit, so you can refer back to the spec when something breaks.

  • A specification isn't a series of change requests! Using Jira as your source of truth is no different to just recording all your prompts. There's nothing you can easily review to spot contradictions or how things interact with one another.

    I've been doing "specmaxxing" for a few months now. Unlike the author I don't use Yaml, I use a mix of Markdown and Gherkin. If you haven't encountered Gherkin before, it's not new and you might know it under the name Cucumber or BDD.

    https://cucumber.io/docs/

    Gherkin is basically a structured form of English that can be fed into a unit testing framework to match against methods (there's a small sketch at the end of this comment).

    The nice thing about writing acceptance criteria this way is that they become executable and analyzable. You write some Gherkin and then ask the model to make the tests execute and pass. Now in a good IDE (IntelliJ has good support) you can run the acceptance criteria to ensure they pass, navigate from any specific acceptance criteria to the code which tests it (and from there to the code that implements it), you can generate reports, integrate it into CI and so on.

    And when writing out acceptance tests that are quite similar, the IDE will help you with features like auto-complete. But if you need something that isn't implemented in the test-side code yet, no big deal. Just write it anyway and the model will write the mapping code.

    There's a variant of Gherkin specifically designed for writing UI tests for web apps that also looks quite interesting. And because it's an old ecosystem there's lots of tooling around it.

    Another thing I've found works well is asking the models to review every spec simultaneously and find contradictions. I've built myself a tool that does this and highlights the problems as errors in IntelliJ, like compiler errors. So I can click a button in the toolbar and then navigate between paragraphs that contradict each other. It's like a word processor but for writing specs.

    Once you're doing spec driven development, you don't need to write prompts anymore. Every prompt can just be "Update the code and tests to match the changes to the specs."
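
    To make that concrete, here is a minimal sketch of an acceptance criterion written in Gherkin (the feature and steps are made up for illustration):

    ```gherkin
    Feature: Order cancellation
      Scenario: Cancelling an unshipped order refunds the customer
        Given a paid order that has not yet shipped
        When the customer cancels the order
        Then the order status is "Cancelled"
        And the payment is refunded in full
    ```

    Each Given/When/Then line matches a step definition in the test framework; that glue code is exactly the mapping code the model can write for you.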

    • The problem with Gherkin is that it was a badly designed language.

      The general idea of "readable specification language" was an inspired one but it failed on execution - it has gnarly syntax, no typing and bad abstractions.

      This results in poor tests which are hard to maintain and diverge between being either too repetitive to be useful or too vague to be useful.

      The ecosystem is big, but it's built on crumbling foundations, which is why, when most people used it, most of them got frustrated and gave up on it.

      Annoyingly there's a certain amount of gaslighting around it too ("it didnt work for you coz you werent using it correctly") which is eleven different kinds of wrong.

  • Jira is only a set of changes though. What happens on a long (10+ year) and complex (10+ developer) project with many changes and revisions? Eventually you need an explicit specification that itself has a "current state" and a change log. Theoretically you could generate this from Jira, but in my experience it eventually became a mess on any larger project that didn't have explicit and maintained written requirements.

    • Jira has current state and a change log. The proposal here is "use yaml instead of jira." Same damn thing, same damn mess.

Nice! Your spec-maxxing is very resonant. I've been working with explicit requirements: elicit them from conversation with me or by introspecting another piece of software; one-shot from them; and keep them up to date as I do the “old man shouts at Claude” iterations after whatever the one-shotting came up with.

Unlike you, I wish for the LLM to do as much of the work as possible -- but "as possible" is doing a lot of work in that sentence. I'm still trying to get clear on exactly where I am needed and where Opus and iterations will get there eventually.

It has really challenged me to get clearer on what a requirement is vs a constraint (e.g., "you don't get to reinvent the database schema, we're building part of a larger system"). And I still battle with when and how to specify UI behaviours: so much UI is implicit, and it seems quite daunting to have to specify so much to get it working. I have new respect for whoever wrote the undoubtedly bajillion tests for Flutter and other UI toolkits.

  • Forgot to add: I get several benefits from doing this.

    1. Specifications that live outside the code. We have a lot of code for which "what should this do?" has a subjective answer, because "what was this written to do?" is either oral legend or lost in time. As future Claude sessions add new features, this is how Claude can remember what was intentional in the existing code and what were accidents of implementation. And they're useful for documenters, support, etc.

    2. Specifications that stay up to date as code is written. No spec survives first contact with the enemy (implementation in the real world). "Huh, there are TWO statuses for Missing orders, but we wrote this assuming just one. How do we display them? Which are we setting or is it configurable?" etc. Implementer finds things the specifier got wrong about reality, things the specifier missed that need to be specified/decided, and testing finds what they both missed.

    I have a colleague working on saving architecture decisions, and his description of it feels like a higher-abstraction version of my saving and maintaining requirements.

    • I do (1) the same but (2) differently. In my workflow, (2) are AI generated specs using human written (1) as the input. It's an intermediate stage between (1) and the codebase, allowing for a gradual token expansion from 30k to 250k to the final code which is 2-3M. The benefit I've found with this approach is it gives the AI a way to iterate on the details of whole system in one context window, whereas fitting the whole codebase into one prompt is impossible. The code is then nothing more than a style transfer from (2).

Beyond writing the spec down, you can share the spec or use someone else's spec. That's why spex.build was created, to be a hub with versioned specs so people can just create their own implementations, in the language, style, and particulars that they want.

So what I'm building is a GitHub clone with epics/issues/kanban + specs/requirements/standards + CI/testing/coverage, with the idea that all of those things connect, so issues + requirements + testing all interact via code + web UI + CLI. The point is that we can specify how a product is to function and the steps to get there, and it's less a matter of telling a person or an LLM to read and implement the spec and more the software actually keeping track at all times.

I actually read it all since it did not contain any hints of being AI-generated (although I wouldn't be surprised to learn you did use AI to write it), so thank you for that. It's kind of crazy that I now have the default expectation that posts here are AI slop with little thought or care put in.

I am also stealing the idea of talking to LLMs as if it's an email. So funny, we need to be joymaxxing a bit more I think :)

Great idea -- just one suggestion if you want it to catch on: perform some IncelCultureMinning on the nomenclature.

You probably don't want people associating your work with abusing crystal meth and hitting yourself in the face with a hammer.

For anyone missing the reference, SNL has a pretty good explainer:

https://www.youtube.com/watch?v=4XMPLdiXB1k