The Five Levels: From spicy autocomplete to the dark factory

17 days ago (danshapiro.com)

I've talked to a team that's doing the dark factory pattern hinted at here. It was fascinating. The key characteristics:

- Nobody reviews AI-produced code, ever. They don't even look at it.

- The goal of the system is to prove that the system works. A huge amount of the coding agent work goes into testing and tooling and simulating related systems and running demos.

- The role of the humans is to design that system - to find new patterns that can help the agents work more effectively and demonstrate that the software they are building is robust and effective.

It was a tiny team, and the stuff they had built in just a few months looked very convincing to me. Some of them had 20+ years of experience as software developers working on systems with high reliability requirements, so they were not approaching this from a naive perspective.

I'm hoping they come out of stealth soon because I can't really share more details than this.

  • Holy cow I actually bought this comment and it was on my mind for a bit, then saw another simonw comment about "the team" below. Check your sources folks!

    Almost had me you cheeky devil you :)

  • What's the point honestly.

    Given the pace of current AI, in 2 months dark factories will hit peak hype; in another 6 months their cost/benefit drawbacks will be fully identified, the wisdom of the crowds will have a relatively accurate understanding of their general usefulness, and the internet will move on to other things.

    The next generation of AI coding will make dark factories legit due to its ability to architect decently. Then the generation after will make dark factories obsolete due to its ability to get things right the first time. That's about 8 months out for SOTA, and 14 months out for Sonnet/Flash/Pro users.

    No need for them to come out of stealth, just imagine 1000s of junior/mid engineers crammed into an office given vague instructions to build an app and spit out code. Imagine a cctv in the room overlooking the hundreds of desks, and then press fast forward 100x speed.

    That's literally what they built, because that's what's possible with Opus.

    • The funny thing is that the rest of the software industry is dying, except for the trillions of venture capital being invested into these AI coding whatevers. But given the slow death of software, once these AI coding whatevers are finished, there's going to be nothing of value left for them to code.

      But I'm sure the investors will still come out just fine.

  • You'd think at some point it'll be enough to tell the AI "ok, now do a thorough security audit, highlight all the potential issues, come up with a best practices design document, and fix all the vulnerabilities and bugs. Repeat until the codebase is secure and meets all the requisite protocol standards and industry best practices."

    We're not there yet, but at some point, AI is gonna be able to blitz through things like that the way it blitzes through making haikus or rewriting news articles. At some point AI will just be reliably competent.

    Definitely not there yet. The dark factory pattern is terrifying, lol.

    • That's definitely a pattern people are already starting to have good results from - using multiple "agents" (aka multiple system prompts) where one of them is a security reviewer that audits for problems and files issues for other coding agents to then fix.

      I don't think this worked at all well six months ago. GPT-5.2 and Opus 4.5 might just be good enough for this pattern to start being effective.
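
      A minimal sketch of that two-prompt split, assuming an OpenAI-compatible chat API; the model name, the prompts, and the single-file handling here are illustrative placeholders, not anyone's actual setup:

      ```python
      # Illustrative reviewer/fixer loop: two system prompts over one chat API.
      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-5.2"  # placeholder; any sufficiently capable model

      REVIEWER = ("You are a security reviewer. List concrete vulnerabilities "
                  "in the code as numbered issues. Reply DONE if there are none.")
      FIXER = ("You are a coding agent. Rewrite the code to fix the listed "
               "issues. Output only the full revised code.")

      def ask(system: str, user: str) -> str:
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{"role": "system", "content": system},
                        {"role": "user", "content": user}],
          )
          return resp.choices[0].message.content

      code = open("app.py").read()
      for _ in range(5):  # bounded, rather than "repeat until secure"
          issues = ask(REVIEWER, code)
          if issues.strip() == "DONE":
              break
          code = ask(FIXER, f"Issues:\n{issues}\n\nCode:\n{code}")
      open("app.py", "w").write(code)
      ```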

      5 replies →

    • Honestly, I'm not sure we're not there yet. Run this prompt as a Ralph loop for 2 days on your codebase and see where you're at...
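
      (A "Ralph loop" is just the same prompt fed back to a coding agent over and over. A minimal sketch, assuming the Claude Code CLI's print mode; PROMPT.md and the iteration bound are placeholders:)

      ```python
      # Minimal "Ralph loop": re-run one fixed prompt against a coding agent.
      # Assumes the Claude Code CLI; PROMPT.md and the bound are illustrative.
      import subprocess

      prompt = open("PROMPT.md").read()  # e.g. the audit prompt upthread
      for _ in range(1000):              # "2 days", roughly; stop when bored
          subprocess.run(["claude", "-p", prompt], check=False)
      ```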

  • My biggest project (in LOCs) is 100% AI-written, and I've given up reviewing the code on it. Huge web-based content management system with a native desktop app companion. It's worked flawlessly 24/7 for the last couple of months. I add a new feature every week or so, but I just do the code-as-English dance now and test what comes out. It's almost exclusively Gemini 3 Pro and Opus 4.5. I've gone fully dark on that project.

    I have other projects where I review almost every line, but everything is edging towards the dark side.

    I've been coding for 40 years in every language you can think of. Glad that's over, honestly. It always got in the way of turning an idea into a product.

  • Canadian girlfriend coding strikes again.

    I would love for someone to point at a good codebase done by an AI, with the code, history, and cost included. It's always a ball of mud that doesn't work, and even the AI that coded it up can't maintain it.

  • > Nobody reviews AI-produced code, ever. They don't even look at it.

    How is this supposed to differ from the original Karpathy definition of vibe coding? Is it just "vibe coding plus rigorous verification"?

    (Or is it mainly intended to sound more desirable than vibe coding?)

    • "vibe coding plus rigorous verification" is a really good way of describing it.

Level Six: knowledge of how to build products deteriorates, and more high-level thinking is outsourced to AI. AIs are asked to simply put out several versions and possibilities of products, and testers go through them, harvesting the candidates that are the most usable, have the fewest bugs, and are good enough for production. It could take a long time or it could happen very quickly.

Level Seven: no one even knows what software is anymore, they just pray to AI to solve their problems and hope for the best. Some priests occasionally do random stuff that seems to affect outcomes, but no one knows for sure.

  • Level Eight: so few people do any paid labor any more, and society failed to figure out any sort of distributive income system such as UBI, so increasing chronic and endemic poverty is slowly eating away at revenue generation from AI designed and coded products and services.

Having actually run some of the software produced by these nearly "dark" software factories, I can report that a lot of it is completely shit.

Yegge's Beads is a genuinely good design, for example, but it's flakier and more broken than the Unix vendor Motif implementations were in 1993, and it eats itself more often than Windows 98 would blue-screen.

I can actually run a bunch of orchestrated agents, and get code which isn't complete shit. But it's an extremely skill-intensive process, because I'm acting as product manager, lead engineer, and the backstop for the holes in the cognition of a bunch of different Claudes.

So far, the people promising completely dark software factories are either high on their own supply, or grifting to sell books (or occasionally crypto). Or so I judge from using the programs they ship.

  • I found it kind of fitting that the article didn't even describe what a human would still do at level 5, nor why it would be desirable. It's just the "natural" progression of a 5-step ladder, and that seems to be reason enough.

    • Well, isn't the point that humans wouldn't need to do basically anything?

      It would be 'desirable' because the value is in the product of the labour not the labour itself. (Of course the resulting dystopian hellscape might be considered undesirable)

      3 replies →

People are very pessimistic here in the comments, but I see no fundamental, long-term reason why AI-generated code can't be refactored, maintained, and tested by AI just as well as (or better than) average-quality human-generated code. Especially because things are evolving: by the time the projects need to be maintained, there will likely already be better tools to do that. So while I wouldn't vibecode drivers for life-support systems yet, there is significant runway of tech debt for most use cases.

The autopilot analogy is good because levels 4-5 are essentially vaporware outside of success in controlled environments backed by massive investment and engineering.

We're going to need to become a lot more creative about what and how we test if we're ever to reach dark factory levels. Unit tests and integration tests are one thing, but truly testing against everything in a typical project requirements document is another thing.

  • The team I saw doing this had a fake Slack channel full of fake users, each of which was constantly hammering away trying out different things against a staging environment version of the system.

    That was just one of the tricks they were using, and this was a couple of months ago, so they've no doubt come up with a bunch more testing methods since then.
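
    A minimal sketch of that kind of harness, not the team's actual setup: a handful of scripted "users" picking random actions against a staging API. The base URL and endpoints here are hypothetical placeholders.

    ```python
    # Hypothetical fake-user swarm against a staging environment: each worker
    # randomly exercises the API and flags any 5xx, i.e. a server-side failure.
    import random
    import threading
    import requests

    STAGING = "https://staging.example.com"  # placeholder URL
    ACTIONS = [
        lambda s: s.post(f"{STAGING}/messages", json={"text": "hello"}),
        lambda s: s.get(f"{STAGING}/messages", params={"limit": 50}),
        lambda s: s.delete(f"{STAGING}/messages/does-not-exist"),  # deliberately bogus
    ]

    def fake_user(user_id: int, rounds: int = 1000) -> None:
        session = requests.Session()
        for _ in range(rounds):
            resp = random.choice(ACTIONS)(session)
            if resp.status_code >= 500:
                print(f"user {user_id}: server error {resp.status_code} on {resp.url}")

    threads = [threading.Thread(target=fake_user, args=(i,)) for i in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```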

    • I dread to imagine the state of the code; there are some antipatterns that LLMs come back to again and again.

The analogy is a good fit. I'm at level 0 because no way in hell I'm going to die from cruise control.

I imagine there should be two levels above: 6: The AI designs the product, and 7: A market where AIs (now completely autonomous) sell incomprehensible products to other AIs. Like a project Dwain factor enhancer, where Dwain is a fictional character coined by an onlyfax DND bot.

One of the other authors he links to[0] brags that he's released 10 projects in the past month, like "Super Xtreme Mapper, a high-end, professional MIDI mapping software for professional DJs", which has 4 stars on GitHub. Despite the "high-end, professional...for professional" description, literally no one is going to use it, because this guy can't [be trusted to] maintain this software. Even if Claude Code is doing all the work, adding all the features, and fixing all the bugs, someone has to issue the command to do that work, and to foot the bill. This guy is just spraying code around and snorting digital coke.

There is plausibly something here with AI-generated code but as always, the value is not in the first release but in the years of maintenance and maturation that makes it something you can use and invest in. The problem with AI is that it's giving these people hyper-ADHD, they can't commit to anything, and no one will use vibe-coded tools--I'm betting not even themselves after a month.

[0] https://nraford7.github.io/road-runner-economy/

  • My feeling is that AI-generated code is disposable code.

    It’s great if you can quickly stand up a tool that scratches an itch for you, but there is minimal value in it for other people, and it probably doesn’t make sense to share it in a repo.

    Other people could just quickly vibe-code something of equal quality.

    • That's how I've been using and treating it, though I'm not primarily a developer. I work in ops, and LLMs write all sorts of disposable code for me. Primarily one-off scripts or little personal utilities. These don't get shared with anyone else, or put on github, etc. but have been incredibly helpful. SQL queries, some python to clean up or dig through some data sets, log files, etc. to spit out a quick result when something more robust or permanent isn't needed.

      Plus, so far, LLMs seem better at writing code to do a thing than at directly doing the thing, where they're more likely to hallucinate, especially when it comes to working with large CSV or JSON files. "Re-order this CSV file to be in alphabetical order by the Name field" will make up fake data, but "Write a python script to sort this CSV alphabetically by the Name field" will succeed.
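
      The kind of throwaway script that second prompt would produce is tiny; a sketch, with the input file name assumed:

      ```python
      # Sort a CSV alphabetically by its Name column (case-insensitive).
      # "input.csv" is an assumed file name; the Name column is from the prompt.
      import csv

      with open("input.csv", newline="") as f:
          reader = csv.DictReader(f)
          rows = sorted(reader, key=lambda r: r["Name"].lower())
          fieldnames = reader.fieldnames

      with open("input_sorted.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=fieldnames)
          writer.writeheader()
          writer.writerows(rows)
      ```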

      3 replies →

    • My growing (cynical) feeling is that AI-generated code is legacy-code-as-a-service. It is by nature trained on other people's and other companies' legacy code. (There's the training-set window, which is always in the past. There's the economics question of which companies would ever volunteer to opt their best proprietary production code into training sets. Sure, there are a few entirely open source companies, but those are still the exception and not the rule.) "Vibe code" is essentially delivered as Day Zero "legacy code" in the sense that the person who wrote that code is essentially no longer at the company. Even if context windows get extended to incredibly huge sizes and you have great prompt-preservation tools, eventually you no longer have the original context; and the models themselves retrain and get upgraded every so many months, so they are essentially "different people" each time. Most importantly, the models can't tell you the motivating "how" or "why" of anything; at best, good specs documents and prompts can, but even that is a gamble.

      The article starts with a lot of words about how the meaning and nature of "tech debt" are going to change a lot as AI adoption increases and more vibe coding happens, but I think I disagree about what that change means. I don't think AI reduces "tech debt". I don't think it is "deflationary" in any way. I think AI is going to gift us a world of tech-debt "hyperinflation". When every application in a company is "legacy code", all you have is tech debt.

      Having worked in companies with lots of legacy code, the thing you learn is that those apps are never as disposable as you want to believe. The sunk cost fallacy kicks in. (Generative AI Tokens are currently cheap, but cheap isn't free. Budgets still exist.) Various status quo fallacies kick in: "that's how the system has always worked", "we have to ensure every new version is backwards compatible with the old version", "we can't break anyone's existing process/workflow", "we can't require retraining", "we need 1:1 all the same features", and so forth.

      You can't just "vibe code" something of equal quality if you can't even figure out what "equal quality" means. That has been the death of many a legacy-code "rewrite project". By the time you've figured out how every user uses it (including how many bugs are load-bearing features in someone's process), you have too many requirements to consider, not enough time or budget left, and eventually a mandate to quit and "not fix what isn't broken". (Except it was broken enough to start up a discovery process at least once, and may do so again when the next team thinks they can dream up a budget for it.)

      Tech debt isn't going away and tech debt isn't getting eliminated. Tech debt is getting baked into Day Zero of production operations. (Projects may be starting already "in hock to creditors". The article says "Dark Software Factory", but I read "Dark Software Pawn Shop".) Tech debt is potentially increasing faster than humans can understand it. I feel like legacy-code skills are going to be in higher demand than ever. It is maybe going to be "deflationary" in cost for those jobs, but only because the supply of legacy-code projects will be so high that software developers will have a buffet to choose from.

      4 replies →

  • > snorting digital coke

    What an apt description -- the website on the other side of that link is the most coked-out design I've ever seen.

  • Software products are about unique competitive value that grows over time; products either have it or they don't. AI-produced software is like open source in a sense: you get something for free. But who's gonna get rich if everybody can just duplicate your product by asking an AI to do it again?

    Think of investing in the stock market by asking an AI to do all the trading for you. Great, maybe you make some money. But when everybody catches on that it is better to let the AI do the trading, then other people's AIs are gonna buy the same stocks as yours, and their price goes up. Less value for you.

    • Spot on. That's why so far all of the supposed solutions to 'the programmer problem' have failed.

      Whether this time it will be different I don't know. But originally compilers were supposed to kill off the programmers. Then it was 3GLs and 4GLs (the '70s and '80s). Then it was 'no code', which eventually became 'low code' because those pesky edge cases kept cropping up. Now it is AI, the 'dark factory', and other fearmongering. I'll believe it when I see it.

      Another HN'er has pointed me in an interesting direction that I think is more realistic: AI will become a tool in the toolbox that allows experts to do what they did before, but faster and hopefully better. It will also be the tool that generates a ton of really, really bad code that people will indeed not look at, because they cannot afford to look at it: you can generate more work for a person in a few seconds of compute time than they can cover in a lifetime. So you end up with half-baked, buggy, and insecure solutions that do sort of work on the happy path, but that also include a ton of stuff that wasn't supposed to be there in the first place and wasn't explicitly spelled out in the test set (which is a pretty good reflection of my typical interaction with AI).

      The whole thing hinges on whether or not that can be fixed. But I'm looking forward to reading someone's vibe coded solution that is in production at some presumably secure installation.

      I'm going to bet that 'I blame the AI' is a pattern we will be seeing a lot of.

      5 replies →

  • > The problem with AI is that it's giving these people hyper-ADHD

    Shouldn't be a problem - I've seen AT LEAST half a dozen almost-assuredly vibe coded projects related to dealing with ADHD in the last month...

    Show HN: I gamified a productivity app to help my ADHD friends get things done https://news.ycombinator.com/item?id=46797212

    Show HN: built a 24h-clock based radial planner to help with ADHD time blindness https://news.ycombinator.com/item?id=46668890

    Show HN: DayZen: Visual day planner for ADHD brains https://news.ycombinator.com/item?id=46742799

    Show HN: ADHD Focus Light https://news.ycombinator.com/item?id=46537708

    Show HN: I built Focusmo – a focus app for ADHD time-blindness https://news.ycombinator.com/item?id=46695618

    Show HN: Local-First ADHD Planner for Windows and Android https://news.ycombinator.com/item?id=46646188

  • > One of the other authors he links to[0] brags that he's released 10 projects in the past month, like "Super Xtreme Mapper, a high-end, professional MIDI mapping software for professional DJs", which has 4 stars on GitHub. Despite the "high-end, professional...for professional" description, literally no one is going to use it, because this guy can't [be trusted to] maintain this software. Even if Claude Code is doing all the work, adding all the features, and fixing all the bugs, someone has to issue the command to do that work, and to foot the bill. This guy is just spraying code around and snorting digital coke.

    While I'd expect almost nobody to use apps meeting this description, I disagree about why:

    It's not that other people have to foot the bill, it's that the bill is so low that it's a question of this particular app being discovered amongst all the others.

    $15/month is a rounding error on most budgets. If every musician buys a Claude subscription and prompts for their own variations on this idea, there's a few million other apps that also do all that this app does, which vary from completely identical (because the prompts themselves were also) to utterly personalised for the particular preferences of exactly one artist.

  • There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.

    Sometimes it might, if there are security implications. You might need to fix bugs in networking code, or update crypto handling, or whatever, and those types of things are fine. The idea that you can't have legitimately useful one-off software, used by millions, despite not being updated, is a silly artifact of the MBA takeover of big tech.

    Continuous development is not intrinsic to the "goodness" of software. Sometimes it's a big disappointment if software hasn't been updated consistently, but other times, it just doesn't matter. I've got scripts, little apps, tools, things that I've used, sometimes daily, for over a decade, that never ever get updated, and I'd be annoyed if I had to update them. They have simple tasks to perform that they do well; you don't need all the rest of the "and now we have liquid glass icons! oh, and mandatory telemetry, and if you want ads to go away, you must pay for a premium subscription".

    The value is in the utility - the work done by the software. How much effort and maintenance goes into creating it often has nothing to do with how useful it is.

    Look at Windows 11: hundreds of billions of dollars and years of development and maintenance, and it's a steaming pile of horseshit. They're driving people to Linux in record numbers.

    Blender is a counterexample. They're constructive and deliberate.

    What's likely to happen is everyone will have AI access to built-on-the-fly apps and tools that they retain for future use, and platforms will consolidate and optimize the available tools, and nobody will need to vibe-code or engage in extensive software development when their AI butler can do all the software work they might need done.

    • > There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.

      If what you are saying is that _maintenance_ is not the same as feature updates and changes, then I agree. If you are literally saying that you think software, once released, doesn't ever need any further changes for maintenance rather than feature reasons, I disagree.

      For instance, you mention "security implications", but as a "might", not a "will". I think this vastly underestimates the security issues inherent in software. I'd go so far as to say that all software has two categories of security issues: those that are known today, and those that will be uncovered in the future.

      Then there's the issue of the runtime environment changing. If it's web-based, changing browser capabilities, for instance. Or APIs it called changing or breaking. Etc.

      Software may not be physical, but it's subject to entropy as much as roads, rails, and other goods and infrastructure out in the non-digital world.

      2 replies →

    • Sure, but the reason this is the case is simple: writing software is easy; writing good software is stupendously hard. So all those man-years that went into maintaining software were effectively just hardening, polishing, bug fixes, and slow adjustment to changing requirements and new situations. If you throw it all out whenever the requirements change, you never end up with something that is secure or as bug-free as you can make it.