Comment by FitchApps

9 days ago

This is all wonderful, but what happens when these tools aren't available - you lose internet connection, the agent is misconfigured, or you simply run out of credits? How would someone support their business / software / livelihood? First, the agents take our software-writing tasks; then they encroach on CI/CD and the release process, and take over from there...

Now, imagine a scenario for a typical SWE today, or in a not-so-distant future: the agents build your software, you're simply a gate-keeper/prompt engineer, all tests pass, you're doing a production deployment at 12am, something happens, but your agents are down. At that point, what do you do if you haven't built or even deployed the system yourself? You're basically L1 support at this point, pretty useless and clueless when it comes to fully understanding and supporting the application.

I've had a fairly long career as a web dev. When I started, I used to be finicky about configuring my dev environment so that if the internet went down I could still do some kind of work. But over time, partly as I worked on bigger projects and partly as the industry changed, that became infeasible.

So you know what I do, what I've been doing for about a decade, if the internet goes down? I stop working. And over that time I've worked in many places around the world, developing countries, tropical islands, small huts on remote mountains. And I've lost maybe a day of work because of connectivity issues. I've been deep in a rainforest during a monsoon and still had a 4G connection.

If Anthropic goes down I can switch to Gemini. If I run out of credits (people use credits? I only use a monthly subscription) then I can find enough free credits around to get some basic work done. Increasingly, I could run a local model that would be good enough for some things and that'll become even better in the future. So no, I don't think these are any kind of valid arguments. Everyone relies on online services for their work these days, for banking, messaging, office work, etc. If there's some kind of catastrophe that breaks this, we're all screwed, not just the coders who rely on LLMs.

  • > I've worked in many places around the world, developing countries, tropical islands, small huts on remote mountains

    I am genuinely curious about your work lifestyle.

    The freedom to travel anywhere while working sounds awesome.

    The ability to work anywhere while traveling sounds less so.

  • Meanwhile I’ve lost roughly a month from internet issues. My guess is your experience was unusual enough that you felt the need to comment, where most developers who were less lucky, or who just remember more issues, didn’t.

    • > Meanwhile I’ve lost roughly a month from internet issues.

      If you tell me "I lost internet at home and couldn't work there", that's one thing. But that you simply went about a month without an internet connection? I find that hard to believe.

  • > people use credits? I only use a monthly subscription

    Those still have limits, no? Or if there's a subscription that provides limitless access, please tell me which one it is.

    • I've been on the ChatGPT Pro plan since it was introduced, and have used codex-rs since it was made public, and never hit a limit. I came close last week (not sure if the limits were recently introduced, or were always there but got lowered), but I think that's as close to "unlimited" as you can get without running your own inference.

      I've tried Anthropic's Max plan before, but hit limits after just a couple of hours; same with Google's stuff. I wasn't doing anything radically different when I tried those compared with Codex, so it seems the others' limits are way lower.

    • I finally bit the bullet and got a $200 Claude subscription last month. It's been a busy month and I've used it a lot. More than is healthy, more than I could sustain for more than a few weeks. I've managed to hit a 5-hour limit exactly once (20 minutes before it refreshed) and I've never hit more than 80% of a weekly limit.

      But if I did - and I could imagine having some specific highly parallelizable work like writing a bazillion unit tests where I send out 40 subagents at a time - then the solution would be to buy two subscriptions. Not switch to API billing.

  • And here I am thinking that my life depends too much on the internet and the knowledge you can find on it. So if something big/extreme happens, like nuclear war or a major internet outage, I know nothing. No recipes, no basic medical stuff like how to use antibiotics, no electronics knowledge, whatever. I don't have any books with that kind of stuff like my parents used to. I have seen some examples of Wikipedia backed up for offline usage, local LLMs, etc., and am thinking of implementing something as a precaution for these extreme events.

    • That's a very different problem than OP's.

      You should keep physical books, food, and medication for a SHTF scenario

      "Back to Basics", "Where There Is No Doctor" and the Bible are my SHTF books

      You won't be coding in a SHTF scenario.

  • > And over that time I've worked in many places around the world, developing countries, tropical islands, small huts on remote mountains. And I've lost maybe a day of work because of connectivity issues. I've been deep in a rainforest during a monsoon and still had a 4G connection.

    cries on a Bavarian train

    • If it's any consolation, Bavaria is a beautiful part of the world that's up there with any tropical island or rainforest. I hope to visit again sometime.

  • I consider it more or less immoral to be expected to use the Internet for anything other than retrieving information from others or voluntarily sharing information with others. The idea that a dev environment should even require finicky configuration to allow for productive work sans Internet appalls me. I should only have to connect in order to push to / pull from origin, deploy something or acquire build tools / dependencies, which should be cached locally and rarely require any kind of update.

    • Do you know how many times since 1999 I have had my work Internet go down? Definitely not enough to spend time worrying about it. The world didn’t stop.

      In 2022, funnily enough, I was at an AWS office (I worked remotely when I worked there) working in ProServe, and us-east-1 was having issues that were affecting everything. Guess what we all did? Stopped working. The world didn't come to an end.

      Even now that I work from home, on the rare occasions that Internet goes down, I just use my phone if I need to take a Zoom call.

Same thing you do if AWS goes down. Same thing we used to do back in the desktop days when the power went out. Heck, one day before WFH was common, we all got the afternoon off 'cause the toilets were busted and they couldn't keep 100 people in an office with no toilets. Stuff happens. And if that's really not acceptable, you invest in solutions with the understanding that you're dumping a lot of cash into inefficient solutions for rare problems.

  • Ya, I will say the argument isn't much different than "what happens if there is no gas for your tractor".

    • I think it's more like: what if your GPS isn't working, but you're just supposed to drive down the block?

Why would these tools suddenly be unavailable? Once you answer that question, the challenge becomes mitigating that situation rather than doing things the old way: having backup systems, SLAs from network and other providers, etc.

Actually, the last thing you probably want is somebody reverting back to doing things the way we did them 20 years ago and creating a big mess. Much easier to just declare an outage and deal with it properly according to some emergency plan (you do have one, right?).

CI/CD is relatively new, actually. I remember doing that stuff by hand: I compiled our system on my desktop machine, created a zip file, and then our operations department and I would use an ISDN line to upload the zip file to the server and "deploy" it by unzipping it and restarting the server. That was only 23 years ago. We had a Hudson server somewhere, but it had no access to our customer infrastructure. There was no cloud.

I can still do that stuff if I need to (and I sometimes do ;-) ). But I wouldn't dream of messing with a modern production setup like that. We have CI/CD for a reason. What if CI/CD were to break? I'd fix it rather than adding to the problem by panicking and doing things manually.

  • > Why would these tools suddenly be unavailable?

    Take a look at how ridiculously much money is invested in these tools and the companies behind them. Those investments expect a return somehow.

    • The models are already made. They can just run the very useful models they have indefinitely, and they’d be profitable. Or when they go under someone else can buy the rights to the weights.

      Anthropic, a common coding model provider, has said that their models generate enough cash to cover their own training costs before the next one is released. If they stopped getting massive investments, they should be able to coast with the models they have.

    • I look at this as cost savings waiting to happen. Nvidia extorts companies to the tune of tens of thousands of dollars for a GPU. Somebody's going to undercut them. At the same time, people are working on optimizations: using cheap CPUs for inference instead of expensive GPUs (doesn't work for everything, but if your model is small enough you can get away with it), using lower-bit quantization to make models cheaper to run, using tricks like prompt caching to make subsequent calls more efficient, etc.

      Your base assumption is that it is expensive, and that these companies will therefore eventually fail as they keep making less money than they spend. The reality is that they are indeed spending enormously now and making a lot of very non-linear progress. At the same time, a lot of that stuff is being widely published, and quite a lot of it is open source. At some point you might get consolidation, and maybe some companies indeed don't make it. But their core tech will survive. Investors might be crying in a corner, but that won't stop people from continuing to use the tech in some form or another.

      I already have a laptop that can run some modestly largish models locally. I'm not going to spend 40K or whatever on something that can run a GPT-5-class model, but it won't cost that in a few years either. This tech is here to stay. We might pay more or less for it. The current state is the worst it is ever going to be: it's going to be faster, bigger, better, cheaper, more useful, etc. At some point the curves flatten and people might start paying more attention to cost. Maybe don't burn gas in expensive and inefficient gas generators (as opposed to more efficient gas power plants) and use cheap wind/solar instead. Maybe get GPUs from a different vendor at a lower price. Maybe take a look at algorithmic efficiencies, etc. There is a lot of room for optimization in this market. IMHO the surviving companies will be making billions, will be running stuff at scale, and will be highly profitable.

      Maybe some investors won't get their money back. Shit happens. That's why it's called venture capital. The web bubble bursting didn't kill the web either.

On-device models (deepseek-coder, etc.) are very good - better than the old way of using Stack Overflow on the internet. I have been quite productive on long-haul flights without internet!

You're an engineer; your goal is to figure stuff out using the best tools in front of you.

Humans are resilient; they reliably perform (and throw great parties) in all sorts of chaotic conditions. Perhaps the thing that separates us most from AI is our ability to bring out our best selves when baseline conditions worsen.

  • I know this gets asked all the time, but what is your preferred workflow when using local models? I was pretty deep into it early on, with Tabby and Continue.dev, but once I started using Claude Code with Opus it was hard to go back. I do the same as you, and still use them on flights and whatnot, but I think my implementation could be improved.

  • On-device models are still a tier or two below most frontier models (really, Opus 4.5).

The tools are going to ~zero (~5 years). The open-source LLMs are here. No one can put them back or take them down. No internet, no problem. I don't see a long-term future in frontier LLM companies.

  • What I don't get is: how are these free LLMs getting funded? Who is paying $20-100 million to create an open-weights LLM? And long term, why would they keep doing it?

    • I see what you're saying, but it doesn't matter that much in the long run. If everything stopped right now, the state-of-the-art open source models can still solve a lot of problems. They may never solve coding, per se, but they're good enough.

  • Do you mean the open binary LLMs, or did you find the secret training data and the random seed for LLaMa?

This is the argument that people used to fight against rich customized IDEs like emacs for decades. What if you need to ssh into a machine that only has baseline vi in an emergency?

I'll happily optimize my life for 99.999% of the time.

If the Internet is down for a long time, I've got bigger problems anyway. Like finding food.

  • > If the Internet is down for a long time, I've got bigger problems anyway.

    I don't know about you, but I don't connect to the internet most of the time, and it makes me more productive, not less.

Yeah! I sometimes use the JetBrains AI Assistant, and it has suddenly been showing only a blank window, nothing else. So I'm not getting anything out of it, but I can see my credits being spent!

If I were totally dependent on it, I would be in trouble. Fortunately, I am not.

What good would being able to "build my software" without internet access do me, unless I'm building software for a disconnected desktop? Exactly what am I going to do with it? How am I going to get to my servers?

  • > unless I’m building software for a disconnected desktop?

    ... Why wouldn't you build software that works there?

    As I understand things, the purpose of computers is to run software.

    But more importantly, let's suppose your software does require an Internet connection to function.

    Why should that imply a requirement for your development environment to have one?

    Why should that imply a requirement for a code generation tool to have one?

    • Because, to a first approximation, no one wants desktop software: maintenance is a pain, it's a pain to distribute across a large organization, people want to use the same app across devices, and no one will pay me for it.

      > But more importantly, let's suppose your software does require an Internet connection to function.

      Because I have been able to depend on "fast" internet since 2000, both at home and at work, just like I've been able to depend on a compiler since 1992? There is nothing so important that it can't wait on the rare chance that the internet goes out.

      > Why should that imply a requirement for a code generation tool to have one

      Because I don’t want to spend thousands of dollars to run a frontier model locally when I can spend $20/month and codex is included with my ChatGPT subscription?

Or your business gets flagged by an automated system for dubious reasons with no way to appeal. It's the old story of big tech: they pretend to be on your side first, but their motives are nefarious.

People used to say, and still say, the same thing about GPS. As these systems mature, they stay up and become incorporated into our workflows. The implication in the case of GPS was that navigating on your own is no longer a very critical skill. Correspondingly, the implication here is that software design and feature design are more important than coding or technical implementation. Similar to Google: it's more important that you know how and what to ask for than that you be able to generate it yourself.

That reminds me of when teachers would say: what if you're without a calculator? And yet we all have smartphones in our pockets today with calculators.

  • > That reminds me of when teachers would say: what if you're without a calculator? And yet we all have smartphones in our pockets today with calculators.

    Your teachers had the right goal, but a bad argument. Learning arithmetic isn't just about being able to do a calculation. It's about getting your brain comfortable with math. If you always have to pull out a goddamn calculator, you'll be extremely limited.

    Trust me, elementary-age me was dumb to not listen to those teachers and to become so calculator-dependent.

    • I think learning arithmetic is a good idea, but it's only a part of computation. I don't think we should get too hung up on a particular method of computation (because there are so many ways).

  • Certain subjects we treat as if one has to learn woodworking before taking violin lessons.

    We just really underestimate sentimentality in our society because it doesn't fit our self-conception.

    • Very fair. I think we underestimate our own sentimentalities even more, e.g. the teacher who believes addition or multiplication has to be done a particular way (like the standard algorithm vs. partial products).

  • Having a deep intuition about what the calculator is doing is the skill we were actually being taught. Teachers don't always understand why things are being taught.

    • > Teachers don't always understand why things are being taught.

      Yes, but I don't think that is the actual bottleneck. Even when they do, most children probably don't care about abstract goals; they care about immediate skills in their everyday life, or just the statement that they will need it.

    • I’m a fan of using a variety of methods to teach, no issue with that. My issue is with teachers who don’t admit how the world is changing. Dinosaurs.

  • And yet calculating your shopping expenses to avoid getting screwed by buggy vending machines, or quickly making rough estimations at work, is as useful as ever. Tell me how you can learn calculus and group theory when you skipped primary school math.

It’s like with most programmers today having forgotten assembly. If their compiler breaks, what are they going to do?!

(I jest a bit; I actually agree, since turning source code into assembly is a tighter problem space than turning natural-language requirements into code)

The Stack Overflow era wasn’t that long ago, and none of us could write a library call without consulting online sources.

You are at least a decade late to post fears about developers' reliance on the internet. It was complete well before the LLM era.

  • > none of us could write a library call without consulting online sources.

    I use SO quite often, but for the kinds of questions I would otherwise ask other people, because I can't figure them out short of reverse-engineering something. For actual documentation, man pages and info documents are pretty awesome. Honestly, I dread leaving the world of libraries shipped by my OS vendor, because the quality of documentation drops off fast.

  • I rely on the internet just as much as the rest of you. When that goes down, I crack out man pages, and the local copy of the documentation I can build from source code comments, and (after a 5-minute delay while I figure out how to do that) I'm back to programming. I'm probably half as quick, but I'm also learning more (speeding me up when the internet does come back on), so overall it's not actually time lost.
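
    For flavor, a few examples of the kind of offline documentation being described; the exact commands depend on your toolchain, so treat these as illustrative rather than the parent's actual setup:

        man 3 printf            # C library man pages, installed locally
        python -m pydoc json    # Python's bundled docs, no network needed
        rustup doc --std        # opens a local copy of the Rust std docs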

    • Or I can just take a break, go to the gym downstairs, etc …

      Before you go on about kids these days, my first time coding was on an Apple //e in assembly.

Well, you're supposed to pay for the Platinum Pro Gold Deluxe package, which includes priority support with an SLA, so that six months down the road you get a one-month credit for the outage that destroyed your business.

I invested in a beefy laptop that can run Qwen Coder locally, and it works pretty well. I really think local models are the future; you don’t have to worry about credits or internet access so much.

  • What are the specs, and how does it compare to Copilot or GPT Codex?

    • You can check out https://news.ycombinator.com/item?id=43856489

      Copilot comparison:

      Intelligence: Qwen2.5-Coder-32B is widely considered the first open-source model to reach GPT-4o and Claude 3.5 Sonnet levels of coding proficiency. While Copilot (using GPT-4o) remains highly reliable, Qwen often produces more concise code and can outperform cloud models in specific tasks like code repair.

      Latency: Local execution on an M3 Max provides near-zero network latency, resulting in faster "start-to-type" responses than Copilot, which must round-trip to the cloud.

      Reliability: Copilot is an all-in-one "vibe" that integrates deeply into VS Code. Qwen requires local tools like Ollama or MLX-LM and a plugin like Continue.dev to achieve the same UX.

      GPT-Codex:

      Intelligence & Reasoning: In recent 2025–2026 benchmarks, the Qwen3-Coder series has emerged as the strongest open-source performer, matching the "pass@5" resolution rates of flagship models like GPT-5-High. While OpenAI’s latest GPT-5.1-Codex-Max remains the overall leader in complex, project-wide autonomous engineering, Qwen is frequently cited as the better choice for local, file-specific logic.

      Architecture & Efficiency: OpenAI models like GPT-OSS-20b (a Mixture-of-Experts model) are optimized for extreme speed and tool-calling. However, the M3 Max with 64GB is powerful enough to run the Qwen3-Coder-30B or 32B models at full fidelity, which provides superior logic to OpenAI's smaller "mini" or "OSS" models.

      Context Window: Qwen models offer substantial context (up to 128K–256K tokens), which is comparable to OpenAI’s specialized Codex variants. This allows you to process entire modules locally without the high per-token cost of sending that data to OpenAI's servers.
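
      To make the setup concrete, a minimal fully-local workflow with Ollama might look like the following (the model tag is just an example; pick one that fits your RAM):

          # one-time download; afterwards this runs fully offline
          ollama pull qwen2.5-coder:32b
          ollama run qwen2.5-coder:32b "Refactor this function to avoid N+1 queries"

          # Ollama also serves an API on localhost:11434, which editor plugins
          # such as Continue.dev can point at for in-editor assistance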

I think you laid out why so much money is being poured into this: it's digital crack, and if they can addict enough businesses, they have subscription moats. Oraclification.

> - you lose internet connection, the agent is misconfigured, or you simply run out of credits.

What happens when GitHub goes down? You shrug and take a long lunch.

  • When GitHub goes down? I keep working, that's the point of a distributed version control system.

    • Yes, and when you do want to share with your colleagues, `git push /media/user/usb` takes a few seconds, and plugging an Ethernet cable into both computers and disabling ufw takes a few minutes (when you need to find a cable first).
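
      A minimal sketch of that USB flow, assuming the drive is mounted at /media/user/usb (the repo path and branch name are illustrative):

          # one-time: create a bare repo on the mounted drive and add it as a remote
          git init --bare /media/user/usb/project.git
          git remote add usb /media/user/usb/project.git

          # share work: push to the drive, hand it over, colleague pulls from it
          git push usb main
          git pull /media/user/usb/project.git main   # on the colleague's machine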

It’s kinda scary actually. After getting used to AI doing all the work, doing it yourself again is like using a toilet without a bidet.

Losing connectivity is a non-issue because it will come back soon enough absent some global event. The realistic risks are rather:

* all services are run at a loss, and prices increase to the point where the corps don’t want to pay for everyone anymore.

* it turns out that our chats are used for corporate espionage, and the corps get spooked and cut access.

* some dispute between the EU and US happens, and they cut our access.

The solution is having EU and local models.

> This is all wonderful, but what happens when these tools aren't available - you lose internet connection, the agent is misconfigured, or you simply run out of credits? How would someone support their business / software / livelihood?

This is why I suggest developers use the free time they gain to write documentation for their software (preferably in their own words, not just AI slop), read the official docs, sharpen their swords, and learn design patterns more thoroughly. The more you know about the code / how to code, the more you can guide the model to pick a better route for a solution.

  • I'm seeing things that are seriously alarming, though. Claude can now write better documentation and document things 95% of the way there (we're building a set of MCP tools and API endpoints for a large enterprise...). Claude is already writing code, fixing bugs, or suggesting fixes. We have a PM on our team, with access to both the React and API projects, who saw one of the services return a 500; they used Claude to pinpoint the bug to the exact database call and suggest a fix. So now it's quite common for PMs to not only post bugs but also "suggested fixes" from the agents. In a not-so-distant future, developers here will simply be redundant, since a PM can just use Claude to code and support the entire app. Right now they still rely on us for support and deployments, but that could go away too.

    • PMs could have chosen to do this before, though. Sure, LLMs obviously empower them, but the main reason you have developers is to have someone to hold accountable, and they thus have to be extra careful and thoughtful about the code they write. The PMs could come up with ad hoc fixes, but unless they're also willing to be on the hook for the code, it's not terribly useful organizationally, IMO.

    • Sure, Claude can write better docs, but if you don't write the documentation yourself you won't fully know the codebase. I would argue: write the docs, then have Claude critique them, then adjust.

    • This doesn't really seem to be the point? OP is being prescriptive, talking about what we should do, not about what could be done.

      Apply it to anything else: you could eat out at restaurants every night, and it would do a great job of feeding you! Think of all the productivity you would gain by relying on agential chefs. With restaurants, even I can eat like a French chef; they have truly democratized food. And they do a perfect job these days executing dishes, with only some mistakes.

    • Well, if they make the decision to accept the suggestion and it's wrong, that's on them. But if you do, that's on you. The LLM? How can your boss blame the LLM? By yelling at it?

At some point it will get treated like infrastructure: the same question as what a typical SWE does when Cloudflare is broken or AWS is down.

  • At most places I've worked, we can still get things done when AWS/GCP/Azure/OCI are down. For my own selfhosted work, I'm more self-reliant. But I'm aware there are some companies who do 100% of their work within AWS/GCP/Azure/OCI and are probably 100% down when they go down. That's a consequence of how they decided to architect their apps, services and infrastructure.

How would you answer the same question about water or electricity?

Your pizza restaurant is all wonderful, but what happens when the continual supply of power to the freezer breaks? How will you run your restaurant then?

> This is all wonderful, but what happens when these tools aren't available - you lose internet connection, the agent is misconfigured, or you simply run out of credits?

I would work on the hundreds of non-coding tasks that I need to do. Or just not work?

What do you do when GitHub Actions goes down?