Comment by _pdp_

5 hours ago

With everything we have done so far at our company, I believe that by the end of 2026 our software will be self-improving all the time.

And no, it is not AI slop, and we don't vibe code. There are a lot of practical aspects of running software and maintaining/improving code that can be done well with AI if you have the right setup. It is hard to formulate what "right" looks like at this stage, as we are still iterating on this as well.

However, in our own experiments we can clearly see dramatic increases in automation. I mean, we have agents working overnight as we sleep, and this is not even pushing the limits. We are now wrapping up major changes that will allow us to run AI agents all the time, as long as we can afford them.

I can even see most of these materialising in Q1 2026.

Fun times.

What exactly are your agents doing overnight? I often hear folks talk about their agents running for long periods of time, but rarely about the outcomes those agents are driving.

  • We have a lot of grunt work scheduled overnight: finding bugs, creating tests where we don’t have good coverage or could improve it, integrations, documentation work, etc.

    Not everything gets accepted. There is a lot of work that is discarded and much more pending verification and acceptance.

    Frankly, and I hope I don’t come across as alarmist (judge for yourself from my previous comments on HN and Reddit), we cannot keep up with the output! And a lot of it is actually good, and we should incorporate it, even if only partially.

    At the moment we are figuring out how to make things more autonomous while we have the safety and guardrails in place.

    The biggest issue I see at this stage is how to make sense of it all, as I do not believe we fully understand what is happening, only the general notion of it.

    I truly believe we will reach a point where ideas matter more than execution, which is what I would expect to be the case with more advanced and better-applied AI.
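The workflow described in that comment, overnight agents producing work that lands as pending, then gets verified before being accepted or discarded, could be sketched roughly like this. All names and the verification rule here are hypothetical placeholders, not the commenter's actual setup:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DISCARDED = "discarded"

@dataclass
class AgentTask:
    kind: str      # e.g. "bugfix", "test-coverage", "docs"
    output: str    # whatever artifact the agent produced overnight
    status: Status = Status.PENDING

def verify(task: AgentTask) -> bool:
    # Hypothetical gate: a real pipeline would run tests, linters,
    # and human review rather than this toy check.
    return bool(task.output) and "TODO" not in task.output

def triage(tasks: list[AgentTask]) -> list[AgentTask]:
    # Everything an agent produces starts as PENDING; the verifier
    # promotes it to ACCEPTED or DISCARDED, mirroring "not everything
    # gets accepted" from the comment above.
    for task in tasks:
        task.status = Status.ACCEPTED if verify(task) else Status.DISCARDED
    return tasks
```

The point of the gate is that autonomy scales only as fast as verification does; the backlog the commenter describes is exactly the pile of `PENDING` items waiting on that step.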