Show HN: Replacing my OS process scheduler with an LLM

1 month ago (github.com)

This is a pretty funny project: you've outsourced the neurotic developer who keeps their task manager open and kills off processes they don't like.

I wouldn't call it replacing the scheduler though - more that you've made a scheduler manager.

  • haha exactly. i realized i spent too much time staring at htop wondering 'what is this process?', so i decided to automate my own anxiety.

    Scheduler Manager is definitely the more accurate term. I'm just the middleman between the chaos and the kernel.

    • Now we need processes to gain awareness of the process manager and integrate an LLM into each process to argue with the process manager about why it should let them live.

      1 reply →

  • I resemble that comment!

    But seriously, it really does bug me on principle that Dropbox should use over half a GB simply because it uses Chromium, even when nothing is visible.

    • For me it's LSP servers taking 2 gigs of RAM. With Antigravity, Google managed to go beyond this; it's totally unusable for me (but other VS Code clones work fine, apart from the 2 GB LSP servers).

OP here. this is a cursed project lol, but i wanted to see: What happens if you replace the OS scheduler with an LLM?

With Groq speed (Llama 3 @ 800t/s), inference is finally fast enough to be in the system loop.

i built this TUI to monitor my process tree. instead of just showing CPU %, it checks the context (parent process, disk I/O) to decide if a process is compiling code or bloatware. It roasts, throttles, or kills based on that.
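
for the curious, the judging step is roughly this shape (a minimal python sketch, not the actual repo code: psutil and the Groq client are just what i'd reach for, and the model name, prompt and field names here are made up):

  import json

  import psutil
  from groq import Groq  # assumption: any OpenAI-compatible client would look the same

  client = Groq()  # reads GROQ_API_KEY from the environment

  def process_context(pid: int) -> dict:
      """Collect the signals the model judges a process on."""
      p = psutil.Process(pid)
      parent = p.parent()
      ctx = {
          "name": p.name(),
          "parent": parent.name() if parent else None,
          "cpu_percent": p.cpu_percent(interval=0.1),
          "rss_mb": p.memory_info().rss // (1024 * 1024),
          "cmdline": " ".join(p.cmdline()[:5]),
      }
      try:
          io = p.io_counters()  # not available on every platform
          ctx["disk_read_mb"] = io.read_bytes // (1024 * 1024)
      except (AttributeError, psutil.AccessDenied):
          pass
      return ctx

  def judge(ctx: dict) -> str:
      """Ask the model for a verdict: KEEP, THROTTLE or KILL, plus one line of roast."""
      resp = client.chat.completions.create(
          model="llama3-70b-8192",  # placeholder model name
          messages=[
              {"role": "system", "content": "You triage OS processes. Reply with KEEP, "
                                            "THROTTLE or KILL and one sentence of roast."},
              {"role": "user", "content": json.dumps(ctx)},
          ],
      )
      return resp.choices[0].message.content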

It's my experiment in what "Intelligent Kernels" might look like. i used Delta Caching to keep overhead low.
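
the Delta Caching gist is roughly this (illustrative sketch, not the exact repo code; the bucket sizes are invented): only re-ask the model about a process when its snapshot changes, so a steady-state desktop costs close to zero tokens per tick.

  import psutil

  # delta cache sketch: skip the LLM call for any process whose snapshot is unchanged
  _last_snapshot: dict[int, tuple] = {}

  def _snapshot(p: psutil.Process) -> tuple:
      # coarse buckets so normal jitter doesn't invalidate the cache
      return (
          p.name(),
          int(p.cpu_percent(interval=0.0)) // 10,        # 10% CPU buckets
          p.memory_info().rss // (64 * 1024 * 1024),     # 64 MB RSS buckets
      )

  def tick(judge) -> dict[int, str]:
      """Return fresh verdicts only for processes whose snapshot changed."""
      verdicts: dict[int, str] = {}
      for p in psutil.process_iter():
          try:
              snap = _snapshot(p)
          except (psutil.NoSuchProcess, psutil.AccessDenied):
              continue
          if _last_snapshot.get(p.pid) == snap:
              continue  # unchanged since the last tick: no LLM call
          _last_snapshot[p.pid] = snap
          verdicts[p.pid] = judge({"pid": p.pid, "name": snap[0]})
      return verdicts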

It really is cursed to be spending hundreds of watts of power in a datacenter somewhere to make a laptop run slightly faster.

  • oh absolutely. burning a coal plant to decide if i should close discord is peak 2025 energy. strictly speaking, using the local model (Ollama) is 'free' in terms of watts since my laptop is on anyway, but yeah, if the inefficiency is the art, I'm the artist.

    • Running ollama to compute inference uses energy that wouldn't have been used if you weren't running ollama. There's no free lunch here.

    • An interesting thought experiment - a fully local, off-grid, off-network LLM device. Solar or wind or what have you. I suppose the Mac Studio route is a good option here, I think Apple make the most energy efficient high-memory options. Back of the napkin indicates it’s possible, just a high up front cost. Interesting to imagine a somewhat catastrophe-resilient LLM device…

      4 replies →

  • An entire datacenter, on the other hand, might be appealing for spotting things you wouldn't otherwise see in a sea of logs and graphs.

You can't replace the NTOS scheduler. This is more of an automated (?) process manager.

  • you are technically right (the best kind of right). i am running in userspace, so i can't replace the actual thread scheduling logic in Ring 0 without writing a driver and BSODing my machine.

    think of this more as a High-Level Governor. The NTOS scheduler decides which thread runs next, but this LLM decides whether a process deserves to exist at all.

    basically: NTOS tries to be fair to every process. BrainKernel overrides that fairness with judgment. if i suspend a process, i have effectively vetoed the scheduler.
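
    the 'veto' itself is boring userspace process control, roughly this (sketch; psutil is just what i'd use as the cross-platform wrapper, and the verdict strings are made up):

      import psutil

      def veto(pid: int, verdict: str) -> None:
          """Userspace 'veto' of the kernel's fairness: suspend, resume, or kill."""
          p = psutil.Process(pid)
          if verdict == "THROTTLE":
              p.suspend()   # SIGSTOP on POSIX; the scheduler never picks it again until resumed
          elif verdict == "KEEP":
              p.resume()    # SIGCONT, in case an earlier tick throttled it
          elif verdict == "KILL":
              p.kill()      # SIGKILL / TerminateProcess: no cleanup, no appeal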

    • > NTOS tries to be fair to every process

      This is a huge oversimplification of the NTOS scheduler. It's not that dumb!

      > if i suspend a process, i have effectively vetoed the scheduler.

      I mean, I suppose? It's the NTOS scheduler doing the suspension. It's like changing the priority level -- sure, you can do it, but it's generally to your detriment outside of corner cases.

  • I do wonder how painfully slow the computer would be if you actually did replace the in-kernel scheduler with an LLM...

You're underselling this as a process manager; with some prompt changes it could also be a productivity tool: detect procrastination apps (games, non-professional chat, video streaming) and kill them.

  • That is actually a brilliant pivot.

    A 'Focus Mode' that doesn't just block URLs but literally murders the process if I open Steam or Civilization VI.

    I could probably add a --mode strict flag that swaps the system prompt to be a ruthless productivity coach. 'Oh, you opened Discord? Roast and Kill.'

    Thanks for the idea mate!
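
    the wiring for that is tiny, something like this (sketch; the flag name and prompts are made up on the spot):

      import argparse

      PROMPTS = {
          "chill":  "You are a snarky but lenient process triager. Kill only obvious bloat.",
          "strict": "You are a ruthless productivity coach. Games, non-work chat and video "
                    "streaming get roasted and killed on sight.",
      }

      parser = argparse.ArgumentParser(prog="brainkernel")
      parser.add_argument("--mode", choices=PROMPTS, default="chill")
      args = parser.parse_args()

      SYSTEM_PROMPT = PROMPTS[args.mode]  # swapped in as the LLM's system message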

    • I was looking for a project which would run an LLM-powered character (like Clippy), who would periodically screenshot my screen and comment on my life choices.

      Sadly, the only project I've found was Windows-only.

If it doesn’t find a process that needs roasting or killing for a while, will it see itself as bloatware and commit suicide?

[dead]

  • Great point. In a real kernel, non-determinism is a bug. Here, it's a feature (or at least a known hazard). To answer your question: there is no Ctrl+Z for SIGKILL. Once the LLM decides to kill a process, it's gone. My reasoning for 'rollback' is actually latency. I built in a 'Roasting Phase' where the agent mocks the process for a few seconds before executing the kill. That delay acts as an optimistic lock: it gives me a window to veto the decision if I see it targeting something critical.

    If I'm AFK and it kills my IDE? I treat that as the system telling me to touch grass.
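
    fwiw the 'optimistic lock' is nothing fancier than a countdown you can interrupt, roughly this (sketch; the vetoed hook is a hypothetical stand-in for whatever keybinding the TUI uses):

      import time

      import psutil

      def roast_then_kill(pid: int, roast: str, vetoed, grace_s: float = 5.0) -> None:
          """Announce the roast, wait out the grace window, then SIGKILL unless vetoed."""
          print(f"[{pid}] {roast}")
          deadline = time.monotonic() + grace_s
          while time.monotonic() < deadline:
              if vetoed(pid):          # hypothetical hook: the TUI flips this on a keypress
                  print(f"[{pid}] spared by human veto")
                  return
              time.sleep(0.1)
          try:
              psutil.Process(pid).kill()   # there is no Ctrl+Z for this
          except psutil.NoSuchProcess:
              pass                         # it quit on its own and saved us the trouble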