Comment by zenoprax
13 hours ago
> witr is successful if users trust it during incidents.
> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.
This seems contradictory to me.
The last bit
> supervised by a human who occasionally knew what he was doing.
seems in jest, but I could be wrong. If it were omitted or flagged as actual sarcasm, I would feel a lot better about the project overall. As long as you’re auditing the LLM’s outputs and doing a decent code review, I think it’s reasonable to trust this tool during incidents.
I’ll admit I did go straight to the end of the readme to look for this exact statement. I appreciate that they chose to disclose.
Thank you, yes, I added it in jest and I’m still keeping it for some time. It was always meant to be removed in the future.
If you’re capable of auditing the LLM’s outputs and doing a decent code review, then you don’t need an LLM.
Nobody who was writing code before LLMs existed "needs" an LLM, but they can still be handy. Procfs parsing trivialities are the kind of thing LLMs are good at, although apparently it still takes a human to say "why not use an existing library that solves this, like https://pkg.go.dev/github.com/prometheus/procfs"
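For a sense of scale, the parent-chain walk at the core of such a tool is a handful of lines with that library. A minimal sketch, assuming Go and prometheus/procfs (my own illustration, not witr's actual code):

    // Walk a PID's ancestry up to PID 1 via prometheus/procfs.
    // Sketch only: error handling is abbreviated.
    package main

    import (
        "fmt"
        "os"
        "strconv"

        "github.com/prometheus/procfs"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: ancestry <pid>")
            os.Exit(1)
        }
        pid, err := strconv.Atoi(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "pid must be a number")
            os.Exit(1)
        }
        fs, err := procfs.NewFS("/proc")
        if err != nil {
            panic(err)
        }
        for pid > 0 {
            proc, err := fs.Proc(pid)
            if err != nil {
                break // process vanished mid-walk
            }
            stat, err := proc.Stat()
            if err != nil {
                break
            }
            fmt.Printf("%d %s (ppid %d)\n", stat.PID, stat.Comm, stat.PPID)
            pid = stat.PPID // PID 1 has PPID 0, which ends the loop
        }
    }

The library handles the fiddly /proc/<pid>/stat parsing (comm fields containing spaces and parentheses, races with exiting processes), which is exactly where hand-rolled parsers tend to break.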
Neither do you need an IDE, syntax highlighting, or third-party libraries, yet you use all of them.
There's nothing wrong with a software engineer using LLMs as an additional tool in their toolbox. The problem arises when people stop doing software engineering because they believe the LLM is doing the engineering for them.
"Need" and "can use" are different things.
I wouldn't trust any app that parses /proc to obtain process information (for the reasons in [0]), especially if the machine has been compromised (unless by "incident" the author means something else).
[0] https://news.ycombinator.com/item?id=46364057
Fair enough! That line was meant to be tongue-in-cheek and to be transparent about LLM usage. Rest assured, they were assistants, not authorities.
Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcome rather than process (though hopefully they cared a bit about process/architecture too).
Regardless of code correctness, it's easy enough for malware to spoof process relationships.
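For instance, argv[0] is entirely under the caller's control, so the command line that ps and process-tree tools display need not match the binary actually running. A minimal sketch in Go (my own illustration, nothing from witr's code):

    // Start /bin/sleep but hand it a forged argv, which is what the kernel
    // records verbatim in /proc/<pid>/cmdline.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/sleep", "60")
        // argv[0] masquerades as a kernel thread; argv[1] is the real
        // argument sleep(1) needs.
        cmd.Args = []string{"[kworker/u8:1]", "60"}
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        defer cmd.Process.Kill() // clean up the decoy
        cmdline, err := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", cmd.Process.Pid))
        if err != nil {
            panic(err)
        }
        fmt.Printf("%q\n", cmdline) // "[kworker/u8:1]\x0060\x00", really /bin/sleep
    }

Here /proc/<pid>/comm would still read "sleep", but a process can rewrite its own comm with prctl(PR_SET_NAME), and a parent can exit so its child gets reparented to PID 1, erasing the ancestry such a tool would report.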
I agree, the LLM probably has a much better idea of what's happening than any human