Comment by zenoprax

11 hours ago

> witr is successful if users trust it during incidents.

> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.

This seems contradictory to me.

The last bit

> supervised by a human who occasionally knew what he was doing.

seems to be in jest, but I could be wrong. If it were omitted or clearly flagged as sarcasm, I would feel a lot better about the project overall. As long as you're auditing the LLM's outputs and doing a decent code review, I think it's reasonable to trust this tool during incidents.

I'll admit I went straight to the end of the readme to look for this exact statement. I appreciate that they chose to disclose.

  • Thank you, yes, I added it in jest and am keeping it for some time. It was always meant to be removed eventually.

  • If you're capable of auditing the LLM’s outputs and doing a decent code review then you don't need an LLM.

    • Neither do you need an IDE, syntax highlighting, or third-party libraries, yet you use all of them.

      There's nothing wrong with a software engineer using LLMs as an additional tool in their toolbox. The problem arises when people stop doing software engineering because they believe the LLM is doing the engineering for them.

Fair enough! That line was meant tongue‑in‑cheek, and to be transparent about LLM usage. Rest assured, they were assistants, not authorities.

Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcome rather than process (though hopefully they cared a bit about process/architecture too).

Regardless of code correctness, it's easy enough for malware to spoof process relationships.
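To make that concrete: even without malware, a plain double-fork (the classic daemonization trick) severs the parent link that any process-ancestry tool would walk, because the orphaned grandchild is re-parented to init or the nearest subreaper. A minimal POSIX-only sketch (the function name `observed_reparenting` is mine, not from any project):

```python
import os
import time

def observed_reparenting():
    """Double-fork demo: the grandchild's visible parent changes from the
    intermediate process to init (or a subreaper) once the intermediate exits."""
    r, w = os.pipe()
    mid = os.fork()
    if mid == 0:                          # intermediate process
        mid_pid = os.getpid()             # grandchild inherits this value
        if os.fork() == 0:                # grandchild
            os.close(r)
            while os.getppid() == mid_pid:
                time.sleep(0.01)          # wait until we are re-parented
            with os.fdopen(w, "w") as f:
                f.write(f"{mid_pid} {os.getppid()}")
            os._exit(0)
        os._exit(0)                       # intermediate exits immediately
    os.close(w)
    os.waitpid(mid, 0)                    # reap the intermediate
    with os.fdopen(r) as f:               # blocks until grandchild writes
        before, after = map(int, f.read().split())
    return before, after

if __name__ == "__main__":
    before, after = observed_reparenting()
    print(f"parent before: {before}, after: {after}")
```

On a typical Linux box `after` is 1 (or a subreaper such as a container init), so the grandchild's recorded ancestry no longer points at whatever actually launched it. That's the benign version; malicious code can go further.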