Comment by 112233
7 hours ago
Forbidding the LLM to write comments and docstrings (preferably enforced by a build and commit hook) is one of the best "hacks" for using that thing. The LLM cannot help itself but emit poisonous comments.
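The "enforced by build and commit hook" part could look something like this minimal sketch: a Python checker that flags comments and docstrings, which one might wire into a pre-commit hook. The function name and the hook integration are assumptions, not an existing tool.

```python
import ast
import io
import tokenize


def find_comments_and_docstrings(source: str) -> list[int]:
    """Return sorted line numbers where comments or docstrings appear.

    Hypothetical checker for a no-comments policy; the git hook wiring
    around it is left as an exercise.
    """
    lines = []
    # Comments are found with the tokenizer.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            lines.append(tok.start[0])
    # Docstrings are the first statement of a module, class, or function
    # when that statement is a bare string literal.
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Module, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                lines.append(body[0].lineno)
    return sorted(lines)
```

A pre-commit hook would run this over staged files and reject the commit when the returned list is non-empty.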
And since it's vibe coded, no one knows what the opcodes are. The LLM won't remember. The human has no comments. The human can't trust post-hoc LLM-generated comments because they're poisonous.
Or maybe clone the comments from where it cloned the source.
Meh. No human has written the horrors an LLM produces. At least I have yet to see a codebase like that. Let me attempt a theatrical reenactment:
... and so on. It basically uses comments as a wall to scribble grandiose graffiti about its valiant conquests in following explicit instructions after the fifth repeat and not committing egregious violence against common sense.
I'm guessing "it" is Gemini here? Claude rarely adds comments at that level.
1 reply →
I used to worry that using LLMs to code would let them use my code and train on my hard work. Then I realised how bad my code is, so I'm probably single-handedly holding off an AGI catastrophe.