
Comment by somenameforme

18 hours ago

It has excellent presentation, excess verbosity, and is wholly nonsensical. Read the code. It uses excessive whitespace, doing things like function calls/declarations with one parameter per line, so it's probably like 100 lines of "real" code of mostly tight functions -- the review's presentation and objections make no sense whatsoever.
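To illustrate what I mean (a hypothetical sketch, not the actual code under discussion -- the names here are made up), here's the same trivial function written compactly and then in the one-parameter-per-line style. The logic is identical; the style just inflates the line count:

```python
# Hypothetical example of the formatting style described above.

# Compact: whole signature on one line, 2 lines total.
def upload_file(bucket, key, body, retries=3):
    return (bucket, key, body, retries)

# One parameter per line: same signature, 8 lines total.
def upload_file_verbose(
    bucket,
    key,
    body,
    retries=3,
):
    return (bucket, key, body, retries)
```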

I was able to generate extremely comparable output from ChatGPT by telling it to create a hyper-negative review, engage in endless hyperbole, and focus on danger, threats, and the obvious inexperience of the person who wrote it. Such is the nature of LLMs that it'd happily produce a similar sort of nonsense for even the cleanest and tightest code ever written. I'll just quote its conclusion because LLM verbosity is... verbose.

---

Conclusion: This code is a ticking time bomb of security vulnerabilities, AWS billing horrors, concurrency demons, and maintenance black holes. It would fail any professional code review:

- Security: Fails OWASP Top 10, opens SSRF, IP spoofing, credential leakage

- Reliability: Race conditions, silent failures, unbounded threading

- Maintainability: Spaghetti architecture, no documentation, magic literals

Recommendation: Reject outright. Demolish and rewrite from scratch with proper layering, input validation, secure defaults, IAM roles, structured logging, and robust error handling.

---

Oooo sick burn. /eyeroll

> I was able to generate extremely comparable output from ChatGPT by telling it

Just to check: you know that ChatGPT is built entirely on human writing, right?

Wouldn't it be ironic to claim "what you write looks like what the tool can output, so you must have used the tool" when the tool was built specifically to output stuff that looks like what you write?

Fun fact: anything you or I write looks like ChatGPT too. It would be surprising if it didn't, given that people spent billions and stole truckloads of scraped, unlicensed content, including content created by you and me, to get the tool to do literally just this.