
Comment by aurareturn

10 days ago

This went straight to the top of HN. I don't understand.

The article doesn't offer much value. It's just saying that you shouldn't use an LLM as the business logic engine, because it's nowhere near as predictable as a program that always produces the same output for the same input. Anyone with any experience using ChatGPT for programming should already know this in 2025.

Just get the LLM to implement the business logic, check it, have it write unit tests, review the unit tests, test the hell out of it.
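To make that workflow concrete, here is a minimal sketch (my illustration, not code from the article; the function and test names are hypothetical): the rule may be drafted by an LLM, but after review it runs as ordinary deterministic code, with a unit test pinning its behavior.

    # Hypothetical business rule: members get 10% off orders over $100.
    # An LLM may have drafted this, but it executes as plain Python.
    def order_discount(total_cents: int, is_member: bool) -> int:
        if is_member and total_cents > 100_00:
            return total_cents // 10
        return 0

    # Unit test pinning the behavior: same input, same output, every run;
    # the one guarantee an LLM invoked at runtime cannot give you.
    def test_order_discount():
        assert order_discount(150_00, is_member=True) == 15_00
        assert order_discount(150_00, is_member=False) == 0
        assert order_discount(50_00, is_member=True) == 0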

Why do you expect top-upvoted posts to correlate 1:1 with value? If you look at the most-watched videos on YouTube, the most popular movies, or the top posts of all time on subreddits, the only thing they have in common is that people liked them the most.

The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.
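To illustrate the "API caller / fuzzy interface" pattern (a sketch under my own assumptions, not the article's code): the model's only job is to turn free-form text into a structured intent, and deterministic code executes the logic. call_llm is a hypothetical stand-in for whatever chat-completion client you actually use.

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real chat-completion call."""
        raise NotImplementedError("wire up your LLM client here")

    # The model is a fuzzy interface: free text in, structured intent out.
    def parse_intent(user_message: str) -> dict:
        prompt = (
            "Extract the intent from this support message as JSON with keys "
            '"action" (one of: refund, cancel, status) and "order_id".\n'
            f"Message: {user_message}"
        )
        return json.loads(call_llm(prompt))

    # The business logic stays in ordinary, deterministic code.
    def handle(intent: dict) -> str:
        handlers = {
            "refund": lambda oid: f"refund issued for order {oid}",
            "cancel": lambda oid: f"order {oid} cancelled",
            "status": lambda oid: f"order {oid} has shipped",
        }
        action = handlers.get(intent.get("action"))
        if action is None:
            return "sorry, I couldn't understand that request"
        return action(intent["order_id"])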

Social media upvotes are less frustrating, imo, if you see them as a measure of attention, not a funnel of value. Yes, people like things that give them value, but they also like reading things with a good title.

  • > The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.
    

    I mean, the message is wrong as well. LLMs can provide customer support. In that case, the LLM is the business logic.

Yep, that's exactly what it's saying. I wrote it because people kept asking me how I was getting ChatGPT to do things, and the answer is: I'm not. Not everything is obvious to everyone. As to why it went straight to the top, I think people resonate with the title, and dislike the buzziness around everything being described as an agent.

  • Honestly, I still don't understand the message you're conveying.

    So you're saying that ChatGPT helped you write the business logic, but it didn't write 100% of it?

    Is that your insight?

    Or are you saying it didn't help you write any business logic at all, and that we shouldn't let it help us write business logic either? Is that what you're trying to tell us?

    • > So you're saying that ChatGPT helped you write the business logic, but it didn't write 100% of it?

      ChatGPT didn't write any business logic, and I'm really struggling to see how you got there from reading the article. The message is: don't use LLMs to execute any logic.
