Comment by augment_me
8 hours ago
Completely subjective take, but I feel like 95% of these "tools" are prompt-engineering inventions the authors created around their own biases and needs, with nothing supporting them besides the authors' subjective experience.
I have seen the same idea with processes, pipelines, lists, bullet points, JSON, YAML, trees, and prioritization queues, all for LLM context and instruction alignment. It's like the authors take the structure they're familiar with, go 100% in on it until it provides value for them, and then decide it's the best thing since sliced bread.
I would like, for once, to see some kind of exploration/ablation against other methods. Or even better, a tool that uses your data to figure out your personal bias and structure preference for writing specs, so the value it provides is actually tailored to you.
The way LLMs blur the distinction between code, structured formats, and natural language is ridiculous. Before the AI era, the trend was to increase type safety everywhere. Now we just sling code and natural text around and hope it works.
It's Vibesmaxxing
Nobody knows what to build when everything can be built; there is no moat.
It's like horoscopes for the entirely-too-AI-pilled. Founded on nothing but vibes.
"Don't write prompts like that, do it like this! I swear it's better. Claude says so!"