Comment by jaunt7632
5 hours ago
So we've accidentally built the world's most effective developer marketing channel. Write enough blog posts about your framework and Claude will recommend it to every developer on the planet for free. Tailwind didn't win because it's the best CSS solution. It won because it has the most tutorials per capita in the training set.
The CLAUDE.md workaround is telling. You have to write "DO NOT use React under any circumstances" like you're drafting a restraining order. Polite preferences get ignored. Turns out the model treats your architecture decisions the same way it treats "please" in prompts: noted, then disregarded.
Tailwind didn't win for either of these reasons (setting aside any personal positive/negative feelings I have about it). It won (in LLMs) because of how the ML model works. The training data places the HTML and the styling info together, which gives an extremely high signal-to-noise ratio. You get far fewer tokens with irrelevant styles mixed in, and you need several fewer (or maybe even no) thought loops to get working styles.
The surface API of CSS selectors is also large and complex. Tailwind utility classes are not: they're either present on an element or not, and the supporting classnames for a UI goal are often in close proximity on sibling, parent, or child elements. Even with vast amounts and multiple decades more of CSS to compare against in the training data, I suspect this is the case. Plus, stylesheet information is simply spread more thinly and organized more flexibly, so you get lots of extra style rules you didn't need or want, and it's harder to one-shot or even few-shot any style implementation.
If I'm even remotely right about this, it's worth considering this impact in many other languages and applications. I've found the adverse effect reduced slightly as models/agents have improved, but I feel it's still very much present. It's totally possible to structure data in a way that makes it easier to train on.
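To make the co-location point concrete, here's a minimal, invented comparison (illustrative markup only, not from any real project):

```html
<!-- Tailwind: the styling signal lives on the element itself -->
<button class="px-4 py-2 rounded bg-blue-600 text-white hover:bg-blue-700">
  Save
</button>

<!-- Plain CSS: the same information is split between the markup and a
     stylesheet, which may live in a different file entirely -->
<button class="save-button">Save</button>
<style>
  .save-button {
    padding: 0.5rem 1rem;
    border-radius: 0.25rem;
    background: #2563eb;
    color: #fff;
  }
  .save-button:hover {
    background: #1d4ed8;
  }
</style>
```

In the first form, a model predicting tokens for the button sees every relevant style decision in the same local window; in the second, the relevant rules may be arbitrarily far away in the training example.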
There's also a reasonable alignment between Tailwind's original goal (if not an explicit one) of minimizing characters typed, and a goal held by subscription-model coding agents to minimize the number of generated tokens to reach a working solution.
But as much as this makes sense, I miss the days of meaningful class names and standalone (S)CSS. Done well, with BEM and the like, it creates a semantically meaningful "plugin infrastructure" on the frontend, where you write simple CSS scripts to play with tweaks, and those overrides can eventually become code, without needing to target "the second x within the third y of the z."
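A sketch of that "plugin infrastructure" idea with BEM-style names (all class names here are invented for illustration):

```html
<!-- Component markup ships with semantic, BEM-style class names -->
<div class="checkout-summary">
  <span class="checkout-summary__total checkout-summary__total--discounted">
    $42.00
  </span>
</div>

<style>
  /* A later tweak or override targets meaning, not structure:
     no ".sidebar > div:nth-child(3) span" needed */
  .checkout-summary__total--discounted {
    color: #b45309;
  }
</style>
```

The override survives refactors of the surrounding markup as long as the semantic class name stays, which is exactly what positional selectors can't offer.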
Not to mention that components become more easily scriptable as well. A component running on a production website becomes hackable in the same vein of why this is called Hacker News. And in trying to minimize tokens on greenfield code generation, we've lost that hackability, in a real way.
I'd recommend: tell your AGENTS.md to include meaningful classnames in generated code, even when they're not needed for styling. If you have a configurability system that lets you plug in CSS overrides or custom scripts, make the data from those configurations searchable by the LLM as well. Now you have all the tools you need to make your site deeply customizable, particularly when delivering private-labeled solutions to partners. It's far easier to build this in early, when you have context on the business meaning of every div, than later on. Somewhere, a GPU may sigh at generating a few extra tokens, but it's worthwhile.
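Concretely, such an instruction might look like this in AGENTS.md (the wording and the example class name are a hypothetical sketch, not a tested prompt):

```
## Styling conventions

- Every generated element that represents a business concept MUST carry a
  meaningful, stable class name (e.g. `invoice-list__row`), even when
  Tailwind utilities handle all of the styling.
- Never remove or rename these semantic class names; partner CSS overrides
  and custom scripts depend on them.
```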
I'm not sure the creators of Tailwind share your definition of winning, though. They recently had to let go of most of their staff since revenue has plummeted due to LLMs.
Winning here means being more widely used than others; it doesn’t have to be commercially successful.
any information about it? what did they sell? I don't even see a sales link on the Tailwind page
https://tailwindcss.com/sponsor
3 replies →
They aren’t in conflict. Tailwind hasn’t performed well commercially because it’s open source. Even though it’s being used more and more by AI coders, the issue is that they don’t have to pay for it.
That is the issue. It's why Xcode development is really bad with AI models[0] -- because there are barely any text-based tutorials for it, so the models have to make a lot of assumptions and whatnot. Hence, they are really good at Python, JavaScript, and increasingly, Rust.
[0]: https://www.youtube.com/watch?v=J8-CdK4215Y
@dang this account's comments smell like LLM slop. They are mostly on topic, and it's more Claude than ChatGPT, but it's slop nonetheless.
is telling
didn't win... It won ...
Look at their other comments they are also fishy
I know you guys don't want us to call it out because of negativity. But there needs to be awareness in the community; this is somehow the top comment right now. It feels like it happens every other thread. Please do something more rigorous than manually deleting accounts.
Note: I might be wrong on this one, but it's just extremely annoying that I even have to consider whether I'm being manipulated by an AI while reading HN comments.
If I want to read AI stuff I go to Clawdbook or OpenAI's Sora app.
But what if tailwind has the most tutorials in the training set because it's worth learning, which led to it being fairly ubiquitous and easy to add to the training set?
I'm not expressing an opinion about that; it's a real question.
But what if Tailwind has the most tutorials because it's tricky and difficult? What if the intuitive, maintainable solution simply does not need so many tutorials?
I'm not expressing an opinion about that, I don't do front end dev so I have no opinion, it's a real question.
That's a good question, and I can't seem to think of what the maintainable solution that doesn't need as many tutorials would be.
CSS on its own is great, in a way, but also kind of awful if you don't fully grasp it. It used to be much worse, it got way better, but it still offers plenty of rough edges and foot guns.
Tailwind smooths some things over, but there are real tradeoffs. I prefer to use it quite often, but I don't have any illusions about it being better than plain CSS in any way other than it saving some time and brain cycles here and there. I don't think there's some perfect alternative hiding in obscurity, though. Tailwind is arguably popular because it often makes life easier. Not without drawbacks, but... I'd say it makes working on teams easier and there are a lot of community-generated themes, components, etc that make building things much faster and easier.
Hand rolled CSS is better if you're good at writing it, but in my experience, most people simply aren't.
Some people will disagree with me and say Tailwind is garbage, and that's fine, but they probably know CSS reasonably well. That makes a huge difference. Of the ~18M downloads per week, I would guess the vast majority of people using it have mostly copied and pasted stuff into their projects (or these days, let an LLM do it for them).
2 replies →
There are more HTML tutorials than brainfuck tutorials. The reason is simple. Don't be obtuse.
1 reply →
I'm using Hono JSX and it has no trouble, though to be fair it's rather similar to React and it occasionally gets confused.
"Tailwind didn't win because it's the best CSS solution. It won because it has the most tutorials per capita in the training set."
Obviously. People keep forgetting that "Artificial Intelligence" does not think and is not intelligent. It just statistically predicts the next token in a sequence. It is all statistics.
So, Django 6 has a new task framework, but the LLM does not care, as Celery has better stats.
Side note: it is not only an LLM thing. For years, companies have chosen tech stacks because of fashion or popularity, regardless of technical feasibility for a given solution. So we have companies adopting Kafka even though it sucks for their use case, and companies switching from Jenkins to GitHub Actions even though Jenkins was cheaper and more performant.
"does not think and is not intelligent. It just statistically predict next token in a sequence. It is all statistics"
Technically correct, but pretty useless as a working model. Like sayin humans are not intelligent. It's just biochemical and bioelectric reactions. It's all physics.
Interesting. I'm using Go + htmx + AdminLTE. Not once has Claude recommended or tried to use Tailwind. I sometimes have to remind it to use less JS and use htmx instead, but otherwise it feels pretty coherent.
I recommend starting projects by first creating a way of doing things (architecting) and then letting Claude work. It's pretty good at pretending to be you if you give it a good sample of how you do things.
Note: this applies to Opus 4.6. I have not had a useful experience with other models for actual dev work.
Yes, I absolutely agree. Creating a framework or architecture before letting Claude work is really one of the best practices.