AI is a business model stress test

14 hours ago (dri.es)

Tailwind Labs relied on a weird monetization scheme. Revenue was proportional to the pain of using the framework. A sudden improvement in getting the UI you want, without relying on pre-built templates, killed Tailwind Labs.

There are many initiatives in a similar spot: improving the experience of using Next.js would hurt Vercel. Making GitHub Actions runners more reliable, stable, and economical would hurt Microsoft. Improving access to compute power would hurt Amazon, Microsoft, and Google. Improving control and freedom over your device would hurt Apple and Google.

Why should we be sympathetic to the middleman again?

If suddenly CSS became pleasant to use, Tailwind would be in a rough spot. See the irony?

"Give everything away for free and this people will leave technology", geohot said something like this and I truly appreciate. Technology will heal finally

  • That’s not at all why I bought Tailwind Plus. I bought it (at work) to have a solid collection of typical components and UI patterns, mostly expertly designed with a lot of attention to detail, to use as inspiration and as a shared language with other frontend devs and designers. I rarely (if ever) actually copy pasted any of their HTML or Tailwind styles. It’s mostly used as reference and inspiration. The fact that it’s implemented in Tailwind is mostly irrelevant (Tailwind really isn’t hard to use, especially after your first couple of small projects).

  • > If suddenly CSS became pleasant to use, Tailwind would be in a rough spot.

    CSS is pleasant to use. I know I find it pleasant to use; and I know there are quite a few frontend developers who do too. I didn't pay much attention to tailwind, until suddenly I realized that it has spread like wildfire and now is everywhere. What drove that growth? Which groups were the first adopters of tailwind; how did they grow; when did the growth skyrocket? Why did it not stay as niche as, say, htmx?

    • People like tailwind because it feels like the correct abstraction. It helps you colocate layout and styling, thereby reducing cognitive load.

      With CSS you have to add meaningless class names to your html (+remember them), learn complicated (+fragile) selectors, and memorise low level CSS styles.

      With tailwind you just specify the styling you want. And if using React, the “cascading” piece is already taken care of.
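      To make the colocation point concrete, here is a minimal sketch (hypothetical class names, with plain template strings standing in for JSX):

```javascript
// Two ways to style the same button (hypothetical class names).

// Plain CSS: the markup carries an invented class name ("btn-primary")
// whose actual styles live in a separate stylesheet you have to go find.
function buttonWithCss(label) {
  return `<button class="btn-primary">${label}</button>`;
}

// Tailwind: the styling is spelled out where the markup is, so layout
// and styling are read (and changed) in one place.
function buttonWithTailwind(label) {
  return `<button class="rounded bg-blue-600 px-4 py-2 text-white">${label}</button>`;
}

console.log(buttonWithTailwind('Save'));
```

      With the Tailwind version, restyling the button never requires hunting through a stylesheet; whether that trade-off is "the correct abstraction" is of course the debate of this whole thread.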

      13 replies →

    • Imo CSS is not pleasant to use, but Tailwind is at least as bad and furthermore is bad in addition to the CSS badness which it does not fully replace. It is a mystery to me as well how it got so popular.

      (I know many people disagree, which is fair enough.)

    • Writing CSS manually was never all that pleasant for me, mostly the part about debugging it when it doesn't do what I want.

      So I tried Tailwind and it seemed to help.

      But now that Claude Opus 4.5 is writing all my code, it can write and debug CSS better than I can use Tailwind. So, CSS it is.

      2 replies →

    • I haven’t seen this mentioned much, but Tailwind’s rise closely followed a shift away from runtime CSS-in-JS toward build-time, deterministic styling.

      Many JSX-era libraries (MUI, styled-components, Emotion) generate styles at runtime, which works fine for SPAs but creates real friction for SSR, streaming, and time-to-first-paint (especially for content-heavy or SEO-sensitive domains).

      As frameworks like Next.js, Vue, Svelte, Angular, and now RSC all moved server-first, teams realized they couldn’t scale entire domains as client-only SPAs without performance and crawler issues.

      Tailwind aligned perfectly with that shift: static CSS, smaller runtime bundles, predictable output, and zero hydration coupling. It wasn’t about utility classes. It was about build-time certainty in a server-rendered world :)

    • I used plain CSS for more than a decade and felt the benefits of Tailwind within 10 minutes of getting started. What fueled the growth of Tailwind is that it makes web development palpably easier.

      26 replies →

    • > What drove that growth?

      It is a natural fit with component-based frontend frameworks like React. You keep the styles with the component instead of having to work in two places. And it’s slightly nicer than writing inline styles.

      The core CSS abstraction (separating “content” from “presentation”) was always flimsy but tailwind was the final nail in that coffin.

      Then of course LLMs all started using it by default.

      1 reply →

    • > CSS is pleasant

      So is SQL. To me. But some otherwise rational people have an irrational dislike of sql. Almost like someone seeking to seal a small bruise with wire mesh because bandaids are hard to rip off. The consequence shows with poorly implemented schema-free nosql and bloated orm tools mixed in with sql.

      But some folks just like it that way. And most reasons boil down to a combination of (1) a myopic solution to a hyper specific usecase or (2) an enterprise situation where you have a codebase written by monkeys with keyboards and you want to limit their powers for good or (3) koolaid infused resume driven development.

    • I'll give my guess - it's because of the "fullstack" bullshit.

      I am a backend developer. I like being a backend developer. I can of course do more than that, and I do. Monitoring, testing, analysis, metrics, etc.

      I can do frontend development, of course I can. But I don't like it, I don't approach it with any measure of care. It's something I "unfortunately have to do because someone who is not a developer thought that declaring everyone should be doing everything was a good idea".

      I don't know how to do things properly on the front end, but I definitely can hammer them down to a shape that more or less looks like it should. For me, shit like Bootstrap or Tailwind or whatever is nice. I have to spend less time fiddling with something I think is a waste of my time.

      I love working with people that are proper front end developers for that reason, and I always imagined they would prefer things more native such as plain CSS.

  • > If suddenly CSS became pleasant to use

    Not being sarcastic, but this will never happen. CSS is a perfectly functional interface, but the only way it becomes less annoying is when you abstract it behind something more user-friendly like Tailwind or AI (or you spend years building up knowledge and intuition for its quirks and foibles).

    We have decades of data at this point that fairly conclusively shows that many people find CSS as an interface inherently confusing.

    • I agree. I actually think CSS (and SQL or other “perfectly functional” interfaces) hold some kind of special power when it comes to AI.

      I still feel that the main revolution of AI/LLMs will be in authoring text for such "perfectly functional" text-based interfaces.

      For example, building a “powerful and rich” query experience for any product I worked on was always an exercise in frustration. You know all the data is there, and you know SQL is infinitely capable. But you have to figure out the right UI and the right functions for that UI to call to run the right SQL query to get the right data back to the user.

      Asking the user to write the SQL query is a non-starter. You either build some "UI" for it based on what you think are the main use cases, or go all in and invent a new "query language" that you think (or hope) makes sense to your user. Now you can ask your user to blurb whatever they feel like, and hope your LLM can look at that and your db schema and come up with the "right" SQL query for it.

      1 reply →

    • Things like flexbox have made CSS indescribably better and easier to use than it used to be. It's still bad, but degrees matter a lot.

      As a fullstack dev, I couldn't do pixel-perfect CSS 10 years ago, and today I can. That's a lot of progress.

      1 reply →

    • It's getting better (in a C++ kinda way), certainly, but...

      It's ultimately still driven by matching "random" identifiers (classes, ids, etc.) across semantic boundaries. Usually, the problem is that the result is mostly visual, which makes it disproportionately hard to actually write tests for CSS and make sure you don't break random stuff when you change a tiny thing in your CSS.

      For old farts like me: It's like the Aspect-Oriented Programming days of Java, but you can't really do meaningful tests. (Not that you could do 'negative' testing well in AOP, but even positive testing is annoyingly difficult with CSS.)

      EDIT: Just to add. It's an actually difficult problem, but I consider the "separate presentation from content" idea a bit of a windmill to tilt at. There will always be interplay, and an artificial separation will lead to ... awkward compromises and friction.

  • > killed Tailwind Labs

    They are still around.

    > "Give everything away for free and this people will leave technology"

    This is more interesting, although somewhat generally understood (it can be conflated with people seeing "free" as "cheap" and therefore undesirable). It depends on your definition of longevity, but we certainly have a LOT of free software that has, so far, stood the test of time.

  • > Making GitHub Actions runners more reliable, stable, and economical would hurt Microsoft.

    Can you explain this one a bit? I know some folks who would absolutely increase their spend if Actions runners were better. As it stands I've seen folks trying to move off of them.

    • If they were twice as efficient, you would only need to run half as many and spend less.

    • GH Actions runners had a dubious implementation of sleep that would cause runners to hang at 100% usage for weeks or months. A simple fix was proposed and then neglected for 10 years. The discussion resurfaced recently when Zig abandoned GitHub entirely and criticized this specific issue. A fix was then merged, following an announcement that self-hosted runners will now be charged by the minute. Of course these two facts are totally independent, but yeah, yeah, sure.

      5 replies →

  • You can’t even have dynamic classes: https://tailwindcss.com/docs/detecting-classes-in-source-fil...

    > <div class="text-{{ error ? 'red' : 'green' }}-600"></div>

    --- I find it really crazy that they thought this would be a good idea. I wonder how much false-positive CSS gets added, given their approach of "trying to match classes": if you use random strings like bg-…, it will add some CSS. I think it's ridiculous, and it tells me that people who use this can't be very serious about it and that it won't work in large projects.
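    For what it's worth, the workaround their docs push is to always write complete class names and select between whole strings, so the scanner can see each class verbatim. A rough sketch (hypothetical mapping):

```javascript
// Tailwind's scanner looks for complete class names appearing verbatim in
// source files, so interpolated fragments like `text-${'...'}-600` can
// never be detected. The workaround is to map state to full class strings:
const statusClasses = {
  error: 'text-red-600',
  success: 'text-green-600',
};

function statusClass(status) {
  // Unknown states fall back to an empty string.
  return statusClasses[status] ?? '';
}

console.log(statusClass('error'));
```

    It works, but it also concedes the point: you end up re-inventing the indirection (a named mapping) that a plain CSS class would have given you.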

    ---

    > Using multi-cursor editing: When duplication is localized to a group of elements in a single file, the easiest way to deal with it is to use multi-cursor editing to quickly select and edit the class list for each element at once

    Instead of defining a variable and reusing it, you just use multiple cursors. Bad suggestion again.

    ---

    > If you need to reuse some styles across multiple files, the best strategy is to create a component

    But in the benefits section it says

    > Your code is more portable — since both the structure and styling live in the same place, you can easily copy and paste entire chunks of UI around, even between different projects.

    ---

    > Making changes feels safer — adding or removing a utility class to an element only ever affects that element, so you never have to worry about accidentally breaking something on another page that's using the same CSS.

    CSS-in-JS fixed this a long time ago.

    ---

    <div class="mx-auto flex max-w-sm items-center gap-x-4 rounded-xl bg-white p-6 shadow-lg outline outline-black/5 dark:bg-slate-800 dark:shadow-none dark:-outline-offset-1 dark:outline-white/10"> <img class="size-12 shrink-0" src="/img/logo.svg" alt="ChitChat Logo" /> <div> <div class="text-xl font-medium text-black dark:text-white">ChitChat</div> <p class="text-gray-500 dark:text-gray-400">You have a new message!</p> </div> </div>

    That's how many classes you need to learn just to use it.

  • I know I’m echoing others’ responses but CSS in 2026 is incredible, easy to use and beautiful.

    I find the tailwindcss approach inexcusable and unmaintainable.

  • I think you live in a conspiracy world.

    > Improving accessibility to compute power would hurt Amazon, Microsoft and Google.

    Yeah, if they were not competing against each other.

    > If suddenly CSS became pleasant to use, Tailwind would be in a rough spot. See the irony?

    Honestly, I don't. If people suddenly adopted a healthier lifestyle, doctors, or at least dentists, would be in a rough spot.

    See the irony? Well, again I don't.

  • > Tailwind Labs relied on a weird monetization scheme. Revenue was proportional to the pain of using the framework.

    Really? To me, Tailwind seemed like the pinnacle of how anyone here would say "open source software" should function: provide solid, truly open source software, and make money from consulting, from helping others use it, and from selling custom-built solutions around it. The main sin of Tailwind was assuming that type of business could scale to a "large business" structure as opposed to a "single dev"-type project. By "single dev"-type I don't mean literally one guy, but rather a very lean structure that isn't corporate or company-like.

    Vercel (and Redis Labs, Mongo, etc.) are different because they are in the "we can run it for you" business, which is another "open source" model I have dabbled in during my career: the honest and ethical position is to provide open source software, then offer to host it for people who don't want to self-host, and charge for that.

    • From the developer's perspective, not much changes, despite the organizational structures in this comparison being completely different (a trillion-dollar company vs. 10 individual contributors).

      Tailwind Labs' revenue stream was tied to documentation visits; that was the funnel. The author's argument is that this revenue stream was destroyed by a slight quality-of-life improvement (having LLMs fill in CSS classes). Tailwind Labs benefits from a) documentation visits and b) the inability to implement a desired layout using the framework (with CSS being unpleasant). So there is a conflict of interest between the developer expecting the best possible experience and the main revenue stream. Given that a slight, accidental improvement in quality of life and autonomy for users destroyed the initiative's main revenue stream, it would be fair to say it doesn't just "seem like" a conflict of interest.

      I definitely disagree with this being the "pinnacle" of how open source should function, but I won't provide examples because it is beside the point. I will point out that the FSF has been fine for many decades now, and that a foundation with a completely different structure, like the Zig foundation, seems to be OK with somewhat proportional revenue (orders of magnitude less influence, adoption, and users; maybe 10-20x less funding).

    • Wasn't the Tailwind team just a few people? I might be misremembering, but my impression was a team under 10 people, which is tiny.

  • The pain and opportunity will move elsewhere.

    Tailwind is a handy tool that deserves support and sustainability.

  • Your synthesis points to software in the public interest. Governments need to start forking projects and guiding them through maturity, the same as other public utilities.

In my opinion, LLMs are intellectual property theft, just as if I started distributing copies of books. This substantially reduces the incentive for the creation of new IP.

All written text, artwork, etc. needs to come imbued with a GPL-style license: if you train your model on this, your weights and training code must be published.

  • I think there is a real issue here, but I do not think it is as simple as calling it theft in the same way as copying books. The bigger problem is incentives. We built a system where writing docs, tutorials, and open technical content paid off indirectly through traffic, subscriptions, or services. LLMs get a lot of value from that work, but they also break the loop that used to send value back to the people and companies who created it.

    The Tailwind CSS situation is a good example. They built something genuinely useful, adoption exploded, and in the past that would have meant more traffic, more visibility, and more revenue. Now the usage still explodes, but the traffic disappears because people get answers directly from LLMs. The value is clearly there, but the money never reaches the source. That is less a moral problem and more an economic one.

    Ideas like GPL-style licensing point at the right tension, but they are hard to apply after the fact. These models were built during a massive spending phase, financed by huge amounts of capital and debt, and they are not even profitable yet. Figuring out royalties on top of that, while the infrastructure is already in place and rolling out at scale, is extremely hard.

    That is why this feels like a much bigger governance problem. We have a system that clearly creates value, but no longer distributes it in a sustainable way. I am not sure our policies or institutions are ready to catch up to that reality yet.

    • > We have a system that clearly creates value, but no longer distributes it in a sustainable way

      The same thing happened (and is still happening) with news media and aggregation/embedding like Google News or Facebook.

      I don't know if anyone has found a working solution yet. There have been some laws passed and licensing deals [1]. But they don't really seem to be working out [2].

      [1] https://www.cjr.org/the_media_today/canada_australia_platfor...

      [2] https://www.abc.net.au/news/2025-04-02/media-bargaining-code...

      1 reply →

    • > I do not think it is as simple as calling it theft in the same way as copying books

      Aside from the incentive problem, there is a kind of theft, known as conversion: when you were granted a license under some conditions, and you went beyond them - you kept the car past your rental date, etc. In this case, the documentation is for people to read; AI using it to answer questions is a kind of conversion (no, not fair use). But these license limits are mostly implicit in the assumption that (only) people are reading, or buried in unenforceable site terms of use. So it's a squishy kind of stealing after breaching a squishy kind of contract - too fuzzy to stop incented parties.

    • There will be no royalties, simply make all the models that trained on the public internet also be required to be public.

      This won't help tailwind in this case, but it'll change the answer to "Should I publish this thing free online?" from "No, because a few AI companies are going to exclusively benefit from it" to "Yes, I want to contribute to the corpus of human knowledge."

      1 reply →

    • It's not as simple as calling it theft, but it is simply theft, plus the other good points you made.

    • > We have a system that clearly creates value, but no longer distributes it in a sustainable way.

      It does not "create value"; it harvests value and redirects the proceeds toward its owners. The business model is a middleman that arbitrages the content by separating it from its delivery.

      Software licensing has been broken for two decades. That's why free software isn't financially viable for anybody except a tiny minority, when it should be. The entire industry has been operating on charity, and the rich megacorporations have decided they're no longer going to be charitable.

  • I stopped writing open source projects on GitHub, because why put a bunch of work into something for others to train on, with no regard for the original projects?

    • I don't understand this mindset. I solve problems on stackoverflow and github because I want those problems to stay solved. If the fixes are more convenient for people to access as weights in an LLM... who cares?

      I'd be all for forcing these companies to open source their models. I'm game to hear other proposals. But "just stop contributing to the commons" strikes me as a very negative result here.

      We desperately need better legal abstractions for data-about-me and data-I-created so that we can stop using my-data as a one-size-fits-all square peg. Property is just out of place here.

      4 replies →

    • That's why I don't add comments to my commits; I don't want them to know the reason for the changes.

    • Good, we don’t want code that people are possessive of in the software commons. If you are concerned about what people do with your output, then nobody should touch your output; too big a risk of drama.

      We don’t own anything we release to the world.

  • > if you train your model on this, your weights and training code must be published.

    The problem here is enforcement.

    It's well known that AI companies simply pirated content in order to train their models. No amount of license really helps in that scenario.

    • The problem here is the "so what?"

      Imagine OpenAI is required by law to list their weights on huggingface. The occasional nerd with enough GPUs can now self host.

      How does this solve any tangible problems with LLMs regurgitating someone else's work?

      2 replies →

  • > if you train your model on this, your weights and training code must be published.

    This feels like the simplest & best single regulation that can be applied in this industry.

    • I feel that, to be consistent, the output of that model would also have to be under that same open license.

      I can see this being extremely limiting for training data, as only "compatibly" licensed data could be packaged together to train each model.

      1 reply →

  • We already have more IP than any human could ever consume. Why do we need to incentivize anything? Those who are motivated by the creation itself will continue to create. Those who are motivated by the possibility of extracting rent may create less. Not sure that's a bad thing for humanity as a whole.

  • By your analogy, human brains are also IP thieves, because they ingest what's available in the world, mix and match it, and synthesize slightly different IP based on it.

  • Do I have to publish my book for free because I got inspiration from 100's of other books I read during my life?

    • Humans are punished for plagiarism all the time. Myriad examples exist of students being disenrolled from college, professionals being fired, and personal reputations tarnished forever.

      When a LLM is trained on copyright works and regurgitates these works verbatim without consent or compensation, and then sells the result for profit, there is currently no negative impact for the company selling the LLM service.

    • The issue to me is that I, or someone else, bought those books. Or, in the case of local libraries, the authors got money when I borrowed a copy.

      And I cannot copy-paste myself to discuss with thousands or millions of users at a time.

      To me the clear solution is to make a large payment to each author of material used in training, per model trained, say in the 10k to 100k range.

    • If your book reproduces something 95% verbatim, you won't even be able to publish it.

  • >This substantially reduces the incentive for the creation of new IP.

    Not all, but only some kinds of IP.

    It won't reduce the incentive for the kind that is created for the sake of creating it and nothing else.

    • The psychology behind "creating it for the sake of creating it" can also be significantly changed by seeing someone then take it and monetize it without so much as a "thank you".

      It's come up quite often even before AI when people released things under significantly looser licenses than they really intended and imagined them being used.

  • > This substantially reduces the incentive for the creation of new IP.

    You say that like it's a bad thing...

  • This is one way to look at it.

    The other way is to argue that LLMs democratize access to knowledge: anyone has access to everything ever written by humanity.

    Crazy impressive if you ask me.

    • If the entities democratizing access weren't companies worth hundreds of billions of dollars with a requirement to prioritize substantial returns for their investors, I'd agree with you!

    • Yes, it seems that way now.

      The first one's free.

      After you're hooked, and don't know how to think any more for yourself, and all the primary sources have folded, the deal will be altered.

  • If I was able to memorize every pixel value to reconstruct a movie from memory, would that be theft?

  • > imbued with a GPL-style license

    GPL died. Licenses died.

    Explanation: LLMs were trained on GPL code too. The fact that all the previously paranoid businesses that used to warn SWEs not to touch GPL code with a ten-foot pole are now fearlessly embracing LLM output means that, de facto, they consider an LLM their license-washing machine. Courts are going to rubber-stamp it because billions of dollars, etc.

  • > This substantially reduces the incentive for the creation of new IP

    And as a result of this, the models will start consuming their own output for training. This will create new incentives to promote human generated code.

  • Education can be viewed as intellectual property theft. There have been periods in history when it was. "How to take an elevation from a plan" was a trade secret of medieval builders and only revealed to guild members. How a power loom works was export-controlled information in the 1800s, and people who knew how a loom works were not allowed to emigrate from England.

    The problem is that LLMs are better than people at this stuff. They can read a huge quantity of publicly available information and organize it into a form where the LLM can do things with it. That's what education does, more slowly and at greater expense.

  • Does anyone know of active work happening on such a license?

    • Writing the license is the easy part, the challenge is in making it legally actionable. If AI companies are allowed to get away with "nuh uh we ran it through the copyright-b-gone machine so your license doesn't count" then licenses alone are futile, it'll take lobbying to actually achieve anything.

      2 replies →

    • I got an

      "LLM Inference Compensation MIT License (LLM-ICMIT)"

      out of it: a license that is MIT-compatible but requires LLM providers to pay after inference, though it only restricts online providers, not self-hosted models.

    • I can ask Claude to generate you one right now. It will be just a bunch of bullshit words no matter how much work you put into writing them down (like any other such license).

  • In my opinion information wants to be free. It's wild to me seeing the tech world veer into hyper-capitalism and IP protectionism. Complete 180 from the 00s.

    IMO copyright laws should be rewritten to bring copyright in line with the rest of the economy.

    Plumbers are not claiming use fees for the pipes they installed a decade ago. A doctor isn't getting paid by a 70-year-old for having saved them after a car accident at age 50.

    Why should intellectual property authors be given extreme ownership over behavior then?

    In the Constitution, Congress is allowed to protect works with copyright "for a limited time".

    The status quo of the life of the author plus 70 years means works can stay copyrighted for many people's entire lives. In effect, unlimited protection.

    Why is society on the hook to preserve a political norm that materially benefits so few?

    Because the screen tells us "the end is nigh!" and "a giant foot will crush us!" if we move on from old America. A sad and pathetic acquiescence to propaganda.

    My fellow Americans; must we be such unserious people all the time?

    This hypernormalized, finance-engineered "I am my job! We make line go up here!" culture is a joke.

    • Excuse me, but even if you accept the principle that "information wants to be free", the actual outcome of LLMs is the opposite of democratizing information and access. They completely centralize access, censorship, and profits in the hands of a few megacorporations.

      It is completely against the spirit of information wants to be free. Using that catch phrase in protection of mega corps is a travesty.

      2 replies →

    • > In my opinion information wants to be free.

      Information has zero desires.

      > It's wild to me seeing the tech world veer into hyper-capitalism and IP protectionism.

      Really? Where have you been the last 50 years?

      > Plumbers are not claiming use fees from the pipes they installed a decade ago.

      Plumbers also don't discount the job on the hopes they can sell more, or just go around installing random pipes in random locations hoping they can convince someone to pay them.

      > Why should intellectual property authors be given extreme ownership over behavior then?

      The position that cultural artifacts should enter the commons sooner rather than later is not unreasonable by any means, but most software is not cultural; it requires heavy maintenance for the duration of its life, and it is obsolete, gone, and forgotten well before the time frame you are discussing.

  • Commercialization may be a net good for open source, in that it helps sustain the project’s investment, but I don’t think that means that you’re somehow entitled to a commercial business just because you contributed something to the community.

    The moment Tailwind becomes a for-profit, commercial business, they have to duke it out just like anyone else. If the thing you sell is not defensible, it means you have a brittle business model. If I’m allowed to take Tailwind, the open source project, and build something commercial around it, I don’t see why OpenAI or Anthropic cannot.

    • The difference is that they are reselling it directly. They charge for inference that outputs tailwind.

      It's fine to have a project that generates HTML/CSS as long as the users can find the docs for the dependencies, but when you take away the docs and stop giving real credit to the creators, it starts feeling more like plagiarism, and that is what's costing Tailwind here.

      1 reply →

  • Intellectual property was kind of a gimmick to begin with, though. Let's not pretend copyright and patents ever made any sense.

    • They exist to protect the creator/inventor and allow them to get an ROI on their invested time and effort. But today the abundance of content, especially content that can be generated by an LLM, completely breaks this. We're overwhelmed with choice. Content has been commoditized. People will need to come to grips with that and find other ways to get an ROI.

      The article does provide a hint: "Operate". One needs to get paid for what LLMs cannot do. A good example is Laravel. They built services like Forge, Cloud, Nightwatch around open source.

      1 reply →

  • The idea of being able to “steal” ideas is absolutely silly.

    Yeah we’ve got a legal system for it, but it always has been and always will be silly.

  • In my opinion, IP is dead. Strong IP died in 2022, along with the Marxist labor theory of value, from which IP derives its (hypothetical) value. It no longer matters who did what, when, and how. The only thing that matters is that it exists, and that it can be consumed, for no cost, forever.

    IP is the final delusion of 19th-century thinking. It was crushed when we could synthesize anything at little cost and little effort. It turns out the hard work had to be done only once, and we could automate it to infinity, forever.

    Hold on to 19th-century delusions if you wish; the future is accelerating, and you are going to be left behind.

I wish I could upvote this more than once. The author gets it, you have to sell outcomes. Not features. Seems like every open source company that doesn’t market an outcome to buyers will face a similar threat. And this particular go to market strategy was “brittle” before AI.

> AI didn't kill Tailwind's business. It stress tested it.

The earthquake didn’t destroy the building — it stress tested it.

  • In software (and business in general!), innovation is expected. If you built a building in San Francisco that couldn't handle a relatively minor earthquake you could argue it would be a 'stress test'.

    • If you’re right that AI’s impact on business is akin to a ‘relatively minor earthquake in San Francisco,’ a lot of investors are going to be really fucking bummed out.

"The value got extracted, but compensation isn't flowing back. That bothers me, and it deserves a broader policy conversation.

What I keep coming back to is this: AI commoditizes anything you can fully specify. [...]

So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again."

This seems like a useful framing to be aware of, generally.

The internet has always kinda run on the ambiguity of "does the value flow back". A quote liberated from this article itself; all the content that reporters produce that's laundered back out through twitter; 12ft.io; torrents; early youtube; late youtube; google news; apache/mit vs gnu licenses; et cetera..

I see "hackers" in these comments are now advocating to make "criminal contempt of business model" a serious thing, instead of a mere meme used to describe draconian copyright and patent laws.

Companies providing AI services should offer ads for the things the AI is using. And I don't mean "Tailwind could pay Google to advertise in Gemini", I mean "Google should be clearly and obviously linking back to Tailwind when it uses the library in its output"

They already do this sort of thing inside outputs from Deep Research, and possibly outside it. But the attribution should be less muted, inline, and recessed, and more "I have used Tailwind! Check out how to make it better [HERE](link)"

They should be working with the owners of these projects, especially ones with businesses, and directing traffic to them. Not everyone will go, and that's fine, but at least make the effort. The infrastructure is in place already.

And yes, right now this would not be across-the-board for every single library, but maybe it could be one day?

It's the same problem news sites have been facing for years because of Google News and Facebook. Their solution so far has been country-level bans on it (Canada).

  • But why would you click the Tailwind link if you're getting correct answers? You don't need documentation or consulting.

    Google and Facebook couldn't provide full news articles because of copyright law. They just showed the headline and summary provided by the news websites (and still eventually got sued for showing the summaries).

  • The number one rule of these apps is don’t link outside the app (because then the user will stop their session).

  • I feel that the battle is already lost when we aren't arguing that the AI service must get a license from the copyright holder like everybody else, but instead just arguing how many crumbs is the AI service morally obliged to throw back.

To call it a stress "test" is dismissive.

A stress test on a bank doesn't actually erase the revenue and financially jeopardize the bank.

Implementing layoffs is not a stress test.

This feels like the OpenSSL problem where we do probably need some kind of industry organization to maintain these things. There’s a chicken and the egg problem that these AI companies need someone to keep maintaining tailwind if they want it to keep working in their prompts.

Maybe that limits the ability of the head of Tailwind to run their own business and make more income, but something's gotta give.

> So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again.

Sounds like even more incentive for "managing" problems and creating business models around them instead of solving them.

Calling it a stress test seems a bit off. Would we say that invention of lightbulbs was a "stress test" for candle related business models? Or would we just say that business models had to change in response to current events.

Tailwind Plus is available for a one-time payment that provides lifetime access to current and future components. With AI cutting off the flow of new buyers, revenue shrivels up much quicker than it would have under a recurring subscription.

Killed by AI or competitors offering Tailwind templates and UI kits at a much lower price or for free?

The root of the issue is that Tailwind was selling something that people can now recreate a bespoke version of in mere minutes using a coding agent. The other day I vibe coded a bespoke dependabot/renovate replacement in an hour. That was way easier than learning any of these tools and fighting their idiosyncrasies that don’t work for me. We no longer need Framer because you can prompt a corporate website faster than you can learn Framer. It is, fortunately or unfortunately, what it is and we all have to adapt.

I want to be clear, it sucks for Tailwind for sure and the LLM providers essentially found a new loophole (training) where you can smash and grab public goods and capture the value without giving anything back. A lot of capitalists would say it’s a genius move.
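A bespoke dependency-update checker of the sort mentioned above really can be tiny. A minimal sketch, with hard-coded "latest" data (a real tool would query a registry API such as PyPI's JSON endpoint; the package names and versions here are purely illustrative):

```python
# Hypothetical sketch of a bespoke dependabot/renovate-style check --
# not the commenter's actual tool. It compares pinned versions against
# a known-latest mapping; a real tool would fetch "latest" from a
# package registry instead of hard-coding it.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '1.2.3' into (1, 2, 3) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

def outdated(pinned: dict[str, str], latest: dict[str, str]) -> dict[str, str]:
    """Return packages whose pinned version trails the latest release."""
    return {
        name: latest[name]
        for name, version in pinned.items()
        if name in latest and parse_version(latest[name]) > parse_version(version)
    }

# Illustrative data: only requests trails its latest release.
pinned = {"requests": "2.28.0", "flask": "3.0.0"}
latest = {"requests": "2.31.0", "flask": "3.0.0"}
print(outdated(pinned, latest))  # → {'requests': '2.31.0'}
```

The point isn't that this replaces those tools' full feature sets, just that the core loop is small enough to rebuild to your own taste in an afternoon.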

Framing it as a "conduit" disruption might make a lot of assumptions about the fundamental economic value of software in the future. In a world (whether near term or long term) where you can just ask the computer to make whatever software you want, what are the economics of retailing/licensing any software at all? Open source or otherwise?

So AI is attempting to replace SAP as the traditional way of testing whether your company is strong enough?

> Value is shifting to operations: deployment, testing, rollbacks, observability. You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.

I agree somewhat but eventually these can be automated with AI as well.

  • Unless you replace the entire workforce, you'd be surprised how much organizational work and how many soft skills are involved in running infrastructure at scale.

    Like sure, there is a bunch of stuff, like monitoring and alerting, that tells us a database is filling up its disk. This is already automated. It could also have automated remediation using tech from the 2000s and some simple rule-based systems (so you can understand why they misbehaved, instead of entirely opaque systems that just do whatever).

    The thing is though, very often the problem isn't the disk filling up or fixing that.

    The problem is rather figuring out what silly misbehavior the devs introduced, if a PM had a strange idea they did not validate, if this is backed by a business case and warrants more storage, if your upstream software has a bug, or whatever else. And then more stuff happens and you need to open support cases with your cloud provider because they just broke their API to resize disks, ...

    And don't even get me started on trying to organize access management with a minimally organized project consulting team. Some ADFS config resulting from that is the trivial part.

  • If "99.95% uptime on Black Friday", and "keeping a site secure, updated, and running" can ever be automated (by which I mean not a toy site and not relying on sheer luck), not only 99.99% of people in IT are out of a job, but humans as intelligent beings are done. This is such a doomsday scenario that there's not even a point in discussing it.

  • Care to provide a prompt that leads to a coding agent achieving 99.95% uptime on Black Friday, as an example?
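For what it's worth, the "simple rule-based systems" described a few comments up really are this small. A hypothetical sketch (rule names, thresholds, and remediation strings are made up, not a real ops setup): each rule is a transparent threshold check, so you can always see exactly why it fired.

```python
# Minimal 2000s-style rule-based remediation: explicit conditions and
# human-readable actions, in contrast to an opaque system that "just
# does whatever". All rules and metrics below are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # fires when the metric breaches
    action: str                        # human-readable remediation step

def evaluate(rules: list[Rule], metrics: dict) -> list[str]:
    """Return the remediation actions for every rule that fired."""
    return [f"{r.name}: {r.action}" for r in rules if r.condition(metrics)]

rules = [
    Rule("disk-usage", lambda m: m["disk_pct"] >= 90, "rotate logs, page on-call"),
    Rule("replication-lag", lambda m: m["lag_s"] > 30, "pause batch writers"),
]

print(evaluate(rules, {"disk_pct": 94, "lag_s": 5}))
# → ['disk-usage: rotate logs, page on-call']
```

As the thread notes, the hard part was never this loop; it's deciding whether the disk filling up reflects a dev bug, an unvalidated PM idea, or a genuine business case for more storage.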

I'd note a couple of things:

Not to nitpick but if we are going to discuss the impact of AI, then I'd argue "AI commoditizes anything you can specify." is not broad enough. My intuition is "AI commoditizes anything you can _evaluate/assess_." For software automation we need reasonably accurate specifications as input and we can more or less predict the output. We spend a lot of time managing the ambiguity on the input. With AI that is flipped.

In AI engineering you can move the ambiguity from input to the output. For problems where there is a clear and cheaper way of evaluating the output the trade-off of moving the ambiguity is worth it. Sometimes we have to reframe the problem as an optimization problem to make it work but same trade-off.

On the business model front: [I am not talking specifically about Tailwind here.] AI is simply amplifying systemic problems most businesses just didn't acknowledge for a while. SEO died the day Google decided to show answer snippets a decade ago. Google as a reliable channel died the day Google started Local Services Advertisement. Businesses that relied on those channels were already bleeding slowly; AI just made it sudden.

On the efficiency front, most enterprises could have been so much more efficient if they could actually build internal products to manage their own organizational complexity. They just couldn't, because money was cheap so the ROI wasn't quite there, and even when the ROI was there, most of them didn't know how to build a product for themselves. Just saying "AI first" is making the ROI work, for now, so everyone is talking about AI efficiency. My litmus test is fairly naive: if you are growing and you found AI efficiency then that's great (e.g. FB), but if you're not growing and the only thing AI could do for you is "efficiency", then there is a fundamental problem no AI can fix.

  • > if you are growing and you found AI efficiency then that's great (e.g. FB), but if you're not growing and the only thing AI could do for you is "efficiency", then there is a fundamental problem no AI can fix.

    Exactly. "Efficiency" is nice to say in a vacuum, but what you really need is quality (all-round) and an understanding of your customer/market.

> You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.

Uh, yeah you can. There's a whole DevOps ecosystem of software and cloud services (accessible via infrastructure-as-code) that your agents can use to do this. I don't think businesses that specialize in ops are safe from downsizing.

  • Yep - exactly. Ops isn't immune to LLMs stealing your customers. Given that most of the "open source product with premium hosting" models are just reselling hyperscaler compute at a huge markup, customers are going to realize pretty quickly that they can use an LLM to set up some basic devops and get the same uptime. Most of these companies are offering a middleman service that becomes a bad deal the moment the customer has access to expertise they previously lacked.

    I also think he's glossing over the fact that one of the reasons why companies choose to pay for "ops" to run their software for them is because it's built by amateurs or amateurs-playing-professional and runs like shit. I happen to know this first hand from years of working at a company selling hosting and ops for the exact same CMS that Dries' business hosts (Drupal, a PHP-based CMS) and the absolute garbage that some people are able to put together in frameworks like Wordpress and Drupal is truly astounding. I'm not even talking about the janky local businesses where their nephew who was handy with computers made them a Wordpress site - big multinational companies have sites in these frameworks that can barely handle 1x their normal traffic and more or less explode at 1.5x.

    The business of hosting these customers' poorly optimized garbage remains a big business. But we're entering an era where the people who produce poorly optimized software have a different path to take, rather than throwing it to a SaaS platform that can, through sheer force of will, make their lead-weight airplane fly. They can spend orders of magnitude less money paying an LLM to make the software actually not run like shit in the first place. Throwing scaling at the problem of 99.95% uptime is a blunt instrument that only works if the person paying doesn't have the time, money, or knowledge to do it themselves.

    Companies like these (including the one I work for currently) are absolutely going to get squeezed from both directions. The ceiling is coming down as more realize they can do their own devops, and the floor is rising as customer code quality gets better. Eventually you have to try your best to be 3 ft tall instead of 6.

I know for a fact that all SOTA models have Linux source code in them, intentionally or not, which means they should follow the GPL license terms and open-source the parts of the models that constitute derivative works of it.

Yes, this is indirectly hinting that during training the GPL-tainted code touches every single floating-point value in a model, making it a derivative work - even the tokenizer isn't immune to this.

  • > the tokenizer isn't immune to this

    A tokenizer's set of tokens isn't copyrightable in the first place, so it can't really be a derivative work of anything.

    • The GPL, however, does put restrictions on it, even the tokenizer. It was specifically crafted so that even if you do not have any GPL-licensed source code in your project, if it was built on top of GPL code you are still bound by the GPL's limitations.

      The only reason user mode is not affected is that there is an explicit exclusion for it, and only via the defined communication protocol (the syscall boundary). If you go around it, or attempt to put a workaround in the kernel, it still violates the license. The point is: it is very restrictive.

      4 replies →

  • When you say “in” them, are you referring to their training data, or their model weights, or the infrastructure required to run them?

One of the biggest shortcomings of Open Source was that it implicitly defaulted to a volunteer model and so financing the work was always left as an exercise for the reader.

Hence (as TFA points out) open source code from commercial entities was just a marketing channel and a source of free labor... err, community contributions... for auxiliary offerings that actually made money. This basic economic drive is totally natural, but it has repeatedly created dynamics that lead to suboptimal behavior and controversy.

For instance, a favorite business model is charging for support. Another one was charging for a convenient packaging or hosting of an “open core” project. In either case, the incentives just didn’t align towards making the software bug-free and easily usable, because that would actively hamper monetization. This led to instances of pathological behavior, like Red Hat futzing with its patches or pay-walling its source code to hamper other Linux vendors.

Then there were cases where the "open source" branding was used to get market-share, but licenses restricted usage in lucrative applications, like Sun with Java. But worse, often a bigger fish swooped in to take the code, as they were legally allowed to, and repackage it in their own products undercutting the original owners. E.g. Google worked around Sun's licensing restrictions to use Java completely for free in Android. And then ironically Android itself was marketed as "open source" while its licensing came with its own extremely onerous restrictions to prevent true competition.

Or all those cases when hyperscalers undercut the original owners’ offerings by providing open source projects as proprietary Software as a Service.

All this in turn led to all sorts of controversies, like lawsuits or companies rug-pulling their communities with a license change.

And aside from all that, the same pressures regularly led to the “enshittification” of software.

Open Source is largely a socialist (or even communist) movement, but businesses exist in a fundamentally capitalistic society. The tensions between those philosophies were inevitable. Socialists gonna socialize, but capitalists gonna capitalize.

With AI, current OSS business models may soon be dead. And personally I would think, to the extent they were based on misaligned incentives or unhealthy dynamics, good riddance!

Open Source itself will not go away, but it will enter a new era. The cost of code has dropped so much that monetizing will be hard. But by the same token, having invested far fewer resources in creating it, people will be encouraged to release their code for free. A lot of it will be slop, but the quantity will be overwhelming.

It’s not clear how this era will pan out, but interesting times ahead.

> You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.

This is completely wrong. Agents will not just be able to write code, as they do now, but will also be able to handle operations and security, continually checking and improving the systems, tirelessly.

  • And someday we will have truly autonomous driving cars, we will cure cancer, and humans will visit Mars.

    You can't prompt this today. Are you suggesting it might come literally tomorrow? In 10 years? 30? At that unknown time, will your comment become relevant?

  • I'm working on a project now and what you're saying is already true. I have agents that are able to handle other things apart from code.

    But these are MY agents. They are given access to MY domain knowledge in the way that I configured. They have rules as defined by ME over the course of multi-week research and decision making. And the interaction between my agents is also defined and enforced by me.

    Can someone come up with a god-agent that will do all of this? Probably. Is it going to work in practice? Highly unlikely.

  • So you think a statement about the current state of things is wrong because you believe that sometime in the future agents are going to magically do everything? Great argument!

  • To be able to do this requires perfect domain knowledge AND environment knowledge AND the ability to think deeply about logical dominoes (event propagation through the system - you know, the small stuff that crashes Cloudflare for the entire planet, for example).

    Please wake me up when Shopify lets a bunch of agentic LLMs run their backends without human control and constant supervision.

    • The extreme here is thinking machines will do everything. The reality is likely far closer to fewer humans being needed.

> Open Source was never the commercial product. It's the conduit to something else.

This is correct. If you open source your software, then why are you mad when companies like AWS, OpenAI, etc. make tons of money?

Open Source software is always a bridge that leads to something else to commercialize. If you want to sell software, then pick Microsoft's model and sell it as closed source. If you're upset about not making money to sustain your open source project, then pick the right license for your business.

  • > then pick the right license for your business

    That's one of the issues with AI, though; strongly copylefted software suddenly finds itself unable to enforce its license because "AI" gets a free pass on copyright for some reason.

    Dual-licensing open source with business-unfriendly licensing used to be a pretty good way to sell software, but thanks to the absurd legal position AI models have managed to squeeze themselves into, that stopped in an instant.

      • Open source software helped to dramatically reduce the cost of paid software, because there is now a minimum bar of functionality you have to produce in order to sell software.

      And, in many cases, you had to produce that value yourself. GPL licensing lawsuits ensured this.

      AI extracting value from software in such a way that the creators no longer can take the small scraps they were willing to live on seems likely to change this dynamic.

      I expect no-source-available software (including shareware) to proliferate again, to the detriment of open source.