Comment by jillesvangurp

3 months ago

This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool, and those who wield the tools best tend to do well.

Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.

There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.

Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.

> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

So let's all just give zero fucks about our moral values and just multiply monetary ones.

  • >So let's all just give zero fucks about our moral values and just multiply monetary ones.

    You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that high, neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues out of self-interest, but they won't matter in the long term.

    That is not to suggest there are absolutely no legitimate moral problems with AI, but they will pale in comparison to what the market needs.

    If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.

    You can start by explaining which specific moral value of yours goes against AI use. It might clarify whether these values are that important to begin with.

    • > If AI can make things 1000x more efficient,

      Is that the promise of the faustian bargain we're signing?

      Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?

      53 replies →

    • >They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.

      And I believe they (and I) are suggesting that this is just a bad-faith spin on the market, if you look at actual AI confidence and sentiment and don't dismiss it as "ehh, just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor that they are happy about it.

      I don't think using the 2025 US government for a moral compass is helping your case either.

      >If AI can make things 1000x more efficient

      Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or find a job in this economy, right now. All the current gains go to corporate billionaires; why would they think that suddenly changes here and now?

  • AI is just a tool; like most other technologies, it can be used for good and bad.

    Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?

    If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.

    • You are thinking too small.

      The goal of AI is NOT to be a tool. It's to replace human labor completely.

      This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

      To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.

    • The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).

    • I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)

    • > AI is just a tool, like most other technologies, it can be used for good and bad.

      The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

      I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

      However, like social media before it, I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects, whether we individually choose to participate or not. Practically speaking, those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.

      9 replies →

    • >Where are you going to draw the line?

      How about we start with "commercial LLMs cannot give legal, medical, or financial advice" and go from there? LLMs for those businesses need to be operated by someone who can be held accountable (be it the expert themselves or the CEO of the company employing them).

      I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products". But baby steps.

      >AI as a technology is almost impossible to stop.

      Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to pretend it is. It can indeed be regulated; we just need to use our civic channels first instead of fighting amongst ourselves.

      Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.

      3 replies →

    • If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

      AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

      I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, it's extremely important when it comes to risk considerations.

      6 replies →

  • What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.

    You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.

    • Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"

      15 replies →

  • That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.

    • This seems rather black and white. Defining the morals probably makes sense first, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities.

  • [flagged]

    • The age old question: do people get what they want, or do they want what they (can) get?

      Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?

    • Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.

      So is it rational for a web design company to take a moral stance that they won't use JavaScript?

      Is there a market for that, with enough clients who want their JavaScript-free work?

      Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?

I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.

  • The latest DeepSeek and Kimi open weight models are competitive with GPT-5.

    If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.

    Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
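    To illustrate how small that disruption could be: most self-hosted inference servers (vLLM, llama.cpp, Ollama) expose an OpenAI-compatible HTTP API, so moving from a hosted lab to a rented GPU box is largely a matter of changing a base URL and a model name. A minimal sketch follows; the server URLs and model names are hypothetical placeholders, not real endpoints.

```python
import json

# Two interchangeable backends: a hosted lab API and a self-hosted
# open-weight model behind an OpenAI-compatible endpoint. Both values
# here are made up for illustration.
HOSTED = {"base_url": "https://api.example-lab.com/v1", "model": "hosted-flagship"}
SELF_HOSTED = {"base_url": "http://my-gpu-server:8000/v1", "model": "deepseek-r1"}

def build_chat_request(config, prompt):
    """Build the URL and JSON body for a /chat/completions call.

    The application code is identical for either backend; only the
    config dict differs, which is the point of the argument above.
    """
    return {
        "url": config["base_url"] + "/chat/completions",
        "body": json.dumps({
            "model": config["model"],
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Switching providers is a one-line config change, not a rewrite:
req = build_chat_request(SELF_HOSTED, "Summarise this ticket.")
```

    In practice the official OpenAI client libraries accept a `base_url` parameter for exactly this reason, which is why teams that lose a hosted provider could repoint at a self-hosted model with little code churn.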

> And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.

I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.

I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.

However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.

  • Website building started dying off when Squarespace launched and Wix came around. WordPress copied that, and it's been building blocks for the most part since then. There are few unique sites around these days.

  • > we don't see the authors getting the recognition.

    In that sense AI has been the biggest heist that has ever been perpetrated.

    • Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.

      Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades; that was filler content long before AI showed up. Same for the people that do the subtitles on soap operas, and the people that create the commercials that air at 4am on your TV. All fair game for AI.

      It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.

      People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.

      You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.

      Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?

      13 replies →

Totally agree, but I’d state it slightly differently.

This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.

Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)

>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most

There. Fixed!

AI is not a tool, it is an oracle.

Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.

  • Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...

  • Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.

  • Hot takes from 2023, great. Work with AIs has changed since then, maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work etc. Or don't.

    • > if you combine the Stone Soup strategy with Clever Hans syndrome you can sell the illusion of not working for 8 billable hours a day

      No thanks, I'm good.

  • "praying that the next prompt finally spits out something decent is not a business strategy."

    well, you've just described what ChatGPT is: one of the fastest-growing user bases in history

    as much as I agree with your statement, the real world doesn't respect that

    • > one of the fastest-growing user bases in history

      By selling a dollar of compute for 90 cents.

      We've been here before, it doesn't end like you think it does.

I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.

  • But what if the AI generated course was actually good, maybe even better than the human generated course? Which one would you pick then?

> Arguing against progress as it is happening is as old as the tech industry. It never works.

I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.

So some tech can just be hypeware. The OP has a legitimate standpoint, given some technologies' track record.

And the jury is still out on the effects of social media on children; why else are some countries banning social media for children?

Not everything that comes out of Silicon Valley is automatically good.