It’s been a very hard year

2 months ago (bell.bz)

I’ve seen Piccalilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.

I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...

  • I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business, nobody will know. Is that any better than being 'wrong' and still in business?

    You might as well work on product marketing for AI, because that is where the client dollars are allocated.

    If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle, if you can survive long enough? Just survive and wait for things to shake out.

    • Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between moral value and strong preference, however, is blurry - and is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is, then maybe we are overlooking something more significant.

      I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.

      What exactly is the moral dilemma with AI? We are all reading this message on devices built off far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a way to avoid learning something new while virtue signaling how ethical they are - all while refusing to sacrifice, for ethical reasons, the things they are already accustomed to once they learn more about them. It all seems rather convenient.

      The main issue I see discussed is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable, and all benefits taken back so as to kill any perceived incentive to repeat the actions - but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?

      Maybe we should be targeting these AI companies if they are unethical and stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public, but models that are already trained can actually be used for good and I personally see it as unethical not to.

      Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get.


    • >but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

      Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.

      I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.

  • Has anyone considered that the demand for web sites and software in general is collapsing?

    Everyone and everything has a website and an app already. Is the market becoming saturated?

    • I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.

    • And new companies are created every day, and new systems are designed every day, and new applications are needed every day.

      The market is nowhere close to being saturated.


  • > In my opinion this business died 2 years ago

    It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.

  • I think your post illustrates pretty well how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions, ranging from "LLMs are basically AGI" to "they're total garbage", but this is a good, balanced - and concise! - overview.

  • This is the type of business that's going to be hit hard by AI. And the businesses that survive will be the ones that integrate AI into their business most successfully. It's an enabler, a multiplier. It's just another tool, and those wielding the tools best tend to do well.

    Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

    The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.

    There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.

    Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.

    • > Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

      So let's all just give zero fucks about our moral values and just multiply monetary ones.


    • I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.


    • > And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.

      I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.

    • I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.

      However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.


    • Totally agree, but I’d state it slightly differently.

      This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.

    • Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)

      >the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most

      There. Fixed!

    • I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.


    • > Arguing against progress as it is happening is as old as the tech industry. It never works.

      I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.

      So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.

      And the jury is still out on the effects of social media on children - otherwise, why are some countries banning social media for children?

      Not everything that comes out of Silicon Valley is automatically good.

  • Markets are not binary, though, and this is also what it looks like when you're early (unfortunately, similar to when you're late, too). So they may well be able to carve out a valid and sustainable market precisely because they're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".

  • What happens if the market is right and this is the "new normal"?

    Same as with Stack Overflow being down today: it seems like not many people care anymore, whereas back then it would have caused a total breakdown, because SO was vital.

  • Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.

    • Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.

      Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.


I did not look for a consulting contract for 18 years. Through my old network more quality opportunities found me than I could take on.

That collapsed during the COVID lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects. Even though mine (which they had already approved) would have saved them 400K a year, they did not care! Top down, the word came to cut everyone -- so they did.

This trend is very much a top down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave and unlikely to respond to whatever pressures they like to apply.

Since then it's been more of the same as far as consulting.

I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.

An unexpected benefit of all the pain was that I like making things again... but I am using Claude Code and Gemini. Amazing tools if you already have experience and know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.

  • >> even when mine (that they had already approved) would save them 400K a year

    You learn lessons over the years, and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. a limit on savings vs. unlimited revenue growth), there's the psychological component of teams and management. I saw this in the energy sector, where our company had two products: selling to the drilling side was about helping get more oil & gas; selling to the remediation side was about fulfilling their obligations as cheaply as possible. IT / dev at a non-software company is almost always a cost center.

    • > You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.

      The problem is that many places don't see the cost portions of revenue centers as investment, but still as costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz (1) outsourcing their website reservation system to Accenture, to comically bad results. The website/app is how people reserve cars - the most important part of the revenue-generating system.

      1. https://news.ycombinator.com/item?id=32184183


    • I would go further and say that even at software companies, even for dev that goes directly into the product, engineering is often seen as a cost center.

      The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"

      So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.


    • I work as a consultant and tend to focus on helping startups grow their revenue. And what you're saying here is almost word for word what I often recommend as the *first thing* they should do.

      In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. Ex. Instead of having your website say "save X dollars on Y" try "earn X more dollars using Y". It's incredible how much impact simple messaging can have on your conversion rates.

      This extends beyond just revenue. Focusing on revenue centers instead of cost centers is great career advice as well.

    • >> even when mine (that they had already approved) would save them 400K a year

      > You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers

      Totally agree. This is a big reason I went into solutions consulting.

      In that particular case I mentioned, it was a massive risk management compliance solution which they had to have in place, but they were getting bled dry by the existing vendor due to several architectural and implementation mistakes - made way back before I ever got involved - that they were sort of stuck with.

      I had a plan to unstick them at 1/5 the annual operating cost with better performance. I presented it to executives, even to Amazon, who would have been the infra vendor, to rave reviews.

      We had a verbal contract and I was waiting for paperwork to sign... and then Feb 2020... and then crickets.

  • Very few people suspected that GitHub was being used to train AI when we were all being pushed toward the best practice of frequent commits.

    A little earlier, very few suspected that our mobile phones were not only listening to our conversations to train some AI model, but that their gyroscopes were also being used to profile our daily routines (keeping the phone charging near our pillow, looking at it first thing in the morning).

    Now we are asked to use AI to write our code. I am quite anxious about what part of our lives we are selling now... Perhaps I am no longer their prime focus (50+), but who knows.

    Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the sanest thing.

    • >> Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the sanest thing.

      I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.

      It's not easy transferring miniDV tape to my computer, but the standard definition has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around to.

      I'm in the same age range, and I can't fathom having become a developer in the early aughts, in the midst of a gold rush for developer talent, only to suddenly see the entire tech world contract almost overnight.

      Strange tides we're living in right now.

    • If I had gone with the flow in 1995 I would have got my MCSE and worked for a big government bureaucracy.

      Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...

      Exactly right about GitHub. Facebook is the same, training on photos and social relationships, etc.

      They needed to generate a large body of data to train our future robot overlords to enslave us.

      We the 'experienced' are definitely not their target -- too much independence of thought.

      To your point, I use an old flip phone and VoIP even though I have written iOS and Android apps. My home has no wifi. I do not use Bluetooth. There are no cameras enabled on any device (except a camera).

  • They also produce crap once you leave the realm of basic CRUD web apps... Try using it with Microsoft's Business Central bullshit; it does not work well.

    • I have worked with a lot of code generation systems.

      LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then, for core logic, especially learning or meta-programming patterns, I need to jump in.

      Breaking tasks down to bite size, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision, and not being afraid to throw whole days away (not too often, though!).

      For reference, I have built stuff that goes way beyond a CRUD app with these tools, in 1/10th of the time it previously took me or less -- the key, though, is that I already knew how to do it and how to validate LLM outputs. I knew exactly what I wanted a priori.

      Code generation has technically always 'replaced' junior devs and has been around for ages; the results of the generation are just a lot better now. In the past it was a mixed bag of benefits and hassles to do code generation regularly; now it works much better and the cost is much less.

      I started my career as a developer, and the main reasons I became a solutions/systems guy were money and that I hated the tedious boilerplate phase of all software development projects over a certain scale. I never stopped coding, because I love it -- just not for large, soul-destroying enterprise software projects.


  • [flagged]

    • I earned their respect over many years of hard work -- hardly a freebie!

      I will say that being social and being in a scene at the right time helps a lot -- timing is indeed almost everything.


    • >I'm not for/or against a particular style, it must be real nice if life just solves everything for you while you just chill or whatever. But, a nice upside of being made of talent instead of luck is that when luck starts to run out, well, ... you'll be fine anyway :).

      This is wildly condescending. Holy.

    • Talent makes luck. Ex-colleagues reach out to me and ask me to work with them because they know the type of work I do, not because it's lucky.

      Also, wtf did I just read? The OP said he uses his network to find work, and you go on a rant about how you're rising and grinding to get that bread, and how everything you have was earned entirely by yourself, with no help from others? Jesus Christ, dude, chill out.


In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.

  • > Saying "well you have to follow the market" is such a cravenly amoral perspective.

    You only have to follow the market if you want to continue to stay relevant.

    Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.

    So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.

    • I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.

      The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.


    • I was talking to a friend of mine about a related topic when he quipped that he started disliking therapy once he realized it was effectively just teaching him coping strategies for an economic system that is inherently amoral.

      > So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.

      You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.

      Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.

      We shouldn't have to choose between principles and food on the table.


    • Sometimes companies become irrelevant while following the market, while other companies revolutionize the market by NOT following it.

      It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.

    • > it might mean going out of business for ideological reasons

      taking a moral stance isn't inherently ideological

    • >the options are follow the market or find a different line of work if you don’t like the way the market is going

      You can also just outlive the irrationality. If we could stop beating around the bush and admit we're in a recession, that would explain a lot of things. You just gotta bear the storm.

      It's way too late to jump on the AI train anyway. Maybe one more year, but I'd be surprised if that bubble doesn't pop by the end of 2027.

  • No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.

    If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.

    • > If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.

      Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm gonna share my photography, and not sharing it is on the table as well.

      I may even move to a cathedral model or just stop sharing the software I write with the general world, too.

      Nobody has to bend and act against their values and conscience just because others are doing it and the system demands that we betray ourselves for its own benefit.

      Life is more nuanced than that.


    • This metaphor implies a sort of AI inevitability. I simply don't believe that's the case. At least, not this wave of AI.

      The people pushing AI aren't listening to the true demand for AI. Thus, it's not making its money back. That's why this market is broken and not likely to last.

  • Yeah, but the business seems to be education for the web front end. If you are going to shun new tech, you should really return to the printing press or, better, to copying scribes. If you are going to do modern tech, you kind of need to stick with the most modern tech.

    • The printing press and copying scribes is a sarcastic comparison, but these web designers are still actively working, and their industry is hundreds of years from the state of those old technologies. The joke isn’t funny enough, nor is the analogy apt enough, to make sense.


  • It's cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point that it doesn't matter to you, then by all means, do what you want. The vast majority of people are not.

    • I assume they are weathering the storm if they are posting like this and not saying "we're leaving the business". A proper business has a war chest for this exact situation (though I'm unsure how long this business has operated).

  • "AI is amoral" is an opinion.

    Following the market is also not cravenly amoral, AI or not.

    • If the market is immoral, following it is immoral. And it seems like more and more of society disagrees that AI is moral.

  • I find what you are saying, and what they are saying, very generic.

    What stance against AI? Image generation is not the same as code generation.

    There are so many open source projects out there; that's a huge difference from taking all the images.

    AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data from online, or is it only big companies that are not allowed to?

  • I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demand that fuels further investment, and as a contractor, where you're a temporary cost - especially if you, and the people who depend on you, need the work to survive.

    A studio taking on temporary projects isn't investing into AI— they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.

    As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.

    The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.

    IMO, over-focusing on small moral crusades against specific players like this, and not on the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.

    • > This is effectively no different from a construction company building an office building, or a bakery baking a cake.

      A construction company would still be justified in saying no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. The bakery example feels even more relevant, seeing as a bakery refusing to bake a cake based on the owner's religious beliefs ended up in the US Supreme Court [1].

      I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.

      [1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...

    • > As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.

      I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.

      That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".

      << over-focusing on small moral crusades against specific players like this and not the game as a whole

      Fucking this. What I tend to see is a petty 'my guy good, not my guy bad' approach. All I want is even enforcement of the existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.

      1 reply →

  • Nobody is against his moral stance. The problem is that he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim. If you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc., then "selling out" is optional and bad. If you're Joe Schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you’re not some noble hero, you’re just blowing up your life.

    • > he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim

      No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.

      1 reply →

  • I'm not sure I understand this view. Did seamstresses see sewing machines as immoral? Or carpenters with electric and air drills and saws?

    AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g. drills) vs. a specific company (e.g. Ryobi) seems an odd take to me.

As someone who has sold video tech courses since 2015, I don't know about the future.

I don't want to openly write about the financial side of things here, but let's just say I don't have enough money to comfortably retire or stop working, and course sales over the last 2-3 years have dropped to less than 5% of what they were in 2015-2021.

It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".

Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just that traffic has dropped to a twentieth of what it used to be. Traffic dictates sales, and that's how I think I arrived in this situation.

It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.

  • As someone who was probably a consumer of such courses, thanks, first off. But second, what happened with me was I would be on the bench at my company and be going hard at training on various tools and technologies, and then get whipsawed by them into another direction completely and never once used the thing I was learning in my actual work.

    So I gave up on training. I would pretend to do training, but basically just screwed around doing whatever I wanted--I learned auto painting and dent repair while on the bench, basically anything. The last few years of my career sucked anyways.

    So yeah, I can understand why your business has dried up--there's no point in actually learning anything ahead of actually needing to use it. There just isn't. That goes for certs, too. People are burned the hell out on tech because there is so much of it now. It used to be you could do a bunch of Java certs or whatever and have a career. Now, you have to know and be experienced with EVERYTHING, and that just is not possible when every technology has 2-5 competing clones of it.

    • Yep, I know what you mean.

      It skews back-end stats too. For example if someone buys a course, I hope they take it in full so they feel happy and fulfilled but in reality a good portion never start. It's like Steam games. Some people just like collecting digital goods knowing they exist and that gives comfort.

      Massive course platforms are still thriving it seems so the market is there but it is way more saturated than 10 years ago. My first Docker course in 2015 was like maybe 1 out of 5 courses out there, but now there's 5,000 courses and Docker's documentation has gotten a lot better over time.

      I haven't figured out how to make things work, I just know I love tech, solving real world problems, documenting my journey (either through blog posts or courses) and traveling. It would be amazing to be able to travel the world and make courses. Back when I started 10 years ago I didn't realize I like traveling so much so I squandered that extra time.

  • Were most of your sales coming via your site and/or organic search?

    It sounds like you have a solid product, but you need to update your marketing channels.

    • Almost 100% of sales come from organic searches. Usually people would search for things like "Docker course" or "Flask course" and either find my course near the top of Google or they would search for some specific problem related to that content and come across a blog post I wrote on my main site which linked back to the course somewhere (usually).

      Now the same thing happens, but there's 20x less sales per month.

      I've posted almost 400 free videos on YouTube as well over the years, usually these videos go along with the blog post.

      A few years back I also started a podcast and did 100 weekly episodes over 2 years. It didn't move the needle on course sales, even though the topic (app development and deployment) partially aligns with my courses. Most episodes barely got ~100 listens, despite a 4.9/5 rating on major podcast platforms and people emailing me saying it was their favorite show, that it helped them so much, and that they hoped I'd never stop. The listener count just never grew. I didn't have sponsors or ads, but I stopped the show because it took 1 full day a week to schedule + record + edit + publish a ~1-2 hour episode. It was super fun and I really enjoyed it, but it was another "invest 100 days, make $0" thing, which simply isn't sustainable.

      1 reply →

  • This is always sad to hear. I really want more educational material out there that isn't just serving "beginner bait", and I'd love love love more technical podcasts. But it seems like not much of the audience is looking to small creators for that. Perhaps they only focus on conference talks.

    And yeah, I agree with the other responder that AI + Google's own enshittification of search may have cost your site traffic.

    • It's too time consuming to create--I looked into doing some kind of "new tech" podcast with a colleague and it was just too much work for very little moolah.

I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.

We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.

I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.

  • It's a global industry shift.

    You can either hope that this shift is not happening, or hope that you are one of the people who survives in your niche.

    But the industry / world is shifting, and you should start shifting with it.

    I would call that being innovative, staying ahead, etc.

    • The industry is not really shifting. It's not shifting to anything. It's just that the value is being captured by parasitic companies. They still need people like me to feed them training data while they destroy the economics of producing that data.

      5 replies →

    • Sure, they say that about every fad. Let's see how you feel when the bubble pops.

      In my eyes, that's when the grifters get out and innovators can actually create value.

      4 replies →

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff

If all of "AI stuff" is a "no" for you, then I think you just signed yourself out of working in most industries to some important degree going forward.

This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, not a market maker. If you refuse to serve the market that exists, you don't have a market.

Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.

  • > If all of "AI stuff" is a "no" for you, then I think you just signed yourself out of working in most industries to some important degree going forward.

    I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already, the industry where it is arguably the most "present" (forced in) is SWE, and it's proving to be quite disappointing... Where I work, the more senior you are, the less AI you use.

    • Yeah, gotta disagree with this one. Every senior and above around me has figured out a workflow that makes their job faster. Internal usage dashboards say the same thing.

      1 reply →

    • Even if it isn't, the OP can still make hay while the sun shines, even if it'll eventually set, as the saying goes. But to not make hay and slowly watch the sun set while losing your income? I won't ever understand that.

  • > what the market wants

    Pretty sure the market doesn't want more AI slop.

      There is absolutely AI slop out there. Many companies rushed to bolt AI, a glorified chatbot, onto their existing product and market it as AI.

      There are also very tasteful products that add value using LLMs and other recent advancements.

      Both can exist at the same time.

      4 replies →

    • Pretty sure HN has become completely detached from the market at this point.

      Demand for AI anything is incredibly high right now. AI providers are constantly bouncing off of capacity limits. AI apps in app stores are pulling incredible download numbers.

      1 reply →

    • Sora's app has a 4.8 rating on the App Store with 142K ratings. It seems to me that the market does not care whether it's slop or not, whether I like it or not.

      4 replies →

    • The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.

      The only thing that's going to change is the quality of the slop will get better by the year.

      2 replies →

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

I started TextQuery[1] with the same moralistic standing. Not with respect to using AI or not, but in rejecting the rot most of the software industry suffers from: placing more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco’s thin documentation. However, I failed to make it a sustainable business. My motivation ran out. And what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.

In hindsight, my time would have been better spent building something sustainable and making more money, then optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.

There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.

[1]: https://textquery.app/

  • Software is a means to an end. It always has been. There are a privileged few who have the luxury of being able to thoughtfully craft software. The attention to detail needs to go into what people see, not in the code underneath.

  • >It was naïve to obsess over subtleties that only a handful of users would ever notice.

    "When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through." - Steve Jobs

    Didn't take long for people to abandon their principles, huh?

'I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too.'

I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to a project involving AI equals contribution to some imagined evil (oddly, without explicitly naming what they envision, so it is harder to respond to). I have my personal qualms, but I run those through my internal ethics to see if there is conflict. Unless the author predicts a 'prime intellect' type of catastrophe, I think the note is shifting blame and justifying bad outcomes with a moralistic 'I did the right thing' while not explaining the assumptions in place.

  • >I have my personal qualms, but run those through my internal ethics to see if there is conflict

    Do you "run them through" actual ethics, too?

    • See.. here is a problem. You say 'actual' ethics as if those were somehow universal and not ridiculously varied across the board. And I get it, you use the term because a lot of readers will take it at face value AND simply use their own value system to translate it into what agrees with them internally. I know, because I do the same thing when I try not to show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.

      With that said, mister ants in the pants, what does actual mean to you in this particular instance?

      7 replies →

Not a big fan of his these days but Gary Vaynerchuk has my favorite take on this:

"To run your business with your personal romance of how things should be versus how they are is literally the great vulnerability of business."

  • It's very likely the main reason that small businesses like local restaurants, bakeries, etc. fail. People start them based on a fantasy and don't know how to face the hard realities of expenses and income. But like gravity, there's no escaping those unless you are already wealthy enough for it all to just be a hobby.

  • So we should cater to those with the lowest ethical standards instead?

    • That's what this community has shifted towards these past few years. Didn't take too long for the "hacker scene" to crumble to corporate greed.

      I had hope during the NFT days, but I guess many here always wanted a bot that told them they were smart and correct. Alas.

I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.

  • Survival is easy if you just sell out.

    • It's unfair to place all the blame on the individual.

      By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.

      But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.

      4 replies →

    • If the alternative to 'selling out' is making your business unviable and having to beg the internet for handouts (essentially), then yes, you should "sell out" every time.

      4 replies →

  • Surely there's AI usage that's not morally reprehensible.

    Models that are trained only on public domain material. For value-add usage, not simply marketing or gamification gimmicks...

    • How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.

      [0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested

      3 replies →

  • I wonder if there is a pivot where they get to keep going but still avoid AI. There must be for a small consultancy.

  • > "a self-inflicted wound"

    "AI products" that are being built today are immoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it participating in propping up a bubble economy.

    Taking a stance against it is a perfectly valid thing to do, and by disclosing it plainly, the author is not claiming to be a victim through no doing of their own. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: the majority of American GDP growth is AI this or that, and the majority of it has no real substance behind it.

    • I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.

      But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, violate copyright, or be used in weapons. They're refusing product marketing contracts: advertising websites, essentially.

      This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's arguably more immoral to fully pay your federal taxes if you live in the USA, for example, considering a good chunk is ultimately used for war, the CIA, NSA, etc., but nobody judges an average US resident for paying them.

      3 replies →

Sorry for them. After I got laid off in 2023 I had a devil of a time finding work, to the point my unemployment ran out. 20 years as a dev and tech lead and full stack, including stints as an EM and CTO.

Since then I pivoted to AI and Gen AI startups. Money is tight and I don't have health insurance, but at least I have a job…

  • > 20 years as a dev and tech lead and full stack, including stints as an EM and CTO

    > Since then I pivoted to AI and Gen AI startups. Money is tight and I don't have health insurance, but at least I have a job…

    I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?

    • Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.

      Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.

      A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.

      It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.

    • As CTO I wasn't at a big tech company; it was a 50-person digital studio in the South. My salary was $275k at the highest point in my career, so I never made FAANG money.

      1 reply →

  • Come to Europe. Salaries are (much) lower, but we can use good devs and you'll have vacation days and health care.

    • The tech sector in UK/EU is bad, too. And the cost of living in big cities is terrible for the salaries.

      They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of UK, France, and Germany.

    • Thanks - my wife and I actually have a long term plan to shift to the EU

      Applied to quite a few EU jobs via LinkedIn but nothing came of it- I suspected they wanted people already in EU countries

      Both of us are US Citizens but we don't want to retire in the US it seems to be becoming a s*hole esp around healthcare

    • What's the unemployment rate like?

      I'm not sure the claim "we can use good devs" is true from the perspective of European corporations. But would love to learn otherwise?

      And of course: where in Europe?

    • Maybe one day, but your game industry isn't that much better than ours. Wouldn't want to move overseas only to still have the studio shut down.

    • It would be worth it mathematically to be unemployed in the US for up to 3-5 years in hopes of landing another US job.

    • Taking a 75% pay cut for free healthcare that costs 1k a month anyway doesn't math. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.

      1 reply →

> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology as it increasingly shows us what it really is, would be a terrible move for the long term.

It is such an “interesting” statement on so many levels.

Market has changed -> we disagree -> we still disagree -> business is bad.

It is indeed hard to swim against the current. People have different principles and I respect that; I just rarely have this much difficulty understanding them, or see such a clear impact on the bottom line.

Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.

  • What do LLMs have to do with typing on phones, cancer research, or TTS?

    Deciding not to enable a technology that is proving to be destructive, except for the very few who benefit from it, is a fine stance to take.

    I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.

    • I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.

      I wish I had Walmart in my area, the grocery stores here suck.

      2 replies →

  • Intentionally or not, you are presenting a false equivalency.

    I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.

      How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and much of that will even be with LLMs.

      2 replies →

    • Putting aside the "useful" comment, because many find LLMs useful; let me guess, you're the one deciding whether it's ethical or not?

  • They are a marketing firm, so the stance within their craft is much more narrow than cancer.

    Also, we clearly aren't prioritizing cancer research if Altman has shifted to producing slop videos. That's why sentiment is decreasing.

    >Make it make sense.

    I can't explain to one who doesn't want to understand.

  • There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies trying to generate for the sake of generating. Not that they hated anything with a vector DB.

Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.

  • I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.

    HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.

    • I think it's more likely that software training as an industry is dead.

      I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.

      2 replies →

    • This isn't a bootcamp course. I don't think Andy's audience is one trying to convert an HTML course into a career wholesale. It's for students or even industry people who want a deeper understanding of the tech.

      Not everyone values that, but anyone who will say "just use an LLM instead" was never his audience to begin with.

  • How do you measure "absolute top tier" in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines, or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn’t be addressed by just reading the MDN docs or Can I Use, so maybe I’ve missed some complexity along the way.

  • Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…

  • Struggling because they're deliberately shooting themselves in the foot by not taking on the work their clients want them to take. If you don't listen to the market, eventually the market will let you fall by the wayside.

After reading the post I kept thinking about two other pieces, and only later realized it was Taylor who had submitted it. His most recent essay [0] actually led me to the Commoncog piece “Are You Playing to Play, or Playing to Win?” [1], and the idea of sub-games felt directly relevant here.

In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.

I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.

Links:

[0] https://taylor.town/iq-not-enough

[1] https://commoncog.com/playing-to-play-playing-to-win/

His business seems to be centered around UI design and front-end development, and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design, but from my experience people don't really care about small details in most cases.

  • I can definitely tell. Some sites just seem to give zero fucks about usability, just that it looks pretty. It's a shame

Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)

We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)

  • It's not really whether one has invested in the companies or not, it's more that we can see the author shooting themselves in the foot by not wanting to listen to the market. It's like selling vinegar at a lemonade stand (and only insisting on selling vinegar, not lemonade). It's simply logically nonsensical to us "Americans."

  • some would say historically that isn’t quite the case lol

    • LOL. Some would say it's been beaten out of us too...which makes Americans telling us to be enterprising even funnier.

I'm sure the author's company does good work, but the marketplace doesn't respond well to "we're really, _really_ good," "trust me," "you won't be disappointed." It not only feels desperate, but is proof-free. Show me your last three great projects and have your customers tell me what they loved about working with you. Anybody can say, "seriously, we're really good."

  • The "trust me" has a trailer, testimony from industry experts, and, gasp, a good-looking website that doesn't chug and still looks modern and dynamic. Bonus points for the transparency about 2025; we don't get much of that these days.

    It could still be trash, but they are setting all the right flags.

Everyone gets to make their own choices and take principled stances of their choosing. I don’t find that persuasive as a "buy my course" pitch, though.

  • I do. But sadly I don't have money and December/January are my slowest months these past few years. I'm exactly that "money is tight" crowd being talked about.

Software people are such a "DIY" crowd that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor fit.

I have a family member that produces training courses for salespeople; she's doing fantastic.

This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.

Finally, if you're against AI, you might have missed how good a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck on video rails, is a huge time-saver.

  • >Software people are such a "DIY" crowd that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor fit.

    I think courses like these are peak "DIY". These aren't courses teaching you to RTFM. They're teaching you how to think deeper, find the edge cases, and develop a philosophy. That's knowledge worth its weight in gold. Unlike React tutorial #32456, this is showing us how things really work "under the hood".

    I'd happily pay for that. If I could.

    >don't sell to musicians. They don't have any money

    But programmers traditionally do have money?

    >if you're against AI, you might have missed how good of a learning tool LLMs can be.

    I don't think someone putting their business on the line with their stance needs yet another HN screed on why AI is actually good. Pretty sure they've thought deeply about this.

Maybe they don't need to "create" websites anymore; fixing websites that LLMs generated is the future now.

We said WordPress would kill front-end work, but years later people still employ developers to fix WordPress messes.

The same thing will happen with AI-generated websites.

  • >fixing other website that LLM generated is the future now

    I barely like fixing human code. I can't think of a worse job than fixing garbage in, garbage out in order to prop up billionaires pretending they don't need humans anymore. If that's the long term future then it's time for a career shift.

    I'm still much more optimistic about prospects, fortunately.

  • > same thing would happen with AI generated website

    Probably even more so. I've seen the shit these things put out; it's unsustainable garbage. At least WordPress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.

The author has painted themselves into a corner. They refuse to do business with companies that use AI, and they try to support their business with teaching courses, which is also being impacted by AI.

They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?

  • > However they need to face current reality. What value-add can they provide in areas not impacted by AI?

    I'm sure the author has thought much longer on this than I, but I get the vibes here of "2025 was uniquely bad for reasons in and outside of AI". Not "2025 was the beginning of the end for my business as a whole".

    I don't think demand for proper engineering is going away; people simply have less to spend. And investors have less to invest, or are all in gambling on AI. It's a situation that will change for reasons outside the business itself.

My post had the privilege of being on front page for a few minutes. I got some very fair criticism because it wasn't really a solid article and was written when traveling on a train when I was already tired and hungry. I don't think I was thinking rationally.

I'd much rather see these kind of posts on the front page. They're well thought-out and I appreciate the honesty.

I think that when you're busy following the market, you lose what works for you. For example, most business communication happens through push-based traffic: you get assigned work and you have x time to solve it all. If you don't, we'll have some extremely tedious reflection meeting that leads nowhere. Why not do pull-based work, where you get done what you get done?

Is the issue here that customers aren't informed about when a feature will be implemented? Because the alternative is promising date X and delaying it 3 times because customer B is more important.

>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.

I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt

  • Yes I find this a bit odd. AI is a tool, what specific part of it do you find so objectionable OP? For me, I know they are never going to put the genie back in the bottle, we will never get back the electricity spent on it, I might as well use it. We finally got a pretty good Multivac we can talk to and for me it usually gives the right answers back. It is a once in a lifetime type invention we get to enjoy and use. I was king of the AI haters but around Gemini 2.5 it just became so good that if you are hating it or criticizing it you aren’t looking at it objectively anymore.

I don’t think they’re unique. They’re simply among the first to run into the problems AI creates.

Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.

People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.

I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.

"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"

The market is literally telling them what it wants, and potential customers are asking them for work, but they are declining it from "a moral standpoint,"

and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis."

This is a failure of leadership at the company. Adapt or die; your bank account doesn't care about your moral red lines.

I had a discussion yesterday with someone who owns a company creating PowerPoints for customers. As you might understand, that is also a business set to be hit hard by AI. What he does is offer an AI entry-level option, where the questions he asks the customer (via a form) feed a script for running the AI. With that he is able to combine his expertise with the market's demand for AI, and turn a profit from it.

  • I guess, then, that he is relying on his customers not discovering that there are options out there that will do this for them without a "middle man," as it were. Seems like shaky ground to be standing on, but I suppose it can work for a while if he already has good relationships in his industry.

    • I think what he does is thinking in possibilities, just what an entrepreneur should do.

On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.

  • It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.

    All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.

    • It's also the exact same human beings who were doing the NFT and metaverse bullshit and insisting they were the next best things and had to jump ship to the next "Totally going to change everything" grift because the first two reached the end of their runs.

      I wonder what their plan was before LLMs seemed promising?

      These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.

  • That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.

  • In the case of the author, their market isn't LLM makers directly, it's the people who use those LLMs, so the author's market is much bigger and isn't susceptible to collapse if LLM makers go bankrupt (because they can just go back to what they are already doing now pre-LLM), quite the opposite as this post shows.

  • No, the "market" is 6 billion people making thousands of individual decisions daily.

    • Do you think 6 billion people are clamoring for more effort and resources to be put into AI?

  • > On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.

    Careful now, if they get their way, they’ll be both the market and the government.

Interesting. I agree that this has been a hard year, the hardest in a decade. But the comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right, no? For me, it was the easiest year of my career, when I basically did nothing and picked up money thrown at me.

  • Why would your company or business suddenly require no effort due to covid?

    • Too much demand, all of a sudden. Money got printed and i went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.

      And it continued growing nonstop all the way through ~early Sep 2024, and has been slowing down ever since, by now coming to an almost complete stop - to the point I even fired all sales staff because they were treading water with no calls, let alone deals, for half a year before being dismissed in mid-July this year.

      I think it won't return - custom dev is done. The myth of "hiring coders to get rich" is over. No surprise, because it never worked; sooner or later people had to realise it. I may check again in 2-3 years how the market is doing, but I'm not at all hopeful.

      Switched into miltech where demand is real.

> we won’t work on product marketing for AI stuff, from a moral standpoint

I fundamentally disagree with this stance. Writing off a whole category of technologies because of some perceived immorality in the training process, regardless of how it's done, seems irrational.

I noticed a phenomenon on this post - many people are tying this person's business decisions to some sort of moral framework, or debating the morality of their plight.

"Moral" is mentioned 91 times at last count.

Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?

EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my question - morals is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talking about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?

I feel for the author. I do both mechanical and software engineering and I’m in this career(s) because I love making things and learning how to do that really well. Been having the most difficult time accepting the idea that there isn’t a good market for people like us - artisans, craftsmen, whatever the term might be - who are obsessive about exceptional quality and the time and effort it takes to get there. In this day and age, and especially when LLMs look ever more like they can produce at least a cheap, dollar store approximation of the real deal, “doing things really well” is going to be relegated to an ever more niche market.

I appreciate and respect that this org is avoiding AI hype work, but I don't know if there are long-term reputational benefits. Clients are going to be more turned off by your reasons for declining work than impressed by your "principled business".

From the client's perspective, it's their job to set the principles (or lack thereof) and your job to follow their instructions.

That doesn't mean it's the wrong thing to do though. Ethics are important, but recognise that it may just be for the sake of your "soul".

Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".

The equivalent of that comic where the cyclist jams a stick into their own spokes and then acts surprised when they hit the dirt.

But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.

Par for the course

ceaseless AI drama aside, this blog and the set-studio website look and feel great

I hope things turn around for them; it seems like they do good work.

> we won’t work on product marketing for AI stuff, from a moral standpoint

Can someone explain this?

  • Some folks have moral concerns about AI. They include:

    * The environmental cost of inference in aggregate, and of training specifically, is non-negligible

    * Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.

    * AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.

    * The primary companies who are selling AI products have, at times, controversial pasts or leaders.

    * Many products are adding AI where it makes little sense, and those systems perform poorly. Nevertheless, some companies shove AI in everywhere, cheapening products across a range of industries.

    * The social impacts of AI, particularly generative media showing up in places like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and balkanization.

    * AI is enabling an attention Gish gallop in places like search engines, where good results are being crowded out by slop.

    Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)

    • I'm fairly sure the first three points are all true for each new human produced. The environmental cost vs. output is probably significantly higher per human, and the population continues to grow.

      My experience with large companies (especially American Tech) is that they always try and deliver the product as cheap as possible, are usually evil and never cared about social impacts. And HN has been steadily complaining about the lowering of quality of search results for at least a decade.

      I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.


    • "Please don't downvote me for trying to provide a neutral answer to this person's question"

      Please note, that there are some accounts downvoting any comment talking about downvoting by principle.

Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.

Wishing these guys all the best. It's not just about following the market. It's about the ability to just be yourself. When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing. You simply just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.

I hope things with AI will settle soon, that there will be applications that actually make sense, and that some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with the AI.

  • > Everyone wants everything with the AI.

    All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.

  • >When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing.

    No, that's the most important situation to consider the moral thing. My slightly younger peers years back were telling everyone to eat tide pods. That's a pretty important time to say "no, that's a really stupid idea", even if you don't get internet clout.

    I'd hope the tech community of all people would know what it's like to resist peer pressure. But alas.

    >But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.

    I don't see that at all in the article. Quite the opposite here actually. I just see a person being transparent about their business and morals and commentors here using it to try and say "yea but I like AI". Nothing here attacked y'all for liking it. The author simply has his own lines.

    • By victim blaming I meant some comments here. I can relate to the author, and the narrative that it's my fault for trying to be myself and keep to my ways triggers me.

I'm just some random moron, but I just clicked on TFA, and it looks like a very pretty ad.

What am I missing?

> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.

imo LLMs are (currently) good at 3 things:

1. Education

2. Structuring unstructured data

3. Turning natural language into code

From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.

No need to buy the hype, but no reason to die from it either.
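Point 2 above, for instance, is less magic than it sounds: in practice "structuring unstructured data" mostly means prompting the model for JSON and defensively parsing whatever comes back. A minimal sketch (the model call is stubbed out; the field names and the canned reply are made up, and any chat API could be swapped in):

```python
import json

# Hypothetical target fields -- swap in whatever your data needs.
FIELDS = ("name", "email", "company")

def extraction_prompt(text: str) -> str:
    # Ask the model for strict JSON so the reply is machine-parseable.
    return (
        f"Extract {', '.join(FIELDS)} from the text below as a JSON object, "
        f"using null for anything missing:\n\n{text}"
    )

def parse_reply(reply: str) -> dict:
    # Models often wrap JSON in prose or code fences; trim to the braces.
    start, end = reply.find("{"), reply.rfind("}") + 1
    record = json.loads(reply[start:end])
    # Keep only the fields we asked for, so stray keys don't leak through.
    return {k: record.get(k) for k in FIELDS}

# Stubbed model reply, the way a chatty model might actually return it:
reply = 'Here you go! ```json\n{"name": "Ada", "email": "ada@example.com", "company": "Set"}\n```'
print(parse_reply(reply))
```

Nothing here is robust to truly malformed replies, but it is the shape of the work: prompt, parse, validate, retry.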

  • > imo LLMs are (currently) good at 3 things

    Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.

    • Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.


> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.

If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.

At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.

I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and JavaScript, beautiful designs and user-oriented interfaces. Sadly most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.

LLMs themselves are not a fad or overhyped. Even my mum (almost 70) is using an LLM nowadays, and she hates computers. Adapt or die, and I don't say this happily. I hate that I have to use an LLM to stay competitive, something I'm not used to in my life. I was always competitive bare-handed, mind. Now I need to be armed with an LLM.

I simply have a hard time following the refusal to work on anything AI related. There is AI slop but also a lot of interesting value add products and features for existing products. I think it makes sense to be thoughtful of what to work on but I struggle with the blanket no to AI.

  • I'm critical of AI because of climate change. Training and casual usage of AI takes a lot of resources. The electricity demand is way too high. We have made great progress in bringing a lot of regenerative energy to the grid, but AI eats up a huge part of it, so that other sectors can't decarbonize as much.

    We are still nowhere near to get climate change under control. AI is adding fuel to the fire.

  • My domain is games. It's a battlefield out there (pun somewhat intended). I ain't touching anything Gen-AI until we figure out what the hell is going on with regards to copyright, morality of artists, and general "not look like shit"-ness.

    Sad part is I probably will still be accused of using AI. But I'll still do my best.

    • Ship has sailed and that won’t be figured out. There is very little chance to put the cat back into the bag.

Interesting how someone can clearly be brilliant in one area and totally have their head buried in the sand in another, and not even realize it.

"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."

You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.

I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.

I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.

Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.

Then why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?

Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.

> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

Sounds like a self inflicted wound. No kids I assume?

It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.

Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.

  • Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.

    User education, for example, can be done in ways that don't even feel like gen AI and can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to their use case.

    If you won't even lean into things like this you're just leaving yourself behind.

    • >there's a lot of real meat to it that will obviously survive the hype cycle.

      Okay. When the hype cycle dies we can re-evaluate. Stances aren't set in stone.

      >If you won't even lean into things like this

      I'm sure Andy knows what kind of business his clients were in and used that to inform his acceptance or rejection of projects. It mentions web marketing, so it doesn't seem like much edtech crossed paths here.