Comment by wartywhoa23

3 months ago

> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

So let's all just give zero fucks about our moral values and just multiply monetary ones.

>So let's all just give zero fucks about our moral values and just multiply monetary ones.

You are misconstruing the original point. They are simply suggesting that the moral qualms about using AI are not that high - neither to the vast majority of consumers nor to the government. A few people might exaggerate these moral issues out of self-interest, but they won't matter in the long term.

That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.

If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.

You can start by explaining what specific moral value of yours goes against AI use. It might clarify whether these values are that important to begin with.

  • > If AI can make things 1000x more efficient,

    Is that the promise of the faustian bargain we're signing?

    Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?

    • While humans have historically only mildly reduced their working time, down to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.

      50 replies →

    • You will probably be dead.

      But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.

  • >They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.

    And I believe they (and I) are suggesting that this is just a bad-faith spin on the market, if you look at actual AI confidence and sentiment and don't dismiss it as "ehh, just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor that they're happy about it.

    I don't think using the 2025 US government for a moral compass is helping your case either.

    >If AI can make things 1000x more efficient

    Exhibit A. My observation is that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or find a job in this economy, right now. All the current gains go to corporate billionaires; why would they think that suddenly changes here and now?

AI is just a tool; like most other technologies, it can be used for good and bad.

Where are you going to draw the line? Only where it affects you? Or maybe we should go back to using coal for everything, so the mineworkers get their old life back? Or follow the Amish guidelines and ban all technology that threatens the sense of community?

If you are going to draw a line, you'll probably have to start living in small communities, because AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you pass laws to ban it, other countries will allow it.

  • You are thinking too small.

    The goal of AI is NOT to be a tool. It's to replace human labor completely.

    This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

    To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.

  • The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).

  • I was told that the Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)

  • > AI is just a tool, like most other technologies, it can be used for good and bad.

    The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

    I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

    However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.

    • > The same could be said of social media

      Yes, absolutely.

      Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.

      6 replies →

  • >Where are you going to draw the line?

    How about we start with "commercial LLMs cannot give legal, medical, or financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or that expert's CEO).

    I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products". But baby steps.

    >AI as a technology is almost impossible to stop.

    Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to pretend it is. It can indeed be regulated; we just need to first use our civic channels instead of fighting amongst ourselves.

    Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.

  • If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

    AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

    I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though; it's extremely important when it comes to risk considerations.

What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.

You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.

  • Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"

That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.

  • This seems rather black and white. Defining the morals probably makes sense, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities?

[flagged]

  • The age old question: do people get what they want, or do they want what they (can) get?

    Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?

  • > when the market is telling you loud and clear they want X

    Does it, though? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different, we wouldn't be talking about the "AI bubble" after all.

    [1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...

    [2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...

    • He is right though:

      "Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”

      ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking and wasn't even on the horizon just a few years ago.

      AI Transcription of Videos is now a really cool and helpful feature in MS Teams.

      Segment Anything literally leapfrogged progress on image segmentation.

      You can generate any image you want in high quality in just a few seconds.

      There are already human beings who are shittier at their daily jobs than an LLM is.

    • 1) it was a failure of a specific implementation

      2) if you had read the paper, you wouldn't use it as an example here.

      A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT numbers, and the overall market cap of these companies - not cherry-picked, misunderstood articles.

      2 replies →

    • Exactly. Microsoft for instance got a noticeable backlash for cramming AI everywhere, and their future plans in that direction.

  • [flagged]

    • This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.

      1 reply →

    • What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself, as if it is everything that is evil and, to be on the right side of things, one must fight against AI.

      This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.

      30 replies →

  • Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.

    So is it rational for a web design company to take a moral stance that they won't use JavaScript?

    Is there a market for that, with enough clients who want their JavaScript-free work?

    Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and of their own users, who aren't as laser-focused on performatively avoiding JavaScript (and letting everyone know about it) as they are?