Will AI be the basis of many future industrial fortunes, or a net loser?

1 day ago (joincolossus.com)

I think the interesting idea with “AI” is that it seems to significantly reduce barriers to entry in many domains.

I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.

For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphic design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.

Mind you this is barrier to entry. These are shovelware quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!

It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone own complex equipment or expensive licenses to Adobe Thisandthat.

  • Yeah, that's how I feel about it as well.

    For a large chunk of my life, I would start a personal project, get stuck on some annoying detail (e.g. the server gives some arcane error), get annoyed, and abandon the project. I'm not being paid for this, and for unpaid work I have a pretty finite amount of patience.

    With ChatGPT, a lot of the time I can simply copypaste the error and get it to give me ideas on paths forward. Sometimes it's right on the first try, often it's not, but it gives me something to do, and once I'm far enough along in the project I've developed enough momentum to stay inspired.

    It still requires a lot of work on my end to do these projects, AI just helps with some of the initial hurdles.

    • > For a large chunk of my life, I would start a personal project, get stuck on some annoying detail ...

      I am the same way. I did Computer Science because it was a combination of philosophy and meta thinking. Then when I got out, it was mainly just low level errors, dependencies, and language nuance.

      1 reply →

  • I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI. And for those that can't (legal, for example, or various SaaS vendors) the AI usually has a good idea of what services you'd want to engage.

    Ironically though, having lots of people found startups is not good for startup founders, because it means more competition and a much harder time getting noticed. So it's unclear that prosumers and startup founders will be the eventual beneficiaries here either.

    It would be ironic if AI actually ended up destroying economic activity because tasks that were frequently large-dollar-value transactions now become a consumer asking their $20/month AI to do it for them.

    • > ironic if AI actually ended up destroying economic activity

      that's not destroying economic activity - it's removing a less efficient activity and replacing it with a more efficient version. This produces economic surplus.

      Imagine saying this of someone digging a hole: that if they used a mechanical digger instead of a hand shovel, they'd destroy economic activity since it now costs less to dig that hole!

      69 replies →

    • But if startups have fewer specialist needs, they have lower overall startup costs, and so the amount of seed money needed goes down. This lowers the barrier to entry for a lot of people but also increases the number of options for seed capital. Of course it will likely increase competition, but that could make the market more efficient.

    • > I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI.

      You are missing the other side of the story. All those customers those AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers; incoming layoffs :)

      7 replies →

  • > It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.

    It’s worth reading William Deresiewicz’s The Death of the Artist. I’m not entirely convinced that the marketing claim that everyone can create art/games/whatever is actually a net positive for those disciplines.

    • >is actually a net positive result for those disciplines.

      This is an argument based in Luddism.

      Looms were not a net positive for the craftsmen who were making fabrics at the time.

      With that said, looms were not the killing blow; instead, an economic system that led them to starve in the streets was.

      There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is: are we going to hold on to economic systems that don't work under that regime?

      5 replies →

    • If people are making art to get rich and failing, it doesn’t kill artists, who’d be making art anyway; it kills the people trying to earn money from their art. Do we need quad-A blockbuster Ubisoft/Bethesda/Sony/MS/Nintendo releases for their artistic merit, or because their publishers/IP owners need to make money off of them? Ditto the big four movie studios. Those don’t really seem to matter very much. The whole idea of tastemakers (who they are and whether they should be trusted; indie vs. big studio, grassroots or intentionally cultivated) seems like it ebbs and flows. Right now I’d hate to be one of the bigs, because everything that made them big is not working out anymore.

      4 replies →

    • It shifted the signal-to-noise ratio, but it's not a net negative either. There are whole new genres of music that exist now because easy mixing tech is freely available. Do you or I like SoundCloud mumble rap? No, probably not. But there are enough people out there who do.

    • This reminds me of my preferred analogy: are digital artists real artists if they can’t mix pigment and skillfully apply them to canvas?

      Not sure why digital artists get mad when I ask. They’re no Michelangelo.

      7 replies →

  • Yep this is a huge enabler - previously having someone "do art" could easily cost you thousands for a small game, a month even, and this heavily constrained what you could make and locked you into what you had planned and how much you had planned. With AI if you want 2x or 5x or 10x as much art, audio etc it's an incremental cost if any, you can explore ideas, you can throw art out, pivot in new directions.

    • I'd argue a game developer should make their own art assets, even if they "aren't an artist". You don't have to settle for it looking bad, just use your lack of art experience as a constraint. It usually means going with something very stylized or very simple. It might not be amazing but after you do it for a few games you will have pretty decent stuff, and most importantly, your own style.

      Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.

      Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.

      3 replies →

    • It’s an enabler for everyone, so you still don’t have any advantage, just like you didn’t before.

      The only difference is you spend less on art but will spend the same in other areas.

      Literally nothing changed.

      1 reply →

    • > With AI if you want 2x or 5x or 10x as much art

      Imagery

      AI does not produce art.

      Not that it matters to anyone but artists and art enjoyers.

      8 replies →

  • Totally agree that what AI is doing right now feels more like the GarageBand/iMovie moment than the iPhone moment. It's democratizing creativity, not necessarily creating billion-dollar companies. And honestly, that's still a big deal

    • Yes, maybe what people create with it will be more basic. But is 'good enough' good enough? Will people pay for apps they can create on their own time for free using AI? There will be a huge disruption to the app marketplace unless apps are so much better than what an AI could create that they're worth the money. So short Apple? :) On the other hand, many, many more people will be creating apps and charging very little for them (because if it's not free or less than the value of my time, I'm building it on my own). This makes things better for everyone, and there'll still be a market for apps. So buy Apple? :)

    • The difference is you still need to express creativity in your use of GarageBand and iMovie. There is nothing creative about typing "give me a picture of x doing y" into a form field.

      Also, "democratizing"? Please. We're just entrenching more power into the small handful of companies who have been able to raise and set fire to unfathomable amounts of capital. Many of these tools may be free or cheap to use today, but there is nothing for the commons here.

    • The thing is... Elbow grease makes the difference.

      If you're just generating images using AI, you only get 80% there. You need at least to be able to touch up those images to get something outstanding.

      Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?

      8 replies →

  • Yeah that seems accurate.

    I mainly use AI for selfhosting/homelab stuff and the leverage there is absolutely wild - basically knows "everything".

  • I have a similar problem (available assets drive/limit game dev). What is your workflow like for generative game assets?

    • It’s really nothing special. I don’t do this a lot.

      Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.

      I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks as photoshop like apps always escape me.

      The end result ends up online and I share them with friends for a laugh or two and usually move on.

      4 replies →
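
The clean-up steps in that workflow (resize, crop, mirror) can be scripted instead of clicked through in GIMP. Here's a minimal sketch using Pillow; the in-memory image stands in for a generated card, and all sizes and colours are illustrative:

```python
from PIL import Image, ImageOps

# Stand-in for a 512x512 image that came out of the generator.
raw = Image.new("RGB", (512, 512), (200, 40, 40))

# Crop a square region (left, upper, right, lower), then scale to card
# size; NEAREST keeps hard pixel edges, which suits pixel-art assets.
card = raw.crop((0, 0, 512, 512)).resize((128, 128), Image.NEAREST)

# Mirror horizontally, e.g. for a left-facing variant of the same card.
flipped = ImageOps.mirror(card)

print(card.size, flipped.size)
```

Once the steps are in a script, regenerating a whole deck of cards after a style change is one command instead of an afternoon in an image editor.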

  • Yes! Barrier to entry down, competition goes up, barrier to being a standout goes up (but, many things are now accessible to more people because some can get started that couldn't before).

    Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).

  • The genericizing of aesthetics costs far more than it benefits. "Reducing barriers to entry" is a false claim if the barrier includes the progression of creativity itself. Once the AI addict becomes entranced by genericized assets, the cost-benefit deforms.

    If we take high-level creativity and deform it, really horizontalize the forms, the cost is much higher, as experience becomes generic.

    AI was a complete failure of imagination.

  • > "AI" is that it seems to significantly reduce barriers to entry in many domains.

    If you ask an LLM to generate some imagery, in what way have you entered visual arts?

    If you ask an LLM to generate some music, in what way have you entered being a musician?

    If you ask an LLM to generate some text, in what way have you entered writing?

  • Easy entry doesn't equal getting rich.

    • In fact one could argue it makes it harder; if the barrier to entry for making video games is lowered, more people will do it, and there's more competition.

      But in the case of video games there's been similar things already happening; tooling, accessible and free game engines, online tutorials, ready-made assets etc have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.

      Compare that to when Doom was made (as an example because it's well documented): Carmack had to learn 3D rendering, and how to make it run fast, from scientific textbooks; they needed a publisher to invest in them so they could actually start working on it full-time; and they needed to have diskettes with the game or its shareware version manufactured and distributed. And that was when part of distribution was already going through BBSes.

      1 reply →

    • Something like 200,000 new songs are uploaded to music services every day because tech lowered the barrier to entry. How's that working? Lots and lots of new rich musicians?

  • I'm wondering a good way to create 2D sprite sheets with transparency via AI. That would be a game changer, but my research has led me to believe that there isn't a good tool for this yet. One sprite is kind of doable, but a sprite animation with continuity between frames seems like it would be very difficult. Have you figured out a way to do this?

    • I think an important way to approach AI use is not to seek the end product directly. Don’t use it to do things that are procedurally trivial like cropping and colour palette changes, transparency, etc.

      For transparency I just ask for a bright green or blue background then use GIMP.

      For animations I get one frame I like and then ask for it to generate a walking cycle or whatnot. But usually I go for like… 3 frame cycles or 2 frame attacks and such. Because I’m not over reaching, hoping to make some salable end product. Just prototypes and toys, really.

    • I was literally experimenting with this today.

      Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.

      Then use Google Flow to create an animation between the two frames with Veo3

      It's astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill in the correct shade of magenta for transparency keying, since the Imagen model does not do this perfectly.

      IMO Veo 3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.

      If I was not super busy with family and work, I'd build a wrapper around these tools

    • I don't use AI for image generation so I don't know how possible this is, but why not generate a 3D model for Blender to ingest, then grab 2D frames from the model for the animation?

      1 reply →

    • I’ve been building up animations for a main character sprite. I’m hoping one day AI can help me make small changes quickly (apply different hairstyles mainly). So far I haven’t seen anything promising either.

      Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it
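
The "solid magenta/green background, then key it out" trick mentioned above is simple enough to automate. Here's a minimal chroma-key sketch in pure Python, operating on (R, G, B) tuples so it needs no imaging library; in practice you'd apply the same logic to a Pillow image's pixel data. The tolerance value is an assumption you'd tune per model:

```python
MAGENTA = (255, 0, 255)

def key_out(pixels, key=MAGENTA, tol=40):
    """Map RGB pixels to RGBA, making anything close to `key` transparent.

    `tol` is a per-channel tolerance, since generated images rarely hit
    the key colour exactly.
    """
    out = []
    for r, g, b in pixels:
        near = all(abs(c - k) <= tol for c, k in zip((r, g, b), key))
        out.append((r, g, b, 0 if near else 255))  # alpha 0 = transparent
    return out

# A tiny 2x2 "sprite": two near-magenta background pixels, two sprite pixels.
sprite = [(255, 0, 255), (10, 200, 30), (250, 5, 250), (0, 0, 0)]
print(key_out(sprite))
# → [(255, 0, 255, 0), (10, 200, 30, 255), (250, 5, 250, 0), (0, 0, 0, 255)]
```

One caveat of plain keying: sprites that themselves contain colours near the key will get holes, which is why magenta (rare in real art) is the traditional choice over green or blue.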

  • I introduced my mother to Suno, a tool for music generation, and now she creates hundreds of little songs for herself and her friends. It may not be great art, but it’s something she always wanted to do. She never found the time to learn an instrument, and now she finally gets to express herself in a way she loves. Just an additional data point.

  • I have been doing the exact same thing with assets and also it has helped me immensely with mobile development.

    I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.

I would imagine AI will be similar to factory automation.

There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).

The technology providers are not going to become fabulously rich though as long as there is competition. Early adopters will have to pay up, but it seems LLMs are shaping up to be a commodity where inference cost will be the most important differentiator, and future generations of AI are likely to be the same.

Right now the big AI companies pumping billions in to advance the bleeding edge necessarily have the most advanced products, but the open-source and open-weight competition is continually nipping at their heels. And it seems the current area where most progress is happening is agents and reasoning/research systems, not the LLMs themselves, where it's more about engineering than about who has the largest training cluster.

We're still in the first innings of AI though: the LLM era, which I don't think is going to last that long. New architectures and incremental learning algorithms aimed at AGI will come next. It may take a few generations of advances to get to AGI, and the next generation (e.g. what DeepMind is planning on a 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that whatever is built around the LLM, to take us to that next level of capability, will become the focus.

Something that's confused/annoyed me about the AI boom is that it's like we've learned to run before we learned to walk. For example, there are countless websites where you can generate a sophisticated, photorealistic image of anything you like, but there is no tool I know of where you can ask "give me a 16x16 PNG icon of an apple" and get exactly that. I know why: neural networks excel at fixed-size, organic data. But I don't think that makes it any less ridiculous. It also means that AI website generators are forced to generate assets with code when ordinary people would just use image/sound files (yes, I have really seen websites using WebAudio synths for sound effects).

Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things (although I feel like the world had already shunned boring, practical things for quite a while before this).

  • As you seem to understand, creating something that generally fits a description is the walking for AI. Following exact directions is the running. It may just feel reversed because of the path of other technology.

> Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.

This collapses an important distinction. The containerization pioneers weren’t made rich; that’s correct, Malcolm McLean, the shipping magnate who pioneered containerization, didn’t die a billionaire. It did, however, generate enormous wealth through downstream effects, by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from the downstream structural shifts of AI than from owning actual AI infrastructure.

This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.

  • The article's point is exactly that you should invest downstream of AI.

    • The problem is different though: containers could be made by others and offered dependable success, while anything downstream of the model creators is at the model creators' whim. And so far there seems to be not much that one model can do that another can't, so none of this bodes well for a reliable footing from which to determine what value, if any, can be added by anyone for very long.

      2 replies →

  • AI's already showing hints of the same pattern. The infrastructure arms race is fascinating to watch, but it's not where most of the durable value will live

>The disruption is real. It's also predictable.

I'm not sure it is very predictable.

We have people saying AI is LLMs and they won't be much use and there'll be another AI winter (Ed Zitron), and people saying we'll have AGI and superintelligence shortly (Musk/Altman), and if we do get superintelligence it's kind of hard to know how that will play out.

And then there's John von Neumann (1958):

>[the] accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

which is what kicked off the misuse of a perfectly good mathematical term for all that stuff. Compared to the other five revolutions listed (industrial, rail, electricity, cars and IT), I think AI is a fair bit less predictable.

Practically speaking, it's going to be both more impactful than we think and less impactful than we think at the same time.

On the one hand, there are a lot of fields that this form of AI can and will either replace or significantly reduce the number of jobs in. Entry level web development and software engineering is at serious risk, as is copywriting, design and art for corporate clients, research assistant roles and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING to fill a gap on a page/in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.

At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers in generating boilerplate code or setting up things that others have done millions of times before, but the quality of its output for new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace things like lawyers, doctors or other such professions with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.

Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.

I think OP's thesis should be expanded.

- AI is leading to cost optimizations for running existing companies. This will lead to less employment and potentially cheaper products. Fewer people employed will, at least temporarily, change demand-side economics; cheaper operating costs will reduce the supply/cost side.

- The focus should not just be on LLMs (like in the article). I think LLMs have shown what artificial neural networks are capable of, from material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just creating a cheaper, more efficient way of shipping goods around the world; it's creating new classes of products, like the invention of the microcontroller did.

- The barrier to starting businesses is lower. A programmer who isn't good at making art can use genAI to make a game. More temporary unemployment, from existing companies reducing costs by automating existing workflows, may mean that more people will start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention, time, etc. are limited, and there may be less money around with less employment, but the products themselves should cost less.

- I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open-source models (even if not as good as the latest gen), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, independent of the latest-gen model providers. Much like people tinkering with microcontrollers in the garage in the early days, as the article mentions.

Based on the points above, while certain industries (think phone call centers) will be in the Red Queen's race scenario the OP describes, new industries as yet unthought of will open up, creating new wealth for many people.

  • Red Queen Race scenario is already in effect for a lot of businesses, especially video games. GenAI making it easier to make games will ultimately make it harder to succeed in games, not easier. We’re already at a point where the market is so saturated with high quality games that new entrants find it extremely hard to gain traction.

  • > AI is leading to cost optimizations for running existing companies, this will lead to less employment and potentially cheaper products.

    There's zero chance that cost optimizations for existing companies will lead to cheaper products. It will only result in higher profits, while companies continue to charge as much as they possibly can for their products and deliver as little as they can possibly get away with.

  • Imagine a giant trawling net scooping up the last two or three decades of undeprecated work in the web/data/game/operating-system space and cutting out the people who did all that work. What do you think is going to happen to progress in those areas? I guess it was "done"? An LLM is only as good as its input, and as far as I can tell there is no reason to believe any of its second-order outputs. RLHF is an interesting plug for that hole, but it's only as good as the human feedback, and even then, those things taken to second order aren't going to be any good. This collapses the barrier to entry for existing products, i.e. those people are going to be swamped with new competition.

The title is a false dichotomy. It could be a net gain but spread across the whole society if the value added is not concentrated.

This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.

The inverse of the broken window fallacy.

  • Like all tech we've had recently, that won't last, it's always bait and switch.

    It will not remain cheap as soon as the competition is dead, and who survives is simply a case of who's got the biggest VC-supplied war chest.

I genuinely think that large language models are a useful technology in data processing. Being "language models" I find it easy to use LLMs to extract subject, object, quantity, time, verb, amount, user's language code and so on from the input text, and to a fairly trustworthy level. After that, standard Information Retrieval techniques take over.

What LLMs are absolutely not useful for, in my opinion, is answering questions or writing code, or summarising things, or being factual in any sense at all.
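
The extraction pattern described above (LLM pulls out subject/object/quantity/etc., classical IR takes over) tends to work by asking the model for a fixed JSON schema and then validating it with ordinary code. Here's a minimal sketch; the reply string is a hard-coded stand-in for a real model response, and the field names are illustrative, not any particular API:

```python
import json

# Stand-in for what an LLM would return when prompted to emit this schema.
llm_reply = ('{"subject": "customer", "verb": "order", '
             '"object": "widgets", "quantity": 3, "lang": "en"}')

def parse_extraction(reply, required=("subject", "verb", "object")):
    """Parse the model's JSON reply and fail loudly if fields are missing,
    so downstream IR code never sees a half-formed record."""
    fields = json.loads(reply)
    missing = [k for k in required if k not in fields]
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return fields

fields = parse_extraction(llm_reply)
print(fields["subject"], fields["quantity"])
# → customer 3
```

Keeping the trust boundary here (strict parsing and validation of a narrow schema, everything after that deterministic) is exactly what makes this use of LLMs more dependable than asking them open-ended questions.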

Making an industrial fortune is relative. It might just mean that jumping on the AI hype helps you preserve your fortune or position in the industry, while everyone else who misses the hype goes out of business.

I remember back in 2004, my first project was testing a teleconferencing system. We set up a huge screen with cameras at one of our subsidiaries and another at the HQ, and I had a phone on my desk with a built-in camera and screen. Did the company roll out the system? No, it didn’t. It was just too expensive. Did they make a fortune from that experience? No, they didn’t. But I’m pretty sure all companies in the knowledge industry that didn’t enable video calls and screen sharing for their employees went out of business years ago...

I think AI will be more like the smartphone revolution that Apple kicked off in 2007. Today there are two companies that provide the smartphone platform (Apple/Google), but thousands of large and small companies that build on top of it, including Uber, Snapchat, etc.

In that scenario, everyone makes money: OpenAI, Google (maybe Anthropic, maybe Meta) make money on the platform, but there are thousands of companies that sell solutions on top.

Maybe, however, LLMs get commoditized and open-source models replace OpenAI, etc. In that case, maybe only NVIDIA makes money, but there will still be thousands of companies (and founders/investors) making lots of money on AI everything.

  • I think there’s a gaping hole in your analogy: who in their right mind is spending $1,200 biennially to access LLMs at base, and then spending several small monthly subscriptions on top of that to access particular LLM-powered “apps”?

    Every use case I have for LLMs is satisfied with copilot, but even then if it costs like $5 a month to access someday, I’d just as soon not have it. Let alone the subsequent spending.

We have a limited time on earth. AI may present an opportunity for us to improve the quality of life or the amount of time spent doing things we really want to do while we are here. Those who spend time figuring out how to use AI for these purposes will be heroes, and only then will AI have been a positive development for humanity. Get to work, heroes!

  • Creating a machine lifeform that competes with humans for resources is hardly a heroic act.

I don't think most commenters have read the article. I can understand, it's rambly and a lot of it feels like they created a thesis first and then ham-fisted facts in later. But it's still worth the read for the last section which is a more nuanced take than the click-bait title suggests.

You can't make such generalized statements about anything in computing/business.

The AI revolution has only just got started. We've barely worked out basic uses for it. No-one has yet worked out revolutionary new things that are made possible only by AI - mostly we are just shoveling in our existing world view.

  • The point though is that AI won't make you rich. It is about value capture. They compare it to shipping containers.

    I think AI value will mostly be spread. OpenAI will be more like GoDaddy than Apple: trying to reduce prices and advertise (with a nice bit of dark patterns). It will make billions, but ultimately by competing its ass off rather than enjoying a moat.

    The real moats might be in mineral mining, fabrication of chips etc. This may lead to strained relations between countries.

    • The value is going to be in deep integration with existing platforms. It doesn't matter if OpenAI had their tools out first, Only the Microsoft AI will work in Word, only the Apple AI will deeply integrate on the iPhone.

      Having the cutting edge best model won't matter either since 99.9% of people aren't trying to solve new math problems, they are just generating adverts and talking to virtual girlfriends.

    • That's 100% not the case. OpenAI is wedged between the unstoppable juggernaut that is Google at the high end and the state-sponsored Chinese labs at the low end; they're going to mostly get squeezed out of the utility inference market. They basically HAVE to pivot to consumer stuff and go head to head with Apple with AI-first devices; that's the only way they're going to justify their valuation. This is actually not a crazy plan, as Apple has been resting on their laurels with their OS/software, and their AI strategy has been scattershot and bad.


    • Interesting thought. Once digital assets become devalued enough, things will revert and people/countries will start to keep their physical resources even tighter than before.

  • The way I look at this question is: Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet? And if so, is AI going to discover it? And if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?

    IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich. I can't really buy into the idea that his team is going to fail at this but a bunch of random smaller companies will manage to succeed somehow.

    And if modern AI turns into a cash cow for you, unless you're self-hosting your own models, the cloud provider running your AI can hike prices or cut off your access and knock your business over at the drop of a hat. If you're successful enough, it'll be a no-brainer to do it and then offer their own competitor.

    • > IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich

      If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?


    • That's why I just built my own tiny AI rig in a home server. I don't want to grow even more addicted to cloud services, nor do I want to keep providing them free human-made data. OK, so I don't have access to mystical hardware, but I'm here to learn rather than produce a service.

    • > IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich.

      There are still lots of currently known problems that could be solved with the help of AI that could make a lot of money - what is the weather going to be when I want to fly to <destination> in n weeks/months time, currently we can only say "the destination will be in <season> which is typically <wet/dry/hot/cold/etc>"

      What crops yield the best return next season? (This is a weather as well as a supply and demand problem)

      How can we best identify pathways for people whose lifestyles/behaviours are in a context that is causing them and/or society harm? (I'm a firm believer that there's no such thing as good/bad; the real trick to life is figuring out which context a certain behaviour belongs in, and identifying which context a person is in at any given point in time. We know that psychopathic behaviour is rewarded in business contexts but punished in social contexts, for example.)


    • >> Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet?

      Absolutely with 150% certainty yes, and probably many. The www started April 30, 1993; facebook started February 4, 2004 - more than ten years until someone really worked out how to use the web as a social connection machine - an idea now so obvious in hindsight that everyone probably assumes we always knew it. That idea was simply left lying around for anyone to pick up and implement, really from day one of the WWW. Innovation isn't obvious until it arrives. So yes, absolutely there are many glaring opportunities in modern capitalism upon which great fortunes are yet to be made, and in many cases by little people, not big companies.

      >> if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?

      I don't agree with your suggestion that the existing big guys always make the innovations and collect the treasure.

      Why did Zuckerberg make facebook, not Microsoft or Google?

      Why did Gates make Microsoft, not IBM?

      Why did Steve and Steve make Apple, not Hewlett Packard?

      Why did Brin and Page make Google - the world's biggest advertising machine - not Murdoch?


    • > If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?

      innovator's dilemma

>When any would-be innovator can build and train an LLM on their laptop and put it to use in any way their imagination dictates, it might be the seed of the next big set of changes

That’s kinda happening: small local models, huggingface communities, civit ai and image models. Lots of hobby builders trying to make use of generative text and images. It’s just that there’s not really anything innovative about text generation, since anyone with a pen and paper can generate text and images.

I can see AI helping some businesses do really well. I can also see it becoming akin to mass manufacturing. Take furniture for example, there's a lot of mass produced furniture of varying quality. But there are still people out there making furniture by hand. A lot of the hand built furniture is commanding higher prices due to the time and skill required. And people buy it!

I think we'll see a ton of games produced by AI or aided heavily by AI but there will still be people "hand crafting" games: the story, the graphics, etc. A subset of these games will have mass appeal and do well. Others will have smaller groups of fans.

It's been some time since I've read it, but these conversation remind me of Walter Benjamin's essay, "The Work of Art in the Age of Mechanical Reproduction".

  • > But there are still people out there making furniture by hand.

    That is a fairly insignificant segment of the market.

I guess one flaw in the argument about success leading to failure due to model providers eating the product layer, especially for B2B, is that it ignores switching costs. B2B integrations such as Glean or Abridge, which work with an existing infrastructure setup, are hard to throw away, and there's little incentive to do so. So in that sense, I don't think AI providers will manage to eat this layer completely without bloating themselves to an unmanageable degree. As an analogy, while Google and Apple control the entire mobile ecosystem, they don't make the most valuable apps. Case in point: gaming apps such as Fortnite, which has made billions in microtransactions while running on platforms controlled by other behemoths. They are good investments too.

If we can create an AGI, then an AGI can likely create more AGIs, and at that point you're trying to sell people things they can just have for free; traditional money and power are worthless now. Thus, an AGI will not be built as a commercial solution.

The part that stuck with me most: "Success will mean defeat." That nails the challenge of investing in the current AI landscape.

Never say never, but I certainly don’t see LLMs as the basis for industrial fortunes. Maybe future forms of “AI” could be that.

AGI is where the real money is. Gen AI is okay but mostly benefits the consumer.

Gen AI is not nearly powerful enough to justify current investments. A lot of money is going to go up in smoke.

This article seems to have scoped AI as LLMs and totally missed the revolutionary application that is self driving cars. There will be a lot more applications outside of chat assistants.

  • The same idea applies to self-driving cars though, no? That is an industry where the "AI revolution" will enrich only the existing incumbents, and there is a huge bar to entry.

    Self-driving cars are not going to create generational wealth through invention like microprocessors did.

Seems like the thing to do to get rich would be to participate in services that it will take a while for AI to be able to do: nursing, plumbing, electrician, carpentry (i.e., Baumol). Also energy infrastructure.

Like any gold rush, there will be gold, but there will also be folks who take huge bets and end up with a pan of dirt. And of course, there will be grifters.

Funny thing with people suddenly pretending we just got AI with LLMs. Arguably, AI has been around for way longer; it just wasn't chatty. I think when people talk about AI, they are either talking about LLMs specifically or transformers. Both seem like a very reductive view of the AI field, even if transformers are the hottest thing around.

A few issues:

1. The tech revolutions of the past were helped by the winds of global context. There were many factors that propelled those successful technologies on the trajectories. The article seems to ignore the contextual forces completely.

2. There were many failed tech revolutions as well. Success rates varied from very low to very high. Again, the overall context (social, political, economic, global) decides the matter, not technology itself.

3. In the overall context, any success is a zero-sum game. You may just be ignoring what you lost and highlighting your gains as success.

4. A reverse trend might pick up, against technology, globalization, liberalism, energy consumption, etc.

The problem with viewing AI through the lens of capitalism is the fact that you can't make it artificially scarce.

AI is largely capable of running on-device. In a few years, it's likely that most tasks that most people want AI for will be possible from a tiny model living in their phone. Open-source models are plentiful, functional, and only becoming more so.

But you can't monetize that. We're currently dumping billions of dollars into datacenter moats that are just gonna evaporate inside the decade.

For the average user doing their daily "who was that actor in that movie" query, no, you absolutely cannot monetize AI because all of your local devices can run the model for free with enough quality that no one will know or care that there's a difference.

For enterprise scale building a trillion dollar datacenter and 15 nuclear reactors to replace a hundred developers... also no. LLMs are not capable of that, and likely won't be in the foreseeable future. It's also extremely unclear that one could ever get an ROI on in-house AI like this. It might be more plausible if it were a commodity technology you can just buy, but then you can't make a moat.

The only hypothetical fortune to be found is by whoever is selling AI to people who think they need to buy AI. Just like bitcoin or NFTs.

The good news is that this has two possible outcomes: capitalist AI vendors will want to remove AI from individual access so they can sell it to you: everyone gets less AI. Capitalists realize they can never monetize AI when it's free and open source, and give up: everyone gets less AI. Win-win-win, in my book.

AI by nature is kind of like a black hole of value. Necessarily, a very small fraction will capture the vast majority of value. Luckily, you can just invest wisely to hedge some of the risk of missing out.

AI made me this summary (I’ve grown quite weary of reading AI think pieces) and it seems like a really good comparison.

>The article "AI Will Not Make You Rich" argues that generative AI is unlikely to create widespread wealth for investors and entrepreneurs. The author, Jerry Neumann, compares AI to past technological revolutions, suggesting it's more like shipping containerization than the microprocessor. He posits that while containerization was a transformative technology, its value was spread so thinly that few profited, with the primary beneficiaries being customers.

>The article highlights that AI is already a well-known and scrutinized technology, unlike the early days of the personal computer, which began as an obscure hobbyist project. The author suggests that the real opportunities for profit will come from "fishing downstream" by investing in sectors that use AI to increase productivity, such as professional services, healthcare, and education, rather than investing in the AI infrastructure and model builders themselves.

I used to be the biggest AI hater around, but I’m finding it actually useful these days and another tool in the toolbox.

  • Did you read the article, or did you just rely on the AI-generated summary? Lots of people argue that this kind of shortcut will make us dumber, and the argument does make sense.

It is interesting that the early shipping containerization boom resulted in a bubble in 1975 and had a new low around 1990.

1990 is when the real outsourcing mania started, which led to the destruction of most Western manufacturing. Apart from cheap Chinese trinkets the quality of life and real incomes have gotten worse in the West while the rich became richer.

So this is an excellent analogy for "AI": Finding a new and malicious application can revive the mania after an initial bubble pop while making societies worse. If we allow it, which does not have to be the case.

[As usual, under the assumption that "AI" works, of which there is little sign apart from summarizing scraped web pages.]

Apparently a lot of money is flowing into AI.

Looking around, can find curious things current AI can't do but likely can find important things it can do. Uh, there's "a lot of money", can't be sure AI won't make big progress, and even on a national scale no one wants to fall behind. Looking around, it's scary about the growth -- Page and Brin in a garage, Bezos in a garage, Zuckerberg in school and "Hot or Not", Huang and graphics cards, .... One or two guys, ... and in a few years change the world and $trillions in company value??? Smoking funny stuff?

Yes, AI can be better than a library card catalog subject index and/or a dictionary/encyclopedia. But a step or two forward and, remembering 100s of soldiers going "over the top" in WWI, asking why some AI robots won't be able to do the same?

Within 10 years, what work can we be sure AI won't be able to do?

So people will keep trying with ASML, TSMC, AMD, Intel, etc. -- for a yacht bigger than the one Bezos got or for national security, etc.

While waiting for AI to do everything, starting now it can do SOME things and is improving.

Hmm, a SciFi movie about Junior fooling around with electronics in the basement, first doing his little sister Mary's 4th grade homework, then in the 10th grade a published Web site book on the rise and fall of the Eastern Empire, Valedictorian, new frontiers in mRNA vaccines, ...?

And what do people want? How 'bout food, clothing, shelter, transportation, health, accomplishment, belonging, security, love, home, family? So, with a capable robot (funded by a16z?), it builds two more like itself, each of those ..., and presto-bingo everyone gets what they want?

"Robby, does P = NP?"

"Is Schrödinger's equation correct?"

"How and when can we travel faster than the speed of light?"

"Where is everybody?"

With AI everyone is a net loser.

People using it get dumber.

What is being produced is slop and discardable PoC-like trash.

The environmental costs of building and training LLMs are huge. That compute and water could have been useful for something.

Even the companies building and peddling AI are losers. They are not profitable and need constant billions of dollars of financial help to even stay afloat and pay their compute debt.

The worst part is that the even bigger losers will be the general population. Not only are our kids gonna be dumber than us thanks to never having to think for themselves, but our pensions are tied to a stock market that will inevitably collapse when it's realized that the top 30% of companies by value are just dominoes waiting to fall.

But the biggest loser of all is Elon Musk. Just because of who he is.

[flagged]

  • >Many psychiatric medications (SSRIs, lithium, ketamine for depression) are effective, but their exact pathways and why they work for some and not others are unclear.

    >General anesthesia works consistently, yet the precise molecular-level reason consciousness disappears isn’t settled science.

    (this response written by... AI)

    • Agreed, and I did mention medicines as examples of things that work but we don’t understand. But they weren’t “made” by us in quite the same way imo.

  • Except... people do know exactly how these things work. They know because they are creating them. They know because they are improving them. What nonsense to say we do not know how these things work. Engineers building Qwen, for example, not only know how things work but put all the work out there for people (if they had the means) to reproduce it.

    • We know in the same sense we understand the rules of Conway's Game of Life - at a low level - but don't understand what those rules will produce at a high level (gliders, guns) except by executing and seeing. Analogous to knowing what the code looks like and not knowing if it will halt.

      Knowing the low-level rules, or the recursive transition rule of a system, does not tell you its evolution in time.
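      The Game of Life comparison is easy to make concrete: the entire rule set fits in one small function, yet the only general way to learn that a glider travels one cell diagonally every four generations is to run it. (A minimal sketch, not from the thread.)

```python
from collections import Counter

def step(cells):
    """Advance one Game of Life generation over a set of live (row, col) cells."""
    # Count how many live neighbors every adjacent cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 generations the glider has translated one cell diagonally.
assert cells == {(r + 1, c + 1) for (r, c) in glider}
```

      Nothing in `step` mentions gliders; the emergent behavior only shows up by executing the rules, which is the commenter's point.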

And Dropbox will never take off

    • People also said the Juicero and the smart condom would never take off. This isn't a very useful gotcha.

    • The dig on Dropbox is that it was easy to build, not that it wasn’t useful. Juicero was neither easy to build (relatively) nor useful.

  • Non sequitur: Dropbox is a single company in the industry benefiting from the first wave. His argument would not exclude Dropbox anyway.

> Consumers, however, will be the biggest beneficiaries.

This looks certain. Few technologies have had as much adoption by so many individuals as quickly as AI models.

(Not saying everything people are doing has economic value. But some does, and a lot of people are already getting enough informal and personal value that language models are clearly mainstreaming.)

The biggest losers I see are successive waves of disruption to non-physical labor.

As AI capabilities accrue relatively smoothly (perhaps), labor impact will be highly unpredictable as successive non-obvious thresholds are crossed.

The clear winners are the arms dealers. The compute sellers and providers. High capex, incredible market growth.

Nobody had to spend $10 or $100 billion to start making containers.

AI is used by students, teachers, researchers, software developers, marketers and other categories, and adoption rates are close to 90%. Even if it does not make us more productive, we still like using it daily. But when used right, it does make us slightly more productive, and I think it justifies its cost. So yes, in the long run it will be viable: we both like using it and it helps us work better.

But I think the benefits of AI usage will accumulate with the person doing the prompting and their employers. Every AI usage is contextualized; every benefit or loss is manifested in the local context of usage, not at the AI provider.

If I take a photo of my skin sore and put it on ChatGPT for advice, it is not OpenAI that is going to get its skin cured. They get a few cents per million tokens. So the AI providers are just utilities; benefits depend on who writes the prompts and how skillfully they do it. Risks also go to the user; OpenAI assumes no liability.

Users are like investors - they take on the cost and bear the outcomes, good or bad. An AI company is like an employee: they don't really share in the profit, only get a fixed salary for work.

  • I think that AI is a benefit for about 1% of what people think it is good for.

    The remaining 99% has become a significant challenge to the greatest human achievement in the distribution of knowledge.

    If people used LLMs knowing that all output is statistical garbage made to seem plausible (i.e. "hallucinations"), and that it just sometimes overlaps with reality, it would be a lot less dangerous.

    There is not a single case of using LLMs that has led to a news story that isn't handily explained by conflating a BS-generator with a fact-machine.

    Does this sound like I'm saying LLMs are bad? Well, in every single case where you need factual information, it's not only bad, it's dangerous and likely irresponsible.

    But there are a lot of great uses when you don't need facts, or by simply knowing it isn't producing facts, makes it useful. In most of these cases, you know the facts yourself, and the LLM is making the draft, the mundane statistically inferable glue/structure. So, what are these cases?

    - Directing attention in chaos: suggesting where focus needs attention from a human expert (useful in a lot of areas: medicine, software development).
    - Media content: music, audio (fx, speech), 3d/2d art and assets and operations.
    - Text processing: drafting, contextual transformation, etc.

    Don't trust AI if the mushroom you picked is safe to eat. But use its 100% confident sounding answer for which mushroom it is, as a starting point to look up the information. Just make sure that the book about mushrooms was written before LLMs took off....

  • > AI is used by students, teachers, researchers, software developers, marketers and other categories and the adoption rates are close to 90%. Even if it does not make us more productive we still like using it daily.

    Nearly everyone uses pens daily, but almost no one really cares about them or says their company runs on pens. You might grumble when the pens that work keeps in the stationery cupboard are shit, perhaps.

    I imagine eventually "AI" services will be commoditised in the same way that pens are now. Loads of functional but fairly low-quality stuff, some fairly nice but affordable stuff, and some stratospheric gold-plated bricks for the military and enthusiasts.

    In the middle is a large ecosystem of ink manufacturers, lathe makers, laser engravers, packaging companies and logistics and so on and on that are involved.

    The explosive, exponential winner-takes-all scenario where OpenAI and its investors literally ascend to godhood and the rest of humanity lives forever under their divine bootheels doesn't seem to be the trajectory we're on.

  • This. Right now the consumer surplus created by improved productivity is being captured by users and to a small extent their employers. But that may not remain the case in future.

  • We also know from studies that it makes us less capable, i.e. it rots our brains.

    • Books also make us less capable at rote memorization. People used to do much more memorization. Search engines taught us to remember the keywords, not the facts. Calculators made us rarely do mental calculations. This is what happens - progress is also regress, you automate on one side and the skill gets atrophied on the other side, or replaced with meta-skills.

      How many of us know how to use machine code? And we call ourselves software engineers.


    • This is what the people actually studying this say:

      > Is it safe to say that LLMs are, in essence, making us "dumber"?

      > No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.

      https://www.brainonllm.com/faq

  • Feels like we're shifting into a world where “AI fluency” becomes a core part of individual economic agency, more like financial literacy than software adoption