
Comment by bkettle

15 hours ago

> “wow, I really can do _anything_ if I can just figure out how”

Except this time it’s “if I can just figure out how and pay for the Claude API usage”.

This is one of the sadder things about AI usage getting more standard that I haven’t seen discussed much: the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.

Yes, they can still write code the manual way, but if the norm is to use AI I suspect that beginner’s guides, tutorials, etc. will become less common.

There has generally always been some barrier: computer access, internet access, books, etc. If AI coding stays around, which it looks like it will, it will just be the current generation's barrier.

I don’t think it is sad at all. There are barriers to all aspects of life; life is not fair and, at least in our lifetimes, never will be. The best anyone can do is to help those around them and not get caught up in the slog of the bad things happening in the world.

  • But traditional barriers have been easier to knock down with charity, because it's easier to raise charity money for capex than opex.

    It was common to have charity drives to get computers into schools, for example, but it's much harder to see people donating money for tokens for poor people.

    Previous-generation equipment can be donated and can still spark an interest in computing and programming. Whereas now you literally can't even use ChatGPT-4.

    • "It's harder to convince other people to pay for this for me" is an insane criticism. Not every AI model needs a premium account, you can even run many excellent models locally if you don't want to pay for an internet connection.

      At some point you just have to accept that, yes, things are easier if you have a little bit of spending money. That's not "sad", it's a basic fact of life.

      2 replies →

    • Small models and processors are going to continue improving, and at some point you’ll be able to vibe code locally on your phone.

      When the iPhone came out, not everyone had a smartphone. Now 90% of the US has a smartphone, and many of these smartphones run generative local models.

Yep, I used to spend a lot of time learning PHP on a web server that was included with my internet subscription. Without it being free, I would never have learned how to create websites and would never have gotten into programming. The trigger was that free web hosting with PHP, bundled with the internet connection my parents were already paying for.

  • There are plenty of free models available; many that rival their paid counterparts.

    A kid interested in trying stuff can use Qwen Coder for free [1]; see the sketch after the links below.

    If the kid's school has Apple Silicon Macs (or iPads), each of them will, as of this fall, have Apple's 3-billion-parameter Foundation Models available for free [2].

    Swift Playground [3] is a free download, and Apple has an entire curriculum for schools. I would expect an upgrade to incorporate access to the on-board LLM.

    [1]: https://openrouter.ai/qwen/qwen3-coder:free

    [2]: https://developer.apple.com/videos/play/wwdc2025/286

    [3]: https://developer.apple.com/swift-playground/
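
    To make that concrete, here is a minimal sketch of calling the free Qwen3 Coder model through OpenRouter's OpenAI-compatible chat endpoint. The OPENROUTER_API_KEY environment variable, the requests library, and the prompt are my own assumptions; only the model slug comes from the link in [1].

        # Minimal sketch (assumptions as noted above): query qwen/qwen3-coder:free
        # via OpenRouter's OpenAI-compatible chat completions endpoint.
        import os
        import requests

        resp = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={
                "model": "qwen/qwen3-coder:free",  # the free model linked in [1]
                "messages": [
                    {"role": "user", "content": "Write a Python function that reverses a string."},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        # OpenAI-style response shape: first choice's message content
        print(resp.json()["choices"][0]["message"]["content"])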

    • Swift and Swift Playground might be a good introduction to programming, but it seems unlikely to lead to as many opportunities as a more popular ecosystem. And I don’t just mean job opportunities.

    • I guess hardware being able to run a local model will eventually get cheap enough, but for a lot of people even buying an Apple device or something with a good enough GPU is prohibitive.

      1 reply →

  • "Already being paid for by someone else" is very different than "free."

They're not that expensive for anyone who has the tech skills to actually make good use of them. I've been playing around with Claude Code, using API credits rather than the monthly fee. It costs about $5 per one-hour session. If you're going to be doing this professionally it's worth springing for the $100/month membership to avoid hitting credit limits, but if you just want to try it out, you can do so without breaking the bank.

A bigger question for me is "Does this actually increase my productivity?" The jury is still out on that. I've found that you really need to babysit the algorithm and apply your CS knowledge; you also have to be very clear about what you're going to tell it later, not let it make bad assumptions, and in many cases spell out the algorithm in detail. But it seems to be very good at looking up API details, writing the actual code, and debugging (if you guide it properly), all things that involve a non-trivial amount of tedium in everyday programming.

  • 12-year-old me wasn’t putting my tech skills to good use enough to pay $5 every time I sat down at the computer. I was making things though, and the internet was full of tutorials, chat rooms, and other people you could learn from. I think it would be sad if the same curious kid today was told “just pay $5 and ask Claude” when pestering someone in IRC about how to write a guestbook in Perl.

    • 12-year-old me wasn't either, but he was noodling around on a computer that cost $2500 (more like $5500 in today's dollars). I think our parents loved us very much and must have had some means to afford the capital cost of a computer back then.

      I don't see my 7-year-old paying $5 for each hour he wants to program (and no way in hell would I give him my credit card), but I could easily envision paying $20/month for a Claude subscription and letting him use it. We pay more than that for Netflix & Disney+.

      7 replies →

    • 12-year-old me had (or rather, my family had) a Celeron 333 MHz and a Pentium III 550 MHz, both from Gateway, because that was the sole awesome perk my dad got from working there: literally free computers, with a required number of years of employment to pay them off. In 2000, the P3 was still pretty hot shit. I dual-booted them with every Linux distro under the sun. Since we had dial-up, the only way I had those distros was from 4-H [0], which at the time in Nebraska had a partnership with University of Nebraska to do tech instruction; once a quarter, we’d drive down to a campus (usually UNL) and spend a weekend learning something (LAMP stack, hardware troubleshooting, etc.), and having a LAN party at night. Also we had free access to their (at the time) screamingly fast internet, so I would download distros and packages to try out later.

      My online upbringing was very much of the RTFM variety, and I am convinced that was and is a good way to learn. It’s not like the grumpy graybeards were cruel; they just didn’t want to waste their time answering the same “how do I…” questions from noobs. If you explained what you were experiencing, what you had read, and what you had tried, they were more than happy to help out. I don’t think that’s an unreasonable approach.

      [0]: https://4-h.org/

  • I think you said it. $100/mo and you're not even sure if it'll increase your productivity. Why on earth would I pay that? Do I want to flush $100 down the toilet and waste several days of my life to find out?

    • You don't have to pay $100 to find out, you can do that for ~$5-20 by directly buying API credits.

      I don't know for sure whether it's worth it yet. Further experimentation is needed, as well as giving it an honest shot and trying to learn the nuances of the tool. But the way I look at it - if this actually is a future career path, the net present value of its payoff is measured in the millions of dollars. It's worth spending ~$20 and a few nights of my time to figure that out, because the odds can be pretty damn low and still have the expected value pencil out. It's sorta like spending $200 on 1/4 of a Bitcoin in 2013 because I was curious about the technology - I fully expected it to be throwing money down the toilet, but it ended up being quite worth it. (I wish I'd had the same mindset when I could've bought into the Ethereum ICO at a penny or so an ETH.)

  • I have the tech skills to use them. I'm in my 30s, and I could not spend $5 on a one-hour coding session even if it 10xed my productivity. One or two hours would literally break the bank for me.

> This is one of the sadder things about AI usage getting more standard that I haven’t seen discussed much: the barrier to entry is now monetary

Agreed. And on the one hand you have those who pay an AI to produce a lot of code, and on the other hand you have those who have to review that code. I already regularly review code that has "strange" issues, and when I say "why does it do this?" the answer is "the AI did it".

Of course, one can pay for the AI and then review and refactor the code to make it good, but my experience is that most don't.

> the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.

Considering opportunity cost, a young person paying $20 or $100 per month for Claude API access is way cheaper than a young person spending a couple of years learning to code, and then some months coding something the AI can spit out in 10 minutes.

AI coding will still produce generations in which even programming graduates know fuck all about how to code, and are also bad at reasoning about the AI-produced code they depend on or thinking systematically (and there won't be any singularity to bail them out), but that's beside the point.

  • But all the other students are doing the same, so the expectation will quickly become that everyone uses these tools, potentially for years.

    My introduction to programming was through my dad's outdated PC and an Arduino, and that put me on par with the best funded.

  • Applying opportunity cost to students is a bit strange...

    People need to take time to get good at /something/. It's probably best to work with the systems we have and find the edge where things get hard, and then explore from there. It's partly about building knowledge, but also about gumption and getting some familiarity with how things work.

Yes indeed, who will pay? I run a lot through open models locally using LM Studio and Ollama, and it is nice to only be spending a tiny amount of extra money on electricity.
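
For what it's worth, here is a minimal sketch of what "local and nearly free" looks like against Ollama's HTTP API. The model name (llama3.1) and the prompt are just placeholders I picked; it assumes the Ollama server is running on its default port and that the model has already been pulled.

    # Minimal sketch (assumptions as noted above): chat with a local Ollama model.
    # Requires `ollama serve` running on the default port (11434) and a model
    # pulled beforehand, e.g. `ollama pull llama3.1`.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.1",  # placeholder; any locally pulled model works
            "messages": [{"role": "user", "content": "Explain recursion in one paragraph."}],
            "stream": False,  # return a single JSON reply instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["message"]["content"])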

I am retired, and not wanting to spend a ton of money getting locked long term into an expensive tool like Claude Code is a real concern. It is also more fun to sample different services. Don’t laugh, but I am paying Ollama $20/month just to run gpt-oss-120b very fast on their (probably leased) hardware with good web search tooling. Is it worth $20/month? Perhaps not, but I enjoy it.

I also like cheap APIs: Gemini 2.5 Flash (Pro when needed), Kimi K2, open models on Groq, etc.

The AI, meaning LLM, infrastructure picture is very blurred because of so many companies running at a loss - which I think should be illegal because long term I think it is misleading consumers.

  • > The AI, meaning LLM, infrastructure picture is very blurred because of so many companies running at a loss - which I think should be illegal because long term I think it is misleading consumers.

    In a sense it is illegal, even though the whole tech scene has been doing it for decades: price dumping is an illegal practice, and I still don't understand why it has never been treated as such in tech.

    Most startups with VC investors work only through price dumping, most unicorns came to be from this bullshit practice...

    • "Price dumping" isn't an economic term in common use.

      "Dumping" in international trade is somewhat similar but the reasons that is illegal are very different: https://en.m.wikipedia.org/wiki/Dumping_(pricing_policy)

      Pricing at a loss by VC funded companies is great for consumers. It rarely is at a loss though - they look at the lifetime value.

      Pricing at a loss by big tech could be viewed as anticompetitive. Personally I like that Gemini keeps OpenAI's prices lower, but one could argue it has stopped OpenAI's growth.

      4 replies →

I agree that access is a problem now, but I think it is one that hardware improvements will solve very quickly. We are a few generations of Strix Halo type hardware away from effortlessly running very good LLMs locally. (It's already possible, but the hardware is about $2000 and the LLMs you can run are good but not very good.) AFAIK AMD have not released the roadmap for Medusa Halo, but the rumours [1] are increased CPU and GPU performance, and increased bandwidth. Another iteration or two of this will make Strix Halo hardware more affordable, and the top-of-the-line models will be beasts for local LLMs.

[1]: https://www.notebookcheck.net/Powerful-Zen-6-Medusa-Halo-iGP...

LLMs are quickly becoming cheaper. Soon they will be “cheap as free,” to quote Homestar Runner. Then programming will be solved, no need for meatbags. Enjoy the 2-5 years we have left in this profession.

  • You say that, but subscription prices keep going up. Token price goes down but token count goes up. Companies are burning billions to bring you the existing prices, and multiple hundreds per month is not enough to clear the bar to use these tools.

    I’m personally hoping for a future with free local LLMs, and I do hope the prices go down. I also recognize I can do things a little cheaper each year with the API.

    However, it is far from guaranteed which direction we're heading in, and I don't think we're on track to get close to removing the monetary barrier anytime soon.

    • My bill for LLMs is going up over time. The more capable, higher-context models dramatically increase my productivity.

      The spend prices most of the developing world out: a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.

      And it does a better job than programmers earning $6k-$10k in Africa, India, and Asia.

      It's the mainframe era all over again, where access to computing is gated by $$$.

      1 reply →

  • Did you read the original article?

    LLM code still needs to be reviewed by actual thinking humans.

One can create a free Google account and use Gemini for free.

Or think of it this way: it's easy to get a base-level LLM for free (Toyota), but one should not expect top-of-the-line (Porsche) for free.

  • Previously, though, even the Porsche-tier development tools, such as GCC, were available to everyone.

    • Software development cost hundreds of dollars in the 90s. My parents bought VB 6 for $600.

      Only in tech are we shocked when things cost money. I don't know of any other industry where people expect otherwise.

Maybe local models can address this, but for me the issue is that relying on LLMs for coding introduces gatekeepers.

> Uh oh. We're getting blocked again and I've heard Anthropic has a reputation for shutting down even paid accounts with very few or no warnings.

I'm in the slack community where the author shared their experiment with the autonomous startup and what stuck out to me is that they stopped the experiment out of fear of being suspended.

Something that is fun should not go hand-in-hand with fear of being cut off!

This is a pro for a lot of the people whom AI people are targeting: idiots with money.

  • Be careful, maybe the idiots will be the only ones left with money, and smart people like you could end up homeless.

Eh, back in the day computers were expensive and not everyone could afford one (and I don't mean a library computer you can work on, but one you can code and hack on). The ubiquity of computing is not something that's been around forever.

There have always been costs and barriers for the cutting edge.

  • The problem isn’t cost, it’s reproducibility and understanding. If you rely on a service you can’t fully understand to get something done, you’re beholden to the whims of its provider.

    • Sure but that's not what the person I was replying to was talking about, nor what I was talking about.

      Cost of access is absolutely a problem in tech.

      The problem can certainly be multi-faceted though.

You made me realize exactly why I love skill-based video games and shun gacha games (especially those with PvP). You swiped to gain power over players who didn't. Yay?

The knowledge check will also slowly shift towards fast iteration rather than knowledge depth. The end goal is to make a commodity out of the myth of the 10x dev and take more leverage away from devs.