Comment by epiccoleman
1 day ago
I love this article just for the spirit of fun and experimentation on display. Setting up a VPS where Claude is just asked to go nuts - to the point where you're building a little script to keep Claude humming away - is a really fun idea.
This sort of thing is a great demonstration of why I remain excited about AI in spite of all the hype and anti-hype. It's just fun to mess with these tools, to let them get friction out of your way. It's a revival of the feelings I had when I first started coding: "wow, I really can do anything if I can just figure out how."
Great article, thanks for sharing!
> “wow, I really can do _anything_ if I can just figure out how”
Except this time it’s “if I can just figure out how and pay for the Claude API usage”.
This is one of the sadder things about AI usage becoming standard that I haven’t seen discussed much: the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.
Yes, they can still write code the manual way, but if the norm is to use AI I suspect that beginner’s guides, tutorials, etc. will become less common.
There has always been some barrier: computer access, internet access, books, etc. If AI coding sticks around, which it looks like it will, it will just be the current generation's barrier.
I don’t think it is sad at all. There are barriers to all aspects of life; life is not fair and, at least in our lifetimes, never will be. The best anyone can do is to help those around them and not get caught up in the slog of the bad things happening in the world.
But traditional barriers have been easier to knock down with charity, because it's easier to raise charity money for capex than for opex.
It was common to have charity drives to get computers into schools, for example, but it's much harder to see people donating money for tokens for poor people.
Previous-generation equipment can be donated and can still spark an interest in computing and programming. Whereas now you literally can't even use ChatGPT-4.
4 replies →
Yep, I used to spend a lot of time learning PHP on a web server that came with my internet subscription. Without it being free, I would never have learned how to create websites and would never have gotten into programming. The trigger was that free web hosting with PHP, part of the internet connection my parents were already paying for.
There are plenty of free models available, many of which rival their paid counterparts.
A kid interested in trying stuff can use Qwen Coder for free [1]; a short sketch follows the links below.
If the kid's school has Apple Silicon Macs (or iPads), this fall, each one of them will have Apple's 3 billion parameter Foundation Models available to them for free [2].
Swift Playgrounds [3] is a free download; Apple has an entire curriculum for schools. I would expect an upgrade to incorporate access to the on-board LLM.
[1]: https://openrouter.ai/qwen/qwen3-coder:free
[2]: https://developer.apple.com/videos/play/wwdc2025/286
[3]: https://developer.apple.com/swift-playground/
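For the curious: [1] takes only a few lines to try once you have a free OpenRouter key. A minimal sketch in Python, assuming OpenRouter's OpenAI-compatible endpoint (the key below is a placeholder):

    # pip install openai -- OpenRouter speaks the OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # placeholder: a free OpenRouter key goes here
    )
    resp = client.chat.completions.create(
        model="qwen/qwen3-coder:free",  # the free model from [1]
        messages=[{"role": "user", "content": "Write FizzBuzz in Python."}],
    )
    print(resp.choices[0].message.content)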
3 replies →
"Already being paid for by someone else" is very different than "free."
They're not that expensive for anyone who has the tech skills to actually make good use of them. I've been playing around with Claude Code, using API credits rather than the monthly fee. It costs about $5 per one-hour session. If you're going to be doing this professionally, it's worth springing for the $100/month membership to avoid hitting credit limits, but if you just want to try it out, you can do so without breaking the bank.
A bigger question for me is "Does this actually increase my productivity?" The jury is still out on that. I've found that you really need to babysit it and apply your CS knowledge: be very clear about what you're going to tell it later, don't let it make bad assumptions, and in many cases spell out the algorithm in detail. But it seems to be very good at looking up API details, writing the actual code, and debugging (if you guide it properly), all things that otherwise involve a non-trivial amount of tedium in everyday programming.
12-year-old me wasn’t putting my tech skills to good use enough to pay $5 every time I sat down at the computer. I was making things though, and the internet was full of tutorials, chat rooms, and other people you could learn from. I think it would be sad if the same curious kid today was told “just pay $5 and ask Claude” when pestering someone in IRC about how to write a guestbook in Perl.
9 replies →
I think you said it. $100/mo and you're not even sure if it'll increase your productivity. Why on earth would I pay that? Do I want to flush $100 down the toilet and waste several days of my life to find out?
3 replies →
I have the tech skills to use them. I'm in my 30s, and I could not spend $5 on a one-hour coding session even if it 10xed my productivity. 1-2 hours would literally break the bank for me.
> This is one of the sadder things about AI usage becoming standard that I haven’t seen discussed much: the barrier to entry is now monetary
Agreed. And on the one hand you have those who pay an AI to produce a lot of code, and on the other hand you have those who have to review that code. I already regularly review code that has "strange" issues, and when I say "why does it do this?" the answer is "the AI did it".
Of course, one can pay for the AI and then review and refactor the code to make it good, but my experience is that most don't.
>the barrier to entry is now monetary rather than just knowledge-based, which will make it _much_ harder for young people with no money to pick up.
Considering opportunity cost, a young person paying $20 or $100 per month for Claude API access is way cheaper than a young person spending a couple of years learning to code, and then some months coding something the AI can spit out in 10 minutes.
AI coding will still create generations in which even programming graduates know fuck all about how to code, and who are bad at reasoning about the AI-produced code they depend on or at thinking systematically (and there won't be any singularity to bail them out), but that's beside the point.
But all the other students are doing the same, so using the tools will quickly become the expectation, potentially for years.
My introduction to programming was through my dad's outdated PC and an Arduino, and that put me on par with the best funded.
Applying opportunity cost to students is a bit strange...
People need to take time to get good at /something/. It's probably best to work with the systems we have and find the edge where things get hard, and then explore from there. It's partly about building knowledge, but also about gumption and getting some familiarity with how things work.
yes indeed, who will pay? I run a lot through open models locally using LM Studio and Ollama, and it is nice to only be spending a tiny amount of extra money for electricity.
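In case it's useful to anyone, the local route is scriptable too. A minimal sketch against Ollama's default local REST endpoint (the model name is just an example; substitute whatever you've pulled):

    # Assumes `ollama serve` is running locally with a pulled model.
    import json
    import urllib.request

    payload = {
        "model": "llama3.1",  # example model name
        "prompt": "Explain what a VPS is in one sentence.",
        "stream": False,      # one JSON object back instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        print(json.loads(r.read())["response"])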
I am retired, and not wanting to spend a ton of money getting locked into an expensive tool like Claude Code long term is a real concern. It is also more fun to sample different services. Don’t laugh, but I am paying Ollama $20/month just to run gpt-oss-120b very fast on their (probably leased) hardware with good web search tooling. Is it worth $20/month? Perhaps not, but I enjoy it.
I also like cheap APIs: Gemini 2.5-flash, pro when needed, Kimi K2, open models on Groq, etc.
The AI, meaning LLM, infrastructure picture is very blurred because so many companies are running at a loss, which I think should be illegal because, long term, it misleads consumers.
> The AI, meaning LLM, infrastructure picture is very blurred because so many companies are running at a loss, which I think should be illegal because, long term, it misleads consumers.
In a sense it is illegal, even though the whole tech scene has been doing it for decades: price dumping is an illegal practice, and I still don't understand why it has never been treated as such in tech.
Most startups with VC investors work only through price dumping; most unicorns came to be through this bullshit practice...
5 replies →
I agree that access is a problem now, but I think it is one that hardware improvements will solve very quickly. We are a few generations of Strix Halo type hardware away from effortlessly running very good LLMs locally. (It's already possible, but the hardware is about $2000 and the LLMs you can run are good but not very good.) AFAIK AMD have not released the roadmap for Medusa Halo, but the rumours [1] are increased CPU and GPU performance, and increased bandwidth. Another iteration or two of this will make Strix Halo hardware more affordable, and the top-of-the-line models will be beasts for local LLMs.
[1]: https://www.notebookcheck.net/Powerful-Zen-6-Medusa-Halo-iGP...
LLMs are quickly becoming cheaper. Soon they will be “cheap as free,” to quote Homestar Runner. Then programming will be solved, no need for meatbags. Enjoy the 2-5 years we have left in this profession.
You say that, but subscription prices keep going up. Token price goes down, but token count goes up. Companies are burning billions to bring you the existing prices, and multiple hundreds of dollars per month still isn't enough to cover what these tools cost to run.
I’m personally hoping for a future with free local LLMs, and I do hope the prices go down. I also recognize I can do things a little cheaper each year with the API.
However, it is far from guaranteed which direction we’re heading in, and I don’t think we’re on track to get close to removing the monetary barrier anytime soon.
2 replies →
Did you read the original article?
LLM code still needs to be reviewed by actual thinking humans.
Does anyone have a good recommendation for a Claude Code-like tool that uses locally hosted models?
I believe gemini-cli can do this. I'm not sure though.
One can create a free Google account and use Gemini for free.
Or think of it this way: it's easy to get a base-level free LLM (Toyota), but one should not expect the top of the line (Porsche) for free.
Previously, though, the Porsche-grade development tools, such as GCC, were available to everyone.
1 reply →
Maybe local models can address this, but for me the issue is that relying on LLMs for coding introduces gatekeepers.
> Uh oh. We're getting blocked again and I've heard Anthropic has a reputation for shutting down even paid accounts with very few or no warnings.
I'm in the Slack community where the author shared their experiment with the autonomous startup, and what stuck out to me is that they stopped the experiment out of fear of being suspended.
Something that is fun should not go hand-in-hand with fear of being cut off!
This is a pro for a lot of the people whom AI people are targeting: idiots with money.
Be careful: maybe the idiots will be the only ones left with money, and smart people like you could be homeless.
1 reply →
Eh, back in the day computers were expensive and not everyone could afford one (and I don't mean a library computer you can work on; I mean one you can code and hack on). The ubiquity of computing is not something that's been around forever.
There have always been costs and barriers for the cutting edge.
The problem isn’t cost, it’s reproducibility and understanding. If you rely on a service you can’t fully understand to get something done, you’re beholden to the whims of its provider.
1 reply →
You made me realize exactly why I love skill-based video games and shun gacha games (especially those with PvP). You swiped to gain power over players who didn't. Yay?
The knowledge check will also slowly shift toward speed of iteration rather than depth of knowledge. The end goal is to commoditize the myth of the 10x dev and take more leverage away from devs.
For me, I can’t get into using AI tools like Claude Code. As far as I go is chat style where I’m mostly in control. I enjoy the actual process of crafting code myself. For similar reasons, I could never be a manager.
Agents are a boon for extraverts and neurotypical people. If it gets to the point where the industry switches to agents, I’ll probably just find a new career.
I strongly disagree agents are for extroverts.
I do agree it’s definitely a tool category with a unique set of features, and I'm not surprised it’s off-putting to some. But its appeal is definitely clear to me as an introvert.
For me, LLMs are just a computer interface you can program using natural language.
I think I’m slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.
Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!
It’s a great time to be a software engineer!
> For me, LLMs are just a computer interface you can program using natural language.
I wish they were, but they're not that yet, because LLMs aren't very good at logical reasoning. So it's more like an attempt to program using natural language. Sometimes it does what you ask, sometimes not.
I think "programming" implies that the machine will always do what you tell it, whatever the language, or reliably fail and say it can't be done because the "program" is contradictory, lacks sufficient detail, or doesn't have the necessary permissions/technical capabilities. If it only sometimes does what you ask, then it's not quite programming yet.
> Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!
I wish that, too, were true, and maybe it will be someday soon. But if I need to manually review the agent's output, then it doesn't feel like offloading much aside from the typing. All the same concentration and thought are still required, even for the boring things. If I could at least trust the agent to tell me whether it did a good job or is unsure, that would be helpful, but we're not even there yet.
That's not to say the tools aren't useful, but they're not yet "programming in a natural language" and not yet something you can "offload" work to.
23 replies →
> For me, LLMs are just a computer interface you can program using natural language.
Sort of. You still can't get reliable output for the same input. For example, I was toying with using ChatGPT with some Siri shortcuts on my iPhone. I do photography on the side, and finding good light for photoshoots is a use case I hit a lot, so I made a shortcut which sends my location to the API along with a prompt asking for today's sunset time, total amount of daylight, and golden hour times.
Sometimes it works, sometimes it says "I don't have specific golden hour times, but you can find those on the web" or a useless generic "Golden hour is typically 1 hour before sunset but can vary with location and season"
Doesn't feel like programming to me, as I can't get reproducible output.
I could just have the LLM write an API-calling script against some service that has that data, but then why bother with the middleman step at all.
I like LLMs, I think they are useful, I use them everyday but what I want is a way to get consistent, reproducible output for any given input/prompt.
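For what it's worth, the no-middleman version is only a few lines. Here is a sketch using the astral package (my choice of library, not anything from the shortcut), treating golden hour as the hour before sunset per the generic answer above:

    # pip install astral -- deterministic sun times, no LLM involved.
    from datetime import timedelta
    from astral import LocationInfo
    from astral.sun import sun

    # Placeholder coordinates; swap in the location from the phone.
    loc = LocationInfo("Home", "USA", "America/New_York", 40.7128, -74.0060)
    s = sun(loc.observer, tzinfo=loc.timezone)  # defaults to today

    print("Sunset:", s["sunset"])
    print("Daylight:", s["sunset"] - s["sunrise"])
    # Rule of thumb: golden hour is roughly the hour before sunset.
    print("Golden hour:", s["sunset"] - timedelta(hours=1), "to", s["sunset"])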
1 reply →
> I think I’m slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.

> Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!
I agree, and I feel that having LLMs do boilerplate-type stuff is fantastic for ADD people. The dopamine hit you get from making tremendous progress before you get utterly bored is nice. The thing that ADD/ADHD people are the WORST at is finishing projects. LLMs will help them once the thrill of prototyping a green-field project is over.
5 replies →
I find Claude great at all of the boilerplate needed to get testing in place. It's also pretty good at divining test cases to lock in the current behavior, even if it's buggy. I use Claude as a first pass on tests, then I run through each test case myself to make sure it's a meaningful test. I've let it loose on the code-coverage loop as well, so it can drill in and get those uncommon lines covered. I still don't have a good process for path coverage, but I'm not sure how easy that is in Go, as I haven't checked into it much yet.
I'm with you 100% on the boring stuff. It's generally good at the boring stuff *because* it's boring and well-trod.
Last week there was this post about flow state, and pretty much explains the issue:
https://news.ycombinator.com/item?id=44811457
1 reply →
It's interesting that every task in the world is boring to somebody, which means nothing will be left to the people interested in it, because somebody else will gladly shotgun it with an AI tool.
> For me, LLMs are just a computer interface you can program using natural language. ... boring tasks cause extreme discomfort ... Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!
The problem with this perspective is that when you try to offload exactly the same boring task(s) to exactly the same LLM, the results you get back are never even close to being the same. This work you're offloading via natural-language prompting is not programming in any meaningful sense.
Many people don't care about this non-determinism. Some, because they don't have enough knowledge to identify, much less evaluate, the consequent problems. Others, because they're happy to deal with those problems, under the belief that they are a cost that's worth the net benefit provided by the LLM.
And there are also many people who do care about this non-determinism, and aren't willing to accept the consequent problems.
Bluntly, I don't think that anyone in the first group (those who can't even identify the problems) can call themselves a software engineer.
Programming implies that it's going to do what I say. I wish it did.
> Agents are a boon for extraverts and neurotypical people.
This sounds like a wild generalization.
I am in neither of those two groups, and I’ve been finding tools like Claude Code becoming increasingly more useful over time.
Made me much more optimistic about the direction of AI development in general too. Because with each iteration and new version it isn’t getting anywhere closer to replacing me or my colleagues, but it is becoming more and more useful and helpful to my workflow.
And I am not one of those people who are into “prompt engineering” or typing novels into the AI chatbox. My entire interaction is typically short 2-3 sentences “do this and that, make sure that XYZ is ABC”, attach the files that are relevant, let it do its thing, and then manual checks/adjustments. Saves me a boatload of work tbh, as I enjoy the debugging/fixing/“getting the nuanced details right” aspect of writing code (and am pretty decent at it, I think), but absolutely dread starting from a brand new empty file.
> I can’t get into using AI tools like Claude Code. As far as I go is chat style where I’m mostly in control.
Try aider.chat (it's in the name), but specifically start with "ask" mode, then dip a toe into "architect" mode, not "code", which is where Claude Code and the "vibe" nonsense is.
Let aider.chat use Opus 4.1 or GPT-5 for thinking, with no limit on reasoning tokens and --reasoning-effort high.
> agents are a boon for extraverts and neurotypical people.
On the contrary, I think the non-vibe tools are force multipliers for those with an ability to communicate so precisely they find “extraverts and neurotypical people” confounding when attempting to specify engineering work.
I'd put both aider.chat and Claude Code in the non-vibe class if you use them Socratically.
Thanks for this, going to try it out. I need to use a paid API and not my Claude Max or GPT Pro subscription, right?
2 replies →
> Agents are a boon for extraverts and neurotypical people.
Please stop with this kind of thing. It isn't true, it doesn't make sense and it doesn't help anyone.
I bet your code sucks in quality and quantity compared to the senior+ engineer who uses the modern tools. My code certainly did even after 20 years of experience, much of that as senior/staff level at well paying companies.
For what it’s worth I’m neurodivergent, introverted and have avoided management up to the staff+level. Claude Code is great I use it all day every day now.
For me (an introvert), I have found great value in these tools. Normally, I kind of talk to myself about a problem / algorithm / code segment as I'm fleshing it out. I'm not telling myself complete sentences, but there's some sort of logical dialog I am having with myself.
So I just have to convert that conversation into an AI prompt, basically. It just kind of does the typing for the construct already in my head. The trick is to just get the words out of my head as prompt input.
That's honestly not much different than an author writing a book, for example. The story line is in their head, they just have to get it on paper. And that's really the tricky part of writing a novel as much as writing code.
I therefore don't believe this is an introvert/extrovert thing. There are plenty of book authors of both kinds. The tools available as AI code agents are really just an advanced form of dictation.
I kind of think we will see some industry attrition as a result of LLM coding and agent usage, simply because the ~vIbEs~ I'm witnessing boil down to quite a lot of resistance (for multiple reasons: stubbornness, ethics, exhaustion from the hype cycle, sticking with what you know, etc)
The thing is, they're just tools. You can choose to learn them, or not. They aren't going to make or break your career. People will do fine with and without them.
I do think it's worth learning new tools though, even if you're just a casual observer / conscientious objector -- the world is changing fast, for better or worse, and you'll be better prepared to do anything with a wider breadth of tech skill and experience than with less. And I'm not just talking about writing software for a living, you could go full Uncle Ted and be a farmer or a carpenter or a barista in the middle of nowhere, and you're going to be way better equipped to deal with logistical issues that WILL arise from the very nature of the planet hurtling towards 100% computerization. Inventory management, crop planning, point of sale, marketing, monitoring sensors on your brewery vats, whatever.
Another thought I had was that introverts often blame their deficits in sales, marketing and customer service on their introversion, but what if you could deploy an agent to either guide, perform, or prompt (the human) with some of those activities? I'd argue that it would be worth the time to kick the tires and see what's possible there.
It feels like early times still with some of these pie in the sky ideas, but just because it's not turn-key YET doesn't mean it won't be in the near future. Just food for thought!
"ethics"
I agree with all of your reasons but this one sticks out. Is this a big issue? Are many people refusing to use LLMs due to (I'm guessing here): perceived copyright issues, or power usage, or maybe that they think that automation is unjust?
2 replies →
> Agents are a boon for extraverts and neurotypical people.
As an extrovert, the chances I'll use an AI agent in the next year are zero. Not even a billion to one, but a straight zero. I understand very well how AI works, and as such I have absolutely no trust in it for anything that isn't easy/simple/solved, which means I have virtually no use for generative AI. Search, reference, data transformation, sure. Coding? Not without verification or being able to understand the code.
I can't even trust Google Maps to give me a reliable route anymore, why would I actually believe some AI model can code? AI tools are helpers, not workers.
>no trust in it for anything that isn't easy/simple/solved
I'm not sure what part of programming isn't generally solved thousands of times over for most languages out there. I'm only using it for lowly web development but I can tell you that it can definitely do it at a level that surprises me. It's not just "auto-complete" it's actually able to 'think' over code I've broken or code that I want improved and give me not just one but multiple paths to make it better.
1 reply →
At one point in my life I liked crafting code. I took a break, came back, and I no longer liked it--my thoughts ranged further, and the fine-grained details of implementations were a nuisance rather than ~pleasurable to deal with.
Whatever you like is probably what you should be doing right now. Nothing wrong with that.
I think they're fantastic at generating the sort of thing I don't like writing out. For example, a dictionary mapping state names to their abbreviations, or extracting a data dictionary from a pdf so that I can include it with my documentation.
>Agents are a boon for extraverts and neurotypical people.
I completely disagree. Juggling several agents (and hopping from feature-to-feature) at once, is perfect for somebody with ADHD. Being an agent wrangler is great for introverts instead of having to talk to actual people.
I think you misunderstand what this does. It is not only a coding agent. It is an abstraction layer between you and the computer.
It is effin nutzo that you would try to relate chatting with AI and agentic LLM codegen workflows to the intro/extravert dichotomy or to neuro-a/typicality - you so casually lean way into this absolute spectrum that I don’t even think associates the way you think it does, and it’s honestly kind of unsettling, like - what do you think you know about me, and about My People, that apparently I don’t know??
If it doesn’t work for you that’s fine, but turning it into some tribalised over-generalization is just… why, why would you do that, who is that kind of thing useful for??
Agents are a boon for introverts who fucking hate dealing with other people (read: me). I can iterate rapidly with another 'entity' in a technical fashion and not have to spend hours explaining in relatable language what to do next.
I feel as if you need to work with these things more, as you would prefer to work, and see just how good they are.
You are leaving a lot of productivity on the table by not parallelizing agents for any of your work, seemingly out of psychological-comfort quirks rather than an earnest pursuit of results.
Automation productivity doesn’t remove your own agency. It frees more time for you to apply your desire for control more discerningly.
I can imagine there are plenty of use cases, but I could not find one for myself. Can you give an example?
Pretty sure we can make LLM agents that transform declarative inputs into agentic action.
> Agents are a boon for extraverts and neurotypical people
As a neurodivergent introvert, please don't speak for the rest of us.
That stuck out to me as well. People will make up all sorts of stories to justify their resistance to change.
1 reply →
On one hand, I agree with you that there is some fun in experimenting with silly stuff. On the other hand...
> Claude was trying to promote the startup on Hackernews without my sign off. [...] Then I posted its stuff to Hacker News and Reddit.
...I have the feeling that this kind of fun experiments is just setting up an automated firehose of shit to spray places where fellow humans congregate. And I have the feeling that it has stopped being fun a while ago for the fellow humans being sprayed.
This is an excellent point that will immediately take this thread off-topic. We are, I believe, committed to a mire of computer-generated content enveloping the internet. I believe we will go through a period where internet communication (like HN, Reddit, and pages indexed by search engines) is unviable. Life will go on; we will just be offline more. Then the defense systems will be up to snuff, and we will find a stable balance.
I hope you're right. I don't think you will be, AI will be too good at impersonating humans.
1 reply →
My theory (and hope) is the rise of a web of trust system.
Implemented so that if a person in your web vouches for a specific URL (“this is made by a human”), you can see it in your browser.
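The mechanics don't need to be exotic. A vouch could be as small as a signature over the URL, checked by a browser extension against the public keys of people in your web. A sketch with ed25519 (an illustrative scheme, not any existing standard):

    # pip install cryptography -- a "vouch" as a signature over a claim.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The voucher signs a claim about a specific URL.
    key = Ed25519PrivateKey.generate()
    claim = b"made-by-a-human:https://example.com/post/123"
    vouch = key.sign(claim)

    # Anyone who trusts the voucher's public key can verify;
    # verify() raises InvalidSignature on a forged vouch.
    key.public_key().verify(vouch, claim)
    print("vouch verified")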
4 replies →
Indeed. I worry though. We need those defense systems ASAP. The misinformation and garbage engulfing the internet does real damage. We can't just tune it out and wait for it to get better.
I definitely understand the concern - I don't think I'd have hung out on HN for so long if LLM generated postings were common. I definitely recognize this is something you don't want to see happening at scale.
But I still can't help but grin at the thought that the bot knows that the thing to do when you've got a startup is to go put it on HN. It's almost... cute? If you give AI a VPS, of course it will eventually want to post its work on HN.
It's like when you catch your kid listening to Pink Floyd or something, and you have that little moment of triumph - "yes, he's learned something from me!"
(Author here.) I did feel kinda bad about it, as I'd always been a 'good' HNer until that point, but honestly it didn't feel that spammy to me compared to some of the human-generated slop I see posted here, and as expected it wasn't high quality enough to get any attention, so 99% of people would never have seen it.
I think the processes etc that HN have in place to deal with human-generated slop are more than adequate to deal with an influx of AI generated slop, and if something gets through then maybe it means it was good enough and it doesn't matter?
That kind of attitude is exactly why we're all about to get overwhelmed by the worst slop any of us could ever have imagined.
The bar is not 'oh well, it's not as bad as some, and I think maybe it's fine.'
1 reply →
Did you?
Spoiler: no he didn't.
But the article is interesting...
It really highlights to me the pickle we are in with AI: because we are already at a historical maximum of "worse is better" with Javascript, and the last two decades have put out a LOT of Javascript, AI will work best with....
Javascript.
Now MAYBE better AI models will be able to equivalently translate Javascript to "better" languages, and MAYBE AI coding will migrate "good" libraries in obscure languages to other "better" languages...
But I don't think so. It's going to be soooo much Javascript slop for the next ten years.
I HOPE that large language models, being language models, will figure out language translation/equivalency and enable porting and movement of good concepts between programming models... but that is clearly not what is being invested in.
What's being invested in is slop generation, because the prototype sells the product.
I'm not a fan of this option, but it seems to me the only way forward for online interaction is very strong identification anywhere you can post anything.
Back in FidoNet days, some BBSs required identification papers for registering and only allowed real names. Though not known for level-headed discussion, it definitely added a certain level of care to online interactions. I remember the shock of seeing the anonymity the Internet provided later, both positive and negative. I wouldn't be surprised if we revert to some central authentication mechanism with some basic level of checks combined with some anonymity guarantees: for example, a government-owned ID service which creates a new user ID per website, so the website doesn't know you, but once it blacklists that one-off ID, you cannot get a new one.
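The "new user ID per website" part is easy to sketch (illustrative only, not how any real eID scheme works): derive the pseudonym with an HMAC, so it is stable per site (a ban sticks) but unlinkable across sites without the issuer's key.

    import hashlib
    import hmac

    def site_pseudonym(issuer_key: bytes, citizen_id: str, site: str) -> str:
        """Derive a stable per-site user ID.

        Sites can't link a user across each other (each site sees a
        different ID), but the issuer can re-derive the same ID, so a
        blacklisted pseudonym stays blacklisted.
        """
        msg = f"{citizen_id}|{site}".encode()
        return hmac.new(issuer_key, msg, hashlib.sha256).hexdigest()[:16]

    # Same citizen, different sites, unlinkable IDs:
    print(site_pseudonym(b"issuer-secret", "citizen-42", "news.ycombinator.com"))
    print(site_pseudonym(b"issuer-secret", "citizen-42", "reddit.com"))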
4 replies →
That can be automated away too.
People will be more than willing to say, "Claude, impersonate me and act on my behalf".
6 replies →
See also: https://news.ycombinator.com/item?id=44860174 (posted 12 hours ago)
It's annoying, but it'll be corrected by proper moderation on these forums.
As an aside, I've made it clear that posting AI-written emoji-slop PR review descriptions and letting Claude Code commit directly without self-review is unacceptable at work.
The Internet is already 99% shit and always has been. This doesn't change anything.
It's gotten much worse. Before it was shit from people, now it's corporate shit. Corporate shit is so much worse.
I mean I can spam HN right now with a script.
Forums like HN, reddit, etc will need to do a better job detecting this stuff, moderator staffing will need to be upped, AI resistant captchas need to be developed, etc.
Spam will always be here in some form, and it's always an arms race. That doesn't really change anything. It's always been this way.
This is the kind of thing people should be doing with AI: weird and interesting stuff with a "Let's find out!" attitude.
Often there's as much to be learned from why it doesn't work.
I see the AI hype as limited to a few domains:
People choosing to spend lots of money on things speculatively hoping to get a slice of whatever is cooking, even if they don't really know if it's a pie or not.
Forward looking imagining of what would change if these things get massively better.
Hyperbolic media coverage of the above two.
There are companies talking about adding AI for no other reason than they feel it's what they should be doing. I think that counts as a weak driver of hype, but only because, cumulatively, lots of companies are doing it. If anything, I would consider this an outcome of hype.
Of these, the only one that really affects me is AI being shoehorned into places it shouldn't be.
The media coverage stokes fires for and against, but I think it only changes the tone of the annoyance I have to endure. They would do the same on another topic in the absence of AI. It used to be crypto.
I'm ok with people spending money that is not mine on high risk, high potential reward. It's not for me to judge how they calculate the potential risk or potential reward. It's their opinion, let them have it.
The weird thing I find is the complaints about AI hype dominating. I have read so many pieces whose main thrust is the dominance of fringe viewpoints that I very rarely encounter. Frequently they treat anyone imagining how the world might change from some particular form of AI as claiming that that form is inevitable and usually imminent. I don't see people making those claims.
I see people talking about what they tried, what they can do, and what they can't do. Everything they can't do is then held up by others as if it were a trophy and proof of some catastrophic weakness.
Just try stuff, have fun, if that doesn't interest you, go do something else. Tell us about what you are doing. You don't need to tell us that you aren't doing this particular thing, and why. If you find something interesting tell us about that, maybe we will too.
every vibe coded thing I've built is trash, but it's amazingly fun to do.
I've tried to explain it to other devs that it's like dumping out a 10000 piece jigsaw puzzle and trying to put it together again.
it's just fun.
There was a time when everyone hand-coded HTML. Then came Macromedia Dreamweaver and Microsoft FrontPage, which promised a WYSIWYG experience. No one would ever need to "learn HTML and CSS" because the tool could write it for them. Those tools could crank out a website in minutes.
When those tools created some awful, complex and slow output, only the people who knew HTML could understand why it wasn't working and fix things.
Vibe coding is in a similar place. It demos really well. It can be powerful and allows for quick iteration on ideas. It works, most of the time. Vibe coding can produce some really terrible code that is not well architected and difficult to maintain. It can introduce basic logic errors that are not easily corrected through multiple prompts back to the system.
I don't know if they will ever be capable of creating production quality systems on par with what senior engineers produce or if they will only get incrementally better and remain best for prototypes and testing ideas.
> it’s just fun
For some definitions of fun… :)
Not sure I'd want Claude doing whatever on a production VPS/node, but I like the idea of a way to use Claude Code on the go, wherever you are. I'm going to set up KASM Workspaces on my free OCI server and see how it works there.
https://hub.docker.com/r/linuxserver/kasm
Thanks for sharing this! I have been trying on and off to run RooCode on a VPS to use it on the go. I tried Code Server, but it does not share "sessions". KASM seems interesting for this. Do share if you write a blog post on setting it up.
It’s pretty straightforward through the Linuxserver docker image deployment. I have some notes here re: configuration and package persistence strategy via brew:
https://gist.githubusercontent.com/jgbrwn/28645fcf4ac5a4176f...
Maintaining scheduled time to play with what's changed/new/different is mandatory for the tools one already uses, let alone any new ones.