Comment by spoaceman7777
19 days ago
This was incredibly vague and a waste of time.
What type of code? What types of tools? What sort of configuration? What messaging app? What projects?
It answers none of these questions.
Yeah, I've gotten to the point where I'll just stop reading AI posts after a paragraph or two if there are no specifics. The "it works!" / "no it doesn't" genre is saturated with generality. Show, don't tell, or I will default to believing you don't have anything to show at all.
That was very vague, but I kinda get where they're coming from.
I'm now using pi (the thing OpenClaw is built on), and within a few days I built a tmux plugin and a semaphore plugin^1; it has automated the way _I_ used to use Claude.
The things I disagree with OP on are: the usefulness of persistent memory beyond a single line in AGENTS.md ("If the user says 'next time', update your AGENTS.md"), the use of long-running loops, and the idea that everything can be resolved via chat. That might be true for simple projects, but any original work needs me to design the 'right' approach ~5% of the time.
That's not a lot, but AI lets you create load-bearing tech debt within hours, at which point you're stuck with a lot of shit and you don't know how far it got smeared.
[1]: https://github.com/offline-ant
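The single-line "persistent memory" idea above can be sketched as a hypothetical AGENTS.md entry. The exact wording and file layout here are assumptions for illustration, not the commenter's actual config:

```markdown
# AGENTS.md

<!-- Minimal "persistent memory": instead of a dedicated memory
     subsystem, one standing instruction that tells the agent to
     maintain this file itself. -->
- If the user says "next time", append the correction as a new
  bullet in this AGENTS.md so it applies to future sessions.
```

The design choice is that the agent's own instruction file doubles as its memory, updated only on an explicit user cue rather than through automatic long-term memory.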
They're not coming from anywhere. It's an LLM-written article, and given how non-specific it is, I imagine the prompt wasn't much more than "write an article about how OpenClaw is changing my life".
And the fact this post has 300+ comments, just like countless LLM-generated articles we get here pretty much daily... I guess proves the point in a way?
Would you describe your Claude workflow?
There is no code, there are no tools, there is no configuration, and there are no projects.
This is an AI generated post likely created by going to chatgpt.com and typing in "write a blogpost hyping up [thing] as the next technological revolution", like most tech blog content seems to be now. None of those things ever existed, the AI made them up to fulfill the request.
> There is no code, there are no tools, there is no configuration, and there are no projects.
To add to this, OpenClaw is incapable of doing anything meaningful. The context management is horrible, the bot constantly forgets basic instructions, and often misconfigures itself to the point of crashing.
It didn’t seem entirely AI generated to me. There were at least a few sentences that an LLM would never write (too many commas).
There is zero evidence this is the case. You are making a baseless accusation, probably due to partisan motivations.
edit: love the downvotes. I guess HN really is Reddit now. You can make any accusation without evidence and people are supposed to just believe it. If you call it out you get downvoted.
Is there any evidence the opposite is the case?
Please don't tell me you read that article and thought it was written by a person. This is clearly AI generated.
What evidence are you expecting exactly? It's vacuous AI slop that spends 1000 words just making vague assertions about how incredible OpenClaw is without a single actual example. There's nothing here, it's not real. You are going to struggle going forward if you can't detect AI slop this obvious.
Well, note that the previous post was about how great the Rabbit R1 is…
Yeah, once I saw that I was like "Oh, so OpenClaw is probably going to be a dud too" :)
Exactly. Posts that say "I got great results" are just advertisements. Tell me what you're doing that's working well for you: what's your workflow, what's your tooling, and what kind of projects have you made?
>Over the past year, I’ve been actively using Claude Code for development. Many people believed AI could already assist with programming—seemingly replacing programmers—but I never felt it brought any revolutionary change to the way I work.
Funny, because just last month, HN was drowning in blog posts saying Claude Code is what enables them to step away from the desk, is definitely going to replace programmers, and lets people code "all through chatting on [their] phone" (being able to code from your phone while sitting on the bus seems to be the magic threshold that makes all the datacenters worth it).
I am somewhat worried that this is the moment AI psychosis has come for programmers.
Yeah… I'm using Claude Code almost all day every day, but it still 100% requires my judgment. If another AI like OpenClaw was just giving the thumbs up to whatever CC was doing, it would not end well (for my projects anyway).
Yes. Programmers might be especially susceptible precisely because our advanced understanding makes us think we cannot be easily fooled.
But we are also easier to impress: only we understand how difficult it is to one-shot code a working app.
AI psychosis sets in when AI captures enough of your perception to alter your reality. Programmers might be "smarter", but we give AI a bigger set of tools to capture our perception with.
And then there's the fact that we want to be fooled.
We are also used to prioritizing "flow states" as part of our engagement with technology, and these tools seem to be flow-state inducers. Also prone to following hype, used to typing words into computers to get results, etc...
Add to that worry the suspicion that half this push is just marketing stunts by AI companies.
(Not necessarily this specific post).
Did they even end up launching and maintaining the project? Did things break and were they able to fix it properly? The amount of front-loaded fondness for this technology without any of the practical execution and follow up really bugs me.
It's like we all fell under the spell of a terminal endlessly printing output as some kind of measurement of progress.
It's AI slop itself. It seems inevitable that any AI enthusiast ends up having AI write their advocacy too.
I just give the link to those posts to my AI to read; if it's not worth a human writing it, it's not worth a human reading it.
It makes me sad that there are so many of these heavily upvoted posts now that are hand-wavey about AI and are themselves AI-generated. It benefits everyone involved except people like me who are trying to cut through the noise.
It reads like the articles that pretended blockchain was revolutionary. And the article itself seems like AI slop.
Does it matter?