Comment by apetresc
10 hours ago
I found this HN post because I have a Clawdbot task that scans HN periodically for data gathering purposes and it saw a post about itself and it got excited and decided to WhatsApp me about it.
So that’s where I’m at with Clawdbot.
> and it got excited and decided to WhatsApp me about it.
I find the anthropomorphism here kind of odious.
These verbs seem appropriate once you accept neural (MLP) activations as excitement and DL/RL as decision processes (MDPs...)
Do you tell it what you find interesting so it only responds with those posts? e.g. AI/tech news/updates, gaming, etc.
Yes. I rate the suggestions it gives me; it stores those ratings to memory and uses them to find better recommendations. It has also connected dots from previous conversations about my interests and surfaced relevant HN threads.
How do you have Clawdbot WhatsApp you? I set mine up with my own WhatsApp account, and the responses come back as myself, so I haven't been able to get notifications.
I have an old iPhone with a broken screen that I threw an $8/month eSIM onto so that it has its own phone number, that I just keep plugged in with the screen off, on Wifi, in a drawer. It hosts a number of things for me, most importantly bridges for WhatsApp and iMessage. So I can actually give things like Clawdbot their own phone number, their own AppleID, etc. Then I just add them as a contact on my real phone, and voila.
For iMessage I don’t think you actually need a second phone number, you can just make a second iCloud account with the same phone number.
I heard it costs $15 for just a few minutes of usage though
Telegram setup is really nice
Telegram exists for these kinds of integrations.
How many tokens are you burning daily?
The real cost driver with agents seems to be repetitive context transmission, since you re-send the full history on every step. I found I had to implement tiered model routing or prompt caching just to make the unit economics work.
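To make the point concrete, here is a minimal sketch of why re-sending history dominates agent cost, and how prompt caching changes the curve. All numbers (tokens per step, cache discount) are illustrative assumptions, not real provider rates:

```python
def total_input_tokens(steps: int, tokens_per_step: int) -> int:
    """Without caching: step k re-sends the entire history of k-1 prior
    steps plus its own new content, so input tokens grow ~O(steps^2)."""
    return sum(k * tokens_per_step for k in range(1, steps + 1))


def cached_input_tokens(steps: int, tokens_per_step: int,
                        cache_discount: float = 0.1) -> float:
    """With prompt caching: the previously-seen prefix is billed at a
    discounted rate (assumed 10% here), so growth is closer to linear."""
    total = 0.0
    for k in range(1, steps + 1):
        new_tokens = tokens_per_step              # only the latest step is new
        cached_prefix = (k - 1) * tokens_per_step  # prefix hits the cache
        total += new_tokens + cached_prefix * cache_discount
    return total


if __name__ == "__main__":
    steps, per_step = 50, 2_000
    print(total_input_tokens(steps, per_step))   # 2,550,000 tokens, uncached
    print(cached_input_tokens(steps, per_step))  # 345,000 effective tokens
```

Tiered routing attacks the same problem from the other side: route the cheap, repetitive steps (summarize, tag, filter) to a smaller model and reserve the expensive one for final reasoning.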
Not the OP, but for scanning and tagging/summarization you can run a local LLM, and it will work with good enough accuracy for this use case.
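A minimal sketch of what local-LLM tagging could look like. The tag list and prompt wording are illustrative assumptions; the actual model call (e.g. to a local Ollama server) is left out, and the parser validates whatever text the model returns, since small local models drift off-format:

```python
# Hypothetical tag vocabulary -- adjust to your own interests.
TAGS = ["ai", "gaming", "security", "hardware", "other"]


def build_prompt(title: str) -> str:
    """Ask the model for a comma-separated subset of known tags."""
    return (
        f"Tag this Hacker News title with any of: {', '.join(TAGS)}.\n"
        f"Title: {title}\n"
        "Answer with comma-separated tags only."
    )


def parse_tags(model_output: str) -> list[str]:
    """Keep only recognized tags; fall back to 'other' on junk output."""
    candidates = [t.strip().lower() for t in model_output.split(",")]
    return [t for t in candidates if t in TAGS] or ["other"]
```

Since the model only has to pick labels from a closed list, accuracy requirements are modest, and the validation step means a wrong or rambling completion degrades gracefully instead of corrupting the feed.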
Yeah, it really does feel like another "oh wow" moment... we're getting close.