Comment by jumploops
13 hours ago
Oh, I agree completely. I avoid loose language, revise my wording, and usually write prompts that require scrolling on mobile.
It isn’t so much that I feel restricted; it’s more that mobile isn’t as big of a game changer as it was ~6 months ago.
My bandwidth feels more restricted by my own cognitive capacity (usually due to context switching) than by the limits of the model itself, and the mobile interface makes that worse.
I’ve recently found myself reserving larger tasks for “keyboard time” and reverting back to jotting notes (on mobile), which I’ll then formulate into prompts for the LLM at some future time.
> What tunnel setup do you use by the way?
I “vibecoded” an agentic runtime that operates my machine generally (including TUIs like Codex/Claude Code), which I connect through a custom proxy and mobile app (both also vibecoded).
I previously tried Cloudflare Tunnels and an SSH setup, but it all felt a bit hacky.
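For anyone curious about the general shape of this kind of setup: the core trick is just relaying a local pseudo-terminal session over a socket so a remote client (e.g. a mobile app) can drive a TUI running on your machine. This is a minimal sketch of that idea, not my actual runtime or proxy; the function name and the plain-TCP transport are placeholders (a real setup would want TLS and auth):

```python
import os
import pty
import select
import socket
import threading

def serve_pty(command, host="127.0.0.1", port=0):
    """Spawn `command` in a pseudo-terminal and relay its bytes over
    a TCP connection, so a remote client can drive the terminal
    session. Returns the port the server is listening on."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def session():
        conn, _ = srv.accept()
        pid, fd = pty.fork()
        if pid == 0:
            # Child: exec the terminal program inside the PTY.
            os.execvp(command[0], command)
        # Parent: shuttle bytes both ways until either side closes.
        try:
            while True:
                readable, _, _ = select.select([fd, conn], [], [])
                if fd in readable:
                    try:
                        data = os.read(fd, 4096)
                    except OSError:
                        break  # child exited, PTY closed
                    if not data:
                        break
                    conn.sendall(data)
                if conn in readable:
                    data = conn.recv(4096)
                    if not data:
                        break  # client disconnected
                    os.write(fd, data)
        finally:
            conn.close()
            srv.close()

    threading.Thread(target=session, daemon=True).start()
    return actual_port
```

A mobile client would then just connect, render the byte stream in a terminal emulator view, and send keystrokes back; the custom-proxy layer mostly adds auth and reachability on top of this loop.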
Unfortunately the app is iOS only, but I could open source it and you’d probably be able to make an Android clone quickly (:
That could be cool. No issues with Claude Code not working in third-party harnesses, or with their recent changes to different (more expensive) billing for programmatic usage? I guess I generally use OpenAI models, which don't care.