Comment by troupo

5 days ago

> What are you using today? In my experience LLMs are already pretty good at this.

LLMs are good at "find me a two week vacation two months from now"?

Or at "do my taxes"?

> how to use Cowork.

Yes, and I taught my mom how to use Apple Books, and have to re-teach her every time Apple breaks the interface.

Ask your non-tech friends what they do with and how they feel about Cowork in a few weeks.

> I think where our view differs is I expect that models will be able to get good at making custom interfaces, and then help the user personalize it to their tasks.

How many users do you see personalizing anything to their tasks? Why would they want every app to be personalized? There's insane value in consistency across apps and interfaces. How will apps personalize their UIs to every user? By collecting even more copious amounts of user data?

> LLMs are good at "find me a two week vacation two months from now"?

Of course they are. I gave one a similar prompt a few weeks ago, albeit quite a bit more verbose (actually I just dictated it, train of thought, with a couple of "eh, actually, forget what I just said about x, do y instead" moments), and although I wasn't brave enough to give it my credit card and finalize the bookings, I would have paid for the bookings it set up for me, had I done that. I gave it some real-life constraints, like "we're meeting friends in place xyz on such and such a date, make sure we're there then," and it did everything: making sure we wouldn't spend too many hours driving per day, checking that hotels were kid-friendly, suggesting things to do and see, and noting what public holidays there were so we'd know when supermarkets close early, plus a bunch of details I wouldn't have thought of. It checked my (and my wife's) calendar, checked what I had going on work-wise, etc.

That is a fully solved 'problem', man. LLMs will run the whole thing for you. Just provide them with the login details to booking websites and you're off to the races.

I did have it upgrade the car, even if that pushed the cost outside the budget I gave it. Next time it'll know LOL.
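For what it's worth, the constraint-checking side of this is mundane plumbing. A toy sketch (all names, dates, and thresholds here are made up for illustration, not anything the commenter's actual setup used) of validating a proposed itinerary against the kind of constraints described above:

```python
from datetime import date

# Hypothetical illustration: flag itinerary days that violate simple
# constraints like "max driving hours per day" and "be in place X on date D".
MAX_DRIVE_HOURS = 5

def violations(itinerary, must_be_at):
    """itinerary: list of (date, place, drive_hours); must_be_at: (date, place)."""
    problems = []
    for day, place, hours in itinerary:
        if hours > MAX_DRIVE_HOURS:
            problems.append(f"{day}: {hours}h driving exceeds {MAX_DRIVE_HOURS}h")
    meet_date, meet_place = must_be_at
    if not any(d == meet_date and p == meet_place for d, p, _ in itinerary):
        problems.append(f"not in {meet_place} on {meet_date}")
    return problems

plan = [(date(2026, 6, 13), "Lisbon", 3), (date(2026, 6, 14), "Porto", 6)]
# Flags both the 6h driving day and the missed meetup date.
print(violations(plan, (date(2026, 6, 15), "Porto")))
```

An agent harness would just rerun checks like these against each draft plan and feed the violations back to the model until the list is empty.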

  • >although I wasn't brave enough to give it my credit card and finalize the bookings

    So it's not trustworthy enough for you, someone clearly interested in the hype of LLMs.

    • It's a matter of getting used to things. We're only a few weeks further along; maybe I would have given it my card by now. It'd need some way to keep it private, I guess; maybe I could have used a one-off CC number. Those are just technicalities at this point. It got me to the point where I just had to enter my details and click a few confirm buttons. Those are solved problems.

      I'm not sure why the denialists here are saying those things are 'impossible'. I mean, I've seen them happen; what do you want me to say? Claiming this is 'just hype' is ostrich behavior. I was playing with an abliterated Gemma 4 yesterday on my local machine. Yes, it would take longer and require a bunch of harness fiddling, but even if OpenAI and Anthropic collapsed tomorrow, I'm confident I could still do the exact same thing the day after with what I have right now on my hard disk.

      I'm not sure what you want me to tell you, mate. Yes, there are rough edges to work out, or just workflows to improve in general, but the ideas are way beyond 'proof of concept'. There are people like myself using these things for purposes that 6 months ago were science fiction. I don't care if you believe me or not, I'm just some dude on the internet, but the level of delusion about how 'inferior' these models (with proper harnessing) are is mind-boggling for someone like me who sees it happen literally 20 centimeters to the side on my screen from where I see people claim that those things are impossible.

> Or at "do my taxes"?

codex did my taxes this year (well it actually implemented a normalization pipeline and a tax computing engine which then did the taxes, but close enough)
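The comment doesn't say what the generated "tax computing engine" looked like, but the core of any such engine is unremarkable. A minimal sketch of a progressive-bracket calculator (the brackets and rates below are entirely made up, not real tax law):

```python
# Hypothetical brackets for illustration only: (upper_bound, rate) pairs.
# Each slice of income is taxed at its own bracket's rate.
BRACKETS = [
    (11_000, 0.10),
    (44_000, 0.12),
    (95_000, 0.22),
    (float("inf"), 0.24),
]

def tax_owed(taxable_income: float) -> float:
    """Sum the tax across brackets up to the given income."""
    owed = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return round(owed, 2)

print(tax_owed(50_000))  # 6380.0
```

The hard part the commenter alludes to isn't this arithmetic; it's the normalization pipeline that turns messy tax documents into clean inputs for a function like this.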

  • > well it actually implemented a normalization pipeline and a tax computing engine which then did the taxes, but close enough

    You can't seriously believe laymen will try to implement their own tax calculators.

    • of course not.

      what I believe is that laymen will put all their tax docs into codex and tell it to 'do their taxes' and the tool will decide to implement the calculator, do the taxes and present only the final numbers. the layman won't even know there was a calculator implemented.


  • If your prompt was more complex than "do my taxes", then this is irrelevant.

    • it was many hours of working with codex, giving guidance and comparing to known-good outputs from previous years, but a sufficiently smart model would be able to just do it without any steering; it'd still take hours, but my input wouldn't be necessary. a harness for getting this done probably exists today, gastown perhaps, or something the frontier labs are sitting on.


  • Sounds fascinating! If you wrote an article on this I bet it'd have a good shot at making it to the home page of HN.

> LLMs are good at "find me a two week vacation two months from now"?

Yes?

===

edit: Just tested it with that exact prompt on Claude. It asked me who I was traveling with, what type of trip, and my budget (with multiple-choice buttons), and gave me a detailed itinerary with links to buy the flights (https://www.kayak.com/flights/ORD-LIS/2026-06-13/OPO-ORD/202...)

  • I'd love to try to replicate this, but I'm not letting any of these tools anywhere near a real browser and real capabilities :)

  • Perfect, and this use case will be the first to get enshittified. The LLM provider will charge a small fee for favorable recommendation placement. They've got to recoup the investment.