Comment by dataviz1000
1 day ago
I use Playwright to intercept all requests and responses, and have Claude Code navigate to a website like YouTube and click and interact with all the elements and inputs while recording the requests and responses associated with each interaction. It then generates a detailed, strongly typed API for interacting with the website through its underlying API.
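The codegen step after interception can be sketched without the browser part. A minimal sketch in Python, assuming the capture (e.g. via Playwright's `page.on("request")`/`page.on("response")` hooks) has already produced a list of `(method, url)` records; the sample records and the `{id}` placeholder convention here are illustrative, not the commenter's actual implementation:

```python
import re
from urllib.parse import urlparse

def infer_endpoints(records):
    """Collapse concrete URLs into path templates,
    e.g. /api/videos/123 -> /api/videos/{id}."""
    templates = set()
    for method, url in records:
        path = urlparse(url).path
        # Treat purely numeric or long hex-like segments as identifiers.
        parts = [
            "{id}" if re.fullmatch(r"\d+|[0-9a-f]{8,}", seg) else seg
            for seg in path.split("/")
        ]
        templates.add((method, "/".join(parts)))
    return sorted(templates)

# Hypothetical records standing in for intercepted traffic.
captured = [
    ("GET", "https://example.com/api/videos/123"),
    ("GET", "https://example.com/api/videos/456"),
    ("POST", "https://example.com/api/videos/123/comments"),
]
for method, template in infer_endpoints(captured):
    print(method, template)
```

From templates like these (plus the recorded request/response bodies), emitting typed client stubs is mostly mechanical.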
Yes, I know it likely breaks everybody's terms of service, but at the same time I'm not loading gigabytes of ads, images, and markup to accomplish things.
If anyone is interested I can take some time and publish it this week.
I also do this. My primary use case is reproducing page layout and styling at any given subtree of the DOM, so capturing the various states of a component, etc.
I also use it to automatically capture page responsiveness behavior in complex web apps. It uses Playwright to adjust the viewport width and monitor entire trees for exact changes, which it writes out as structured data that includes the complete cascade of relevant styles, with screenshots to back up the snapshots.
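The monitoring step boils down to diffing computed-style snapshots taken at different viewport widths. A minimal sketch, assuming snapshots shaped like `{selector: {property: value}}` have already been collected (in a real run, via Playwright's `page.set_viewport_size` plus `getComputedStyle` evaluated in the page); the sample snapshots below are hypothetical:

```python
def diff_styles(before, after):
    """Return {selector: {prop: (old, new)}} for properties that
    changed between two computed-style snapshots."""
    changes = {}
    for selector, props in before.items():
        after_props = after.get(selector, {})
        changed = {
            prop: (old, after_props.get(prop))
            for prop, old in props.items()
            if after_props.get(prop) != old
        }
        if changed:
            changes[selector] = changed
    return changes

# Hypothetical snapshots, e.g. captured at 1280px and 600px widths.
wide = {".nav": {"display": "flex", "flex-direction": "row"}}
narrow = {".nav": {"display": "flex", "flex-direction": "column"}}
print(diff_styles(wide, narrow))
```

Only the properties that actually flipped between widths survive into the structured output, which keeps the per-breakpoint records small.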
There are tools you can buy that let you do this kind of inspection manually, but they are designed for humans. So, lots of clickety-clackety and human-speed results.
---
My first reaction on seeing this on the FP was: why are people still releasing MCPs? So far I've managed to avoid that hype loop entirely and went straight to building custom CLIs, even before skills were a thing.
I think people still aren't realizing the power and efficiency of direct access to the things you want, plus skills to guide the AI in using that access effectively.
Maybe I'm missing something in this particular use case?
> There are tools you can buy that let you do this kind of inspection manually, but they are designed for humans.
You should try my SnipCSS Claude Code plugin. It still uses MCP as a skill (I haven't converted it to a CLI yet), but it does exactly what you want for reproducing designs in Tailwind/CSS at AI speed.
https://snipcss.com/claude_plugin
> My first reaction to seeing this FP was why are people still releasing MCPs?
MCPs are more difficult to use. You need an agent to use the tools; you can't easily do it manually. I wonder if some people see that friction as a feature.
It's mostly because MCPs handle auth in a standardised way and give you a framework you can layer things like auth on top of.
Without that you're stuck with a basic HTTP firewall, which is extremely dangerous, and this is maybe the one opportunity we have to do this.
And people forget, Claude Code isn't the only Claude surface, and CLIs don't help on other surfaces, aside from Cowork.
I do this via BrowserOS -- https://github.com/browseros-ai/BrowserOS
It has a built-in MCP server, and I use it with Claude Code and Codex; I like it quite a lot.
I love how HN is loving this idea when it's the exact same thing Anthropic and OpenAI (and every other LLM maker) did.
It's God's gift to them when it lets them bypass ads and download copyrighted material, but it's Satan's curse on humanity when the Zuck does it to train his LLM and download copyrighted material.
Both scale and purpose make them completely different things. You're acting as if they're the same when they're not.
I won't comment on downloading, but ads are trackers and spyware to me. I don't spy on website owners; I have the right to stop those trackers.
Zuck serves ads/spyware to other users; he deserves to taste his own medicine, not me.
I think there's a little bit of the Goomba fallacy at play here to be fair
Yes, it's God's gift when the average user can do it, and Satan's curse when a hated fucking mega-corp is doing it.
Where's the contradiction?
You can see this pattern in many different topics: updoots are highly correlated with a positive answer to "do I personally get to profit"?
Yes, and? People need to eat. Billionaires are generally not interested in whether or not the average Joe gets to eat.
I would love to pay for content. I'm _paying_ for YouTube Premium.
But heck, do I hate the YouTube interface; it has degraded far past usability.
Write to their support. Oh, wait.
So you’re that Hal Jordan then? Why would a Green Lantern feel the need to defend either? I feel like the Guardians would not accept your arguments as soon as you got to Oa, poozer. I guess what I am saying is don’t have a famous name. Seems obvious.
OP appears to be talking about real life. What are you on about?
You conflate web crawling for inference with web crawling for training.
Web crawling for training is when you ingest content on a mass scale, usually indiscriminately, usually with a dumb crawler for scale's sake, for the purposes of training an LLM. You don't really care whether one particular website is in the dataset (unless it's the size of Reddit), you just want a large, diverse, high-quality data mix.
Web crawling for inference is when a user asks a targeted question, you do a web search, and fetch exactly those resources that are likely to be relevant to that search. Nothing ends up in the training data, it's just context enrichment.
People have a much larger issue with crawling for training than for inference (though I personally think both are equally ok).
I do something similar [1] but it leverages WebMCP (see Amazon example [2]). Could probably turn it into a strongly typed API.
[1] https://github.com/sidwyn/webmcp-tool-library
[2] https://github.com/sidwyn/webmcp-tool-library/blob/main/cont...
Why even use Playwright for this? I feel like Claude just needs agent-browser and it can generate deterministic code from it.
you mean this one? https://github.com/vercel-labs/agent-browser
It is 2 months old!
My excuse for not keeping up is that I'm in so deep that Claude Code can predict the stock market.
I'll still publish mine and see if it has any value, but agent-browser looks very complete.
Thank you for sharing!
You can just start claude with the --chrome flag too and it will connect to the Chrome extension.
Please do.
Did you compare Playwright with MCP? Why one over the other?
I usually use MCP because I heard it's less detectable than Playwright and more robust against design changes, but I haven't compared/tested it myself.
Very interested. I would even pay for an API for this. I'm doing something similar with Vibium and need something more token-efficient.
have you tried vibium's cli + agent skill?
Would this hypothetically be able to download arbitrary videos from youtube without the constant yt-dlp arms race?
Don't know how this could be more stable than yt-dlp. When issues come up they're fixed really quickly.
yt-dlp was very recently broken for ~2 days for any Youtube videos that required cookies: https://github.com/yt-dlp/yt-dlp/issues/16212
Here is what actually fixed it: https://github.com/yt-dlp/ejs/pull/53/changes
yt-dlp is relatively stable, but still occasionally breaks for long periods. I get the sense YouTube is becoming increasingly adversarial to yt-dlp as well.
I don't know the details, but it doesn't seem like yt-dlp is running the entire YouTube JS+DOM environment. Something like a real headless browser seems like it would break less often, but be much heavier weight. And YouTube might have all sorts of other mitigations against this approach.
> yt-dlp arms race
I don't know anything about yt-dlp.
It would probably give people who want to go to a concert a chance against the scalpers who corner the market on an event within 30 seconds of it going on sale by hitting the marketplace services with 20,000 requests.
I can try to see if it can bypass the blocks yt-dlp runs into. But that is always a cat-and-mouse game.
To clarify: yt-dlp is a command-line tool for downloading YouTube videos, but it's in a constant arms race with the YouTube website, because YouTube is constantly changing things in ways that block yt-dlp.
If it can save all the video/audio fragments and call ffmpeg to join them together, maybe?
Yes, please do and ping me when it's done lol. Did you make it into an agent skill?
Exactly. It is an agent skill that interacts with a webpage, pressing buttons and so on, while capturing and documenting all the API requests the page makes using Playwright's request/response interception methods. At the end it creates a strongly typed, well-documented API.
Sounds awesome. I've been using mitmproxy's --mode local to intercept with a separate skill to read flow files dumped from it, but interactive is even better.
I use the Chrome DevTools MCP to the same end, and it works great for me. Interested in what advantages you see in using Playwright over Chrome DevTools?
I just ask Claude to reverse engineer the site with Chrome MCP. It goes to work by itself, uses your logged-in Chrome session cookies, etc.
yes please! i need a "comment to follow" functionality on HN
i had claude code oneshot it: https://github.com/swyxio/websiteinterceptor
thanks swyx! you're always on top of stuff
I would love it if you had time to publish it!
I was doing similar by capturing XHR requests while clicking through manually, then asking codex to reverse engineer the API from the export.
Never tried that level of autonomy though. How long is your iteration cycle?
If I had to guess, mine was maybe 10-20 minutes over a few prompts.
I assume you're not logged into those sites, in order to avoid bans and the risk of hitting the wrong button like, say, "Delete Account".
It turns any authenticated browser session into a fully typed REST API proxy, exposing discovered endpoints as local Hono routes that relay requests through the browser, so cookies and auth are handled automatically.
The point is that it creates an API proxy in code that a TypeScript server calls directly. The AI runs for about 10 minutes doing codegen; the rest of the time it is just API calls to a service. Remove the endpoint for "Delete Account" and that API endpoint never gets called.
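The safety property described above is just an allowlist baked in at codegen time: only discovered endpoints get routes, so anything dangerous simply has no handler. A minimal sketch in Python (the real setup described here uses Hono/TypeScript); the endpoint list, cookie jar, and base URL are hypothetical stand-ins for what the browser session would provide:

```python
from urllib.request import Request

# Endpoints registered during codegen; "Delete Account" was never added.
ALLOWED = {("GET", "/api/profile"), ("GET", "/api/feed")}

def build_relay_request(method, path, cookies, base="https://example.com"):
    """Build an upstream request reusing the browser session's cookies.
    Refuses any endpoint outside the generated API surface."""
    if (method, path) not in ALLOWED:
        raise PermissionError(f"{method} {path} not in generated API surface")
    cookie_header = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return Request(base + path, method=method,
                   headers={"Cookie": cookie_header})

req = build_relay_request("GET", "/api/profile", {"session": "abc123"})
print(req.full_url, req.get_header("Cookie"))
```

A call like `build_relay_request("DELETE", "/api/account", {...})` raises instead of reaching the network, which is the point: removed endpoints are unreachable by construction, not by prompt discipline.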
This 100% breaks everyone's terms of service. I would not recommend nor encourage using.
I always used Playwright as an alternative to Selenium; I'm relatively surprised by its ability to interface with LLMs.
I would like to see this!
+1, publish, but how will we know when you have published...
Yes, please do!
100%. I'll respond to this by Friday with a link to GitHub.
I use Patchright + Ghostery, and I have a clever tool that uses WebSockets to pass one-second-interval screenshots to a dashboard and pointer/keyboard events to the server, which allows interacting with websites so that a user can create authentication that is stored in the Chrome user profile, with all the cookies, history, local storage, etc., in the cloud on a server.
Can you list some websites that don't require a subscription that you would like me to test against? I used this for Robinhood, and I think LinkedIn would be a good example for people to use.
Would you be open to sharing your GitHub profile now so I could follow you? I don't check on here very often.
Another +1, it would be incredibly useful to play with this approach! (and fun)
I'd like to see this published as well, thx!
Please do!
Please publish!
Commenting to follow up.
Wow. Yes please.
Isn't this what everyone who needs web validation does?