Show HN: Web Scout AI – Auto-discover every user journey (zero config)
6 hours ago (github.com)
Hi HN, I'm Adi. I built Web Scout AI, an open-source tool that takes a URL and autonomously discovers every user journey on the site — no scripts, no selectors, no config. Point it at your homepage and it finds paths like Homepage → Product Listing → Product Detail → Cart → Checkout on its own.
Repo: https://github.com/apexkid/web-scout-ai
It uses Claude's vision + Playwright to explore a site the way a human would: look at the page, decide what's clickable, click it, repeat. It explores breadth-first, building a complete graph of every reachable state. When it finds 50 product cards, it recognizes the pattern and explores one representative instead of all 50. Cookie banners and popups get auto-dismissed. The output is a journey graph, Mermaid diagrams, and a set of replayable JSON files, along with the network requests fired at every interaction.
Who can use it, and for what:
* QA/Testing — The most obvious use case. Run auto against your staging environment, get a full set of discovered journeys, then replay them after every deploy. No test scripts to write or maintain. When the site changes, re-run discovery instead of fixing selectors. One team told me they went from 2 weeks of manual test writing to a single afternoon of reviewing auto-discovered journeys.
* 3P API auditing — This is the one I didn't expect. The replay engine captures every XHR/fetch request at every step — full request and response bodies. Teams are using this to verify that analytics events (GA4, Segment, etc.) actually fire at the right moments in the right order. "Does our checkout funnel fire the right events at every step?" becomes a replay + grep instead of a manual walkthrough.
* Journey documentation — PMs and designers use the Mermaid diagram output to get a ground-truth map of what users can actually do. Turns out the real journey graph rarely matches what's in the Figma file. Dead ends, loops, and unreachable states show up immediately.
* Post-deploy smoke tests — Run replay all in CI after a deploy. It replays every known journey through a real browser and reports pass/fail per step. No LLM cost, runs in parallel, takes minutes. If a flow breaks, you know which step and which selector failed.
* Competitive analysis — Point it at a competitor's site and get a structured map of their user flows. What journeys do they support? What does their checkout look like? All captured as screenshots and structured JSON.
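To make the analytics-auditing case above concrete, here's a small sketch of the "replay + grep" check: given a captured request log from a replayed journey, verify that the expected funnel events fire in order. The record shape (`{"body": {"event": ...}}`) is an assumption for illustration, not the tool's actual output schema:

```python
def events_in_order(requests, expected_events, event_key="event"):
    """True if expected_events appear in order (not necessarily
    consecutively) among the captured request bodies."""
    remaining = iter(expected_events)
    want = next(remaining, None)
    for req in requests:
        if want is None:
            return True                       # all events already matched
        if req.get("body", {}).get(event_key) == want:
            want = next(remaining, None)      # advance to next expected event
    return want is None
```

So "does our checkout funnel fire the right events at every step?" becomes an assertion like `events_in_order(log, ["add_to_cart", "begin_checkout", "purchase"])` run against each replayed journey's capture.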
Love the Claude+Playwright BFS. One idea we learned the hard way: fingerprint each state with the URL plus a short DOM hash so you can skip duplicates (product cards, cookie modals, infinite-scroll clones) and keep the graph manageable. Also store the network events that come out of each replayed journey, and run the replays in parallel in CI so you can immediately spot which selector or API call starts failing after a deploy.
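For anyone curious, the fingerprint idea might look something like this: key each state on a normalized URL path plus a short hash of the page's tag outline, so /product/1 and /product/2 with identical structure collapse into one node. The normalization here (digits → :id, tag names only) is a deliberate simplification:

```python
import hashlib
import re
from urllib.parse import urlsplit

def fingerprint(url, dom_tags):
    """State key = normalized URL path + short hash of the DOM's tag outline.

    dom_tags is the page's element tag names in document order, e.g.
    ["header", "nav", "main", "img", "h1", "button"].
    """
    # Collapse numeric path segments so /product/1 and /product/2 match.
    path = re.sub(r"\d+", ":id", urlsplit(url).path)
    # Short hash of the structural outline; ignores text so clones dedupe.
    digest = hashlib.sha256(",".join(dom_tags).encode()).hexdigest()[:12]
    return f"{path}#{digest}"
```

Two product pages with the same layout then share a fingerprint and only one gets explored, while a genuinely different page (different path or outline) gets its own node in the graph.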