
Comment by theorchid

21 days ago

There is, however, a lack of information when a user opens your website after interacting with an AI assistant.

Google Search Console shows a user's query if the query is popular enough and your website appears in the search results. Bing shows all queries, even unpopular ones, as long as your website appears in the results.

But if an AI recommends your website when answering people's questions, you cannot find out what questions the user discussed, how many times your website was shown, or in what position. You can see a UTM tag in your website analytics (for example, ChatGPT adds a utm_source parameter), but that is about the maximum information available to you. And if a user discussed a question with the AI, got only your brand name, and then found your site through a search engine, you won't be able to tell that they found you with the help of AI advice.
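For the direct-click case, checking the UTM tag is straightforward. A minimal sketch in Python, with the caveat that the set of AI utm_source values below is an assumption (utm_source=chatgpt.com is the one ChatGPT is known to append; the others are illustrative):

```python
from urllib.parse import urlparse, parse_qs

# utm_source values assumed to indicate an AI-assistant referral.
# "chatgpt.com" is what ChatGPT appends; the rest are hypothetical examples.
AI_SOURCES = {"chatgpt.com", "perplexity", "copilot"}

def is_ai_referred(landing_url: str) -> bool:
    """Return True if the landing URL carries an AI-assistant utm_source."""
    params = parse_qs(urlparse(landing_url).query)
    source = params.get("utm_source", [""])[0].lower()
    return source in AI_SOURCES

print(is_ai_referred("https://example.com/pricing?utm_source=chatgpt.com"))  # True
print(is_ai_referred("https://example.com/pricing?utm_source=google"))       # False
```

This only labels the visit; as noted above, it says nothing about the conversation that produced the click, and it misses the brand-mention-then-search path entirely.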

This is exactly what got me trying to figure out the visibility gap.

What’s strange is that we’re moving into a world where recommendations matter more than clicks, but attribution still assumes a traditional search funnel. By the time someone lands on your site, the most important decision may have already happened upstream, and you have no idea.

The UTM case you mentioned is a good example: it only captures direct AI-to-site clicks, but misses scenarios where the AI influences the decision indirectly (brand mention, then a later search, then the visit). From the site’s perspective, though, that looks indistinguishable from organic search. It makes me wonder whether we’ll need a completely new mental model for attribution here: less about “what query drove this visit” and more about “where did trust originate.”

Not sure what the right solution is yet, but it feels like we’re flying blind during a pretty major shift in how people discover things.

  • This is why most of these AI search visibility tools focus on tracking many possible prompts at once. LLMs give zero insight into what users are actually asking, so the only thing you can do is put yourself in the user’s shoes and try to guess what they might prompt.

    Disclaimer: I've built a tool in this space (Cartesiano.ai), and this view mostly comes from seeing how noisy product mentions are in practice. Even for market-leading brands, a single prompt can produce different recommendations day to day, which makes me suspect LLMs are also introducing some amount of entropy into product recommendations.

    • I don’t think there’s a clean solution yet, but I’m not convinced brute-force prompt enumeration scales either, given how much randomness is baked in. I guess that’s why I’ve started thinking about this less as prompt tracking and more as signal aggregation over time: looking at repeat fetches, recurring mentions, and which pages/models seem to converge on the same sources. It doesn’t tell you what the user asked, but it can hint at whether your product is becoming a defensible reference versus a lucky mention.

      Since you've built a tool in this space: have you seen any patterns that cut through the noise, or is entropy just something we have to design around?

      Disclaimer: I've built a tool in this space as well (llmsignal.app)
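The signal-aggregation idea above can be sketched roughly like this. Everything here is hypothetical: assume you have already collected several responses to the same prompt (say, one run per day), and you want a per-brand mention rate rather than any single answer. The brand names and responses below are made up for illustration:

```python
from collections import Counter

def mention_rates(responses: list[str], brands: list[str]) -> dict[str, float]:
    """Fraction of responses that mention each brand at least once.

    A rate that stays stable across runs hints at a defensible reference;
    a spiky one looks more like the day-to-day entropy described above.
    """
    counts: Counter = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    n = len(responses) or 1  # avoid division by zero on an empty sample
    return {brand: counts[brand] / n for brand in brands}

# Hypothetical: four runs of the same prompt on different days
runs = [
    "I'd recommend AcmeCRM or ZetaCRM for small teams.",
    "AcmeCRM is a solid choice here.",
    "Popular options include AcmeCRM and BetaDesk.",
    "ZetaCRM might fit, depending on budget.",
]
print(mention_rates(runs, ["AcmeCRM", "ZetaCRM", "BetaDesk"]))
# AcmeCRM appears in 3 of 4 runs, ZetaCRM in 2, BetaDesk in 1
```

Naive substring matching like this will miss paraphrased or misspelled brand names; it is only meant to show the aggregation shape, not a production matcher.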
